operator-sdk | Provides high level APIs | SDK library
kandi X-RAY | operator-sdk Summary
This project is a component of the Operator Framework, an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Read more in the introduction blog post. Operators make it easy to manage complex stateful applications on top of Kubernetes. However writing an Operator today can be difficult because of challenges such as using low level APIs, writing boilerplate, and a lack of modularity which leads to duplication.
Community Discussions
Trending Discussions on operator-sdk
QUESTION
I'm currently writing a Kubernetes operator in Go using the operator-sdk. This operator creates two StatefulSets and two Services, with some business logic around them.
I'm wondering what the CRD status is for. In my reconcile method I use the default client (i.e. r.List(ctx, &setList, opts...)) to fetch data from the cluster. Should I instead store data in the status to use it later?
If so, how reliable is this status? Is it persisted? If the control plane dies, is it still available?
What about disaster recovery? If the persisted data disappears, doesn't that case invalidate the use of the CRD status?
ANSWER
Answered 2021-Apr-21 at 15:38
The status subresource of a CRD can be considered to have the same objective as it does for non-custom resources. While the spec defines the desired state of that particular resource (basically, I declare what I want), the status explains the current situation of the resource I declared on the cluster, and should help in understanding what is different between the desired state and the actual state.
Just as a StatefulSet spec could say I want 3 replicas while its status says that right now only 1 of those replicas is ready and the next one is still starting, a custom resource's status may tell me the current situation of whatever I declared in its spec.
For example, using the Rook operator, I could declare I want a CephCluster made a certain way. Since a CephCluster is a pretty complex thing (made of several StatefulSets, Daemons, and more), the status of the custom resource will tell me the current situation of the whole Ceph cluster: whether its health is OK, or something requires my attention, and so on.
From my understanding of the Kubernetes API, you shouldn't rely on the status subresource to decide what your operator should do with a CRD; it is better and less error-prone to always check the current situation of the cluster (at operator start, or when a resource is defined, updated, or deleted).
Last, let me quote the Kubernetes API conventions, as they explain the convention pretty well (https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status):
By convention, the Kubernetes API makes a distinction between the specification of the desired state of an object (a nested object field called "spec") and the status of the object at the current time (a nested object field called "status").
The specification is a complete description of the desired state, including configuration settings provided by the user, default values expanded by the system, and properties initialized or otherwise changed after creation by other ecosystem components (e.g., schedulers, auto-scalers), and is persisted in stable storage with the API object. If the specification is deleted, the object will be purged from the system.
The status summarizes the current state of the object in the system, and is usually persisted with the object by automated processes but may be generated on the fly. At some cost and perhaps some temporary degradation in behavior, the status could be reconstructed by observation if it were lost.
When a new version of an object is POSTed or PUT, the "spec" is updated and available immediately. Over time the system will work to bring the "status" into line with the "spec". The system will drive toward the most recent "spec" regardless of previous versions of that stanza. In other words, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the "status" to 3. In other words, the system's behavior is level-based rather than edge-based. This enables robust behavior in the presence of missed intermediate state changes.
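That level-based behavior can be sketched with a toy reconciler in plain Go (illustrative types only, no Kubernetes libraries): the controller only ever sees the latest spec, so an intermediate value that was overwritten before reconciliation never appears in the status.

```go
package main

import "fmt"

// Toy model of level-based reconciliation: the controller drives status
// toward the latest spec, regardless of intermediate spec values it
// never observed. (Illustrative types, not the Kubernetes API.)
type object struct {
	specReplicas   int
	statusReplicas int
}

// reconcile converges status toward the current spec, one step at a time.
func reconcile(o *object) {
	if o.statusReplicas < o.specReplicas {
		o.statusReplicas++
	} else if o.statusReplicas > o.specReplicas {
		o.statusReplicas--
	}
}

func main() {
	o := &object{specReplicas: 2, statusReplicas: 2}

	o.specReplicas = 5 // first PUT
	o.specReplicas = 3 // second PUT, before the controller ran

	history := []int{}
	for o.statusReplicas != o.specReplicas {
		reconcile(o)
		history = append(history, o.statusReplicas)
	}
	fmt.Println(history) // [3]; the status never "touched base" at 5
}
```

The same level-based logic is why a reconcile method should derive its decisions from the current cluster state rather than from remembered events.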
QUESTION
In a Kubernetes operator based on operator-sdk, do you know how to write code to synchronize the CR resource when the CR specification is updated with kubectl apply? Could you please provide some code samples?
ANSWER
Answered 2021-Mar-19 at 20:30
It is mostly up to how you deploy things. The default skeleton gives you a Kustomize-based deployment structure, so kustomize build config/default | kubectl apply -f -. This is also wrapped up for you behind make deploy. There is also make install for just installing the generated CRD files.
QUESTION
I get the error "CRD is present in bundle but not defined in CSV" when I run make bundle.
The full output is
...ANSWER
Answered 2021-Mar-19 at 07:10
The error at the bottom is a red herring. The actual error is further up, and uncolored, when you experience it in person.
Specifically, a Kustomize YAML file is expecting a myapplicationui.yaml but can't find it. This can easily happen when someone on your team renames files (e.g. to myapplicationui_sample.yaml) without checking all of the references.
QUESTION
I'm looking to see if it's currently possible to run Kubernetes locally on a 2020 M1 MacBook Air.
The environment I need is relatively simple, just for going through some tutorials. As an example, this operator-sdk guide.
So far I've tried microk8s and minikube, as they're tools I've used before on other machines. For both of these, I installed them using brew after opening the Terminal app "with Rosetta 2" (i.e. like this). My progress is then:
Minikube
When I run minikube start --driver=docker (having installed the tech preview of Docker Desktop for M1), an initialization error occurs. It seems to me that this is being tracked in https://github.com/kubernetes/minikube/issues/9224.
Microk8s
microk8s install asks to install multipass, which then errors with "An error occurred with the instance when trying to start with 'multipass': returned exit code 2. Ensure that 'multipass' is set up correctly and try again." Multipass shows a microk8s-vm stuck in "starting". I think this may relate to https://github.com/canonical/multipass/issues/1857.
I'm aware I'd probably be better off chasing up those issues for help with these particular errors. What would be great is any general advice on whether it's currently possible/advisable to set up a basic Kubernetes environment for playing with on an M1 Mac. I'm not experienced with the underlying technologies here, so any additional context is welcome. :)
If anyone has suggestions for practising Kubernetes, alternative to setting up a local cluster, I'd also appreciate them. Thanks!
...ANSWER
Answered 2021-Jan-23 at 20:35
First, it is usually good to have Docker when working with containers. Docker now has a tech preview of Docker for Apple M1-based Macs.
When you have a working Docker on your machine, you can also use kind, a way to run Kubernetes in Docker containers.
QUESTION
In Kubernetes and operator-sdk, we can define a CRD (Custom Resource Definition) and a CR (Custom Resource). In my operator controller, when a CR is initialized, I create a new Deployment and Service.
When we delete the CR object, the correlated resources (such as the Deployment or Service) are deleted as well at the same time. I understand this should be done by a CR or CRD finalizer, but that is just my guess.
Now I've hit an issue: during operator testing under an envtest environment, when I delete a CR, its correlated resources (Deployment and Service) are not deleted.
I am confused. In a real k8s cluster, the correlated resources (Deployment and Service) are deleted automatically when I delete the CR; why doesn't the envtest environment delete the correlated resources? Could anybody point out the reason?
...ANSWER
Answered 2020-Nov-13 at 18:02
Deletion of orphaned resources is done by Kubernetes's garbage collector, which runs as part of the kube-controller-manager. When you test an operator in an envtest environment, garbage collection doesn't work because the kube-controller-manager is missing in that environment (envtest only deploys the API server and etcd).
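The difference can be sketched with a toy garbage collector in plain Go (illustrative types; the real collector lives in the kube-controller-manager and keys on metadata.ownerReferences): deleting the CR only removes the CR itself, and a separate GC loop, which envtest does not run, is what actually sweeps the children.

```go
package main

import "fmt"

// Toy model of Kubernetes garbage collection via owner references.
// (Illustrative, not the real API.)
type obj struct {
	name  string
	owner string // empty means no owner reference
}

// collectGarbage deletes every object whose owner no longer exists.
// This is the control loop that is missing under envtest.
func collectGarbage(store map[string]obj) {
	for name, o := range store {
		if o.owner != "" {
			if _, ok := store[o.owner]; !ok {
				delete(store, name)
			}
		}
	}
}

func main() {
	store := map[string]obj{
		"my-cr":      {name: "my-cr"},
		"deployment": {name: "deployment", owner: "my-cr"},
		"service":    {name: "service", owner: "my-cr"},
	}

	delete(store, "my-cr") // the user deletes the CR

	// Under envtest: no GC loop runs, so the children linger.
	fmt.Println(len(store)) // 2

	// In a real cluster: the GC loop eventually sweeps the orphans.
	collectGarbage(store)
	fmt.Println(len(store)) // 0
}
```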
QUESTION
In Kubernetes and operator-sdk, we can define a CRD (Custom Resource Definition) and a CR (Custom Resource). In my operator controller, when a CR is initialized, the controller's reconciliation creates a new Deployment and Service.
When we delete the CR object, the associated resources (such as the Deployment or Service) are deleted as well at the same time. I understand it should be done by a CR finalizer, but in Operator-SDK and my controller code I never see any code that registers or adds a finalizer for the CR. Is there any default behavior in Operator-SDK?
Could anybody explain how it works for this case ("while deleting the CR, the associated Deployment and Service are deleted as well")? Which part of the controller is responsible for that?
...ANSWER
Answered 2020-Nov-13 at 18:11
Deletion of associated resources is not part of a controller; it's done by Kubernetes's garbage collector.
Basically, the garbage collector uses OwnerReference objects to find orphaned resources and delete them. Most likely, you set the OwnerReference by calling the controllerutil.SetControllerReference method somewhere in your code.
QUESTION
I am a newbie to operator-sdk. I am now writing tests for an operator with the envtest framework, so I have a fake control plane for the test environment.
Inside the controller's reconcile loop, once I initialize a CR, the controller pulls down an image and deploys that Pod.
All of the above happens in a real k8s cluster. My question is: under an envtest environment, does the controller really pull down the image to deploy the Pods?
ANSWER
Answered 2020-Nov-10 at 16:53
That depends on the envtest configuration. Here are quotes from the Kubebuilder book:
[envtest] setting up and starting an instance of etcd and the Kubernetes API server, without kubelet, controller-manager or other components
Unless you're using an existing cluster, keep in mind that no built-in controllers are running in the test context
So, if you don't set the USE_EXISTING_CLUSTER env var to true, envtest will set up a control plane with only the API server and etcd. For example, if your controller should create a Deployment at some event, there is no deployment controller in the test environment that is going to create the ReplicaSet and Pods. Basically, all it does is store the state of the test environment in etcd.
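That point can be sketched in plain Go (illustrative types): the API server is essentially a store, and the built-in controllers are separate loops that react to what is stored; envtest starts the store but not the loops, so no Pods ever appear and no images are pulled.

```go
package main

import "fmt"

// Toy model: the API server is just a store; controllers are separate
// loops that act on what is stored. envtest starts the store but not
// the loops. (Illustrative, not the real Kubernetes API.)
type store struct {
	deployments map[string]int // name -> desired replicas
	pods        []string
}

// deploymentController is the built-in loop envtest does NOT run.
// Simplified: assumes a single Deployment in the store.
func deploymentController(s *store) {
	for name, replicas := range s.deployments {
		for i := len(s.pods); i < replicas; i++ {
			s.pods = append(s.pods, fmt.Sprintf("%s-%d", name, i))
		}
	}
}

func main() {
	s := &store{deployments: map[string]int{}}
	s.deployments["web"] = 3 // your operator "creates a Deployment"

	// Under envtest: the object is stored, but no Pods appear.
	fmt.Println(len(s.pods)) // 0

	// Against a real cluster (USE_EXISTING_CLUSTER=true), the
	// built-in controller loop runs:
	deploymentController(s)
	fmt.Println(len(s.pods)) // 3
}
```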
QUESTION
I'm writing an operator with operator-sdk, and I have created a StatefulSet in the operator using the k8s API, like:
r.client.Create(context.TODO(), statefulset)
It works correctly and the StatefulSet's pods are created. But now I want to upgrade the operator already running in k8s so that I can add some command for the pod, like
ANSWER
Answered 2020-Oct-02 at 12:01
You should delete the StatefulSet itself instead of the StatefulSet's pod. The problem is that when you delete a StatefulSet pod, a new pod is automatically created from the old StatefulSet spec.
Once you delete/recreate the StatefulSet, the properly updated pods are scheduled as expected.
You could also add additional logic to the operator that patches the already-existing StatefulSet; that would avoid redeploying the StatefulSet each time.
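Why deleting only the pod never picks up a new command can be sketched with a toy model in plain Go (illustrative types; in a real cluster the StatefulSet controller does this reconciliation): pods are always recreated from the controller's current pod template, so the template itself must change.

```go
package main

import "fmt"

// Toy model of StatefulSet behavior. (Illustrative stand-in types;
// a real StatefulSet is reconciled by the kube-controller-manager.)
type StatefulSet struct {
	Replicas int
	Command  string // pod template field
}

type Pod struct{ Command string }

// reconcile recreates missing pods from the *current* template.
func reconcile(sts StatefulSet, pods []Pod) []Pod {
	for len(pods) < sts.Replicas {
		pods = append(pods, Pod{Command: sts.Command})
	}
	return pods
}

func main() {
	sts := StatefulSet{Replicas: 1, Command: "run"}

	// The pod is deleted; the StatefulSet controller recreates it
	// from the unchanged template, so nothing new shows up.
	pods := reconcile(sts, nil)
	fmt.Println(pods[0].Command) // run

	// Instead, update the StatefulSet spec itself (delete/recreate
	// or patch); the recreated pods then carry the new command.
	sts.Command = "run --new-flag"
	pods = reconcile(sts, nil)
	fmt.Println(pods[0].Command) // run --new-flag
}
```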
QUESTION
I am working with operator-sdk. In the controller, we often need to create a Deployment object, and a Deployment resource has a lot of configuration items, such as environment variables or port definitions, as in the following. I am wondering what the best way is to get these values; I don't want to hard-code them, for example variable_a or variable_b.
You could put them in the CRD as spec fields and pass them to the operator controller; or put them in a ConfigMap and pass the ConfigMap name to the operator controller, which can then read the ConfigMap to get them; or put them in a template file that the controller has to read.
What is the best way or best practice to deal with this situation? Thanks for sharing your ideas or pointers.
...ANSWER
Answered 2020-Sep-04 at 07:17
It can be convenient for your app to get this data as environment variables.
Environment variables from a ConfigMap
For non-sensitive data, you can store your variables in a ConfigMap and then define container environment variables using the ConfigMap data.
Create the ConfigMap first. File configmaps.yaml:
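A minimal sketch of such a file, using the variable names from the question (the ConfigMap name myapp-config and the values are illustrative):

```yaml
# configmaps.yaml (sketch): non-sensitive settings for the operand
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  variable_a: "value-a"
  variable_b: "value-b"
```

The controller can then reference this ConfigMap from the Deployment's container spec, e.g. via envFrom with a configMapRef, so that every key in the ConfigMap becomes an environment variable in the pod.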
QUESTION
Using the Operator-SDK, I deploy a CR that has a Job with a pod. The CR has a Status struct something like below
...ANSWER
Answered 2020-Jul-13 at 22:31
You can start by creating a controller (you might have one already):
Community Discussions, Code Snippets contain sources that include Stack Exchange Network