kubernetes-cron | Cron Jobs
kandi X-RAY | kubernetes-cron Summary
Cron Jobs are now supported natively in Kubernetes as a beta feature; please use the native CronJob resource instead of this repository. This repository is a basic implementation of the Kubernetes CronJob spec, intended as a temporary measure on clusters where alpha resources cannot be enabled (such as GKE) until CronJobs exit alpha.
Top functions reviewed by kandi - BETA
- Watch for jobs
- Watch the given endpoint
- Watch cronjobs
- Start the iteration
- Trigger a cronjob
- Iterate through all cronjobs
- Create a job
- Return a copy of metadata
- Update an existing CronJob
- Make a request to the WSS API
- Perform an HTTP PUT request
- HTTP POST operation
kubernetes-cron Key Features
kubernetes-cron Examples and Code Snippets
Community Discussions
Trending Discussions on kubernetes-cron
QUESTION
Kubernetes Version
...ANSWER
Answered 2021-Aug-04 at 20:31

Reading logs! Always helpful.
Context: For context, the job itself scales an AKS nodepool. We have two: the default system pool, and a new user-controlled one. The cronjob is meant to scale the new user pool (not the system pool).
I noticed that the scale-down job always takes longer than the scale-up job; this is because the image pull always happens when the scale-down job runs.
I also noticed that the Killing event mentioned above originates from the kubelet (kubectl get events -o wide).
I went to check the kubelet logs on the host and realised that the host name was a little atypical (aks-burst-XXXXXXXX-vmss00000d), in the sense that most hosts in our small development cluster usually end in numbers, not d. There I realised the naming was different because this node was not part of the default nodepool, and I could not check the kubelet logs because the host had been removed.
Cause: The job scales down compute resources. The scale-down would fail because it was always preceded by a scale-up, at which point a new node was in the cluster. This node had nothing running on it, so the next Job was scheduled on it. The Job started on the new node, told Azure to scale the new node down to 0, and the kubelet subsequently killed the Job while it was running. Always being scheduled on the new node also explains why the image pull happened each time.
Fix: I changed the spec and added a nodeSelector so that the Job always runs on the system pool, which is more stable than the user pool.
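The fix described above can be sketched as a nodeSelector on the Job's pod template. This is an illustrative reconstruction, not the poster's actual manifest: the schedule, names, image, and az command arguments are placeholders, and the kubernetes.azure.com/mode label is the standard AKS node-pool mode label (verify yours with kubectl get nodes --show-labels).

```yaml
apiVersion: batch/v1        # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: scale-user-pool     # hypothetical name
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # Pin the Job's pod to the stable system pool so scaling the
          # user pool to 0 cannot kill the pod that issued the command.
          nodeSelector:
            kubernetes.azure.com/mode: system
          containers:
          - name: scaler
            image: mcr.microsoft.com/azure-cli   # illustrative image
            command:
            - /bin/sh
            - -c
            # Resource group, cluster, and pool names are placeholders.
            - az aks nodepool scale --resource-group my-rg --cluster-name my-aks --name userpool --node-count 0
          restartPolicy: OnFailure
```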
QUESTION
I found this Pass date command as parameter to kubernetes cronjob which is similar, but did not solve my problem.
I'm trying to backup etcd using a cronjob, but etcd image doesn't have "date" command.
...ANSWER
Answered 2020-Oct-27 at 11:54

You can use printf like this:
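The snippet that followed has been stripped; a hedged reconstruction is below. It relies on the shell's built-in printf time formatting (printf '%(fmt)T', a bash feature) to produce a timestamp without the date binary; whether the etcd image's shell supports this must be checked, and the image tag, mount paths, and etcdctl flags are illustrative only.

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: etcd-backup          # hypothetical name
spec:
  schedule: "0 */6 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: k8s.gcr.io/etcd:3.4.13-0   # illustrative tag
            command:
            - /bin/sh
            - -c
            # printf '%(fmt)T' formats the current time without `date`;
            # requires a bash-compatible shell in the image.
            - ETCDCTL_API=3 etcdctl snapshot save "/backup/etcd-$(printf '%(%Y-%m-%d_%H-%M-%S)T').db"
          restartPolicy: OnFailure
```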
QUESTION
I have a Kubernetes CronJob that runs on GKE and runs Cucumber JVM tests. If a Step fails due to an assertion failure, a resource being unavailable, etc., Cucumber rightly throws an exception, which causes the CronJob's Job to fail, and the Kubernetes pod's status changes to ERROR. This leads to the creation of a new pod that tries to run the same Cucumber tests again, which fails again and retries again.
I don't want any of these retries to happen. If a CronJob's Job fails, I want it to remain in the failed state and not retry at all. Based on this, I have already tried setting backoffLimit: 0 in combination with restartPolicy: Never and concurrencyPolicy: Forbid, but it still retries by creating new pods and running the tests again.
What am I missing? Here's my kube manifest for the Cronjob:
...ANSWER
Answered 2020-Apr-22 at 15:01

To make things as simple as possible, I tested it using this example from the official Kubernetes documentation, applying minor modifications to it to illustrate what really happens in different scenarios.
I can confirm that when backoffLimit is set to 0 and restartPolicy to Never, everything works exactly as expected and there are no retries. Note that every single run of your Job, which in your example is scheduled to run at intervals of 60 seconds (schedule: "*/1 * * * *"), IS NOT considered a retry.
Let's take a closer look at the following example (base yaml available here):
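The example manifest itself has been stripped from this page. A hedged reconstruction, based on the well-known "hello" CronJob from the official Kubernetes documentation with the fields under discussion added:

```yaml
apiVersion: batch/v1beta1    # batch/v1 on Kubernetes 1.21 and later
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid  # do not start a run while one is active
  jobTemplate:
    spec:
      backoffLimit: 0        # no pod retries within a failed Job
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: Never   # do not restart the container in place
```

Note that each new scheduled run at the next minute is a fresh Job, not a retry, which matches the answer's point above.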
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install kubernetes-cron
You can use kubernetes-cron like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
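The steps above can be sketched as follows. This is a minimal sketch assuming Python 3 is installed; the PyPI package name is not confirmed, so the install commands are shown as comments to be adapted to your checkout:

```shell
# Create and activate an isolated virtual environment, as recommended above.
python3 -m venv .venv
. .venv/bin/activate

# With networking available, bring the packaging toolchain up to date and
# install the library (package name assumed from the repository name):
#   pip install --upgrade pip setuptools wheel
#   pip install kubernetes-cron
```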