autoscaler | Autoscaling components for Kubernetes
This repository contains autoscaling-related components for Kubernetes.
Community Discussions
Trending Discussions on autoscaler
QUESTION
I am using the ECK operator to create an Elasticsearch instance. The instance uses a StorageClass that has Retain (instead of Delete) as its reclaim policy. Here are my PVCs before deleting the Elasticsearch instance:
ANSWER
Answered 2021-Jun-14 at 15:38
"with the hope that, due to the Retain policy, the new pods (i.e. their PVCs) would bind to the existing PVs (and data wouldn't get lost)"
It is explicitly written in the documentation that this is not what happens: the PVs are not made available to another PVC after the original PVC is deleted.
the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume.
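If the goal is to reuse a Released volume (and its data), one common approach is to clear the spec.claimRef field on the PV so it becomes Available again and a new PVC can bind to it. A minimal sketch, assuming a released PV with the placeholder name pvc-0aff7e1c:

kubectl get pv                                            # look for volumes with STATUS "Released"
kubectl patch pv pvc-0aff7e1c --type=json \
    -p='[{"op": "remove", "path": "/spec/claimRef"}]'     # drop the old claim; the data stays on the volume
kubectl get pv pvc-0aff7e1c                               # STATUS should now read "Available"

The replacement PVC still has to match the PV's storageClassName, capacity, and access modes for the binding to happen.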
QUESTION
I have a GitHub repo with 2 branches on it, develop and main. The first is the "test" environment and the other is the "production" environment. I am working with Google Kubernetes Engine and I have automated deployment from a push on GitHub to the deploy on GKE. So our workflow is:
- Pull develop
- Write code and test locally
- When everything is fine locally, push to develop (it will automatically deploy to the GKE workload app_name_develop)
- QA tests on app_name_develop
- If QA tests pass, we create a pull request to put develop into main
- Automatic deploy to the GKE workload app_name_production (from the main branch)
The deployment of the container is defined in the Dockerfile and the Kubernetes deployment is defined in kubernetes/app.yaml. Those two files are tracked with Git inside the repo.
The problem is that when we create a pull request to put develop into main, it also takes the two files app.yaml and Dockerfile from develop to main. We end up with the settings from develop in main, and it messes the whole thing up.
I can't define env variables in those files because they could end up in the wrong branch. My question is: how can I exclude those files from the pull request? Or is there any way to manage multiple environments without having to manually modify the files after each pull request?
I don't know if it can help, but here is my Dockerfile:
...
ANSWER
Answered 2021-Jun-04 at 14:40
You can't selectively ignore files in a pull request, but there are 2 simple workarounds for this:
First:
- Create a new branch from 'develop'
- Replace the non-required files from 'main'
- Create a pull request from this new branch
Second:
- Create a new branch from 'main'
- Put the changes of the required files from 'develop'
- Create a pull request from this new branch
Either method will work; which is easier depends on how many files are to be included / excluded.
Example:
Considering main as the target and dev as the source
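A rough sketch of the first workaround in git commands (the branch name promote-to-main is a placeholder; the file paths come from the question):

# Start from develop and create a dedicated branch for the pull request
git checkout develop
git pull
git checkout -b promote-to-main

# Take the environment-specific files back from main so they are not overwritten
git checkout main -- Dockerfile kubernetes/app.yaml
git commit -m "Keep production Dockerfile and app.yaml from main"

# Push and open the pull request from promote-to-main into main
git push -u origin promote-to-main

git checkout main -- <path> restores just those paths from the main branch into the new branch, so the rest of the develop changes still flow through the pull request.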
QUESTION
I am trying to change the sync period as mentioned in the following k8s document. I found the file named kube-controller-manager.yaml in /etc/kubernetes/manifests. I changed the timeoutSeconds: value from 15 secs (the default) to 60 secs. Now I have 2 questions based on the above info:
- Is this the right way to change the sync period? I ask because I have read in the document that there is a flag named --horizontal-pod-autoscaler-sync-period, but adding that is also not working.
- Also, any changes made to the file kube-controller-manager.yaml get restored to the defaults whenever I restart minikube. What should I do? Please let me know any solution or view on this.
ANSWER
Answered 2021-May-28 at 09:02
To change the sync period you have to run the following command when starting minikube:
minikube start --extra-config 'controller-manager.horizontal-pod-autoscaler-sync-period=10s'
QUESTION
I am trying to auto-scale my pods based on CloudSQL instance response time. We are using cloudsql-proxy for a secure connection. I have deployed the Custom Metrics Adapter.
...
ANSWER
Answered 2021-May-21 at 09:35
Please refer to the link below to deploy a HorizontalPodAutoscaler (HPA) resource to scale your application based on Cloud Monitoring metrics:
https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#custom-metric_4
It looks like the custom metric name is different in the app and hpa deployment configuration files (yaml). The metric and application names should be the same in both the app and hpa deployment configuration files.
In the hpa deployment yaml file:
a. Replace custom-metric-stackdriver-adapter with custom-metric (or change the metric name to custom-metric-stackdriver-adapter in the app deployment yaml file).
b. Add "namespace: default" next to the application name under metadata. Also ensure you are adding the namespace in the app deployment configuration file.
c. Delete the duplicate lines 6 & 7 (minReplicas: 1, maxReplicas: 5).
d. Go to Cloud Console -> Kubernetes Engine -> Workloads. Delete the workloads (application-name & custom-metrics-stackdriver-adapter) created by the app deployment yaml and adapter_new_resource_model.yaml files.
e. Now apply the configurations for the resource model, app, and hpa (yaml files).
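A rough sketch of what a consistent hpa manifest could look like after these changes; the names here are placeholders and the exact manifest is in the linked tutorial:

kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: custom-metric-hpa        # placeholder name
  namespace: default             # step (b): namespace added explicitly
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: custom-metric-app      # must match the name in the app deployment yaml
  minReplicas: 1                 # listed once, per step (c)
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: custom-metric      # step (a): same metric name the app exports
      target:
        type: AverageValue
        averageValue: "20"
EOF

On older clusters the apiVersion may need to be autoscaling/v2beta2 instead of autoscaling/v2.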
QUESTION
I came across this page regarding the kube auto-scaler: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca
From this, I can tell that part of the reason why some of my nodes aren't being scaled down is because they have local-data set on them...
By default, the auto-scaler ignores nodes that have local data. I want to change this to false. However, I can't find instructions anywhere about how to change it.
I can't find anything obvious to edit. There's nothing in kube-system that looks like the autoscaler configuration - just the autoscaler status ConfigMap.
Could somebody help please? Thank you!
...
ANSWER
Answered 2021-May-13 at 22:22
You cannot. The only option GKE gives is a vague "autoscaling profile" choice between the default and "optimize utilization". You can, however, override it with per-pod annotations.
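The per-pod override referred to here is, as far as I know, the safe-to-evict annotation on the pods that hold the local data. A minimal sketch, with my-app as a placeholder Deployment name:

# Tell the cluster autoscaler it may evict these pods even though they use local storage
kubectl patch deployment my-app -p \
  '{"spec": {"template": {"metadata": {"annotations": {"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"}}}}}'

The annotation lives on the pod template, so the pods pick it up after the resulting rollout.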
QUESTION
I have a GCP managed instance group that I want to scale out and in between 0 and 1 instances using a cron schedule. GCP has a limitation that says:
Scaling schedules can only be used for MIGs that have at least one other type of autoscaling signal, such as a signal for scaling based on average CPU utilization, load balancing serving capacity, or Cloud Monitoring metrics.
So I must specify an additional autoscaling signal. The documentation goes on to suggest a workaround:
to scale based only on a schedule, you can set your CPU utilization target to 100%.
So I did. But then the managed group does not scale in to 0, it just stays at 1.
I've not used the Scale-in controls, so the only thing AFAICT that can prevent scale in is the 10 minute Stabilization period, which I have accounted for.
My autoscaler configuration:
...
ANSWER
Answered 2021-May-11 at 12:11
You are allowing autoscaling only after CPU utilization reaches 100% (the autoscaling policy). Because of that, performance will be impacted, so you can set the policy between 60% and 90%.
The minimum number of instances (minNumReplicas) for instance groups with or without autoscaling should be 1, so scaling in to 0 is not possible.
Scaling in to 0 is also not possible with the other signals/metrics (HTTP load balancing utilization, Stackdriver Monitoring metrics).
Use scale-in controls; they help if sudden load spikes occur.
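As a hedged sketch of applying that suggestion with gcloud (the group name, zone, and values are placeholders; the scaling-schedule settings are left out here):

# Keep the schedule-driven MIG, but move the CPU target off 100%
gcloud compute instance-groups managed set-autoscaling my-mig \
    --zone=us-central1-a \
    --min-num-replicas=1 \
    --max-num-replicas=1 \
    --target-cpu-utilization=0.75 \
    --cool-down-period=60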
QUESTION
I want to add some flags to change sync periods. Can I do it with minikube and kubectl? Or will I have to install and use kubeadm for this kind of initialization? I referred to this link.
I created and ran the yaml file but there was an error stating that
error: unable to recognize "sync.yaml": no matches for kind "ClusterConfiguration" in version "kubeadm.k8s.io/v1beta2"
The sync.yaml that I used to change the flag (with minikube):
...
ANSWER
Answered 2021-May-10 at 05:59
Minikube and kubeadm are separate tools, but you can pass custom CLI options to minikube control plane components as detailed here: https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults
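For instance, several controller-manager flags can be passed in one go this way (a rough sketch; the flag values below are only examples):

minikube start \
  --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=30s \
  --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-stabilization=1m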
QUESTION
I see there are 2 separate metrics, ApproximateNumberOfMessagesVisible and ApproximateNumberOfMessagesNotVisible.
Using the number of messages visible causes processing pods to get triggered for termination immediately after they pick up a message from the queue, as the message is then no longer visible. If I use the number of messages not visible, it will not scale up.
I'm trying to scale a Kubernetes service using the Horizontal Pod Autoscaler and an external metric from SQS. Here is the external metric template:
...
ANSWER
Answered 2021-Mar-21 at 16:13
This seems to be a case of thrashing - the number of replicas keeps fluctuating frequently due to the dynamic nature of the metrics evaluated.
IMHO, you've got a couple of options here.
You could look at adding a stabilization window to your HPA and also probably limit the scale-down rate. You'd have to try a few combinations of metrics and see what works best for you, as you best know the nature of the metrics (ApproximateNumberOfMessagesVisible in this case) you see in your infrastructure.
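A minimal sketch of what that could look like in an autoscaling/v2 HPA; the names, the external metric block, and the window and policy values are placeholders to illustrate the idea, not a drop-in configuration:

kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sqs-consumer-hpa               # placeholder; keep your existing metadata
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sqs-consumer                 # placeholder
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: sqs-messages-visible     # placeholder: whatever name your metrics adapter exposes
      target:
        type: AverageValue
        averageValue: "5"
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # wait 5 minutes of consistently low values before scaling down
      policies:
      - type: Pods
        value: 1                       # remove at most 1 pod ...
        periodSeconds: 60              # ... per minute
EOF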
QUESTION
I am running a Kubernetes cluster including metrics server add-on and Prometheus monitoring. I would like to know which Kubernetes components or activities use/can use monitoring data from the cluster.
What do I mean with "Kubernetes components or activities"?
Obviously, one of the main use cases for monitoring data is autoscaling mechanisms, including the Horizontal Pod Autoscaler, Vertical Pod Autoscaler and Cluster Autoscaler. I am searching for further components or activities that use live monitoring data from a Kubernetes cluster, ideally with a short explanation of why they use it (if it is not obvious). It would also be interesting to know which of those components or activities must work with monitoring data and which can work with monitoring data, i.e. can be configured to do so.
What do I mean with "monitoring data"?
Monitoring data include, but are not limited to: Node metrics, Pod metrics, Container metrics, network metrics and custom/application-specific metrics (e.g. captured and exposed by third-party tools like Prometheus).
I am thankful for every answer or comment in advance!
...
ANSWER
Answered 2021-May-08 at 09:03
metrics-server data is used by kubectl top and by the HorizontalPodAutoscaler system. I am not aware of any other places that use the metrics.k8s.io API (which technically doesn't have to be served by metrics-server, but usually is).
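For example, both consumers go through that same API group, which you can inspect directly with standard kubectl commands:

kubectl top nodes                                 # node CPU/memory, served via metrics.k8s.io
kubectl top pods -A                               # pod CPU/memory across all namespaces
kubectl get apiservice v1beta1.metrics.k8s.io     # the API group itself, usually backed by metrics-server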
QUESTION
This is very strange: I am using an AWS EKS cluster, and my HPA worked fine yesterday and this morning. Starting this afternoon, with nothing changed, my HPA suddenly stopped working!
This is my HPA:
...
ANSWER
Answered 2021-Mar-05 at 08:21
One thing that comes to mind is that your metrics-server might not be running correctly. Without data from the metrics-server, Horizontal Pod Autoscaling won't work.
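A few commands for checking whether metrics-server is the culprit (the deployment name and namespace assume the common setup, and my-hpa is a placeholder):

kubectl -n kube-system get deployment metrics-server   # is it installed and READY?
kubectl top pods                                        # fails if metrics-server is not serving data
kubectl describe hpa my-hpa                             # look for FailedGetResourceMetric events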
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported