autoscaler | Autoscaling components for Kubernetes

by kubernetes | Go | Version: cluster-autoscaler-chart-9.29.1 | License: Apache-2.0

kandi X-RAY | autoscaler Summary

autoscaler is a Go library. It has no reported bugs or vulnerabilities, carries a permissive license, and has medium support. You can download it from GitHub.

This repository contains autoscaling-related components for Kubernetes.

            kandi-support Support

autoscaler has a moderately active ecosystem.
It has 6885 stars, 3475 forks, and 147 watchers.
It had no major release in the last 12 months.
There are 138 open issues and 1754 closed issues. On average, issues are closed in 249 days. There are 47 open pull requests and 0 closed pull requests.
It has a neutral sentiment in the developer community.
The latest version of autoscaler is cluster-autoscaler-chart-9.29.1.

            kandi-Quality Quality

              autoscaler has no bugs reported.

            kandi-Security Security

              autoscaler has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              autoscaler is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              autoscaler releases are available to install and integrate.
              Installation instructions are not available. Examples and code snippets are available.

Top functions reviewed by kandi - BETA

kandi's functional review helps you automatically verify the functionalities of libraries and avoid rework. It currently covers the most popular Java, JavaScript and Python libraries.

            autoscaler Key Features

            No Key Features are available at this moment for autoscaler.

            autoscaler Examples and Code Snippets

            No Code Snippets are available at this moment for autoscaler.

            Community Discussions

            QUESTION

PVCs not created at all after deletion when using the Retain reclaim policy in the corresponding StorageClass
            Asked 2021-Jun-14 at 15:38

            I am using the ECK operator, to create an Elasticsearch instance.

            The instance uses a StorageClass that has Retain (instead of Delete) as its reclaim policy.

            Here are my PVCs before deleting the Elasticsearch instance

            ...

            ANSWER

            Answered 2021-Jun-14 at 15:38

with the hope that, due to the Retain policy, the new pods (i.e. their PVCs) would bind to the existing PVs (and data wouldn't get lost)

It is explicitly written in the documentation that this is not what happens: the PVs are not available for another PVC after the original PVC is deleted.

            the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume.
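If the intent really is to reuse the old volumes for the new PVCs, one common workaround (a sketch that goes beyond the answer above; the PV name is a placeholder, and any leftover data should be dealt with first) is to clear the claimRef on the released PV so it becomes Available again:

```
# List PVs and find the one stuck in the "Released" phase
kubectl get pv

# Remove the reference to the old (deleted) PVC; the PV becomes "Available"
# and a new PVC with a matching spec can bind to it. Note that the previous
# claimant's data is still on the volume.
kubectl patch pv <pv-name> --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```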

            Source https://stackoverflow.com/questions/67971628

            QUESTION

            Manage environments with Github and Google Kubernetes Engine
            Asked 2021-Jun-04 at 14:40

I have a GitHub repo with 2 branches on it, develop and main. The first is the "test" environment and the other is the "production" environment. I am working with Google Kubernetes Engine and have automated deployment from a push on GitHub to a deploy on GKE. So our workflow is:

            1. Pull develop
            2. Write code and test locally
            3. When everything is fine locally, push on develop (it will automatically deploy on GKE workload app_name_develop)
            4. QA tests on app_name_develop
            5. If QA tests passed, we create a pull request to put develop into main
            6. Automatically deploy on GKE workload app_name_production (from the main branch)

            The deployment of the container is defined in Dockerfile and the Kubernetes deployment is defined in kubernetes/app.yaml. Those two files are tracked with Git inside the repo.

The problem here is that when we create a pull request to merge develop into main, it also takes the two files app.yaml and Dockerfile from develop to main. We end up with the settings from develop in main, and it messes up the whole thing.

I can't define env variables in those files because they could end up in the wrong branch. My question is: how can I exclude those files from the pull request? Or is there any way to manage multiple environments without having to manually modify the files after each pull request?

I don't know if it can help, but here is my Dockerfile:

            ...

            ANSWER

            Answered 2021-Jun-04 at 14:40

You can't selectively ignore some files in a pull request. But there are 2 simple workarounds for this:

First:

1. Create a new branch from 'develop'
2. Replace the non-required files from 'main'
3. Create a pull request from this new branch

Second:

1. Create a new branch from 'main'
2. Pull in the changes to the required files from 'develop'
3. Create a pull request from this new branch

Either method will work. Which is easier depends on how many files are to be included / excluded.

Example:
            Considering main as target and dev as source
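As a minimal sketch of the second workaround (the promote-develop branch name and the src/ path are hypothetical; adjust to your repository layout):

```
# Branch from main, pull in only the application code from develop, and leave
# main's Dockerfile and kubernetes/app.yaml exactly as they are on main.
git checkout main
git checkout -b promote-develop
git checkout develop -- src/        # copies and stages only these paths from develop
git commit -m "Promote develop to main without environment files"
git push origin promote-develop     # then open the PR from this branch into main
```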

            Source https://stackoverflow.com/questions/67808747

            QUESTION

            Kubernetes HPA changing sync period
            Asked 2021-May-28 at 09:02

I am trying to change the sync period as mentioned in the following k8s document. I found the file named kube-controller-manager.yaml in /etc/kubernetes/manifests. I changed the timeoutSeconds: value from 15 seconds (the default) to 60 seconds. Now I have 2 questions based on the above info:

  1. Is this the right way to change the sync period? I have read in the document that there is a flag named --horizontal-pod-autoscaler-sync-period, but adding that is also not working.
  2. Any changes made to the file kube-controller-manager.yaml are restored to the defaults whenever I restart minikube. What should I do? Please let me know any solution or view on this.
            ...

            ANSWER

            Answered 2021-May-28 at 09:02

To change the sync period you have to run the following command when starting minikube: minikube start --extra-config 'controller-manager.horizontal-pod-autoscaler-sync-period=10s'
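A slightly expanded sketch of the same approach, with a hedged check that the flag actually reached the controller manager (the 10s value is only an example, and the label selector assumes minikube's kubeadm-style static pod):

```
# Start minikube with a custom HPA sync period
minikube start --extra-config 'controller-manager.horizontal-pod-autoscaler-sync-period=10s'

# Confirm the flag was applied to the controller manager static pod
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep sync-period
```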

            Source https://stackoverflow.com/questions/67434896

            QUESTION

            Autoscaling Deployments with Cloud Monitoring metrics
            Asked 2021-May-21 at 09:35

I am trying to auto-scale my pods based on CloudSQL instance response time. We are using cloudsql-proxy for a secure connection. I deployed the Custom Metrics Adapter:

            https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml

            ...

            ANSWER

            Answered 2021-May-21 at 09:35

1. Refer to the link below to deploy a HorizontalPodAutoscaler (HPA) resource that scales your application based on Cloud Monitoring metrics:

https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#custom-metric_4

2. It looks like the custom metric name differs between the app and HPA deployment configuration files (YAML). The metric and application names should be the same in both files.

3. In the HPA deployment YAML file:

  a. Replace custom-metric-stackdriver-adapter with custom-metric (or change the metric name to custom-metric-stackdriver-adapter in the app deployment YAML file).

  b. Add "namespace: default" next to the application name under metadata. Also ensure you add the namespace in the app deployment configuration file.

  c. Delete the duplicate lines 6 & 7 (minReplicas: 1, maxReplicas: 5).

  d. Go to Cloud Console -> Kubernetes Engine -> Workloads and delete the workloads (application-name and custom-metrics-stackdriver-adapter) created by the app deployment YAML and adapter_new_resource_model.yaml files.

  e. Now apply the configurations again for the resource model, app and HPA (YAML files).
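For orientation, a minimal HPA manifest of the kind that tutorial describes (my-app and custom-metric are placeholder names; the exact apiVersion and metric type depend on your cluster version and adapter):

```
# Sketch: scale a Deployment on a custom metric exported through the
# Stackdriver custom metrics adapter. All names are placeholders.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: custom-metric
      target:
        type: AverageValue
        averageValue: "20"
```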

            Source https://stackoverflow.com/questions/67261520

            QUESTION

            How do I edit the GKE auto-scaler settings?
            Asked 2021-May-13 at 22:22

            I came across this page regarding the kube auto-scaler: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca

From this, I can tell that part of the reason why some of my nodes aren't being scaled down is that they have local data on them...

            By default, the auto-scaler ignores nodes that have local data. I want to change this to false. However, I can't find instructions anywhere about how to change it.

            I can't find any obvious things to edit. There's nothing in kube-system that implies it is about the autoscaler configuration - just the autoscaler status ConfigMap.

            Could somebody help please? Thank you!

            ...

            ANSWER

            Answered 2021-May-13 at 22:22

You cannot. The only option GKE gives is a vague "autoscaling profile" choice between the default and "optimize utilization". You can, however, override the local-storage restriction per pod with an annotation.
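The per-pod override referred to here is the cluster autoscaler's safe-to-evict annotation; a minimal sketch (pod name and image are placeholders):

```
# Sketch: mark a pod as safe to evict so its local storage (e.g. an emptyDir)
# does not block the cluster autoscaler from scaling the node down.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
  - name: app
    image: nginx
```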

            Source https://stackoverflow.com/questions/67526786

            QUESTION

            GCP managed instance group won't scale to zero
            Asked 2021-May-11 at 12:11

            I have a GCP managed instance group that I want to scale out and in between 0 and 1 instances using a cron schedule. GCP has a limitation that says:

            Scaling schedules can only be used for MIGs that have at least one other type of autoscaling signal, such as a signal for scaling based on average CPU utilization, load balancing serving capacity, or Cloud Monitoring metrics.

            So I must specify an additional autoscaling signal. The documentation goes on to suggest a workaround:

            to scale based only on a schedule, you can set your CPU utilization target to 100%.

            So I did. But then the managed group does not scale in to 0, it just stays at 1. I've not used the Scale-in controls, so the only thing AFAICT that can prevent scale in is the 10 minute Stabilization period, which I have accounted for.

            My autoscaler configuration:

            ...

            ANSWER

            Answered 2021-May-11 at 12:11

You are allowing autoscaling only once CPU utilization reaches 100% (the autoscaling policy). Because of that, performance will be impacted, so you would normally set the policy between 60% and 90%.

The minimum number of instances (minNumReplicas) for instance groups with or without autoscaling must be at least 1, so scaling in to 0 is not possible.

Scaling in to 0 is also not possible with the other signals/metrics (HTTP load balancing serving capacity, Stackdriver Monitoring metrics).

Use scale-in controls; they help if sudden load spikes occur.
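A hedged gcloud sketch of what the suggested policy looks like (the MIG name, zone, and values are placeholders):

```
# Sketch: configure the MIG autoscaler with a CPU target below 100% and a
# minimum of 1 replica, as suggested above.
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone us-central1-a \
  --min-num-replicas 1 \
  --max-num-replicas 3 \
  --target-cpu-utilization 0.75 \
  --cool-down-period 600
```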

            Source https://stackoverflow.com/questions/67285594

            QUESTION

Changing kube-controller-manager.yaml with minikube
            Asked 2021-May-11 at 01:35

I want to add some flags to change sync periods. Can I do it with minikube and kubectl? Or will I have to install and use kubeadm for any such kind of initialization? I referred to this link.

I created and applied the YAML file, but there was an error stating:

            error: unable to recognize "sync.yaml": no matches for kind "ClusterConfiguration" in version "kubeadm.k8s.io/v1beta2"

            sync.yaml that I have used to change the flag (with minikube):

            ...

            ANSWER

            Answered 2021-May-10 at 05:59

Minikube and kubeadm are separate tools, but you can pass custom CLI options to minikube control plane components as detailed here: https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults

            Source https://stackoverflow.com/questions/67465340

            QUESTION

Is it possible to get the total number of messages in SQS?
            Asked 2021-May-10 at 11:25

            I see there are 2 separate metrics ApproximateNumberOfMessagesVisible and ApproximateNumberOfMessagesNotVisible.

Using the number of messages visible causes processing pods to get targeted for termination immediately after they pick up a message from the queue, since the message is then no longer visible. If I use the number of messages not visible, it will not scale up.

            I'm trying to scale a kubernetes service using horizontal pod autoscaler and external metric from SQS. Here is template external metric:

            ...

            ANSWER

            Answered 2021-Mar-21 at 16:13

This seems to be a case of thrashing: the number of replicas keeps fluctuating frequently due to the dynamic nature of the metrics being evaluated.

IMHO, you've got a couple of options here. You could look at adding a stabilization window to your HPA and probably also limit the scale-down rate. You'd have to try a few combinations of metrics and see what works best for you, since you know best the nature of the metrics (ApproximateNumberOfMessagesVisible in this case) you see in your infrastructure.
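A hedged sketch of what that could look like with the autoscaling/v2beta2 behavior field (names, thresholds, and the external metric name are placeholders for whatever your metrics adapter exposes):

```
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sqs-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sqs-worker
  minReplicas: 1
  maxReplicas: 20
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes of consistently lower metrics
      policies:
      - type: Pods
        value: 1                        # remove at most 1 pod per minute
        periodSeconds: 60
  metrics:
  - type: External
    external:
      metric:
        name: sqs-messages-visible      # placeholder external metric name
      target:
        type: AverageValue
        averageValue: "10"
```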

            Source https://stackoverflow.com/questions/66708621

            QUESTION

            Which Kubernetes components do actually use monitoring data?
            Asked 2021-May-10 at 07:37

            I am running a Kubernetes cluster including metrics server add-on and Prometheus monitoring. I would like to know which Kubernetes components or activities use/can use monitoring data from the cluster.

            What do I mean with "Kubernetes components or activities"?

Obviously, one of the main use cases for monitoring data is the autoscaling mechanisms, including the Horizontal Pod Autoscaler, Vertical Pod Autoscaler and Cluster Autoscaler. I am searching for further components or activities that use live monitoring data from a Kubernetes cluster, and potentially a short explanation of why they use it (if it is not obvious). It would also be interesting to know which of those components or activities must work with monitoring data and which merely can work with monitoring data, i.e. can be configured to do so.

            What do I mean with "monitoring data"?

            Monitoring data include, but are not limited to: Node metrics, Pod metrics, Container metrics, network metrics and custom/application-specific metrics (e.g. captured and exposed by third-party tools like Prometheus).

            I am thankful for every answer or comment in advance!

            ...

            ANSWER

            Answered 2021-May-08 at 09:03

metrics-server data is used by kubectl top and by the HorizontalPodAutoscaler system. I am not aware of any other places that use the metrics.k8s.io API (which technically doesn't have to be served by metrics-server, but usually is).
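A quick sketch of the consumers mentioned, plus the raw API they read from:

```
# Node and pod usage as reported by metrics-server
kubectl top nodes
kubectl top pods -A

# The underlying metrics.k8s.io API that kubectl top and the HPA consume
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```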

            Source https://stackoverflow.com/questions/67435494

            QUESTION

            Kubernetes HPA doesn't scale up
            Asked 2021-Apr-29 at 19:11

This is very strange: I am using an AWS EKS cluster, and my HPA worked fine yesterday and this morning. Starting this afternoon, with nothing changed, my HPA suddenly stopped working!

            This is my HPA:

            ...

            ANSWER

            Answered 2021-Mar-05 at 08:21

            One thing that comes to mind is that your metrics-server might not be running correctly. Without data from the metrics-server, Horizontal Pod Autoscaling won't work.
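Some hedged checks along those lines (the HPA name is a placeholder; the metrics-server deployment usually lives in kube-system):

```
# Is metrics-server running and serving data?
kubectl -n kube-system get deployment metrics-server
kubectl -n kube-system logs deployment/metrics-server

# If metrics-server is down, this fails with "Metrics API not available"
kubectl top pods

# Look for FailedGetResourceMetric or similar events on the HPA itself
kubectl describe hpa <hpa-name>
```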

            Source https://stackoverflow.com/questions/66486913

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install autoscaler

            You can download it from GitHub.
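A minimal sketch of the two usual routes (the Helm repository URL is the one the project publishes its cluster-autoscaler chart to; the release and cluster names are placeholders):

```
# Clone the source from GitHub
git clone https://github.com/kubernetes/autoscaler.git

# Or install the cluster-autoscaler Helm chart
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install my-release autoscaler/cluster-autoscaler \
  --set autoDiscovery.clusterName=my-cluster
```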

            Support

Interested in autoscaling? Want to talk? Have questions, concerns or great ideas? Please join us on #sig-autoscaling at https://kubernetes.slack.com/, or join one of our weekly meetings. See the Kubernetes Community Repo for more information.