kube-aws | A command-line tool to declaratively manage Kubernetes clusters | AWS library

 by kubernetes-retired | Go | Version: v0.16.4 | License: Apache-2.0

kandi X-RAY | kube-aws Summary

kube-aws is a Go library typically used in Cloud and AWS applications. kube-aws has no bugs or vulnerabilities, it has a Permissive License, and it has medium support. You can download it from GitHub.

[EOL] A command-line tool to declaratively manage Kubernetes clusters on AWS

            kandi-support Support

              kube-aws has a medium-activity ecosystem.
              It has 1,136 stars, 301 forks, and 78 watchers.
              It has had no major release in the last 12 months.
              There are 0 open issues and 848 closed issues; on average, issues are closed in 185 days. There are no open pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of kube-aws is v0.16.4.

            kandi-Quality Quality

              kube-aws has no bugs reported.

            kandi-Security Security

              kube-aws has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              kube-aws is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              kube-aws releases are available to install and integrate.

            kube-aws Key Features

            No Key Features are available at this moment for kube-aws.

            kube-aws Examples and Code Snippets

            No Code Snippets are available at this moment for kube-aws.

            Community Discussions

            QUESTION

            Organising folders in terraform
            Asked 2020-May-29 at 20:24

            I have a Terraform setup in which I am creating resources in AWS; I am using S3, EC2, and also Kubernetes. For Kubernetes I have more than 5 .tf files. I have created a folder called kube-aws and placed the .tf files there. Right now I have a setup like the one below

            ...

            ANSWER

            Answered 2020-May-29 at 20:24

            The resources in the kube-aws directory will not be included when scripts is your working directory. The scripts directory is considered the root module in this instance (see the Modules documentation):

            The .tf files in your working directory when you run terraform plan or terraform apply together form the root module.

            You have two options to include kube-aws resources:

            1. Move them up to the scripts directory.
            2. Create a module block in one of the scripts/*.tf files and pass in required variables.

            For example, in, say, s3.tf:
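
            A minimal sketch of such a module block (the module name, relative source path, and variable are illustrative assumptions, not taken from the question):

                # scripts/s3.tf -- hypothetical module block pulling in the kube-aws directory
                module "kube_aws" {
                  # The relative path assumes kube-aws sits alongside the scripts directory.
                  source = "../kube-aws"

                  # Pass in whatever variables the kube-aws module declares, e.g.:
                  # cluster_name = var.cluster_name
                }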

            Source https://stackoverflow.com/questions/62087863

            QUESTION

            How is the preemption notice handled?
            Asked 2019-Feb-25 at 00:43

            I'm currently running on AWS and use kube-aws/kube-spot-termination-notice-handler to intercept an AWS spot termination notice and gracefully evict the pods.

            I'm reading this GKE documentation page and I see:

            Preemptible instances terminate after 30 seconds upon receiving a preemption notice.

            Going into the Compute Engine documentation, I see that an ACPI G2 Soft Off is sent 30 seconds before the termination happens, but this issue suggests that the kubelet itself doesn't handle it.

            So, how does GKE handle preemption? Will the node do a drain/cordon operation or does it just do a hard shutdown?

            ...

            ANSWER

            Answered 2018-Apr-19 at 11:14

            Yes, you are right: so far there is no built-in way to handle the ACPI G2 Soft Off.

            Note that while a normal preemptible instance supports shutdown scripts (where you could introduce some kind of logic to perform a drain/cordon, as sketched below), this is not the case when they are Kubernetes nodes:

            Currently, preemptible VMs do not support shutdown scripts.
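
            For illustration, the drain/cordon logic such a script (or, on AWS, the termination-notice handler mentioned in the question) would run looks roughly like this; a minimal sketch, assuming NODE_NAME is set to the node's name, with drain flags matching kubectl of that era:

                # Stop new pods from being scheduled onto the node, then evict the running ones.
                kubectl cordon "$NODE_NAME"
                kubectl drain "$NODE_NAME" --ignore-daemonsets --delete-local-data --force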

            You can perform some tests, but quoting again from the documentation:

            You can simulate an instance preemption by stopping the instance.

            And so far, if you stop the instance, even if it is a Kubernetes node, no action is taken to cordon/drain and gracefully remove the node from the cluster.

            However, this feature is still in beta, so it is at an early stage of its life, and whether and how to introduce such handling is still a matter of discussion.

            Disclaimer: I work for Google Cloud Platform Support.

            Source https://stackoverflow.com/questions/49916965

            QUESTION

            How do I install ansible-galaxy on macOS using brew?
            Asked 2018-Nov-26 at 09:03

            Is it possible to install ansible-galaxy using brew on macOS? I tried:

            ...

            ANSWER

            Answered 2018-Nov-21 at 22:59

            Once you install ansible on your machine using brew or pip, you get ansible-galaxy automatically; it's not a separate package but a subcommand of ansible, like ansible-vault, ansible-doc, etc.
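
            For example (a minimal sketch; the role name is only an illustration):

                brew install ansible                        # or: pip install ansible
                ansible-galaxy --version                    # confirms the subcommand is available
                ansible-galaxy install geerlingguy.docker   # installs a role from Ansible Galaxy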

            Source https://stackoverflow.com/questions/53368321

            QUESTION

            kubernetes on AWS: certificate signed by unknown authority
            Asked 2017-Dec-21 at 10:09

            I followed this guide https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-launch.html to create a kubernetes cluster on AWS with kube-aws.

            I am using kube-aws at v0.9.4-rc2.

            After successfully running kube-aws up --s3-uri s3://.., I tried to get the nodes with kubectl get nodes, and that's when I got this error:

            ...

            ANSWER

            Answered 2017-Feb-07 at 22:00

            It seems like the problem was that my credentials were not all generated correctly. So perhaps the apiserver cert was signed with the wrong CA cert? Not sure how that might've happened.

            Anyway, deleting the credentials directory, then destroying the cluster and bringing it up again, solved the problem for me. Luckily it's still an experimental cluster, so I could do that. Not sure if I could've fixed it without destroying the cluster.
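
            A minimal sketch of that recovery sequence (destructive: it tears down the cluster; the exact subcommands may vary by kube-aws version, and the S3 URI is left elided as in the question):

                # WARNING: this destroys the running cluster; only safe on an experimental cluster.
                rm -rf credentials
                kube-aws destroy
                kube-aws render credentials   # regenerate the TLS assets
                kube-aws up --s3-uri s3://..  # bring the cluster back up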

            Source https://stackoverflow.com/questions/42099872

            QUESTION

            Kubernetes: how to properly change apiserver runtime settings
            Asked 2017-Dec-11 at 21:47

            I'm using kube-aws to run a Kubernetes cluster on AWS, and everything works as expected.

            Now, I realize that cron jobs aren't turned on in the version I'm using (v1.7.10_coreos.0), while the documentation for Kubernetes only states the following:

            For previous versions of cluster (< 1.8) you need to explicitly enable batch/v2alpha1 API by passing --runtime-config=batch/v2alpha1=true to the API server (see Turn on or off an API version for your cluster for more).

            And the documentation that text points to only states this (it's the actual, full documentation):

            Specific API versions can be turned on or off by passing --runtime-config=api/ flag while bringing up the API server. For example: to turn off v1 API, pass --runtime-config=api/v1=false. runtime-config also supports 2 special keys: api/all and api/legacy to control all and legacy APIs respectively. For example, for turning off all API versions except v1, pass --runtime-config=api/all=false,api/v1=true. For the purposes of these flags, legacy APIs are those APIs which have been explicitly deprecated (e.g. v1beta3).

            I have been unsuccessful in finding information about how to change the configuration of a running cluster, and, of course, I don't want to simply re-run the command on the apiserver.

            Note that kube-aws still uses hyperkube, not kubeadm. Also, the /etc/kubernetes/manifests directory only contains the ssl directory.

            The setting I want to apply is this: --runtime-config=batch/v2alpha1=true

            What is the proper way, preferably using kubectl, to apply this setting and have the apiservers restarted?

            Thanks.

            ...

            ANSWER

            Answered 2017-Dec-11 at 21:47

            batch/v2alpha1=true is set by default in kube-aws. You can find it here.
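
            To verify that the API group is actually enabled on a running cluster, one quick check (assuming kubectl is configured against that cluster):

                kubectl api-versions | grep batch/v2alpha1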

            Source https://stackoverflow.com/questions/47744137

            QUESTION

            Better framework/tool for kubernetes cluster - kops vs kube-aws
            Asked 2017-Oct-29 at 09:37

            I'm going to create a Kubernetes cluster on AWS infrastructure. I have two options: kops or kube-aws. Which is the best one to use for creating and managing a k8s cluster on AWS? What are the pros and cons of those tools?

            I have one master and two worker nodes in different AZs.

            Thank you

            ...

            ANSWER

            Answered 2017-Oct-29 at 09:37

            I used kube-aws for two months (it did the job properly).

            But then I switched to kops because it was much simpler to use. That was 6 months ago, and I am still satisfied.

            Source https://stackoverflow.com/questions/44410970

            QUESTION

            How to deploy Kubernetes on AWS?
            Asked 2017-Mar-02 at 22:14

            I'm wondering how people are deploying a production-caliber Kubernetes cluster in AWS and, more importantly, how they chose their approach.

            The k8s documentation points towards kops for Debian, Ubuntu, CentOS, and RHEL or kube-aws for CoreOS/Container Linux. Among these choices it's not clear how to pick one over the others. CoreOS seems like the most compelling option since it's designed for container workloads.

            But wait, there's more.

            bootkube seems to be the next iteration of the CoreOS deployment technology and is on the roadmap for inclusion within kube-aws. Should I wait until kube-aws uses bootkube?

            Heptio recently announced a Quickstart architecture for deploying k8s in AWS. This is the newest and therefore probably the least mature approach, but it does seem to have gained traction from within AWS.

            Lastly, kubeadm is a thing, and I'm not really sure where it fits into all of this.

            There are probably more approaches that I'm missing too.

            Given the number of options with overlapping intent, it's very difficult to choose a path forward. I'm not interested in a proof of concept. I want to be able to deploy a secure, highly available cluster for production use and be able to upgrade the cluster (host OS, etcd, and k8s system components) over time.

            What did you choose and how did you decide?

            ...

            ANSWER

            Answered 2017-Mar-02 at 22:14

            I'd say pick whatever fits your needs (see also Picking the right solution)...

            Which could be:

            • Speed of the cluster setup
            • Integration in your existing toolchain
              • e.g. kops integrates with Terraform, which might be a good fit for some people
            • Experience within your team/company/...
              • e.g. how comfortable are you with the related Linux distribution
            • Required maturity of the tool itself
              • some tools are very alpha; are you willing to play the role of an early adopter?
            • Ability to upgrade between Kubernetes versions
              • kubeadm has this on its agenda; some others prefer to throw away clusters instead of upgrading
            • Required integration into external tools (monitoring, logging, auth, ...)
            • Supported cloud providers

            With your specific requirements I'd pick the Heptio or kubeadm approach.

            • Heptio if you can live with the given constraints (e.g. predefined OS)
            • kubeadm if you need more flexibility; everything done with kubeadm can be transferred to other cloud providers

            Other options for AWS lower on my list:

            • Kubernetes the hard way - using this might be the only true way to set up a production cluster, as it is the only way you can fully understand each moving part of the system. Lower on the list because often the result from any of the tools might just be more than enough, even for production.
            • kube-up.sh - deprecated by the community, so I'd not use it for new projects
            • kops - my team had some strange experiences with it, which seemed due to our (custom) needs back then (existing VPC); that's why it's lower on my list. It would be #1 for an environment where Terraform is used too.
            • bootkube - lower on my list because of its limitation to CoreOS
            • Rancher - interesting toolchain, seems to be too much for a single cluster

            Off-topic: if you don't have to run on AWS, I'd also always consider running on GCE for production workloads, as it is a well-managed platform rather than something you have to build yourself.

            Source https://stackoverflow.com/questions/42563712

            QUESTION

            Replacing AWS ELB in a k8s cluster
            Asked 2017-Jan-31 at 14:41

            I have a k8s cluster deployed in AWS using kube-aws. When I deploy a service, a new ELB is added to expose the service to the internet. Can I use an ingress controller to replace the ELB, or is there any other way to expose services other than an ELB?

            ...

            ANSWER

            Answered 2017-Jan-31 at 14:41

            First, replace type: LoadBalancer with type: ClusterIP in your service definition. Then you have to configure the ingress and deploy a controller, like Nginx.
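
            A minimal sketch of the two pieces (names, host, and ports are illustrative; the Ingress API version matches clusters of that era):

                # Service: ClusterIP instead of LoadBalancer
                apiVersion: v1
                kind: Service
                metadata:
                  name: my-app              # hypothetical name
                spec:
                  type: ClusterIP
                  selector:
                    app: my-app
                  ports:
                    - port: 80
                      targetPort: 8080
                ---
                # Ingress routing traffic to the Service above
                apiVersion: extensions/v1beta1
                kind: Ingress
                metadata:
                  name: my-app
                spec:
                  rules:
                    - host: my-app.example.com   # hypothetical host
                      http:
                        paths:
                          - path: /
                            backend:
                              serviceName: my-app
                              servicePort: 80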

            If you are looking for a full example, I have one here: nginx-ingress-controller.

            The ingress will expose your services using some of your workers' public IPs, usually 2 of them. Just check your ingress with kubectl get ing -o wide and create the DNS records.

            Source https://stackoverflow.com/questions/41954663

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install kube-aws

            You can download it from GitHub.
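
            A minimal sketch of a typical release install (the version, asset name, and extraction path are assumptions; check the project's GitHub releases page for the actual artifacts):

                # Download a release tarball, unpack it, and put the binary on PATH.
                curl -L -o kube-aws.tar.gz \
                  https://github.com/kubernetes-retired/kube-aws/releases/download/v0.16.4/kube-aws-linux-amd64.tar.gz
                tar -xzf kube-aws.tar.gz
                sudo mv linux-amd64/kube-aws /usr/local/bin/   # path inside the tarball is assumed
                kube-aws version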

            Support

            This repository is about to enter read-only mode, and no further updates will be made here.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/kubernetes-retired/kube-aws.git

          • CLI

            gh repo clone kubernetes-retired/kube-aws

          • sshUrl

            git@github.com:kubernetes-retired/kube-aws.git

            Consider Popular AWS Libraries

            localstack

            by localstack

            og-aws

            by open-guides

            aws-cli

            by aws

            awesome-aws

            by donnemartin

            amplify-js

            by aws-amplify

            Try Top Libraries by kubernetes-retired

            external-storage

            by kubernetes-retired (Go)

            heapster

            by kubernetes-retired (Go)

            contrib

            by kubernetes-retired (Go)

            kubeadm-dind-cluster

            by kubernetes-retired (Shell)

            frakti

            by kubernetes-retired (Go)