kube-aws | A command-line tool to declaratively manage Kubernetes clusters | AWS library
kandi X-RAY | kube-aws Summary
A command-line tool to declaratively manage Kubernetes clusters on AWS
Community Discussions
Trending Discussions on kube-aws
QUESTION
I have a Terraform setup in which I am creating resources in AWS; I am using S3, EC2, and also Kubernetes. For Kubernetes I have more than five .tf files. I have created a folder called kube-aws and placed the .tf files there. Right now I have a setup like below
ANSWER
Answered 2020-May-29 at 20:24
The resources in the kube-aws directory will not be included when scripts is your working directory. The scripts directory is considered the root module in this instance (see the Modules documentation):
The .tf files in your working directory when you run terraform plan or terraform apply together form the root module.
You have two options to include the kube-aws resources:
- Move them up to the scripts directory.
- Create a module block in one of the scripts/*.tf files and pass in the required variables.
For example, in, say, s3.tf:
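A minimal sketch of what that module block might look like, assuming the kube-aws folder sits next to the scripts directory; the variable names are illustrative, not taken from the original question:

    # scripts/s3.tf (illustrative)
    module "kube_aws" {
      # Relative path from scripts/ to the folder holding the kube-aws .tf files
      source = "../kube-aws"

      # Pass whatever variables the kube-aws .tf files actually declare;
      # these names are hypothetical examples.
      cluster_name = "my-cluster"
      region       = "eu-west-1"
    }

After adding the block, run terraform init again so Terraform registers the new module.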
QUESTION
I'm currently running on AWS and use kube-aws/kube-spot-termination-notice-handler to intercept an AWS spot termination notice and gracefully evict the pods.
I'm reading this GKE documentation page and I see:
Preemptible instances terminate after 30 seconds upon receiving a preemption notice.
Going into the Compute Engine documentation, I see that an ACPI G2 Soft Off is sent 30 seconds before the termination happens, but this issue suggests that the kubelet itself doesn't handle it.
So, how does GKE handle preemption? Will the node do a drain/cordon operation or does it just do a hard shutdown?
ANSWER
Answered 2018-Apr-19 at 11:14
Yes, you are right: so far there is no built-in way to handle ACPI G2 Soft Off.
Note that while a normal preemptible instance supports shutdown scripts (where you could introduce some kind of logic to perform a drain/cordon), this is not the case when they are Kubernetes nodes:
Currently, preemptible VMs do not support shutdown scripts.
You can perform some tests, but quoting again from the documentation:
You can simulate an instance preemption by stopping the instance.
And so far, if you stop the instance, even if it is a Kubernetes node, no action is taken to cordon/drain and gracefully remove the node from the cluster.
However, this feature is still in beta; it is at an early stage of its life, and whether and how to introduce such handling is currently a matter of discussion.
Disclaimer: I work for Google Cloud Platform Support.
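For reference, the cordon/drain operation discussed above is the same one that handlers like kube-spot-termination-notice-handler perform on AWS; the manual equivalent with standard kubectl commands looks like this (the node name is a placeholder):

    # Mark the node unschedulable so no new pods land on it
    kubectl cordon <node-name>
    # Evict its pods gracefully; DaemonSet pods cannot be evicted
    kubectl drain <node-name> --ignore-daemonsets --delete-local-data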
QUESTION
Is it possible to install ansible-galaxy using brew on macOS? I tried:
ANSWER
Answered 2018-Nov-21 at 22:59
Once you install Ansible on your machine using brew or pip, you get ansible-galaxy automatically. It's not a separate package; it's a command that ships alongside ansible, like ansible-vault, ansible-doc, etc.
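A quick way to confirm this, assuming Homebrew is already set up:

    # Installing the ansible formula also installs the bundled tools
    brew install ansible
    # ansible-galaxy is now on the PATH; this prints the Ansible version
    ansible-galaxy --version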
QUESTION
I followed this guide https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-launch.html to create a Kubernetes cluster on AWS with kube-aws.
I am using kube-aws v0.9.4-rc2.
After successfully running kube-aws up --s3-uri s3://.., I tried to get the nodes with kubectl get nodes, and that's when I got this error:
ANSWER
Answered 2017-Feb-07 at 22:00
It seems like the problem was that my credentials were not all generated correctly. So perhaps the apiserver cert was signed with the wrong CA cert? Not sure how that might've happened.
Anyway, deleting the credentials directory, then destroying the cluster and bringing it up again solved the problem for me. Luckily it's still an experimental cluster, so I could do that. Not sure if I could've fixed it without destroying the cluster.
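A sketch of that recovery flow using kube-aws subcommands from that era; the S3 URI is a placeholder, and the exact render invocation may differ between kube-aws versions:

    # Drop the mis-signed TLS assets so fresh ones get generated
    rm -rf credentials/
    # Regenerate a new CA and certificates (flag per the CoreOS guide)
    kube-aws render credentials --generate-ca
    # Tear the broken cluster down, then bring it up again
    kube-aws destroy
    kube-aws up --s3-uri s3://<your-bucket>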
QUESTION
I'm using kube-aws to run a Kubernetes cluster on AWS, and everything works as expected.
Now, I realize that cron jobs aren't turned on in the version I'm using (v1.7.10_coreos.0), while the documentation for Kubernetes only states the following:
For previous versions of cluster (< 1.8) you need to explicitly enable batch/v2alpha1 API by passing --runtime-config=batch/v2alpha1=true to the API server (see Turn on or off an API version for your cluster for more).
And the documentation that text links to only states this (it's the actual, full documentation):
Specific API versions can be turned on or off by passing --runtime-config=api/ flag while bringing up the API server. For example: to turn off v1 API, pass --runtime-config=api/v1=false. runtime-config also supports 2 special keys: api/all and api/legacy to control all and legacy APIs respectively. For example, for turning off all API versions except v1, pass --runtime-config=api/all=false,api/v1=true. For the purposes of these flags, legacy APIs are those APIs which have been explicitly deprecated (e.g. v1beta3).
I have been unsuccessful in finding information about how to change the configuration of a running cluster, and I, of course, don't want to try to re-run the command on api-server.
Note that kube-aws still uses hyperkube, and not kubeadm. Also, the /etc/kubernetes/manifests directory only contains the ssl directory.
The setting I want to apply is this: --runtime-config=batch/v2alpha1=true
What is the proper way, preferably using kubectl, to apply this setting and have the apiservers restarted?
Thanks.
ANSWER
Answered 2017-Dec-11 at 21:47
batch/v2alpha1=true is set by default in kube-aws. You can find it here.
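If you want to verify that on a running cluster, a generic check (not specific to kube-aws) is to list the API groups the apiserver actually serves:

    # batch/v2alpha1 should appear in the output if the flag is active
    kubectl api-versions | grep batch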
QUESTION
ANSWER
Answered 2017-Oct-29 at 09:37
I used kube-aws for two months (it did the job properly), but then I switched to kops because it was much simpler to use. That was six months ago, and I am still satisfied.
QUESTION
I'm wondering how people are deploying a production-caliber Kubernetes cluster in AWS and, more importantly, how they chose their approach.
The k8s documentation points towards kops for Debian, Ubuntu, CentOS, and RHEL or kube-aws for CoreOS/Container Linux. Among these choices it's not clear how to pick one over the others. CoreOS seems like the most compelling option since it's designed for container workloads.
But wait, there's more.
bootkube seems to be the next iteration of the CoreOS deployment technology and is on the roadmap for inclusion within kube-aws. Should I wait until kube-aws uses bootkube?
Heptio recently announced a Quickstart architecture for deploying k8s in AWS. This is the newest approach and so probably the least mature approach but it does seem to have gained traction from within AWS.
Lastly, there's kubeadm, and I'm not really sure where it fits into all of this.
There are probably more approaches that I'm missing too.
Given the number of options with overlapping intent it's very difficult to choose a path forward. I'm not interested in a proof-of-concept. I want to be able to deploy a secure, highly-available cluster for production use and be able to upgrade the cluster (host OS, etcd, and k8s system components) over time.
What did you choose and how did you decide?
ANSWER
Answered 2017-Mar-02 at 22:14
I'd say pick anything which fits your needs (see also Picking the right solution)...
Which could be:
- Speed of the cluster setup
- Integration in your existing toolchain
- e.g. kops integrates with Terraform, which might be a good fit for some people
- Experience within your team/company/...
- e.g. how comfortable are you with the related Linux distribution
- Required maturity of the tool itself
- some tools are very alpha; are you willing to play the role of an early adopter?
- Ability to upgrade between Kubernetes versions
- kubeadm has this on their agenda, some others prefer to throw away clusters instead of upgrading
- Required integration into external tools (monitoring, logging, auth, ...)
- Supported cloud providers
With your specific requirements I'd pick the Heptio or kubeadm approach.
- Heptio if you can live with the given constraints (e.g. predefined OS)
- kubeadm if you need more flexibility; everything done with kubeadm can be transferred to other cloud providers
Other options for AWS lower on my list:
- Kubernetes the hard way - using this might be the only true way to set up a production cluster, as this is the only way you can fully understand each moving part of the system. Lower on the list, because often the result from any of the tools might just be more than enough, even for production.
- kube-up.sh - is deprecated by the community, so I'd not use it for new projects
- kops - my team had some strange experiences with it which seemed due to our (custom) needs back then (existing VPC); that's why it's lower on my list - it would be #1 for an environment where Terraform is used too.
- bootkube - lower on my list because of its limitation to CoreOS
- Rancher - interesting toolchain, seems to be too much for a single cluster
Off-topic: if you don't have to run on AWS, I'd also always consider running on GCE for production workloads, as it is a well-managed platform rather than something you have to build yourself.
QUESTION
I have a k8s cluster deployed in AWS using kube-aws. When I deploy a service, a new ELB is created to expose the service to the internet. Can I use an ingress controller to replace the ELB, or is there any other way to expose services besides an ELB?
ANSWER
Answered 2017-Jan-31 at 14:41
First, replace type: LoadBalancer with type: ClusterIP in your service definition. Then you have to configure the Ingress and deploy a controller, like NGINX.
If you are looking for a full example, I have one here: nginx-ingress-controller.
The Ingress will expose your services using some of your workers' public IPs, usually two of them. Just check your Ingress with kubectl get ing -o wide and create the DNS records.
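A minimal sketch of the change described in that answer; the service name, ports, and hostname are placeholders, and the Ingress uses the extensions/v1beta1 API that was current for the Kubernetes versions discussed here:

    # Service switched from type: LoadBalancer to type: ClusterIP,
    # so Kubernetes no longer provisions an ELB for it
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app            # placeholder
    spec:
      type: ClusterIP         # was: LoadBalancer
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
    ---
    # Ingress routing a hostname to that service; the deployed
    # controller (e.g. NGINX) watches this and does the actual routing
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      rules:
        - host: app.example.com    # placeholder DNS name
          http:
            paths:
              - path: /
                backend:
                  serviceName: my-app
                  servicePort: 80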
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported