kube-cluster | Deploy Kubernetes clusters to AWS using kubeadm | Infrastructure Automation library
kandi X-RAY | kube-cluster Summary
Deploy Kubernetes clusters to AWS using kubeadm, terraform and packer
kube-cluster Key Features
kube-cluster Examples and Code Snippets
Community Discussions
Trending Discussions on kube-cluster
QUESTION
I am running on-prem Kubernetes. I have a release that is running with 3 pods. At one time (I assume) I deployed the Helm chart with 3 replicas, but I have since deployed an update that has 2 replicas.
When I run helm get manifest my-release-name -n my-namespace, it shows that the deployment YAML has replicas set to 2. But when I run kubectl get pods -n my-namespace, it still shows 3 pods.
What is needed (from a Helm point of view) to get the number of replicas down to the limit I set?
Update
I noticed this when I was debugging a CrashLoopBackOff for the release.
This is an example of what kubectl describe pod looks like for one of the three pods.
ANSWER
Answered 2021-May-11 at 19:34
"What is needed (from a helm point of view) to get the number of replicas down to the limit I set?"
Your pods need to be in a healthy state before the rollout converges on your desired number of replicas.
First, you deployed 3 replicas. This is managed by a ReplicaSet.
Then you deployed a new revision with 2 replicas. A rolling deployment is performed: pods for your new revision are created first, and replicas of your old ReplicaSet are only scaled down once healthy instances of the new revision are running. Because your new pods are stuck in CrashLoopBackOff, they never become healthy, so the old pods are never removed.
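A quick way to watch this converge, sketched with the namespace from the question and a hypothetical deployment name:
# "my-deployment" is a placeholder; take the real name from the Helm manifest
kubectl rollout status deployment/my-deployment -n my-namespace
# the old ReplicaSet only scales down to 0 once the new revision's pods are healthy
kubectl get replicasets -n my-namespace -o wide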
QUESTION
I'm trying to set up a Kubernetes cluster in VirtualBox. I followed https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant and so far everything seems to work.
But I cannot get the dashboard application to work. I followed the guide from https://github.com/kubernetes/dashboard and https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md but I cannot access the Web-UI from my host machine.
My whole setup can be found here: https://github.com/sebastian9486/v-kube-cluster/tree/feature/deploy-dashboard
My Vagrantfile is in src/main/kube-cluster and my Ansible playbooks are in src/main/kube-cluster/kubernetes-setup. These parts work so far.
In src/main/kube-cluster/kubernetes-setup/deploy/ is the dashboard.sh that deploys the dashboard application. There may be a more elegant way, but for now I'm just trying to get it running.
Installation looks okay. Output from my dashboard.sh
...
ANSWER
Answered 2021-Mar-02 at 13:14
The problem was resolved in the comments section, but for better visibility I decided to provide an answer.
After deploying the Kubernetes Dashboard, to access it from a local workstation we can create a secure channel to the Kubernetes cluster (a proxy between our machine and the Kubernetes API server) by running:
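The command itself was stripped above; assuming the standard kubernetes-dashboard namespace from the linked guide, it is typically:
kubectl proxy
# the dashboard is then reachable on the same machine at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/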
QUESTION
I've set up a Kubernetes cluster using Vagrant and VirtualBox. So far everything seems to work fine.
Next step was to deploy the dashboard application using the guide from https://github.com/kubernetes/dashboard (and the linked "Create An Authentication Token (RBAC)" section).
Now I run into an issue. Since I set up all nodes as virtual machines, my application gets deployed onto a VM. But how can I access these services from my host machine via browser?
Running kubectl proxy & inside the VM does not do the trick for access from my host.
My whole setup is available on Github: https://github.com/sebastian9486/v-kube-cluster
- src/main/ -> Start / Stop Scripts
- src/main/kube-cluster/ -> Vagrantfile, Ansible playbooks and script to deploy dashboard application
Output from my dashboard.sh (the IP addresses are from my Vagrant boxes)
...
ANSWER
Answered 2021-Feb-23 at 18:32
Try exposing the proxy this way:
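The snippet was stripped above; a common way to expose the proxy beyond the VM's localhost looks like the sketch below (note that --accept-hosts='.*' disables host checking, so only use it on a private network):
kubectl proxy --address='0.0.0.0' --accept-hosts='.*' &
# then browse from the host to:
# http://<vm-ip>:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/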
QUESTION
I've been struggling with this all weekend, and I'm now on my knees hoping one of you geniuses can solve my problem.
In short: I have an ingress-nginx controller (image: nginx/nginx-ingress:1.5.8) with which I'm trying to achieve self-signed mutual authentication.
The HTTPS aspect works fine, but the problem I'm having (I think) is that the ingress controller reroutes the request with the default cert and the ingress validates against the default CA (because it can't find my CA).
So.. Help!
Steps I've gone through on this cluster-f*** of a journey (pun intended):
I've tested it in a local Minikube cluster and it all works like a charm. When I exec -it into the ingress controller pod and cat the nginx.conf for both my clusters (Minikube and Azure), I found large differences; so I just found out that I'm comparing apples and pears in terms of the Minikube vs. Azure k8s nginx ingresses.
This is the ingress setup that worked like a charm for my minikube cluster (the ingress I'm using is more or less a duplicate of the file you'll find in the link): https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/
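For reference, the linked example essentially boils down to a CA secret plus a handful of annotations on the Ingress; applied imperatively it would look roughly like this (secret and ingress names are hypothetical, and this assumes the community kubernetes/ingress-nginx controller from the link, not the NGINX Inc one):
# store the client CA certificate in a secret the controller can read
kubectl create secret generic ca-secret --from-file=ca.crt=./ca.crt -n default
# enable client-certificate verification for this one Ingress
kubectl annotate ingress my-ingress -n default \
  nginx.ingress.kubernetes.io/auth-tls-verify-client=on \
  nginx.ingress.kubernetes.io/auth-tls-secret=default/ca-secret \
  nginx.ingress.kubernetes.io/auth-tls-verify-depth="1"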
In addition I found this, which describes at length the problem I'm having: https://success.docker.com/article/how-to-configure-a-default-tls-certificate-for-the-kubernetes-nginx-ingress-controller From the link above the solution is simple: nuke the ingress from orbit and create a new one. Well... here's the thing: this is a production cluster, and my bosses would be anything but pleased if I did that.
Another discovery I made whilst "exec -it bash"-roaming around inside the Azure ingress controller is that there is no public root cert folder (/etc/ssl/) to be found. I don't know why, but thought I'd mention it.
I've also discovered the parameter --default-ssl-certificate=default/foo-tls, but this is only a default. As there will be other client-auth needs later, I have to be able to specify CA certs dynamically per ingress.
I'll paste the nginx.conf that I think is the problem below. Hoping to hear back from some of you, because at this point I'm thoroughly lost. Hit me up if additional information is needed.
...
ANSWER
Answered 2020-May-01 at 21:39
So the problem came down to the ingress controller being old and outdated. I didn't have the original Helm chart it was deployed with, so I was naturally worried about rollback options. Anyhoo: took a leap of faith in the middle of the night local time and nuked the namespace, recreated the namespace, then helm install stable/nginx-ingress.
There was minimal downtime (1 min at most), but be sure to lock down the public IP attached to the load balancer before going all third world war on your services.
I had to add an argument to the standard Azure helm install command to imperatively set the public IP for the resource; pasting it below in case any poor soul finds himself in the unfortunate situation of a new helm CLI and lost charts.
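The exact command isn't preserved above; with the stable/nginx-ingress chart it usually takes the shape below (release name, namespace, and IP are placeholders, and the resource-group annotation is only needed when the public IP lives outside the cluster's node resource group):
helm install nginx-ingress stable/nginx-ingress \
  --namespace ingress-basic \
  --set controller.service.loadBalancerIP="<existing-public-ip>" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"="<ip-resource-group>"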
That's it; keep your services up to date and make sure to save your charts!
QUESTION
I have a simple Docker image based on nginx:alpine. On my local Docker daemon I can start it without any problems.
But when I deploy it via k8s, the container fails to start with the following error:
2020/03/04 08:01:38 [emerg] 1#1: open() "/var/run/nginx.pid" failed (13: Permission denied)
nginx: [emerg] open() "/var/run/nginx.pid" failed (13: Permission denied)
Does anybody have an idea what happened? I bet there is something wrong with the k8s cluster.
And my dockerfile looks like this:
...
ANSWER
Answered 2020-Mar-16 at 17:04
I'm pretty sure that the bug comes from kaniko.
See this https://github.com/GoogleContainerTools/kaniko/issues/550
and https://github.com/GoogleContainerTools/kaniko/issues/647
So we can't build with our pipelines and build images locally until we update our kaniko.
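Separately from the kaniko fix, a common way to sidestep pid-file permission errors when the cluster forbids running as root is the official unprivileged nginx variant, which listens on port 8080 and keeps its pid under a non-root-writable path; a quick local smoke test (this image is the public nginxinc variant, not the asker's image):
docker run --rm -p 8080:8080 nginxinc/nginx-unprivileged:alpine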
QUESTION
I am trying to use Kubernetes as the cluster manager for Spark. I also want to ship the container logs to Splunk. I have a monitoring stack deployed (fluent-bit, Prometheus etc.) in the same namespace, and the way it works is that if your pod has a certain environment variable, it will start reading the logs and push them to Splunk.
The thing I am not able to find out is how to set such an environment variable and populate it.
ANSWER
Answered 2020-Feb-11 at 20:02
To configure additional Spark driver pod environment variables you can pass additional --conf spark.kubernetes.driverEnv.EnvironmentVariableName=EnvironmentVariableValue (please refer to the docs for more details).
To configure additional Spark executor pod environment variables you can pass additional --conf spark.executorEnv.EnvironmentVariableName=EnvironmentVariableValue (please refer to the docs for more details).
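Put together, a spark-submit invocation setting the same variable on both the driver and the executors might look like this (the master URL, image, class, and the Splunk-related variable name are placeholders, not from the original answer):
spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --name log-shipping-demo \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --conf spark.kubernetes.driverEnv.SPLUNK_LOGGING_ENABLED=true \
  --conf spark.executorEnv.SPLUNK_LOGGING_ENABLED=true \
  --class org.example.Main \
  local:///opt/spark/app/app.jar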
Hope it helps.
QUESTION
I found that apparently in Ubuntu 18 the whole DNS setup is very confusing. I'm connected through a Pritunl VPN to my kube-cluster and I'm trying to use the kube-dns server. So I first tried https://github.com/jonathanio/update-systemd-resolved to update my DNS settings with the DNS server pushed by the VPN, but it seems that currently something is broken (https://github.com/jonathanio/update-systemd-resolved/issues/64).
As I'm OK with hard-coding the DNS IP somewhere, I tried putting the IP in some places: installing resolvconf and putting it in /etc/resolvconf/resolv.conf.d/head, putting it in /etc/systemd/resolved.conf, and of course also trying to put it directly into /etc/resolv.conf, naive person that I am. After restarting some things a couple of times, I reached an even more confusing state:
ANSWER
Answered 2019-Jun-13 at 12:40
To answer my own question: I dug a bit deeper and learned a bit about avahi, nscd, systemd-resolve and the magic of nsswitch. So apparently the problem was this line in my /etc/nsswitch.conf:
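The offending line itself is not preserved above, but on Ubuntu 18.04 the usual culprit is the mDNS entry in the hosts: line — Kubernetes service names live under cluster.local, so mdns4_minimal intercepts them and returns NOTFOUND before the query ever reaches kube-dns. Roughly (the exact entries below are illustrative):
grep '^hosts:' /etc/nsswitch.conf
# typical problematic entry:
#   hosts: files mdns4_minimal [NOTFOUND=return] dns
# letting systemd-resolved handle lookups instead usually fixes it:
#   hosts: files resolve [!UNAVAIL=return] dns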
QUESTION
I think this is just a quick sanity check; maybe my eyes are getting confused. I'm breaking a monolithic Terraform file into modules.
My main.tf calls just two modules: gke for Google Kubernetes Engine, and storage, which creates a persistent volume on the cluster created previously. Module gke has an outputs.tf which outputs the following:
ANSWER
Answered 2018-Nov-28 at 15:16
In your /root-folder/variables.tf, delete the following entries:
QUESTION
I'm trying to access a locally hosted service from within a Minikube cluster running on Hyper-V. In this question the user made use of the VirtualBox-provided IP (10.0.2.2) to loop back to the host IP.
Is there an equivalent IP for Hyper-V?
I'm running Hyper-V on Windows 10 Pro with Minikube v0.28.2.
...
ANSWER
Answered 2018-Sep-21 at 11:34
You can use the IP of the Hyper-V external switch created for the minikube connection.
- Open your Hyper-V Manager and check the name of the external virtual switch. In this guide it's called "Primary Virtual Switch".
- Check the IP address of this Hyper-V network interface on your Windows 10 machine.
- Try to use this IP address in your minikube project for communication with a service residing on the local machine.
I've used a busybox Pod and tested it on my machine:
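The test itself was stripped; roughly, it amounts to finding the switch's address on the host and calling the local service from a throwaway pod (the IP and port below are placeholders):
# on the Windows host: note the IPv4 address of the external switch interface
ipconfig
# from inside the cluster: spin up a one-off busybox pod and call the local service
kubectl run busybox --image=busybox --rm -it --restart=Never -- \
  wget -qO- http://<hyper-v-switch-ip>:<port>/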
QUESTION
I have installed Minikube and am following the "Hello minikube" tutorial: https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/#create-a-minikube-cluster
I have started minikube using the Hyperv driver:
minikube start --vm-driver="hyperv" --hyperv-virtual-switch="New Virtual Switch" --alsologtostderr
When I try to build an image using the Minikube Docker daemon, I get the following error:
Step 1/4 : FROM node:6.9.2
Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 172.24.209.161:53: server misbehaving
What is going wrong here, and how can I fix it?
Here is some info about my environment:
minikube version: v0.23.0
OS: Windows 10 17_09
...
ANSWER
Answered 2017-Nov-08 at 09:34
Try to clear all Docker data. This did the job for me ;)
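In the minikube context, "clearing all Docker data" usually means pruning the Docker state inside the minikube VM, or recreating the VM entirely; a sketch (the switch name is the one from the question):
# prune cached images and containers inside the minikube VM
minikube ssh "docker system prune -a -f"
# or, more drastically, recreate the VM and retry the build
minikube delete
minikube start --vm-driver="hyperv" --hyperv-virtual-switch="New Virtual Switch"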
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install kube-cluster
Support