kandi X-RAY | google_containers Summary
google_containers
Support
Quality
Security
License
Reuse
google_containers Key Features
google_containers Examples and Code Snippets
Community Discussions
Trending Discussions on google_containers
QUESTION
- I initialized kubeadm with the following command:
kubeadm init \
  --apiserver-advertise-address=192.168.64.104 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=172.96.0.0/16 \
  --pod-network-cidr=172.244.0.0/16
- I have one master and two nodes. I tested the following command on all three machines:
curl -k https://172.96.0.1:443/version
command result:
...
ANSWER
Answered 2021-Mar-15 at 05:23
I have solved my problem.
When I reviewed my install process, I realized the problem was probably associated with the step where I applied the following command:
kubectl apply -f kube-flannel.yml
I found the following config in my local kube-flannel.yml:
net-conf.json: | { "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }
- I think this network range has to match the pod-network-cidr parameter of the kubeadm init command, so I reset the cluster and changed my kubeadm init command as follows:
kubeadm init \
  --apiserver-advertise-address=192.168.64.104 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Finally, it works.
- Although I solved the problem, I still haven't figured out exactly what was happening in this situation.
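A quick way to confirm the two ranges now line up is to compare the pod CIDR that kubeadm recorded with the Network value in flannel's ConfigMap. This is a minimal sketch; the ConfigMap name kube-flannel-cfg and the kube-system namespace are the defaults from the upstream kube-flannel.yml and may differ in your setup.
# Pod CIDR recorded by kubeadm in its ClusterConfiguration
kubectl -n kube-system get cm kubeadm-config -o yaml | grep podSubnet
# Network that flannel will actually use (default ConfigMap from kube-flannel.yml)
kubectl -n kube-system get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'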
QUESTION
We have a service which queries database records periodically. For HA we want to run replicas. But with replicas, all of them query the database records.
The following Deployment manifest is used to deploy. In this configuration only one pod receives traffic, but all of them query the database and perform actions.
ANSWER
Answered 2020-Dec-19 at 12:24
Different levels of availability can be achieved in Kubernetes; it all depends on your requirements.
Your use case seems to be that only one replica should have an active connection to the database at a time.
Single replica
Even if you use a single replica in a Kubernetes Deployment or StatefulSet, it is regularly probed using your declared LivenessProbe and ReadinessProbe.
If your app stops responding to its LivenessProbe, the container is restarted immediately.
Multiple replicas using leader election
Since only one replica at a time should have an active connection to your database, a leader election solution is viable.
The passive replicas, which do not currently hold the lock, should regularly try to acquire it, so that one of them becomes active if the old active pod dies. How this is done depends on the implementation and configuration.
If only the active Pod in a multi-replica solution should query the database, the app must first check whether it holds the lock (i.e. is the active instance).
Conclusion
There is not much difference between a single-replica Deployment and a multi-replica Deployment using leader election. There might be small differences in how long a failover takes.
For a single replica solution, you might consider using a StatefulSet instead of a Deployment due to different behavior when a node becomes unreachable.
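For a concrete reference point, the Kubernetes control-plane components themselves use leader election: on recent versions they take the lock via a coordination.k8s.io Lease object, whose holderIdentity field shows which instance is currently active. A quick way to see this in practice, assuming a cluster where these components use the default lease-based locking:
# Inspect the leader-election locks held by control-plane components.
# holderIdentity identifies the currently active instance.
kubectl -n kube-system get lease kube-controller-manager -o yaml
kubectl -n kube-system get lease kube-scheduler -o yaml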
QUESTION
I'm working with microk8s using Kubernetes 1.19. The provided ingress.yaml does not work. Given my troubleshooting below, it seems like nginx cannot connect to the default-http-backend. Microk8s was installed on Ubuntu 20.04 using snap. I know that an ingress addon exists, but I would nonetheless like it to work with this setup.
microk8s kubectl get pods --all-namespaces
...
ANSWER
Answered 2020-Oct-20 at 06:29
As mentioned in the logs
QUESTION
I have the cluster setup below in AKS
...
ANSWER
Answered 2020-Oct-05 at 08:40
As posted by user @David Maze:
What's the exact URL you're trying to connect to? What error do you get? (On the load-balancer-autoscaler service, the targetPort needs to match the name or number of a ports: in the pod, or you could just change the hpa-example service to type: LoadBalancer.)
I reproduced your scenario and found an issue in your configuration that could prevent you from connecting to this Deployment.
From the perspective of the Deployment and the Service of type NodePort, everything seems to work okay.
When it comes to the Service of type LoadBalancer, on the other hand:
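For illustration, here is a minimal, hedged sketch of a LoadBalancer Service for the hpa-example workload named in the quoted comment; the labels and port numbers are assumptions, and the key point is that targetPort must match a containerPort (or named port) in the pod spec:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hpa-example
spec:
  type: LoadBalancer
  selector:
    app: hpa-example        # must match the pod labels of the Deployment (assumed)
  ports:
  - port: 80                # port exposed by the load balancer
    targetPort: 8080        # must match a containerPort or named port in the pod (assumed)
EOF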
QUESTION
I have the following JSON output.
...
ANSWER
Answered 2020-Sep-04 at 17:17
Assuming the minor syntactic errors have been fixed in the shown sample, the following produces the result you're expecting:
QUESTION
I am using Calico as my Kubernetes CNI plugin, but when I ping a service from a Kubernetes pod, it fails. First I find the service IP:
...
ANSWER
Answered 2020-Jul-11 at 18:47
This is a very common issue, and in my case it required a full migration of CIDR ranges.
Most probably, this issue is caused by an overlap between the pod CIDR (the IP pool used to assign IPs to services and pods) and the CIDR of your network.
In this case, the route tables of each node (VM) will ensure that:
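A quick way to check for such an overlap is to compare Calico's IP pool with the addresses of your nodes. This is a hedged sketch; default-ipv4-ippool is only Calico's default pool name and may differ in your installation:
# Calico's pod IP pool (default pool name assumed)
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o jsonpath='{.spec.cidr}'
# Node addresses, to spot whether they fall inside that range
kubectl get nodes -o wide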
QUESTION
I've accidentally drained/uncordoned all nodes in Kubernetes (even the master) and now I'm trying to bring it back by connecting to etcd and manually changing some keys there. I successfully got a shell in the etcd container:
...
ANSWER
Answered 2020-Jun-24 at 16:48
This context deadline exceeded error generally happens because of:
- Using the wrong certificates. You could be using peer certificates instead of client certificates. Check the Kubernetes API Server parameters, which will tell you where the client certificates are located, because the Kubernetes API Server is a client of etcd. Then you can use those same certificates in the etcdctl command from the node.
- The etcd cluster is not operational anymore because peer members are down.
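As a hedged sketch, a health check with the client certificates would look roughly like the following; the paths are the usual kubeadm defaults, so confirm the real ones against the --etcd-cafile/--etcd-certfile/--etcd-keyfile flags in /etc/kubernetes/manifests/kube-apiserver.yaml on your node:
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key=/etc/kubernetes/pki/apiserver-etcd-client.key \
  endpoint health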
QUESTION
I accidentally drained all nodes in Kubernetes (even master). How can I bring my Kubernetes back? kubectl is not working anymore:
...
ANSWER
Answered 2020-Jun-24 at 01:28
If you have production or 'live' workloads, the safest approach is to provision a new cluster and switch the workloads over gradually.
Kubernetes keeps its state in etcd so you could potentially connect to etcd and clear the 'drained' state but you will probably have to look at the source code and see where that happens and where the specific key/values are stored in etcd.
The logs that you shared are basically showing that the kube-apiserver cannot start so it's likely that it's trying to connect to etcd/startup and etcd is telling it: "you cannot start on this node because it has been drained".
The typical startup sequence for the masters is something like this:
- etcd
- kube-apiserver
- kube-controller-manager
- kube-scheduler
You can also follow any guide to connect to etcd and see if you can troubleshoot any further. For example, this one. Then you could examine/delete some of the node keys at your own risk:
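Node objects are stored under the /registry/minions prefix in etcd. Purely as a hedged illustration of that "examine at your own risk" step, listing those keys would look something like the following (kubeadm default certificate paths assumed; check your API server flags for the real ones):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key=/etc/kubernetes/pki/apiserver-etcd-client.key \
  get /registry/minions --prefix --keys-only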
QUESTION
- My macOS version: 10.15.3
- Minikube Version: 1.9.2
I start minikube with the following command, without any extra configuration.
...
ANSWER
Answered 2020-May-03 at 14:15
Please check the Kubernetes version you launched with Minikube.
Spark v2.4.5 uses fabric8 Kubernetes client v4.6.1, which is compatible with the Kubernetes API up to 1.15.2 (refer to the linked answer).
You can launch a specific Kubernetes API version with Minikube by adding the --kubernetes-version flag to the minikube start command (refer to the docs).
The issue might also be caused by the OkHttp library bug described in the comments of this question.
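As a small illustration (the exact version string is an example; pick any release at or below 1.15.2 that your minikube supports):
# Start minikube with a Kubernetes version the fabric8 client can talk to
minikube start --kubernetes-version=v1.15.2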
QUESTION
I have several Docker images that I want to use with minikube. I don't want to first have to upload and then download the same image instead of just using the local image directly. How do I do this?
Stuff I tried:
1. I tried running these commands (separately, deleting the instances of minikube both times and starting fresh)
ANSWER
Answered 2019-Mar-15 at 09:31
This answer isn't limited to minikube!
Use a local registry:
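The steps below are a hedged sketch of that approach; the registry port and image names are placeholders, and whether the cluster can reach the registry at this address (or needs it marked as insecure) depends on your minikube driver and network setup:
# Run a throwaway local registry on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# Tag and push a local image into it (image names are examples)
docker tag my-image:latest localhost:5000/my-image:latest
docker push localhost:5000/my-image:latest
# Reference localhost:5000/my-image:latest in your pod spec; if the nodes cannot
# pull from that address, start minikube with --insecure-registry pointing at the
# registry as the nodes see it.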
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install google_containers
Support