etcd-manager | manages an etcd cluster | Key Value Database library
kandi X-RAY | etcd-manager Summary
etcd-manager manages an etcd cluster, on a cloud or on bare-metal. It borrows from the ideas of the etcd-operator, but avoids the circular dependency of relying on kubernetes (which itself relies on etcd). etcd-manager performs a minimal amount of gossip-like discovery of peers for initial cluster membership, then "pivots to etcd" as soon as possible. etcd-manager also watches for peer membership changes to manage the correct number of members for the desired cluster quorum. Actual cluster membership changes to etcd continue to take place through raft. Communication with etcd-manager happens via the etcd-manager-ctl binary.
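Day-to-day interaction through etcd-manager-ctl centers on backup and restore. As a minimal sketch, assuming backups are kept in an S3 backup store (the path and backup name below are hypothetical), listing and restoring a backup of the "main" etcd cluster looks roughly like:

    # list available backups (backup-store path is an example)
    etcd-manager-ctl --backup-store=s3://my-state-store/my-cluster/backups/etcd/main list-backups

    # queue a restore of one backup; etcd-manager picks the command up on its next loop
    etcd-manager-ctl --backup-store=s3://my-state-store/my-cluster/backups/etcd/main restore-backup 2021-02-11T06:41:00Z-000001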
Community Discussions
Trending Discussions on etcd-manager
QUESTION
I am trying to create a very simple cluster on AWS with kOps, with one master and 2 worker nodes. But after creating it, kops validate cluster complains that the cluster is not healthy.
cluster created with:
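The exact command is truncated here; for illustration only, a minimal kOps invocation matching the description (cluster name, state store, and instance sizes are hypothetical) might be:

    kops create cluster \
      --name=mycluster.example.com \
      --state=s3://my-kops-state-store \
      --zones=us-east-1a \
      --master-size=t2.micro \
      --node-size=t2.micro \
      --node-count=2 \
      --yes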
...ANSWER
Answered 2021-Feb-11 at 06:41
I don't see anything particularly wrong with the command you are running. However, t2.micro instances are very small and may be too small for the cluster to function.
You can have a look at the kops-controller logs to see why it is not starting. Try kubectl logs kops-controller-xxxx
and kubectl describe pod kops-controller-xxx
QUESTION
After a cluster upgrade, one of my three masters can't connect back to the cluster. I have an HA cluster running in us-east-1a, us-east-1b, and us-east-1c; the master running in us-east-1a can't rejoin.
I tried to scale the master-us-east-1a instance group down to zero nodes and back to one node, but the EC2 machine starts with the same problem and still can't join the cluster; it seems to start from a backup or something.
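(For reference, scaling an instance group down and back up with kOps is typically done along these lines; only the group name is taken from the question:)

    # set minSize/maxSize to 0 in the editor, then apply
    kops edit instancegroup master-us-east-1a
    kops update cluster --yes

    # set minSize/maxSize back to 1 and apply again
    kops edit instancegroup master-us-east-1a
    kops update cluster --yes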
I tried to connect to the master to restart the services, maybe protokube or docker, but I couldn't solve the problem that way either.
Connecting to the master via SSH, I noticed that the flannel service is not running on this machine, and trying to run it manually via docker didn't work. It seems flannel is the network service that should be running but is not.
- Can I reset the master in us-east-1a and recreate it from scratch?
- Any ideas on getting the flannel service running on this master?
Thanks in advance.
...ANSWER
Answered 2020-Jan-14 at 14:50
The kubelet is trying to register the master node in us-east-1a with the API server endpoint https://127.0.0.1:443. This should be the API server endpoint of one of the other two masters. The kubelet uses the kubelet.conf file, located at /etc/kubernetes, to talk to the API server and register the node. Change the server field in kubelet.conf to point to one of the following:
- The Elastic IP or public IP of the master node in us-east-1b or us-east-1c, e.g. https://xx.xx.xx.xx:6443
- The private IP of the master node in us-east-1b or us-east-1c, e.g. https://xx.xx.xx.xx:6443
- The FQDN, if you have a load balancer in front of your master nodes running the Kubernetes API server.
After changing kubelet.conf, restart the kubelet.
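As a sketch, the change is to the server field of the kubeconfig in /etc/kubernetes/kubelet.conf (the IP below is a placeholder), followed by a kubelet restart:

    # /etc/kubernetes/kubelet.conf (kubeconfig excerpt)
    clusters:
    - cluster:
        server: https://xx.xx.xx.xx:6443   # was https://127.0.0.1:443

    # apply the change
    sudo systemctl restart kubelet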
Edit: Since you are using etcd-manager, can you try the "Kubernetes service unavailable / flannel issues" troubleshooting steps described here?
QUESTION
I have just terminated an AWS K8s node, and now K8s has recreated a new one and installed new pods. Everything seems good so far.
But when I do:
...ANSWER
Answered 2019-Oct-07 at 13:24
It looks like your nodes/master are running low on storage; I see only 1GB of ephemeral storage available.
You should free up some space on the node and master; that should get rid of the problem.
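For illustration, checking where the space went might look like this (node name and paths are examples, not from the question):

    # confirm capacity/allocatable ephemeral storage on the node
    kubectl describe node ip-10-0-0-1.ec2.internal | grep -A 6 Capacity

    # on the node itself, find what is filling the disk
    df -h /
    sudo du -sh /var/log /var/lib/docker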
QUESTION
I"m new to Kubernetes and AWS, treat me like a noob.
I've got Kubernetes running in AWS with the following names:
...ANSWER
Answered 2019-Jun-19 at 01:25
You have an nginx ingress controller running already. Is it working? If so, you should probably use that instead of a new load balancer.
1) Configure your domain so that it points to your ingress load balancer. If you are using Route53, you can set a wildcard A record so that *.mydomain.com goes to the load balancer.
2) Add the appropriate ingress section to your values.yaml: https://gitlab.doc.ic.ac.uk/help/install/kubernetes/gitlab_chart.md#ingress-routing
3) Use serviceType=ClusterIP.
If you can't or don't want to use that Ingress Controller, then yes, serviceType=LoadBalancer is appropriate. It will create an AWS ELB for you. You'll need to add an A record for your domain pointing to that ELB.
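As a rough sketch of steps 1-3 against the GitLab Helm chart (domain and values are illustrative, not from the question):

    # values.yaml (excerpt)
    global:
      hosts:
        domain: mydomain.com        # *.mydomain.com A record points at the ingress ELB
      ingress:
        class: nginx                # route through the existing nginx ingress controller
    nginx-ingress:
      enabled: false                # skip the chart's bundled controller

With this setup the services stay ClusterIP, since external traffic enters through the ingress controller's existing load balancer.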
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported