discovery.etcd.io | etcd discovery service | Key Value Database library
kandi X-RAY | discovery.etcd.io Summary
discovery.etcd.io Examples and Code Snippets
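No snippets survived the page scrape, so as a minimal sketch of the service's documented usage: a fresh discovery URL for a cluster of a given size can be requested with curl (requires network access; the size of 3 is an example value):

```shell
# Ask discovery.etcd.io for a fresh discovery URL sized for a 3-member cluster.
# The URL it returns is then handed to every etcd member, e.g. via etcd's
# --discovery flag or the "discovery:" key in a CoreOS cloud-config.
curl -s 'https://discovery.etcd.io/new?size=3'
```

Each member that registers against the returned URL counts toward the requested size; the questions below all start from such a token.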
Community Discussions
Trending Discussions on discovery.etcd.io
QUESTION
My ultimate goal is to run Kubernetes on a 3-node CoreOS cluster (if anyone has better suggestions, I'm all ears: at this stage, I'm considering CoreOS a complete waste of my time).
I've started following the CoreOS guide to run a Vagrant cluster (I even have the CoreOS in Action book, and that's not much help either). I have obtained a fresh discovery token and modified the user-data and config.rb files as described there:
ANSWER
Answered 2018-Apr-04 at 11:27
Unfortunately, the documentation you used is outdated. etcd version 3 is now used as the Kubernetes data store, and under the VirtualBox provider for Vagrant (the default) the machine is provisioned with Ignition:
When using the VirtualBox provider for Vagrant (the default), Ignition is used to provision the machine.
1. Install the vagrant-ignition plugin (in case it isn't installed automatically when using the default Vagrantfile from the coreos-vagrant repo):
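A sketch of that step, assuming Vagrant itself is already installed (the plugin name comes from the answer above; the list command is just a sanity check):

```shell
# Install the Ignition provisioning plugin into the local Vagrant installation,
# then confirm it appears in the installed-plugin list.
vagrant plugin install vagrant-ignition
vagrant plugin list
```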
QUESTION
I have 2 clusters running in Azure for 2 different Availability Zones and I would like to cluster the etcd masters following https://kubernetes.io/docs/admin/high-availability/#replicated-api-servers .
I created the discovery token for 3 masters. When I try to init etcd container it fails:
...ANSWER
Answered 2017-Nov-03 at 17:06
Highly available setups for Kubernetes masters assume you are running multiple masters (usually 3, so you can have a voting quorum) within the same cluster. Your current setup consists of 2 separate 1-master clusters.
When you have multiple clusters, you'll want to look at Cluster Federation, although I'd wager this is not what you want, as federated clusters generally have 3-master setups each as well.
If you can't afford to destroy your existing clusters and boot them up in HA mode from the start, I suggest this excellent guide for migrating from a single-master to a multi-master setup, and I'd also consider using kops for this type of operation.
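For instance, kops can create a cluster with three masters from the outset; a hedged sketch (the cluster name, zones, and state-store bucket are placeholders, and flag names can differ between kops versions):

```shell
# Create an HA cluster definition with three masters spread across three zones.
# All names below are hypothetical placeholders, not taken from the question.
kops create cluster \
  --name=cluster.example.com \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --master-count=3 \
  --state=s3://example-kops-state \
  --yes
```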
QUESTION
I am working on setting up a new Kubernetes cluster using the CoreOS documentation, with the CoreOS v1.6.1 image, following the documentation linked as CoreOS Master setup. Looking at the journalctl logs, I see that the kube-apiserver seems to exit and restart.
The following journalctl entries show the kube-apiserver back-off:
checking backoff for container "kube-apiserver" in pod "kube-apiserver-10.138.192.31_kube-system(16c7e04edcd7e775efadd4bdcb1940c4)"
Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.138.192.31_kube-system(16c7e04edcd7e775efadd4bdcb1940c4)
Error syncing pod 16c7e04edcd7e775efadd4bdcb1940c4 ("kube-apiserver-10.138.192.31_kube-system(16c7e04edcd7e775efadd4bdcb1940c4)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.138.192.31_kube-system(16c7e04edcd7e775efadd4bdcb1940c4)"
I am wondering if it's because I need to start the new etcd3 version instead of etcd2? Any hints or suggestions are appreciated.
The following is my cloud-config:
...
ANSWER
Answered 2017-May-16 at 09:40
You are using etcd2, so you need to pass the flag '--storage-backend=etcd2' to your kube-apiserver in its manifest.
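Applied to a static-pod manifest, that means adding the flag to the apiserver's command line; a hedged excerpt (only --storage-backend=etcd2 comes from the answer; the surrounding flags and addresses are illustrative, not taken from the asker's cloud-config):

```shell
# Excerpt of the kube-apiserver invocation inside the static pod manifest.
# The storage-backend flag is the fix; the other flags are placeholders.
/hyperkube apiserver \
  --storage-backend=etcd2 \
  --etcd-servers=http://127.0.0.1:2379 \
  --bind-address=0.0.0.0
```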
QUESTION
I have Container Linux by CoreOS alpha (1325.1.0) installed on a PC at home.
I played with Kubernetes for a couple of months, but now, after reinstalling Container Linux and trying to install Kubernetes using my fork at https://github.com/kfirufk/coreos-kubernetes, I fail to install it properly.
I use the hyperkube image v1.6.0-beta.0_coreos.0.
The problem is that hyperkube doesn't seem to start any of the manifests from /etc/kubernetes/manifests. I configured kubelet to run with rkt.
When I run journalctl -xef -u kubelet after restarting kubelet, I get the following output:
ANSWER
Answered 2017-Mar-04 at 15:05
Thanks to @AntoineCotten, the problem was easily resolved.
First, I downgraded hyperkube from v1.6.0-beta.0_coreos.0 to v1.5.3_coreos.0. Then I noticed an error in the kubelet log that made me realize I had a major typo in /opt/bin/host-rkt.
I had exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "\$@" instead of exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@".
I had escaped the $ when pasting the command into the file, so the script stopped forwarding its arguments. So: not using 1.6.0-beta0 for now, that's OK, and the script is fixed. Now everything works again. Thanks!
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported