cloud-provider-vsphere | Kubernetes Cloud Provider for vSphere
kandi X-RAY | cloud-provider-vsphere Summary
Kubernetes Cloud Provider for vSphere
Community Discussions
Trending Discussions on cloud-provider-vsphere
QUESTION
I have a network issue on my cluster. At first I thought it was a routing problem, but then I discovered that outgoing packets from the cluster may not be getting rewritten with the node IP when they leave the node.
Background: I have two clusters. I set up the first one (months ago) manually using this guide and it worked great. The second one I built multiple times as I created and debugged Ansible scripts to automate how I created the first cluster.
Cluster2 is the one with the network issue: I can reach pods on other nodes, but I can't reach anything on my regular network. I ran tcpdump on the physical interface of node0 in cluster2 while pinging from a busybox pod, and the 172.16.0.x internal pod IP is visible at that interface as the source IP, which my network outside the node has no idea what to do with. On cluster1 this same test shows the node IP in place of the pod IP, which is how I assume it should work.
My question is: how can I troubleshoot this? Any ideas would be great, as I have been at this for several days now. Even the obvious is welcome, since I can no longer see the forest for the trees; i.e. both clusters look the same everywhere I know how to check :)
Caveat to "my clusters are the same": cluster1 is running kubectl 1.16, cluster2 is running 1.18.
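If it helps anyone reproduce the check described above, here is a minimal sketch of the test (the interface name eth0 and the ping target are assumptions; substitute your own):

    # On the node, watch ICMP traffic leaving the physical interface (eth0 is an assumption)
    sudo tcpdump -ni eth0 icmp

    # In another terminal, ping a host on the regular network from a throwaway busybox pod
    kubectl run pingtest --image=busybox --rm -it --restart=Never -- ping -c 3 192.168.1.1

    # Working cluster: the source IP in the tcpdump output is the node IP (SNAT applied)
    # Broken cluster: the source IP is the pod IP (e.g. 172.16.0.x)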
----edit after @Matt dropped some kube-proxy knowledge on me----
Did not know that kube-proxy rules can just be read with the iptables command! Awesome!
I think my problem is those 10.x addresses in the broken cluster. I don't even know where those came from, as they are not in any of my Ansible config scripts or kubeadm init files; I use all 172.x subnets in my configs.
I do pull some configs straight from upstream (flannel and the CSI/CPI stuff). I'll pull those down and inspect them to see if the 10.x subnets are in there. Hopefully it's in the flannel defaults or something and I can just change that YAML file!
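For reference, a quick sketch of how to read those rules (chain contents vary with the CNI and kube-proxy mode):

    # Dump the NAT POSTROUTING chain where the flannel/kube-proxy MASQUERADE rules live
    sudo iptables -t nat -L POSTROUTING -n -v

    # Or grep the full ruleset for the subnets in question
    sudo iptables-save -t nat | grep -E '10\.|172\.16'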
cluster1 working:
...ANSWER
Answered 2020-Apr-03 at 19:14
Boom! @Matt's advice for the win.
Using iptables to verify the NAT rules that flannel was applying did the trick. I was able to find the 10.244 subnet in the flannel config that was referenced in the guide I was using.
I had two options: 1. download and alter the flannel YAML before deploying the CNI, or 2. make my kubeadm init subnet declaration match what flannel has.
I went with option 2 because I don't want to alter the flannel config every time; I just want to pull down their latest file and run with it. This worked quite nicely to resolve my issue.
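For anyone landing here later, a minimal sketch of option 2, assuming flannel's stock manifest with its default 10.244.0.0/16 network (the manifest URL is the one guides of the time pointed at; check the flannel repo for the current location):

    # Initialize the cluster with a pod CIDR that matches flannel's default
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Then apply the stock flannel manifest unmodified
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml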
QUESTION
I am installing k8s and the vSphere CPI/CSI following the instructions located here
My setup: 2x CentOS 7.7 vSphere VMs (50 GB disk / 16 GB RAM), 1 master and 1 node in the k8s cluster.
I made it to the part where I create the StorageClass (near the end) when I hit exactly this GitHub issue. The OP of the linked issue just started from scratch and their issue went away, so the report was closed. This has not been the case for me, as I've redeployed my k8s cluster from scratch a bunch of times now and always hit this wall. Below is the error if you don't want to check the linked GitHub issue.
Anyone have ideas on what I can try to get past this? I've checked my disk and RAM and there's plenty of both.
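In case it helps, a sketch of where to look first (the kube-system namespace and container name are assumptions; they vary by CSI driver version):

    # Check the controller pod status and restart count
    kubectl get pod vsphere-csi-controller-0 -n kube-system

    # Pull logs from the previous (crashed) container instance to see the SIGSEGV
    kubectl logs vsphere-csi-controller-0 -n kube-system -c vsphere-csi-controller --previous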
...ANSWER
Answered 2020-Mar-31 at 13:56
OK, it turns out this SIGSEGV was a bug or something, caused by a network timeout, making this error kind of a red herring.
Details: my vsphere-csi-controller-0 pod was (and actually still is) unable to reach the vSphere server, which caused the container in the pod to time out and trigger this SIGSEGV fault. The CSI contributors updated some libraries and the fault is now gone, but the timeout remains. The timeout appears to be my real problem and is not related to the CSI, but that's a new question :)
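A quick way to confirm the timeout independently of the CSI driver, with vcenter.example.com standing in for the real vCenter host:

    # From the node running the controller pod, test reachability of vCenter on 443
    nc -vz -w 5 vcenter.example.com 443

    # And check that DNS inside the cluster resolves the same host
    kubectl run dnstest --image=busybox --rm -it --restart=Never -- nslookup vcenter.example.com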
If you want the details of what was fixed in the CSI, check the GitHub link in the question.
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install cloud-provider-vsphere
Get started with the Cloud Controller Manager for vSphere using Helm with this Helm quickstart.
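A minimal sketch of that flow, assuming the chart repo published in the project's Helm docs (all vCenter values below are placeholders):

    # Add the chart repo and install the CPI into kube-system
    helm repo add vsphere-cpi https://kubernetes.github.io/cloud-provider-vsphere
    helm repo update
    helm install vsphere-cpi vsphere-cpi/vsphere-cpi \
      --namespace kube-system \
      --set config.enabled=true \
      --set vCenter.host=vcenter.example.com \
      --set vCenter.username=k8s-user \
      --set vCenter.password=changeme \
      --set vCenter.datacenter=DC1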