flannel | flannel is a network fabric for containers | Networking library

 by flannel-io | Go | Version: v0.22.0 | License: Apache-2.0

kandi X-RAY | flannel Summary

flannel is a Go library typically used in Networking, Ansible, and Docker applications. flannel has no bugs, it has no vulnerabilities, it has a Permissive License, and it has medium support. You can download it from GitHub.

Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes.
Support | Quality | Security | License | Reuse

            Support

              flannel has a medium active ecosystem.
              It has 8041 star(s) with 2855 fork(s). There are 228 watchers for this library.
              There were 2 major release(s) in the last 12 months.
              There are 33 open issues and 992 have been closed. On average, issues are closed in 382 days. There is 1 open pull request and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of flannel is v0.22.0.

            Quality

              flannel has 0 bugs and 0 code smells.

            Security

              flannel has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              flannel code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              flannel is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              flannel releases are available to install and integrate.
              Installation instructions are available. Examples and code snippets are not available.
              It has 11,222 lines of code, 474 functions, and 87 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.


            flannel Key Features

            No Key Features are available at this moment for flannel.

            flannel Examples and Code Snippets

            No Code Snippets are available at this moment for flannel.

            Community Discussions

            QUESTION

            Error CrashLoopBackOff when starting the k8s Dashboard
            Asked 2022-Mar-25 at 05:40

            I am trying to install the dashboard on a clean, private k8s cluster (without an internet connection). I followed the instructions at https://github.com/kubernetes/dashboard. When I apply recommended.yaml, the metrics scraper starts successfully, but the dashboard permanently shows the error CrashLoopBackOff.

            Docker version: 19.03.6, K8s version: 1.23.4

            Containers status:

            ...

            ANSWER

            Answered 2022-Mar-25 at 05:40

            By default, the dashboard container is scheduled on a worker node. In the recommended.yaml file I pinned the installation to the control-plane machine with nodeName: k8s-master1, and it works.
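
            As a rough, abridged sketch of that change (assuming the standard dashboard Deployment from recommended.yaml; only the relevant fields are shown, the node name is specific to this cluster, and the image tag is illustrative):

              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: kubernetes-dashboard
                namespace: kubernetes-dashboard
              spec:
                template:
                  spec:
                    nodeName: k8s-master1   # pin the dashboard pod to the control-plane node
                    containers:
                      - name: kubernetes-dashboard
                        image: kubernetesui/dashboard:v2.5.0   # illustrative tag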

            Final yaml file:

            Source https://stackoverflow.com/questions/71574198

            QUESTION

            Kubernetes nginx ingress controller is unreliable
            Asked 2022-Mar-13 at 06:38

            I need help understanding in detail how an ingress controller, specifically the ingress-nginx ingress controller, is supposed to work. To me, it appears as a black box that is supposed to listen on a public IP, terminate TLS, and forward traffic to a pod. But exactly how that happens is a mystery to me.

            The primary goal here is understanding, the secondary goal is troubleshooting an immediate issue I'm facing.

            I have a cluster with five nodes, and am trying to get the Jupyterhub application to run on it. For the most part, it is working fine. I'm using a pretty standard Rancher RKE setup with flannel/calico for the networking. The nodes run RedHat 7.9 with iptables and firewalld, and docker 19.03.

            The Jupyterhub proxy is set up with a ClusterIP service (I also tried a NodePort service, that also works). I also set up an ingress. The ingress sometimes works, but oftentimes does not respond (connection times out). Specifically, if I delete the ingress, and then redeploy my helm chart, the ingress will start working. Also, if I restart one of my nodes, the ingress will start working again. I have not identified the circumstances when the ingress stops working.

            Here are my relevant services:

            ...

            ANSWER

            Answered 2022-Mar-13 at 06:38

            I found the answer to my question here: https://www.stackrox.io/blog/kubernetes-networking-demystified/ There probably is a caveat that this may vary to some extent depending on which networking CNI you are using, although everything I saw was strictly related to Kubernetes itself.

            I'm still trying to digest the content of this blog, and I highly recommend referring directly to that blog, instead of relying on my answer, which could be a poor retelling of the story.

            Here is approximately how a packet that arrives on port 443 flows.

            You will need to use the iptables command to see the tables.

            Source https://stackoverflow.com/questions/71013284

            QUESTION

            Kubernetes Pods unable to resolve external host
            Asked 2022-Feb-24 at 00:57

            I am running a 3-node Kubernetes cluster with Flannel as the CNI. I used kubeadm to set up the cluster, and the version is 1.23.

            My pods need to talk to external hosts using DNS names, but there is no DNS server for those hosts. For that, I have added their entries to /etc/hosts on each node in the cluster. The nodes can resolve the hosts, but Pods are not able to resolve them.

            I searched for this problem on the internet, and there are suggestions to use hostAliases or to update the /etc/hosts file inside the container. My problem is that the list of hosts is large, and it's not feasible to maintain it in the yaml file.
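
            For reference, the hostAliases workaround mentioned above is a per-pod spec stanza roughly like the following (IPs and hostnames are illustrative), which is why a large host list quickly becomes unmanageable:

              spec:
                hostAliases:
                  - ip: "10.1.2.3"              # illustrative external host
                    hostnames:
                      - "db1.internal.example"
                  - ip: "10.1.2.4"
                    hostnames:
                      - "db2.internal.example"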

            I also looked for a built-in Kubernetes flag to make Pods use the entries in the node's /etc/hosts, but couldn't find one.

            So my questions are:

            1. Why can't the pods running on a node resolve hosts present in the node's /etc/hosts file?
            2. Is there a way to set up a local DNS server and have all the Pods query this DNS server for these specific hosts?

            Any other suggestions or workarounds are also welcomed.

            ...

            ANSWER

            Answered 2022-Feb-24 at 00:57

            The environment inside a container is isolated from other containers and machines (including the host machine), and the same goes for /etc/hosts.

            If you are using CoreDNS (the default internal DNS), you can easily add extra host information by modifying its ConfigMap.

            Open the ConfigMap with kubectl edit configmap coredns -n kube-system and edit it so that it includes a hosts section:
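
            A sketch of what the edited Corefile might contain, assuming a stock CoreDNS configuration (the host entries are illustrative):

              apiVersion: v1
              kind: ConfigMap
              metadata:
                name: coredns
                namespace: kube-system
              data:
                Corefile: |
                  .:53 {
                      errors
                      health
                      hosts {
                          10.1.2.3 db1.internal.example   # illustrative entries
                          10.1.2.4 db2.internal.example
                          fallthrough                     # pass unmatched names to the next plugin
                      }
                      kubernetes cluster.local in-addr.arpa ip6.arpa {
                          pods insecure
                          fallthrough in-addr.arpa ip6.arpa
                      }
                      forward . /etc/resolv.conf
                      cache 30
                      loop
                      reload
                      loadbalance
                  }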

            Source https://stackoverflow.com/questions/71234801

            QUESTION

            Unable to expose SCTP server running in a kubernetes pod using NodePort
            Asked 2022-Feb-14 at 08:58

            I have a single-node kubernetes cluster running in a VM in Azure. I have a service running an SCTP server on port 38412, and I need to expose that port externally. I have tried changing the service type to NodePort, but with no success. I am using flannel as the overlay network, with Kubernetes version 1.23.3.

            This is my service.yaml file

            ...

            ANSWER

            Answered 2022-Feb-13 at 14:03

            Neither AKS nor Flannel supports SCTP at the time of writing. Here are some details about it.
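
            For reference, the kind of Service the question is attempting would declare SCTP as the protocol, roughly like this (the names and nodePort value are illustrative); the protocol field is what the cluster and CNI would need to support:

              apiVersion: v1
              kind: Service
              metadata:
                name: sctp-server           # illustrative name
              spec:
                type: NodePort
                selector:
                  app: sctp-server
                ports:
                  - protocol: SCTP          # requires SCTP support in the cluster and the CNI
                    port: 38412
                    targetPort: 38412
                    nodePort: 30412         # illustrative value in the default 30000-32767 range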

            Source https://stackoverflow.com/questions/71100744

            QUESTION

            Readiness fails in the Eclipse Hono pods of the Cloud2Edge package
            Asked 2022-Feb-09 at 06:58

            I am a bit desperate and I hope someone can help me. A few months ago I installed the Eclipse Cloud2Edge package on a Kubernetes cluster by following the installation instructions, creating a persistentVolume, and running the helm install command with these options.

            ...

            ANSWER

            Answered 2022-Feb-09 at 06:58

            Based on the iconic Failed to create SSL Connection output in the logs, I assume that you have run into the dreaded "the demo certificates included in the Hono chart have expired" problem.

            The Cloud2Edge package chart is currently being updated (https://github.com/eclipse/packages/pull/337) with the most recent versions of the Ditto and Hono charts (which include fresh certificates that are valid for two more years). As soon as that PR is merged and the Eclipse Packages chart repository has been rebuilt, you should be able to do a helm repo update and then (hopefully) successfully install the c2e package.

            Source https://stackoverflow.com/questions/71034254

            QUESTION

            Pods unable to resolve hostnames in Kubernetes cluster
            Asked 2022-Feb-09 at 05:15

            I'm working on AWS EKS, and I'm having networking issues because none of the pods can resolve hostnames.

            Checking the kube-config pods, I found this:

            ...

            ANSWER

            Answered 2022-Feb-09 at 05:15

            Amazon VPC CNI and Flannel cannot co-exist on EKS. Note that Flannel is not on the list of suggested alternate compatible CNIs. To get an idea of what it takes to use Flannel on EKS, check out this excellent blog.

            Source https://stackoverflow.com/questions/71040878

            QUESTION

            Kubernetes NodePort is not available on all nodes - Oracle Cloud Infrastructure (OCI)
            Asked 2022-Jan-31 at 14:37

            I've been trying to get over this but I'm out of ideas for now hence I'm posting the question here.

            I'm experimenting with the Oracle Cloud Infrastructure (OCI) and I wanted to create a Kubernetes cluster which exposes some service.

            The goal is:

            • A running managed Kubernetes cluster (OKE)
            • 2 nodes at least
            • 1 service that's accessible for external parties

            The infra looks the following:

            • A VCN for the whole thing
            • A private subnet on 10.0.1.0/24
            • A public subnet on 10.0.0.0/24
            • NAT gateway for the private subnet
            • Internet gateway for the public subnet
            • Service gateway
            • The corresponding security lists for both subnets which I won't share right now unless somebody asks for it
            • A containerengine K8S (OKE) cluster in the VCN with public Kubernetes API enabled
            • A node pool for the K8S cluster with 2 availability domains and with 2 instances right now. The instances are ARM machines with 1 OCPU and 6GB RAM running Oracle-Linux-7.9-aarch64-2021.12.08-0 images.
            • A namespace in the K8S cluster (call it staging for now)
            • A deployment which refers to a custom NextJS application serving traffic on port 3000

            And now it's the point where I want to expose the service running on port 3000.

            I have 2 obvious choices:

            • Create a LoadBalancer service in K8S, which will spawn a classic Load Balancer in OCI, set up its listener, and set up the backend set referring to the 2 nodes in the cluster; it also adjusts the subnet security lists to make sure traffic can flow
            • Create a Network Load Balancer in OCI and create a NodePort on K8S and manually configure the NLB to the ~same settings as the classic Load Balancer

            The first one works perfectly fine, but I want to run this cluster at minimal cost, so I decided to experiment with option 2, the NLB, since it's way cheaper (zero cost).

            Long story short, everything works and I can access the NextJS app on the IP of the NLB most of the time, but sometimes I can't. I decided to look into what's going on, and it turned out the NodePort that I exposed in the cluster isn't working how I'd imagined.

            The service behind the NodePort is only accessible on the Node that's running the pod in K8S. Assume NodeA is running the service and NodeB is just there chilling. If I try to hit the service on NodeA, everything is fine. But when I try to do the same on NodeB, I don't get a response at all.

            That's my problem and I couldn't figure out what could be the issue.

            What I've tried so far:

            • Switching from ARM machines to AMD ones - no change
            • Created a bastion host in the public subnet to test which nodes are responding to requests. It turned out that only the node running the pod responds.
            • Created a regular LoadBalancer in K8S with the same config as the NodePort (in this case OCI will create a classic Load Balancer), that works perfectly
            • Tried upgrading to Oracle 8.4 images for the K8S nodes, didn't fix it
            • Ran the Node Doctor on the nodes, everything is fine
            • Checked the logs of kube-proxy, kube-flannel, core-dns, no error
            • Since the cluster consists of 2 nodes, I gave it a try and added one more node and the service was not accessible on the new node either
            • Recreated the cluster from scratch

            Edit: Some update. I've tried using a DaemonSet instead of a regular Deployment for the pod, to ensure that, as a temporary solution, all nodes are running at least one instance of the pod, and... surprise: the node that was previously not responding to requests on that specific port still does not respond, even though a pod is running on it.

            Edit2: Originally I was running the latest K8S version on the cluster (v1.21.5). I tried downgrading to v1.20.11, but unfortunately the issue is still present.

            Edit3: I checked whether the NodePort is open on the node that's not responding, and it is; at least kube-proxy is listening on it.

            ...

            ANSWER

            Answered 2022-Jan-31 at 12:06

            It might not be the ideal fix, but can you try changing the externalTrafficPolicy to Local? This makes the health check fail on nodes which don't run the application, so traffic is only forwarded to the node where the application is running. Setting externalTrafficPolicy to Local is also a requirement for preserving the source IP of the connection. Also, can you share the health check config for both the NLB and the LB that you are using? When you change the externalTrafficPolicy, note that the health check for the LB would change, and the same needs to be applied to the NLB.
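
            A sketch of the suggested change, assuming a NodePort service in front of the NextJS app (the name and nodePort value are illustrative):

              apiVersion: v1
              kind: Service
              metadata:
                name: nextjs-app               # illustrative name
              spec:
                type: NodePort
                externalTrafficPolicy: Local   # health checks fail on nodes without a local pod
                selector:
                  app: nextjs-app
                ports:
                  - port: 3000
                    targetPort: 3000
                    nodePort: 30300            # illustrative; point the NLB backend set at this port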

            Edit: Also note that you need a security list / network security group on your node subnet/nodepool which allows traffic on all protocols from the worker node subnet.

            Source https://stackoverflow.com/questions/70893487

            QUESTION

            kubernetes master node and admin user don't have permissions after update
            Asked 2022-Jan-31 at 11:52

            I've googled for a few days and haven't found any solutions. I tried to update k8s from 1.19.0 to 1.19.6 on Ubuntu 20. (The cluster was installed manually: k81 is the master and k82 the worker node.)

            ...

            ANSWER

            Answered 2022-Jan-28 at 10:13

            QUESTION

            UDP/TCP Broadcast in Managed Kubernetes Services (specifically AWS-EKS)
            Asked 2022-Jan-22 at 17:26

            We have an app that uses UDP broadcast messages to form a "cluster" of all instances running in the same subnet.

            We can successfully run this app in our (pretty std) local K8s installation by using hostNetwork:true for pods. This works because all K8s nodes are in the same subnet and broadcasting is possible. (a minor note: the K8s setup uses flannel networking plugin)

            Now we want to move this app to the managed K8s service @ AWS. But our initial attempts have failed. The 2 daemons running in 2 different pods didn't see each other. We thought that was most likely due to the auto-generated EC2 worker node instances for the AWS K8s service residing on different subnets. Then we created 2 completely new EC2 instances in the same subnet (and the same availability-zone) and tried running the app directly on them (not as part of K8s), but that also failed. They could not communicate via broadcast messages even though the 2 EC2 instances were on the same subnet/availability-zone.

            Hence, the following questions:

            • Our preliminary search shows that AWS EC2 probably does not support broadcasting/multicasting, but we still wanted to ask whether there is a way to enable it (on AWS or another cloud provider)?

            • We had used hostNetwork:true because we thought it would be much harder, if not impossible, to get broadcasting working with K8s pod networking. But it seems some companies offer K8s network plugins that support this. Does anybody have experience with (or a recommendation for) any of them? Would they work on AWS, for example, considering that AWS doesn't support it at the EC2 level?

            • We would much appreciate any pointers as to how to approach this and whether we have any options at all.

            Thanks

            ...

            ANSWER

            Answered 2022-Jan-22 at 17:26

            Conceptually, you need to create an overlay network on top of the native VPC, like this. There's a CNI that supports multicast, and here's the AWS blog about it.

            Source https://stackoverflow.com/questions/70814551

            QUESTION

            Why is ArgoCD confusing GitHub.com with my own public IP?
            Asked 2022-Jan-10 at 17:37

            I have just set up a kubernetes cluster on bare metal using kubeadm, Flannel and MetalLB. Next step for me is to install ArgoCD.

            I installed the ArgoCD yaml from the "Getting Started" page and logged in.

            When adding my Git repositories, ArgoCD gives me very weird error messages: the error message seems to suggest that ArgoCD is, for some reason, resolving github.com to my public IP address (I am not exposing SSH, hence connection refused).

            I can not find any reason why it would do this. When using https:// instead of SSH I get the same result, but on port 443.

            I have put a dummy pod in the same namespace as ArgoCD and made some DNS queries. These queries resolved correctly.

            What makes ArgoCD think that github.com resolves to my public IP address?

            EDIT:

            I have also checked for network policies in the argocd namespace and found no policy that was restricting egress.

            I have had this working on clusters in the same network previously and have not changed my router firewall since then.

            ...

            ANSWER

            Answered 2022-Jan-08 at 21:04

            That looks like argoproj/argo-cd issue 1510, where the initial diagnosis was that the cluster was blocking outbound connections to GitHub, and it was suggested to check the egress configuration.

            Yet, the issue was resolved with an ingress rule configuration:

            It needs to be defined in values.yaml: argo-cd by default provides a subdomain, but in our case it was served under /argocd.

            Source https://stackoverflow.com/questions/70600322

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install flannel

            The easiest way to deploy flannel with Kubernetes is to use one of the several deployment tools and distributions that network clusters with flannel by default. For example, CoreOS's Tectonic sets up flannel in the Kubernetes clusters it creates, using the open source Tectonic Installer to drive the setup process. Though not required, it's recommended that flannel use the Kubernetes API as its backing store, which avoids the need to deploy a discrete etcd cluster for flannel. This flannel mode is known as the kube subnet manager.
            flannel is also widely used outside of Kubernetes. When deployed outside of Kubernetes, etcd is always used as the datastore. For more details on integrating flannel with Docker, see Running.
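
            For a feel of what that configuration looks like, here is a minimal, abridged sketch of the ConfigMap the flannel DaemonSet reads in kube subnet manager mode (names follow the upstream kube-flannel.yml manifest; the Network CIDR is illustrative and must match the cluster's pod network):

              apiVersion: v1
              kind: ConfigMap
              metadata:
                name: kube-flannel-cfg
                namespace: kube-system
                labels:
                  app: flannel
              data:
                net-conf.json: |
                  {
                    "Network": "10.244.0.0/16",
                    "Backend": {
                      "Type": "vxlan"
                    }
                  }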

            Support

            Find more information in the project docs: Building (and releasing), Configuration, Backends, Running, Troubleshooting, Projects integrating with flannel, Production users.

            CLONE
          • HTTPS

            https://github.com/flannel-io/flannel.git

          • CLI

            gh repo clone flannel-io/flannel

          • SSH

            git@github.com:flannel-io/flannel.git



            Consider Popular Networking Libraries

            • Moya by Moya
            • diaspora by diaspora
            • kcptun by xtaci
            • cilium by cilium
            • kcp by skywind3000

            Try Top Libraries by flannel-io

            • cni-plugin by flannel-io (Go)