coredns | CoreDNS is a DNS server that chains plugins | DNS library

 by coredns | Go Version: v1.10.1 | License: Apache-2.0

kandi X-RAY | coredns Summary

coredns is a Go library typically used in Networking and DNS applications. coredns has no reported bugs or vulnerabilities, carries a permissive license, and has medium support. You can download it from GitHub.

CoreDNS is a DNS server/forwarder, written in Go, that chains plugins. Each plugin performs a (DNS) function. CoreDNS is a Cloud Native Computing Foundation graduated project. CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you are able to do what you want with your DNS data by utilizing plugins. If some functionality is not provided out of the box you can add it by writing a plugin. CoreDNS can listen for DNS requests coming in over UDP/TCP (good ol' DNS), TLS (RFC 7858), also called DoT, DNS over HTTP/2 - DoH - (RFC 8484), and gRPC (not a standard).
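To illustrate what the plain UDP/TCP transport carries, here is a minimal sketch of the RFC 1035 wire format a server like CoreDNS receives; the domain name and transaction ID are arbitrary examples:

```python
import struct

def build_dns_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Build a minimal RFC 1035 DNS query message in wire format."""
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(l)]) + l.encode("ascii") for l in name.split("."))
    qname += b"\x00"
    # QTYPE (1 = A record) and QCLASS (1 = IN)
    question = qname + struct.pack("!HH", qtype, 1)
    return header + question

query = build_dns_query("example.org")
# These bytes could be sent to a DNS listener with
# socket.sendto(query, ("127.0.0.1", 53)) -- not done here.
```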

            kandi-support Support

              coredns has a medium active ecosystem.
              It has 10753 star(s) with 1929 fork(s). There are 235 watchers for this library.
              It had no major release in the last 12 months.
              There are 83 open issues and 2054 have been closed. On average issues are closed in 43 days. There are 6 open pull requests and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of coredns is v1.10.1

            kandi-Quality Quality

              coredns has 0 bugs and 0 code smells.

            kandi-Security Security

              coredns has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              coredns code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              coredns is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              coredns releases are available to install and integrate.
              Installation instructions are not available. Examples and code snippets are available.
              It has 45292 lines of code, 1919 functions and 558 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.


            coredns Key Features

            No Key Features are available at this moment for coredns.

            coredns Examples and Code Snippets

            No Code Snippets are available at this moment for coredns.

            Community Discussions


            How to configure coredns Corefile similar to unbound configurations?
            Asked 2022-Apr-01 at 09:33

Is it possible to configure all of the unbound options listed here in a Kubernetes coredns Corefile configuration like this? Only a few options are listed here. I am looking to set the below server options from the unbound conf in the Kubernetes coredns Corefile ConfigMap.

            1. do-ip6
            2. verbosity
            3. outgoing-port-avoid, outgoing-port-permit
            4. domain-insecure
            5. access-control
            6. local-zone

Example unbound conf which I am looking to replicate in the Kubernetes Corefile configuration:



            Answered 2022-Mar-31 at 07:09

            CoreDNS supports some requested features via plugins:

            • do-ip6 - CoreDNS works with ipv6 by default (if cluster is dual-stack)
            • verbosity - the log plugin will show more details about queries; its format and which response classes it logs (success, denial, error, or all) are configurable
            • outgoing-port-avoid, outgoing-port-permit - did not find any support for this
            • domain-insecure - please check whether dnssec can help (it looks similar to what unbound has, but I'm not really familiar with it)
            • access-control - the acl plugin does it.
            • local-zone - the local plugin can be tried for this purpose, though it doesn't have many options.

            Bonus point:

            • CoreDNS config changes - the reload plugin allows automatic reloading of a changed Corefile.

            All of the plugins mentioned above have syntax and examples on their pages.
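As a rough illustration, a Corefile combining several of these plugins might look like the following. This is a sketch only; the directives and their arguments should be checked against each plugin's page:

```
.:53 {
    log . {
        class denial error   # verbosity-like query logging
    }
    acl {
        allow net 192.168.0.0/16   # access-control equivalent
        block
    }
    reload                   # re-read the Corefile on change
    forward . /etc/resolv.conf
}
```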



            2-Node Cluster, Master goes down, Worker fails
            Asked 2022-Mar-28 at 15:13

            We have a 2 node K3S cluster with one master and one worker node and would like "reasonable availability" in that, if one or the other nodes goes down the cluster still works i.e. ingress reaches the services and pods which we have replicated across both nodes. We have an external load balancer (F5) which does active health checks on each node and only sends traffic to up nodes.

            Unfortunately, if the master goes down the worker will not serve any traffic (ingress).

            This is strange because all the service pods (which ingress feeds) on the worker node are running.

            We suspect the reason is that key services such as the traefik ingress controller and coredns are only running on the master.

            Indeed when we simulated a master failure, restoring it from a backup, none of the pods on the worker could do any DNS resolution. Only a reboot of the worker solved this.

            We've tried to increase the number of replicas of the traefik and coredns deployment which helps a bit BUT:

            • This gets lost on the next reboot
            • The worker still functions when the master is down but every 2nd ingress request fails
              • It seems the worker still blindly (round-robin) sends traffic to a non-existent master

            We would appreciate some advice and explanation:

            • Should not key services such as traefik and coredns be DaemonSets by default?
            • How can we change the service description (e.g. replica count) in a persistent way that does not get lost
            • How can we get intelligent traffic routing with ingress to only "up" nodes
            • Would it make sense to make this a 2-master cluster?

            UPDATE: Ingress Description:



            Answered 2022-Mar-18 at 09:50

            Running a cluster with only one or two master nodes is not recommended in k8s, as it doesn't tolerate the failure of master components. Consider running 3 masters in your Kubernetes cluster.

            Following link would be helpful -->
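On the persistence question above: k3s re-applies its packaged component manifests, so ad-hoc replica edits can be lost. One hedged sketch of a persistent override, assuming a recent k3s with the packaged traefik Helm chart (the file path and chart values are assumptions to verify against your k3s version's documentation):

```yaml
# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml (assumed path)
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    deployment:
      replicas: 2   # persisted across node reboots
```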



            kubernetes dashboard (web ui) has nothing to display
            Asked 2022-Mar-28 at 13:46

            After I deployed the webui (k8s dashboard), I logged in to the dashboard, but nothing was found there; instead there was a list of errors in the notifications.



            Answered 2021-Aug-24 at 14:00

            I have recreated the situation according to the attached tutorial and it works for me. Make sure that you are logging in properly:

            To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you can follow our guide on creating a sample user.

            Warning: The sample user created in the tutorial will have administrative privileges and is for educational purposes only.

            You can also create admin role:
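A minimal sketch of such an admin role binding follows; the names are illustrative, and, per the warning above, this grants full cluster-admin rights, so it is for demos only:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```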



            Enable use of images from the local library on Kubernetes
            Asked 2022-Mar-20 at 13:23

            I'm following a tutorial,

            currently, I have the right image



            Answered 2022-Mar-16 at 08:10

            If your image has a latest tag, the Pod's ImagePullPolicy will be automatically set to Always. Each time the pod is created, Kubernetes tries to pull the newest image.

            Try not tagging the image as latest, or manually set the Pod's imagePullPolicy to Never. If you're using a static manifest to create the Pod, the setting will look like the following:
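A sketch of such a manifest, where the image name and tag are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v1        # avoid the implicit :latest tag
    imagePullPolicy: Never # only use an image already present on the node
```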



            Golang REST API Deployment on AWS EKS Fails with CrashLoopBackOff
            Asked 2022-Mar-16 at 17:23

            I'm trying to deploy a simple REST API written in Golang to AWS EKS.

            I created an EKS cluster on AWS using Terraform and applied the AWS load balancer controller Helm chart to it.

            All resources in the cluster look like:



            Answered 2022-Mar-15 at 15:23

            A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.

            Maybe the error comes from the application itself, e.g. it cannot connect to a database, Redis, etc.

            You may find something useful here:

            My kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log
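Typical first steps for inspecting such a pod (the pod name is a placeholder):

```shell
# Logs from the previous, crashed container instance
kubectl logs <pod-name> --previous
# Events and state transitions for the pod
kubectl describe pod <pod-name>
# Recent cluster events, newest last
kubectl get events --sort-by=.metadata.creationTimestamp
```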



            AWS EKS nodes creation failure
            Asked 2022-Mar-12 at 20:23

            I have a cluster in AWS created by these instructions.

            Then I tried to add nodes in this cluster according to this documentation.

            It seems that the nodes fail to be created, with vpc-cni and coredns reporting a health issue of type insufficientNumberOfReplicas: "The add-on is unhealthy because it doesn't have the desired number of replicas."

            The status of the pods kubectl get pods -n kube-system:



            Answered 2021-Dec-02 at 22:52

            It's most likely a problem with the node service role. You can get more information if you exec into the pod and then view the ipamd.log.
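For example (the pod name is a placeholder, and the log path below is what recent VPC CNI versions use; verify it against your version):

```shell
# Find the aws-node (VPC CNI) pods
kubectl get pods -n kube-system -l k8s-app=aws-node
# View the IPAM daemon log inside one of them
kubectl exec -n kube-system <aws-node-pod> -- cat /host/var/log/aws-routed-eni/ipamd.log
```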



            Minikube always reset to initial state when restart it
            Asked 2022-Mar-07 at 08:38

            I have been facing this problem since yesterday; there were no problems before.
            My environment is:

            • Windows 11
            • Docker Desktop 4.4.4
            • minikube 1.25.1
            • kubernetes-cli 1.23.3
            To reproduce: 1. Start minikube and create the cluster ...


            Answered 2022-Mar-07 at 08:38

            This seems to be a bug introduced with version 1.25.0 of minikube. A PR to revert the changes introducing the bug is already open.

            The fix is scheduled for minikube v1.26.



            FluentBit setup
            Asked 2022-Feb-01 at 13:47

            I'm trying to set up FluentBit for my EKS cluster in Terraform, via this module, and I have couple of questions:

            cluster_identity_oidc_issuer - what is this? Frankly, I was just told to set this up, so I have very little knowledge about FluentBit, but I assume this "issuer" provides an identity with needed permissions. For example, Okta? We use Okta, so what would I use as a value in here?

            cluster_identity_oidc_issuer_arn - no idea what this value is supposed to be.

            worker_iam_role_name - as in the role with autoscaling capabilities (oidc)?

            This is what it looks like:



            Answered 2022-Feb-01 at 13:47

            Since you are using a Terraform EKS module, you can access attributes of the created resources by looking at the Outputs tab [1]. There you can find the following outputs:

            • cluster_id
            • cluster_oidc_issuer_url
            • oidc_provider_arn

            They are accessible by using the following syntax:
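For instance, assuming the module block is named eks (the name is an assumption; use whatever your configuration calls it):

```hcl
# Hypothetical references to the EKS module's outputs
locals {
  cluster_id        = module.eks.cluster_id
  oidc_issuer_url   = module.eks.cluster_oidc_issuer_url
  oidc_provider_arn = module.eks.oidc_provider_arn
}
```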



            kubernetes master node and admin user don't have permissions after update
            Asked 2022-Jan-31 at 11:52

            I've googled for a few days and haven't found any solutions. I tried to update k8s from 1.19.0 to 1.19.6 on Ubuntu 20. (The cluster was installed manually: k81 is the master and k82 the worker node.)



            Answered 2022-Jan-28 at 10:13


            Connection refused from pod to pod via service clusterIP
            Asked 2022-Jan-16 at 15:05

            Something wrong happened with my RPi 4 cluster based on k3sup.

            Everything worked as expected until yesterday, when I had to reinstall the master node's operating system. For example, I have redis installed on the master node and some pods on the worker nodes. My pods cannot connect to redis via DNS: redis-master.database.svc.cluster.local (but they did the day before).

            It throws an error that it cannot resolve the domain when I test with busybox like:



            Answered 2022-Jan-16 at 15:05

            There was one more thing that was not mentioned. I'm using OpenVPN with the NordVPN server list on the master node, and a privoxy for the worker nodes.

            When you install and run OpenVPN before running the kubernetes master, OpenVPN adds rules that block kubernetes networking, so coredns does not work and you can't reach any pod via IP either.

            I'm using an RPi 4 cluster, so for me it was good enough to just re-install the master node, install kubernetes first and then configure OpenVPN. Now everything is working as expected.

            It's good enough to order your system units by adding After or Before in the service definition. I have a VPN systemd service that looks like below:
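A sketch of such a unit, assuming a k3s server; the unit name, paths, and service names are illustrative:

```ini
# /etc/systemd/system/openvpn-client.service (illustrative)
[Unit]
Description=OpenVPN client
# Start only after the kubernetes service is up
After=network-online.target k3s.service
Wants=network-online.target

[Service]
ExecStart=/usr/sbin/openvpn --config /etc/openvpn/client.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```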


            Community Discussions, Code Snippets contain sources that include Stack Exchange Network



            Install coredns

            You can download it from GitHub.


            We're most active on GitHub (and Slack).
            Find more information at:

          • HTTPS


          • CLI

            gh repo clone coredns/coredns

          • sshUrl


