coredns | CoreDNS is a DNS server that chains plugins | DNS library
kandi X-RAY | coredns Summary
CoreDNS is a DNS server/forwarder, written in Go, that chains plugins; each plugin performs a (DNS) function. CoreDNS is a Cloud Native Computing Foundation graduated project. CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you are able to do what you want with your DNS data by utilizing plugins, and if some functionality is not provided out of the box you can add it by writing a plugin. CoreDNS can listen for DNS requests coming in over UDP/TCP (good ol' DNS), TLS (RFC 7858, also called DoT), DNS over HTTP/2 (DoH, RFC 8484), and gRPC (not a standard).
Community Discussions
Trending Discussions on coredns
QUESTION
Is it possible to configure all the unbound options listed here in a similar way in the Kubernetes CoreDNS Corefile configuration? Only a few options are listed here. I am looking to set the below unbound.conf server options in the Kubernetes coredns ConfigMap (Corefile).
- do-ip6
- verbosity
- outgoing-port-avoid, outgoing-port-permit
- domain-insecure
- access-control
- local-zone
Example unbound.conf that I am looking to replicate in the Kubernetes Corefile configuration:
...ANSWER
Answered 2022-Mar-31 at 07:09
CoreDNS supports some of the requested features via plugins:
- do-ip6: CoreDNS works with IPv6 by default (if the cluster is dual-stack).
- verbosity: the log plugin will show more details about queries; it can use different formats and control what gets logged (success, denial, errors, everything).
- outgoing-port-avoid, outgoing-port-permit: did not find any support for this.
- domain-insecure: please check whether dnssec can help (it looks similar to what unbound has, but I'm not really familiar with it).
- access-control: the acl plugin does this.
- local-zone: the local plugin can be tried for this purpose, though it doesn't have many options.
Bonus point: for CoreDNS config changes, the reload plugin allows automatic reloading of a changed Corefile.
All the plugins mentioned above have syntax and examples on their pages.
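For illustration, here is a minimal sketch of a Corefile inside the kube-system coredns ConfigMap that combines some of these plugins. The allowed network range and forward target are placeholders, not values from the question; check each plugin's page for the exact syntax:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log                      # query logging, closest to unbound's verbosity
        acl {                    # access-control equivalent (placeholder range)
            allow net 10.0.0.0/8
            block
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        reload                   # picks up Corefile changes automatically
        loadbalance
    }
```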
QUESTION
We have a 2-node K3s cluster with one master and one worker node and would like "reasonable availability": if one or the other node goes down, the cluster still works, i.e. ingress reaches the services and pods, which we have replicated across both nodes. We have an external load balancer (F5) which does active health checks on each node and only sends traffic to nodes that are up.
Unfortunately, if the master goes down the worker will not serve any traffic (ingress).
This is strange because all the service pods (which ingress feeds) on the worker node are running.
We suspect the reason is that key services such as the traefik ingress controller and coredns are only running on the master.
Indeed when we simulated a master failure, restoring it from a backup, none of the pods on the worker could do any DNS resolution. Only a reboot of the worker solved this.
We've tried to increase the number of replicas of the traefik and coredns deployments, which helps a bit, BUT:
- This gets lost on the next reboot.
- The worker still functions when the master is down, but every 2nd ingress request fails.
- It seems the worker still blindly (round-robin) sends traffic to a non-existent master.
We would appreciate some advice and explanation:
- Should not key services such as traefik and coredns be DaemonSets by default?
- How can we change the service description (e.g. replica count) in a persistent way that does not get lost?
- How can we get intelligent traffic routing with ingress to only "up" nodes?
- Would it make sense to make this a 2-master cluster?
UPDATE: Ingress Description:
...ANSWER
Answered 2022-Mar-18 at 09:50
Running one or two master nodes in a Kubernetes cluster is not recommended, as it does not tolerate failure of the master components. Consider running 3 masters: etcd needs a quorum (a majority of members) to keep operating, so a 3-master cluster survives the loss of one master, while a 2-master cluster loses quorum when either master goes down.
The following link may be helpful: https://netapp-trident.readthedocs.io/en/stable-v19.01/dag/kubernetes/kubernetes_cluster_architecture_considerations.html
QUESTION
After I deployed the web UI (Kubernetes Dashboard), I logged in to the dashboard but found nothing there; instead there was a list of errors in the notifications.
...ANSWER
Answered 2021-Aug-24 at 14:00
I have recreated the situation according to the attached tutorial and it works for me. Make sure that you are logging in properly:
To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you can follow our guide on creating a sample user.
Warning: The sample user created in the tutorial will have administrative privileges and is for educational purposes only.
You can also create an admin role:
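For reference, a sketch of the sample-user manifests from the Dashboard's "creating a sample user" guide (the ServiceAccount name and namespace follow that guide; this grants cluster-admin, so it is for demo purposes only):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

Depending on the cluster version, the login token then comes either from kubectl -n kubernetes-dashboard create token admin-user (1.24+) or from the ServiceAccount's token Secret.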
QUESTION
I'm following the tutorial https://docs.openfaas.com/tutorials/first-python-function/; currently, I have the right image
...ANSWER
Answered 2022-Mar-16 at 08:10
If your image has a latest tag, the Pod's imagePullPolicy will be automatically set to Always. Each time the pod is created, Kubernetes tries to pull the newest image.
Try not tagging the image as latest, or manually set the Pod's imagePullPolicy to Never.
If you're using a static manifest to create the Pod, the setting will look like the following:
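A minimal sketch of such a manifest (the pod name and image are placeholders, not values from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-python           # placeholder name
spec:
  containers:
  - name: hello-python
    image: hello-python:latest # placeholder image
    imagePullPolicy: Never     # always use the locally built image
```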
QUESTION
I'm trying to deploy a simple REST API written in Golang to AWS EKS.
I created an EKS cluster on AWS using Terraform and applied the AWS load balancer controller Helm chart to it.
All resources in the cluster look like:
...ANSWER
Answered 2022-Mar-15 at 15:23
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
The error may come from the application itself, e.g. it cannot connect to a database, Redis, etc.
You may find something useful here:
My kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log
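As a starting point, these standard kubectl commands (the pod name is a placeholder) usually reveal why the container keeps crashing:

```sh
kubectl describe pod my-api-pod        # events, exit code, restart count
kubectl logs my-api-pod --previous     # logs of the last crashed container
```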
QUESTION
I have a cluster in AWS created by these instructions.
Then I tried to add nodes in this cluster according to this documentation.
It seems that the nodes fail to be created, with the vpc-cni and coredns add-ons reporting health issue type insufficientNumberOfReplicas: "The add-on is unhealthy because it doesn't have the desired number of replicas."
The status of the pods (kubectl get pods -n kube-system):
ANSWER
Answered 2021-Dec-02 at 22:52
It's most likely a problem with the node service role. You can get more information if you exec into the pod and then view the ipamd.log.
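A sketch of how to do that, assuming the default AWS VPC CNI layout where the node's /var/log is mounted into the aws-node pod at /host/var/log (the pod name below is a placeholder):

```sh
kubectl get pods -n kube-system -l k8s-app=aws-node   # find the aws-node pod
kubectl exec -n kube-system aws-node-xxxxx -- \
    cat /host/var/log/aws-routed-eni/ipamd.log        # default ipamd log path
```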
QUESTION
I have been facing this problem since yesterday; there were no problems before.
My environment is
- Windows 11
- Docker Desktop 4.4.4
- minikube 1.25.1
- kubernetes-cli 1.23.3
ANSWER
Answered 2022-Mar-07 at 08:38
This seems to be a bug introduced in minikube 1.25.0: https://github.com/kubernetes/minikube/issues/13503. A PR reverting the changes that introduced the bug is already open: https://github.com/kubernetes/minikube/pull/13506
The fix is scheduled for minikube v1.26.
QUESTION
I'm trying to set up FluentBit for my EKS cluster in Terraform, via this module, and I have couple of questions:
cluster_identity_oidc_issuer - what is this? Frankly, I was just told to set this up, so I have very little knowledge of FluentBit, but I assume this "issuer" provides an identity with the needed permissions. For example, Okta? We use Okta, so what would I use as the value here?
cluster_identity_oidc_issuer_arn - no idea what this value is supposed to be.
worker_iam_role_name - as in the role with autoscaling capabilities (oidc)?
This is what eks.tf looks like:
...ANSWER
Answered 2022-Feb-01 at 13:47
Since you are using a Terraform EKS module, you can access attributes of the created resources by looking at the Outputs tab [1]. There you can find the following outputs:
cluster_id
cluster_oidc_issuer_url
oidc_provider_arn
They are accessible by using the following syntax:
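A hypothetical wiring sketch, assuming the EKS module is instantiated as module "eks" and that your module version exposes the outputs listed above (the fluent-bit module source is a placeholder):

```hcl
module "fluentbit" {
  source = "github.com/example/terraform-aws-eks-fluentbit" # placeholder source

  cluster_id                       = module.eks.cluster_id
  cluster_identity_oidc_issuer     = module.eks.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
  worker_iam_role_name             = module.eks.worker_iam_role_name # if your module version exposes it
}
```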
QUESTION
I've googled for a few days and haven't found any solutions. I tried to update k8s from 1.19.0 to 1.19.6 on Ubuntu 20 (cluster manually installed; k81 is the master and k82 the worker node).
...ANSWER
Answered 2022-Jan-28 at 10:13
The solution for the issue is to regenerate the kubeconfig file for the admin:
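A minimal sketch for a kubeadm-managed control plane (the paths are the kubeadm defaults; back up the old file first):

```sh
# On the master node: move the old admin kubeconfig aside and regenerate it
sudo mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.bak
sudo kubeadm init phase kubeconfig admin

# Install it as the admin user's kubeconfig
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```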
QUESTION
Something went wrong with my RPi 4 cluster based on k3sup.
Everything worked as expected until yesterday, when I had to reinstall the master node's operating system. For example, I have Redis installed on the master node and some pods on worker nodes. My pods cannot connect to Redis via DNS: redis-master.database.svc.cluster.local (but they did the day before).
It throws an error that it cannot resolve the domain when I test with busybox, like:
...ANSWER
Answered 2022-Jan-16 at 15:05
There was one more thing that was not mentioned: I'm using OpenVPN with a NordVPN server list on the master node, and privoxy on the worker nodes.
When you install and run OpenVPN before running the Kubernetes master, OpenVPN adds rules that block Kubernetes networking, so coredns does not work and you can't reach any pod via IP either.
I'm using an RPi 4 cluster, so for me it was good enough to just reinstall the master node, install Kubernetes first, and then configure OpenVPN. Now everything works as expected.
It's also good enough to simply order your systemd units by adding After= or Before= to the service definition. My VPN systemd service looks like the one below:
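A sketch of such a unit; the k3s.service name and the OpenVPN config path are assumptions and should match the units actually installed on the node:

```ini
[Unit]
Description=OpenVPN client (started after k3s so it cannot break cluster networking)
After=network-online.target k3s.service
Wants=network-online.target

[Service]
Type=simple
# Placeholder config path; adjust to your OpenVPN profile
ExecStart=/usr/sbin/openvpn --config /etc/openvpn/client.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```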
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported