microk8s | MicroK8s is a small, fast, single-package Kubernetes
kandi X-RAY | microk8s Summary
MicroK8s is a small, fast, single-package Kubernetes for developers, IoT and edge.
Top functions reviewed by kandi - BETA
- Reset the DQLite installation.
- Apply the configuration.
- Run the remote operation.
- Check if the Dashboard is running.
- Join a cluster.
- Update the dqlite cluster.
- Join a node.
- Demonstrate how to execute an action.
- Join a connection.
- Perform the upgrade.
microk8s Key Features
microk8s Examples and Code Snippets
Community Discussions
Trending Discussions on microk8s
QUESTION
I have microk8s v1.22.2 running on Ubuntu 20.04.3 LTS.
Output from /etc/hosts:
ANSWER
Answered 2021-Oct-10 at 18:29
error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
The extensions/v1beta1 Ingress API was removed in Kubernetes 1.22, so on v1.22.2 the manifest has to be rewritten against networking.k8s.io/v1.
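As a sketch of the fix (host, service name, and port are placeholders, not taken from the question), the same ingress ported to networking.k8s.io/v1 looks roughly like:

```yaml
# Hedged sketch: an Ingress on the networking.k8s.io/v1 API
# (host and backend service are illustrative placeholders)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix        # required in v1, unlike extensions/v1beta1
            backend:
              service:
                name: example-service
                port:
                  number: 80
```

Note that v1 also makes pathType mandatory and nests the backend under service, so a mechanical apiVersion swap is not enough.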
QUESTION
I have transferred my microk8s setup to a new server and found that the once-working ingress setup from my trial environment stopped working.
I am running this minimal whoami-app:
...ANSWER
Answered 2022-Mar-03 at 19:37
I don't really know how it happened, but the endpoint was not matching the pod IP. I deleted the endpoint manually using kubectl delete endpoints whoami; it got recreated with the correct IP, and the ingress works again.
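A way to spot this kind of stale-endpoint problem, sketched with the service name whoami from the answer (the label selector is an assumption, not from the question):

```shell
# Compare the endpoint address against the live pod IP
kubectl get endpoints whoami -o wide
kubectl get pods -l app=whoami -o wide   # label "app=whoami" is an assumption

# If they disagree, delete the stale Endpoints object;
# the endpoints controller recreates it from the current pods
kubectl delete endpoints whoami
```

If the endpoint keeps drifting, the Service selector is usually worth re-checking against the pod labels.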
QUESTION
Here is the issue: we have several microk8s clusters running on different networks, yet each has access to our storage network where our NAS appliances are.
Within Kubernetes, we create disks with an NFS provisioner (nfs-externalsubdir). Some disks were created with the IP of the NAS server specified. Once we had to change that IP, we discovered that the disk was bound to it, and changing the IP meant creating a new storage resource.
To avoid this, we would like to set a DNS record at the Kubernetes cluster level, so we could create storage resources with the NFS provisioner based on a name rather than an IP, and alter the DNS record when needed (when we upgrade or migrate our external NAS appliances, for instance). For example, I'd like to tell every microk8s environment that:
192.168.1.4 my-nas.mydomain.local
... like I would within the /etc/hosts file.
Is there a proper way to achieve this? I tried to follow the advice in this link: Is there a way to add arbitrary records to kube-dns? (the answer upvoted 15 times, the cluster-wide section) and restarted a deployment, but it didn't work.
I cannot use the hostAliases feature since it isn't provided on every chart we are using, that's why I'm looking for a more global solution.
Best Regards,
...ANSWER
Answered 2022-Jan-11 at 19:25
You can set your custom DNS records in Kubernetes through CoreDNS (the successor to kube-dns).
You pass the configuration as a ConfigMap mounted into the CoreDNS volume.
The ConfigMap will look like this:
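A sketch of such a ConfigMap, using CoreDNS's hosts plugin with the record from the question; the surrounding Corefile below is a typical default and may differ from what your cluster ships with:

```yaml
# Hedged sketch: CoreDNS ConfigMap with a static hosts entry
# (edit with: kubectl -n kube-system edit configmap coredns)
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        hosts {
            192.168.1.4 my-nas.mydomain.local
            fallthrough            # pass unmatched names to the plugins below
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . 8.8.8.8          # upstream resolver; adjust to your network
        cache 30
        loop
        reload
        loadbalance
    }
```

After editing, restarting the CoreDNS pods (kubectl -n kube-system rollout restart deployment coredns) makes the record visible cluster-wide, so the NFS provisioner can mount by name.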
QUESTION
Usually I am able to find most things by searching the web; in this case, however, the instructions talk about what is probably very basic stuff that I don't get yet.
What I am trying to achieve: I am trying to install Argo CD, using microk8s and nginx-ingress. Right now I am stuck at enabling SSL passthrough for Argo CD. I configured an ingress controller according to the Argo CD documentation, and the ingress controller runs without errors.
My guess is that, as is mentioned everywhere, I have to start the ingress controller with the --enable-ssl-passthrough flag.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough "SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag."
Now my problem: how do I do that? For me, the ingress controller is "just always running". I can delete and recreate an ingress with kubectl create -f ingress.yaml, which creates an ingress within the argocd namespace.
I kind of lack the basics of Kubernetes and don't get how I could stop and restart the ingress controller with flags (perhaps I'm also confusing what the "ingress controller" is). Does anyone have an idea how I could achieve that?
I am running microk8s v1.23.1 on Ubuntu 18.04.3 LTS
...ANSWER
Answered 2022-Jan-25 at 11:40
You can edit the ingress controller's Deployment (or, in microk8s, DaemonSet) YAML and pass the --enable-ssl-passthrough argument in the container args; Kubernetes then restarts the controller pods with the flag applied.
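A sketch of the relevant fragment; in microk8s the ingress addon typically runs as the DaemonSet nginx-ingress-microk8s-controller in the ingress namespace, though the names may differ in your install:

```yaml
# Hedged sketch: only the args of the controller container change
# (edit with: kubectl -n ingress edit daemonset nginx-ingress-microk8s-controller)
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-microk8s   # container name varies by install
          args:
            - /nginx-ingress-controller
            - --enable-ssl-passthrough   # the flag from the documentation quote
            # ...keep the other existing args unchanged
```

Saving the edit triggers a rolling restart of the controller pods, which is the "restart with flags" step the question was missing.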
QUESTION
I am running into a very strange issue: I cannot set the single quotes that are required by Content-Security-Policy. I assume I was running an older version of ingress-nginx, which only got updated after I disabled and re-enabled it (microk8s).
...ANSWER
Answered 2022-Jan-21 at 08:43
Changes related to sanitizing annotation inputs appeared exactly in 1.0.5.
You may want to check CVE-2021-25742: Ingress-nginx custom snippets. The part relevant to you:
annotation-value-word-blocklist defaults are "load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},',"
Users of mod_security and other features should be aware that some blocked values may be used by those features and must be manually unblocked by the Ingress Administrator.
It seems to me your issue is related to mod_security plus the blocklist above, which contains the ' symbol.
For more details please check https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist
In order to fix your issue you should either:
- set the value of annotation-value-word-blocklist to an empty string "", or
- change the value of annotation-value-word-blocklist and remove ' from its list.
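The second option can be sketched as an ingress-nginx ConfigMap entry; the ConfigMap name and namespace below are typical for a Helm install and may differ in microk8s:

```yaml
# Hedged sketch: override the blocklist without the single-quote entry
# so CSP annotations containing ' are accepted again
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace vary by install
  namespace: ingress-nginx
data:
  annotation-value-word-blocklist: "load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,}"
```

Removing only the ' entry keeps the rest of the CVE-2021-25742 hardening in place, which is safer than emptying the list outright.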
QUESTION
I'm trying to deploy an Elixir (Phoenix) application in a microk8s cluster namespace with TLS using Let's Encrypt. The cluster is hosted on an AWS EC2 instance.
The problem I'm facing:
- the ingress is created in the namespace
- the ingress routes to the correct domain
- the application is working and displayed on the given domain
- but the TLS secret is not being created in the namespace, and a 'default' one is created instead
The secrets after deploying both the Phoenix app and the httpbin app:
...ANSWER
Answered 2022-Jan-06 at 22:47
I found out that you can actually check for certificates with kubectl:
kubectl get certificate -n production
The status of this certificate was READY = FALSE.
I checked the description:
kubectl describe certificate -n production
At the bottom it said: Too many certificates have been created in the last 164 hours for this exact domain.
I just changed the domain and voila! It works.
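Changing the domain works because Let's Encrypt rate-limits duplicate certificates per exact domain set. An alternative while debugging, sketched here as an assumption rather than part of the original answer, is to point cert-manager at the Let's Encrypt staging endpoint, which has far looser limits:

```yaml
# Hedged sketch: a staging ClusterIssuer for cert-manager
# (email and secret name are placeholders)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging directory: untrusted certs, but no production rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-staging-key
    solvers:
      - http01:
          ingress:
            class: nginx
```

Once the Certificate reaches READY = True against staging, switching the issuer back to production avoids burning through the duplicate-certificate quota.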
QUESTION
When I run kubectl get pods --all-namespaces, I get: Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
All of my pods are running and ready 1/1, but when I use microk8s kubectl get service -n kube-system I get
ANSWER
Answered 2021-Dec-27 at 08:21
Posting the answer from the comments for better visibility: the problem was solved by reinstalling Multipass and microk8s. Now it works.
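For readers hitting the same dial tcp [::1]:8080 error, a lighter check worth trying first (this is an observation about kubectl's defaults, not part of the original answer): that address is kubectl's fallback when it has no kubeconfig, so exporting the MicroK8s config is often enough.

```shell
# Plain kubectl falls back to localhost:8080 when no kubeconfig is found;
# point it at the MicroK8s cluster instead
microk8s config > ~/.kube/config   # or merge into an existing kubeconfig
kubectl get pods --all-namespaces
```

Only if kubectl still cannot reach the API server does a reinstall become the remaining option.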
QUESTION
I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs).
I'm not even sure whether that is easily possible, or whether I should just switch to an external solution such as HAProxy or a standalone nginx.
Required behavior:
...ANSWER
Answered 2021-Sep-13 at 04:48
"I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs)."
For the scenario above, you have to deploy multiple Nginx ingress controllers, keeping a different class name for each.
Official document: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
In this scenario, you create a Kubernetes Service with a LoadBalancer IP for each controller; each Service points to its respective deployment, and the class is referenced in the ingress object.
If you want to use multiple domains with a single ingress controller, you can easily do so by specifying the hosts in the ingress.
Example for two domain :
- bar.foo.dev
- foo.bar.dev
YAML example
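A sketch of the single-controller, two-host case using the domains from the answer (service names and ports are placeholders):

```yaml
# Hedged sketch: one Ingress routing two hosts to two backend services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-domain-ingress
spec:
  ingressClassName: nginx          # must match the controller's class
  rules:
    - host: bar.foo.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: bar-service  # placeholder backend
                port:
                  number: 80
    - host: foo.bar.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo-service  # placeholder backend
                port:
                  number: 80
```

The multi-interface requirement, by contrast, needs the multiple-controller setup from the linked document, since each controller binds its own Service and external IP.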
QUESTION
I have a Docker container with MariaDB running in Microk8s (running on a single Unix machine).
...ANSWER
Answered 2021-Oct-14 at 13:26
The answer was given in the comments section; to clarify, I am posting the solution here as a Community Wiki.
In this case the connection problem was resolved by setting spec.selector.
The .spec.selector field defines how the Deployment finds which Pods to manage. In this case, you select a label that is defined in the Pod template (app: nginx).
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.
QUESTION
I have a Docker Swarm running a container with our custom code (.NET Core 3.1 on Linux) with no issue. I have just set up a test 4-node Kubernetes cluster using MicroK8s. When I load the image into Kubernetes, it appears to go through fine, but the container starts and then stops immediately with the error "Back-off restarting failed container". Looking at the error from the pod, I get "It was not possible to find any installed .NET Core SDKs. Did you mean to run .NET Core SDK commands? Install a .NET Core SDK from: https://aka.ms/dotnet-download".
The image is the same image running in the Swarm. My searches have led me to the ENTRYPOINT in the build Dockerfile as a potential cause, but I haven't had any luck with changes to it. I have put my Dockerfile below in case it is relevant to this problem.
...ANSWER
Answered 2021-Oct-11 at 13:26
Mounts:
  /app from config-v5api-871dbe27-9933-416a-9830-ef1ec93a82e9 (rw)
The volume mounted at /app hides the application files baked into the image, which is why the runtime cannot find the app or any .NET SDK to fall back on.
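One hedged fix, assuming the ConfigMap only needs to supply a single settings file (the ConfigMap and file names below are placeholders), is to mount just that file via subPath so /app itself is not shadowed:

```yaml
# Hedged sketch: mount one file into /app instead of replacing the directory
spec:
  containers:
    - name: v5api
      image: myregistry/v5api:latest   # placeholder image
      volumeMounts:
        - name: config
          mountPath: /app/appsettings.json
          subPath: appsettings.json    # only this file overlays the image
  volumes:
    - name: config
      configMap:
        name: v5api-config             # placeholder ConfigMap name
```

Alternatively, mounting the ConfigMap at a separate path (e.g. /config) and pointing the app at it avoids subPath's known caveat that updates to the ConfigMap are not propagated to a running pod.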
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install microk8s
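The standard installation on Ubuntu uses snap, per the MicroK8s documentation:

```shell
# Install MicroK8s and allow the current user to run it without sudo
sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER   # group change takes effect after re-login

# Wait for the cluster to come up and verify
microk8s status --wait-ready
microk8s kubectl get nodes
```

Add-ons discussed in the questions above (ingress, dns, storage) are enabled afterwards with microk8s enable.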