hostpath-provisioner | Dynamic Provisioning of Kubernetes HostPath Volumes
kandi X-RAY | hostpath-provisioner Summary
This repository and its code are unmaintained. You should not use it in production.
Top functions reviewed by kandi - BETA
- main is the main entry point for the provisioner
- NewHostPathProvisioner returns a new host path provisioner
- Delete removes the specified volume.
Community Discussions
Trending Discussions on hostpath-provisioner
QUESTION
I know there have been already a lot of questions about this, and I read already most of them, but my problem does not seem to fit them.
I am running a PostgreSQL from Bitnami using a Helm chart, as described below. A clean setup is no problem and everything starts fine. But after some time, and so far I could not find any pattern, the pod goes into CrashLoopBackOff and I cannot recover it whatever I try!
Helm uninstall/install does not fix the problem. The PVs seem to be the problem, but I do not know why, and I do not get any error message, which is the weird and scary part of it.
I use minikube to run the k8s cluster, and Helm v3.
Here are the definitions and logs:
...ANSWER
Answered 2022-Jan-04 at 18:31
I really hope nobody else runs across this, but finally I found the problem, and for once it was not only between the chair and the monitor, but also RTFM was involved.
As mentioned, I am using minikube to run my k8s cluster, which provides PVs stored on the host disk. Where are they stored, you may ask? Exactly, here: /tmp/hostpath-provisioner/default/data-sessiondb-0/data/. Do you see the problem? No, I also took some time to figure it out. WHY ON EARTH does minikube use the tmp folder to store persistent volume claims? This folder gets automatically cleared every now and then.
SOLUTION: Change the path and DO NOT STORE PVs IN tmp FOLDERS.
They mention this here: https://minikube.sigs.k8s.io/docs/handbook/persistent_volumes/#a-note-on-mounts-persistence-and-minikube-hosts and give an example. But why use the "dangerous" tmp path by default and not, let's say, data, without putting a warning banner there?
Sigh. Closing this question ^^
--> Workaround: https://github.com/kubernetes/minikube/issues/7511#issuecomment-612099413
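As a concrete shape for that workaround, here is a minimal sketch of a PV that keeps its data under /data, which the minikube handbook lists as persisted across reboots; the volume name, size, and exact path are illustrative assumptions, not taken from the question:

```yaml
# Hedged sketch: a hostPath PV rooted under /data instead of /tmp,
# so the minikube host does not clear it between restarts.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-sessiondb-pv   # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # minikube's default storage class
  hostPath:
    path: /data/hostpath-provisioner/default/data-sessiondb-0
```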
Github issues to this topic:
- https://github.com/kubernetes/minikube/issues/7511
- https://github.com/kubernetes/minikube/issues/13038
- https://github.com/kubernetes/minikube/issues/3318
- https://github.com/kubernetes/minikube/issues/5144
My Github issue for clarification in the docs: https://github.com/kubernetes/minikube/issues/13038#issuecomment-981821696
QUESTION
I'm using minikube for running my Kubernetes deployment:
pvc:
...ANSWER
Answered 2021-Dec-20 at 23:22
The local mount you created mounts the specified directory into minikube, but not from the guest to the host as you would like it to.
Depending on your host machine's OS you will have to set up proper file sharing using either host folder sharing or a network based file system.
With a bit of work, one could set up Syncthing between the host and the guest VM for persistent file synchronization.
Grab the latest release of Syncthing for your operating system & unpack it (if you use Debian/Ubuntu you may want to use the Debian repository)
At this point Syncthing will also have set up a folder called Default Folder for you, in a directory called Sync in your home directory (%USERPROFILE% on Windows). You can use this as a starting point, then remove it or add more folders later.
The admin GUI starts automatically and remains available on http://localhost:8384/. Cookies are essential to the correct functioning of the GUI; please ensure your browser accepts them.
On the left is the list of “folders”, or directories to synchronize. You can see the Default Folder was created for you, and it’s currently marked “Unshared” since it’s not yet shared with any other device. On the right is the list of devices. Currently there is only one device: the computer you are running this on.
For Syncthing to be able to synchronize files with another device, it must be told about that device. This is accomplished by exchanging “device IDs”. A device ID is a unique, cryptographically-secure identifier that is generated as part of the key generation the first time you start Syncthing. It is printed in a log, and you can see it in the web GUI by selecting “Actions” (top right) and “Show ID”.
Two devices will only connect and talk to each other if they are both configured with each other’s device ID. Since the configuration must be mutual for a connection to happen, device IDs don’t need to be kept secret. They are essentially part of the public key.
To get your two devices to talk to each other click “Add Remote Device” at the bottom right on both devices, and enter the device ID of the other side. You should also select the folder(s) that you want to share. The device name is optional and purely cosmetic. You can change it later if desired. Once you click “Save” the new device will appear on the right side of the GUI (although disconnected) and then connect to the new device after a minute or so. Remember to repeat this step for the other device.
At this point the two devices share an empty directory. Adding files to the shared directory on either device will synchronize those files to the other side.
What is Syncthing: https://syncthing.net/
Installation Guide: https://docs.syncthing.net/intro/getting-started.html
Latest release of Syncthing: https://github.com/syncthing/syncthing/releases/tag/v1.18.5
Debian Repo: https://apt.syncthing.net/
QUESTION
I'm trying to deploy a MongoDB ReplicaSet on a microk8s cluster. I have it installed in a VM running Ubuntu 20.04. After the deployment, the mongo pods do not run but crash. I've enabled the microk8s storage, dns and rbac add-ons, but still the same problem persists. Can anyone help me find the reason behind it? Below is my manifest file:
...ANSWER
Answered 2021-Sep-08 at 07:32
The logs you provided show that you have an incorrectly set parameter wiredTigerCacheSizeGB. In your case it is 0.1, and according to the message
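The rest of the message is cut off above, but the gist is that mongod rejects a cache size below its documented minimum of 0.25 GB. A hedged sketch of the corrected container arguments (the container name and image tag are assumptions for illustration):

```yaml
# Fragment of a pod/statefulset spec: wiredTigerCacheSizeGB must be
# at least 0.25; a value of 0.1 makes mongod exit at startup.
containers:
  - name: mongodb           # illustrative name
    image: mongo:4.4        # illustrative tag
    args:
      - "--replSet"
      - "rs0"
      - "--wiredTigerCacheSizeGB"
      - "0.25"
```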
QUESTION
I am trying to use the Microk8s storage add-on, but my PVC and pod are stuck at Pending and I don't know what is wrong. I am also using the "registry" add-on, which uses the storage, and that one works without a problem.
FYI: I already restarted microk8s multiple times and even totally deleted and reinstalled it, but the problem remained.
Yaml files:
...ANSWER
Answered 2021-Mar-23 at 09:23
I found the problem. Since the hostpath-provisioner takes care of creating the PV, we should not pass the volumeName in our PVC yaml file. When I removed that field, the provisioner could create a PV and bind my PVC to it, and now my pod has started.
Now my PVC is:
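The corrected manifest is not shown above; a minimal sketch of a PVC that leaves binding to the provisioner, assuming the microk8s-hostpath storage class and an illustrative claim name:

```yaml
# No volumeName here: the hostpath provisioner creates a PV and binds
# the claim itself; a pre-set volumeName pins the claim to a PV that
# does not exist yet, leaving it Pending.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim            # illustrative name
spec:
  storageClassName: microk8s-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```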
QUESTION
I'm quite new to Kubernetes and I'm trying to set up a microk8s test environment on a VPS with CentOS.
What I did:
I set up the cluster, enabled the ingress and metallb
...ANSWER
Answered 2021-Jan-19 at 20:49
TL;DR
There are some ways to fix your Ingress so that it gets the IP address. You can either:
- Delete the kubernetes.io/ingress.class: nginx annotation and add ingressClassName: public under the spec section.
- Use the newer example (apiVersion) from the official documentation, which by default will have an IngressClass assigned.
Example of an Ingress resource that will fix your issue:
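The example resource itself did not survive extraction; a hedged reconstruction of what the fixed manifest would look like, with the host, service name, and port as placeholder assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress          # illustrative name
spec:
  ingressClassName: public  # class served by the microk8s ingress add-on
  rules:
    - host: example.local   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service  # placeholder backend service
                port:
                  number: 80
```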
QUESTION
I am learning about microk8s and how ingress works.
I have a single node microk8s (v1.18.4) with the following add-ons: DNS, ingress, RBAC, storage
I am trying to get it working with the microbot example. I've read (and reread) through the tutorial but, once the ingress manifest is applied, the microbot service ends up routed to 127.0.0.1 (and not the internal pod IP).
When I attempt to access the app at http://192.168.91.166/microbot from outside the VM it runs in (and via curl while logged into the VM), an error page is returned. 192.168.91.166 is the VM's IP.
ANSWER
Answered 2020-Jul-15 at 17:24
In microK8s you should be using http://127.0.0.1/microbot to access a pod via ingress from outside the cluster, i.e. a browser. This is giving you a 502 error in the nginx ingress controller log. A few things to check:
- Check that the service has got Endpoints reflecting the correct pod IP, using kubectl describe svc microbot -n development.
- Check if the container inside the pod is listening on port 8080. Maybe it's 80 or something else.
- The application running as a container in the pod needs to listen on 0.0.0.0 instead of 127.0.0.1.
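As a companion to that checklist, a hedged sketch of the Service side of the wiring; the selector, namespace, and ports are assumptions for illustration:

```yaml
# targetPort must match the port the container actually listens on,
# and the app must bind 0.0.0.0, or the ingress controller logs 502s.
apiVersion: v1
kind: Service
metadata:
  name: microbot
  namespace: development
spec:
  selector:
    app: microbot           # illustrative label
  ports:
    - port: 80              # port the ingress backend targets
      targetPort: 8080      # port the container process listens on
```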
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported