local-persist | Create named local volumes that persist in the location(s) you want!
kandi X-RAY | local-persist Summary
Create named local volumes that persist in the location(s) you want!
Community Discussions
Trending Discussions on local-persist
QUESTION
Docker version 20.10.2
I'm just starting out with Docker and following training guides, but something hasn't been mentioned so far (that I have discovered): when I run a container to write some data out to a Docker volume, and then run that container again attached to the same volume, the newly written data will not append into it?
Here is my rather basic Dockerfile
...ANSWER
Answered 2021-Jul-16 at 18:24
Those commands run once when the image is built.
If you want something to run on container startup, you can use CMD or ENTRYPOINT:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
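A hedged sketch of the difference (the base image, file path, and volume name are illustrative, not taken from the question): a RUN instruction writes data only once, at image build time, while a CMD runs on every container start and can therefore append to a file on an attached volume.

```dockerfile
FROM alpine:3.18

# Build-time step: RUN executes once, when the image is built
RUN mkdir -p /data

# Startup step: CMD executes every time a container starts, so each
# run with the same volume attached appends another line to the file
CMD ["sh", "-c", "date >> /data/log.txt && cat /data/log.txt"]
```

Started repeatedly with something like docker run -v mydata:/data <image>, each run would append a new line, because the CMD executes at container startup rather than at image build time.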
QUESTION
I have 3 local persistent volumes provisioned, each PV mounted on a different node:
- 10.30.18.10
- 10.30.18.11
- 10.30.18.12
When I start my app with 3 replicas using:
...ANSWER
Answered 2021-Jun-30 at 10:28
With local Persistent Volumes, this is the expected behaviour. Let me try to explain what happens when using local storage.
The usual setup for local storage on a cluster is the following:
- A local storage class, configured to be WaitForFirstConsumer
- A series of local persistent volumes, linked to the local storage class
And this is all well documented with examples in the official documentation: https://kubernetes.io/docs/concepts/storage/volumes/#local
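As a minimal sketch of such a setup (the class name is illustrative and matches the my-local-sc mentioned below), a local storage class with no dynamic provisioner could look like this:

```yaml
# Storage class for manually provisioned local volumes.
# "no-provisioner" means PVs are created by hand; WaitForFirstConsumer
# delays binding until a Pod that uses the claim is scheduled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-local-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```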
With this done, PersistentVolumeClaims can request storage from the local storage class, and StatefulSets can have a volumeClaimTemplate which requests storage from that class.
Let me take as an example your StatefulSet with 3 replicas, each of which requires local storage through the volumeClaimTemplate.
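For illustration only (the claim name and size are placeholders), the relevant part of such a StatefulSet could look like this, so that each replica gets its own PVC from the local storage class:

```yaml
# Excerpt of a StatefulSet spec: one PVC is created per replica
# from this template, all requesting storage from my-local-sc.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: my-local-sc
      resources:
        requests:
          storage: 10Gi
```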
- When the Pods are first created, they request storage of the required storageClass, for example your my-local-sc.
- Since this storage class is manually created and does not support dynamic provisioning of new PVs (unlike, for example, Ceph or similar), it is checked whether a PV attached to the storage class is Available to be bound.
- If a PV is selected, it is bound to the newly created PVC (and from now on, the PVC can be used only with that particular PV, since it is now Bound).
- Since the PV is of type local, the PV has a required nodeAffinity which selects a node.
- This forces the Pod, now bound to that PV, to be scheduled only on that particular node.
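A hedged sketch of such a local PV (the capacity and host path are placeholders; the node value reuses one of the addresses from the question, assuming node names match those addresses):

```yaml
# Manually created local PV. The required nodeAffinity ties the PV,
# and therefore any Pod bound to it, to this single node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-sc
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["10.30.18.10"]
```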
This is why each Pod was scheduled on the same node as its bound persistent volume. And this means that the Pod is restricted to run on that node only.
You can test this easily by draining or cordoning one of the nodes and then trying to restart the Pod bound to the PV available on that particular node. What you should see is that the Pod will not start, as the PV is restricted by its nodeAffinity and the node is not available.
Once each Pod of the StatefulSet is bound to a PV, that Pod will be scheduled only on a specific node. Pods will not change the PV that they are using unless the PVC is removed (which will force the Pod to request a new PV to bind to).
Since local storage is handled manually, PVs which were bound and have had the related PVC removed from the cluster enter the Released state and cannot be claimed anymore; they must be handled by someone, for example by deleting them and recreating new ones at the same location (and maybe cleaning the filesystem as well, depending on the situation).
This means that local storage is OK to be used only:
- If HA is not a problem, for example if you don't care that your app is blocked when a single node is not working
- If HA is handled directly by the app itself. For example, a StatefulSet with 3 Pods such as a multi-primary database (Galera, ClickHouse, Percona, for example), or Elasticsearch, Kafka, ZooKeeper, or something like that; all of these handle HA on their own, as they can tolerate one of their nodes being down as long as there's quorum.
UPDATE
Regarding Scenario 2 of your question: let's say you have multiple Available PVs and a single Pod which starts and wants to bind to one of them. This is normal behaviour, and the control plane will select one of those PVs on its own (if they match the requests in the claim).
There's a specific way to pre-bind a PV and a PVC, so that they will always bind together. This is described in the docs as "reserving a PV": https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume
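As a hedged illustration of that reserving mechanism (names and namespace are placeholders), a PVC can name a specific PV via volumeName, and the PV can reserve itself for exactly that claim via claimRef:

```yaml
# Claim side: pin this PVC to one specific PV by name
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-app-0
  namespace: default
spec:
  storageClassName: my-local-sc
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  volumeName: local-pv-1
---
# Volume side: reserve the PV for exactly that claim
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  claimRef:
    name: data-my-app-0
    namespace: default
  # ...remaining fields as in the local PV example above
```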
But the problem is that this cannot be applied to volume claim templates, as it requires the claim to be created manually with special properties.
The volume claim template, though, has a selector field which can be used to restrict the selection of a PV based on labels. It can be seen in the API specs ( https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#persistentvolumeclaimspec-v1-core ).
When you create a PV, you label it with whatever you want; for example, you could label it like the following:
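A hedged sketch of what that could look like (the label key and value are illustrative): the PV carries a label, and the volumeClaimTemplate selects only PVs carrying that label.

```yaml
# PV with a label that claims can select on
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    disk-owner: my-app
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: my-local-sc
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["10.30.18.10"]

# volumeClaimTemplate excerpt: only PVs labelled disk-owner=my-app match
  - metadata:
      name: data
    spec:
      storageClassName: my-local-sc
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
      selector:
        matchLabels:
          disk-owner: my-app
```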
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported