csi | Container Storage Interfaces for CNCF | Continuous Deployment library
kandi X-RAY | csi Summary
Container Storage Interfaces for CNCF
Community Discussions
Trending Discussions on csi
QUESTION
I am using the ECK operator to create an Elasticsearch instance. The instance uses a StorageClass that has Retain (instead of Delete) as its reclaim policy. Here are my PVCs before deleting the Elasticsearch instance:
ANSWER
Answered 2021-Jun-14 at 15:38
with the hope that due to the Retain policy, the new pods (i.e. their PVCs) would bind to the existing PVs (and data wouldn't get lost)
It is explicitly written in the documentation that this is not what happens: the PVs are not available to another PVC after the original PVC is deleted.
the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume.
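If the goal is to reuse such a released volume, one common workaround, sketched below (not something the documentation quoted above prescribes; the PV name is a placeholder), is to clear the stale claim reference so the PV becomes Available again and a new PVC with a matching storage class, access mode and size can bind to it. The data on the volume is left untouched:

kubectl get pv                         # the retained volume shows STATUS "Released"
kubectl patch pv <pv-name> --type merge -p '{"spec":{"claimRef":null}}'
kubectl get pv                         # the volume should now show STATUS "Available"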
QUESTION
I have a deployment with 5 containers. Two of them take --endpoint as an argument, whose value is set from ENV. So I see this error after deployment:
...
ANSWER
Answered 2021-Jun-07 at 05:17
It has nothing to do with the different containers. Whichever process is crashing is simply broken: the code has a bug where it registers the same flag twice, which isn't allowed.
QUESTION
I'm trying to create a PostgreSQL database in a Kubernetes cluster on DigitalOcean. To do so, I've created a StatefulSet and a Service. And to set up a volume in order to persist data, I took a look at the Add Block Storage Volumes tutorial. My k8s configurations for the StatefulSet and Service are down below.
I simply used a volumeClaimTemplates. The storage class do-block-storage exists in the cluster (volumeBindingMode is set to Immediate). The pv and the pvc are successfully created.
A volumeClaimTemplates is responsible for locating the block storage volume by the name csi-pvc. If a volume by that name does not exist, one will be created.
But my pod falls into a CrashLoopBackOff. I'm getting: 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. Back-off restarting failed container
It is also worth saying that my cluster only has one node.
Can anyone please help me understand why? Thanks.
...
ANSWER
Answered 2021-May-24 at 15:22
I managed to fix my problem by adding the PVC first instead of using volumeClaimTemplates.
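A minimal sketch of that approach, assuming the csi-pvc name and the do-block-storage class from the question (the requested size is illustrative): the claim is created up front, and the StatefulSet then mounts it as a regular volume instead of relying on a volumeClaimTemplates entry.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 5Gi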
QUESTION
According to the documentation:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned ... It is a resource in the cluster just like a node is a cluster resource...
So I was reading about all currently available plugins for PVs, and I understand that for 3rd-party / out-of-cluster storage this doesn't matter (e.g. storing data in EBS, Azure or GCE disks) because there are no or very few implications when adding or removing nodes from a cluster. However, there are different ones, such as (ignoring hostPath, as that works only for single-node clusters):
- csi
- local
which (at least from what I've read in the docs) don't require 3rd-party vendors/software.
But also:
... local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume becomes inaccessible by the pod. The pod using this volume is unable to run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.
The local PersistentVolume requires manual cleanup and deletion by the user if the external static provisioner is not used to manage the volume lifecycle.
Use-case
Let's say I have a single-node cluster with a single local PV and I want to add a new node to the cluster, so I have a 2-node cluster (small numbers for simplicity).
Will the data from an already existing local PV be replicated 1:1 onto the new node, as in having one PV with 2 nodes of redundancy, or is it strictly bound to the existing node only?
If the already existing PV can't be adjusted from 1 to 2 nodes, can a new PV (created from scratch) be set up so it's replicated 1:1 between 2+ nodes in the cluster?
Alternatively, if not, what would be the correct approach without using a 3rd-party out-of-cluster solution? Will using csi cause any change to the overall approach, or is it the same with regard to redundancy, just a different "engine" under the hood?
ANSWER
Answered 2021-May-22 at 22:41
Can a new PV be created so it's 1:1 replicated between 2+ nodes on the cluster?
None of the standard volume types are replicated at all. If you can use a volume type that supports ReadWriteMany access (most readily NFS), then multiple pods can use it simultaneously, but you would have to run the matching NFS server.
Of the volume types you reference:
- hostPath is a directory on the node the pod happens to be running on. It's not a directory on any specific node, so if the pod gets recreated on a different node, it will refer to the same directory but on the new node, presumably with different content. Aside from basic test scenarios I'm not sure when a hostPath PersistentVolume would be useful.
- local is a directory on a specific node, or at least one following a node-affinity constraint. Kubernetes knows that not all storage can be mounted on every node, so this automatically constrains the pod to run on the node that has the directory (assuming the node still exists).
- csi is an extremely generic extension mechanism, so that you can run storage drivers that aren't on the list you link to. There are some features that might be better supported by the CSI version of a storage backend than the in-tree version. (I'm familiar with AWS: the EBS CSI driver supports snapshots and resizing; the EFS CSI driver can dynamically provision NFS directories.)
In the specific case of a local test cluster (say, using kind), using a local volume will constrain pods to run on the node that has the data, which is more robust than using a hostPath volume. It won't replicate the data, though, so if the node with the data is deleted, the data goes away with it.
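For reference, a sketch of what a local PersistentVolume and its node-affinity constraint look like (the node name, path, capacity and storage class below are placeholders, not values from the question). The nodeAffinity block is what ties the volume, and any pod that claims it, to the single node holding the data; nothing in it replicates that data to a second node.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1              # directory or disk that already exists on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1               # the one node where the data lives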
QUESTION
Why does the line chart on this graph stretch beyond its assigned dimensions? In this demo I have a line chart whose height automatically grows to 2355 although I am setting the canvas height to 250. How can I control the height of the line chart? Thank you.
JS:
...
ANSWER
Answered 2021-May-19 at 12:34
Add the !important attribute to the canvas height CSS rule.
QUESTION
I have minikube installed on Windows 10, and I'm trying to work with the Ingress Controller.
I'm doing:
...
$ minikube addons enable ingress
ANSWER
Answered 2021-May-07 at 12:07
As already discussed in the comments, the Ingress Controller will be created in the ingress-nginx namespace instead of the kube-system namespace. Other than that, the rest of the tutorial should work as expected.
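A quick way to confirm this, assuming a recent minikube where the addon deploys into ingress-nginx (the exact pod name suffix will differ):

kubectl get pods -n ingress-nginx     # the ingress-nginx-controller pod should be listed as Running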
QUESTION
I'm applying the aws-efs-csi driver like this on a Kubernetes cluster:
...
ANSWER
Answered 2021-May-04 at 12:08
It’s a daemonset.
kubectl -n kube-system edit ds/efs-csi-node
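To check what that DaemonSet currently looks like before editing it, a sketch using the name from the answer (pod names carry a generated suffix):

kubectl -n kube-system get daemonset efs-csi-node
kubectl -n kube-system get pods | grep efs-csi-node   # one pod per node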
QUESTION
I want to enable the ReadWriteMany access mode for an EKS Persistent Volume. I came across the io2 volume type from AWS EBS, so I am using an io2 type volume.
storage_class.yaml
...
ANSWER
Answered 2021-Apr-22 at 09:00
It looks like there is an open feature request on the kubernetes-sigs/aws-ebs-csi-driver repo, but no progress on it. So I guess that it is not supported at the moment, but you can monitor the issue for updates.
QUESTION
I'm trying to use an example from https://github.com/ageitgey/face_recognition for face detection on Raspberry Pi.
This is the 'facerec_on_raspberry_pi.py' code:
...
ANSWER
Answered 2021-Apr-19 at 14:00
You can use the OpenCV API directly.
Instead of creating a PiCamera object like:
QUESTION
Is there any way to recover data from a deleted RBD volume in Ceph? Thanks.
...
ANSWER
Answered 2021-Apr-13 at 16:06
AFAIK, the answer is no. However, I am citing the following explanation from the source, which may be helpful for you:
Consider the way Ceph stores data... each RBD is striped into chunks (RADOS objects, 4 MB in size by default); the chunks are distributed among the OSDs with the configured number of replicas (probably two in your case, since you use 2 OSD hosts). RBD uses thin provisioning, so chunks are allocated upon first write access. If an RBD is deleted, all of its chunks are deleted on the corresponding OSDs. If you want to recover a deleted RBD, you need to recover all individual chunks. Whether this is possible depends on your filesystem and whether the space of a former chunk has already been assigned to other RADOS objects. The RADOS object names are composed of the RBD name and the offset position of the chunk, so if an undelete mechanism exists for the OSDs' filesystem, you have to be able to recover files by their filenames, otherwise you might end up mixing the content of various deleted RBDs. Due to the thin provisioning there might be some chunks missing (e.g. never allocated before).
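To make the chunk naming concrete, a sketch (the pool and image names are examples, and the prefix value is made up): while an image still exists, its chunk objects can be listed like this; after deletion, these are the objects a recovery attempt would have to reconstruct from the OSDs' filesystems.

rbd info mypool/myimage                           # prints a block_name_prefix such as rbd_data.1234abcd5678
rados -p mypool ls | grep rbd_data.1234abcd5678   # lists the 4 MB chunk objects belonging to that image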
However, there is still some hope. The Ceph book called Mastering Ceph has given some hints to recover the data as following:
there are tools that can search through the OSD data structure, find the object files relating to RBDs, and then assemble these objects back into a disk image, resembling the original RBD image.
Maybe you need to find the right tool in the Ceph source code.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported