csi-driver | Kubernetes Container Storage Interface driver for Hetzner | Plugin library
kandi X-RAY | csi-driver Summary
Kubernetes Container Storage Interface driver for Hetzner Cloud Volumes
Community Discussions
Trending Discussions on csi-driver
QUESTION
I'm following this AWS documentation, which explains how to configure AWS Secrets Manager so that it works with EKS through Kubernetes Secrets.
I successfully followed step by step all the different commands as explained in the documentation.
The only difference I get is related to this step where I have to run:
...ANSWER
Answered 2022-Mar-06 at 22:24
Finally I realized why it wasn't working. As explained here, the error:
QUESTION
I would like to be able to deploy the AWS EFS CSI Driver Helm chart hosted at the AWS EFS SIG Repo using Pulumi, with source from the AWS EFS CSI Driver GitHub repository. I would like to avoid a situation where almost everything is managed with Pulumi except this one part of my infrastructure.
Below is the TypeScript class I created to manage interacting with the k8s.helm.v3.Release class:
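The asker's actual class was elided; as a hedged sketch of what deploying this chart with k8s.helm.v3.Release can look like (the release name, namespace, and chart version below are illustrative assumptions, not taken from the question):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch only: deploy the aws-efs-csi-driver chart from the SIG repo.
// Namespace and version are placeholders; pick the ones you actually need.
const efsCsiDriver = new k8s.helm.v3.Release("aws-efs-csi-driver", {
    chart: "aws-efs-csi-driver",
    version: "2.2.0", // placeholder chart version
    namespace: "kube-system",
    repositoryOpts: {
        repo: "https://kubernetes-sigs.github.io/aws-efs-csi-driver/",
    },
});
```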
...ANSWER
Answered 2022-Feb-06 at 20:21
QUESTION
I've been trying to deploy a self-managed node EKS cluster for a while now, with no success. The errors I'm stuck on now concern EKS add-ons:
```
Error: error creating EKS Add-On (DevOpsLabs2b-dev-test--eks:kube-proxy): InvalidParameterException: Addon version specified is not supported, AddonName: "kube-proxy", ClusterName: "DevOpsLabs2b-dev-test--eks", Message_: "Addon version specified is not supported" }

  with module.eks-ssp-kubernetes-addons.module.aws_kube_proxy[0].aws_eks_addon.kube_proxy
  on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/aws-kube-proxy/main.tf line 19, in resource "aws_eks_addon" "kube_proxy":
```

This error repeats for coredns as well, but ebs_csi_driver throws:

```
Error: unexpected EKS Add-On (DevOpsLabs2b-dev-test--eks:aws-ebs-csi-driver) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)

[WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
```
My main.tf looks like this:
...ANSWER
Answered 2022-Feb-04 at 09:24
K8s is hard to get right sometimes. The examples on GitHub are shown for version 1.21 [1]. Because of that, if you leave only this:
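The usual fix for this class of error is to pin each add-on to a version that is actually supported for the cluster's Kubernetes version. As a hedged sketch (the version string is an example, not taken from the question), a standalone aws_eks_addon resource would look like:

```hcl
# Sketch only: addon_version must be one supported for your cluster version.
resource "aws_eks_addon" "kube_proxy" {
  cluster_name  = "DevOpsLabs2b-dev-test--eks"
  addon_name    = "kube-proxy"
  addon_version = "v1.21.2-eksbuild.2" # example; list supported versions first
}
```

You can list the versions supported for your cluster with `aws eks describe-addon-versions --addon-name kube-proxy --kubernetes-version 1.21`.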
QUESTION
I created a self-signed certificate in Azure Key Vault, with a DNS name, as below.
I added the certificate to Azure Kubernetes Service as a secret using the secret-store-csi-driver and added it to the ingress.
The problem is that when opening the DNS name in the browser, it shows that the certificate is not valid, as below.
The certificate has already been added to the trusted store and shows as below.
Also, the certificate in the browser is the one from the Azure Key Vault certificate, as evident from the validity date.
What could be the issue?
...ANSWER
Answered 2022-Jan-19 at 11:24
When you self-sign a certificate, your operating system or browser won't trust it, as it is self-signed and considered insecure for the Internet.
You need to use a certificate from a valid certification authority, or import the root certificate of the CA that created the certificate into your OS or browser. But every user needs to do this.
A better approach, if you are using AKS, is cert-manager. cert-manager can issue certificates from Let's Encrypt. Here is a workflow from Microsoft for this.
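As a hedged sketch of that cert-manager approach (the issuer name, email, and ingress class below are placeholders, not from the question), a Let's Encrypt ClusterIssuer looks roughly like:

```yaml
# Placeholder values throughout; adjust to your cluster.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx              # placeholder ingress class
```

An ingress annotated with `cert-manager.io/cluster-issuer: letsencrypt` then receives a browser-trusted certificate instead of the self-signed one.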
QUESTION
I'm using juicefs-csi in GKE, with Postgres as the meta-store and GCS as storage. The corresponding settings are as follows:
...ANSWER
Answered 2021-Dec-15 at 13:53
Ok, I misunderstood you at the beginning.
When you are creating a GKE cluster, you can specify which GCP Service Account will be used by this cluster, like below:
By default it's the Compute Engine default service account (71025XXXXXX-compute@developer.gserviceaccount.com), which lacks a few Cloud product permissions (for Cloud Storage, for example, it has Read Only). It's even described in this message.
If you want to check which Service Account was set by default on a VM, you can do this via Compute Engine > VM Instances > choose one of the VMs from this cluster > in the details, find API and identity management.
So you have 3 options to solve this issue:
1. During cluster creation
In Node Pools > Security, you have Access scopes, where you can add some additional permissions:
- Allow full access to all Cloud APIs, to allow access for all listed Cloud APIs
- Set access for each API
In your case you could just use Set access for each API and change Storage to Full.
2. Set permissions with a Service Account
You would need to create a new Service Account and provide the proper permissions for Compute Engine and Storage. More details about how to create an SA can be found in Creating and managing service accounts.
3. Use Workload Identity
Enable Workload Identity on your Google Kubernetes Engine (GKE) clusters. Workload Identity allows workloads in your GKE clusters to impersonate Identity and Access Management (IAM) service accounts to access Google Cloud services. For more details, see Using Workload Identity.
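A hedged sketch of option 3 follows; every project, cluster, service-account, and namespace name below is a placeholder, not taken from the question, and the Kubernetes service account used by the CSI pods in particular is an assumption:

```shell
# Placeholders throughout: my-project, my-cluster, juicefs-sa.
# 1. Enable Workload Identity on the cluster
gcloud container clusters update my-cluster \
  --region=europe-west1 \
  --workload-pool=my-project.svc.id.goog

# 2. Create a GCP service account with access to the GCS bucket
gcloud iam service-accounts create juicefs-sa
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:juicefs-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"

# 3. Let the Kubernetes service account used by the CSI pods impersonate it
gcloud iam service-accounts add-iam-policy-binding \
  juicefs-sa@my-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:my-project.svc.id.goog[kube-system/juicefs-csi-controller-sa]"

kubectl annotate serviceaccount juicefs-csi-controller-sa -n kube-system \
  iam.gke.io/gcp-service-account=juicefs-sa@my-project.iam.gserviceaccount.com
```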
Useful links
- Configuring Velero - Velero is software for backup and restore; however, steps 2 and 3 are described there. You would just need to adjust the commands/permissions to your scenario.
- Authenticating to Google Cloud with service accounts
QUESTION
I've been trying to create an EKS cluster with the vpc-cni addon, due to the pod restriction for m5.xlarge VMs (57 pods). After creation I can see it is passed to the launch template object, but a node describe still reports the previous (wrong?) allocatable number.
ClusterConfig:
...ANSWER
Answered 2021-Dec-03 at 04:47
For a managedNodeGroup you need to specify the AMI ID:

```
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id --region us-east-1 --query "Parameter.Value" --output text
```
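As a hedged sketch of wiring that AMI ID into an eksctl ClusterConfig (the cluster name, region, and AMI ID are placeholders; eksctl also expects overrideBootstrapCommand when a custom ami is set on a managed nodegroup):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster             # placeholder
  region: us-east-1
managedNodeGroups:
  - name: mng-1
    instanceType: m5.xlarge
    ami: ami-0123456789abcdef0 # placeholder; use the ID returned above
    overrideBootstrapCommand: |
      #!/bin/bash
      /etc/eks/bootstrap.sh my-cluster
```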
QUESTION
I deployed an EFS in AWS and a test pod on EKS from this document: Amazon EFS CSI driver.
The EFS CSI controller pods in the kube-system namespace:
ANSWER
Answered 2021-Nov-04 at 09:10
Posted a community wiki answer for better visibility. Feel free to expand it.
Based on @Miantian comment:
The reason was the efs driver image is using the different region from mine. I changed to the right one and it works.
You can find steps to setup the Amazon EFS CSI driver in the proper region in this documentation.
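To check which image (and therefore which region's registry) the driver is actually pulling from, a hedged one-liner can help; the deployment and namespace names assume the standard install and may differ in your cluster:

```shell
kubectl -n kube-system get deployment efs-csi-controller \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
```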
QUESTION
I'm using bitnami/etcd chart and it has ability to create snapshots via EFS mounted pvc.
However, I get a permission error after the aws-efs-csi-driver is provisioned and the PVC is mounted into any non-root pod (the uid/gid is 1001).
I'm using helm chart https://kubernetes-sigs.github.io/aws-efs-csi-driver/ version 2.2.0
values of the chart:
...ANSWER
Answered 2021-Oct-15 at 23:57
By default the StorageClass field provisioningMode is unset; please set it to provisioningMode: "efs-ap" to enable dynamic provisioning with an access point.
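A hedged sketch of such a StorageClass (the fileSystemId is a placeholder, and the gidRange values are one way to make the mounted directory writable by a non-root gid such as 1001):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0 # placeholder
  directoryPerms: "700"
  gidRangeStart: "1001"              # so files are owned by the pod's gid
  gidRangeEnd: "2000"
  basePath: "/dynamic_provisioning"
```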
QUESTION
A couple of weeks ago I published a similar question regarding a Kubernetes deployment that uses Key Vault (with the user-assigned managed identity method). The issue was resolved, but when trying to implement everything from scratch, something doesn't make sense to me.
Basically, I am getting this error regarding mounting the volume:
...ANSWER
Answered 2021-Sep-25 at 00:29
After doing some tests, it seems that the process I was following was correct. Most probably, I was using principalId instead of clientId in the role assignment for the AKS managed identity.
Key points for someone else who is facing similar issues:
Check what the managed identity created automatically by AKS is. Check for the clientId; e.g.,
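A hedged sketch of looking up that clientId and using it when granting Key Vault access (the resource group, cluster, and vault names are placeholders):

```shell
# Look up the clientId of the AKS kubelet managed identity
CLIENT_ID=$(az aks show -g my-rg -n my-aks \
  --query identityProfile.kubeletidentity.clientId -o tsv)

# Use the clientId (not the principalId) in the access policy / role assignment
az keyvault set-policy -n my-keyvault \
  --secret-permissions get \
  --spn "$CLIENT_ID"
```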
QUESTION
I'm setting up Keyvault integration with k8s in Azure. I can mount a volume with secrets using the csi driver in Azure using Managed identities. I can verify the secret is mounted by exec-ing into the pod and cat-ing out the secrets. However, now I want to expose the secrets as environment variables, but I'm unclear how to do that. Below is the following SecretProviderClass
and Pod
I have deployed.
spc-keyvault.yaml:
...ANSWER
Answered 2021-Aug-10 at 03:25
I was able to solve this issue by updating the entrypoint.sh to export the secrets as environment variables. Something like this:
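The asker's actual script was elided; as a hedged sketch of the idea (the mount path /mnt/secrets-store and the helper name are illustrative assumptions, not from the answer):

```shell
# Sketch only: export every file mounted by the secrets-store CSI driver
# as an environment variable. The default mount path is an assumption.
export_mounted_secrets() {
  dir="${1:-/mnt/secrets-store}"
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    # Map the file name to an env var name, e.g. db-password -> DB_PASSWORD
    name=$(basename "$f" | tr 'a-z-' 'A-Z_')
    export "$name=$(cat "$f")"
  done
}
```

Calling export_mounted_secrets at the top of entrypoint.sh, before the final exec "$@", lets the application process inherit the variables.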
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install csi-driver
Create an API token in the Hetzner Cloud Console.
Create a secret containing the token:

```yaml
# secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: hcloud-csi
  namespace: kube-system
stringData:
  token: YOURTOKEN
```

and apply it: kubectl apply -f secret.yml
Deploy the CSI driver and wait until everything is up and running. Have a look at our Version Matrix to pick the correct deployment file:

```shell
kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.5.1/deploy/kubernetes/hcloud-csi.yml
```
To verify everything is working, create a persistent volume claim and a pod which uses that volume:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hcloud-volumes
---
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: my-csi-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-pvc
```

Once the pod is ready, exec a shell and check that your volume is mounted at /data:

```shell
kubectl exec -it my-csi-app -- /bin/sh
```