secret-manager | External secret management for Kubernetes | Identity Management library
kandi X-RAY | secret-manager Summary
Secret Manager is a Kubernetes add-on to automate the creation and renewal of secrets from various external secret sources. Secret Manager can also reformat the sourced secrets to fit the configuration expected by the workloads using the created secrets. Based on the work from godaddy/kubernetes-external-secrets and with borrowed wisdom from jetstack/cert-manager.
Top functions reviewed by kandi - BETA
- NewController returns a new controller.
- WaitForSMPod blocks until an SMPod is ready.
- NewDefaultFramework creates a new default framework.
- NewControllerCmd returns a cobra command for the controller manager.
- Logs returns the logs for a pod.
- getStoreBackend returns the store backend for the given secret storeSpec.
- SetConditions appends new conditions to Status.
- CreateNamespace creates a namespace.
- GetStore returns the client for the given store.
- CreateAWSSecretsManagerSecret creates an AWS Secrets Manager secret.
secret-manager Key Features
secret-manager Examples and Code Snippets
Community Discussions
Trending Discussions on secret-manager
QUESTION
I am trying to migrate from Google Cloud Composer composer-1.16.4-airflow-1.10.15 to composer-2.0.1-airflow-2.1.4, but I am running into difficulties with the libraries: each time I upload them, the scheduler fails to work.
Here is my requirements.txt:
...ANSWER
Answered 2022-Mar-27 at 07:04 We found out what was happening. The root cause was the performance of the workers. To work properly, Composer expects the scanning of the DAGs to take less than 15% of the workers' CPU resources; if it exceeds this limit, it fails to schedule or update the DAGs. We simply switched to bigger workers and it has worked well since.
QUESTION
I'm working with the Python library google-cloud-secret-manager and I'm having problems creating a secret within a defined region.
The secretmanager.create_secret method seems to have a metadata parameter that can be set, but I keep receiving errors trying something like:
...ANSWER
Answered 2022-Mar-03 at 06:35 If you want to specify the replica placement manually, you need to specify it like in the example below:
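A minimal sketch of what that could look like with the google-cloud-secret-manager client; the project ID, secret ID, and location are hypothetical:

from google.cloud import secretmanager

# A sketch, not the exact snippet from the thread: create a secret whose
# replicas are pinned to a specific region via user-managed replication.
client = secretmanager.SecretManagerServiceClient()
response = client.create_secret(
    request={
        "parent": "projects/my-project",
        "secret_id": "my-regional-secret",
        "secret": {
            "replication": {
                "user_managed": {"replicas": [{"location": "europe-west1"}]}
            }
        },
    }
)
print("Created secret:", response.name)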
QUESTION
I am trying to connect from a Google Cloud Function with the Python runtime to an external MySQL server db that is not hosted by Google Cloud.
My requirements.txt:
...ANSWER
Answered 2022-Jan-14 at 22:55 If the database is on a VM in your VPC, you can create a Serverless VPC connector and attach it to your Cloud Function to access it.
If it's deployed elsewhere:
- Either the database has a public IP, and Cloud Functions can access it directly.
- Or the database has a private IP, and you need to create a VPN between your VPC and the private foreign network hosting your database, and again add a Serverless VPC connector to the Cloud Function so it can use your VPC and the VPN to reach the database. A minimal connection sketch follows this list.
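As a rough illustration, a minimal Python Cloud Function connecting to an external MySQL host; all connection details are hypothetical and supplied via environment variables, and PyMySQL is an assumed driver choice:

import os
import pymysql  # assumed driver choice; any MySQL client works similarly

def handler(request):
    # Reaches the database either directly (public IP) or through the
    # Serverless VPC connector (private IP); all names are hypothetical.
    connection = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        row = cursor.fetchone()
    connection.close()
    return str(row)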
QUESTION
I am using Google Cloud Secrets in a NodeJS project. I am moving away from preset environment variables and trying to find the best practice for storing and reusing secrets.
The 3 main routes I've found to use secrets are:
- Fetching all secrets on startup and set them as ENV variables for later use
- Fetching all secrets on startup and set as constant variables
- Each time a secret is required, fetch it from Cloud Secrets
Google's own best practice documentation mentions 2 conflicting things:
- Use ENV variables to set secrets at startup (source)
- Don't use ENV variables as they can be accessed in debug endpoints and traversal attacks among other things (source)
My questions are:
- Should I store secrets as variables to be re-used or should I fetch them each time?
- Does this have an impact on quotas?
ANSWER
Answered 2022-Jan-04 at 15:26 The best practice is to load the secret once (at startup, or the first time it is accessed) to optimize performance and avoid repeated API call latency. And yes, the secret-access quota is consumed on each access.
If a debugger tool is connected to the environment, both constant variables and env var data can be compromised; the threat is roughly the same. Be sure to secure the environment correctly.
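A minimal sketch of the load-once pattern the answer recommends, shown here in Python with the google-cloud-secret-manager client (the resource name is hypothetical):

from functools import lru_cache
from google.cloud import secretmanager

@lru_cache(maxsize=None)
def get_secret(name: str) -> str:
    # First call hits the API; later calls are served from the cache,
    # so the access-secret quota is consumed only once per process.
    client = secretmanager.SecretManagerServiceClient()
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")

db_password = get_secret("projects/my-project/secrets/db-password/versions/latest")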
QUESTION
I stored my MySQL DB credentials in AWS Secrets Manager using the "Credentials for other database" option. I want to import these credentials into my application.properties file. Based on a few answers I found in this thread: https://stackoverflow.com/questions/56194579/how-to-integrate-aws-secret-manager-with-spring-boot-application, I did the following:
- Added the dependency spring-cloud-starter-aws-secrets-manager-config
- Added spring.application.name = and spring.config.import = aws-secretsmanager: in application.properties
- Used secret keys as placeholders in the following properties:
...
ANSWER
Answered 2021-Dec-16 at 12:48 You are trying to use spring.config.import, and support for it was introduced in Spring Cloud AWS 2.3: https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available (see the "Secrets Manager" section).
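For illustration, the resulting application.properties might look roughly like this; the secret path and key names are hypothetical, only the property names come from the thread:

spring.application.name = my-app
spring.config.import = aws-secretsmanager:/secret/my-app
# keys stored in the secret can then be used as placeholders:
spring.datasource.username = ${username}
spring.datasource.password = ${password}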
QUESTION
When running the command python manage.py makemigrations locally on my laptop, I get the following error on my console:
ANSWER
Answered 2021-Oct-23 at 10:35 This is apparently caused by two things:
In settings.py, the secret content is loaded into environment variables with env.read_env(io.StringIO(payload)), as mentioned in the question. That read_env() function apparently does the following:
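For context, the loading pattern referenced here looks roughly like this in settings.py (a sketch; the secret resource name is hypothetical):

import io
import environ
from google.cloud import secretmanager

env = environ.Env()
client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/django_settings/versions/latest"  # hypothetical
payload = client.access_secret_version(request={"name": name}).payload.data.decode("UTF-8")
env.read_env(io.StringIO(payload))  # parses KEY=VALUE lines from the payload into the environment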
QUESTION
I am unable to retrieve an environment variable that is accessed in code in my Bitbucket-deployed application.
When my application starts, I want to fetch db uri, like this:
const uri = process.env.MONGODB_CONNECTION_URI;
Whenever I build and push the artifact from my local machine, the environment variables are successfully passed from the .env files stored locally. Obviously, I do not want to commit these files.
When I use Bitbucket Pipelines to deploy my application to GCP, I am able to successfully push a new artifact, but on application startup it is unable to retrieve my db-uri.
This article is pretty close to describing what I want to achieve, but I don't see how this addresses the fact that the property value is an actual secret that I cannot commit to my repo, and need to access at application startup from somewhere.
This question describes how to access variables from secret manager in the Cloud Pipeline, not in the application itself.
I use the predefined google-app-engine-deploy-pipe. Relevant parts of my bitbucket-pipelines.yml look like this:
ANSWER
Answered 2021-Sep-27 at 09:55 I would suggest you refer to this documentation link in order to create and access a secret in Secret Manager.
This documentation link provides resources for using Secret Manager with various Google Cloud services.
For instance, you can access Secret Manager secrets and expose them as environment variables or via the filesystem from Cloud Functions; see Using Secret Manager secrets with Cloud Functions for detailed information.
Note that adding a secret version requires the Secret Manager Admin role (roles/secretmanager.admin) on the secret, project, folder, or organization; roles can't be granted on a secret version.
Refer to this discussion on a similar question.
QUESTION
Currently migrating my application to Micronaut 3, I encountered a problem with micronaut-gcp, specifically with the Google Secret Manager. I am using Gradle with the Kotlin DSL.
Actual configuration (not working, using plugin io.micronaut.library version 2.0.4):
- Gradle 7.2
- Micronaut 3.0.1
Previous configuration (working, with no plugin, using micronaut-bom):
- Gradle 6.5.1
- Micronaut 2.4.0
- micronautGcp 3.5.0
I/ The Problem
My goal is to load some secrets as key/value pairs into a property source.
I followed the documentation, which says to use a bootstrap.yml as follows:
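A minimal bootstrap.yml along the lines of the Micronaut GCP documentation might look like this; treat it as a sketch, not the exact file from the question, and the application name is hypothetical:

micronaut:
  application:
    name: my-app
  config-client:
    enabled: true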
ANSWER
Answered 2021-Sep-30 at 16:59 I've been down that rabbit hole. Long story short, I got past this by upgrading the google-cloud-secretmanager dependency from 1.6.4 to e.g. 2.0.2. Like so:
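Presumably something like this in the Gradle Kotlin DSL dependencies block; the artifact name and versions come from the answer, while the group ID is an assumption:

implementation("com.google.cloud:google-cloud-secretmanager:2.0.2")  // group ID assumed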
QUESTION
I recently had to bump a Google Cloud library due to a conflict that was generating a bug. Long story short, I had
...ANSWER
Answered 2021-Sep-28 at 16:38 You can achieve this with a constraints file. Just put all your constraints into that file:
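A hedged sketch of the idea; the package names and version pins below are hypothetical, only the mechanism comes from the answer:

# constraints.txt
google-cloud-secret-manager==2.8.0
google-api-core==2.3.2

pip then resolves the requirements against those pins:

pip install -r requirements.txt -c constraints.txt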
QUESTION
I am trying to execute an Apache Beam pipeline as a Dataflow job on Google Cloud Platform.
My project structure is as follows:
...ANSWER
Answered 2021-Sep-23 at 07:23 Posting as community wiki. As confirmed by @GopinathS, the error and fix are as follows:
The error encountered by the workers is: Beam SDK base version 2.32.0 does not match Dataflow Python worker version 2.28.0. Please check Dataflow worker startup logs and make sure that correct version of Beam SDK is installed.
To fix this, "apache-beam[gcp]>=2.20.0" is removed from install_requires in setup.py, since the '>=' pulls in the latest available version (2.32.0 as of this writing) while the workers are only on 2.28.0.
Updated setup.py:
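A minimal sketch of a setup.py along those lines; the package name and the remaining dependencies are hypothetical:

import setuptools

setuptools.setup(
    name="dataflow-pipeline",  # hypothetical
    version="0.0.1",
    packages=setuptools.find_packages(),
    install_requires=[
        # "apache-beam[gcp]>=2.20.0" removed: let the Dataflow worker's
        # pre-installed Beam SDK version win instead of pulling the latest.
        "google-cloud-secret-manager",
    ],
)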
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install secret-manager
Support