aws-secrets | Manage secrets on AWS instances | AWS library
kandi X-RAY | aws-secrets Summary
This repository contains a handful of scripts. They can be used to set up and maintain a file of environment variables which can then be read by an application running on an Amazon EC2 instance, including applications running in a Docker container on an EC2 instance.
Trending Discussions on aws-secrets
QUESTION
The Problem:
I am using Docker Compose to create two containers: one with a Postgres database and the other with Flyway. The goal is to use Flyway to run migration scripts against the Postgres instance. When I run docker-compose up I get the following error:
Unable to obtain connection from database (jdbc:postgresql://db:5432/) for user 'luke_skywalker': The connection attempt failed.
My code is below and thank you for your help!
Here is my docker-compose.yml:
...ANSWER
Answered 2021-May-27 at 07:51
As the exception message says:
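A frequent cause of this error in Compose setups is Flyway attempting the JDBC connection before Postgres is ready to accept connections. A minimal sketch of a compose file that gates Flyway on a database healthcheck (image tags, database name, and password are hypothetical placeholders; the username comes from the question):

```yaml
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: luke_skywalker
      POSTGRES_PASSWORD: example      # hypothetical; use a secret in practice
      POSTGRES_DB: appdb              # hypothetical database name
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U luke_skywalker"]
      interval: 5s
      timeout: 5s
      retries: 10
  flyway:
    image: flyway/flyway
    command: -url=jdbc:postgresql://db:5432/appdb -user=luke_skywalker -password=example migrate
    depends_on:
      db:
        condition: service_healthy   # wait until pg_isready succeeds
```

With the healthcheck in place, Compose delays starting the Flyway container until Postgres reports it is accepting connections, instead of Flyway failing on the first attempt.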
QUESTION
I am pretty new at the AWS SDK world, and my first project is to collect information from secrets using a Spring Application.
I have been using this document https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-credentials-using-aws-secrets-manager.html. The code is all good, but something I cannot wrap my head around is the "endpoint": where do I find this information in the AWS web console? Is it something that companies can customize?
This would be the first cooperative project... Thanks in advance for the help.
...ANSWER
Answered 2021-Apr-30 at 18:04
Here's the list of public endpoints for AWS Secrets Manager. You would pick the one for the AWS region you are using. If you aren't using a VPC endpoint, you can probably just leave it blank or null; the AWS SDK should pick the endpoint automatically based on the region.
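For the standard aws partition, the public endpoints in that list follow a predictable pattern. A minimal sketch (the region is illustrative; regions in other partitions, e.g. China, use a different domain suffix):

```python
def secretsmanager_endpoint(region: str) -> str:
    """Build the public AWS Secrets Manager endpoint URL for a region
    in the standard aws partition (a sketch; other partitions differ)."""
    return f"https://secretsmanager.{region}.amazonaws.com"

print(secretsmanager_endpoint("eu-west-1"))
# https://secretsmanager.eu-west-1.amazonaws.com
```

This is why the endpoint is usually left unset: given only a region, the SDK derives the same URL itself.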
QUESTION
I have defined a CDK app stack using TypeScript (sensitive information randomized in the code below):
...ANSWER
Answered 2021-Mar-25 at 12:22
There are two issues here:
- secrets is of type index signature, so you should name your secret (this is the environment variable that will be exposed in your container)
- an ecs.Secret is expected (you can create it from an sm.Secret)
Here is a working version:
QUESTION
I have been trying to find a way to use ASP .NET Core 2.1 and retrieve secrets from Secret Manager in AWS.
I found a great blog post and it appears to compile/run without errors but I cannot for the life of me figure out how to access the secrets.
Any help would be appreciated!
https://andrewlock.net/secure-secrets-storage-for-asp-net-core-with-aws-secrets-manager-part-1/
My code:
...ANSWER
Answered 2021-Mar-05 at 19:30
OK - so your question is how to READ a secret. Let's try different tutorials:
Example 1: use SecretsManager (much like your original tutorial is doing):
https://nimblegecko.com/how-to-use-aws-secret-manager-secrets-in-dotnet-core-application/
QUESTION
I am trying to deploy the Spark History Server on EKS following these instructions: https://github.com/helm/charts/tree/master/stable/spark-history-server. I want my Spark jobs to write to an S3 bucket and the history server to read from that bucket. Both need to authenticate using an access key and secret. Writing the logs into the bucket from my application works fine. However, I have trouble configuring the Spark History Server to read from the bucket. I created a k8s secret as described, with my access key and secret. Additionally, I created the following config file:
...ANSWER
Answered 2021-Jan-19 at 15:28
It's the general "S3 doesn't like your signature" message.
See troubleshooting s3a for the normative documentation on debugging the S3A connector.
(Moderators: I'm linking to the ASF docs rather than copying the text, as (a) a copy would only become out of date compared to the normative docs and (b) people need to learn to read the documentation.)
QUESTION
I'm developing a new Spring Boot application that will interact with a Postgres database on AWS. The serverless DB is hosted in a different AWS account and its secrets are stored in Secrets Manager.
How can I effectively fetch the DB credentials from a cross-account secret manager?
In a POC, I did this by constructing a Secrets Manager client using STSAssumeRoleSessionCredentials, like this:
ANSWER
Answered 2020-Oct-04 at 22:44
You are right, it can be simplified further on the code side.
Let's say accountA has the secrets and accountB is your app account. The current implementation does the following:
- A client is created inside accountB using accountA credentials (AssumeRole is followed, which is a best practice)
- Secrets are fetched and then used.
What could be done:
- Use a resource-based policy in accountA that lets the IAM User and/or IAM Role in accountB access the secrets placed in accountA.
- Update the KMS key policy in accountA for the key that is used to encrypt/decrypt secrets. Let the same IAM User and/or Role have access to that KMS key. So that they can use it.
- Update the IAM Policy for the IAM User and/or Role in accountB, explicitly allowing it to use the secrets and KMS keys of accountA.
Now you are able to access the secrets using the same IAM User/Role that the app uses, and theoretically spring-cloud-starter-aws-secrets-manager-config should fetch the secrets from accountA as well (I have not tested it myself).
At the very least, you avoid creating an assumed-role client for a different account. More details are on the AWS Blog.
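As an illustration of the first bullet, a resource-based policy attached to the secret in accountA might look roughly like this (the account ID, role name, and exact action list are hypothetical placeholders; adjust to your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccountBAppRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:role/accountB-app-role"
      },
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "*"
    }
  ]
}
```

Remember the second and third bullets still apply: the KMS key policy in accountA and the IAM policy on the role in accountB must both also allow the access, or decryption will fail.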
QUESTION
I have an existing Spring Boot application that was running with no issue. I then created a Java library: a standalone repository with only static Java code and no main class. My library is deployed as a GitHub Maven package.
I then proceeded with setting up my GitHub packages repository in my local Maven settings and added the dependency to my original SpringBoot application. The import process is successful, my library's Jar is in the classpath and compilation and build are successful.
When I run the application, I get the following stack trace:
...ANSWER
Answered 2020-Sep-19 at 18:51
You're using different versions of spring-boot-starter-parent (2.3.1.RELEASE and 2.3.4.RELEASE), which probably leads to inconsistent dependency versions where one or the other doesn't have the method. Try using 2.3.4.RELEASE in your application.
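Aligning on the newer version is a one-line change to the parent declaration in the application's pom.xml (a sketch; your existing parent block will look similar):

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.3.4.RELEASE</version>
    <relativePath/> <!-- look up the parent from the repository -->
</parent>
```

After changing the version, rebuild so both the library and the application resolve the same Spring versions.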
[Update]
You're still getting inconsistent versions of org.springframework:* on the classpath:
QUESTION
I'm reading the CDK docs about SecretsManager and I'm not sure if I've misunderstood, but what I thought would work from their example doesn't seem to grant the permission I expected. Essentially I have a stack that contains some Lambdas, and I'd like all of them to be able to read two secrets from Secrets Manager.
...ANSWER
Answered 2020-Jun-07 at 17:41
Depending on your actual context, there are two possible variants.
1. Import existing role
If the Lambda function has been predefined (e.g. in a different stack), you can add the additional permissions to the existing Lambda execution role by importing it into this CDK stack first.
QUESTION
I ran a Job in Kubernetes overnight. When I check it in the morning, it had failed. Normally, I'd check the pod logs or the events to determine why. However, the pod was deleted and there are no events.
...ANSWER
Answered 2019-Aug-03 at 23:37
The TTL would clean up the Job itself and all its child objects. ttlSecondsAfterFinished is unset, so the Job hasn't been cleaned up.
From the Job documentation:
Note: If your job has restartPolicy = "OnFailure", keep in mind that your container running the Job will be terminated once the job backoff limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting restartPolicy = "Never" when debugging the Job, or using a logging system to ensure output from failed Jobs is not lost inadvertently.
The Job spec you posted doesn't have a backoffLimit, so it should try to run the underlying task the default 6 times.
If the container process exits with a non-zero status then the Job will fail, and it can be entirely silent in the logs.
The spec doesn't define activeDeadlineSeconds, so I'm not sure what type of timeout you end up with. I assume this would be a hard failure in the container, so a timeout doesn't come into play.
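Putting these suggestions together, a debugging-friendly Job spec might look like this sketch (the name, image, command, and timeout values are hypothetical placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: overnight-job            # hypothetical name
spec:
  backoffLimit: 2                # fail fast instead of the default 6 retries
  activeDeadlineSeconds: 28800   # hard 8-hour cap on the whole Job
  ttlSecondsAfterFinished: 86400 # keep the finished Job around for a day
  template:
    spec:
      restartPolicy: Never       # keep failed pods around for inspection
      containers:
        - name: task
          image: busybox         # placeholder image
          command: ["sh", "-c", "echo running overnight task"]
```

With restartPolicy: Never, each failed attempt leaves its pod (and logs) behind, and the TTL keeps the Job's children from being cleaned up before you can inspect them.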
QUESTION
I'm trying to submit a Spark job on Kubernetes and write logs to S3. I'm using EKS and Spark client mode.
I can write my Spark logs to a local directory, e.g., the below works:
...ANSWER
Answered 2020-Feb-29 at 21:01
You need to pass the configs with --conf:
You do: --spark.kubernetes.driver.secretKeyRef.AWS_ACCESS_KEY_ID=aws-secrets:key
You need: --conf spark.kubernetes.driver.secretKeyRef.AWS_ACCESS_KEY_ID=aws-secrets:key
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported