aws-profile | Wrapper script to generate & pass AWS AssumeRole keys | Continuous Backup library
kandi X-RAY | aws-profile Summary
Wrapper script to generate & pass AWS AssumeRole keys to other scripts
Community Discussions
Trending Discussions on aws-profile
QUESTION
Background: I have a gateway account (with no permissions) in which users are created; to access AWS resources we assume roles that have admin access.
config file
...ANSWER
Answered 2021-Mar-10 at 21:01
This is a known issue with Serverless: it only checks ~/.aws/credentials for the profile, not ~/.aws/config.
There are multiple Serverless forum posts about this, e.g. this one.
Change your ~/.aws/credentials file to this and it should work:
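The question's actual profiles are not shown; what follows is a hedged sketch of the shape that works, with all profile names, keys, and the role ARN as placeholders. The role_arn/source_profile pair normally kept in ~/.aws/config moves into ~/.aws/credentials so Serverless can find it:

```ini
; ~/.aws/credentials -- illustrative names and values only
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

[admin]
; Serverless resolves role_arn/source_profile only from this file,
; not from ~/.aws/config.
role_arn = arn:aws:iam::123456789012:role/AdminAccess
source_profile = default
```

With this in place, sls deploy --aws-profile admin should assume the role the same way the AWS CLI does.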
QUESTION
I'm trying to create an AWS client for IoT following this article: How can I publish to a MQTT topic in a Amazon AWS Lambda function?
...ANSWER
Answered 2021-Jan-09 at 04:05
IoTDataPlane does not have a resource interface; you can only use a client with IoTDataPlane:
QUESTION
- Create an aws_secretsmanager_secret
- Create an aws_secretsmanager_secret_version
- Store a uniquely generated string as that version
- Use a local-exec provisioner to store the actual secured string using bash
- Reference that string via the Secrets Manager data source in, for example, an RDS instance deployment
- Keep all plain-text strings out of the remote state residing in an S3 bucket
- Use AWS Secrets Manager to store these strings
- Set once, retrieve by referencing the resource in Terraform
Error: Secrets Manager Secret "arn:aws:secretsmanager:us-east-1:82374283744:secret:Example-rds-secret-fff42b69-30c1-df50-8e5c-f512464a4a11-pJvC5U" Version "AWSCURRENT" not found
when running terraform apply
Why isn't it moving the AWSCURRENT version automatically? Am I missing something? Is my bash command wrong? The value is not written to the secret version, though it is referenced correctly.
See the main.tf code, which actually performs the command:
...ANSWER
Answered 2020-Sep-23 at 11:11
The error likely isn't occurring in your provisioner execution per se, because if you remove the provisioner block the error still occurs on apply, though confusingly only the first time after a destroy.
Removing the data "aws_secretsmanager_secret_version" "rds-secret" block as well "resolves" the error completely.
I'm guessing there is some sort of configuration delay issue here, but adding a 20-second delay provisioner to the aws_secretsmanager_secret.rds-secret resource block didn't help. And the value from the data block can be successfully output on subsequent apply runs, so maybe it's not just timing.
Even if you resolve the above more basic issue, it's likely your provisioner will still be confusing things by modifying a resource that Terraform is trying to manage in the same run. I'm not sure there's a way to get around that except perhaps by splitting into two separate operations.
Update:
It turns out that on the first run the data sources are read before the aws_secretsmanager_secret_version resource is created. Just adding depends_on = [aws_secretsmanager_secret_version.rds-secret-version] to the data "aws_secretsmanager_secret_version" block resolves this fully and makes the interpolation for your provisioner work as well. I haven't tested the actual provisioner.
Also, you may need to consider this note (which, as I read it, does not always apply to 0.13):
NOTE: In Terraform 0.12 and earlier, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using depends_on with data resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses depends_on with a data resource can never converge. Due to this behavior, we do not recommend using depends_on with data resources.
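A sketch of the fix, assuming the resource and data-source names from the answer (the surrounding configuration is not shown and is assumed here):

```hcl
data "aws_secretsmanager_secret_version" "rds-secret" {
  secret_id = aws_secretsmanager_secret.rds-secret.id

  # On the first apply the data source is otherwise read before the
  # version resource exists, producing the "AWSCURRENT not found" error.
  depends_on = [aws_secretsmanager_secret_version.rds-secret-version]
}
```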
QUESTION
How do I remove/deploy deployment without .serverless directory for team collaboration?
For example, if I run sls deploy --aws-profile profile1 with a .yml file, it creates this .serverless directory, which I am not including in my git push in order to hide secrets. Now when someone else on my team clones this repo, how can they manage the same deployment? Are the .yml file and the same AWS profile sufficient?
ANSWER
Answered 2020-Mar-24 at 18:01
The .serverless folder is created by Serverless to hold the generated CloudFormation files. You should not manage them manually, and the folder and its contents should not be included in source control.
The serverless.yml is the source of truth for the deployment, so it should produce the same result when run with the same environment.
The AWS account/profile can be set using the AWS CLI. Provided all the devs use the same account, or accounts with the same level of permissions, each of you should be able to run deploy/remove.
If your project uses a .env file or environment variables, each member of the team has to include them in their environment.
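To keep the .serverless directory (and any secrets baked into the generated CloudFormation templates) out of source control, a minimal .gitignore entry could look like:

```gitignore
# Serverless build artifacts; regenerated on every deploy
.serverless/
```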
QUESTION
We have a team of 3 to 4 members, and we want to run serverless deploys or update functions and resources using our own personal AWS credentials, without creating a new stack, just updating the existing resources. Is there a way to do that? I am aware that we can set up --aws-profile and different profiles for different stages. I am also aware that we could just divide the resources into microservices and each deploy or update our own parts. Any help is appreciated.
...ANSWER
Answered 2019-Nov-27 at 17:12It sounds like you already know what to do but need a sanity check. So I'll tell you how I, and everyone else I know, handles this.
We prefix commands with the AWS_PROFILE env var and we use --stage names.
E.g. AWS_PROFILE=mycompany sls deploy --stage shailendra
Google aws configure for examples of how to set up the awscli to use the AWS_PROFILE var.
We also name the --stage with a unique ID, e.g. your name. This way, you and your colleagues all have individual CloudFormation stacks that work independently of each other, and there will be no conflicts.
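As a sketch of that workflow (the profile and stage names here are illustrative):

```shell
# Each developer deploys an independent CloudFormation stack by
# combining a shared AWS profile with a unique --stage name.
AWS_PROFILE=mycompany sls deploy --stage alice
AWS_PROFILE=mycompany sls deploy --stage bob

# Tear down a personal stack when done experimenting:
AWS_PROFILE=mycompany sls remove --stage alice
```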
QUESTION
Instead of running sls deploy --aws-profile, is there a way to configure the serverless.yml file to just contain this information?
ANSWER
Answered 2019-Aug-20 at 19:10
You can do it in the provider section:
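For example (the service name, runtime, stage, and region values below are illustrative; only the profile key replaces the CLI flag):

```yaml
service: my-service

provider:
  name: aws
  runtime: python3.9
  profile: my_id       # used instead of passing --aws-profile on the CLI
  stage: dev
  region: eu-west-1
```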
QUESTION
I have a Python Serverless project that uses a private Git (on Github) repo.
The requirements.txt file looks like this:
...ANSWER
Answered 2018-May-28 at 04:45
Although not recommended, have you tried using sudo sls deploy --aws-profile my_id --stage dev --region eu-west-1 ?
This error can also be caused by using the wrong password or SSH key.
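The original requirements.txt is not shown; a typical entry for a private GitHub repository looks like the following (organization, repository, tag, and package names are placeholders):

```text
git+ssh://git@github.com/your-org/your-private-lib.git@v1.0.0#egg=your-private-lib
```

Note that running under sudo switches to root's SSH keys and known hosts, which is one reason sudo can change whether a git+ssh install succeeds.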
QUESTION
I am a beginner in AWS and I want to send sample data to an S3 bucket using Amazon Kinesis from an ASP.NET Core 2.2 Web API application, but I am unable to send the data. Below is what I have tried. Steps I did:
1. Created an AWS account and then created one S3 bucket.
2. Created a Kinesis account and linked the S3 bucket to it.
3. In Main
...ANSWER
Answered 2019-Apr-16 at 11:47
Try passing your AccessKeyId, SecretAccessKey, and Region directly to the constructor as a test (you don't ever want to hard-code these in a real release). Make sure the user associated with these credentials has a policy configured to allow access to Kinesis.
Also use async/await.
QUESTION
Using the Serverless Framework to deploy AWS Lambda functions, Serverless creates (or receives) the specific URL endpoint string. I want to use that string (as a variable) in another section of the serverless.yml specification file.
Is that URL endpoint available as a variable in serverless.yml?
The Serverless Framework documentation on AWS-related variables does not seem to answer that case.
Details: my serverless.yml contains a provider: specification similar to:
ANSWER
Answered 2019-Apr-05 at 20:31I was able to pass the URL and unique ID for the API Gateway endpoint to a Lambda function as environment variables as follows:
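A sketch of that approach, assuming ApiGatewayRestApi is the logical ID Serverless assigns to the generated REST API (the environment variable names are illustrative):

```yaml
provider:
  name: aws
  environment:
    API_GATEWAY_ID:
      Ref: ApiGatewayRestApi
    API_GATEWAY_URL:
      Fn::Join:
        - ""
        - - "https://"
          - Ref: ApiGatewayRestApi
          - ".execute-api.${self:provider.region}.amazonaws.com/${self:provider.stage}"
```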
QUESTION
I'm a bit puzzled and would really appreciate some help. I'm new to serverless and would like to play around with it a bit. I've followed this tutorial to set up a serverless test function.
I've also managed to deploy my function to AWS:
...ANSWER
Answered 2019-Mar-30 at 20:51When deploying the service you used a non-default AWS profile by passing the argument --aws-profile numpy-serverless-agent
. It means that it is deployed to the account specified by this profile.
When trying to inovke the lambda, you didn't pass this argument, thus using the default profile, which is probably specifying a different AWS account.
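The fix is to pass the same profile on both commands (the function name here is illustrative):

```shell
sls deploy --aws-profile numpy-serverless-agent
sls invoke -f hello --aws-profile numpy-serverless-agent
```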
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install aws-profile
You can use aws-profile like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.