s3-example | Simple example using micro for uploading stuff to AWS S3 | Microservice library
kandi X-RAY | s3-example Summary
Simple example using Now 2.0, Zeit's micro and the AWS SDK to upload files to the cloud.
Community Discussions
Trending Discussions on s3-example
QUESTION
I have written an integration for the AWS SDK in PHP to send the keys for files on an S3 bucket and retrieve a pre-signed URL, which more or less follows this example. This worked perfectly, except I now need to check whether the file exists on S3 and return an empty string if it does not.
I am doing this with my code below:
...
ANSWER
Answered 2021-Apr-14 at 18:47
You need the s3:GetObject permission to invoke the HeadObject API, which is what the PHP SDK invokes when your code calls doesObjectExist().
If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
- If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error.
- If you don’t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.
So, you probably have s3:ListBucket but not s3:GetObject.
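The question is PHP, but the same check is easy to see in the JavaScript SDK. Below is a minimal sketch (bucket and key names are placeholders) that calls HeadObject directly and maps the two error cases above:

```typescript
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

// Returns true/false for existence; a 403 is surfaced as a permissions
// problem rather than being treated as "missing".
async function objectExists(bucket: string, key: string): Promise<boolean> {
  try {
    // HeadObject is the API behind doesObjectExist(); it needs s3:GetObject.
    await s3.headObject({ Bucket: bucket, Key: key }).promise();
    return true;
  } catch (err: any) {
    if (err.statusCode === 404) return false; // no such key (you have s3:ListBucket)
    throw err; // a 403 here usually means a missing permission, not a missing object
  }
}
```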
QUESTION
I am a newbie to AWS Lambda. I am trying out the tutorial from https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html. When the user uploads a jpg to an S3 bucket called greetingsproject, the lambda function is triggered.
Error: 9a62ff86-3e24-491d-852e-ded2e2cf5d94
INFO: error while getting object = AccessDenied: Access Denied
I am getting the Access denied error in the following code snippet:
...
ANSWER
Answered 2021-Apr-01 at 03:16
The comment by Marcin about the Lambda execution role put me on the right track. I had followed the steps below previously:
- Created a policy called greetingsProjectPolicy (with the above mentioned permissions)
- Attached this policy to greetingsProjectRole.
- Assigned the greetingsProjectRole to my lambda function.
- I assumed that was it and the policy should be available to my lambda function.
- However, when I assigned the greetingsProjectRole to the function, AWS internally created an execution role called greetingsProject-role-zhcbt61o.
- When I clicked on this role, I was surprised to see that the only policy it had was AWSLambdaBasicExecutionRole; the greetingsProjectPolicy was missing.
- I had to add the greetingsProjectPolicy as an inline policy to greetingsProject-role-zhcbt61o. Now I no longer get the access denied error.
Not sure if this is how AWS works or if I am missing something.
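For anyone who wants to script that missing step, here is a minimal sketch using the AWS SDK for JavaScript. The role and policy names come from the answer above, but the policy body is an assumption: the exact actions inside greetingsProjectPolicy are not shown in the question, so s3:GetObject on the tutorial bucket stands in for them.

```typescript
import * as AWS from 'aws-sdk';

const iam = new AWS.IAM();

// Assumed policy body: read access to the tutorial's bucket.
const policyDocument = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['s3:GetObject'],
      Resource: 'arn:aws:s3:::greetingsproject/*',
    },
  ],
};

// Attach the policy inline to the auto-generated execution role.
iam.putRolePolicy({
  RoleName: 'greetingsProject-role-zhcbt61o',
  PolicyName: 'greetingsProjectPolicy',
  PolicyDocument: JSON.stringify(policyDocument),
}).promise()
  .then(() => console.log('Inline policy attached'));
```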
QUESTION
I've been all over the internet looking for a solution to this. I have been trying to set up an AWS Lambda function to send a message to SNS every time a file is uploaded to a particular S3 bucket, according to this tutorial. At this point, I have the function set up and I can invoke it successfully. However, when I attempt to connect the function to S3, I get an error stating An error occurred (InvalidArgument) when calling the PutBucketNotification operation: Unable to validate the following destination configurations. According to this article, I should be able to add a permission that will let S3 invoke the Lambda function, like this:
ANSWER
Answered 2021-Jan-28 at 21:04
The thing you need to create is called a "Resource-based policy", and it is what should be created by aws lambda add-permission.
A Resource-based policy gives S3 permission to invoke your lambda. This is a property on your lambda itself, and is not part of your lambda's IAM role (your lambda's IAM role controls what your lambda can do; a Resource-based policy controls who can do what to your lambda). You can view this resource in the AWS console by going to your lambda, clicking "Permissions" and scrolling down to "Resource-based policy". The keyword to look out for is lambda:InvokeFunction, which is what gives other things permission to call your lambda, including other AWS accounts and other AWS services on your account (like S3).
That being said, the command you ran should have created this policy. Did you make sure to replace my_account_id with your actual account id when you ran the command?
In addition, make sure you replace --source-arn arn:aws:s3:::file-import with the actual ARN of your bucket (I assume you had to create a bucket with a different name, because S3 bucket names must be globally unique and file-import is almost surely already taken).
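The aws lambda add-permission CLI call also has a direct SDK equivalent. A minimal sketch with the AWS SDK for JavaScript; the function name, statement id, account id and bucket ARN below are placeholders to replace with your own values:

```typescript
import * as AWS from 'aws-sdk';

const lambda = new AWS.Lambda();

// Adds a statement to the function's Resource-based policy so S3 can invoke it.
lambda.addPermission({
  FunctionName: 'my-function',                // placeholder
  StatementId: 'allow-s3-invoke',             // placeholder
  Action: 'lambda:InvokeFunction',
  Principal: 's3.amazonaws.com',
  SourceArn: 'arn:aws:s3:::my-actual-bucket', // your bucket's real ARN
  SourceAccount: '123456789012',              // your account id
}).promise()
  .then(() => console.log('S3 can now invoke the function'));
```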
QUESTION
I'm using Golang to access an AWS S3 bucket to download files. The workflow of my API is very simple: it is just a cron job that downloads a single file each day at a certain time. My questions are:
- Is it mandatory to create the session on each execution of the cron?
- How long will the session remain valid if I keep the same session for each call? (I can't find this in the documentation.)
I'm using this portion of code to create the session and download the file:
...
ANSWER
Answered 2021-Feb-09 at 21:40
From the comments, it has been established that you have a time.Timer for triggering the download. Since this is the case, you only need to create a single session object, and it can be re-used as many times as you want. If you read the AWS documentation, you can find this line:
Sessions should be cached when possible, because creating a new Session will load all configuration values from the environment, and config files each time the Session is created.
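The question is about the Go SDK, but the pattern carries over to any SDK: build the client once at startup and reuse it on every tick. A sketch of the same shape in Node terms, with placeholder region, bucket and key:

```typescript
import * as AWS from 'aws-sdk';

// Created once at startup: credentials and config files are read only here.
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region

async function downloadDailyFile(): Promise<void> {
  const obj = await s3
    .getObject({ Bucket: 'my-bucket', Key: 'daily/report.csv' }) // placeholders
    .promise();
  console.log('Downloaded', obj.ContentLength, 'bytes');
}

// The equivalent of the cron/time.Timer: one download per day, same client each time.
setInterval(downloadDailyFile, 24 * 60 * 60 * 1000);
```

Note that it is the credentials behind a session that can expire (for example, temporary STS credentials), not the session object itself; refreshable credential providers are renewed by the SDK automatically.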
QUESTION
I'm connecting my Angular project with the AWS S3 bucket that I created. I'm also following this tutorial: https://grokonez.com/aws/angular-4-amazon-s3-example-get-list-files-from-s3-bucket. The error is in step 2.7. When I try to get the files using the [fileUpload] property, I get an error:
Can't bind to 'fileUpload' since it isn't a known property of 'app-details-upload'.
- If 'app-details-upload' is an Angular component and it has 'fileUpload' input, then verify that it is part of this module.
- If 'app-details-upload' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message.
- To allow any property add 'NO_ERRORS_SCHEMA' to the '@NgModule.schemas' of this component.
Here is the piece of code that is giving the error
...
ANSWER
Answered 2020-Dec-11 at 07:02
It is not the piece of code you have provided that is giving the error; rather, the component is not declared.
Think of it like a simple variable: say you call myFunction.myProperty(). Without first declaring what myFunction represents, the code will not work.
In Angular, before you can use a component you have to declare it. To declare a component, you need to add it to the declarations array in the NgModule.
In your app.module.ts:
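The answer's code block is not included above, so the following is a sketch of what that declaration typically looks like. The class name DetailsUploadComponent and its import path are assumptions inferred from the app-details-upload selector:

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
// Assumed class name and path for the 'app-details-upload' component.
import { DetailsUploadComponent } from './details-upload/details-upload.component';

@NgModule({
  declarations: [
    AppComponent,
    DetailsUploadComponent, // without this entry, [fileUpload] cannot be bound
  ],
  imports: [BrowserModule],
  bootstrap: [AppComponent],
})
export class AppModule {}
```

The component itself must also declare an @Input() fileUpload property for the [fileUpload] binding to be recognized.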
QUESTION
I was developing the frontend using React.js, and I used the JavaScript SDK to upload a file to my S3 bucket using my root AWS account. I followed the official doc but kept getting 403 Forbidden. If you encounter the same case, you can try removing the "ACL" entry from params while uploading to solve it.
I basically followed the demo code in the addPhoto() function from the official doc:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album-full.html
I also referred to another blog post here:
https://medium.com/@fabianopb/upload-files-with-node-and-react-to-aws-s3-in-3-steps-fdaa8581f2bd
They all add ACL: 'public-read' to the params in the s3.upload(params) call.
ANSWER
Answered 2020-Sep-18 at 00:13
Your bucket probably has Amazon S3 Block Public Access activated (which is the default).
One of the settings is: "Block public access to buckets and objects granted through new access control lists (ACLs)"
This means that it will block any command (such as yours) that grants public access via an ACL. Your code is setting the ACL to public-read, which is therefore being blocked.
The intention of S3 Block Public Access is to default to a setting where S3 content cannot accidentally be made public. You can deactivate S3 Block Public Access to change this behavior.
S3 Block Public Access is relatively new (November 2018), so a lot of articles on the web might have been written before the "block by default" rule came into effect.
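As the poster found, dropping the ACL from the upload parameters is what avoids the 403 when Block Public Access is on. A minimal sketch with the JavaScript SDK; bucket, key and body are placeholders:

```typescript
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

const params: AWS.S3.PutObjectRequest = {
  Bucket: 'my-photo-album-bucket',       // placeholder
  Key: 'photos/example.jpg',             // placeholder
  Body: Buffer.from('...file bytes...'), // placeholder body
  // ACL: 'public-read',  // removing this line is what avoids the 403
};

s3.upload(params).promise()
  .then((data) => console.log('Uploaded to', data.Location));
```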
QUESTION
I am trying to create a bucket which is public
Below is the code
...
ANSWER
Answered 2020-Jul-05 at 13:23
This is because you have no s3_client variable; in fact, you have a function named s3_client. I fixed this below to call s3_client() instead.
Also, keep an eye on indentation in Python.
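The question's code is Python, but the shape of the bug is language-independent: a factory function was referenced instead of called. A sketch of the distinction in TypeScript terms (the bucket name is a placeholder):

```typescript
import * as AWS from 'aws-sdk';

// A factory function, analogous to the question's s3_client function.
function s3Client(): AWS.S3 {
  return new AWS.S3();
}

// Wrong: this refers to the function object itself, which has no S3 methods.
// s3Client.createBucket({ Bucket: 'my-public-bucket' });

// Right: call the factory, then use the client it returns.
const client = s3Client();
client.createBucket({ Bucket: 'my-public-bucket' }).promise(); // placeholder name
```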
QUESTION
INTRO
- I am following this guide as recommended; here is the guide's GitHub repo.
- I have attached the AmazonS3FullAccess policy to it as well.
- I am using the guide's 3rd example, "Mixing public assets and private assets", with the static, public media, private media version.
- If the user logs in (local development environment), he can upload files from the website, but he can NOT access them from the website, only from the AWS S3 management console.
- Currently I am blocking all public access, as in the guide (AWS S3 management panel settings).
- I have added these lines to my CORS configuration editor, from this other guide:
ANSWER
Answered 2020-Apr-27 at 01:46
In the AWS console, click the "Permissions" tab, then:
- Allow public access to your bucket -> Save -> Confirm it.
- Click the "Bucket policy" button. An editing box will appear. Replace the "arn:aws:s3:::" in the editing box with the ARN shown above the editing box, but be careful to preserve the "/*" at the end of it. Paste in the following:
QUESTION
I'm looking for a programmatic way to download images from an S3 bucket to my computer.
I tried "Using send_file to download a file from Amazon S3?" but it just redirected me to a link that only shows my PDF object.
This is my download function using the AWS documentation:
...
ANSWER
Answered 2020-Apr-22 at 16:14
If you change the disposition to attachment, does the browser download the file?
This may be related to the Content-Disposition inline value.
Did you check this question? https://superuser.com/questions/1277819/why-does-chrome-sometimes-download-a-pdf-instead-of-opening-it
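The question's code is Ruby, but the idea carries over to any SDK: override Content-Disposition on the pre-signed GET URL so the browser downloads the file instead of rendering it inline. A sketch with the JavaScript SDK and placeholder bucket and key:

```typescript
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

// The ResponseContentDisposition override forces a download instead of
// inline display, without changing the object's stored metadata.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket',      // placeholder
  Key: 'docs/report.pdf',   // placeholder
  Expires: 300,             // seconds the URL stays valid
  ResponseContentDisposition: 'attachment; filename="report.pdf"',
});

console.log(url);
```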
QUESTION
Trying out AWS Lambda / Node.js 8. My goal is to launch ffmpeg, generate a short clip and upload it to an S3 bucket.
I created the function following the image resize tutorial. I edited the code to get output from simple Linux commands like ls or cat /proc/cpuinfo - all works.
Now I added the ffmpeg binary for i686 - the static ffmpeg build by John Van Sickle (thanks!) - and changed the code to launch a simple ffmpeg command that is supposed to create a small 2-second video clip.
That fails, according to the logs, with the signal SIGSEGV returned to the "close" event handler of child_process.spawn().
As far as I understand, this could be caused by a binary incompatibility of the static build, or by some mistake in my code.
Several npm modules rely on the static builds from johnvansickle.com/ffmpeg and there are no such issues filed on their GitHub. Maybe there's some other mistake I made?
Should I compile ffmpeg myself under the Amazon Linux AMI amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2, which is under the hood of AWS Lambda?
Update: I launched an EC2 t2.micro instance from the same AMI, downloaded the same ffmpeg static build, and it works just fine from the command line. Now I doubt that it is a compilation issue.
I also tried copying the ffmpeg executable to /tmp/ffmpeg and running chmod 755 on it, just to make sure. Running a simple ffmpeg --help command via child_process.execSync() returns "Error: Command failed: /tmp/ffmpeg --help".
ANSWER
Answered 2019-Apr-08 at 18:22
Fixed. Despite the misleading fact that the static ffmpeg build from johnvansickle.com does run on an Amazon EC2 instance of the AMI mentioned in the Lambda environment docs, the same binary fails to execute under AWS Lambda.
I compiled ffmpeg on an AWS EC2 t2.micro instance of the same AMI using markus-perl/ffmpeg-build-script. It surprised me with an error about the aom codec version; I changed one line in the script to disable the aom codec, and ffmpeg finally compiled. It took a couple of hours on the weak t2.micro instance.
The resulting ffmpeg binary is ~10MB lighter than the static build mentioned above and runs on AWS Lambda just fine!
Hope this will help someone.
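For context, a minimal sketch of the invocation pattern described in the question: copy the bundled binary to /tmp (the only writable path in Lambda), mark it executable, and run it. The packaging path and ffmpeg arguments are illustrative, and the binary must be one that actually runs in the Lambda environment, as the answer explains:

```typescript
import { execFileSync } from 'child_process';
import * as fs from 'fs';

export const handler = async (): Promise<string> => {
  // /var/task is the (read-only) deployment package; /tmp is writable.
  fs.copyFileSync('/var/task/bin/ffmpeg', '/tmp/ffmpeg'); // assumed packaging path
  fs.chmodSync('/tmp/ffmpeg', 0o755);

  // Generate a 2-second test clip; a real function would then upload it to S3.
  execFileSync('/tmp/ffmpeg', [
    '-f', 'lavfi', '-i', 'testsrc=duration=2:size=320x240:rate=30',
    '-y', '/tmp/clip.mp4',
  ]);

  return 'ffmpeg finished';
};
```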
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported