s3-example | Simple example using micro for uploading stuff to AWS S3 | Microservice library

by paulogdm | JavaScript | Version: v1.0 | License: No License

kandi X-RAY | s3-example Summary


s3-example is a JavaScript library typically used in Architecture, Microservice, Nodejs, Docker, Amazon S3 applications. s3-example has no bugs and no vulnerabilities, and it has low support. You can download it from GitHub.

Simple example using Now 2.0, Zeit's micro and the AWS SDK to upload files to the cloud.
Support
    Quality
      Security
        License
          Reuse

            kandi-support Support

              s3-example has a low active ecosystem.
              It has 42 star(s) with 4 fork(s). There are 3 watchers for this library.
              It had no major release in the last 12 months.
There is 1 open issue and 0 closed issues. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of s3-example is v1.0

            kandi-Quality Quality

              s3-example has 0 bugs and 0 code smells.

            kandi-Security Security

              s3-example has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              s3-example code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              s3-example does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              s3-example releases are available to install and integrate.
              Installation instructions, examples and code snippets are available.


            s3-example Key Features

            No Key Features are available at this moment for s3-example.

            s3-example Examples and Code Snippets

            No Code Snippets are available at this moment for s3-example.

            Community Discussions

            QUESTION

            How to test if S3 object does not exist AWS PHP SDK (Version 3)?
            Asked 2021-Apr-14 at 21:23

            I have written an integration for the AWS SDK in PHP to send the keys for files on an S3 bucket and retrieve a pre-signed url, which more or less follows this example. This worked perfectly except I need to now check if the file exists on s3 and return empty string if it does not.

            I am doing this with my code below:

            ...

            ANSWER

            Answered 2021-Apr-14 at 18:47

            You need the s3:GetObject permission to invoke the HeadObject API, which is what the PHP SDK invokes when your code calls doesObjectExist().

            If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.

            • If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error.
            • If you don’t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.

            So, you probably have s3:ListBucket but not s3:GetObject.
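An IAM policy for this presigned-URL integration would therefore typically grant both actions so that missing objects surface as a clean 404 rather than an ambiguous 403. A minimal sketch, assuming a hypothetical bucket named my-bucket (note that s3:ListBucket applies to the bucket ARN, while s3:GetObject applies to object ARNs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```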

            Source https://stackoverflow.com/questions/67097271

            QUESTION

            AWS Lambda - Access Denied Error - GetObject
            Asked 2021-Apr-01 at 04:07

I am a newbie to AWS Lambda. I am trying out the tutorial from https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html. When the user uploads a jpg to an S3 bucket called greetingsproject, the lambda function is triggered.

            Error: 9a62ff86-3e24-491d-852e-ded2e2cf5d94
            INFO: error while getting object = AccessDenied: Access Denied

            I am getting the Access denied error in the following code snippet:

            ...

            ANSWER

            Answered 2021-Apr-01 at 03:16

            The comment by Marcin about Lambda execution role put me on the right track. I had followed the below steps previously:

1. Created a policy called greetingsProjectPolicy (with the above-mentioned permissions).
2. Attached this policy to greetingsProjectRole.
3. Assigned greetingsProjectRole to my lambda function.
4. I assumed that was it and the policy would be available to my lambda function.
5. However, when I assigned greetingsProjectRole to the function, internally AWS created an execution role called greetingsProject-role-zhcbt61o.
6. When I clicked on this role, I was surprised to see that the only policy it had was AWSLambdaBasicExecutionRole, and greetingsProjectPolicy was missing.
7. I had to add greetingsProjectPolicy as an inline policy to greetingsProject-role-zhcbt61o. Now I no longer get the access denied error.
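The question does not show the exact statements inside greetingsProjectPolicy, but an inline policy added in step 7 would typically look something like the following minimal sketch, granting read access to the tutorial bucket (bucket name taken from the question; the actions are an assumption):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::greetingsproject/*"
    }
  ]
}
```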

            Not sure, if this is how AWS works or I am missing something.

            Source https://stackoverflow.com/questions/66896223

            QUESTION

            Cannot set S3 trigger for Lambda function in AWS
            Asked 2021-Feb-10 at 09:48

I've been all over the internet looking for a solution to this. I have been trying to set up an AWS Lambda function to send a message to SNS every time a file is uploaded to a particular S3 bucket, according to this tutorial. At this point, I have the function set up and I can invoke it successfully. However, when I attempt to connect the function to S3, I get an error stating An error occurred (InvalidArgument) when calling the PutBucketNotification operation: Unable to validate the following destination configurations. According to this article, I should be able to add a permission that will let S3 invoke the Lambda function, like this:

            ...

            ANSWER

            Answered 2021-Jan-28 at 21:04

            The thing you need to create is called a "Resource-based policy", and is what should be created by aws lambda add-permission.

A Resource-based policy gives S3 permission to invoke your lambda. This is a property on your lambda itself, and is not part of your lambda's IAM role (your lambda's IAM role controls what your lambda can do; a Resource-based policy controls who can do what to your lambda). You can view this resource in the AWS console by going to your lambda, clicking "Permissions" and scrolling down to "Resource-based policy". The keyword to look out for is lambda:InvokeFunction, which is what gives other things permission to call your lambda, including other AWS accounts and other AWS services on your account (like S3).

            That being said, the command you ran should have created this policy. Did you make sure to replace my_account_id with your actual account id when you ran the command?

            In addition, make sure you replace --source-arn arn:aws:s3:::file-import with the actual ARN of your bucket (I assume you had to create a bucket with a different name because s3 buckets must have globally unique names, and file-import is almost surely already taken)
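Putting those pieces together, the add-permission command would look roughly like the sketch below. The function name, bucket name, statement id, and account id are all placeholders that must be replaced with your real values:

```shell
# Grant S3 on your account permission to invoke the function.
aws lambda add-permission \
  --function-name my-function \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::my-bucket \
  --source-account 123456789012
```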

            Source https://stackoverflow.com/questions/65914588

            QUESTION

            AWS S3 API session expiration time
            Asked 2021-Feb-09 at 21:40

I'm using golang to access an AWS S3 bucket to download files. The workflow of my API is very simple: it is just a cron that downloads a single file each day at a certain time. My questions are:

Is it mandatory to create the session on each execution of the cron?

How long will the session remain valid without expiring if I keep the same session for each call?

(I can't find it in the documentation.)

I'm using this portion of code to create the session and download the file:

            ...

            ANSWER

            Answered 2021-Feb-09 at 21:40

            From the comments, it has been established that you have a time.Timer for triggering the download. Since this is the case, you only need to create a single session object and it can be re-used as many times as you want. If you read the AWS documentation, you can find this line:

            Sessions should be cached when possible, because creating a new Session will load all configuration values from the environment, and config files each time the Session is created.
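The create-once, reuse-forever pattern is SDK-agnostic. Sketched here in Python for illustration, with a stand-in constructor (get_session is hypothetical; in the question's Go code it would wrap session.NewSession):

```python
import functools

@functools.lru_cache(maxsize=1)
def get_session():
    # Stand-in for the expensive AWS session constructor; real code would
    # load environment variables and config files here, which is exactly
    # why the result should be cached rather than rebuilt on every tick.
    return object()

# Every cron tick reuses the same cached session object:
assert get_session() is get_session()
```

The session itself does not expire; only temporary credentials resolved through it can, and the SDKs refresh those automatically.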

            Source https://stackoverflow.com/questions/66127110

            QUESTION

            Can't bind to 'fileUpload' since it isn't a known property of 'app-details-upload'
            Asked 2020-Dec-11 at 07:02

I'm connecting my angular project with the AWS S3 bucket that I created. Also I'm following this tutorial https://grokonez.com/aws/angular-4-amazon-s3-example-get-list-files-from-s3-bucket. The error is in step 2.7. When I'm trying to get the files using the [fileUpload] property, I'm getting an error:

            Can't bind to 'fileUpload' since it isn't a known property of 'app-details-upload'.

            1. If 'app-details-upload' is an Angular component and it has 'fileUpload' input, then verify that it is part of this module.
            2. If 'app-details-upload' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message.
            3. To allow any property add 'NO_ERRORS_SCHEMA' to the '@NgModule.schemas' of this component.

            Here is the piece of code that is giving the error

            ...

            ANSWER

            Answered 2020-Dec-11 at 07:02

It is not the piece of code you have provided that is giving the error; rather, the component is not declared.

Think of it like a simple variable: say you call myFunction.myProperty(). Without first declaring what myFunction represents, the code will not work.

In Angular, before you can use a component you have to declare it. To declare a component you need to add it to the declarations array in the NgModule.

            In your app.module.ts
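A minimal sketch of that declaration; DetailsUploadComponent and its file path are assumptions based on the tutorial's app-details-upload selector, so the actual class name and path may differ in your project:

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
// Hypothetical name/path for the tutorial's upload component.
import { DetailsUploadComponent } from './details-upload/details-upload.component';

@NgModule({
  // Declaring the component here makes its selector and its @Input()
  // properties (such as fileUpload) known to this module's templates.
  declarations: [AppComponent, DetailsUploadComponent],
  imports: [BrowserModule],
  bootstrap: [AppComponent],
})
export class AppModule {}
```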

            Source https://stackoverflow.com/questions/65246560

            QUESTION

            Upload to AWS S3 got 403 Forbidden - Solved by remove "ACL" in param
            Asked 2020-Sep-18 at 00:13

I was developing the frontend using React.js, and I used the JavaScript SDK to upload a file to my S3 bucket using my root AWS account. I followed the official doc but kept getting 403 Forbidden. If you encounter the same case, you can try removing the "ACL" in params while uploading to solve it.

            I basically followed the demo code here in the official doc in the addPhoto() function: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album-full.html I also referred to another blog post here: https://medium.com/@fabianopb/upload-files-with-node-and-react-to-aws-s3-in-3-steps-fdaa8581f2bd

They all add ACL: 'public-read' to the params in the s3.upload(params) function.

            ...

            ANSWER

            Answered 2020-Sep-18 at 00:13

            Your bucket probably has Amazon S3 block public access activated (which is default).

            One of the settings is: "Block public access to buckets and objects granted through new access control lists (ACLs)"

            This means that it will block any command (such as yours) that is granting public access via an ACL. Your code is setting the ACL to public-read, which is therefore being blocked.

The intention of S3 Block Public Access is to default to a setting where no S3 content will be accidentally made public. You can deactivate S3 Block Public Access to change this setting.

            S3 Block Public Access is relatively new (November 2018), so a lot of articles on the web might have been written before the "block by default" rule came into effect.

            Source https://stackoverflow.com/questions/63947307

            QUESTION

While creating policy, got error 'function' object has no attribute 'put_bucket_policy'
            Asked 2020-Jul-05 at 13:25

            I am trying to create a bucket which is public

            Below is the code

            ...

            ANSWER

            Answered 2020-Jul-05 at 13:23

This is because you have no s3_client variable; in fact you have a function named s3_client. I fixed this below to call s3_client() instead.

Also keep an eye on indentation in Python.
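The error can be reproduced without AWS at all. A minimal sketch, with FakeS3Client standing in for the object that boto3.client('s3') would return:

```python
class FakeS3Client:
    """Stand-in for the client object returned by boto3.client('s3')."""
    def put_bucket_policy(self, **kwargs):
        return {"status": "ok"}

def s3_client():
    # The question's code defined a *function* with this name...
    return FakeS3Client()

# ...so looking up the method on the function itself fails:
try:
    s3_client.put_bucket_policy(Bucket="example")
except AttributeError as exc:
    print(exc)  # 'function' object has no attribute 'put_bucket_policy'

# Calling the function first returns the client, and the method works:
s3_client().put_bucket_policy(Bucket="example")
```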

            Source https://stackoverflow.com/questions/62741121

            QUESTION

            AWS S3 Bucket Django 3.0 User Profile Image Upload Access ERROR
            Asked 2020-Apr-27 at 01:46

            INTRO

• I am following this guide as recommended; here is the guide's GitHub repo.
• I have attached an AmazonS3FullAccess policy to it as well.
• I am using the guide's 3rd example, "Mixing public assets and private assets", with static, public media, and private media versions.
• If the user logs in (local development environment) he can upload files from the website, but he can NOT access them from the website, only from the AWS S3 management console.
• Currently I am blocking all public access, as in the guide (AWS S3 management panel settings).
• I have added these lines to my CORS configuration editor from this other guide:
            ...

            ANSWER

            Answered 2020-Apr-27 at 01:46

In the AWS console, click the "Permissions" tab, then:

1. Allow public access to your bucket -> Save -> Confirm it.
2. Click the "Bucket policy" button. An editing box will appear. Replace the "arn:aws:s3:::" in the editing box with the ARN starting with "arn:" shown above your editing box, but be careful to preserve the "/*" at the end of it. Then paste in your bucket policy.
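A typical public-read bucket policy of the kind that editor expects; this sketch assumes a hypothetical bucket named my-bucket (note the "/*" preserved on the Resource so the policy covers objects, not just the bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```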

            Source https://stackoverflow.com/questions/61322805

            QUESTION

            How to download objects from S3 using AWS SDK - RESTful
            Asked 2020-Apr-22 at 17:30

            I'm looking for a programmatic way to download images from an S3 bucket to my computer.

            I tried "Using send_file to download a file from Amazon S3?" but it just redirected me to a link that only shows my PDF object.

            This is my download function using the AWS documentation:

            ...

            ANSWER

            Answered 2020-Apr-22 at 16:14

If you change the disposition to attachment, does the browser download the file?

This may be related to the Content-Disposition: inline value.

Did you check this question? https://superuser.com/questions/1277819/why-does-chrome-sometimes-download-a-pdf-instead-of-opening-it

            Source https://stackoverflow.com/questions/61369762

            QUESTION

            SIGSEGV from ffmpeg on Amazon Lambda
            Asked 2020-Apr-22 at 05:49

            Trying out Amazon Lambda / nodejs 8. My goal is to launch ffmpeg, generate a short clip and upload it to S3 bucket.

            I created the function following the image resize tutorial. Edited the code to get output from simple linux commands like ls or cat /proc/cpuinfo - all works.

Now I added the ffmpeg binary for i686 (the ffmpeg static build by John Van Sickle, thanks!) and changed the code to launch a simple ffmpeg command that is supposed to create a 2-second video clip.

That fails, according to the logs, with the signal SIGSEGV returned to the "close" event handler of child_process.spawn().

            As far as I understand, this could be caused by the ffmpeg binary incompatibility with the static build. Or by some mistake in my code.

            Several npm modules rely on the static builds from johnvansickle.com/ffmpeg and there are no such issues filed on their github. Maybe there's some other mistake I made?

            Should I compile ffmpeg myself under Amazon Linux AMI amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2 which is under the hood of AWS Lambda?

Update: I launched an EC2 t2.micro instance from the same AMI, downloaded the same ffmpeg static build, and it works just fine from the command line. Now I doubt that it is a compilation issue.

Also tried copying the ffmpeg executable to /tmp/ffmpeg and chmod 755 just to make sure. Running a simple ffmpeg --help command via child_process.execSync() returns "Error: Command failed: /tmp/ffmpeg --help".

            ...

            ANSWER

            Answered 2019-Apr-08 at 18:22

Fixed. Despite the misleading fact that the static build of ffmpeg from johnvansickle.com does run on an Amazon EC2 instance of the AMI mentioned in the Lambda environment docs, the same binary fails to execute under AWS Lambda.

I compiled ffmpeg on an AWS EC2 t2.micro instance of the same AMI using markus-perl/ffmpeg-build-script. It also surprised me with an aom codec version error; I changed one line in the script to disable the aom codec, and ffmpeg finally compiled. It took a couple of hours on the weak t2.micro instance.

            The resulting ffmpeg binary is ~10Mb lighter than the static build mentioned above and runs on AWS Lambda just fine!

            Hope this will help someone.

            Source https://stackoverflow.com/questions/55561267

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install s3-example

            If you are running this example locally, you should edit the fields on the right. If you are planning to test it on now.sh you need to add secrets. Please refer to the section "deploying to now.sh".

            Support

For any new features, suggestions and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/paulogdm/s3-example.git

          • CLI

            gh repo clone paulogdm/s3-example

          • sshUrl

            git@github.com:paulogdm/s3-example.git
