glacier-upload | A simple python script to upload files to AWS Glacier vaults | AWS library
kandi X-RAY | glacier-upload Summary
A simple python script to upload files to AWS Glacier vaults
Top functions reviewed by kandi - BETA
- Upload a file
- Calculate the hash of a list of checksums
- Calculate the hash of a tree
- Gets the output of the given job
- Retrieve the output of a job
- List all uploads in a vault
- List all multipart uploads in a vault
- Lists the parts of an upload
- List parts in a multipart upload
- Create a delete archive command
- Delete an archive
- Returns the abort command
- Aborts a multipart upload
- Upload a single part of a multipart upload
- Create an upload command
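The checksum helpers in the list above compute AWS Glacier's SHA-256 tree hash: the archive is split into 1 MiB chunks, each chunk is hashed, and adjacent digests are combined pairwise until a single root remains. A minimal sketch of that algorithm (function names here are illustrative, not necessarily the script's actual API):

```python
import hashlib

CHUNK = 1024 * 1024  # Glacier tree hashes are built from 1 MiB leaves

def chunk_hashes(data):
    """SHA-256 digest of each 1 MiB chunk of data."""
    return [hashlib.sha256(data[i:i + CHUNK]).digest()
            for i in range(0, len(data), CHUNK)] or [hashlib.sha256(b"").digest()]

def tree_hash(hashes):
    """Combine leaf digests pairwise until one root digest remains."""
    while len(hashes) > 1:
        paired = [hashlib.sha256(a + b).digest()
                  for a, b in zip(hashes[::2], hashes[1::2])]
        if len(hashes) % 2:  # an odd trailing leaf is promoted unchanged
            paired.append(hashes[-1])
        hashes = paired
    return hashes[0].hex()
```

For data under 1 MiB there is a single leaf, so the tree hash equals the plain SHA-256 of the data, which makes the helper easy to sanity-check.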
glacier-upload Key Features
glacier-upload Examples and Code Snippets
Community Discussions
Trending Discussions on glacier-upload
QUESTION
Does the upload_archive() operation in boto3 for glacier automatically use multi-part upload when the data to be uploaded is larger than 100MB?
I believe this is the case in boto2 (see @lenrok258's answer in Boto Glacier - Upload file larger than 4 GB using multipart upload)
I have tried different ways to view the source code for the upload_archive() operation in boto3 for glacier, but I haven't been able to find it using inspect or ipython. If anyone happens to know how to do this and is willing to share, it would be much appreciated.
...ANSWER
Answered 2018-Jul-27 at 13:29
Unlike boto2, boto3 does not automatically use multi-part upload.
From a comment from a member of the boto project on an issue on the boto3 Github:
... boto3 does not have the ability to automatically handle multipart uploads to Glacier. That would be a feature request. There are some features that exist in boto2 that have not been implemented in boto3.
You'll have to implement it yourself using the initiate_multipart_upload functionality.
Or, as another commenter on the issue suggests instead:
The optimal usage pattern for interacting with Glacier is generally to upload to S3 and use S3 lifecycle policies to transition the object to Glacier.
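Implementing it yourself boils down to three calls: initiate_multipart_upload, one upload_multipart_part per slice, then complete_multipart_upload. A rough sketch under stated assumptions (the vault name and part size are placeholders; boto3's Glacier client injects the per-part tree-hash checksums, but the final complete call still needs the whole-archive tree hash, passed in here rather than computed):

```python
import os

PART_SIZE = 8 * 1024 * 1024  # must be a power-of-two MiB count, 1 MiB to 4 GiB

def part_ranges(total_size, part_size=PART_SIZE):
    """Inclusive (start, end) byte offsets for each part of the archive."""
    return [(start, min(start + part_size, total_size) - 1)
            for start in range(0, total_size, part_size)]

def upload_in_parts(path, vault_name, tree_hash_hex):
    import boto3  # deferred so the pure helper above works without boto3
    glacier = boto3.client("glacier")
    upload_id = glacier.initiate_multipart_upload(
        vaultName=vault_name, partSize=str(PART_SIZE))["uploadId"]
    total = os.path.getsize(path)
    with open(path, "rb") as f:
        for start, end in part_ranges(total):
            glacier.upload_multipart_part(
                vaultName=vault_name, uploadId=upload_id,
                range=f"bytes {start}-{end}/*",
                body=f.read(end - start + 1))
    return glacier.complete_multipart_upload(
        vaultName=vault_name, uploadId=upload_id,
        archiveSize=str(total), checksum=tree_hash_hex)["archiveId"]
```

The range computation is the only subtle part: Glacier expects inclusive byte offsets in Content-Range style, so the last part's end offset is total_size - 1.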
QUESTION
I uploaded an archive to AWS glacier using boto3 using the code as described here: https://github.com/tbumi/glacier-upload/blob/develop/main.py
It returned an archive ID, which I didn't save at the time. Going through the AWS documentation, I see that the archive ID is needed for archive retrieval.
As I understood from the boto3 documentation, you first need to create a job as follows:
...ANSWER
Answered 2017-Oct-26 at 13:07
You can get all the IDs by running a vault inventory.
...you can use Initiate a Job (POST jobs) to initiate a new inventory retrieval for a vault. The inventory contains the archive IDs you use to delete archives using Delete Archive (DELETE archive). http://boto3.readthedocs.io/en/latest/reference/services/glacier.html
So just use Client.initiate_job with Type set to inventory-retrieval, and you'll get your ID after the inventory finishes.
http://boto3.readthedocs.io/en/latest/reference/services/glacier.html#Glacier.Client.initiate_job
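In boto3 that looks roughly like the following (the vault name is a placeholder):

```python
def inventory_job_params():
    """jobParameters for an inventory retrieval, per the Glacier API."""
    return {"Type": "inventory-retrieval", "Format": "JSON"}

def start_inventory(vault_name):
    import boto3  # deferred so the params helper is importable without boto3
    glacier = boto3.client("glacier")
    job = glacier.initiate_job(vaultName=vault_name,
                               jobParameters=inventory_job_params())
    return job["jobId"]
```

Once glacier.describe_job(...) reports the job as completed (typically hours later for Glacier), glacier.get_job_output(vaultName=..., jobId=...) returns the inventory JSON, which includes every ArchiveId in the vault.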
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install glacier-upload
You can use glacier-upload like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
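For example, installing straight from the GitHub repository linked in the discussion above (the exact PyPI package name is not stated here, so the git URL is used):

```shell
# Create and activate an isolated virtual environment
python3 -m venv glacier-env
. glacier-env/bin/activate

# Keep the packaging toolchain current
pip install --upgrade pip setuptools wheel

# Install glacier-upload from its repository
pip install git+https://github.com/tbumi/glacier-upload.git
```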