duplicity-unattended | Lightweight scripts for automating Duplicity | Continuous Backup library
kandi X-RAY | duplicity-unattended Summary
Lightweight scripts for automating Duplicity backups
Top functions reviewed by kandi - BETA
- Handler for Lambda function
- Send an email
- Finds the last modification date in the bucket
- Format dates by prefix
duplicity-unattended Key Features
duplicity-unattended Examples and Code Snippets
Community Discussions
Trending Discussions on Continuous Backup
QUESTION
ANSWER
Answered 2022-Feb-22 at 10:59
I am not sure if you have seen this message in the portal when you created the account (it is also mentioned in the docs):
"You will not be able to switch between the backup policies after the account has been created"
Since you need to select either "Periodic" or "Continuous" at the creation of the Cosmos account, the choice is mandatory at that point.
Update:
You will no longer see the above message in the portal: you can now switch from "Periodic" to "Continuous" on an existing account, but the change cannot be reverted. You can read more here.
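If you want to make that switch programmatically rather than in the portal, the Azure CLI exposes it. A minimal sketch, assuming an existing account; the account and resource group names are placeholders:
# one-way switch from Periodic to Continuous backup on an existing account
az cosmosdb update \
  --name <account_name> \
  --resource-group <resource_group> \
  --backup-policy-type Continuous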
QUESTION
What would be the consistency of the continuous backup of the write region if the database is using bounded staleness consistency? Will it be equivalent to strong consistent data assuming no failovers happened?
Thanks Guru
ANSWER
Answered 2021-Nov-25 at 17:15
Backups made from any secondary region will have data consistency defined by the guarantees provided by the consistency level chosen. In the case of strong consistency, all secondary-region backups will have completely consistent data.
Bounded staleness backups may contain stale or inconsistent data inside the defined staleness window (minimum 300 seconds or 100k writes). Outside of that window, the data will be consistent.
Backups taken in secondary regions under the weaker consistency levels come with no consistency guarantees.
QUESTION
MongoDB has deprecated continuous backup of data and recommends using CPS (cloud provider snapshots) instead. As far as I understand, snapshots aren't as effective as continuous backup because, if the system breaks, we can only restore the data up to the previous snapshot, which won't bring the database up to date, or even close to it.
Am I missing something here in my understanding?
ANSWER
Answered 2020-May-19 at 10:12
Cloud provider snapshots can be combined with point-in-time restore to give the recovery point objective you require. With oplog-based restores you can get a granularity of one second.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install duplicity-unattended
Go to CloudFormation in the AWS console and click Create Stack.
Select the option to upload a template to S3 and pick the cfn/host-bucket.yaml template.
Fill in the stack name and bucket name. I suggest including the hostname in both for easy identification.
Accept remaining defaults and acknowledge the IAM resource creation warning.
Wait for stack setup to complete. If it fails, it's likely the S3 bucket name isn't unique. Delete the stack and try again with a different name.
Go to IAM in the AWS console and click on the new user. The user name is prefixed with the stack name so you can identify it that way.
Go to the Security credentials tab and click Create access key.
Copy the generated access key ID and secret key. You'll need them later.
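Alternatively, the same stack can be created with the AWS CLI instead of the console. A minimal sketch; the stack and bucket names are placeholders, and the parameter key name (BucketName) is an assumption, so check the Parameters section of cfn/host-bucket.yaml first:
# create the per-host bucket/user stack from the local template
aws cloudformation create-stack \
  --stack-name myhost-duplicity-backups \
  --template-body file://cfn/host-bucket.yaml \
  --parameters ParameterKey=BucketName,ParameterValue=myhost-backups \
  --capabilities CAPABILITY_IAM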
If you'd rather not use CloudFormation, you can create the same resources manually. Create the S3 bucket. Default settings are fine.
Create an IAM policy with the following permissions, replacing <bucket> with the bucket name:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::<bucket>/*",
                "arn:aws:s3:::<bucket>"
            ]
        }
    ]
}
Create IAM group with the same name as the policy and assign the policy to it.
Create IAM user for programmatic access. Add the user to the group. Don't forget to copy the access key ID and secret access key at the end of the wizard.
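The manual bucket/policy/group/user setup can also be scripted with the AWS CLI. A sketch under assumptions: the names below are placeholders, policy.json holds the policy document shown above, and <account_id> is your AWS account ID:
# bucket, then policy, group, and user wired together
aws s3 mb s3://myhost-backups
aws iam create-policy --policy-name duplicity-unattended-myhost --policy-document file://policy.json
aws iam create-group --group-name duplicity-unattended-myhost
aws iam attach-group-policy --group-name duplicity-unattended-myhost \
  --policy-arn arn:aws:iam::<account_id>:policy/duplicity-unattended-myhost
aws iam create-user --user-name duplicity-unattended-myhost
aws iam add-user-to-group --group-name duplicity-unattended-myhost --user-name duplicity-unattended-myhost
# prints the access key ID and secret key; copy them now
aws iam create-access-key --user-name duplicity-unattended-myhost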
Install dependencies:
- Duplicity
- boto2 for Python 2
- GnuPG
- Python 3
- PyYAML for Python 3
Create a new RSA 4096 keypair as the user who will perform the backups. If you're backing up system directories, this probably needs to be root. Do NOT set a passphrase. Leave it blank.
gpg --full-generate-key --pinentry-mode loopback
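If you want key generation itself to be non-interactive (for provisioning scripts), GnuPG's batch mode can create an equivalent passphrase-less key. A sketch; the Name-Real value is an arbitrary placeholder:
gpg --batch --generate-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: duplicity-unattended backup
Expire-Date: 0
%commit
EOF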
Make an off-host backup of the keypair in a secure location. I use my LastPass vault for this. Don't skip this step or you'll be very sad when you realize the keys perished alongside the rest of your data, rendering your backups useless.
gpg --list-keys   # to get the key ID
gpg --armor --output pubkey.gpg --export <key_id>
gpg --armor --output privkey.gpg --export-secret-key <key_id>
Delete the exported key files from the filesystem once they're secure.
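One way to do this, sketched below; note that shred's overwriting is only meaningful on filesystems that rewrite in place (not copy-on-write or journaled-data setups):
# overwrite the exported key material, then unlink the files
shred -u pubkey.gpg privkey.gpg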
Create a file on the host containing the AWS credentials:
[Credentials]
aws_access_key_id = <access_key_id>
aws_secret_access_key = <secret_key>
Replace <access_key_id> and <secret_key> with the IAM user credentials. Put it in a location appropriate for the backup user such as /etc/duplicity-unattended/aws_credentials or ~/.duplicity-unattended/aws_credentials.
Make sure only the backup user can access the credentials file, changing ownership if needed:
chmod 600 aws_credentials
Copy the duplicity-unattended script to a bin directory and make sure it's runnable:
chmod +x duplicity-unattended
I usually clone the repo to /usr/local/share and add a symlink in /usr/local/bin, as sketched below.
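A sketch of that layout, assuming a root shell; <repo_url> stands in for the project's clone URL:
# clone once, then expose the script on PATH via a symlink
git clone <repo_url> /usr/local/share/duplicity-unattended
chmod +x /usr/local/share/duplicity-unattended/duplicity-unattended
ln -s /usr/local/share/duplicity-unattended/duplicity-unattended /usr/local/bin/duplicity-unattended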
Copy the sample config.yaml file to the same directory as the AWS credentials file. (Or you can put it somewhere else. Doesn't matter.)
Customize the config.yaml file for the host.
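For orientation only, a hypothetical sketch of what such a config might contain. Every key name below is an assumption made for illustration, not the script's actual schema; the sample config.yaml shipped in the repo is authoritative:
# hypothetical keys for illustration; see the repo's sample config.yaml for the real schema
gpg_key_id: "<key_id>"                                    # key created in the earlier step
bucket: "s3://myhost-backups"                             # destination bucket
aws_credentials: /etc/duplicity-unattended/aws_credentials
include:
  - /etc
  - /home
exclude:
  - /home/*/.cache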
Do a dry-run backup as the backup user to validate most of the configuration:
duplicity-unattended --config <config_file> --dry-run
Replace <config_file> with the path to the YAML config file. Among other things, this will tell you how much would be backed up.
Do an initial backup as the backup user to make sure everything really works:
duplicity-unattended --config <config_file>
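Since the whole point is unattended operation, you'll want to schedule the script once the initial backup succeeds. A cron sketch; the file path, schedule, and config location are assumptions:
# /etc/cron.d/duplicity-unattended (assumed path): run nightly at 03:15 as root
15 3 * * * root /usr/local/bin/duplicity-unattended --config /etc/duplicity-unattended/config.yaml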
How do you make sure backups keep working in the future? You can set up systemd to email you if something goes wrong, but I prefer an independent mechanism. The cfn/backup-monitor directory contains a CloudFormation template (a SAM template, actually) with a Lambda function that monitors a bucket for new backups and emails you if no recent backups have occurred. The function looks for recent backups in any S3 "folder" that contains at least one backup set from any time in the past, and you can deploy additional stacks for each bucket you want to monitor. To set it up for a new host/bucket, follow these steps:
If you have not used AWS Simple Email Service (SES) before, follow the instructions to verify the sender and recipient email addresses. See the overview documentation for more information.
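As an aside, the same verification can be kicked off from the CLI. A sketch; run it once per address and then click the link in the confirmation email SES sends:
aws ses verify-email-identity --email-address <sender_address>
aws ses verify-email-identity --email-address <recipient_address>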
Go to duplicity-unattended-monitor in the AWS Serverless Application Repository and click the Deploy button.
Review the template. (You wouldn't deploy a CloudFormation template into your AWS account without knowing what it does first, would you?)
Change the application/stack name. I suggest a name that includes the host or bucket for easy identification.
Fill in the remaining function parameters. Make sure the email addresses exactly match the ones you verified in SES.
Click Deploy and wait for AWS to finish creating all the resources.
Click on the function link under Resources. This will take you to the Lambda console for the function.
Click the Test button in the upper-right.
Create a new test event with the following content:
{"testEmail": true}
Give it a name like BackupMonitorTest and click Create.
Now you should see the new named event next to the Test button. Click the Test button again. If all goes well, you will get an email with a summary of the most recent backups found in the bucket. From now on, the function will run once a day and email you only when there have been no recent backups for the number of days you specified.
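You can also fire the same test event from the CLI instead of the console. A sketch; <function_name> comes from the stack's Resources, and AWS CLI v2 needs the binary-format flag to accept a raw JSON payload:
aws lambda invoke \
  --function-name <function_name> \
  --cli-binary-format raw-in-base64-out \
  --payload '{"testEmail": true}' \
  response.json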
Install Pipenv for Python 3 if you don't already have it.
From the source repo directory, install the AWS SAM CLI into a virtual environment: pipenv install --dev
Change to the cfn/backup-monitor directory.
Set up your AWS CLI credentials so SAM can read them (e.g. using AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables).
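For example, in the shell that will run SAM (substitute the real credentials):
export AWS_ACCESS_KEY_ID=<access_key_id>
export AWS_SECRET_ACCESS_KEY=<secret_key>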
Run the SAM command to package the CloudFormation template and upload the Lambda function to S3:
pipenv run sam package --s3-bucket <code_bucket> --output-template-file packaged.yaml
where <code_bucket> is an S3 bucket to which the AWS CLI user has write access.
You can now use the CloudFormation AWS console or the AWS CLI to deploy the packaged.yaml stack template that SAM just created.
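For the CLI route, a sketch; the stack name is a placeholder, and any template parameters (email addresses, bucket, threshold days) can be supplied with --parameter-overrides as defined in the template:
aws cloudformation deploy \
  --template-file packaged.yaml \
  --stack-name backup-monitor-myhost \
  --capabilities CAPABILITY_IAM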