mongo-backup | Automated backups for MongoDB | Continuous Backup library
kandi X-RAY | mongo-backup Summary
Community Discussions
Trending Discussions on mongo-backup
QUESTION
I am trying to remove all directories inside the /mongo-backups directory that were last modified more than 10 minutes ago, but when I run the command it also removes the /mongo-backups directory itself.
ANSWER
Answered 2020-May-14 at 09:09
You can use -mindepth 1 to exclude the starting directory itself:
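A runnable sketch of that fix, using a temporary directory to stand in for /mongo-backups. The -mmin +10 test matches the "modified more than 10 minutes ago" requirement, and -maxdepth 1 is an added assumption that the backups sit directly under the top-level directory:

```shell
# Create a stand-in for /mongo-backups with one old and one fresh subdirectory.
backup_root=$(mktemp -d)
mkdir "$backup_root/old-dump" "$backup_root/fresh-dump"
touch -d '20 minutes ago' "$backup_root/old-dump"   # age one directory (GNU touch)

# -mindepth 1 skips the starting directory itself, so it is never deleted;
# -maxdepth 1 keeps find from descending into directories it is about to remove.
find "$backup_root" -mindepth 1 -maxdepth 1 -type d -mmin +10 -exec rm -rf {} +

ls "$backup_root"   # prints: fresh-dump
```

The `-exec rm -rf {} +` form batches the matched directories into one rm invocation instead of spawning rm once per match.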
QUESTION
I am currently trying to read a BSON file to import it into a database. I can already read the file and print it as bytes, but I end up getting a bson.errors.InvalidBSON: objsize too large error.
This is the code that is trying to decode the file
ANSWER
Answered 2018-Nov-28 at 20:49
print(content)
b'[{"_id": {"$oid": "5bf3cf511c9d44000088c376"}, "some": "sort of"}, {"_id": {"$oid": "5bf3cf5c1c9d44000088c377"}, "test": "data"}]'
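The printed bytes are a JSON array rather than binary BSON: a BSON decoder reads the first four bytes as a document length, which yields the huge objsize in the error. A quick diagnostic sketch (not the original poster's code), reusing the printed content:

```shell
# Parse the same bytes as JSON; this succeeds where BSON decoding fails.
python3 - <<'PY'
import json

content = b'[{"_id": {"$oid": "5bf3cf511c9d44000088c376"}, "some": "sort of"}, {"_id": {"$oid": "5bf3cf5c1c9d44000088c377"}, "test": "data"}]'
docs = json.loads(content)
print(len(docs), docs[0]["_id"]["$oid"])   # prints: 2 5bf3cf511c9d44000088c376
PY
```

A file in this format typically comes from a JSON export (e.g. mongoexport --jsonArray) rather than from mongodump, whose .bson files are real BSON.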
QUESTION
I have a MongoDB server and I am using the mongodump command to create backups. I run mongodump --out ./mongo-backup, then tar -czf ./mongo-backup.tar.gz ./mongo-backup, then gpg --encrypt ./mongo-backup.tar.gz > ./mongo-backup.tar.gz.gpg, and send this file to the backup server.
My MongoDB database is 20GB according to the show dbs command, the mongodump backup directory is only 3.8GB, the gzipped tarball is only 118MB, and my gpg file is only 119MB in size.
How is it possible to reduce a 20GB database to a 119MB file? Is it fault tolerant?
I tried to create a new server (a clone of production) and enabled the firewall to ensure that no one could connect while I ran this backup procedure. I then created a fresh new server, imported the data, and found some differences. I ran the same commands from the mongo shell, use db1; db.db1_collection1.count(); and use db2; db.db2_collection1.count();, and the results are:
- 807843 vs. 807831 (db1.collection1 source server vs. db1.collection1 restored server)
- 3044401 vs. 3044284 (db2.collection1 source server vs. db2.collection1 restored server)
ANSWER
Answered 2017-Jul-05 at 20:48
If you have validated the counts and size of documents/collections in your restored data, this scenario is possible, although atypical in the ratios described.
My MongoDB database has 20GB with MongoDB show dbs command
This shows you the size of the files on disk, including preallocated space left over from the deletion of previous data. Preallocated space is available for reuse, but some MongoDB storage engines are more efficient at reusing it than others.
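The gap between on-disk size and logical data size can be inspected with db.stats(); a sketch assuming a local server and the db1 database from the question (mongosh shown here; the legacy mongo shell accepts the same --eval):

```shell
# dataSize is the logical size of the documents; storageSize is the
# (compressed, possibly fragmented) on-disk figure that "show dbs" reflects.
mongosh "mongodb://localhost:27017/db1" --quiet --eval '
  const s = db.stats();
  print(`dataSize: ${s.dataSize}, storageSize: ${s.storageSize}, indexSize: ${s.indexSize}`);
'
```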
MongoDB mongodump backup directory has only 3.8GB
The mongodump tool (as at v3.2.11, which you mention using) exports an uncompressed copy of your data unless you specify the --gzip option. This total should represent your actual data size but does not include storage used for indexes. The index definitions are exported by mongodump and the indexes will be rebuilt when the dump is reloaded via mongorestore.
With WiredTiger, the uncompressed mongodump output is typically larger than the size of the files on disk, which are compressed by default. For future backups I would consider using mongodump's built-in archiving and compression options to save yourself an extra step.
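For example, the dump/tar/gpg sequence from the question can be collapsed into a single pipeline with those options (the recipient key is a placeholder):

```shell
# --archive with no filename writes a single stream to stdout, and --gzip
# compresses it; gpg then encrypts the stream without an intermediate file.
mongodump --archive --gzip | gpg --encrypt --recipient backup@example.com > mongo-backup.archive.gz.gpg

# Restore path: decrypt, then feed the archive straight to mongorestore.
gpg --decrypt mongo-backup.archive.gz.gpg | mongorestore --archive --gzip
```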
Since your mongodump output is significantly smaller than the storage size, your data files are either highly fragmented or there is some other data that you have not accounted for, such as indexes or data in the local database. For example, if you have previously initialised this server as a replica set member, the local database would contain a large preallocated replication oplog which will not be exported by mongodump.
You can potentially reclaim excessive unused space by running the compact command on a WiredTiger collection. However, there is an important caveat: running compact on a collection will block operations on the database being operated on, so this should only be used during scheduled maintenance periods.
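compact is issued per collection against the database that owns it; a sketch using the collection name from the question:

```shell
# Runs compact on one collection; repeat per collection, and only during a
# maintenance window, since it blocks operations on that database.
mongosh "mongodb://localhost:27017/db1" --eval 'db.runCommand({ compact: "db1_collection1" })'
```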
MongoDB gzipped-tarball has only 118MB and my gpg file has only 119MB in size.
Since mongodump output is uncompressed by default, compressing can make a significant difference depending on your data. However, 3.8GB down to 119MB seems unreasonably good unless there is something special about your data (a large number of small collections? highly repetitive data?). I would double-check that your restored data matches the original in terms of collection counts, document counts, data size, and indexes.
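One way to run that comparison, assuming shell access to both servers (hostnames are placeholders):

```shell
# Print per-collection document counts on each server, then diff the two
# outputs to spot discrepancies like the ones reported in the question.
for host in source.example.com restored.example.com; do
  mongosh "mongodb://$host:27017/db1" --quiet --eval '
    db.getCollectionNames().forEach(c =>
      print(`${c}: ${db.getCollection(c).countDocuments()}`));
  ' > "counts-$host.txt"
done
diff counts-source.example.com.txt counts-restored.example.com.txt
```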
QUESTION
I'm running a shell script file from a Node.js file. I need to send some parameters to the shell script and use them in the shell script.
My js file
ANSWER
Answered 2017-May-19 at 10:34
Although it might be possible, the code mixture looks counter-intuitive. I suggest setting up an environment variable for the database name: Node.js can read it with process.env.dbname and the bash script can read it with $dbname.
If you insist on the current structure, simply do this:
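A minimal sketch of what that could look like with positional arguments (backup.sh and the argument value are hypothetical names, not the answer's original snippet):

```shell
# backup.sh picks up its parameter as $1, the first positional argument.
cat > backup.sh <<'SH'
#!/bin/sh
echo "backing up database: $1"
SH
chmod +x backup.sh

# From Node the equivalent spawn would be (hypothetical):
#   require("child_process").execFile("./backup.sh", ["mydb"], callback)
# The plain-shell equivalent of that spawn is:
./backup.sh mydb   # prints: backing up database: mydb
```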
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install mongo-backup
On a UNIX-like operating system, using your system's package manager is easiest, although the packaged Ruby version may not be the newest one. There is also an installer for Windows. Version managers help you switch between multiple Ruby versions on your system, while installers can be used to install one or more specific Ruby versions. Please refer to ruby-lang.org for more information.