mysqldump | A multi-threaded MySQL backup and restore tool | Continuous Backup library
kandi X-RAY | mysqldump Summary
Supports MySQL (4.1+) and MariaDB.
Top functions reviewed by kandi - BETA
- dumpTable dumps a single table.
- generateArgs generates the command-line arguments.
- Dumper is used to dump the database schema.
- EscapeString returns the escaped version of a string.
- Loader is the main entry point for xorm.
- loadFiles loads all the files from the given directory.
- restore restores data to the database.
- WriteFile writes data to a file.
- restoreSchema is used to restore a schema.
- init configures the command-line flags.
mysqldump Key Features
mysqldump Examples and Code Snippets
./mysqldump -h [HOST] -P [PORT] -u [USER] -p [PASSWORD] -db [DATABASE] -o [OUTDIR] -i [INDIR] -m [MYSQL_SOURCE] -exclude [EXCLUDE_TABLE]
-h string database host address
-P int database port (defaults to 3306 if not given)
-u string user name for the connection
-p
Community Discussions
Trending Discussions on mysqldump
QUESTION
I'm trying to create a Bash script based on variables. It works well when I'm using bash command line via docker image:
...ANSWER
Answered 2022-Apr-09 at 20:46
Bash variables are not expanded inside single quotes.
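A minimal illustration of the quoting rule (the variable name is arbitrary):

```shell
# Single quotes keep text literal; double quotes allow variable expansion.
NAME="world"
echo 'hello $NAME'   # prints: hello $NAME
echo "hello $NAME"   # prints: hello world
```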
QUESTION
Today I was shocked to find that one of my very valuable tables in a MySQL DB was almost wiped out, and I couldn't yet tell whether it was my mistake or someone else did it through some security vulnerability.
Anyway, I have a script that does daily backups of the entire MySQL database into a .sql.gz file, via mysqldump. I have hundreds of those files, and I want to find the exact day that table was wiped out.
Can I run a sort of COUNT against a table, but from a .sql file?
ANSWER
Answered 2022-Mar-27 at 22:47
No, there is no tool to query the .sql file as if it were a database. You have to restore that dump file, then you can query its data.
A comment above suggests counting the INSERT statements in the dump files, but that isn't reliable, because by default mysqldump outputs multiple rows per INSERT statement (the --extended-insert option, which is enabled by default). The number of rows per INSERT varies with the length of the data.
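To see why counting INSERT lines fails, here is a rough sketch (not from the original thread) of a heuristic that counts value tuples inside an extended INSERT line instead; it is still fragile, because the separator "),(" can also occur inside string data:

```python
import re

def naive_row_count(line):
    """Estimate rows in one extended-INSERT dump line (fragile heuristic)."""
    m = re.match(r"INSERT INTO .+ VALUES (.+);$", line)
    if not m:
        return 0
    # each "),(" separates two value tuples, so tuples = separators + 1
    return m.group(1).count("),(") + 1

line = "INSERT INTO `t` VALUES (1,'a'),(2,'b'),(3,'c');"
print(naive_row_count(line))  # 3
```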
I once had to solve a problem exactly like yours. A bug in my app caused some rows to vanish, but we didn't know exactly when, because we didn't notice the discrepancy until some days after it happened. We wanted to know exactly when it happened so we could correlate it to other logs and find out what caused the bug. I had daily backups, but no good way to compare them.
Here's how I solved it:
I had to restore every daily backup to a temporary MySQL instance in my development environment. Then I wrote a Perl script to dump all the integer primary key values from the affected table, so each id value corresponded to a pixel in a GIF image. If a primary key value was found in the table, I drew a white pixel in the image. If the primary key value was missing, I drew a black pixel in the position for that integer value.
The image filenames are named for the date of the backup they represent. I repeated the process for each day's backup, writing to a new image.
Then I used an image preview app to scroll slowly through my collection of images with the scroll wheel of my mouse. As expected, each image had a few more white pixels than the one before, representing records that were added to the database each day. Then at some point the data loss event happened, and the next image had a run of black pixels where the previous day's image had white pixels. I could therefore identify the day the data loss occurred.
After I identified the last backup that contained the data before it was dropped, I exported the rows that I needed to restore to the production system.
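The technique above can be sketched in Python (the original used Perl and GIF output; this hedged version writes a plain-text PGM image so only the standard library is needed, and the id set is hard-coded for illustration rather than queried from a restored backup):

```python
def ids_to_pgm(present_ids, max_id, width=100):
    """Render ids 1..max_id as pixels: white (255) if present, black (0) if missing."""
    height = (max_id + width - 1) // width
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            pk = y * width + x + 1
            row.append("255" if pk in present_ids else "0")
        rows.append(" ".join(row))
    # P2 is the plain (ASCII) grayscale PGM format: magic, size, max value, pixels
    return "P2\n%d %d\n255\n%s\n" % (width, height, "\n".join(rows))

# In practice present_ids would come from `SELECT id FROM affected_table`
# on each restored backup; here a small range with a simulated gap:
present = set(range(1, 501)) - {42, 43, 44}
with open("backup-2022-03-01.pgm", "w") as f:
    f.write(ids_to_pgm(present, 500))
```

One image per daily backup, named by date, then flipping through them reveals the day the white pixels turn black.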
QUESTION
I'm using a database with MySQL 5.7, and sometimes data needs to be updated using a mixture of scripts and manual editing. Because the people working with the database are usually not familiar with SQL, I'd like to export the data as a TSV, which could then be manipulated (for example with Python's pandas module) and then imported back. I assume the standard way would be to connect directly to the database, but using TSVs has some upsides in this situation, I think. I've been reading the MySQL docs and some Stack Overflow questions to find the best way to do this. I've found a couple of solutions; however, they are all somewhat inconvenient. I will list them below and explain my problems with them.
My question is: did I miss something, for example some helpful SQL commands or CLI options to help with this? Or are the solutions I found already the best when importing/exporting TSVs?
My example database looks like this:
Database: Export_test
Table: Sample
Field Type Null Key
id int(11) NO PRI
text_data text NO
optional int(11) YES
time timestamp NO
Example data:
...ANSWER
Answered 2022-Feb-16 at 22:13
Exporting and importing is indeed sort of clunky in MySQL.
One problem is that it introduces a race condition. What if you export data to work on it, then someone modifies the data in the database, then you import your modified data, overwriting your friend's recent changes?
If you say, "no one is allowed to change data until you re-import the data," that could cause an unacceptably long time where clients are blocked, if the table is large.
The trend is that people want the database to minimize downtime, and ideally to have no downtime at all. Advancements in database tools are generally made with this priority in mind, not so much to accommodate your workflow of taking the data out of MySQL for transformations.
Also, what if the database is large? Where do you store a 500GB TSV file once you export it, and does pandas even work on a file that size?
What most people do is modify data while it remains in the database. They use in-place UPDATE statements to modify data. If they can't do this in one pass (there's a practical limit of 4GB for a binary log event, for example), then they UPDATE more modest-size subsets of rows, looping until they have transformed the data on all rows of a given table.
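That looping approach can be sketched like this (the table, column, and transformation are hypothetical, and the statements are printed rather than executed against a server):

```python
def batch_update_statements(min_id, max_id, batch_size=10000):
    """Yield UPDATE statements covering [min_id, max_id] in primary-key chunks."""
    start = min_id
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        yield ("UPDATE Sample SET text_data = UPPER(text_data) "
               "WHERE id BETWEEN %d AND %d" % (start, end))
        start = end + 1

# Each statement would be run and committed separately, keeping every
# transaction (and therefore every binary log event) modest in size.
for stmt in batch_update_statements(1, 25000):
    print(stmt)
```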
QUESTION
I can't figure out how to execute mysqldump for particular tables with WHERE conditions. This is my instruction:
...ANSWER
Answered 2022-Jan-18 at 17:21
You can't write a separate WHERE condition for each table. There is a single WHERE condition, and it is applied to every table. So you can only reference a column if it exists in every table included in the backup.
You can run one mysqldump command for each table, but if you do that, you can't get a consistent backup. I mean, you can't use a lock or a transaction to ensure the backups include data from a single point in time. So if the database is in use during this time, it's possible tables you back up later will have references to new rows that have been created since you made the backup of earlier tables.
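For example, one shared condition across several tables might look like this (the database, table, and column names are hypothetical, and the clause must be valid for every listed table):

```shell
# The single --where clause is applied to each listed table.
mysqldump -u user -p --where="updated_at >= '2022-01-01'" \
    mydb orders order_items > partial.sql
```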
QUESTION
I'm trying to insert the date into the .sql file name when making a backup.
...ANSWER
Answered 2021-Dec-17 at 23:20
date "+%D--%T" outputs something like 12/18/21--00:19:21, but / is not allowed in file names on Linux/Unix. Use a format that contains no slashes.
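A slash-free timestamp format sidesteps the problem; for example:

```shell
# %Y-%m-%d--%H-%M-%S contains no "/" characters, so it is safe in a file name.
STAMP=$(date "+%Y-%m-%d--%H-%M-%S")
FILE="db_backup_${STAMP}.sql.gz"
echo "$FILE"   # e.g. db_backup_2021-12-18--00-19-21.sql.gz
```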
QUESTION
I've read a post saying that I would have to use the command below to dump my SQL files:
$ mysqldump -u [uname] -p db_name > db_backup.sql
However, my question is, I don't quite get where db_backup.sql comes from. Am I supposed to create the file, or is it created under whatever name I enter for the backup? Also, where can I find db_backup.sql afterwards?
More information: I am doing this on Mariadb/phpmyadmin on the xampp shell
...ANSWER
Answered 2021-Dec-13 at 02:36
Whichever directory you are in when you run that command will have db_backup.sql.
The statement:
mysqldump -u user -p db_name
generates output on the terminal screen, assuming you are SSH'ed into the server. Instead of having the output written to the screen, you can redirect it to a file. To do that, you append > db_backup.sql to the command.
If you are in directory /home/hyunjae/backups and run the command:
$ mysqldump -u [uname] -p db_name > db_backup.sql
You will see a new file created called db_backup.sql. If there's already a file with that name, it will be overwritten.
If you don't know what directory you are in, you can type pwd for the present working directory.
QUESTION
I want to increase the following parameters for mysql Ver 8.0.27-0ubuntu0.20.04.1 for Linux on x86_64 ((Ubuntu)).
ANSWER
Answered 2021-Dec-10 at 15:02
InnoDB options only apply to mysqld, so I would put the options you show into /etc/mysql/mysql.conf.d/mysqld.cnf, assuming that's the option file with a [mysqld] section.
But ultimately, all the option files will be read, and all options in the [mysqld] section, regardless of which file they appear in, will take effect.
So it's really up to you how you want to organize your own config. The only reason to separate the files is so you can find the options you're setting, or deploy different option files in different environments.
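For instance, a minimal fragment might look like this (the file path follows the Ubuntu layout; the option values are placeholders, not tuning recommendations):

```ini
# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
innodb_buffer_pool_size = 2G
innodb_log_file_size    = 512M
```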
QUESTION
I am upgrading an old Symfony application (v2.8) to Symfony 5.3. I am using the Process component, where arguments now have to be passed differently than before.
My previous code was like
...ANSWER
Answered 2021-Nov-29 at 10:45
I have found a workaround: Process::fromShellCommandline can be used to redirect the output. This is my solution:
QUESTION
I have a strange error here. The command I am executing is this:
...ANSWER
Answered 2021-Nov-28 at 19:48
By default, when you use mysqldump DB, the output includes table-creation statements but no CREATE DATABASE statement. It just assumes you have created an empty schema first.
So you could do this to create the schema first:
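The code sample is not included above; one common pattern (db_name and the credentials are placeholders) would be:

```shell
# Create the empty schema, then load the dump into it:
mysql -u user -p -e 'CREATE DATABASE IF NOT EXISTS db_name'
mysql -u user -p db_name < db_backup.sql

# Or dump with --databases so the CREATE DATABASE statement is included:
mysqldump -u user -p --databases db_name > db_backup.sql
```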
QUESTION
We do database dumps of Shopware 6 databases. The system stores UUIDs in binary(16) fields.
Now when dumping databases with the --hex-blob option, the data columns are written properly as hex (0x12345...) but we saw that default values are still binary data (see cms_page_version_id).
ANSWER
Answered 2021-Nov-15 at 16:31
There is no such option to mysqldump. The --hex-blob option only applies to data values.
Mysqldump gets the CREATE TABLE statement using SHOW CREATE TABLE, which in turn relies on the INFORMATION_SCHEMA.
A bug was reported in 2013 that there's effectively no way to get column DEFAULT values from this method if the value is binary and contains non-printable characters. https://bugs.mysql.com/bug.php?id=71172
The bug report was acknowledged, but so far it has not been fixed. Feel free to upvote the bug using the "Affects Me" button.
Or try to get MariaDB to fix it in their fork, independently of the upstream MySQL code.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported