percona-toolkit | Percona Toolkit
kandi X-RAY | percona-toolkit Summary
Percona Toolkit is a collection of advanced command-line tools used by Percona support staff to perform a variety of MySQL and system tasks that are too difficult or complex to perform manually. These tools are ideal alternatives to private or "one-off" scripts because they are professionally developed, formally tested, and fully documented. They are also fully self-contained, so installation is quick and easy and no libraries are installed. Percona Toolkit is developed and supported by Percona Inc. For more information and other free, open-source software developed by Percona, visit
Trending Discussions on percona-toolkit
QUESTION
When I run my Docker image on Windows 10, I get this error:
...ANSWER
Answered 2018-Jul-31 at 19:28
In my case I had to change the line endings from CRLF to LF for the run.sh file, and the error was gone.
I hope this helps,
Kirsten
QUESTION
Hi, I currently have a table with 100 HASH partitions. I have decided that this now needs to be increased to 1000 partitions for future scaling.
Do I need to remove the partitions from the table and then add the 1000 partitions afterwards, or is there a way to add the extra 900 partitions to the already-partitioned table?
The way I partitioned was using the below code.
...ANSWER
Answered 2019-Apr-17 at 15:30
PARTITION BY HASH is virtually useless. I don't expect it to help you with 100 partitions, nor with 1000.
You get more bang for your buck by arranging to have venue_id as the first column in the PRIMARY KEY.
Does the query always have a single venue_id? (If not, the options get messier.) For now, I will assume you always have WHERE venue_id = constant.
You have a multi-dimensional indexing problem. INDEXes are only one dimension, so things get tricky. However, partitioning can be used to sort of get a two-dimensional index.
Let's pick day_epoch as the partition key and use PARTITION BY RANGE(day_epoch). (If you change that from a 4-byte INT to a 3-byte DATE, then use PARTITION BY RANGE(TO_DAYS(day_epoch)).)
Then let's decide on the PRIMARY KEY. Note: when adding or removing partitioning, the PK should be re-thought. Keep in mind that a PK is a unique index, and the data is clustered on the PK. (However, uniqueness is not guaranteed across partitions.)
So...
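The answer's actual DDL is not captured above; the following is only a hedged sketch of the advice (RANGE partitioning on day_epoch, venue_id first in the clustered PRIMARY KEY), with hypothetical table and column definitions and made-up partition boundaries:

```sql
-- Hypothetical table following the answer's advice. Every UNIQUE key
-- (including the PK) must contain the partitioning column, so day_epoch
-- is part of the PRIMARY KEY.
CREATE TABLE events (
    venue_id  INT UNSIGNED NOT NULL,
    day_epoch INT UNSIGNED NOT NULL,           -- days since epoch (4-byte INT)
    event_id  BIGINT UNSIGNED NOT NULL,
    payload   VARCHAR(255),
    PRIMARY KEY (venue_id, day_epoch, event_id)  -- clustered; venue_id first
)
PARTITION BY RANGE (day_epoch) (
    PARTITION p2019q1 VALUES LESS THAN (17987),  -- example boundaries only
    PARTITION p2019q2 VALUES LESS THAN (18078),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```

With venue_id leading the PRIMARY KEY, a query of the form WHERE venue_id = ? AND day_epoch BETWEEN ? AND ? can prune to a few partitions and then read one contiguous clustered range in each, which is the "two-dimensional index" effect the answer describes.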
QUESTION
We have two very large tables in our MySQL (MariaDB) database. Table_1 holds a many-to-many map. It has an auto-incremented primary key and a composite key of two columns. Table_2 refers to the primary key of Table_1. We want to fix this obvious error in design by:
- Use a composite primary key on Table_1
- Add the two columns to Table_2
- Populate the composite key in Table_2 by copying data from Table_1, and create index on it.
- Preferably delete the auto incremented key column from both tables.
These tables have ~300M rows and are in the ~10GB range in size. We need to make these updates within a ~6-hour service window. I'm investigating how to do this efficiently and running trials on a replica DB. So far I have not tried to run anything against the actual data, because ordinary scripts would be insufficient. I'm not an experienced DB admin, so I need some light shed on this. My question is: what would be the best approach/tips to do this efficiently?
Things I have attempted so far
I read about the new instant ADD COLUMN feature, but our production DB is on MariaDB version 10.0, which is older.
I have followed the suggestions in this answer and ran the script below on the latest DB version with instant ADD COLUMN support (the ALTER TABLE was instant). The table had ~50M rows (1/6th of the original). It took about two hours, and that excludes creating the new indexes, so this would not be fast enough.
...ANSWER
Answered 2019-Jan-26 at 22:29
Plan A: Use Percona's tool: pt-online-schema-change.
Plan B: Use a competing product: gh-ost.
Plan C: Don't use UPDATE; that is the killer. Instead, rebuild the table(s) in a straightforward way, then use RENAME TABLE to swap the new version into place.
Partitioning is unlikely to help in any way. Daniel's link helps with doing a lengthy UPDATE, but trades off time (it takes longer) versus invasiveness (which is not an issue because you have a maintenance window).
Some more details on Plan C (which I prefer for this case):
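The answer's details were not captured here; as a minimal sketch of the rebuild-and-rename approach, assuming hypothetical table and column names (col_a/col_b standing in for the two mapped columns, table_1_id for the old foreign key):

```sql
-- Build a replacement for Table_2 with the composite key copied in,
-- writing into a brand-new table instead of UPDATEing rows in place.
CREATE TABLE Table_2_new LIKE Table_2;
ALTER TABLE Table_2_new
    ADD COLUMN col_a INT NOT NULL,
    ADD COLUMN col_b INT NOT NULL,
    ADD INDEX idx_composite (col_a, col_b);

-- One sequential pass: old columns plus the two key columns from Table_1.
INSERT INTO Table_2_new
SELECT t2.*, t1.col_a, t1.col_b
FROM Table_2 AS t2
JOIN Table_1 AS t1 ON t1.id = t2.table_1_id;

-- Atomically swap the tables; keep the old one until the result is verified.
RENAME TABLE Table_2 TO Table_2_old,
             Table_2_new TO Table_2;
```

The point of Plan C is that the expensive work happens in a bulk INSERT ... SELECT on a table nobody is reading, and the cutover itself (RENAME TABLE) is near-instant.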
QUESTION
Suppose I have two queries:
...ANSWER
Answered 2018-Aug-12 at 18:09Yes, you're right, the order of columns in the select-list of a query makes no difference to performance.
But treating those two queries as the same fingerprint could obscure the source of the query.
Suppose you have those two queries, each one is in a different part of your application. You might like to know which one is responsible for 40% of your query load and which is responsible for only 2% of your query load.
It would also be much more complex to produce the fingerprint of a query if it had to detect the commutativity of columns as you describe. That would also apply to boolean terms in the WHERE clause, and to some extent order of joined tables in the FROM clause, and order of unioned queries in a UNION, too.
The code for fingerprint() is only about 100 lines of pattern-matching implemented with regular expressions. Doing what you describe would require a full-blown SQL parser. See the code here: https://github.com/percona/percona-toolkit/blob/3.0/lib/QueryRewriter.pm
QUESTION
I am trying to keep my development database up-to-date with data from my production database. I discovered pt-table-sync in the Percona Toolkit.
When I run it, I frequently get the error Cannot add or update a child row: a foreign key constraint fails. This happens on tables that are frequently updated and have foreign keys.
Is there a way to make use of this tool that avoids this problem? Some other tool I'm missing? The database is quite large. The largest table has nearly eight million rows.
...ANSWER
Answered 2018-Apr-03 at 20:20You can temporarily disable foreign key checks on your local dev database:
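The answer's snippet was not captured here; as a hedged sketch, disabling foreign key checks for a session is standard MySQL/MariaDB and typically looks like this:

```sql
-- Run on the dev database before syncing. Foreign key checks are
-- skipped for this session only, so rows copied out of parent/child
-- order no longer fail the constraint.
SET FOREIGN_KEY_CHECKS = 0;

-- ... run the sync / load the rows here ...

-- Re-enable checks once the data is consistent again.
SET FOREIGN_KEY_CHECKS = 1;
```

Because pt-table-sync opens its own connections, the session-level setting may not reach them; the exact way the answer wired this up is not recoverable from this page.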
QUESTION
I have a simple master->slave setup with MariaDB:
Master: Ubuntu 16.04 LTS with MariaDB 10.2.8 and percona-toolkit 3.0.4
Slave: Ubuntu 16.04 LTS with MariaDB 10.2.7
Replication is running fine and now I want to check if data is identical between master and slave.
I installed percona-toolkit on the master and created a checksum user:
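The user-creation statements were not captured here; as a hedged sketch, the grants pt-table-checksum typically needs look roughly like the following (the exact privilege list is an assumption; check the tool's documentation for your version):

```sql
-- Hypothetical checksum user; host and password are placeholders.
CREATE USER 'checksum'@'localhost' IDENTIFIED BY 'secret';

-- Read access plus replication-related privileges for checksumming.
GRANT SELECT, PROCESS, SUPER, REPLICATION SLAVE ON *.* TO 'checksum'@'localhost';

-- pt-table-checksum stores its results in percona.checksums.
GRANT ALL ON percona.* TO 'checksum'@'localhost';
```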
...ANSWER
Answered 2017-Sep-11 at 12:44I noticed a new bug-report during the weekend, and I have confirmed today that this is indeed the problem I am experiencing.
The workaround is to add --set-vars binlog_format=statement.
When I set this option, the difference reveals itself after the second run.
During the first run the checksum table on the slave changes from:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network