percona-toolkit | Percona Toolkit

 by percona | Perl | Version: v3.5.3 | License: GPL-2.0

kandi X-RAY | percona-toolkit Summary

percona-toolkit is a Perl library. It has no reported bugs or vulnerabilities, is distributed under a strong-copyleft license (GPL-2.0), and has medium support. You can download it from GitHub.

Percona Toolkit is a collection of advanced command-line tools used by Percona support staff to perform a variety of MySQL and system tasks that are too difficult or complex to perform manually. These tools are ideal alternatives to private or "one-off" scripts because they are professionally developed, formally tested, and fully documented. They are also fully self-contained, so installation is quick and easy and no extra libraries need to be installed. Percona Toolkit is developed and supported by Percona Inc. For more information and other free, open-source software developed by Percona, visit the Percona website.

            Support

              percona-toolkit has a medium active ecosystem.
              It has 806 star(s) with 289 fork(s). There are 106 watchers for this library.
              It had no major release in the last 6 months.
              percona-toolkit has no issues reported. There are 26 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of percona-toolkit is v3.5.3.

            Quality

              percona-toolkit has no bugs reported.

            Security

              percona-toolkit has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            License

              percona-toolkit is licensed under the GPL-2.0 License. This license is Strong Copyleft.
              Strong Copyleft licenses enforce sharing, and you can use them when creating open source projects.

            Reuse

              percona-toolkit releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            percona-toolkit Key Features

            No Key Features are available at this moment for percona-toolkit.

            percona-toolkit Examples and Code Snippets

            No Code Snippets are available at this moment for percona-toolkit.

            Community Discussions

            QUESTION

            standard_init_linux.go:190: exec user process caused "no such file or directory" - Docker
            Asked 2020-May-27 at 18:25

            When I run my Docker image on Windows 10, I get this error:

            ...

            ANSWER

            Answered 2018-Jul-31 at 19:28

            In my case I had to change the line endings of the run.sh file from CRLF to LF, and the error was gone.
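Concretely, the conversion can be done with GNU sed (the file name run.sh comes from the question):

```shell
# Strip the trailing carriage return from every line, in place. With CRLF
# endings the kernel looks for an interpreter literally named "/bin/sh\r",
# which produces the "no such file or directory" error.
sed -i 's/\r$//' run.sh
```

`dos2unix run.sh` does the same job if it is installed, and `file run.sh` will report "CRLF line terminators" when the problem is present.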

            I hope this helps,
            Kirsten

            Source https://stackoverflow.com/questions/51508150

            QUESTION

            Adding Extra HASH partitions to already HASH partitioned table
            Asked 2019-Apr-17 at 15:30

            Hi, I currently have a table with 100 HASH partitions. I have decided that this now needs to be increased to 1,000 partitions for future scaling.

            Do I need to remove the partitions from the table and then add the 1,000 partitions afterwards, or is there a way to add the extra 900 partitions to the already-partitioned table?

            The way I partitioned was using the below code.

            ...

            ANSWER

            Answered 2019-Apr-17 at 15:30

            PARTITION BY HASH is virtually useless. I don't expect it to help you with 100 partitions, nor with 1000.

            You get more bang for your buck by arranging to have venue_id as the first column in the PRIMARY KEY.

            Does the query always have a single venue_id? (If not, the options get messier.) For now, I will assume you always have WHERE venue_id = constant.

            You have a multi-dimensional indexing problem. INDEXes are only one dimension, so things get tricky. However, partitioning can be used to sort of get a two-dimensional index.

            Let's pick day_epoch as the partition key and use PARTITION BY RANGE(day_epoch). (If you change that from a 4-byte INT to a 3-byte DATE, then use PARTITION BY RANGE(TO_DAYS(day_epoch))).

            Then let's decide on the PRIMARY KEY. Note: When adding or removing partitioning, the PK should be re-thought. Keep in mind that a PK is a unique index. And the data is clustered on the PK. (However, uniqueness is not guaranteed across partitions.)

            So...
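The advice above can be sketched as DDL. The table and column names beyond venue_id and day_epoch are assumptions, since the original schema is elided from the question:

```sql
-- Sketch only: one RANGE partition per month on day_epoch (assumed DATE),
-- with venue_id leading the PRIMARY KEY so one venue's rows cluster together.
CREATE TABLE hits (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
  venue_id  INT UNSIGNED NOT NULL,
  day_epoch DATE NOT NULL,
  PRIMARY KEY (venue_id, day_epoch, id),  -- the partition column must appear in every unique key
  KEY (id)                                -- keeps the AUTO_INCREMENT column indexed
)
PARTITION BY RANGE (TO_DAYS(day_epoch)) (
  PARTITION p201904 VALUES LESS THAN (TO_DAYS('2019-05-01')),
  PARTITION p201905 VALUES LESS THAN (TO_DAYS('2019-06-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```

A query with WHERE venue_id = ? AND day_epoch BETWEEN ? AND ? then prunes to a few partitions and scans a single clustered range of rows inside each, which is the "two-dimensional index" effect described above.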

            Source https://stackoverflow.com/questions/55336322

            QUESTION

            How to alter and update large table to add composite key columns form another table
            Asked 2019-Jan-26 at 22:29

            We have two very large tables in our MySQL (MariaDB) database. Table_1 holds a many-to-many map. It has an auto-incremented primary key and a composite key of two columns. Table_2 refers to the primary key of Table_1. We want to fix this obvious error in design by:

            1. Use a composite primary key on Table_1
            2. Add the two columns to Table_2
            3. Populate the composite key in Table_2 by copying data from Table_1, and create index on it.
            4. Preferably delete the auto incremented key column from both tables.

            These tables have ~300M rows and are in the ~10GB range in size. We need to make these updates within a ~6-hour service window. I'm investigating how to do this efficiently and running trials on a replica DB. So far I have not tried to run anything against the actual data, because ordinary scripts would be insufficient. I'm not an experienced DB admin, so I need some guidance. My question is: what would be the best approach/tips to do this efficiently?

            Things I have attempted so far

            I read about the new instant add column feature, but our production DB is on MariaDb version 10.0, which is older.

            I have followed the suggestions in this answer and ran the script below on the latest DB version with instant add-column support (the ALTER TABLE was instant). The table had ~50M rows (1/6th of the original). It took about two hours, and that excludes creating the new indexes, so this would not be sufficient.

            ...

            ANSWER

            Answered 2019-Jan-26 at 22:29

            Plan A: Use Percona's tool: pt-online-schema-change.

            Plan B: Use a competing product: gh-ost.

            Plan C: Don't use UPDATE, that is the killer. Instead, rebuild the table(s) in a straightforward way, then use RENAME TABLE to swap the new version into place.

            Partitioning is unlikely to help in any way. Daniel's link helps with doing a lengthy UPDATE, but trades off time (it takes longer) versus invasiveness (which is not an issue because you have a maintenance window).

            Some more details into Plan C (which I prefer for this case):
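A sketch of the rebuild-and-rename pattern that Plan C names, under assumed names (Table_1(id, a, b), Table_2 referencing Table_1.id); the answer's own details are truncated here, so treat this as an illustration, not the author's exact procedure:

```sql
-- Build a corrected copy of Table_2, then swap it in atomically.
CREATE TABLE table_2_new LIKE table_2;
ALTER TABLE table_2_new
  ADD COLUMN a INT NOT NULL,
  ADD COLUMN b INT NOT NULL,
  ADD KEY (a, b);

-- One straightforward INSERT..SELECT instead of a row-by-row UPDATE.
INSERT INTO table_2_new
SELECT t2.*, t1.a, t1.b
FROM table_2 AS t2
JOIN table_1 AS t1 ON t1.id = t2.table_1_id;

-- Atomic swap; keep the old table around until the result is verified.
RENAME TABLE table_2 TO table_2_old, table_2_new TO table_2;
```

The INSERT..SELECT is a single sequential pass and the secondary index is built as part of it, which is why this tends to fit a maintenance window where a giant UPDATE does not.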

            Source https://stackoverflow.com/questions/54322773

            QUESTION

            Percona query fingerprinting - why does order matter in select columns?
            Asked 2018-Aug-12 at 18:09

            Suppose I have two queries:

            ...

            ANSWER

            Answered 2018-Aug-12 at 18:09

            Yes, you're right, the order of columns in the select-list of a query makes no difference to performance.

            But treating those two queries as the same fingerprint could obscure the source of the query.

            Suppose you have those two queries, each one is in a different part of your application. You might like to know which one is responsible for 40% of your query load and which is responsible for only 2% of your query load.

            It would also be much more complex to produce the fingerprint of a query if it had to detect the commutativity of columns as you describe. That would also apply to boolean terms in the WHERE clause, and to some extent order of joined tables in the FROM clause, and order of unioned queries in a UNION, too.

            The code for fingerprint() is only about 100 lines of code to do pattern-matching implemented with regular expressions. Doing what you describe would require a full-blown SQL parser. See the code here: https://github.com/percona/percona-toolkit/blob/3.0/lib/QueryRewriter.pm
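For illustration, the toolkit exposes this code as the pt-fingerprint tool. The two hypothetical queries below collapse their literals to ? but keep their distinct column orders, and therefore distinct fingerprints:

```sh
# Same WHERE clause after literals become ?, but different select-lists,
# so pt-fingerprint reports two different fingerprints.
pt-fingerprint --query "SELECT a, b FROM t WHERE id = 123"
pt-fingerprint --query "SELECT b, a FROM t WHERE id = 456"
```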

            Source https://stackoverflow.com/questions/51675926

            QUESTION

            MySQL Replication - Percona Toolkit Table Sync - Foreign Key Constraint Fails
            Asked 2018-Apr-03 at 20:20

            I am trying to keep my development database up-to-date with data from my production database. I discovered pt-table-sync in the Percona Toolkit.

            When I run it, I frequently get the error "Cannot add or update a child row: a foreign key constraint fails". This happens on tables that are frequently updated and have foreign keys.

            Is there a way to make use of this tool that avoids this problem? Some other tool I'm missing? The database is quite large. The largest table has nearly eight million rows.

            ...

            ANSWER

            Answered 2018-Apr-03 at 20:20

            You can temporarily disable foreign key checks on your local dev database:
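For example (a sketch, assuming you run this on the dev server; pt-table-sync opens its own connections, so the global scope is what matters, since new sessions inherit the global value):

```sql
-- Disable FK checks globally on the dev server for the duration of the sync.
SET GLOBAL foreign_key_checks = 0;
-- ... run pt-table-sync now; its sessions pick up the global default ...
SET GLOBAL foreign_key_checks = 1;
```

This is safe enough on a throwaway dev copy, but leaves the window open for genuinely orphaned rows, so re-enable the checks as soon as the sync finishes.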

            Source https://stackoverflow.com/questions/49636218

            QUESTION

            pt-table-checksum not detecting diffs
            Asked 2017-Sep-11 at 12:44

            I have a simple master->slave setup with MariaDB:

            Master: Ubuntu 16.04 LTS with MariaDB 10.2.8 and percona-toolkit 3.0.4

            Slave: Ubuntu 16.04 LTS with MariaDB 10.2.7

            Replication is running fine and now I want to check if data is identical between master and slave.

            I installed percona-toolkit on the master and created a checksum user:

            ...

            ANSWER

            Answered 2017-Sep-11 at 12:44

            I noticed a new bug report over the weekend, and I have confirmed today that this is indeed the problem I am experiencing.

            The workaround is to add --set-vars binlog_format=statement.

            When I set this option, the difference reveals itself after the second run.
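The invocation would look roughly like this (host and user names are placeholders):

```sh
# Force statement-based binlogging for the checksum session so the
# checksum statements replicate to the slave verbatim.
pt-table-checksum --set-vars binlog_format=statement \
  h=master-host,u=checksum_user --ask-pass
```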

            During the first run the checksum table on the slave changes from:

            Source https://stackoverflow.com/questions/46099657

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install percona-toolkit

            You can download it from GitHub.

            Support

            Run man percona-toolkit to see a list of installed tools, then man <tool> to read the embedded documentation for a specific tool. You can also read the documentation online at http://www.percona.com/software/percona-toolkit/.
            CLONE
          • HTTPS

            https://github.com/percona/percona-toolkit.git

          • CLI

            gh repo clone percona/percona-toolkit

          • SSH

            git@github.com:percona/percona-toolkit.git
