gh-ost | GitHub's Online Schema Migration Tool for MySQL | Data Migration library
kandi X-RAY | gh-ost Summary
GitHub's Online Schema Migrations for MySQL
Community Discussions
Trending Discussions on gh-ost
QUESTION
I was wondering what happens to the binlog when you run an ALTER using pt-online-schema-change or gh-ost.

For pt-online-schema-change, I have read that it copies the table and uses some triggers to apply the changes. I don't know if it creates a table from scratch with the new schema, or if it just applies the ALTER after copying the table.

If it alters the table from the beginning, then what happens to the binlog? Are the positions different from the previous binlog?
ANSWER
Answered 2022-Feb-21 at 22:53

pt-online-schema-change copies the table structure and applies the desired ALTER TABLE to the zero-row table. This is virtually instantaneous. Then it creates triggers to mirror changes against the original table. Then it starts copying old data from the original table to the new table.
What happens to the binlog? It gets quite huge. The CREATE TABLE and ALTER TABLE and CREATE TRIGGER are pretty small. DDL is always statement-based in the binlog. The DML changes created by the triggers and the process of copying old data become transactions in the binlog. We prefer row-based binlogs, so these end up being pretty bulky.
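The trigger mechanism described above can be sketched roughly like this. This is a simplified illustration, not the exact DDL that pt-online-schema-change generates; the table and column names (`t`, `_t_new`, `id`, `col`) are placeholders:

```sql
-- Simplified sketch of the kind of triggers pt-online-schema-change
-- creates on the original table `t` to mirror changes into the
-- shadow table `_t_new` (names here are illustrative only).

-- New rows written to the original table are copied to the shadow table.
-- REPLACE handles both rows not yet copied and rows already present:
CREATE TRIGGER pt_osc_t_ins AFTER INSERT ON t
FOR EACH ROW
  REPLACE INTO _t_new (id, col) VALUES (NEW.id, NEW.col);

-- Updates are mirrored the same way:
CREATE TRIGGER pt_osc_t_upd AFTER UPDATE ON t
FOR EACH ROW
  REPLACE INTO _t_new (id, col) VALUES (NEW.id, NEW.col);

-- Deletes remove the row from the shadow table if it was already copied:
CREATE TRIGGER pt_osc_t_del AFTER DELETE ON t
FOR EACH ROW
  DELETE IGNORE FROM _t_new WHERE id = OLD.id;
```

Because these triggers fire for every DML statement against the original table during the copy, each change is effectively written twice, which is part of why the binlog grows so much.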
gh-ost is similar, but without the triggers. gh-ost reads the binlog to find events that applied to the old table, and it applies those to the new table. Meanwhile, it also copies old data. Together these actions result in a similar volume of extra events in the binlog as occur when using pt-online-schema-change.
So you should check the amount of free disk space before you begin either of these online schema change operations. It will expand the binlogs approximately in proportion to the amount of data to be copied. And of course you need to store two copies of the whole table — the original and the altered version — temporarily, until the original table can be dropped at the end of the process.
I have had to run pt-online-schema-change on large tables (500GB+) when I had a disk that was close to being full. It causes some tense moments. I had to PURGE BINARY LOGS periodically to get some more free space, because the schema change would fill the disk to 100% if I didn't! This is not a situation I recommend.
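Checking binlog disk usage and purging old logs, as described above, uses standard MySQL statements; the date and file name below are just examples:

```sql
-- List the current binary log files and their sizes:
SHOW BINARY LOGS;

-- Check how long logs are kept automatically (MySQL 5.x name shown):
SHOW VARIABLES LIKE 'expire_logs_days';

-- Manually purge logs older than a given point in time
-- (only safe if no replica or backup tool still needs them):
PURGE BINARY LOGS BEFORE '2022-02-01 00:00:00';

-- Or purge everything up to (not including) a specific file:
PURGE BINARY LOGS TO 'mysql-bin.000123';
```

On managed platforms such as RDS you typically adjust binlog retention through the platform's own settings rather than running PURGE directly.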
QUESTION
I have a system that executes ALTER queries, and I want to test it. I'm looking for a way to simulate a long-running ALTER query so that I can test panic handling, resource usage, concurrency, and so on while it is running.

Is there any existing way to simulate a long-running ALTER query?

I'm using gh-ost for ALTER execution.
ANSWER
Answered 2021-Jul-28 at 17:22

Here's what I do when I want to test a long-running ALTER TABLE:
Create a table.
Fill it with a few million rows of random data, until it's large enough that ALTER TABLE takes a few minutes. How many rows are required depends on the speed of your computer.
Run ALTER TABLE on it.
I have not found a better solution, and I've been using MySQL since 2001.
Here's a trick for filling lots of rows without needing a client app or script:
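The snippet itself did not survive in this copy of the answer. One common version of the trick, assuming InnoDB and a throwaway table named `t` (both assumptions, not taken from the original), is to seed a single row and then repeatedly double the table by inserting a select from itself:

```sql
-- Illustrative version of the row-filling trick; not necessarily the
-- exact snippet from the original answer. Table name `t` is made up.
CREATE TABLE t (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  filler VARCHAR(255)
);

-- Seed one row:
INSERT INTO t (filler) VALUES (RAND());

-- Each run of this statement doubles the row count, so about 21
-- repetitions of it take the table from 1 row to ~2 million rows:
INSERT INTO t (filler) SELECT RAND() FROM t;
```

Doubling grows the table exponentially, so only a couple of dozen statements are needed to reach millions of rows, with no client-side scripting at all.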
QUESTION
Info: I am using an AWS RDS MySQL 5.6.34 (500GB) instance (without a replica, just the master).
Note: Binlog is enabled and set to Row.
Target: Modify a column field_1 from enum to tinyint.
Extra info: I am using a Rails application, so every time I wanted to add a value to the enum I had to write a migration. Converting the enum field to tinyint lets me add or delete an enum value without writing a migration, using Active Enum.
Other info: I also tried LHM, but the RDS instance ran out of memory at 93%.
Database info before running gh-ost:
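For reference, a gh-ost invocation for this kind of alter looks roughly like this. The host, credentials, database, and table names below are placeholders, and the exact flag set depends on your topology (this sketch assumes running directly against the master, as on a replica-less RDS instance):

```shell
# Sketch of a gh-ost run for the alter described above.
# Host, user, password, database, and table names are placeholders.
gh-ost \
  --host=my-rds-instance.example.com \
  --user=ghost_user \
  --password=... \
  --database=mydb \
  --table=mytable \
  --alter="MODIFY field_1 TINYINT NOT NULL" \
  --allow-on-master \
  --chunk-size=1000 \
  --verbose \
  --execute
```

Without `--execute`, gh-ost performs a dry run, which is a useful first step before committing to a 500GB copy.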
ANSWER
Answered 2020-Dec-02 at 06:50

I created the MySQL DB from a backup of the production DB.
Production had the innodb_file_format parameter set to Barracuda.
The new environment had the innodb_file_format parameter set to Antelope.
The ROW_FORMAT for the table in production was COMPRESSED.
Unfortunately, the Antelope file format doesn't support ROW_FORMAT=COMPRESSED.
If I had looked more into the details of information_schema, I could have resolved it earlier!
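The check the answer alludes to can be done with standard MySQL queries like these; the schema and table names are placeholders:

```sql
-- File format configured on the instance
-- (Antelope vs. Barracuda; this variable exists in MySQL 5.6):
SHOW VARIABLES LIKE 'innodb_file_format';

-- Row format of a specific table, from information_schema
-- ('mydb' and 'mytable' are placeholder names):
SELECT TABLE_SCHEMA, TABLE_NAME, ROW_FORMAT, CREATE_OPTIONS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';
```

A ROW_FORMAT of COMPRESSED on the source combined with innodb_file_format=Antelope on the target is exactly the mismatch described above.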
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported