kandi X-RAY | go-mysql-elasticsearch Summary
go-mysql-elasticsearch is a service that automatically syncs your MySQL data into Elasticsearch. It uses mysqldump to fetch the original data first, then syncs incrementally via the MySQL binlog.
Trending Discussions on go-mysql-elasticsearch
I have seen many similar question to this topic (including this one, which talks about how ElasticSearch version 6 has overcome many of its limitations as the primary data store), but I am still not clear on the following:
I am creating an online shopping website and I am using MySQL as my DB.
This is a simplified version of my DB (Users can post Product on the website for sale)
I am learning about ElasticSearch and I want to use it to search the products on my website. I don't need User and ProductReview to be searched - only Product table.
I can think of 2 solutions to achieve this:
- Periodically copy Product table from MySQL to ES
- Keep User and ProductReview in MySQL and Product in ES
As far as I know, if I use option 1, then I can use go-mysql-elasticsearch to sync ES with MySQL: Is this a good solution?
I am more tempted to use option 2, as it is easier and I don't need to worry about data synchronization. What concerns me about this option is:
- Is ES reliable to be the primary source of data?
- At some point in time, if I have to modify the Product table structure, would I be able to do so without deleting and recreating the Product Index?
- In case of MySQL, I normally take a backup of Prod DB and Restore it on Test DB... Is it still possible to do a Backup and Restore from Prod to Test using ES?
I have no experience with ES/NoSQL and would appreciate any advice.
Answered 2018-Apr-03 at 12:43
Let me start by stating that Elasticsearch is NOT a database, in the strict sense of the term, and should ideally not be used as such. However, nothing prevents you from doing it (and many people are doing it) and according to the good folks at Elastic, they won't ever strive to try and make ES a real database. The main goal of ES is to be a fast and reliable search and analytics engine, period.
If you can, you should always keep another primary source of truth from which you can easily (re-)build your ES indices anytime if something goes south.
In your case, option 1 seems to be the way to go since all you want to do is to allow users to search your products, so there's no point in synching the other tables in ES.
Option 2 sounds appealing, but only if you decide to go only with ES, which you really shouldn't if you want to rely on transactions (ES doesn't have transactional support). Another thing you need to know is that if you only have your data in ES and your index gets corrupted for some reason (during an upgrade, a bug in ES, a bug in your code, etc), your data is gone and your business will suffer.
So to answer your questions more precisely:
ES can be reliable as a primary source of truth provided you invest enough effort and money. However, you probably don't have millions of products and users (yet), so running an HA cluster with a minimum of three nodes just to search a few thousand products with a few fields doesn't seem like money well spent.
When your Product table changes, it is easy to reindex it into ES (even in near real time), and with a few thousand products the process is fast enough that you don't really have to worry about it. If the sync fails for some reason, you can run it again without wasting much time. With the zero-downtime alias technique, you can reindex without impacting your users.
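The alias technique mentioned above works by reindexing into a fresh index and then atomically repointing an alias via the `_aliases` endpoint. A sketch of building that request body (index and alias names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// aliasSwapBody builds the request body for POST /_aliases that atomically
// points the alias at the freshly built index while detaching the old one.
// Because both actions run in one request, searches against the alias never
// see an empty or half-built index.
func aliasSwapBody(alias, oldIndex, newIndex string) ([]byte, error) {
	body := map[string][]map[string]map[string]string{
		"actions": {
			{"remove": {"index": oldIndex, "alias": alias}},
			{"add": {"index": newIndex, "alias": alias}},
		},
	}
	return json.Marshal(body)
}

func main() {
	b, _ := aliasSwapBody("products", "products_v1", "products_v2")
	// The payload would be sent as: POST http://<es-host>:9200/_aliases
	fmt.Println(string(b))
}
```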
ES also provides snapshot/restore capabilities so that you can take a snapshot of PROD and install it in your TEST cluster with a single REST call.
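The snapshot/restore flow referred to above is two REST calls, assuming a snapshot repository is already registered on both clusters. A sketch of the endpoints involved (host, repository, and snapshot names are hypothetical):

```go
package main

import "fmt"

// snapshotURL builds the call that takes a snapshot on the PROD cluster;
// wait_for_completion makes the request block until the snapshot finishes.
func snapshotURL(host, repo, snap string) string {
	return fmt.Sprintf("PUT %s/_snapshot/%s/%s?wait_for_completion=true", host, repo, snap)
}

// restoreURL builds the call that restores the same snapshot on the TEST
// cluster, whose repository must point at the same underlying storage.
func restoreURL(host, repo, snap string) string {
	return fmt.Sprintf("POST %s/_snapshot/%s/%s/_restore", host, repo, snap)
}

func main() {
	fmt.Println(snapshotURL("http://prod-es:9200", "backups", "snap_1"))
	fmt.Println(restoreURL("http://test-es:9200", "backups", "snap_1"))
}
```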
I have a need to index data from RDS (MySQL) and S3 (documents) into Elasticsearch in order to perform fulltext searches.
I've noted that AWS Kinesis seems ideal for this, and can listen to both S3 and MySQL, streaming the formatted results into Elasticsearch.
What I don't understand, however, is how I could bulk-onboard existing data using Kinesis.
For RDS-to-Elasticsearch I've seen the alternative of go-mysql-elasticsearch that would handle this for me, but this still leaves me stuck with gigabytes of S3 data to ingest.
Has anybody solved this problem? I'd rather have as simple a setup as possible.
Answered 2018-May-25 at 07:07
As far as adding metadata to entries in ElasticSearch, you're probably thinking of what's sometimes called data "enrichment." There's a very detailed blog post over here that talks about how to ingest and enrich data using both static and dynamic reference data. By using AWS Lambda to enrich your data, you can run dynamic queries against data sources and modify your records before they're ingested into ElasticSearch via Kinesis Firehose.

Bulk Import

The Kinesis Data Streams API supports a batch ingestion API called PutRecords. You can ingest up to 500 records into your Kinesis Data Stream with a single API call. The announcement about this is over here.
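Given that 500-record ceiling, a backfill job has to split its backlog into batches before each PutRecords call. A minimal sketch of that chunking step (per-call payload size limits also apply and are omitted here for brevity):

```go
package main

import "fmt"

// chunkRecords splits a backlog of records into batches no larger than
// batchSize, matching the PutRecords limit of 500 entries per call.
func chunkRecords(records []string, batchSize int) [][]string {
	var batches [][]string
	for len(records) > 0 {
		n := batchSize
		if len(records) < n {
			n = len(records)
		}
		batches = append(batches, records[:n])
		records = records[n:]
	}
	return batches
}

func main() {
	backlog := make([]string, 1200) // e.g. 1200 historical rows to backfill
	batches := chunkRecords(backlog, 500)
	fmt.Println(len(batches)) // 1200 records split into 3 calls: 500, 500, 200
}
```

Each batch would then become the `Records` slice of one PutRecords request, with failed entries (reported per record in the response) retried on the next pass.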
Once you've set up your ingestion and enrichment pipeline for new records, you could write an application that retrieves records older than the date you established the pipeline and writes them into the Kinesis Data Stream.
To install, run go get github.com/siddontang/go-mysql-elasticsearch; it will print some messages in the console, which you can safely ignore.