mirus | cross data-center data replication tool
Mirus is built around Apache Kafka Connect, providing SourceConnector and SourceTask implementations optimized for reading data from Kafka source clusters. The MirusSourceConnector runs a KafkaMonitor thread, which monitors the source and destination Apache Kafka cluster partition allocations, looking for changes and applying a configurable topic whitelist. Each task is responsible for a subset of the matching partitions and runs an independent KafkaConsumer and KafkaProducer client pair to do the work of replicating those partitions. Tasks can be restarted independently without otherwise affecting a running cluster, are monitored continuously for failure, and can optionally be restarted automatically. To understand how Mirus distributes work across a cluster of machines, please read the Kafka Connect documentation.
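The task count and the whitelist the KafkaMonitor applies are ordinary connector configuration. A minimal sketch of the relevant settings (values are illustrative; the full set appears in the Quick Start later on this page):

```properties
# Illustrative MirusSourceConnector settings (not a complete config)
connector.class=com.salesforce.mirus.MirusSourceConnector
tasks.max=5                          # upper bound on parallel replication tasks
topics.whitelist=test                # topics the KafkaMonitor matches on the source
consumer.bootstrap.servers=localhost:9092              # source cluster
destination.consumer.bootstrap.servers=localhost:9092  # destination cluster
```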
Top functions reviewed by kandi - BETA
- Starts the task
- Creates a config definition
- Add a set of topics to the sensor set
- Seek to offset
- Run the task
- Fetches a list of topic partitions
- Checks if the subscribed partition set has changed
- Returns a list of available topics from the source partition list
- Stops the producer
- Handles a task
- Apply router to topic
- Make a task client id unique to a consumer
- Stops the task monitor thread
- Stops the Kafka monitor
- Starts the task monitor thread
- Create a config definition
- Checks that all the transformations are valid
- Assigns a list of partitions to a list of topics
- Checks if a record should be skipped
- Creates a sensor for a topic
- Runs OffsetStatus
- Converts a consumer record to a SourceRecord
- Deserialize from a Kafka offset entry
- Polls data from the consumer
- Process a connector
- Main entry point
Trending Discussions on mirus
QUESTION
Kafka MirrorMaker is a basic approach to mirroring Kafka topics from source to target brokers. Unfortunately, it isn't configurable enough to fit my requirements.
My requirements are very simple:
- the solution should be a JVM application
- if the destination topic doesn't exist, create it
- solution should have the ability to add prefixes/suffixes to destination topic names
- it should reload and apply configurations on the fly if they're changed
According to this answer, there are several alternative solutions for this:
- MirrorTool-for-Kafka-Connect
- Salesforce Mirus (based on Kafka Connect API)
- Confluent's Replicator
- Build my own application (based on Kafka Streams functionality)
Moreover, KIP-382 was created to make Mirror Maker more flexible and configurable.
So, my question is: what are the killer features of each of these solutions (compared to the others), and which one is the best fit for the requirements above?
...ANSWER
Answered 2019-Apr-11 at 00:57
I see you are referring to my comment there...
As for your bullets
the solution should be a JVM application
All the listed ones are Java-based.
if the destination topic doesn't exist, create it
This depends on the Kafka broker version supporting the AdminClient API. Otherwise, as the MirrorMaker documentation says, you should create the destination topic before mirroring; if you don't, you either (1) get denied when producing because auto topic creation is disabled, or (2) see inconsistent data because a topic with default configuration was created.
That being said, by default MirrorMaker doesn't propagate topic configurations on its own. When I looked, MirrorTool similarly did not. I have not looked thoroughly at Mirus, but it seems only partition counts are preserved.
Confluent Replicator does copy configurations, partitions, and it will use the AdminClient.
Replicator, MirrorTool, and Mirus are all based on the Kafka Connect API, and KIP-382 will be as well.
Build my own application (based on Kafka Streams functionality)
Kafka Streams can only communicate from() and to() a single cluster.
You might as well just use MirrorMaker because it's a wrapper around Producer/Consumer already, and supports one cluster to another. If you need custom features, that's what the MessageHandler interface is for.
At a higher level, the Connect API is also fairly configurable, and the MirrorTool source code I find really easy to understand.
solution should have the ability to add prefixes/suffixes to destination topic names
Each one can do that, but MirrorMaker requires extra/custom code. See the example by @gwenshap.
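For what it's worth, Mirus exposes this directly as configuration (see the Quick Start later on this page), e.g.:

```properties
# Append a suffix to every destination topic name (test -> test.mirror)
destination.topic.name.suffix=.mirror
```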
reload and apply configurations on the fly if they're changed
That's the tricky one... Usually, you just bounce the Java process because most configurations are only loaded at startup. The exception is whitelist or topics.regex for finding new topics to consume.
KIP-382
Hard to say whether that'll be accepted. While it is thoroughly written, and I personally think it's reasonably scoped, it somewhat defeats the purpose of having Replicator for Confluent. With the large majority of Kafka commits and support happening out of Confluent, it's a conflict of interest.
Having used Replicator, it has a few extra features that allow for consumer failover in the case of data center failure, so it's still valuable until someone reverse engineers those Kafka API calls into other solutions.
MirrorTool had a KIP too, but it was seemingly rejected on the mailing list with the explanation of "Kafka Connect is a pluggable ecosystem, and anyone can go ahead and install this mirroring extension, but it shouldn't be part of the core Kafka Connect project", or at least that's how I read it.
What's "better" is a matter of opinion, and there are still other options (Apache Nifi or StreamSets come to mind). Even using kafkacat and netcat you can hack together cluster syncing.
If you are paying for an enterprise license, mostly for support, then you might as well use Replicator.
One thing I discovered that might be important to point out with MirrorMaker: if you are mirroring a topic that is not using the DefaultPartitioner, then data will be reshuffled by the DefaultPartitioner on the destination cluster unless you configure the destination Kafka producer to use the same partition value or partitioner class as the source Kafka producer.
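In other words, to preserve the source partitioning you would point the destination producer at the same partitioner. A sketch of the relevant producer setting (the class name is a hypothetical custom partitioner, not a real one):

```properties
# Destination producer config: reuse the source's partitioner so records
# land in the same partitions (com.example.MyPartitioner is illustrative)
partitioner.class=com.example.MyPartitioner
```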
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install mirus
target/mirus-${project.version}-all.zip: The complete Mirus package, including dependencies and run control scripts (used in the Quick Start below)
target/mirus-${project.version}.jar: Primary Mirus jar (dependencies not included)
target/mirus-${project.version}-run.zip: A package containing the Mirus run control scripts
To run the Quick Start example you will need running Kafka and Zookeeper clusters to work with. We will assume you have a standard Apache Kafka Quickstart test cluster running on localhost. Follow the Kafka Quick Start instructions. For this tutorial we will set up a Mirus worker instance to mirror the topic test in loop-back mode to another topic in the same cluster. To avoid a conflict the destination topic name will be set to test.mirror using the destination.topic.name.suffix configuration option. Any message you write to the topic test will now be mirrored to test.mirror.
Build the full Mirus project using Maven:
> mvn package -P all
Unpack the Mirus "all" package:
> mkdir quickstart; cd quickstart; unzip ../target/mirus-*-all.zip
Start the quickstart worker using the sample worker properties file:
> bin/mirus-worker-start.sh config/quickstart-worker.properties
In another terminal, confirm the Mirus Kafka Connect REST API is running:
> curl localhost:8083
{"version":"1.1.0","commit":"fdcf75ea326b8e07","kafka_cluster_id":"xdxNfx84TU-ennOs7EznZQ"}
Submit a new MirusSourceConnector configuration to the REST API with the name mirus-quickstart-source:
> curl localhost:8083/connectors/mirus-quickstart-source/config \
    -X PUT \
    -H 'Content-Type: application/json' \
    -d '{
      "name": "mirus-quickstart-source",
      "connector.class": "com.salesforce.mirus.MirusSourceConnector",
      "tasks.max": "5",
      "topics.whitelist": "test",
      "destination.topic.name.suffix": ".mirror",
      "destination.consumer.bootstrap.servers": "localhost:9092",
      "consumer.bootstrap.servers": "localhost:9092",
      "consumer.client.id": "mirus-quickstart",
      "consumer.key.deserializer": "org.apache.kafka.common.serialization.ByteArrayDeserializer",
      "consumer.value.deserializer": "org.apache.kafka.common.serialization.ByteArrayDeserializer"
    }'
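When scripting this step, it can be cleaner to keep the connector config in a file and point curl at it. A minimal sketch (the file name is illustrative, and the final curl is commented out since it needs a running worker):

```shell
# Write the connector config to a file (same settings as above, abbreviated)
cat > mirus-quickstart-source.json <<'EOF'
{
  "name": "mirus-quickstart-source",
  "connector.class": "com.salesforce.mirus.MirusSourceConnector",
  "tasks.max": "5",
  "topics.whitelist": "test",
  "destination.topic.name.suffix": ".mirror",
  "destination.consumer.bootstrap.servers": "localhost:9092",
  "consumer.bootstrap.servers": "localhost:9092"
}
EOF

# Submit it to the Connect REST API (requires a running worker):
# curl localhost:8083/connectors/mirus-quickstart-source/config \
#   -X PUT -H 'Content-Type: application/json' -d @mirus-quickstart-source.json
```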
Confirm the new connector is running:
> curl localhost:8083/connectors
["mirus-quickstart-source"]
> curl localhost:8083/connectors/mirus-quickstart-source/status
{"name":"mirus-quickstart-source","connector":{"state":"RUNNING","worker_id":"1.2.3.4:8083"},"tasks":[],"type":"source"}
Create source and destination topics test and test.mirror in your Kafka cluster:
> cd ${KAFKA_HOME}
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic 'test' --partitions 1 --replication-factor 1
Created topic "test".
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic 'test.mirror' --partitions 1 --replication-factor 1
Created topic "test.mirror".
Mirus should detect that the new source and destination topics are available and create a new Mirus Source Task:
> curl localhost:8083/connectors/mirus-quickstart-source/status
{"name":"mirus-quickstart-source","connector":{"state":"RUNNING","worker_id":"10.126.22.44:8083"},"tasks":[{"state":"RUNNING","id":0,"worker_id":"10.126.22.44:8083"}],"type":"source"}
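In scripts it can be handy to assert on the task state rather than eyeball the JSON. A sketch that parses a saved status response with python3 (the sample JSON mirrors the output above; against a live worker you would feed curl's output to the same one-liner):

```shell
# Save a sample status response (in practice: curl .../status > status.json)
cat > status.json <<'EOF'
{"name":"mirus-quickstart-source","connector":{"state":"RUNNING","worker_id":"10.126.22.44:8083"},"tasks":[{"state":"RUNNING","id":0,"worker_id":"10.126.22.44:8083"}],"type":"source"}
EOF

# Print the state of each task (expects python3 on the PATH)
python3 -c 'import json; d=json.load(open("status.json")); print(" ".join(t["state"] for t in d["tasks"]))'
```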