go-kafka | Kafka listener and producer above sarama and sarama-cluster | Pub Sub library
kandi X-RAY | go-kafka Summary
Kafka listener and producer above sarama and sarama-cluster
Top functions reviewed by kandi - BETA
- NewListener creates a new listener
- murmur2 implements the murmur2 hashing algorithm
- getPrometheusLatencyInstrumentation returns a Prometheus SummaryVec for latency
- getPrometheusDroppedRequestInstrumentation returns a new prometheus.CounterVec for dropped requests
- getPrometheusRequestInstrumentation returns a new prometheus.CounterVec if one is not already in use
- Creates a new Kafka listener
- DeserializeContextFromKafkaHeaders deserializes a context from Kafka headers
- handleMessageWithRetry wraps handling of a sarama.ConsumerMessage with retries
- DefaultTracing returns an opentracing.Span and a context
- init initializes the config with default values
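Of these, murmur2 is worth a closer look: producing from Go with the same hash as the Java client's default partitioner keeps keyed messages on the same partitions across both ecosystems. Below is a minimal Go sketch of the standard Kafka murmur2 variant (illustrative; not necessarily this library's exact code):

    // murmur2 computes the 32-bit murmur2 hash that Kafka's Java
    // client uses in its default partitioner (seed 0x9747b28c).
    func murmur2(data []byte) int32 {
        const (
            seed uint32 = 0x9747b28c
            m    uint32 = 0x5bd1e995
            r           = 24
        )
        h := seed ^ uint32(len(data))
        i := 0
        // Mix the input four bytes at a time (little-endian).
        for ; i+4 <= len(data); i += 4 {
            k := uint32(data[i]) | uint32(data[i+1])<<8 |
                uint32(data[i+2])<<16 | uint32(data[i+3])<<24
            k *= m
            k ^= k >> r
            k *= m
            h *= m
            h ^= k
        }
        // Fold in the remaining one to three bytes.
        switch len(data) - i {
        case 3:
            h ^= uint32(data[i+2]) << 16
            fallthrough
        case 2:
            h ^= uint32(data[i+1]) << 8
            fallthrough
        case 1:
            h ^= uint32(data[i])
            h *= m
        }
        // Final avalanche.
        h ^= h >> 13
        h *= m
        h ^= h >> 15
        return int32(h)
    }

The Java client then maps the hash to a partition as (hash & 0x7fffffff) % numPartitions, dropping the sign bit rather than taking an absolute value.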
Community Discussions
Trending Discussions on go-kafka
QUESTION
I have created a build of https://github.com/mongodb/mongo-kafka, but how do I run it so that it connects to my running Kafka instance?
However naive this question sounds, there seems to be no documentation on getting this working with a locally running MongoDB replica set. All the blog posts point to using MongoDB Atlas instead.
If you have a good resource, please guide me towards it.
UPDATE 1 -- Used the Maven artifact https://search.maven.org/artifact/org.mongodb.kafka/mongo-kafka-connect, placed it in the Kafka plugins directory, and restarted Kafka.
UPDATE 2 -- How do I enable MongoDB as a source for Kafka? https://github.com/mongodb/mongo-kafka/blob/master/config/MongoSourceConnector.properties is the file to be used as the connector configuration.
...ANSWER
Answered 2020-Dec-22 at 19:49
Port 8083 is Kafka Connect, which you start with one of the connect-*.sh scripts. It runs separately from the broker, and its properties are not set by kafka-server-start.
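For reference, a standalone Connect worker is typically started like this (paths assume a stock Apache Kafka distribution, with the connector JAR already on plugin.path):

    # Start a standalone Kafka Connect worker; the first file configures
    # the worker itself, the second configures the MongoDB connector.
    bin/connect-standalone.sh config/connect-standalone.properties \
        config/MongoSourceConnector.properties

    # The worker's REST API then answers on port 8083:
    curl http://localhost:8083/connectors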
QUESTION
How does Kafka deal with multiple versions of the same connector plugin provided on the CLASSPATH? For example, say I put both mongo-kafka-1.0.0-all.jar and mongo-kafka-1.1.0-all.jar into the respective directory, in order to allow using either version depending on what's needed. Unfortunately, the docs do not reveal a way to specify the version of connector.class; I can only assume this is handled the way classloading is usually handled in Java.
ANSWER
Answered 2020-Nov-12 at 17:01
If you have the same connector plugin sharing the same connector class (e.g. io.confluent.connect.jdbc.JdbcSinkConnector) and you want to run separate versions of that connector JAR, you need to run multiple Kafka Connect workers.
If you have different connectors that use different dependent JARs, this is handled by Kafka Connect's classpath isolation and the plugin.path setting.
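As an illustrative sketch (directory names are hypothetical), each worker gets its own properties file whose plugin.path sees only one version of the connector:

    # worker-v1.properties: this worker sees only mongo-kafka-1.0.0-all.jar
    plugin.path=/opt/connect-plugins-v1
    rest.port=8083

    # worker-v2.properties: this worker sees only mongo-kafka-1.1.0-all.jar
    plugin.path=/opt/connect-plugins-v2
    rest.port=8084

A connector instance is then created against whichever worker carries the version you want.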
QUESTION
I need to use a Maven build for my project in Eclipse rather than Gradle.
Below is the source code that I will be using: https://github.com/mongodb/mongo-kafka
There are ways to generate pom.xml files from a Gradle build (https://www.baeldung.com/gradle-build-to-maven-pom). However, I realized that the *.kts extension belongs to the Kotlin DSL rather than Groovy, and I have used neither of them before.
Is there any way to convert this to a pom.xml file that can be used for a Maven build?
...ANSWER
Answered 2020-Sep-09 at 02:13
You can't do it automatically, if that is what you are asking. While the dependencies section can be converted one to one, the plugins and tasks are Gradle-specific, so you will need to find a matching Maven plugin for each to fulfil what the Gradle plugins currently do. A better question is why bother converting: Gradle is perfectly fine to use, and Eclipse's Maven support is historically terrible.
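To illustrate the one-to-one dependency mapping: a Gradle Kotlin DSL line such as implementation("org.apache.kafka:connect-api:2.5.0") (coordinates here are an example, not taken from the mongo-kafka build) becomes the following in pom.xml:

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>connect-api</artifactId>
      <version>2.5.0</version>
    </dependency>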
QUESTION
I have a project where I need to read data from JSON files using Java and sink it into a Kafka topic, and then sink that data from the topic to MongoDB. I found the kafka-mongodb connector, but the documentation only covers connecting via the Confluent Platform. I have tried the following:
- Downloaded mongo-kafka-connect-1.2.0.jar from Maven.
- Put the file in /kafka/plugins.
- Added the line "plugin.path=C:\kafka\plugins" to connect-standalone.properties.
- Created MongoSinkConnector.properties.
ANSWER
Answered 2020-Aug-12 at 20:23
You are missing the MongoDB driver. The MongoDB connector JAR contains only the classes relevant to Kafka Connect, but it still needs a driver to be able to connect to a MongoDB instance. You need to download that driver and copy the JAR file to the same path where you've published your connector (C:\kafka\plugins).
To keep things clean, you should also create another folder inside that plugins directory (e.g. C:\kafka\plugins\mongodb) and move everything relevant to this connector there.
Later edit: I went through an old(er) setup that I had with Kafka Connect and the MongoDB sink connector, and the JARs it contained make me believe that the kafka-connect-mongodb JAR and the mongodb-driver alone won't be enough. You can give it a try, though.
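A plugin layout of the kind described above might look like this (driver JAR names and versions are illustrative; the MongoDB Java driver is split across driver, core, and BSON artifacts):

    C:\kafka\plugins\mongodb\
        mongo-kafka-connect-1.2.0.jar
        mongodb-driver-sync-4.0.5.jar
        mongodb-driver-core-4.0.5.jar
        bson-4.0.5.jar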
QUESTION
To develop my Kafka connector I need to add a connect-api dependency.
Which one should I use?
For example, the MongoDB connector uses connect-api from Maven Central, but links from the dev guide go to https://packages.confluent.io/maven/org/apache/kafka/connect-api/5.5.0-ccs/, and besides 5.5.0-ccs there is also a 5.5.0-ce version.
So, at this moment the latest versions are:
- 2.5.0 from Maven Central
- 5.5.0-ccs from packages.confluent.io/maven
- 5.5.0-ce from packages.confluent.io/maven
What is the difference between these three variants?
Which one should I use?
...ANSWER
Answered 2020-May-03 at 11:38
The 5.x versions refer to releases from Confluent, whereas 2.5.0 refers to the open-source Apache Kafka project. The ccs builds belong to the (licensed) Confluent Platform and the ce builds to the community edition of the Confluent Platform. This doc on licenses around Confluent/Kafka will give you more details.
According to the Confluent documentation on inter-compatibility, you have this relation: Confluent Platform and Apache Kafka Compatibility.
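For a connector meant to run on any Kafka Connect distribution, the plain Apache Kafka artifact from Maven Central is the usual choice, marked provided because the Connect runtime supplies it at run time; a minimal sketch:

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>connect-api</artifactId>
      <version>2.5.0</version>
      <scope>provided</scope>
    </dependency>

The -ccs and -ce artifacts additionally require adding the https://packages.confluent.io/maven/ repository to the build.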
QUESTION
I am configuring a Kafka MongoDB sink connector on my Windows machine.
My connect-standalone.properties file has
plugin.path=E:/Tools/kafka_2.12-2.4.0/plugins
My MongoSinkConnector.properties file has
...ANSWER
Answered 2020-Mar-23 at 17:26
Finally, I got the mongo-kafka connector to work on Windows. Here is what worked for me:
The Kafka installation folder is E:\Tools\kafka_2.12-2.4.0, and E:\Tools\kafka_2.12-2.4.0\plugins contains the mongo-kafka-1.0.1-all.jar file.
I downloaded this from https://www.confluent.io/hub/mongodb/kafka-connect-mongodb (click the blue Download button at the left to get the mongodb-kafka-connect-mongodb-1.0.1.zip file).
The zip also contains the file MongoSinkConnector.properties in its etc folder; move it to kafka_installation_folder\plugins.
My connect-standalone.properties file has the following entries:
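A typical standalone worker configuration for a Windows setup like this one contains entries along these lines (illustrative values, not the poster's exact file):

    bootstrap.servers=localhost:9092
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    offset.storage.file.filename=E:/Tools/kafka_2.12-2.4.0/connect.offsets
    plugin.path=E:/Tools/kafka_2.12-2.4.0/plugins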
QUESTION
When importing data from MongoDB to Kafka using the connector https://github.com/mongodb/mongo-kafka, it throws java.lang.IllegalStateException: Queue full.
I use the default setting of copy.existing.queue.size, which is 16000, and copy.existing: true. What value should I set? The collection size is 10G.
Environment:
...ANSWER
Answered 2020-Feb-21 at 09:29
Fixed in https://github.com/mongodb/mongo-kafka/commit/7e6bf97742f2ad75cde394d088823b86880cdf4e and will be released after 1.0.0. So if anyone faces the same issue, please update to a version later than 1.0.0.
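For reference, the source-connector settings in question look like this (the queue size shown is the default mentioned above):

    copy.existing=true
    copy.existing.queue.size=16000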
QUESTION
Below is my MongoDB config in /etc/kafka/connect-mongodb-source.properties
...ANSWER
Answered 2020-Jan-10 at 08:37
QUESTION
I'm using the following mongo-source, which is supported by kafka-connect. I found that one of the configurations of the mongo source (from here) is tasks.max.
This means I can give the connector a tasks.max greater than 1, but I fail to understand what it does behind the scenes. If it creates multiple connectors listening to the MongoDB change stream, I will end up with duplicate messages. So, does mongo-source really have parallelism and work as a cluster? What does it do if tasks.max is more than 1?
...ANSWER
Answered 2019-Dec-18 at 13:40
Mongo-source doesn't support tasks.max > 1. Even if you set it greater than 1, only one task will be pulling data from Mongo into Kafka.
How many tasks are created depends on the particular connector: the function List<Map<String, String>> Connector::taskConfigs(int maxTasks) (which should be overridden when implementing your connector) returns a list whose size determines the number of tasks.
If you check the mongo-kafka source connector, you will see that it returns a singletonList.
QUESTION
I have a mongo sink connector as well as a schema registry.
I configured the mongo sink connector to access the schema registry, similar to: https://github.com/mongodb/mongo-kafka/blob/master/docs/sink.md#configuration-example-for-avro
I created a schema following https://github.com/mongodb/mongo-kafka/blob/master/docs/sink.md#logical-types. It looks something like this:
...ANSWER
Answered 2019-Oct-25 at 21:04
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install go-kafka
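As a Go module, it installs with go get (the module path below is assumed from the repository description; verify it against the project's actual import path):

    go get github.com/ricardo-ch/go-kafka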