reactor-kafka | Reactive Kafka Driver with Reactor | Pub Sub library
kandi X-RAY | reactor-kafka Summary
Reactive Kafka Driver with Reactor
Top functions reviewed by kandi - BETA
- Undoes commits ahead of the given batch
- Updates the offset for the given topic partition
- Entry point for testing
- Returns the command line parser
- Entry point to the producer
- Returns the command line arguments
- Main method for testing
- Convert a list of properties to a Map
- Main method
- Attempts to consume messages from a Kafka topic
- Entry point to the server
- Send a count of messages
- Registers a listener for partition revocation events
- Registers a listener for partition assignment events
- Sets the Kafka producer configuration property
- Test scenario
- Gets offsets from the committed partitions
- Creates a proxy for a producer
- Receives records at most once
- Restores offsets for committed records
- Commits the offsets for a commit event
- Create a consumer proxy
- Invoked when partitions are committed
- Handles a producer record
- Tries to throttle the throughput
- Creates a consumer
reactor-kafka Key Features
reactor-kafka Examples and Code Snippets
Community Discussions
Trending Discussions on reactor-kafka
QUESTION
I'm trying to create a simple flow with Spring Integration and Project Reactor, where I consume records with Reactor Kafka and pass them to a channel, from which messages are produced into another topic with Reactor Kafka.
The consuming flow is:
...ANSWER
Answered 2022-Jan-25 at 18:39
The subscription to the reactiveKafkaConsumerTemplate happens immediately when the endpoint for the .<ConsumerRecord<String, String>, String>transform(ConsumerRecord::value) is started automatically by the application context. See the linked answer for an alternative approach.
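For context, here is a minimal sketch of the kind of consuming flow described in the question (not the linked alternative); the bean name and the "producingChannel" are assumptions, not taken from the original post:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.kafka.core.reactive.ReactiveKafkaConsumerTemplate;

// Hedged sketch; names and channel are illustrative assumptions.
@Bean
public IntegrationFlow kafkaConsumingFlow(
        ReactiveKafkaConsumerTemplate<String, String> template) {
    return IntegrationFlows
            // The subscription starts as soon as the application context
            // auto-starts this endpoint, as the answer explains.
            .from(template.receiveAutoAck()
                    .map(record -> MessageBuilder.withPayload(record).build()))
            .<ConsumerRecord<String, String>, String>transform(ConsumerRecord::value)
            .channel("producingChannel")
            .get();
}
```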
QUESTION
[Question posted by a user on YugabyteDB Community Slack]
I am currently using YugabyteDB with the reactive Postgres driver (io.r2dbc:r2dbc-postgresql), and I am facing some intermittent issues like the one in the stack trace below.
I was told that the Postgres driver may not handle YugabyteDB's load balancing correctly, which may be what is leading to this problem, and that the actual YugabyteDB driver might deal properly with such scenarios.
However, I am using a reactive code, which means I need an R2DBC driver, and I did not find any official R2DBC Yugabyte driver.
Do you think a more appropriate driver would really solve such a problem? If so, is there another R2DBC driver that would be more suitable for my purpose here? If not, do you have any suggestions for solving the problem below?
...ANSWER
Answered 2022-Jan-14 at 13:37
The exception stack trace is related to restart read errors. Currently, YugabyteDB supports only optimistic locking with the SNAPSHOT isolation level, which means that whenever there is a conflict on concurrent access, the driver will throw restart read errors like the one below:
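One common way to cope with such errors on the client side is to retry the statement. A hedged sketch (not from the original answer; the error-message predicate, databaseClient, and accountId are assumptions to adapt to your driver):

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

// Hedged sketch: retry when an optimistic-concurrency conflict surfaces
// as a restart read error. The message filter is an assumption.
Mono<Long> updated = databaseClient
        .sql("UPDATE accounts SET balance = balance - 10 WHERE id = :id")
        .bind("id", accountId)
        .fetch()
        .rowsUpdated()
        .retryWhen(Retry.backoff(3, Duration.ofMillis(50))
                .filter(e -> e.getMessage() != null
                        && e.getMessage().contains("Restart read required")));
```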
QUESTION
I have a reactive Kafka application that reads data from a topic and writes to another topic. The topic has multiple partitions, and I want to create the same number of consumers (in the same consumer group) as there are partitions in the topic. From what I understand from this thread, .receive() will create only one instance of KafkaReceiver that will read from all the partitions in the topic. So I would need multiple receivers to read from different partitions in parallel.
To do that I came up with the following code:
...ANSWER
Answered 2021-Oct-28 at 13:36
What you have done is correct.
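For reference, a minimal sketch of that approach: one KafkaReceiver per partition, all sharing the same group id, merged into a single pipeline (partitionCount and receiverOptions are assumptions for illustration):

```java
import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverRecord;

// Hedged sketch: one receiver per partition, all in the same consumer group,
// so the broker assigns each receiver its own partition.
int partitionCount = 3;
Flux<ReceiverRecord<String, String>> records = Flux.range(0, partitionCount)
        .flatMap(i -> KafkaReceiver.create(receiverOptions).receive());
```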
QUESTION
I am trying out reactor-kafka for consuming messages. Everything else works fine, but I want to add a retry(2) for failing messages. spring-kafka already retries a failed record 3 times by default; I want to achieve the same using reactor-kafka.
I am using spring-kafka as a wrapper for reactor-kafka. Below is my consumer template:
...ANSWER
Answered 2021-Oct-01 at 09:40
Previously, while retrying, I was using the approach below:
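A hedged sketch of one way to get a spring-kafka-like bounded retry with reactor-kafka (process(...) and sendToDeadLetter(...) are hypothetical hooks, not from the original answer):

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

// Hedged sketch: retry each failed record twice with backoff before
// falling back to a hypothetical dead-letter handler.
receiver.receive()
        .concatMap(record -> process(record)                     // hypothetical Mono<Void>
                .retryWhen(Retry.backoff(2, Duration.ofSeconds(1)))
                .onErrorResume(e -> sendToDeadLetter(record, e)) // hypothetical
                .then(Mono.fromRunnable(record.receiverOffset()::acknowledge)))
        .subscribe();
```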
QUESTION
I am facing an issue while unsubscribing from the Kafka consumer; please note that we are using the reactor-kafka API. The consumer does unsubscribe successfully at first, but then it immediately rejoins the group and is assigned that topic's partitions, so it essentially remains subscribed the whole time and keeps consuming messages from the topic even when it is not supposed to!
Following is my code for doing this:
...ANSWER
Answered 2021-Aug-06 at 02:26
I am answering my own question.
I wrote a standalone program to subscribe to and unsubscribe from the topic in question, and that program worked as expected. This made it clear that there was no problem with the Kafka parameters at all, so something had to be wrong within the application itself.
After some careful code analysis, going through the code line by line, I noticed that the subscribe method was getting invoked twice. I commented out one of those invocations, tested again, and everything behaved as expected.
I never thought that by subscribing to the topic twice, the consumer would never be able to unsubscribe!
NOTE - even after invoking unsubscribe twice, this consumer would not unsubscribe from the topic. So if it has subscribed twice (or possibly more times - I have not tested that), it will never be able to unsubscribe.
Is this normal behavior from a Kafka perspective? I am not so sure; I am keeping this item open for others to respond...
Thanks...
QUESTION
I need to consume from a Kafka topic that will have millions of records. Once I read from the topic, I need to transform the data and write it to another topic. I am able to consume messages from the topic, process the data with multiple threads, and write to another topic. I followed the example from here: https://projectreactor.io/docs/kafka/1.3.5-SNAPSHOT/reference/index.html#concurrent-ordered
Here is my code:
...ANSWER
Answered 2021-Jul-27 at 22:52
What you are looking for is called a Consumer Group; the maximum parallel consumption you can run is limited by the number of partitions your topic has.
Kafka's consumer group mechanism allows you to split the work of consuming a topic across different "readers" that belong to the same group. The work is divided so that each consumer in the group is solely responsible for one or more partitions, based on the number of consumers in the group and the number of partitions in the topic.
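The reactor-kafka reference linked in the question shows a pattern along these lines for concurrent processing that still preserves per-partition order (processRecord and the scheduler sizing are assumptions):

```java
import java.time.Duration;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

// Hedged sketch adapted from the linked reference docs: group by partition so
// each partition's records stay in order while partitions process concurrently.
Scheduler scheduler = Schedulers.newBoundedElastic(60, Integer.MAX_VALUE, "writer");
receiver.receive()
        .groupBy(record -> record.receiverOffset().topicPartition())
        .flatMap(partitionFlux -> partitionFlux
                .publishOn(scheduler)
                .map(record -> {
                    processRecord(record);            // hypothetical per-record work
                    return record.receiverOffset();
                })
                .sample(Duration.ofSeconds(5))        // commit at most every 5 seconds
                .concatMap(offset -> offset.commit()))
        .subscribe();
```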
QUESTION
I am using spring-kafka to implement a consumer that reads messages from a certain topic. All of these messages are processed by exporting them into another system via a REST API. For that, the code uses the WebClient from the Spring WebFlux project, which results in reactive code:
...ANSWER
Answered 2021-Jun-21 at 14:19
Your understanding is correct; either switch to a non-reactive REST client (e.g. RestTemplate) or use reactor-kafka for the consumer.
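If staying with spring-kafka, a hedged sketch of the first option is to complete the reactive call before the listener returns (topic, endpoint, and timeout are illustrative assumptions):

```java
import java.time.Duration;
import org.springframework.kafka.annotation.KafkaListener;

// Hedged sketch: block the WebClient call so the record is only considered
// processed (and its offset committed) once the export has completed.
@KafkaListener(topics = "events")
public void onMessage(String payload) {
    webClient.post()
            .uri("/export")                    // hypothetical endpoint
            .bodyValue(payload)
            .retrieve()
            .toBodilessEntity()
            .block(Duration.ofSeconds(30));    // explicit, deliberate block
}
```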
QUESTION
Below is sample code that uses reactor-kafka and reads data from a topic (with retry logic) whose records are published via a non-reactive producer. Inside my doOnNext() consumer I am using the non-reactive Elasticsearch client, which indexes the record. So I have a few questions that I am still unclear about:
- I know that consumers and producers are independent, decoupled systems, but is it recommended to have a reactive producer as well when its consumers are reactive?
- If I am using something that is non-reactive, in this case the Elasticsearch client org.elasticsearch.client.RestClient, does the "reactiveness" of the code still work? If it does or does not, how do I test it? (By "reactiveness", I mean the non-blocking IO part, i.e. if I spawn three reactive consumers and one is latent for some reason, the thread should be unblocked and used for another reactive consumer.)
- In general, the question is: if I wrap some API with reactive clients, should the API be reactive as well?
ANSWER
Answered 2021-Apr-30 at 12:30
I got some understanding around this.
A reactive KafkaReceiver will internally call some API; if that API is blocking, then even if the KafkaReceiver is "reactive", the non-blocking IO will not work and the receiver thread will be blocked, because you are calling a blocking, non-reactive API.
You can test this by creating a simple server (which blocks calls for some time / sleeps) and calling that server from the receiver.
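A hedged sketch of how to keep the receive pipeline non-blocking despite the blocking client: push the blocking call onto a scheduler intended for blocking work (indexBlocking is a hypothetical wrapper around the Elasticsearch RestClient call):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// Hedged sketch: wrap the blocking Elasticsearch call so it runs on
// boundedElastic instead of blocking the receiver thread.
receiver.receive()
        .flatMap(record -> Mono.fromCallable(() -> indexBlocking(record)) // hypothetical
                .subscribeOn(Schedulers.boundedElastic())
                .thenReturn(record))
        .doOnNext(record -> record.receiverOffset().acknowledge())
        .subscribe();
```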
QUESTION
I am wondering whether the ReactiveKafkaConsumerTemplate of the spring-kafka project guarantees the correct ordering of messages. I read the documentation of the reactor-kafka project, and it states that messages should be consumed using the concatMap operator, but the ReactiveKafkaConsumerTemplate uses the flatMap operator, at least in the case of the receiveAutoAck method here:
Reference documentation of the reactor-kafka project: https://projectreactor.io/docs/kafka/release/reference/#_auto_acknowledgement_of_batches_of_records
I am interested in using receiveAutoAck as it seems to be the simplest and most comfortable approach, and it suffices for my use case. The only way to overcome this behaviour of the receiveAutoAck method seems to be to subclass ReactiveKafkaConsumerTemplate and override this behaviour. Is this correct?
ANSWER
Answered 2021-Jan-26 at 15:33
I don't think it really matters here, because internally the source of data for us is Flux.fromIterable(consumerRecords), which cannot lose its order because of the iterator; therefore, however hard we try to process records in parallel, we would still get the order within one iterator. Yes, the order between the iterators we flatten is really unpredictable, but this doesn't matter for us, since we care about the order within a single partition, nothing more.
Nevertheless, I think we definitely need to fix this to use the mentioned concatMap() to avoid such confusion in the future. Feel free to provide a contribution on the matter!
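For comparison, a hedged sketch of the raw reactor-kafka pattern the reference recommends, flattening each auto-acked batch with concatMap so batch order is preserved (receiverOptions and process(...) are assumptions):

```java
import reactor.kafka.receiver.KafkaReceiver;

// Hedged sketch: receiveAutoAck() emits one Flux per batch; concatMap keeps
// the batches, and hence each partition's records, in order.
KafkaReceiver.create(receiverOptions)
        .receiveAutoAck()
        .concatMap(batch -> batch)            // Flux<Flux<ConsumerRecord>> -> ordered Flux
        .concatMap(record -> process(record)) // hypothetical per-record processing
        .subscribe();
```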
QUESTION
I have a task that runs a functional test
...ANSWER
Answered 2020-Dec-16 at 12:21
The problem was that I was not building all of the dependencies.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install reactor-kafka
You can use reactor-kafka like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the reactor-kafka component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.
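For example, with Gradle the dependency can be declared as follows (the version shown is an assumption; verify the latest release on Maven Central):

```groovy
dependencies {
    // Maven Central coordinates for reactor-kafka; check for the latest version.
    implementation 'io.projectreactor.kafka:reactor-kafka:1.3.5'
}
```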