reactor-kafka | Reactive Kafka Driver with Reactor | Pub Sub library

by reactor | Java | Version: 1.3.22 | License: No License

kandi X-RAY | reactor-kafka Summary

reactor-kafka is a Java library typically used in Messaging, Pub Sub, Kafka applications. reactor-kafka has no bugs, it has no vulnerabilities, it has build file available and it has high support. You can download it from GitHub, Maven.

Reactive Kafka Driver with Reactor
Support
    Quality
      Security
        License
          Reuse

            kandi-support Support

              reactor-kafka has a highly active ecosystem.
              It has 531 star(s) with 205 fork(s). There are 63 watchers for this library.
              It had no major release in the last 12 months.
              There are 39 open issues and 151 have been closed. On average issues are closed in 125 days. There are 2 open pull requests and 0 closed requests.
              It has a negative sentiment in the developer community.
The latest version of reactor-kafka is 1.3.22.

            kandi-Quality Quality

              reactor-kafka has 0 bugs and 0 code smells.

            kandi-Security Security

              reactor-kafka has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              reactor-kafka code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              reactor-kafka does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              reactor-kafka releases are available to install and integrate.
              Deployable package is available in Maven.
              Build file is available. You can build the component from source.
              Installation instructions are not available. Examples and code snippets are available.
              reactor-kafka saves you 14642 person hours of effort in developing the same functionality from scratch.
              It has 30419 lines of code, 888 functions and 85 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed reactor-kafka and discovered the below as its top functions. This is intended to give you an instant insight into reactor-kafka implemented functionality, and help decide if they suit your requirements.
            • Undoes the commit ahead of the given batch into the given batch
            • Updates the offset for the given topic partition
            • Entry point for testing
            • Returns the command line parser
            • Entry point to the producer
            • Returns the command line arguments
            • Main method for testing
            • Convert a list of properties to a Map
            • Main method
            • Attempts to consume messages from a Kafka topic
            • Entry point to the server
            • Send a count of messages
            • Registers listener for revocation events
            • Registers listener for assign listeners
            • Sets the Kafka producer configuration property
            • Test scenario
            • Gets offsets from the committed partitions
            • Creates a proxy for a producer
• Receive at most once
            • Restores offsets for committed orders
            • Commits commit event
            • Create a consumer proxy
            • Invoked when partitions are committed
            • Handles a producer record
            • Tries to throttle the throughput
            • Creates a consumer

            reactor-kafka Key Features

            No Key Features are available at this moment for reactor-kafka.

            reactor-kafka Examples and Code Snippets

            No Code Snippets are available at this moment for reactor-kafka.

            Community Discussions

            QUESTION

            No subscriptions have been created in Reactor Kafka and Spring Integration
            Asked 2022-Jan-25 at 18:39

            I'm trying to create a simple flow with Spring Integration and Project Reactor, where I consume records with Reactor Kafka, passing them to a channel that from there it will produce messages into another topic with Reactor Kafka.

            The consuming flow is:

            ...

            ANSWER

            Answered 2022-Jan-25 at 18:39

The subscription to the reactiveKafkaConsumerTemplate happens immediately, when the endpoint for the .transform(ConsumerRecord::value) operator is started automatically by the application context.

            See this one as an alternative:

            Source https://stackoverflow.com/questions/70836266
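
The thread's actual Spring Integration flow is not reproduced above. As a rough illustration of the consume-transform-produce pattern being discussed, here is a plain reactor-kafka sketch; the broker address, topic names, and the uppercase transform are placeholder assumptions, not the original poster's code:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;
import reactor.kafka.sender.SenderRecord;

public class ConsumeTransformProduce {

    public static void main(String[] args) {
        // Consumer configuration (placeholder broker, group and topic).
        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        ReceiverOptions<String, String> receiverOptions =
                ReceiverOptions.<String, String>create(consumerProps)
                        .subscription(Collections.singleton("source-topic"));

        // Producer configuration (placeholder broker).
        Map<String, Object> producerProps = new HashMap<>();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        KafkaSender<String, String> sender =
                KafkaSender.create(SenderOptions.<String, String>create(producerProps));

        // Consume, transform each value, send to the destination topic, and only
        // acknowledge the source offset once the corresponding send has completed.
        sender.send(KafkaReceiver.create(receiverOptions)
                        .receive()
                        .map(record -> SenderRecord.create(
                                new ProducerRecord<>("destination-topic",
                                        record.key(), record.value().toUpperCase()),
                                record.receiverOffset())))
                .doOnNext(result -> result.correlationMetadata().acknowledge())
                .subscribe();
    }
}
```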

            QUESTION

            YugabyteDB deployment in 2 datacenters, R2DBC driver error
            Asked 2022-Jan-14 at 13:37

            [Question posted by a user on YugabyteDB Community Slack]

            I am currently using YugabyteDB with the reactive Postgres driver (io.r2dbc:r2dbc-postgresql), and I am facing some intermittent issues like this one, as you can see in the stack trace below. I was told that the Postgres driver may not deal correctly with the Yugabyte load balancing, which maybe is leading me to this problem, and then maybe the actual Yugabyte driver would deal properly with such scenarios. However, I am using a reactive code, which means I need an R2DBC driver, and I did not find any official R2DBC Yugabyte driver.

            Do you think a more appropriate driver would really solve such problem? If so, is there any other R2DBC driver that would be more suitable for my purpose here? If not, do you have any suggestions to solve the problem below?

            ...

            ANSWER

            Answered 2022-Jan-14 at 13:37

The exception stack trace is related to restart read errors. Currently, YugabyteDB supports only optimistic locking with the SNAPSHOT isolation level, which means that whenever there is a conflict on concurrent access, the driver will throw restart read errors like the one below:

            Source https://stackoverflow.com/questions/70711425

            QUESTION

            How to create multiple instances of KafkaReceiver in Spring Reactor Kafka
            Asked 2021-Oct-29 at 02:00

I have a reactive kafka application that reads data from a topic and writes to another topic. The topic has multiple partitions and I want to create the same number of consumers (in the same consumer group) as there are partitions in the topic. From what I understand from this thread, .receive() will create only one instance of KafkaReceiver that will read from all the partitions in the topic. So I would need multiple receivers to read from different partitions in parallel.

            To do that I came up with the following code:

            ...

            ANSWER

            Answered 2021-Oct-28 at 13:36

            What you have done is correct.

            Source https://stackoverflow.com/questions/69746746
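
For context, a hedged sketch of the pattern the question describes: one KafkaReceiver per partition, all sharing the group.id carried in the ReceiverOptions so that Kafka spreads the partitions across them. The partition count and the println processing step are assumptions:

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

public class ParallelReceivers {

    // Starts one KafkaReceiver per partition; all receivers share the same group.id
    // (taken from the options), so Kafka assigns each of them a subset of partitions.
    public static Flux<ReceiverRecord<String, String>> receiveInParallel(
            ReceiverOptions<String, String> options, int partitionCount) {

        return Flux.range(0, partitionCount)
            .flatMap(i -> KafkaReceiver.create(options)
                .receive()
                // Move processing off the receiver's polling thread.
                .publishOn(Schedulers.boundedElastic())
                .doOnNext(record -> System.out.println(
                        "receiver-" + i + " partition " + record.partition() + ": " + record.value()))
                .doOnNext(record -> record.receiverOffset().acknowledge()));
    }
}
```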

            QUESTION

            How to retry failed ConsumerRecord in reactor-kafka
            Asked 2021-Oct-02 at 11:11

I am trying out reactor-kafka for consuming messages. Everything else works fine, but I want to add a retry(2) for failing messages. spring-kafka already retries a failed record 3 times by default; I want to achieve the same using reactor-kafka.

            I am using spring-kafka as a wrapper for reactive-kafka. Below is my consumer template:

            ...

            ANSWER

            Answered 2021-Oct-01 at 09:40

            Previously while retrying I was using the below approach:

            Source https://stackoverflow.com/questions/69400971
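
The code blocks from this thread are not preserved above. As an illustrative sketch only (not the author's template), one common way to bound retries per record in reactor-kafka is to wrap the processing step with Reactor's retryWhen and a backoff policy; the process function here is a hypothetical callback supplied by the caller:

```java
import java.time.Duration;
import java.util.function.Function;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;
import reactor.util.retry.Retry;

public class RetryPerRecord {

    // Retries a single record's processing up to 2 times with backoff; the offset is
    // acknowledged whether processing ultimately succeeds or is given up on, so the
    // consumer keeps moving (a dead-letter topic could be wired into onErrorResume).
    public static Flux<ReceiverRecord<String, String>> consumeWithRetry(
            ReceiverOptions<String, String> options,
            Function<ReceiverRecord<String, String>, Mono<Void>> process) {

        return KafkaReceiver.create(options)
            .receive()
            .concatMap(record -> process.apply(record)
                .retryWhen(Retry.backoff(2, Duration.ofSeconds(1)))
                .onErrorResume(e -> Mono.empty())      // e.g. publish to a dead-letter topic here
                .doFinally(sig -> record.receiverOffset().acknowledge())
                .thenReturn(record));
    }
}
```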

            QUESTION

            Kafka Consumer (using reactor kafka) rejoins the group and is assigned topic partitions immediately after invoking unsubscribe
            Asked 2021-Aug-06 at 02:26

I am facing an issue while unsubscribing from the Kafka consumer; please note that we are using the reactor kafka API. It unsubscribes successfully at first, but then it rejoins the group immediately and is assigned that topic's partitions, so essentially it remains subscribed all the time and keeps consuming messages from the topic even when it is not supposed to!

Following is my code for doing this:

            ...

            ANSWER

            Answered 2021-Aug-06 at 02:26

            I am answering my own question.

I wrote a standalone program to subscribe and unsubscribe from the concerned topic, and the program worked as expected. This clearly showed that there was no problem from the Kafka parameters perspective at all (so something was wrong within the application itself).

After doing some good code analysis and going through the code line by line, I noticed that the subscribe method was being invoked twice. I commented out one of those invocations, tested again, and it behaved as expected.

I never thought that by subscribing twice to the topic, the consumer would never be able to unsubscribe!

NOTE - even after invoking unsubscribe twice, this consumer would not unsubscribe from the topic. So if it has subscribed twice (or possibly more times - I have not tested that), it will never be able to unsubscribe!

Is this normal behavior from a Kafka perspective? I am not so sure; I am keeping this item open for others to respond...

            Thanks...

            Source https://stackoverflow.com/questions/68646252
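
One way to stop consumption with reactor-kafka is to dispose the single subscription to the receive Flux; reactor-kafka then closes the underlying consumer. A minimal sketch (not the poster's code) that also guards against the double-subscribe mistake described above:

```java
import reactor.core.Disposable;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;

public class StartStopConsumer {

    private Disposable subscription; // only ever one active subscription

    public void start(ReceiverOptions<String, String> options) {
        if (subscription != null && !subscription.isDisposed()) {
            return; // guard against subscribing twice, which the thread above ran into
        }
        subscription = KafkaReceiver.create(options)
            .receive()
            .doOnNext(record -> {
                // process the record, then acknowledge its offset
                record.receiverOffset().acknowledge();
            })
            .subscribe();
    }

    public void stop() {
        if (subscription != null) {
            subscription.dispose(); // cancels the receive Flux; reactor-kafka closes the consumer
        }
    }
}
```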

            QUESTION

            Using different threads to read from a consumer group in Kafka using reactor-kafka
            Asked 2021-Jul-28 at 02:25

I need to consume from a Kafka topic that will have millions of records. Once I read from the topic, I need to transform the data and write it to another topic. I am able to consume messages from the topic, process the data with multiple threads, and write to another topic. I followed the example from here https://projectreactor.io/docs/kafka/1.3.5-SNAPSHOT/reference/index.html#concurrent-ordered

            Here is my code:

            ...

            ANSWER

            Answered 2021-Jul-27 at 22:52

So basically, what you are looking for is called a consumer group; the maximum parallel consumption you can run is limited by the number of partitions your topic has.

Kafka's consumer group mechanism allows you to split the work of consuming a topic across different "readers" that belong to the same group; the work is divided so that each consumer in the group is solely responsible for one or more partitions (based on the number of consumers in the group and the number of partitions in the topic).

            Source https://stackoverflow.com/questions/68552336
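
Since the achievable parallelism is capped by the partition count, it can help to look that count up programmatically before deciding how many consumers or in-flight workers to run. A small sketch using the Kafka Admin client (a kafka-clients 3.x API is assumed; the broker address is a placeholder supplied by the caller):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class PartitionCount {

    // Looks up how many partitions a topic has; useful for capping the number of
    // consumers (or in-flight workers), since extra consumers beyond this sit idle.
    public static int partitionCount(String bootstrapServers, String topic)
            throws ExecutionException, InterruptedException {
        try (Admin admin = Admin.create(Map.<String, Object>of(
                AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers))) {
            Map<String, TopicDescription> descriptions =
                    admin.describeTopics(Set.of(topic)).allTopicNames().get();
            return descriptions.get(topic).partitions().size();
        }
    }
}
```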

            QUESTION

            Using reactive webflux code inside of a @KafkaListener annotated method
            Asked 2021-Jun-21 at 14:19

I am using spring-kafka to implement a consumer that reads messages from a certain topic. All of these messages are processed by exporting them into another system via a REST API. For that, the code uses the WebClient from the Spring Webflux project, which results in reactive code:

            ...

            ANSWER

            Answered 2021-Jun-21 at 14:19

            Your understanding is correct; either switch to a non-reactive rest client (e.g. RestTemplate) or use reactor-kafka for the consumer.

            Source https://stackoverflow.com/questions/68059163
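
To illustrate the second option, a hedged sketch of an end-to-end non-blocking pipeline: reactor-kafka consumes, WebClient posts each value to the external system, and the offset is acknowledged after the HTTP call completes. The endpoint path and base URL are assumptions, and a spring-webflux dependency is assumed for WebClient:

```java
import org.springframework.web.reactive.function.client.WebClient;

import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

public class ReactiveExportPipeline {

    // Non-blocking end-to-end: reactor-kafka consumes, WebClient exports each value
    // over HTTP, and the offset is acknowledged once the HTTP call has completed.
    public static Flux<ReceiverRecord<String, String>> export(
            ReceiverOptions<String, String> options, String exportBaseUrl) {

        WebClient client = WebClient.create(exportBaseUrl); // e.g. "http://export-service:8080"

        return KafkaReceiver.create(options)
            .receive()
            .concatMap(record -> client.post()
                .uri("/records")                    // hypothetical endpoint
                .bodyValue(record.value())
                .retrieve()
                .toBodilessEntity()
                .doOnSuccess(resp -> record.receiverOffset().acknowledge())
                .thenReturn(record));
    }
}
```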

            QUESTION

            Can Reactive Kafka Receiver work with non-reactive Elasticsearch client?
            Asked 2021-Apr-30 at 12:30

Below is a sample code which uses reactor-kafka and reads data from a topic (with retry logic) which has records published via a non-reactive producer. Inside my doOnNext() consumer I am using a non-reactive Elasticsearch client, which indexes the record into the index. So I have a few questions that I am still unclear about:

1. I know that consumers and producers are independent, decoupled systems, but is it recommended to have a reactive producer as well when its consumers are reactive?
2. If I am using something that is non-reactive, in this case the Elasticsearch client org.elasticsearch.client.RestClient, does the "reactiveness" of the code still work? If it does or does not, how do I test it? (By "reactiveness", I mean the non-blocking IO part of it, i.e. if I spawn three reactive consumers and one is latent for some reason, the thread should be unblocked and used for another reactive consumer.)
3. In general the question is: if I wrap some API with reactive clients, should that API be reactive as well?
            ...

            ANSWER

            Answered 2021-Apr-30 at 12:30

            Got some understanding around it.

The reactive KafkaReceiver will internally call some API; if that API is a blocking API, then even if KafkaReceiver is "reactive", the non-blocking IO will not work and the receiver thread will be blocked, because you are calling a blocking / non-reactive API.

You can test this out by creating a simple server (which blocks calls for some time / sleeps) and calling that server from this receiver.

            Source https://stackoverflow.com/questions/67179449
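
A hedged sketch of the usual mitigation for that situation: if the indexing call is blocking, shift it to Reactor's boundedElastic scheduler so the receiver thread is not held up. The blockingIndexCall consumer below is a hypothetical stand-in for the non-reactive Elasticsearch client:

```java
import java.util.function.Consumer;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

public class BlockingIndexer {

    // Wraps a blocking call (e.g. a synchronous Elasticsearch/REST client) so that it
    // runs on the boundedElastic scheduler instead of blocking the Kafka polling thread.
    public static Flux<ReceiverRecord<String, String>> consumeAndIndex(
            ReceiverOptions<String, String> options,
            Consumer<String> blockingIndexCall) {  // hypothetical blocking indexer

        return KafkaReceiver.create(options)
            .receive()
            .flatMap(record -> Mono.fromRunnable(() -> blockingIndexCall.accept(record.value()))
                .subscribeOn(Schedulers.boundedElastic())   // blocking work off the receive thread
                .doOnSuccess(v -> record.receiverOffset().acknowledge())
                .thenReturn(record));
    }
}
```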

            QUESTION

            Message ordering of ReactiveKafkaConsumerTemplate receiveAutoAck
            Asked 2021-Jan-26 at 15:33

I am asking myself whether the ReactiveKafkaConsumerTemplate of the spring-kafka project guarantees the correct ordering of messages. I read the documentation of the reactor-kafka project, and it states that messages should be consumed using the concatMap operator, but the ReactiveKafkaConsumerTemplate uses the flatMap operator, at least in the case of the receiveAutoAck method here:

            https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/core/reactive/ReactiveKafkaConsumerTemplate.java#L69

            Reference documentation of the reactor-kafka project: https://projectreactor.io/docs/kafka/release/reference/#_auto_acknowledgement_of_batches_of_records

I am interested in using receiveAutoAck as it seems to be the simplest and most comfortable approach, and it suffices for my use case. The only way to overcome this behaviour of the receiveAutoAck method seems to be to subclass the ReactiveKafkaConsumerTemplate and override it. Is this correct?

            ...

            ANSWER

            Answered 2021-Jan-26 at 15:33

I don't think it really matters here, because internally the source of data for us is Flux.fromIterable(consumerRecords), which cannot lose its order since it is backed by an iterator; no matter how hard we try to process the records in parallel, we would still get the order within one iterator. Yes, the order between the iterators we flatten is unpredictable, but this doesn't matter for us since we only care about the order within a single partition, nothing more.

Nevertheless, I think we definitely need to change this to the mentioned concatMap() to avoid such confusion in the future. Feel free to provide a contribution on the matter!

            Source https://stackoverflow.com/questions/65901791
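
For reference, a minimal sketch of flattening the batches emitted by receiveAutoAck() with concatMap, the ordering-preserving variant discussed above (the ReceiverOptions are assumed to be configured elsewhere):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;

import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;

public class OrderedAutoAck {

    // receiveAutoAck() emits a Flux of record batches; flattening the batches with
    // concatMap (rather than flatMap) processes them strictly one after another,
    // which keeps the per-partition order visible downstream.
    public static Flux<ConsumerRecord<String, String>> orderedRecords(
            ReceiverOptions<String, String> options) {

        return KafkaReceiver.create(options)
            .receiveAutoAck()
            .concatMap(batch -> batch); // each batch is already an ordered Flux<ConsumerRecord>
    }
}
```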

            QUESTION

            Task :compileFunctionalTestGroovy FAILED during gradle ci build
            Asked 2020-Dec-16 at 12:21

I have a task that runs a functional test

            ...

            ANSWER

            Answered 2020-Dec-16 at 12:21

The problem was that I had not built all of the dependencies.

            Source https://stackoverflow.com/questions/65065400

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install reactor-kafka

            You can download it from GitHub, Maven.
You can use reactor-kafka like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the reactor-kafka component as you would with any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
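
If you manage dependencies with Gradle, a typical declaration looks like the following; the group and artifact coordinates are the ones published to Maven Central, and the version matches the one listed above (check Maven Central for the latest 1.3.x release):

```groovy
dependencies {
    // reactor-kafka from Maven Central; 1.3.22 matches the version listed above
    implementation "io.projectreactor.kafka:reactor-kafka:1.3.22"
}
```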

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/reactor/reactor-kafka.git

          • CLI

            gh repo clone reactor/reactor-kafka

          • sshUrl

            git@github.com:reactor/reactor-kafka.git


            Consider Popular Pub Sub Libraries

EventBus by greenrobot
kafka by apache
celery by celery
rocketmq by apache
pulsar by apache

            Try Top Libraries by reactor

reactor-core by reactor (Java)
reactor-netty by reactor (Java)
BlockHound by reactor (Java)
lite-rx-api-hands-on by reactor (Java)