spring-kafka | Provides Familiar Spring Abstractions for Apache Kafka | Pub Sub library

 by spring-projects | Java | Version: 3.1.0 | License: Apache-2.0

kandi X-RAY | spring-kafka Summary

spring-kafka is a Java library typically used in Messaging, Pub Sub, and Kafka applications. It has no reported bugs or vulnerabilities, a build file available, a permissive license, and high support. You can download it from GitHub or Maven.


            kandi-support Support

              spring-kafka has a highly active ecosystem.
              It has 1908 stars and 1403 forks. There are 159 watchers for this library.
              There were 2 major releases in the last 6 months.
              There are 39 open issues and 1252 closed issues. On average, issues are closed in 28 days. There are 6 open pull requests and 0 closed requests.
              It has a positive sentiment in the developer community.
              The latest version of spring-kafka is 3.1.0.

            kandi-Quality Quality

              spring-kafka has 0 bugs and 0 code smells.

            kandi-Security Security

              spring-kafka has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              spring-kafka code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              spring-kafka is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              spring-kafka releases are available to install and integrate.
              Deployable package is available in Maven.
              Build file is available. You can build the component from source.
              Installation instructions are not available. Examples and code snippets are available.
              It has 55422 lines of code, 4447 functions and 510 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed spring-kafka and discovered the below as its top functions. This is intended to give you an instant insight into spring-kafka implemented functionality, and help decide if they suit your requirements.
            • Determine the type of the given method.
            • Run the consumer.
            • Produce a Kafka message.
            • Retry batches.
            • Process the retry topic annotation.
            • Check topics config.
            • Build callback.
            • Initialize topics.
            • Find all annotated listeners.
            • Recursively traverse a list of records.

            spring-kafka Key Features

            No Key Features are available at this moment for spring-kafka.

            spring-kafka Examples and Code Snippets

            No Code Snippets are available at this moment for spring-kafka.

            Community Discussions

            QUESTION

            Combining blocking and non-blocking retries in Spring Kafka
            Asked 2022-Apr-01 at 21:13

            I am trying to implement non-blocking retries with a single topic and fixed back-off.

            I am able to do so, thanks to the documentation: https://docs.spring.io/spring-kafka/reference/html/#single-topic-fixed-delay-retries.

            Now I also need to perform a few blocking/local retries on the main topic. I have been trying to implement this using DefaultErrorHandler, as below:

            ...

            ANSWER

            Answered 2022-Apr-01 at 21:13

            We're currently working on improving the configuration of the non-blocking retries components.

            For now, as documented here, you should inject these beans, for example:
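            The answer's actual bean definitions are not reproduced above. As a hedged sketch of the blocking-retry half only (the configuration-class name, bean name, back-off interval, and attempt count are illustrative assumptions, not the answer's code), a DefaultErrorHandler with a FixedBackOff provides the local retries on the main topic; combining it with the @RetryableTopic non-blocking retries is what the beans mentioned in the answer are for:

            import org.springframework.context.annotation.Bean;
            import org.springframework.context.annotation.Configuration;
            import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
            import org.springframework.kafka.core.ConsumerFactory;
            import org.springframework.kafka.listener.DefaultErrorHandler;
            import org.springframework.util.backoff.FixedBackOff;

            @Configuration
            public class BlockingRetryConfig {

                @Bean
                public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
                        ConsumerFactory<String, String> consumerFactory) {
                    ConcurrentKafkaListenerContainerFactory<String, String> factory =
                            new ConcurrentKafkaListenerContainerFactory<>();
                    factory.setConsumerFactory(consumerFactory);
                    // Blocking (local) retries: 3 attempts, 1 second apart; once exhausted,
                    // the handler's recoverer runs (by default it just logs the failure).
                    factory.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(1000L, 3)));
                    return factory;
                }
            }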

            Source https://stackoverflow.com/questions/71705876

            QUESTION

            EmbeddedKafka failing since Spring Boot 2.6.X : AccessDeniedException: ..\AppData\Local\Temp\spring.kafka*
            Asked 2022-Mar-25 at 12:39

            Note: this has been fixed through Spring Boot 2.6.5 (see https://github.com/spring-projects/spring-boot/issues/30243).

            Since upgrading to Spring Boot 2.6.X (in my case: 2.6.1), I have multiple projects whose unit tests now fail on Windows because they cannot start EmbeddedKafka, while the same tests run fine on Linux.

            There are multiple errors, but this is the first one thrown:

            ...

            ANSWER

            Answered 2021-Dec-09 at 15:51

            This is a known bug on the Apache Kafka side; there is nothing to do from the Spring perspective. See more info here: https://github.com/spring-projects/spring-kafka/discussions/2027 and here: https://issues.apache.org/jira/browse/KAFKA-13391.

            You need to wait for Apache Kafka 3.0.1, or avoid embedded Kafka and rely on Testcontainers, for example, or on a fully external Apache Kafka broker.
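            As a hedged illustration of the Testcontainers alternative mentioned above (the test class, image tag, and property wiring are assumptions based on the standard org.testcontainers:kafka module and Spring Boot's spring.kafka.bootstrap-servers property, not code from the answer):

            import org.junit.jupiter.api.Test;
            import org.springframework.boot.test.context.SpringBootTest;
            import org.springframework.test.context.DynamicPropertyRegistry;
            import org.springframework.test.context.DynamicPropertySource;
            import org.testcontainers.containers.KafkaContainer;
            import org.testcontainers.junit.jupiter.Container;
            import org.testcontainers.junit.jupiter.Testcontainers;
            import org.testcontainers.utility.DockerImageName;

            @Testcontainers
            @SpringBootTest
            class KafkaIntegrationTest {

                // A real Kafka broker running in Docker instead of EmbeddedKafka.
                @Container
                static KafkaContainer kafka =
                        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.0.1"));

                @DynamicPropertySource
                static void kafkaProperties(DynamicPropertyRegistry registry) {
                    // Point Spring Boot's Kafka auto-configuration at the container.
                    registry.add("spring.kafka.bootstrap-servers", kafka::getBootstrapServers);
                }

                @Test
                void contextLoads() {
                    // Real tests would produce/consume against the containerized broker.
                }
            }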

            Source https://stackoverflow.com/questions/70292425

            QUESTION

            Kafka consumer not automatically reconnecting after outage
            Asked 2022-Feb-22 at 13:38

            In our infrastructure we are running Kafka with 3 nodes and have several Spring Boot services running in OpenShift. Some of the communication between the services happens via Kafka. For the consumers/listeners we are using the @KafkaListener Spring annotation with a unique group ID, so that each instance (pod) consumes all the partitions of a topic.

            ...

            ANSWER

            Answered 2022-Feb-22 at 10:04

            In the Kafka config you can use the reconnect.backoff.max.ms parameter to set the maximum number of milliseconds to wait between reconnection attempts. Additionally, set the reconnect.backoff.ms parameter to the base number of milliseconds to wait before retrying to connect.

            If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum.

            Kafka documentation: https://kafka.apache.org/31/documentation/#streamsconfigs

            If you set the maximum backoff to something fairly high, like a day, the connection will keep being reattempted, with increasing intervals (50, 500, 5000, 50000 ms, and so on) up to that maximum.
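            As a hedged sketch for a Spring Boot service (assuming the spring.kafka.consumer.properties.* passthrough for arbitrary consumer properties; the values are illustrative), the two parameters can be set like this:

            spring.kafka.consumer.properties.reconnect.backoff.ms=50
            spring.kafka.consumer.properties.reconnect.backoff.max.ms=86400000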

            Source https://stackoverflow.com/questions/71218856

            QUESTION

            How to avoid hitting all retryable topics for fatal-by-default exceptions?
            Asked 2022-Feb-17 at 22:05

            My team is writing a service that leverages the retryable topics mechanism offered by Spring Kafka (version 2.8.2). Here is a subset of the configuration:

            ...

            ANSWER

            Answered 2022-Feb-17 at 22:05

            That is a good suggestion; it probably should be the default behavior (or at least optionally).

            Please open a feature request on GitHub.

            There is a, somewhat, related discussion here: https://github.com/spring-projects/spring-kafka/discussions/2101

            Source https://stackoverflow.com/questions/71165794

            QUESTION

            Spring Boot Logging to a File
            Asked 2022-Feb-16 at 14:49

            In my application config I have defined the following properties:

            ...

            ANSWER

            Answered 2022-Feb-16 at 13:12

            According to this answer (https://stackoverflow.com/a/51236918/16651073), Tomcat falls back to default logging if it can resolve the location.

            Try saving the property without the spaces, like this:

            logging.file.name=application.logs

            Source https://stackoverflow.com/questions/71142413

            QUESTION

            How to replace deprecated SeekToCurrentErrorHandler with DefaultErrorHandler (spring-kafka)?
            Asked 2022-Feb-09 at 15:25

            I am trying to find a way to use the new DefaultErrorHandler instead of the deprecated SeekToCurrentErrorHandler in spring-kafka 2.8.1, in order to override the default retry behavior in case of errors. I want to "stop" the retry process, so if an error occurs, no retry should be done.

            Now I have, in a config class, the following bean that works as expected:

            ...

            ANSWER

            Answered 2022-Feb-09 at 15:16

            factory.setCommonErrorHandler(new Default....)

            Boot auto configuration of a CommonErrorHandler bean requires Boot 2.6.

            https://github.com/spring-projects/spring-boot/commit/c3583a4b06cff3f53b3322cd79f2b64d17211d0e
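            As a hedged sketch (the configuration-class and bean names are illustrative): with Spring Boot 2.6+ it is enough to expose a DefaultErrorHandler bean, since Boot auto-configures it as the container factory's common error handler, and FixedBackOff(0L, 0L) disables retries entirely. On earlier Boot versions, set it on the factory yourself via setCommonErrorHandler, as in the answer above.

            import org.springframework.context.annotation.Bean;
            import org.springframework.context.annotation.Configuration;
            import org.springframework.kafka.listener.DefaultErrorHandler;
            import org.springframework.util.backoff.FixedBackOff;

            @Configuration
            public class NoRetryErrorHandlerConfig {

                // FixedBackOff(0L, 0L): no back-off interval and zero retry attempts, so a failed
                // record goes straight to the handler's recovery step (logged and skipped by default).
                @Bean
                public DefaultErrorHandler errorHandler() {
                    return new DefaultErrorHandler(new FixedBackOff(0L, 0L));
                }
            }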

            Source https://stackoverflow.com/questions/71051529

            QUESTION

            Correct Number of Partitions/Replicas for @RetryableTopic Retry Topics
            Asked 2022-Feb-03 at 21:33

            Hello Stack Overflow community and anyone familiar with spring-kafka!

            I am currently working on a project which leverages the @RetryableTopic feature from spring-kafka in order to reattempt the delivery of failed messages. The listener annotated with @RetryableTopic is consuming from a topic that has 50 partitions and 3 replicas. When the app is receiving a lot of traffic, it could possibly be autoscaled up to 50 instances of the app (consumers) grabbing from those partitions. I read in the spring-kafka documentation that by default, the retry topics that @RetryableTopic autocreates are created with one partition and one replica, but you can change these values with autoCreateTopicsWith() in the configuration. From this, I have a few questions:

            • With the autoscaling in mind, is it recommended to just create the retry topics with the same number of partitions and replicas (50 & 3) as the original topic?
            • Is there some benefit to having differing numbers of partitions/replicas for the retry topics considering their default values are just one?
            ...

            ANSWER

            Answered 2022-Feb-03 at 21:33

            The retry topics should have at least as many partitions as the original (by default, records are sent to the same partition); otherwise you have to customize the destination resolution to avoid the warning log. See the question "Destination resolver returned non-existent partition" below.

            50 partitions might be overkill unless you get a lot of retried records.

            It's up to you how many replicas you want, but in general, yes, I would use the same number of replicas as the original.

            Only you can decide what are the "correct" numbers.
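            As a hedged sketch using the RetryTopicConfiguration bean style (the builder call mirrors the autoCreateTopicsWith() method referenced in the question, but its exact signature here, taking a partition count and a replication factor, is an assumption; the back-off and attempt values are illustrative):

            import org.springframework.context.annotation.Bean;
            import org.springframework.context.annotation.Configuration;
            import org.springframework.kafka.core.KafkaTemplate;
            import org.springframework.kafka.retrytopic.RetryTopicConfiguration;
            import org.springframework.kafka.retrytopic.RetryTopicConfigurationBuilder;

            @Configuration
            public class RetryTopicPartitionConfig {

                @Bean
                public RetryTopicConfiguration retryTopicConfiguration(KafkaTemplate<String, String> template) {
                    return RetryTopicConfigurationBuilder.newInstance()
                            // Create retry topics with the same partition count and replication
                            // factor as the main topic (50 and 3 in the question).
                            .autoCreateTopicsWith(50, (short) 3)
                            .fixedBackOff(3000)
                            .maxAttempts(4)
                            .create(template);
                }
            }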

            Source https://stackoverflow.com/questions/70977626

            QUESTION

            Spring Kafka configuration for high consumer processing time
            Asked 2022-Feb-01 at 23:02
            
            <dependency>
                <groupId>org.springframework.kafka</groupId>
                <artifactId>spring-kafka</artifactId>
                <version>2.8.2</version>
            </dependency>

            ...

            ANSWER

            Answered 2022-Feb-01 at 23:02

            max.poll.interval.ms is the maximum time between polls from the consumer. It should be set to a value that is longer than the processing time for all the records fetched during a poll (max.poll.records). Note that it will also delay group rebalances, since the consumer will only join the rebalance inside the call to poll.

            A consumer failure is determined by the heartbeat sent by the consumer. The interval for the heartbeat is configured using heartbeat.interval.ms.

            When the consumer does not send a heartbeat for session.timeout.ms, it is considered failed.

            Ideally, for your use case:

            • session.timeout.ms should be set to a low value to detect failure more quickly, but it must be greater than heartbeat.interval.ms.
            • max.poll.interval.ms should be set to a value large enough to process max.poll.records records.

            Note:

            The session.timeout.ms value must be in the allowable range configured on the broker by group.min.session.timeout.ms and group.max.session.timeout.ms.
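            As a hedged sketch of these settings in a consumer factory (the bootstrap server and all values are illustrative; tune them to your actual per-record processing time):

            import java.util.HashMap;
            import java.util.Map;
            import org.apache.kafka.clients.consumer.ConsumerConfig;
            import org.apache.kafka.common.serialization.StringDeserializer;
            import org.springframework.context.annotation.Bean;
            import org.springframework.context.annotation.Configuration;
            import org.springframework.kafka.core.ConsumerFactory;
            import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

            @Configuration
            public class SlowConsumerConfig {

                @Bean
                public ConsumerFactory<String, String> consumerFactory() {
                    Map<String, Object> props = new HashMap<>();
                    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
                    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
                    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
                    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);           // records fetched per poll
                    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600_000);  // longer than the worst-case time to process 50 records
                    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 10_000);     // low, to detect failures quickly
                    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3_000);   // must stay well below session.timeout.ms
                    return new DefaultKafkaConsumerFactory<>(props);
                }
            }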

            Source https://stackoverflow.com/questions/70947052

            QUESTION

            Destination resolver returned non-existent partition
            Asked 2022-Jan-24 at 15:11

            I am using Spring Kafka to consume messages from Confluent Kafka, and I am using a RetryTopicConfiguration bean to configure the topics and back-off strategy. My application works fine, but I see a lot of WARNING logs like the one below, and I am wondering if my configuration is incorrect.

            ...

            ANSWER

            Answered 2022-Jan-24 at 15:11

            By default, the same partition as the original topic is used; you can override that behavior by overriding the DeadLetterPublishingRecovererFactory @Bean:

            Source https://stackoverflow.com/questions/70803133

            QUESTION

            Dynamic destination in Spring Cloud Stream from Azure Event Hub to Kafka
            Asked 2022-Jan-21 at 17:07

            I'm trying to use Spring Cloud Stream to process messages sent to an Azure Event Hub instance. Those messages should be routed to a tenant-specific topic determined at runtime, based on message content, on a Kafka cluster. For development purposes, I'm running Kafka locally via Docker. I've done some research about bindings not known at configuration time and have found that dynamic destination resolution might be exactly what I need for this scenario.

            However, the only way I can get my solution working is by using StreamBridge. I would rather use the dynamic destination header spring.cloud.stream.sendto.destination, so that the processor could be written as a Function<> instead of a Consumer<> (it is not really a sink). The main concern with this approach is that, since the final solution will be deployed with Spring Data Flow, I'm afraid I will have trouble configuring the streams if I use StreamBridge.

            Moving on to the code, this is the processor function; I stripped away the unrelated parts:

            ...

            ANSWER

            Answered 2022-Jan-20 at 21:56

            Not sure what exactly is causing the issues you have. I just created a basic sample app demonstrating the sendto.destination header and verified that the app works as expected. It is a multi-binder application with two Kafka clusters connected. The function will consume from the first cluster and then using the sendto header, produce the output to the second cluster. Compare the code/config in this sample with your app and see what is missing.

            I see references to StreamBridge in the stacktrace you shared. However, when using the sendto.destination header, it shouldn't go through StreamBridge.
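            As a hedged sketch of the header-based approach (the tenant-id header, topic prefix, class name, and function name are hypothetical; the function still needs to be declared via spring.cloud.function.definition and the usual Spring Cloud Stream Kafka binder configuration):

            import java.util.function.Function;
            import org.springframework.context.annotation.Bean;
            import org.springframework.context.annotation.Configuration;
            import org.springframework.messaging.Message;
            import org.springframework.messaging.support.MessageBuilder;

            @Configuration
            public class TenantRoutingConfig {

                @Bean
                public Function<Message<String>, Message<String>> route() {
                    return message -> {
                        // Pick the target topic from message content/headers ("tenant-id" is a placeholder).
                        String tenant = message.getHeaders().get("tenant-id", String.class);
                        // The sendto.destination header tells Spring Cloud Stream where to publish the output.
                        return MessageBuilder.fromMessage(message)
                                .setHeader("spring.cloud.stream.sendto.destination", "events-" + tenant)
                                .build();
                    };
                }
            }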

            Source https://stackoverflow.com/questions/70785204

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install spring-kafka

            You can download it from GitHub or Maven.
            You can use spring-kafka like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the spring-kafka component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.
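            For example, the standard coordinates can be added with Maven (the version shown matches the latest version listed above):

            <dependency>
                <groupId>org.springframework.kafka</groupId>
                <artifactId>spring-kafka</artifactId>
                <version>3.1.0</version>
            </dependency>

            or with Gradle:

            implementation 'org.springframework.kafka:spring-kafka:3.1.0'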

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/spring-projects/spring-kafka.git

          • CLI

            gh repo clone spring-projects/spring-kafka

          • sshUrl

            git@github.com:spring-projects/spring-kafka.git


            Consider Popular Pub Sub Libraries

            • EventBus by greenrobot
            • kafka by apache
            • celery by celery
            • rocketmq by apache
            • pulsar by apache

            Try Top Libraries by spring-projects

            • spring-boot by spring-projects (Java)
            • spring-framework by spring-projects (Java)
            • spring-security by spring-projects (Java)
            • spring-petclinic by spring-projects (CSS)
            • spring-mvc-showcase by spring-projects (Java)