spring-kafka | Provides Familiar Spring Abstractions for Apache Kafka | Pub Sub library
kandi X-RAY | spring-kafka Summary
Top functions reviewed by kandi - BETA
- Determine the type of the given method.
- Run the consumer.
- Produce a Kafka message.
- Retry batches.
- Process the retry topic annotation.
- Check topics config.
- Build callback.
- Initialize topics.
- Find all annotated listeners.
- Recursively traverse a list of records.
Community Discussions
Trending Discussions on spring-kafka
QUESTION
I am trying to implement non-blocking retries with a single topic and fixed back-off.
I am able to do so, thanks to the documentation: https://docs.spring.io/spring-kafka/reference/html/#single-topic-fixed-delay-retries.
Now I also need to perform a few blocking/local retries on the main topic. I have been trying to implement this using DefaultErrorHandler, as below:
ANSWER
Answered 2022-Apr-01 at 21:13
We're currently working on improving configuration for the non-blocking retries components. For now, as documented here, you should inject these beans, such as:
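The answer's bean snippet is not shown above; as a hedged sketch (assuming spring-kafka 2.8.x, and not the answer's exact wiring), a DefaultErrorHandler with a FixedBackOff is the usual building block for blocking retries on the main topic, though combining it with the retry-topic feature may require the additional bean wiring the answer refers to:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
class BlockingRetryConfig {
    // Two blocking (local) retries, 1 second apart, on the main topic before
    // the record is handed over to recovery / the non-blocking retry flow.
    @Bean
    DefaultErrorHandler errorHandler() {
        return new DefaultErrorHandler(new FixedBackOff(1000L, 2L));
    }
}
```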
QUESTION
Note: this has been fixed through Spring Boot 2.6.5 (see https://github.com/spring-projects/spring-boot/issues/30243).
Since upgrading to Spring Boot 2.6.x (in my case: 2.6.1), I have multiple projects whose unit tests now fail on Windows because they cannot start EmbeddedKafka, yet run fine on Linux.
There are multiple errors, but this is the first one thrown:
ANSWER
Answered 2021-Dec-09 at 15:51
Known bug on the Apache Kafka side; there is nothing to do from the Spring perspective. See more info here: https://github.com/spring-projects/spring-kafka/discussions/2027 and here: https://issues.apache.org/jira/browse/KAFKA-13391
You need to wait until Apache Kafka 3.0.1, or don't use embedded Kafka and instead rely on Testcontainers, for example, or a fully external Apache Kafka broker.
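If you go the Testcontainers route, a minimal sketch (assuming the org.testcontainers:kafka test dependency; the image tag is illustrative) looks like this:

```java
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

class KafkaIntegrationTestSupport {
    // One containerized broker shared by the test JVM.
    static final KafkaContainer KAFKA =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.0.1"));

    static {
        KAFKA.start();
        // Point Spring Boot at the containerized broker instead of EmbeddedKafka.
        System.setProperty("spring.kafka.bootstrap-servers", KAFKA.getBootstrapServers());
    }
}
```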
QUESTION
In our infrastructure we are running Kafka with 3 nodes and have several Spring Boot services running in OpenShift. Some of the communication between the services happens via Kafka. For the consumers/listeners we are using the @KafkaListener Spring annotation with a unique group ID, so that each instance (pod) consumes all the partitions of a topic.
ANSWER
Answered 2022-Feb-22 at 10:04
In the Kafka config you can use the reconnect.backoff.max.ms parameter to set the maximum number of milliseconds to wait between reconnection attempts. Additionally, set reconnect.backoff.ms to the base number of milliseconds to wait before retrying to connect. If provided, the back-off per host will increase exponentially for each consecutive connection failure, up to this maximum.
Kafka documentation: https://kafka.apache.org/31/documentation/#streamsconfigs
If you set the maximum to something fairly high, like a day, the connection will be reattempted for up to a day, with increasing intervals (50, 500, 5000, 50000 ms, and so on).
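As a hedged sketch, these are standard Kafka client properties, so they can be applied to a Spring Kafka consumer factory like this (broker address and values are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

class ReconnectConfig {
    static ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG, 50);              // base wait before reconnecting
        props.put(ConsumerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG, 86_400_000);  // cap exponential back-off at one day
        return new DefaultKafkaConsumerFactory<>(props);
    }
}
```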
QUESTION
My team is writing a service that leverages the retryable topics mechanism offered by Spring Kafka (version 2.8.2). Here is a subset of the configuration:
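A hypothetical sketch of such a retryable-topics configuration bean (illustrative values, not the poster's actual config):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.retrytopic.RetryTopicConfiguration;
import org.springframework.kafka.retrytopic.RetryTopicConfigurationBuilder;

@Configuration
class RetryTopicConfig {
    @Bean
    RetryTopicConfiguration retryTopic(KafkaTemplate<String, String> template) {
        return RetryTopicConfigurationBuilder
                .newInstance()
                .fixedBackOff(3000)   // 3 s between delivery attempts
                .maxAttempts(4)       // 1 original delivery + 3 retries
                .create(template);
    }
}
```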
ANSWER
Answered 2022-Feb-17 at 22:05
That is a good suggestion; it probably should be the default behavior (or at least an option). Please open a feature request on GitHub.
There is a somewhat related discussion here: https://github.com/spring-projects/spring-kafka/discussions/2101
QUESTION
In my application config I have defined the following properties:
ANSWER
Answered 2022-Feb-16 at 13:12
According to this answer, https://stackoverflow.com/a/51236918/16651073, Tomcat falls back to default logging if it cannot resolve the location.
Try saving the properties without the spaces, like this:
logging.file.name=application.logs
QUESTION
I am trying to find a way to use the new DefaultErrorHandler instead of the deprecated SeekToCurrentErrorHandler in spring-kafka 2.8.1, in order to override the default retry behavior in case of errors. I want to "stop" the retry process, so that if an error occurs, no retry is done.
Now I have, in a config class, the following bean that works as expected:
ANSWER
Answered 2022-Feb-09 at 15:16
factory.setCommonErrorHandler(new Default....)
Boot auto-configuration of a CommonErrorHandler bean requires Boot 2.6. See https://github.com/spring-projects/spring-boot/commit/c3583a4b06cff3f53b3322cd79f2b64d17211d0e
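A minimal sketch of that approach (zero retry attempts, so a failed record is not redelivered):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
class NoRetryConfig {
    // FixedBackOff(0L, 0L): no delay and zero retry attempts, so a failed
    // record goes straight to recovery instead of being redelivered.
    @Bean
    DefaultErrorHandler errorHandler() {
        return new DefaultErrorHandler(new FixedBackOff(0L, 0L));
    }
}
```

On Boot 2.6+ this bean is picked up by the auto-configured container factory; on earlier versions, set it yourself with factory.setCommonErrorHandler(...), as in the answer.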
QUESTION
Hello Stack Overflow community and anyone familiar with spring-kafka!
I am currently working on a project which leverages the @RetryableTopic feature from spring-kafka in order to reattempt the delivery of failed messages. The listener annotated with @RetryableTopic is consuming from a topic that has 50 partitions and 3 replicas. When the app is receiving a lot of traffic, it could possibly be autoscaled up to 50 instances of the app (consumers) grabbing from those partitions. I read in the spring-kafka documentation that by default, the retry topics that @RetryableTopic autocreates are created with one partition and one replica, but you can change these values with autoCreateTopicsWith() in the configuration. From this, I have a few questions:
- With the autoscaling in mind, is it recommended to just create the retry topics with the same number of partitions and replicas (50 & 3) as the original topic?
- Is there some benefit to having differing numbers of partitions/replicas for the retry topics considering their default values are just one?
ANSWER
Answered 2022-Feb-03 at 21:33
The retry topics should have at least as many partitions as the original (by default, records are sent to the same partition); otherwise you have to customize the destination resolution to avoid the warning log (see "Destination resolver returned non-existent partition").
50 partitions might be overkill unless you get a lot of retried records.
It's up to you how many replicas you want, but in general, yes, I would use the same number of replicas as the original.
Only you can decide what the "correct" numbers are.
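A hedged sketch of such a configuration (assuming spring-kafka 2.8.x; the partition and replica counts mirror the original topic from the question):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.retrytopic.RetryTopicConfiguration;
import org.springframework.kafka.retrytopic.RetryTopicConfigurationBuilder;

@Configuration
class RetryTopicCreationConfig {
    // Mirror the original topic: 50 partitions, replication factor 3.
    @Bean
    RetryTopicConfiguration retryTopics(KafkaTemplate<String, String> template) {
        return RetryTopicConfigurationBuilder
                .newInstance()
                .autoCreateTopicsWith(50, (short) 3)
                .create(template);
    }
}
```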
QUESTION
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.8.2</version>
</dependency>
ANSWER
Answered 2022-Feb-01 at 23:02
max.poll.interval.ms is the maximum time between polls from the consumer. It should be set to a value that is longer than the processing time for all the records fetched during one poll (max.poll.records). Note that it will also delay group rebalances, since the consumer will only join the rebalance inside the call to poll.
A consumer failure is determined by the heartbeat sent by the consumer. The interval for the heartbeat is configured using heartbeat.interval.ms. When the consumer does not send a heartbeat for session.timeout.ms, it is considered failed.
Ideally, for your use case:
- session.timeout.ms should be set to a low value to detect failure more quickly, but greater than heartbeat.interval.ms.
- max.poll.interval.ms should be set to a value large enough to process max.poll.records.
Note that the session.timeout.ms value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
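A hedged sketch of these settings as consumer properties (all values are illustrative, not recommendations):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;

class ConsumerTimeouts {
    static Map<String, Object> timeoutProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 10_000);    // consumer declared dead after 10 s without heartbeats
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3_000);  // heartbeat well inside the session timeout
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);         // records fetched per poll
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000); // allow up to 5 minutes to process them
        return props;
    }
}
```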
QUESTION
I am using Spring-Kafka to consume messages from Confluent Kafka, and I am using a RetryTopicConfiguration bean to configure the topics and back-off strategy. My application works fine, but I see a lot of WARN logs like the one below, and I am wondering if my configuration is incorrect.
ANSWER
Answered 2022-Jan-24 at 15:11
By default, the same partition as the original topic is used; you can override that behavior by overriding the DeadLetterPublishingRecovererFactory @Bean:
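As an illustration of the underlying mechanism rather than the answer's exact factory override, a DeadLetterPublishingRecoverer accepts a destination-resolver function, and returning a negative partition lets Kafka choose the partition:

```java
import org.apache.kafka.common.TopicPartition;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;

@Configuration
class DlprConfig {
    // A negative partition means "let Kafka pick", so the target topic does not
    // need as many partitions as the original. ".DLT" is the framework's default suffix.
    @Bean
    DeadLetterPublishingRecoverer recoverer(KafkaTemplate<Object, Object> template) {
        return new DeadLetterPublishingRecoverer(template,
                (record, ex) -> new TopicPartition(record.topic() + ".DLT", -1));
    }
}
```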
QUESTION
I'm trying to use Spring Cloud Stream to process messages sent to an Azure Event Hub instance. Those messages should be routed to a tenant-specific topic determined at runtime, based on message content, on a Kafka cluster. For development purposes, I'm running Kafka locally via Docker. I've done some research about bindings not known at configuration time and have found that dynamic destination resolution might be exactly what I need for this scenario.
However, the only way to get my solution working is to use StreamBridge. I would rather use the dynamic destination header spring.cloud.stream.sendto.destination, so that the processor could be written as a Function<> instead of a Consumer<> (it is not properly a sink). The main concern about this approach is that, since the final solution will be deployed with Spring Cloud Data Flow, I'm afraid I will have trouble configuring the streams if using StreamBridge.
Moving on to the code, this is the processor function (I stripped away the unrelated parts):
ANSWER
Answered 2022-Jan-20 at 21:56
Not sure what exactly is causing the issues you have. I just created a basic sample app demonstrating the sendto.destination header and verified that the app works as expected. It is a multi-binder application with two Kafka clusters connected: the function consumes from the first cluster and then, using the sendto header, produces the output to the second cluster. Compare the code/config in this sample with your app and see what is missing.
I see references to StreamBridge in the stacktrace you shared; however, when using the sendto.destination header, it shouldn't go through StreamBridge.
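For reference, a hedged sketch of a Function that sets the sendto.destination header (the "tenant" header and topic naming are hypothetical):

```java
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

@Configuration
class RoutingConfig {
    // Routes each message to a topic derived from a (hypothetical) "tenant" header
    // by setting the spring.cloud.stream.sendto.destination header, avoiding StreamBridge.
    @Bean
    Function<Message<String>, Message<String>> route() {
        return msg -> {
            String tenant = (String) msg.getHeaders().getOrDefault("tenant", "default");
            return MessageBuilder.fromMessage(msg)
                    .setHeader("spring.cloud.stream.sendto.destination", "tenant-" + tenant)
                    .build();
        };
    }
}
```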
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install spring-kafka
You can use spring-kafka like any standard Java library: include the jar files in your classpath. You can also use any IDE to run and debug the spring-kafka component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.
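For example, with Gradle the dependency can be declared as follows (the version shown is illustrative; pick the latest from Maven Central):

```gradle
// Version is illustrative; check Maven Central for the latest release.
implementation 'org.springframework.kafka:spring-kafka:2.8.2'
```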