spring-cloud-stream-binder-kafka | Spring Cloud Stream binders for Apache Kafka | Pub Sub library
kandi X-RAY | spring-cloud-stream-binder-kafka Summary
Spring Cloud Stream binders for Apache Kafka and Kafka Streams
Top functions reviewed by kandi - BETA
- Create a Kafka consumer producer
- Reset offsets for auto rebalance
- Create back off for rollback processor
- Setup rebalance listener
- Creates a message handler for the producer
- Add header patterns that should never be mapped
- Removes never-mapped headers from the given headers
- Initialize the bean factories
- Builds list of input bindings
- Initializes the function factory
- Evaluates the function components
- Create event type processor
- Gets the partition information for a given topic
- Gets a queryable store
- Builds the health check
- Extract headers from message headers
- Filters the bean registry
- Creates polled consumer resources
- Sets the Kafka global properties
- Perform the health check
- Binds a Kafka consumer
- Bind producer
- Checks and binds metrics to the given registry
- Binds a consumer
- Extract resolvable types
- Start the downloader
spring-cloud-stream-binder-kafka Key Features
spring-cloud-stream-binder-kafka Examples and Code Snippets
Community Discussions
Trending Discussions on spring-cloud-stream-binder-kafka
QUESTION
I am using Spring Cloud Streams with the Kafka Streams Binder, the functional style processor API and also multiple processors.
It's really cool to configure a processing application with multiple processors and multiple Kafka topics in this way and staying in the Spring Boot universe with /actuator, WebClient and so on. Actually I like it more than using plain Apache Kafka Streams.
BUT: I would like to integrate exception handling for exceptions occurring within the processors and send these unprocessable messages to a DLQ. I have already set up DLQs for deserialization errors, but I found no good advice on achieving this besides sobychacko's answer to a similar question, and that is only a snippet. Does anybody have a more detailed example? I am asking because the Spring Cloud Stream documentation on branching looks quite different.
...ANSWER
Answered 2021-Sep-29 at 18:30
Glad to hear about your usage of Spring Cloud Stream with Kafka Streams.
The reference docs you mentioned are from an older release. Please navigate to the newer docs from this page: https://spring.io/projects/spring-cloud-stream#learn
This question has come up before. See if these could help with your use case:
Error handling in Spring Cloud Kafka Streams
How to stop sending to kafka topic when control goes to catch block Functional kafka spring
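For illustration, here is a minimal sketch of the branching approach, assuming a @Configuration class with the usual Kafka Streams imports; the validation logic, value types, and the my-dlq-topic name are placeholders, and it uses the KStream split()/Branched API available in Kafka Streams 2.8+. Records that fail processing are routed straight to a DLQ topic, while the rest continue through the function:

@Bean
public Function<KStream<String, String>, KStream<String, String>> process() {
    return input -> {
        // Classify each record; anything that fails validation/processing lands in the "dlq" branch.
        Map<String, KStream<String, String>> branches = input
                .split(Named.as("handle-"))
                .branch((key, value) -> {
                    try {
                        Integer.parseInt(value); // stand-in for the real per-record processing
                        return true;
                    } catch (Exception ex) {
                        return false;
                    }
                }, Branched.as("ok"))
                .defaultBranch(Branched.as("dlq"));
        // Unprocessable records go directly to the (assumed) DLQ topic.
        branches.get("handle-dlq").to("my-dlq-topic");
        // The healthy branch is what the binder sends to the output binding.
        return branches.get("handle-ok").mapValues(String::toUpperCase);
    };
}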
QUESTION
We are using Spring Cloud Stream to listen to multiple RabbitMQ queues, specifically with the Spring Cloud Function (SCF) model.
- The spring-cloud-stream-reactive module is deprecated in favor of native support via the Spring Cloud Function programming model.
While there was a single node/host it was working fine (application.yml snippet shared below); however, the moment we try to connect multiple nodes it fails. Can someone explain how to connect them, or point to a sample or the relevant Spring Cloud documentation?
Following Code is working as expected
...ANSWER
Answered 2022-Feb-22 at 19:37
Adding the binders config for both rabbit1 and rabbit2 resolved the issue.
Below is the sample config which I tried; with it I was able to consume messages successfully.
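For illustration, a hedged application.yml sketch of the documented multi-binder layout (host names, binding names, and queue names are assumptions): each RabbitMQ node gets its own binder entry with its own environment, and each binding selects its binder explicitly.

spring:
  cloud:
    stream:
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: rabbit-node-1   # assumed host/credentials for the first node
                port: 5672
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: rabbit-node-2   # assumed host/credentials for the second node
                port: 5672
      bindings:
        consumeA-in-0:
          destination: queueA         # assumed queue/destination names
          binder: rabbit1
        consumeB-in-0:
          destination: queueB
          binder: rabbit2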
QUESTION
Initial Question: I have a question about how I can bind my GlobalStateStore to a processor. My application has a GlobalStateStore with its own processor ("GlobalConfigProcessor") to keep the store up to date. Also, I have another processor ("MyClassProcessor") which is called in my consumer function. Now I try to access the store from MyClassProcessor, but I get an exception saying: Invalid topology: StateStore config_statestore is not added yet.
Update on current situation: I setup a test repository to give a better overview over my situation. This can be found here: https://github.com/fx42/store-example
As you can see in the repo, I have two consumers which consume different topics. The Config-Topic provides an event which I want to write to a GlobalStateStore; the StateStoreUpdateConsumer.java and the StateStoreProcessor.java are involved here. With MyClassEventConsumer.java I process another input topic and want to read values from the GlobalStateStore. As described in this doc, I can't initialize GlobalStateStores simply as a StateStoreBean; instead I have to register the store explicitly with a StreamsBuilderFactoryBeanCustomizer bean. This code is currently commented out in StreamConfig.java; without it I get the exception.
...ANSWER
Answered 2022-Feb-17 at 17:01
I figured out my problem. For me it was the @EnableKafkaStreams annotation which I used. I assume this was the reason I had two different contexts running in parallel and they collided. Also, I needed to use the StreamsBuilderFactoryBeanConfigurer instead of the StreamsBuilderFactoryBeanCustomizer to get the GlobalStateStore registered correctly.
These changes are made in the linked test repo, which can now start the application context properly.
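A minimal sketch of the shape of that change, assuming a @Configuration class (the bean name is made up and the store registration details are omitted): the configurer receives the binder-managed StreamsBuilderFactoryBean, which is where the global state store registration from the commented-out Customizer code would now live.

@Bean
public StreamsBuilderFactoryBeanConfigurer globalStoreConfigurer() {
    return factoryBean -> {
        // Register the GlobalStateStore (e.g. via an infrastructure customizer) on factoryBean here,
        // mirroring what the StreamsBuilderFactoryBeanCustomizer code from the question did.
    };
}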
QUESTION
I'm trying to set up a Spring Cloud Stream project with Kafka. Everything is working as expected except the key de/serialization. The files are the pom.xml:
...ANSWER
Answered 2022-Jan-27 at 14:22
You can override the default serializer at the binder or binding level, e.g. for a specific binding:
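For illustration, a hedged application.yml sketch of a binding-level override (the binding names process-in-0/process-out-0 are assumptions); the same keys can instead be set once for all bindings under spring.cloud.stream.kafka.binder.configuration.

spring:
  cloud:
    stream:
      kafka:
        bindings:
          process-in-0:
            consumer:
              configuration:
                key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
          process-out-0:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer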
QUESTION
I'm trying to use Spring Cloud Stream to process messages sent to an Azure Event Hub instance. Those messages should be routed to a tenant-specific topic determined at runtime, based on message content, on a Kafka cluster. For development purposes, I'm running Kafka locally via Docker. I've done some research about bindings not known at configuration time and have found that dynamic destination resolution might be exactly what I need for this scenario.
However, the only way I can get my solution working is to use StreamBridge. I would rather use the dynamic destination header spring.cloud.stream.sendto.destination, so that the processor could be written as a Function<> instead of a Consumer<> (it is not properly a sink). The main concern about this approach is that, since the final solution will be deployed with Spring Cloud Data Flow, I'm afraid I will have trouble configuring the streams if using StreamBridge.
Moving on to the code, this is the processor function (I stripped away the unrelated parts):
...ANSWER
Answered 2022-Jan-20 at 21:56
Not sure what exactly is causing the issues you have. I just created a basic sample app demonstrating the sendto.destination header and verified that the app works as expected. It is a multi-binder application with two Kafka clusters connected. The function will consume from the first cluster and then, using the sendto header, produce the output to the second cluster. Compare the code/config in this sample with your app and see what is missing.
I see references to StreamBridge in the stacktrace you shared. However, when using the sendto.destination header, it shouldn't go through StreamBridge.
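For reference, a minimal sketch of that pattern (the bean name, the tenant header, and the topic naming scheme are assumptions): the processor stays a Function<> and stamps the spring.cloud.stream.sendto.destination header on the outgoing message, which the binder then uses as the target topic.

@Bean
public Function<Message<String>, Message<String>> route() {
    return incoming -> {
        // Assumed: the tenant is carried in a message header; it could equally be parsed from the payload.
        String tenant = incoming.getHeaders().get("tenant", String.class);
        return MessageBuilder.withPayload(incoming.getPayload())
                .setHeader("spring.cloud.stream.sendto.destination", "events-" + tenant)
                .build();
    };
}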
QUESTION
In the event you only have a single bean of type java.util.function.[Supplier/Function/Consumer], you can skip the spring.cloud.function.definition property, since such a functional bean will be auto-discovered. However, it is considered best practice to use such property to avoid any confusion.
So I do have multiple beans in my project, but still only one bean of type Supplier.
Not sure what exactly I am missing.
EXCEPTION TRACE:
...ANSWER
Answered 2021-Dec-07 at 15:33
For 2021.0.0 you have to use:
QUESTION
My understanding was that spring-kafka was created to interact with the Kafka client APIs, and that later on the spring-cloud-stream project was created for "building highly scalable event-driven microservices connected with shared messaging systems"; this project includes a couple of binders, one of which allows interaction with the Kafka Streams API:
...ANSWER
Answered 2021-Nov-11 at 00:31
As Gary pointed out in the comments above, spring-kafka is the lower-level library that provides the building blocks for the Spring Cloud Stream Kafka Streams binder (spring-cloud-stream-binder-kafka-streams). The binder provides a programming model with which you can write your Kafka Streams processor as a java.util.function.Function or java.util.function.Consumer. You can have multiple such functions and each of them will build its own Kafka Streams topology. Behind the scenes, the binder uses spring-kafka to build the Kafka Streams StreamsBuilder object using the StreamsBuilderFactoryBean. The binder also allows you to compose various functions. The functional model comes largely from Spring Cloud Function, but it is adapted for Kafka Streams in the binder implementation. The short answer is that both spring-kafka and the Spring Cloud Stream Kafka Streams binder will work, but the binder gives a programming model and extra features consistent with Spring Cloud Stream, whereas spring-kafka gives various low-level building blocks.
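As a concrete illustration of that programming model (bean name and types are assumptions), a Kafka Streams processor in the binder is just a function bean; the binder builds and starts the resulting topology through the StreamsBuilderFactoryBean mentioned above.

@Bean
public Function<KStream<String, String>, KStream<String, String>> uppercase() {
    // The binder binds the input/output KStreams to topics via the uppercase-in-0 / uppercase-out-0 bindings.
    return input -> input.mapValues(value -> value.toUpperCase());
}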
QUESTION
I am trying to write a spring-cloud-stream function (spring-starter-parent 2.5.3, java 11, spring-cloud-version 2020.0.3) which has both a Kafka and Postgres transaction. The function will raise a simulated error whenever the consumed message starts with the string "fail," which I expect to cause the database transaction to roll back, then cause the kafka transaction to roll back. (I am aware that the Kafka transaction is not XA, which is fine.) So far I have not gotten the database transaction to work, but the kafka transaction does.
Currently I am using a @Transactional annotation, which does not appear to start a database transaction. (The Kafka binder documentation recommends synchronizing database and Kafka transactions using the ChainedTransactionManager, but the Spring Kafka documentation states it is deprecated in favor of using the @Transactional annotation, and the S.C.S. example for this problem uses the @Transactional annotation and the default transaction manager created by the starter-jpa library, I think.) I can see in my debugger that, regardless of whether or not I use @EnableTransactionManagement and @Transactional on my consumer, the consumer is executed in a Kafka transaction using a transaction template higher in the stack, but I do not see a database transaction anywhere.
I have a few questions I want to understand:
- Am I correct that the Kafka listener container runs my consumers in the context of a Kafka transaction regardless of whether or not I have a @Transactional annotation? And if so, is there a way to run only specific functions in a Kafka transaction?
- Would the above change for producers, since the container doesn't have a way to intercept calls to the producers (as far as I know)?
- What should I do to synchronize a Kafka and a database transaction so that the DB commit happens before the Kafka commit?
I have the following CrudRepository, collection of handlers, and application.yml:
...ANSWER
Answered 2021-Aug-26 at 16:38
@Bean
@Transactional
public Consumer persistAndSplit(
        StreamBridge bridge,
        AuditLogRepository repository
) {
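For context, the Kafka binder's own transaction support is switched on with a single property (a hedged sketch; the prefix value is an assumption). With it, the listener container runs each delivery inside a Kafka transaction, so a @Transactional listener's database commit happens inside, and therefore before, the surrounding Kafka commit.

spring:
  cloud:
    stream:
      kafka:
        binder:
          transaction:
            transaction-id-prefix: tx-   # assumed prefix; enables binder-managed Kafka transactions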
QUESTION
Trying to configure Spring to send bad messages to a dead letter queue while using batch mode, but nothing ends up in the DLQ topic.
I use Spring Boot 2.5.3 and Spring Cloud 2020.0.3. This automatically resolves the version of spring-cloud-stream-binder-kafka-parent to 3.1.3.
Here is application.properties:
...ANSWER
Answered 2021-Aug-09 at 15:35
When using spring-cloud-stream, the container is not created by Boot's container factory, it is created by the binder; the error handler @Bean won't be automatically wired in.
You have to configure a ListenerContainerCustomizer @Bean instead.
Example here: Can I apply graceful shutdown when using Spring Cloud Stream Kafka 3.0.3.RELEASE?
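A hedged sketch of such a customizer (bean wiring and names are assumptions, and it uses the CommonErrorHandler/DefaultErrorHandler API from spring-kafka 2.8+; on the 2.7.x line from the question a batch error handler variant would be configured instead): the binder-created container gets a dead-letter-publishing recoverer wired in directly.

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> dlqCustomizer(
        KafkaTemplate<byte[], byte[]> template) {
    return (container, destinationName, group) -> container.setCommonErrorHandler(
            new DefaultErrorHandler(
                    new DeadLetterPublishingRecoverer(template), // publishes failed records to <topic>.DLT
                    new FixedBackOff(0L, 2L)));                  // two retries with no delay, then recover
}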
QUESTION
I use Spring Boot version 2.5.3, spring-cloud-stream-binder-kafka-streams version 3.1.3 and kafka-clients version 2.8.0. I want to use the REPLACE_THREAD option of the uncaught exception handler in Kafka Streams.
But I'm not able to use that, since StreamsBuilderFactoryBeanConfigurer (version 2.6.7) doesn't support:
fb.setUncaughtExceptionHandler(ex -> {
    log.error("Uncaught exception: ", ex);
    snsService.publish("UncaughtException thrown");
    return StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD;
});
Is it possible to replace the streams thread with fb.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler())?
Thanks in Advance!
...ANSWER
Answered 2021-Aug-16 at 07:45
The Spring Boot version should be 2.6 or greater to support REPLACE_THREAD in Kafka Streams. https://spring.io/projects/spring-kafka#:~:text=Spring%20Boot%202.4%20users%20should,will%20use%20the%20correct%20version).
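Once on Spring Boot 2.6+ (spring-kafka 2.8+), the REPLACE_THREAD response can be wired in roughly like this (a hedged sketch; the bean name and the log field are assumptions):

@Bean
public StreamsBuilderFactoryBeanConfigurer uncaughtExceptionConfigurer() {
    return factoryBean -> factoryBean.setStreamsUncaughtExceptionHandler(exception -> {
        log.error("Uncaught exception in stream thread", exception); // log: e.g. an SLF4J logger on the config class
        return StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD;
    });
}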
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install spring-cloud-stream-binder-kafka
You can use spring-cloud-stream-binder-kafka like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the spring-cloud-stream-binder-kafka component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
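For example, with Maven the binder is pulled in with the following coordinates (the version is normally managed by the Spring Cloud BOM, so it is omitted here):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
    <!-- version usually comes from the spring-cloud-dependencies BOM -->
</dependency>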