spring-cloud-stream | Framework for building Event-Driven Microservices | Reactive Programming library
kandi X-RAY | spring-cloud-stream Summary
Framework for building Event-Driven Microservices
Top functions reviewed by kandi - BETA
- Binds a producer.
- Gets the binder for the given name.
- Validates the given stream listener method.
- Returns a map of binder configurations for the given binding service.
- Polls for a message.
- Entry point for the downloader.
- Embeds headers.
- Returns the parameter type of the specified interface.
- Converts from internal format to string.
- Creates the binder type registry.
spring-cloud-stream Key Features
spring-cloud-stream Examples and Code Snippets
spring-cloud-starter-netflix-turbine-stream
spring-boot-starter-webflux
spring-cloud-stream-binder-rabbit
```yaml
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
```
Community Discussions
Trending Discussions on spring-cloud-stream
QUESTION
I'm trying to start an InboundChannelAdapter manually using a @Scheduled function. I think I'm setting up the message payload wrong but I'm not sure what to have it as. Here's the code:
...ANSWER
Answered 2021-Jun-11 at 07:39 You are using an outdated API.
The annotation-based configuration model has long been deprecated in favor of the functional programming model, so EnableBinding, StreamListener, etc. are on their way out.
For your case you can simply use a Supplier with StreamBridge. See this section for more details. You can then start/stop the binding programmatically using the lifecycle features of spring-cloud-stream.
Now, that doesn't mean your other problem will be resolved, but without a full stack trace and sufficient detail (e.g., the versions of the frameworks you are using) it is difficult to say.
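As a rough illustration of the functional model the answer points to, a scheduled method can push messages on demand through StreamBridge instead of an annotation-based InboundChannelAdapter. This is a sketch only: the class name, binding name "orders-out-0", and the fixed-delay schedule are assumptions, not taken from the original question.

```java
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ScheduledProducer {

    private final StreamBridge streamBridge;

    public ScheduledProducer(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    // Runs on a fixed schedule; each invocation sends one message to the
    // output binding. The binding is resolved (and created if needed) by name.
    @Scheduled(fixedDelay = 5000)
    public void produce() {
        streamBridge.send("orders-out-0", "some payload");
    }
}
```

Because StreamBridge resolves the output binding at send time, no EnableBinding or StreamListener declarations are needed, and the binding can be started and stopped through the framework's lifecycle support.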
QUESTION
I have an application with multiple suppliers, and I'm trying to configure fixed-delay for one specific supplier in Spring Cloud Stream. Example:
application.yaml
...ANSWER
Answered 2021-Jun-07 at 14:22 Indeed, this is not supported by the functional programming model at the moment, as it violates a core concept of the microservices for which spring-cloud-stream was designed, where one of the main principles is that you do one thing and do it well without affecting others. In your case (especially with sources) you have multiple microservices grouped together in a single JVM process, so one service affects another.
That said, feel free to raise an issue so we can consider adding this feature.
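For reference, the poller settings that do exist in this generation of spring-cloud-stream are application-wide rather than per-binding; a minimal sketch (the values are illustrative):

```yaml
spring:
  cloud:
    stream:
      poller:
        fixed-delay: 5000        # applies to every Supplier in the application
        max-messages-per-poll: 1
```

Because these properties sit under spring.cloud.stream.poller rather than under an individual binding, every supplier in the JVM shares the same schedule, which is exactly the limitation the answer describes.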
QUESTION
I am new to the Spring framework, and I have some confusion regarding Spring Boot and Spring Cloud.
I used https://start.spring.io/ to initialize a Spring Boot application, so I think I am using the Spring Boot framework. However, I would like to use some Spring Cloud dependencies such as spring-cloud-stream-binder-kafka.
Question 1: If I add this dependency to my Spring Boot application, can I still stay with the Spring Boot framework, or do I have to change to the Spring Cloud framework?
Question 2: Is there any difference when deploying a Spring Boot application versus a Spring Cloud application? Or are they just different frameworks that can be deployed the same way?
Thank you so much!
...ANSWER
Answered 2021-Jun-02 at 21:04 You can use Spring Boot and Spring Cloud packages together. Spring Boot is just a preconfigured Spring Framework with some extra functionality, and it uses library versions that are compatible with each other. Spring Cloud is also part of the Spring ecosystem and contains libraries mostly used in cloud applications. In the background, these packages pull all the necessary Spring (and other) libraries into your project as transitive dependencies. So you can keep the generated pom/gradle file and simply add other dependencies: Spring Boot remains your core, and Spring Cloud adds extras on top.
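To make this concrete, Spring Cloud dependencies are normally added to an existing Boot project through the spring-cloud-dependencies BOM. The fragment below is a sketch: the release-train version shown is illustrative, and the right train depends on your Boot version.

```xml
<!-- Import the Spring Cloud BOM so individual Cloud artifacts
     need no explicit version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>2020.0.2</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
  </dependency>
</dependencies>
```

Deployment is unchanged: the result is still a single executable Boot jar, whichever Cloud libraries it happens to include.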
QUESTION
I have two applications - the first produces messages using spring-cloud-stream/function with the AWS Kinesis Binder, the second is an application that builds off of spring integration to consume messages. Communicating between the two is not a problem - I can send a message from "stream" and handle it easily in "integration".
When I want to send a custom header, there is an issue. The header arrives at the consumer as an embedded header using the "new" format (it has an 0xff at the beginning, etc.) - see AbstractMessageChannelBinder#serializeAndEmbedHeadersIfApplicable in spring-cloud-stream.
However, the KinesisMessageDrivenChannelAdapter (spring-integration-aws) does not seem to understand the "new" embedded header form. It uses EmbeddedJsonHeadersMessageMapper (see #toMessage), which cannot "decode" the message. It throws a com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'ÿ': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false') because of the additional information included in the embedded header (0xff and so on).
I need to send the header across the wire (the header is used to route on the other side), so it's not an option to "turn off" headers on the producer. I don't see a way to use the "old" embedded headers.
I'd like to use spring-cloud-stream/function on the producer side - it's awesome. I wish I could redo the consumer, but...
I could write my own embedded header mapper that understands the new format (use EmbeddedHeaderUtils), and wire it into the KinesisMessageDrivenChannelAdapter.
Given the close relationship between spring-cloud-stream and spring-integration, I must be doing something wrong. Does Spring Integration have an OutboundMessageMapper that understands the new embedded form?
Or is there a way to coerce spring cloud stream to use a different embedding strategy?
I could use Spring Integration on the producer side. (sad face).
Any thoughts? Thanks in advance.
...ANSWER
Answered 2021-May-28 at 17:10
"understands the new format"
It's not a "new" format; it's a format that Spring Cloud Stream created, originally for Kafka, which only added header support in 0.11.
"I could write my own embedded header mapper that understands the new format (use EmbeddedHeaderUtils), and wire it into the KinesisMessageDrivenChannelAdapter."
I suggest you do that, and consider contributing it to the core Spring Integration project alongside the EmbeddedJsonHeadersMessageMapper, so that it can be used with all technologies that don't support headers natively.
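To show what such a mapper has to deal with, here is a plain-Java sketch of a codec for an embedded-header layout of this general shape: a 0xff magic byte, a header count, then length-prefixed names and int-length values, followed by the payload. Treat the exact byte layout as an assumption and check it against EmbeddedHeaderUtils in the spring-cloud-stream source before relying on it; the class and method names here are illustrative.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class EmbeddedHeaderCodec {

    private static final byte MAGIC = (byte) 0xff;

    // Prefix the payload with a header block: magic byte, header count,
    // then for each header a length-prefixed name and an int-length value.
    public static byte[] embed(Map<String, String> headers, byte[] payload) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(MAGIC);
        out.write(headers.size());
        for (Map.Entry<String, String> e : headers.entrySet()) {
            byte[] name = e.getKey().getBytes(StandardCharsets.UTF_8);
            byte[] value = e.getValue().getBytes(StandardCharsets.UTF_8);
            out.write(name.length);
            out.writeBytes(name);
            out.writeBytes(ByteBuffer.allocate(4).putInt(value.length).array());
            out.writeBytes(value);
        }
        out.writeBytes(payload);
        return out.toByteArray();
    }

    // Walk the header block back out of a raw record; the remaining bytes
    // in the buffer after the loop are the original payload.
    public static Map<String, String> extractHeaders(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw);
        if (buf.get() != MAGIC) {
            throw new IllegalArgumentException("no embedded headers");
        }
        int count = buf.get() & 0xff;
        Map<String, String> headers = new LinkedHashMap<>();
        for (int i = 0; i < count; i++) {
            byte[] name = new byte[buf.get() & 0xff];
            buf.get(name);
            byte[] value = new byte[buf.getInt()];
            buf.get(value);
            headers.put(new String(name, StandardCharsets.UTF_8),
                        new String(value, StandardCharsets.UTF_8));
        }
        return headers;
    }
}
```

A custom InboundMessageMapper for the Kinesis adapter would essentially run the extract step above (via EmbeddedHeaderUtils rather than hand-rolled parsing) before handing the payload to the rest of the flow.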
QUESTION
I am developing a simple Spring Boot application using Spring Cloud Stream and Kafka.
I get this error when I add the Kafka consumer bean.
Spring boot version: 2.5.0
Spring cloud version: 2020.0.3-SNAPSHOT
Kafka client version: 2.7.1
Error log:
An attempt was made to call a method that does not exist. The attempt was made from the following location:
org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.createConsumerEndpoint(KafkaMessageChannelBinder.java:716)
The following method did not exist:
org.springframework.kafka.listener.ContainerProperties.setAckOnError(Z)V
pom.xml file:
...ANSWER
Answered 2021-May-24 at 13:18Spring Cloud Stream 3.1.x is not currently compatible with Boot 2.5.
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/1079
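Until a compatible release train is available, one workaround is to stay on a Boot line the 3.1.x binder supports. The fragment below is a sketch; the exact versions are illustrative and should be checked against the release-train compatibility table.

```xml
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.4.6</version>
</parent>
<properties>
  <spring-cloud.version>2020.0.2</spring-cloud.version>
</properties>
```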
QUESTION
I am new to Kafka Streams and Spring Cloud Stream, but I have read good things about them in terms of moving the integration-related code into a properties file, so devs can focus mostly on the business-logic side of things.
Here I have my simple application class.
...ANSWER
Answered 2021-Apr-06 at 08:42 The destination: "some-event" should point to a Kafka topic, e.g. destination: "some-event-topic".
Then you have to create an interface for the listener consume-in-0. Using the Spring annotations will make the project load this listener, and it will no longer be null.
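The binding the answer describes would look roughly like the fragment below; the function name "consume" and the topic name are taken from the discussion, but the surrounding structure is a sketch.

```yaml
spring:
  cloud:
    function:
      definition: consume      # the Consumer bean the binder should wire up
    stream:
      bindings:
        consume-in-0:
          destination: some-event-topic   # concrete Kafka topic, not a logical event name
```

The binding name follows the functional-model convention <functionName>-in-<index>, which is why the consumer bean's name must match the prefix of consume-in-0.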
QUESTION
I am trying to create JUnit tests for an application that makes use of Spring Cloud Stream to receive events from RabbitMQ. In other applications I generally import spring-cloud-stream-test-support and spring-rabbit-test, annotate the test class with @SpringBootTest, and we're good to go. However, @SpringBootTest loads the entire application context, which is not ideal here: this application is quite large, and it would require mocking too many beans that are irrelevant to the test. Therefore, I tried limiting the context by specifying the classes I want loaded, as follows: @SpringBootTest(classes = {MessageProcessor.class, Consumer.class}). It seems like this is not enough, as I'm getting a Dispatcher has no subscribers for channel error.
So my question is: what are the minimum classes that need to be included in the context to test a Spring Cloud Stream/RabbitMQ consumer?
...ANSWER
Answered 2021-May-13 at 08:36After a lot of trial and error it seems that the bare minimum for the tests to work is the following:
QUESTION
I am trying to deduplicate records by reading the input topic as a KTable and sinking it to an output topic. But the KTable is still sinking duplicate records to the output topic, and I'm not sure where I am going wrong.
Here is my application.yml
...ANSWER
Answered 2021-May-07 at 19:46 I think the problem you are trying to solve is well solved by a compacted topic. Once you deliver data with the same key to a compacted topic, and compaction is enabled at the broker level (which it is by default), each broker will start a compaction manager thread and a number of compaction threads, which are responsible for performing the compaction tasks. Compaction does nothing but keep the latest value of each key and clean up the older (dirty) entries.
Refer to the Kafka documentation for more details.
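The key broker-side setting is cleanup.policy=compact on the topic; a sketch with the standard Kafka CLI (topic name, server address, and counts are illustrative):

```shell
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic dedup-output \
  --partitions 2 --replication-factor 1 \
  --config cleanup.policy=compact
```

Note that compaction runs asynchronously, so consumers may still briefly see older values for a key before the cleaner threads have processed that segment.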
QUESTION
For example, I configure a topic to have 2 partitions, but in my application with 1 instance I use Flux.parallel(10) to consume the messages, and there is a 1000-message lag on that topic. What will happen?
- Will it poll 10 messages at a time? From 2 partitions, or from 1?
- Or only poll 2 messages, 1 from each partition?
I want to know how it works, so I can configure it correctly for high throughput while preserving consumption order.
BTW, I found this issue, but there is no answer there.
...ANSWER
Answered 2021-May-06 at 14:11 It's better to use multiple receivers instead.
Using parallel can cause problems with out-of-order offset commits.
QUESTION
I'm trying to use one transaction manager (ChainedTransactionManager) for Rabbit and Kafka, chaining a RabbitTransactionManager and a KafkaTransactionManager. We intend to achieve a best-effort 1-phase commit.
To test it, the transactional method throws an exception after the 2 operations (sending a message to a Rabbit exchange and publishing an event in Kafka). When running the test, the logs suggest a rollback is initiated, but the message ends up in Rabbit anyway.
- Notes:
- We're using QPid to simulate in-memory RabbitMQ for testing (version 7.1.12)
- We're using an in-memory Kafka for testing (spring-kafka-test)
- Other relevant frameworks/libraries: spring-cloud-stream
Here's the method where the problem occurs:
...ANSWER
Answered 2021-Apr-29 at 16:44 @EnableBinding is deprecated in favor of the newer functional programming model.
That said, I copied your code/config pretty much as-is (transacted is not a Kafka producer binding property) and it works fine for me (Boot 2.4.5, cloud 2020.0.2)...
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install spring-cloud-stream
Creating a Sample Application by Using Spring Initializr
Importing the Project into Your IDE
Adding a Message Handler, Building, and Running