spring-cloud-bus | Spring Cloud event bus | Microservice library
kandi X-RAY | spring-cloud-bus Summary
Spring Cloud Bus links the nodes of a distributed system with a lightweight message broker. This broker can then be used to broadcast state changes (such as configuration changes) or other management instructions. A key idea is that the bus is like a distributed actuator for a Spring Boot application that is scaled out. However, it can also be used as a communication channel between apps. This project provides starters for either an AMQP broker or Kafka as the transport.
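Beyond broadcasting refresh events, the bus can carry application-defined events between services. Below is a minimal, illustrative sketch of a custom bus event (the class name, its field, and the scanned package are assumptions, and the exact RemoteApplicationEvent constructor signatures vary between Spring Cloud Bus versions):

```java
import org.springframework.cloud.bus.event.RemoteApplicationEvent;
import org.springframework.cloud.bus.jackson.RemoteApplicationEventScan;
import org.springframework.context.annotation.Configuration;

// Hypothetical custom event that one service publishes and others receive over the bus.
public class CacheInvalidationEvent extends RemoteApplicationEvent {

    private String cacheName;

    // No-arg constructor is required for JSON deserialization on the receiving side.
    public CacheInvalidationEvent() {
    }

    public CacheInvalidationEvent(Object source, String originService, String cacheName) {
        super(source, originService);
        this.cacheName = cacheName;
    }

    public String getCacheName() {
        return cacheName;
    }
}

// Tell the bus where to find custom event classes so it can serialize and deserialize them.
@Configuration
@RemoteApplicationEventScan(basePackageClasses = CacheInvalidationEvent.class)
class BusEventConfiguration {
}
```

Publishing such an event through the ApplicationEventPublisher then lets the bus broadcast it to the other nodes.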
Support
Quality
Security
License
Reuse
Top functions reviewed by kandi - BETA
- Checks if the remote application has already been created
- Returns the lowest index of the specified property, or -1 if not found
- Returns a map of key-value pairs from the destination properties
- Post-process the environment
- Add or replace the given property source with the given map
- Creates a hash code
- Create a hash code for this object
- Convert the local tags to local tags
- Determines if this event is for a remote application
- Returns a destination for the given destination
- Returns a hashCode of this object
- Creates a hashCode of this object
- Compares this environment
- Process a remote application event
- Get destination
- Converts the given message into an object
- Compares this object for equality
- Registers bean definitions
- Equivalent to SentApplicationEvent
- Process a remote refresh request
spring-cloud-bus Key Features
spring-cloud-bus Examples and Code Snippets
Community Discussions
Trending Discussions on spring-cloud-bus
QUESTION
I want log4j2-spring.xml to read a property from the application.properties file, but it seems log4j2-spring.xml is unable to read it. I have read https://logging.apache.org/log4j/2.x/manual/lookups.html#SpringLookup to implement this.
I have seen this answer on this site and tried it as well, but it didn't help.
My build.gradle is like this:
...ANSWER
Answered 2021-May-22 at 18:17
The question is: do you need Log4j configuration over Spring Cloud?
Problem
If not, I would say the org.apache.logging.log4j:log4j-spring-cloud-config-client:2.14.1 dependency is overkill. It brings Spring Cloud dependencies that you won't need. In a way I still haven't figured out, it also interferes with spring-boot-starter-log4j2, causing the logging context to be initialized multiple times, and as a side effect you get this exception at startup because the property from Spring is not resolved.
Solution
Note that you don't need the whole log4j-spring-cloud-config-client, nor even spring-boot-starter-log4j2.
The following dependencies will set up your logging context:
- log4j
- log4j-slf4j-impl
- log4j-spring-boot
I have put an example program in a GitHub repository. Variable names are slightly changed, and there are comments explaining what each dependency is for.
Excerpt of Gradle build file
QUESTION
My question is how to manage multiple instances with Spring Cloud Stream Kafka.
Let me explain: in a Spring Cloud Stream microservices context (Eureka, config server, Kafka), I want to have 2 instances of the same microservice. When I change a configuration in my Git repository, the config server (via a webhook) pushes a message into the Kafka topic.
If I use the same group id in my microservice, only one of the two instances will receive the notification and reload its Spring context. But I need to refresh all instances...
So, to do that, I have configured a unique group id: ${spring.application.name}.bus.${hostname}
It works well, but the problem is that each time I start a new instance of my service, it creates a new consumer group in Kafka. Now I have a lot of unused consumer groups.
(Screenshot: consumer groups for a microservice - https://i.stack.imgur.com/6jIzx.png)
Here is the Spring Cloud Stream configuration of my service:
...ANSWER
Answered 2020-Jul-17 at 13:46
If you don't provide a group, the bus will use a random group anyway. The broker will eventually remove the unused groups according to its offsets.retention.minutes property (currently 7 days by default).
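In configuration terms, the fix is to stop assigning a per-instance group at all. A hedged sketch of the relevant properties (the springCloudBusInput binding name applies to the Spring Cloud Stream based bus of that era and is an assumption here):

```properties
# Keep the bus destination (this is the default name anyway).
spring.cloud.bus.destination=springCloudBus

# Do NOT pin the bus input binding to a consumer group. With no group configured,
# the Kafka binder joins an anonymous group (anonymous.<uuid>) per instance, so
# every instance receives the refresh event, and the broker expires the idle
# anonymous groups after offsets.retention.minutes.
#
# spring.cloud.stream.bindings.springCloudBusInput.group=${spring.application.name}.bus.${hostname}
```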
QUESTION
When the application stops, the Kinesis binder tries to unlock DynamoDB and throws an unlock-failed exception.
I followed this original post with a similar issue and updated the spring-integration-aws version to v2.3.1.RELEASE, but I am still seeing the same error on application shutdown.
...ANSWER
Answered 2020-May-26 at 19:38
If you confirm that you don't create your own DynamoDbLockRegistry bean, then I see what needs to be corrected.
Nevertheless, this should not be a critical error at the end of the application lifecycle: you have stopped it anyway, and any locks left unreleased because of that error will be released the next time leaseDuration expires.
UPDATE
The fix is here: https://github.com/spring-projects/spring-integration-aws/commit/bc4a1c7c5975555fb5237642b8b97d8633f0f6cb
QUESTION
I am trying to migrate a simple example with Spring Cloud Config Server and RabbitMQ as Spring Cloud Bus (based on Spring Boot 1.5.22.RELEASE and Spring Cloud Brixton.SR7) to Spring Boot 2.2.6.RELEASE and Spring Cloud Hoxton.SR3. The example consists of a Config Server, a Config Client, GitLab as SCM, and RabbitMQ (3.8 - Erlang 22.1.5). The code compiles and starts up, and the push webhook is triggered and can be seen in both the server's and the client's logs.
The problem is that the property updated in Git is not updated in the client. With Spring Boot 1.5.22.RELEASE and Spring Cloud Brixton.SR7 it works reliably. But if I run curl -X POST http://localhost:8889/actuator/bus-refresh manually, the property is updated.
What could be the problem, or which property have I forgotten to configure?
Here is my configuration/code:
GitLab (started as a Docker container) push webhook: http://user:password@localhost:8889/monitor
RabbitMQ (started as a Docker container): no particular configuration
pom.xml
Root module of Config server and client:
ANSWER
Answered 2020-Apr-07 at 12:42
Setting the property spring.cloud.bus.id in bootstrap.properties fixed the problem:
https://github.com/spring-cloud/spring-cloud-bus/issues/124#issuecomment-423960553
Not really pretty, but it works.
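The linked comment amounts to giving every instance an explicit bus id. A minimal sketch of the workaround in bootstrap.properties (the id format below is only an example; it just has to identify each instance uniquely):

```properties
# bootstrap.properties of the config client
# Example id format (an assumption): service name, active profile, and port.
spring.cloud.bus.id=${spring.application.name}:${spring.profiles.active:default}:${server.port}
```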
QUESTION
I'm trying to use Spring Cloud Bus with Kafka in my microservices application, and indeed I can use it, but only data controlled by the Spring Cloud Config Server gets refreshed!
I'm using a JDBC back end with my config server. To simulate my need, I change some value in a properties file in one of my services, in addition to the properties table, and call the /monitor endpoint again (mentioned in section 4.3 here: https://www.baeldung.com/spring-cloud-bus); as a result, only data coming from the properties table is changed.
This is the yml file for my Config server
...ANSWER
Answered 2019-Jun-13 at 20:19
After some hours of investigation, I found that there is a recommended way. Spring Cloud Bus can send a refresh event, and Spring Boot can listen for that event; this is what I built my solution on. I used this snippet:
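The asker's snippet is not captured above; a minimal, illustrative sketch of such a listener (the class name and the reload helper are assumptions) could look like this:

```java
import org.springframework.cloud.bus.event.RefreshRemoteApplicationEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// Illustrative sketch: react to the bus refresh event and reload whatever
// configuration is NOT managed by the Config Server (e.g. a local properties file).
@Component
public class LocalPropertiesReloader implements ApplicationListener<RefreshRemoteApplicationEvent> {

    @Override
    public void onApplicationEvent(RefreshRemoteApplicationEvent event) {
        reloadLocalProperties();
    }

    private void reloadLocalProperties() {
        // Hypothetical helper: re-read the service's own properties file here.
    }
}
```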
QUESTION
In the documentation of Spring Cloud Bus (https://github.com/spring-cloud/spring-cloud-bus) it is mentioned that:
The Bus starters cover Rabbit and Kafka, because those are the two most common implementations, but Spring Cloud Stream is quite flexible, and the binder works with spring-cloud-bus.
In my project we cannot maintain another infrastructure for Rabbit or Kafka, hence I want to use spring-cloud-stream-binder-aws-kinesis (https://github.com/spring-cloud/spring-cloud-stream-binder-aws-kinesis) with spring-cloud-bus. Can anyone guide me on how I can do that?
ANSWER
Answered 2019-Apr-29 at 16:19
QUESTION
I am learning Spring Cloud with Consul as the service discovery implementation, following a tutorial on the internet. I am using Eclipse and Maven, and the pom file was generated by Spring Initializr. The official Spring documentation says "@EnableDiscoveryClient" is no longer needed, so I commented it out, but keeping the annotation makes no difference.
my code is below:
...ANSWER
Answered 2018-Oct-21 at 11:04
I tried to launch the application with your pom.xml. It fails with the following message:
QUESTION
We experienced the following scenario:
- We have a Kafka cluster composed of 3 nodes; each topic created has 3 partitions
- A message is sent through MessageChannel.send(), producing a record for, let's say, partition 1
- The broker acting as the partition leader for that partition fails
By default, MessageChannel.send() returns true and doesn't throw any exception, even if, eventually, the KafkaProducer can't successfully send the message. About 30 seconds after this call, we observe the following message in the logs: Expiring 10 record(s) for helloworld-topic-1 due to 30008 ms has passed since batch creation plus linger time
In our case, this is not acceptable, as we have to be sure that all messages are eventually delivered to Kafka by the time the call to MessageChannel.send() returns.
We turned on spring.cloud.stream.kafka.bindings..producer.sync to true, which does exactly what the documentation describes. It blocks the caller until the producer acknowledges the success or failure of the delivery (MessageTimeoutException, InterruptedException, ExecutionException), all of this controlled by KafkaProducerMessageHandler. It seems to be the best approach for us, as the performance impact is negligible in our case.
But do we need to take care of the retry ourselves if an exception is thrown? (In our client code, with @Retryable for instance.)
Here is a simple project to experiment : https://github.com/phdezann/spring-cloud-bus-kafka-helloworld
...ANSWER
Answered 2017-Dec-22 at 17:02
If the send() is performed on the @StreamListener thread and the exception is thrown back to the binder, the binder's retry configuration will perform retries.
However, since you are doing the send on an HTTP thread, you will need to do your own retry (call send() within the scope of a RetryTemplate) or make the controller method @Retryable.
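A minimal sketch of the RetryTemplate option (the channel wiring, retry count, and back-off values below are assumptions, not the answer's exact code):

```java
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class ReliableSender {

    private final MessageChannel output;          // the binding's output channel (assumed injected)
    private final RetryTemplate retryTemplate = new RetryTemplate();

    public ReliableSender(MessageChannel output) {
        this.output = output;

        // Illustrative settings: up to 3 attempts, 2 seconds between attempts.
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
        FixedBackOffPolicy backOff = new FixedBackOffPolicy();
        backOff.setBackOffPeriod(2000L);
        retryTemplate.setBackOffPolicy(backOff);
    }

    public void send(String payload) {
        Message<String> message = MessageBuilder.withPayload(payload).build();
        // With producer.sync=true, send() throws if the broker does not acknowledge,
        // so the RetryTemplate re-invokes it until it succeeds or attempts are exhausted.
        retryTemplate.execute(context -> output.send(message));
    }
}
```

The alternative the answer mentions is to annotate the controller method with @Retryable and let Spring Retry re-invoke it on failure.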
QUESTION
I am using the Spring Cloud Config Server and I am able to detect changes from the Git repository and pass them to the config clients.
I have implemented it in two ways:
- After making changes (commit and push) in the Git repository, I make a curl request: curl -X POST http://server:port/bus/refresh, and it works fine. For this I am using RabbitMQ as the Spring Cloud Bus.
- After making changes (commit and push) in the Git repository, I make a curl request: curl -X POST http://server:port/refresh (with no /bus in the URL), and it works fine. I am NOT using Spring Cloud Bus here.
Reference: https://spring.io/guides/gs/centralized-configuration/
Both work fine, so is there any advantage to using Spring Cloud Bus, or will there be any issue in a production environment when going without it? There is extra effort needed to set up a highly available RabbitMQ cluster for Spring Cloud Bus in production.
Thanks, David
...ANSWER
Answered 2017-Apr-11 at 18:31
/refresh will only refresh the config client to which the request was made; it only refreshes locally. Using /bus/refresh will refresh all clients connected to the bus. In other words, it will refresh all bus clients, or a subset of them if the destination parameter is set (for example, /bus/refresh?destination=customers:**).
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install spring-cloud-bus
You can use spring-cloud-bus like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the spring-cloud-bus component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
Support