rabbitmq | RabbitMQ Dockerfile for trusted automated Docker builds | Continuous Deployment library
kandi X-RAY | rabbitmq Summary
This repository contains the Dockerfile of RabbitMQ for Docker's automated build, published to the public Docker Hub Registry.
rabbitmq Examples and Code Snippets
@RabbitListener(queues = QUEUE_MESSAGES_DLQ)
public void processFailedMessagesRetryWithParkingLot(Message failedMessage) {
    Integer retriesCnt = (Integer) failedMessage.getMessageProperties()
            .getHeaders().get(HEADER_X_RETRIES_COUNT);
    // remainder of the method body was truncated in the original snippet
}
@RabbitListener(queues = QUEUE_MESSAGES_DLQ)
public void processFailedMessagesRetryHeaders(Message failedMessage) {
    Integer retriesCnt = (Integer) failedMessage.getMessageProperties()
            .getHeaders().get(HEADER_X_RETRIES_COUNT);
    if (retriesCnt == null) retriesCnt = 1; // assumed completion of the truncated "if ("
    // remainder of the method body was truncated in the original snippet
}
@Bean
RabbitTemplate amqpTemplate(ConnectionFactory factory) {
    RabbitTemplate template = new RabbitTemplate(factory);
    template.setRoutingKey("remoting.binding");
    template.setExchange("remoting.exchange");
    return template;
}
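A minimal usage sketch for the bean above (the publish method and its payload are hypothetical): because the template has a default exchange and routing key set, convertAndSend can be called with just the payload.

// Hypothetical caller; Spring injects the amqpTemplate bean defined above.
@Autowired
RabbitTemplate amqpTemplate;

void publish(Object payload) {
    // Uses the template's defaults: "remoting.exchange" / "remoting.binding".
    amqpTemplate.convertAndSend(payload);
}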
Community Discussions
Trending Discussions on rabbitmq
QUESTION
I am using a managed RabbitMQ cluster through AWS Amazon MQ. If the consumers finish their work quickly, everything works fine. However, in a few scenarios some consumers take more than 30 minutes to complete their processing. In those scenarios, RabbitMQ removes the consumer and makes the same messages visible in the queue again. Because of this, another consumer picks them up and starts processing, and this happens in a loop: the same transaction gets executed again and I lose the consumer as well. I am not setting any AcknowledgeMode, so I believe it defaults to AUTO, which has a 30-minute limit. Is there any way to increase the delivery acknowledgement timeout for AUTO mode? Or please let me know if anyone has any other solution for this.
...ANSWER
Answered 2021-Sep-02 at 13:29

This is the response from AWS support:

From my understanding, I see that your workload is currently affected by the consumer_timeout parameter that was introduced in v3.8.15. We have had a number of reach-outs due to this; unfortunately, the service team has confirmed that while they can manually edit the rabbitmq.conf, this will be overwritten on the next reboot or failover and thus is not a recommended solution. It would also mean that all security patching on brokers where a manual change is applied would have to be paused. Currently, the service does not support custom user configurations for RabbitMQ from this configuration file, but the team has confirmed they are looking to address this in the future; however, they are not able to give an ETA on when this will be available.

From the RabbitMQ GitHub, it seems this was added for quorum queues in v3.8.15 (https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.15), but it appears to apply to all consumers (https://github.com/rabbitmq/rabbitmq-server/pull/2990).

Unfortunately, RabbitMQ itself does not support downgrades (https://www.rabbitmq.com/upgrade.html). Thus the recommended workaround, and the safest action per the service team as of now, is to create a new broker on an older version (3.8.11) and set auto minor version upgrade to false so that it won't be upgraded. Then export the configuration from the existing RabbitMQ instance, import it into the new instance, and use that instance going forward.
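For a self-managed broker (not Amazon MQ, where the file is overwritten as described above), the timeout can be raised in rabbitmq.conf. A minimal sketch, assuming RabbitMQ 3.8.15+ where consumer_timeout is given in milliseconds:

# rabbitmq.conf -- raise the delivery acknowledgement timeout to 2 hours
# (value in milliseconds; only possible where you control the config file)
consumer_timeout = 7200000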
QUESTION
I have a web application that publishes messages to a topic, and several Windows services that subscribe to those topics, some with multiple instances. If the services are running when the messages are published, everything works correctly; but if they are not, the messages are retained on the queue(s) subscribing to that topic and aren't read when the services start back up.
The desired behavior:
- When a message is published to the topic string MyTopic, it is read from MyTopicQueue only once. I use some wildcard topics, so each message is sent to multiple queues; but multiple instances of a service subscribe to the same topic string, and each message should be read by only one of those instances.
- If the subscribers to the MyTopic topic aren't online when the message is published, the messages are retained on MyTopicQueue.
- When the Windows services subscribing to a particular topic come back online, each retained message is read from MyTopicQueue by only a single subscriber.
I've found some [typically for IBM] spotty documentation about the MQSUBRQ and MQSO_PUBLICATIONS_ON_REQUEST options but I'm not sure how I should set them. Can someone please help figure out what I need to do to get my desired behavior? [Other than switching back to RabbitMQ which I can't do though I'd prefer it.]
My options:
...ANSWER
Answered 2022-Mar-03 at 23:11

If you want the messages to accumulate while you are not connected, you need to make the subscription durable by adding MQC.MQSO_DURABLE. In order to be able to resume an existing subscription, add MQC.MQSO_RESUME in addition to MQC.MQSO_CREATE (a sketch combining these options follows the list below).
Be careful with terminology: what you are describing as retained messages is a durable subscription.
Retained publications are something else, where MQ can retain the most recently published message on each topic; this message will be retrieved by new subscribers by default unless they use MQSO_NEW_PUBLICATIONS_ONLY to skip receiving the retained publication. MQSO_PUBLICATIONS_ON_REQUEST allows a subscriber to only receive retained publications on request; it will not receive non-retained publications.
If you want multiple consumers to work together on a single subscription you have two options:
- Look at shared subscribers in XMS.NET; see the CLONESUPP property.
- Create a one-time durable subscription to a queue on the topics you want consumed, then have your consumers consume directly from the queue, not the topic.
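A minimal Java sketch combining the options named in this answer (the MQC constant names come from the answer itself; the surrounding subscribe call is omitted since its exact signature depends on the IBM MQ client library in use):

// Combine the subscription options with bitwise OR before subscribing.
int subscriptionOptions =
        MQC.MQSO_CREATE    // create the subscription if it does not exist yet
      | MQC.MQSO_RESUME    // otherwise resume the existing subscription
      | MQC.MQSO_DURABLE;  // messages accumulate while the subscriber is offline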
QUESTION
I'm working with Symfony 4.4 and Symfony Messenger
Messenger configuration includes a transport and routing:
...ANSWER
Answered 2022-Mar-02 at 15:37

For some reason this seems to be an error that happens with some frequency, so I'd rather post an answer than a comment. You are supposed to add message classes to the routing configuration, not handler classes. Your configuration should be as follows, if you want the message to be managed asynchronously:
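The answer's configuration block did not survive extraction; a sketch of such a routing entry in messenger.yaml, with hypothetical names (App\Message\SomeMessage stands in for the actual message class):

# config/packages/messenger.yaml -- names are placeholders for illustration
framework:
    messenger:
        transports:
            async: '%env(MESSENGER_TRANSPORT_DSN)%'
        routing:
            # Route the message class, not its handler
            'App\Message\SomeMessage': async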
QUESTION
We have a microservice which consumes (subscribes to) messages from 50+ RabbitMQ queues.
Messages are produced for these queues in two places:
- When the application encounters short-delay business logic (like sending emails or notifying another service), it sends the message directly to the exchange (which in turn routes it to the queue).
- When we encounter long/delayed-execution business logic, we insert a row into a messages table, which holds messages that have to be executed after some time. A cron worker runs every 10 minutes, scans the messages table, and pushes the due messages to RabbitMQ.
Let's say the messages table has 10,000 messages which will be queued in the next cron run:
- 9:00 AM - The cron worker runs and queues 10,000 messages to the RabbitMQ queue.
- Subscribers listening to the queue start consuming the messages, but due to some issue in the system or a delayed third-party response, each message takes 1 minute to complete.
- 9:10 AM - The cron worker runs again 10 minutes later, sees that 9,000+ messages are still not completed and that their time has passed, so it pushes 9,000+ duplicate messages to the queue.
Note: the subscribers which consume the messages are idempotent, so there is no issue with duplicate processing.
Design idea I had in mind (but not the best logic): I can have 4 statuses (RequiresQueuing, Queued, Completed, Failed).
- Whenever a message is inserted, I can set the status to RequiresQueuing.
- When the cron worker picks up and successfully pushes the messages to the queue, I can set it to Queued.
- When a subscriber completes a message, it marks the status as Completed / Failed.
There is an issue with the above logic: let's say RabbitMQ somehow goes down, or in some cases we have to purge the queue for maintenance. Now the messages marked as Queued are in the wrong state, because they have to be identified again and their status changed manually.
Let's say I have a RabbitMQ queue named events. This events queue has 5 subscribers; each subscriber gets 1 message from the queue and posts the event via REST API to another microservice (event-aggregator). Each API call usually takes 50ms.
Use Case:
- Due to high load, the number of events produced becomes 3x.
- The microservice (event-aggregator) which accepts the events also becomes slow; its response time increases from 50ms to 1 minute.
- The cron worker follows the design mentioned above and queues the messages every run. Now the queue is growing too large, but I also cannot increase the number of subscribers because the dependent microservice (event-aggregator) is lagging.
Now the question: if we keep sending messages to the events queue, it just bloats the queue.
https://www.rabbitmq.com/memory.html - While reading this page, I found out that RabbitMQ won't even accept connections if it reaches the high watermark fraction (default is 40%). Of course this can be changed, but that requires manual intervention.
So if the queue length increases it affects RabbitMQ's memory, which is why I thought of throttling at the producer level.
Questions:
- How can I throttle my cron worker to skip a particular run, or somehow inspect the queue and see that it is already heavily loaded so it doesn't push more messages?
- How can I handle the use case described above? Is there a design which solves my problem? Has anyone faced the same issue?
Thanks in advance.
Answer: check the accepted answer's comments for the throttling approach using queueCount.
...ANSWER
Answered 2022-Feb-21 at 04:45

You can combine QoS (quality of service) and manual ACK to get around this problem. Your exact scenario is documented in https://www.rabbitmq.com/tutorials/tutorial-two-python.html. This example is for Python; you can refer to the other language examples as well.
Let's say you have 1 publisher and 5 worker scripts, all reading from the same queue, and each worker script takes 1 minute to process a message. You can set QoS at the channel level. If you set it to 1, each worker script will be allocated only 1 message at a time, so we are processing 5 messages in parallel. No new messages will be delivered until one of the 5 worker scripts does a MANUAL ACK.
If you want to increase the throughput of message processing, you can increase the worker node count.
The idea of updating tables based on message status is not a good option: avoiding DB polling is the main reason systems use queues, and it would cause a scaling issue. At some point you would have to update the tables and you would bottleneck on locking and isolation levels.
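A minimal sketch of this QoS-plus-manual-ack pattern with the RabbitMQ Java client (the queue name, host, and the slow process call are assumptions for illustration):

import com.rabbitmq.client.*;

public class Worker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(1); // at most 1 unacknowledged message per worker

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            process(delivery.getBody()); // hypothetical slow work (~1 minute)
            // Manual ack: only now will the broker deliver the next message.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("events", false /* manual ack */, onDeliver, consumerTag -> {});
    }

    static void process(byte[] body) { /* hypothetical business logic */ }
}

For the producer-side throttle mentioned in the question's edit, channel.queueDeclarePassive("events").getMessageCount() returns the current queue depth, so the cron worker can skip a run while the backlog is still large.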
QUESTION
I am very new to Elixir. I have built an app which runs locally, and it works fine. But now I need to build a container for it with Docker.
However, every attempt to do a release seems to try to connect to RabbitMQ (which is running locally as a Docker container).
I don't want, and can't have, it try to connect to RabbitMQ each time this container is built, as it will be built by a CI/CD pipeline and will never have access to any RabbitMQ instance. I have set it up with an ENV, but this needs to be set within my YAML when deploying to my k8s cluster.
So this is the Dockerfile:
...ANSWER
Answered 2022-Jan-15 at 00:02

I have tried to create a project such as yours.
QUESTION
I am trying to get my deployment to only deploy replicas to nodes that aren't running rabbitmq (this is working) and that don't already have a replica of the pod I am deploying (not working).
I can't seem to get this to work. For example, if I have 3 nodes (2 with the label app.kubernetes.io/part-of=rabbitmq), then both replicas get deployed to the remaining node. It is as if the deployment isn't taking into account the pods it creates itself when determining anti-affinity. My desired state is for it to deploy only 1 pod there; the other one should not get scheduled.
...ANSWER
Answered 2022-Jan-01 at 12:50

I think that's because of the matchExpressions part of your manifest, which requires pods to have both labels, app.kubernetes.io/part-of: rabbitmq and app: testscraper, to satisfy the anti-affinity rule. Based on the deployment YAML you have provided, those pods will have only app: testscraper but NOT app.kubernetes.io/part-of: rabbitmq, hence both replicas get scheduled on the same node. From the documentation: "The requirements are ANDed."
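A sketch of the fix: give each label its own anti-affinity term so the requirements are evaluated separately rather than ANDed inside one matchExpressions block (label keys and values are taken from the question; the topologyKey is assumed to be the node hostname):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - topologyKey: kubernetes.io/hostname     # avoid nodes running rabbitmq
      labelSelector:
        matchExpressions:
        - key: app.kubernetes.io/part-of
          operator: In
          values: ["rabbitmq"]
    - topologyKey: kubernetes.io/hostname     # avoid nodes already running this pod
      labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values: ["testscraper"]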
QUESTION
Celery disconnects from RabbitMQ each time a task is passed to RabbitMQ; however, the task does eventually succeed.
My questions are:
- How can I solve this issue?
- What improvements can you suggest for my celery/rabbitmq configuration?
Celery version: 5.1.2; RabbitMQ version: 3.9.0; Erlang version: 24.0.4
RabbitMQ error (sorry for the length of the log):
...ANSWER
Answered 2021-Aug-02 at 07:25

Same problem here. I tried different settings, but with no solution.
Workaround: downgrade RabbitMQ to 3.8. After downgrading, there were no connection errors anymore. So I think it must have something to do with different behavior in v3.9.
QUESTION
I need to copy data from one source (in parallel) to another in batches.
I did this:
...ANSWER
Answered 2021-Dec-04 at 19:50

You need to do your heavy work in individual Publisher-s, which will be materialized in flatMap() in parallel. Like this:
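A sketch of that pattern with Project Reactor (the answer's original snippet is not preserved here; copyBatch and the sample batches are hypothetical, and flatMap's second argument caps how many batch Publishers run at once):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import java.util.List;

public class BatchCopy {
    public static void main(String[] args) {
        List<List<Integer>> batches = List.of(List.of(1, 2), List.of(3, 4), List.of(5, 6));
        Flux.fromIterable(batches)
            // Each batch becomes its own Publisher on a worker thread;
            // flatMap subscribes to up to 4 of them at the same time.
            .flatMap(batch -> Mono.fromCallable(() -> copyBatch(batch))
                                  .subscribeOn(Schedulers.boundedElastic()), 4)
            .blockLast(); // demo only: block until all batches are copied
    }

    // Hypothetical heavy work: copy one batch to the target and return it.
    static List<Integer> copyBatch(List<Integer> batch) {
        return batch;
    }
}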
QUESTION
We have set up our Kafka Connect to be able to read credentials from a file instead of giving them directly in the connector config. This is what the login part of a connector config looks like:
"connection.user": "${file:/kafka/pass.properties:username}",
"connection.password": "${file:/kafka/pass.properties:password}",
We also added these 2 lines to the connect-distributed.properties file:
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
Mind that it works perfectly for JDBC connectors, so there is no problem with the pass.properties file. But for other connectors, such as Couchbase, RabbitMQ, S3, etc., it causes problems. All these connectors work fine when we give the credentials directly, but when we try to make Connect read them from a file, we get errors. What could be the reason? I don't see any JDBC-specific configuration here.
EDIT:
An error about couchbase in connect.log:
...ANSWER
Answered 2021-Dec-03 at 06:17

It looks like the problem was quote marks in the pass.properties file. The interesting thing is that JDBC connectors work whether or not the credentials are typed with quote marks. Maybe the reason is that it is the first line in the file, but that is just a small possibility.
So, do NOT use quote marks in your password files, even if some of the connectors work that way.
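A sketch of the file as FileConfigProvider expects it, using placeholder credentials (plain key=value pairs; the values are used verbatim, so quote marks would become part of the password):

# /kafka/pass.properties -- no quote marks around the values
username=placeholder-user
password=placeholder-secret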
QUESTION
I use SignalR to expose RabbitMQ messages to browsers. This works fine with one app instance, obviously. The question is whether it could work with multiple instances too, without a backplane. I understand that the SignalR client could be disconnected from pod A and connected back to pod B, but what exactly is the issue here? I am fine with losing some messages during reconnection. Is that the only issue? Is reconnection to pod B treated as a regular new connection, so that the client is simply subscribed again as it would be for a normal subscription? Or does the system lack the input parameters it had during the initial subscription, so that it cannot resubscribe without hints?
...ANSWER
Answered 2021-Dec-02 at 02:53

As long as all of your SignalR servers are getting the same data from RabbitMQ, or getting only the data for the clients connected to them, you don't need a backplane.
You will need a backplane if you have one of the following:
- Clients can communicate with one another.
- Only one SignalR server is connected to RabbitMQ, but clients can connect to multiple SignalR servers.
- SignalR servers are connected to different queues or getting different data from the same queue.
I have a similar setup with a database instead of RabbitMQ and need a backplane to either have only one of the SignalR servers access the database (and have data be sent to all clients) or to share the database load between servers (and have data be sent to all clients). This way, the server getting the data can have it sent to a client connected to a different server.
I am using SignalR for ASP.NET and the servers do not know who is subscribed to the other servers. All messages are sent over the backplane and each server determines if they apply to their connected clients. This works well with broadcasts for example or if the same user has multiple clients to make sure they all get the same data regardless of the server.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install rabbitmq
- Install Docker.
- Download the automated build from the public Docker Hub Registry: docker pull dockerfile/rabbitmq (alternatively, you can build an image from the Dockerfile: docker build -t="dockerfile/rabbitmq" github.com/dockerfile/rabbitmq)