RabbitMQ | Hands-on walkthrough of the official RabbitMQ .NET Core tutorials | Continuous Deployment library

 by sheng-jie · C# · Version: Current · License: No License

kandi X-RAY | RabbitMQ Summary

RabbitMQ is a C# library typically used in DevOps, Continuous Deployment, Docker, and RabbitMQ applications. RabbitMQ has no bugs and it has low support. However, RabbitMQ has 7 reported vulnerabilities. You can download it from GitHub.


            Support

              RabbitMQ has a low active ecosystem.
              It has 136 star(s) with 59 fork(s). There are 7 watchers for this library.
              It had no major release in the last 6 months.
              There is 1 open issue and none have been closed. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of RabbitMQ is current.

            Quality

              RabbitMQ has 0 bugs and 0 code smells.

            Security

              RabbitMQ has 7 vulnerability issues reported (0 critical, 2 high, 3 medium, 2 low).
              RabbitMQ code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              RabbitMQ does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            Reuse

              RabbitMQ releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.
            Currently covering the most popular Java, JavaScript and Python libraries.

            RabbitMQ Key Features

            No Key Features are available at this moment for RabbitMQ.

            RabbitMQ Examples and Code Snippets

            Sends the failed messages to the RabbitMQ queue.
            Java · Lines of code: 14 · License: Permissive (MIT License)

            @RabbitListener(queues = QUEUE_MESSAGES_DLQ)
            public void processFailedMessagesRetryWithParkingLot(Message failedMessage) {
                Integer retriesCnt = (Integer) failedMessage.getMessageProperties()
                        .getHeaders().get(HEADER_X_RETRIES_COUNT);
                // ... remainder of the 14-line snippet is truncated in the source ...
            }
            Handles failed messages in the RabbitMQ queue.
            Java · Lines of code: 13 · License: Permissive (MIT License)

            @RabbitListener(queues = QUEUE_MESSAGES_DLQ)
            public void processFailedMessagesRetryHeaders(Message failedMessage) {
                Integer retriesCnt = (Integer) failedMessage.getMessageProperties()
                        .getHeaders().get(HEADER_X_RETRIES_COUNT);
                // ... remainder of the 13-line snippet is truncated in the source ...
            }
            Creates a RabbitMQ template.
            Java · Lines of code: 6 · License: Permissive (MIT License)

            @Bean
            RabbitTemplate amqpTemplate(ConnectionFactory factory) {
                RabbitTemplate template = new RabbitTemplate(factory);
                template.setRoutingKey("remoting.binding");
                template.setExchange("remoting.exchange");
                return template;
            }

            Community Discussions

            QUESTION

            RabbitMQ Delivery Acknowledgement Timeout
            Asked 2022-Mar-30 at 08:04

            I am using a managed RabbitMQ cluster through AWS Amazon MQ. If the consumers finish their work quickly then everything works fine. However, in a few scenarios some consumers take more than 30 minutes to complete their processing. In those scenarios RabbitMQ removes the consumer and makes the same messages visible in the queue again. Because of this, another consumer picks them up and starts processing, and this keeps happening in a loop: the same transaction gets executed again and I lose the consumer as well. I am not setting any AcknowledgeMode, so I believe it is AUTO by default and has a 30-minute limit. Is there any way to increase the delivery acknowledgement timeout for AUTO mode? Or please let me know if anyone has any other solutions for this.

            ...

            ANSWER

            Answered 2021-Sep-02 at 13:29

            This is the response from AWS support.

            From my understanding, I see that your workload is currently affected by the consumer_timeout parameter that was introduced in v3.8.15. We have had a number of reach-outs due to this; unfortunately, the service team has confirmed that while they can manually edit rabbitmq.conf, this will be overwritten on the next reboot or failover and thus is not a recommended solution. It would also mean that all security patching on the brokers where a manual change is applied would have to be paused. Currently, the service does not support custom user configurations for RabbitMQ from this configuration file, but the team has confirmed they are looking to address this in the future; however, they are not able to give an ETA on when this will be available.

            From the RabbitMQ GitHub, it seems this was added for quorum queues in v3.8.15 (https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.15), but it appears to apply to all consumers (https://github.com/rabbitmq/rabbitmq-server/pull/2990).

            Unfortunately, RabbitMQ itself does not support downgrades (https://www.rabbitmq.com/upgrade.html). Thus the recommended workaround and safest action from the service team, as of now, is to create a new broker on an older version (3.8.11) and set auto minor version upgrade to false so that it won't be upgraded. Then export the configuration from the existing RabbitMQ instance, import it into the new instance, and use that instance going forward.
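
            For a self-managed broker (not Amazon MQ, where manual edits are overwritten as described above), the timeout can be raised via the consumer_timeout setting; a minimal sketch, assuming a recent 3.8+/3.9+ release (on some versions this has to go in advanced.config instead of rabbitmq.conf):

            # rabbitmq.conf
            # Delivery acknowledgement timeout in milliseconds (default 1800000 = 30 minutes).
            consumer_timeout = 7200000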

            Source https://stackoverflow.com/questions/68952297

            QUESTION

            How to read retained messages for a Topic
            Asked 2022-Mar-03 at 23:11

            I have a web application that publishes messages to a topic, then several Windows services that subscribe to those topics, some with multiple instances. If the services are running when the messages are published, everything works correctly, but if they are not, the messages are retained on the queue(s) subscribed to that topic but aren't read when the services start back up.

            The desired behavior-

            1. When a message is published to the topic string MyTopic, it is read from MyTopicQueue only once. I use some wildcard topics so each message is sent to multiple queues, but multiple instances of a service subscribe to the same topic string and each message should be read by only one of those instances.

            2. If the subscribers to the MyTopic topic aren't online when the message is published then the messages are retained on MyTopicQueue.

            3. When the Windows services subscribing to a particular topic come back online, each retained message is read from MyTopicQueue by only a single subscriber.

            I've found some [typically for IBM] spotty documentation about the MQSUBRQ and MQSO_PUBLICATIONS_ON_REQUEST options but I'm not sure how I should set them. Can someone please help figure out what I need to do to get my desired behavior? [Other than switching back to RabbitMQ which I can't do though I'd prefer it.]

            My options:

            ...

            ANSWER

            Answered 2022-Mar-03 at 23:11

            If you want the messages to accumulate while you are not connected you need to make the subscription durable by adding MQC.MQSO_DURABLE. In order to be able to resume an existing subscription add MQC.MQSO_RESUME in addition to MQC.MQSO_CREATE.
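
            A minimal sketch of combining those options in C#, assuming the IBM MQ classes for .NET (IBM.WMQ namespace); the exact subscription call these flags are passed to depends on the MQ client API and version in use:

            using IBM.WMQ; // IBM MQ classes for .NET

            class SubscriptionOptions
            {
                // Create the subscription if it does not exist, resume it if it does,
                // and make it durable so messages accumulate while the service is offline.
                public static readonly int DurableResume =
                    MQC.MQSO_CREATE | MQC.MQSO_RESUME | MQC.MQSO_DURABLE;
            }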

            Be careful with terminology: what you are describing as retained messages is a durable subscription.

            Retained publications are something else, where MQ can retain the most recently published message on each topic; this message will be retrieved by new subscribers by default unless they use MQSO_NEW_PUBLICATIONS_ONLY to skip receiving the retained publication.

            MQSO_PUBLICATIONS_ON_REQUEST allows a subscriber to receive retained publications only on request; it will not receive non-retained publications.

            If you want multiple consumers to work together on a single subscription you have two options:

            1. Look at shared subscribers in XMS.NET; see the CLONESUPP property.
            2. Create a one-time durable subscription to a queue on the topics you want consumed, then have your consumers consume directly from that queue rather than from a topic.

            Source https://stackoverflow.com/questions/71339255

            QUESTION

            Message not dispatched async despite configuring the handler route to be async in Symfony Messenger
            Asked 2022-Mar-02 at 15:39

            I'm working with Symfony 4.4 and Symfony Messenger

            Messenger configuration includes a transport and routing:

            ...

            ANSWER

            Answered 2022-Mar-02 at 15:37

            For some reason this seems to be an error that happens with some frequency, so I would rather post an answer than a comment.

            You are supposed to add message classes to the routing configuration, not handler classes.

            If you want that message to be handled asynchronously, your configuration should be:
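
            The answer's config block is not reproduced above; a minimal sketch, assuming a hypothetical App\Message\SendEmailMessage class and an async transport:

            # config/packages/messenger.yaml
            framework:
                messenger:
                    transports:
                        async: '%env(MESSENGER_TRANSPORT_DSN)%'
                    routing:
                        # Route the *message* class, not its handler, to the async transport.
                        'App\Message\SendEmailMessage': async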

            Source https://stackoverflow.com/questions/71324758

            QUESTION

            How to throttle my cron worker from pushing messages to RabbitMQ?
            Asked 2022-Feb-21 at 09:22
            Context:

            We have a microservice which consumes (subscribes to) messages from 50+ RabbitMQ queues.

            Messages for these queues are produced in two places:

            1. When the application encounters business logic with a short execution delay (like sending emails or notifying another service), it sends the message directly to the exchange (which in turn routes it to the queue).

            2. For business logic with a long/delayed execution, we have a messages table with entries for messages that have to be executed after some time.

            A cron worker runs every 10 minutes, scans the messages table, and pushes the messages to RabbitMQ.

            Scenario:

            Let's say the messages table has 10,000 messages which will be queued in the next cron run:

            1. 9:00 AM - The cron worker runs and queues 10,000 messages to the RabbitMQ queue.
            2. Subscribers listening to the queue start consuming the messages, but due to some issue in the system or a third-party response delay, each message takes 1 minute to complete.
            3. 9:10 AM - The cron worker runs again 10 minutes later, sees that 9,000+ messages are still not completed and their time has passed, so it once again pushes 9,000+ duplicate messages to the queue.

            Note: the subscribers which consume the messages are idempotent, so duplicate processing is not itself a problem.

            A design idea I had in mind, though not the best logic:

            I can have 4 statuses (RequiresQueuing, Queued, Completed, Failed):

            1. Whenever a message is inserted, I can set the status to RequiresQueuing.
            2. When the cron worker picks up the messages and pushes them successfully to the queue, I can set it to Queued.
            3. When a subscriber completes a message, it marks the status as Completed / Failed.

            There is an issue with the above logic: let's say RabbitMQ somehow goes down, or in some cases we have to purge the queue for maintenance.

            Now the messages marked as Queued are in the wrong state, because they have to be identified again and their status changed manually.

            Another Example

            Let's say I have a RabbitMQ queue named events.

            This events queue has 5 subscribers; each subscriber gets 1 message from the queue and posts the event via a REST API to another microservice (event-aggregator). Each API call usually takes 50 ms.

            Use Case:

            1. Due to high load, the number of events produced becomes 3x.
            2. The microservice (event-aggregator) that accepts the events also became slow in processing; the response time increased from 50 ms to 1 minute.
            3. The cron worker follows the design mentioned above and keeps queuing messages every run. Now the queue is becoming too large, but I also cannot increase the number of subscribers because the dependent microservice (event-aggregator) is lagging.

            Now the question is: if I keep sending messages to the events queue, it just bloats the queue.

            https://www.rabbitmq.com/memory.html - While reading this page, I found out that RabbitMQ won't even accept connections once it reaches the high watermark fraction (default is 40%). Of course this can be changed, but that requires manual intervention.
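
            For reference, the watermark mentioned above is the documented vm_memory_high_watermark setting in rabbitmq.conf (0.4 being the 40% default):

            # rabbitmq.conf
            vm_memory_high_watermark.relative = 0.4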

            So if the queue length increases it affects RabbitMQ's memory, which is the reason I thought of throttling at the producer level.

            Questions
            1. How can I throttle my cron worker to skip a particular run, or somehow inspect the queue and detect that it is already heavily loaded so it doesn't push the messages?
            2. How can I handle the use cases described above? Is there a design which solves my problem? Has anyone faced the same issue?

            Thanks in advance.

            Answer

            Check the accepted answer's comments for throttling using the queue count.
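
            A minimal sketch of that queue-count check with the RabbitMQ .NET client (queue name and threshold are hypothetical); the cron worker would call this before loading the messages table:

            using RabbitMQ.Client;

            static class CronThrottle
            {
                // Returns true if the queue backlog is small enough for another batch.
                public static bool QueueHasRoom(IModel channel, string queue, uint maxBacklog)
                {
                    // Passive declare does not create the queue; it only returns its current stats.
                    QueueDeclareOk info = channel.QueueDeclarePassive(queue);
                    return info.MessageCount <= maxBacklog;
                }
            }

            // Usage inside the cron worker:
            // if (!CronThrottle.QueueHasRoom(channel, "events", 1000)) return; // skip this run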

            ...

            ANSWER

            Answered 2022-Feb-21 at 04:45

            You can combine QoS (quality of service) and manual ACKs to get around this problem. Your exact scenario is documented in https://www.rabbitmq.com/tutorials/tutorial-two-python.html. That example is for Python; you can refer to the other language examples as well.

            Let's say you have 1 publisher and 5 worker scripts, and they all read from the same queue. Each worker script takes 1 minute to process a message. You can set QoS at the channel level. If you set it to 1, each worker script will be allocated only 1 message at a time, so we are processing 5 messages in parallel. No new message will be delivered to a worker until it does a MANUAL ACK for the one it holds.
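
            A minimal sketch of that prefetch-1 plus manual-ack pattern with the RabbitMQ .NET client (RabbitMQ.Client 6.x; the queue name is hypothetical):

            using System;
            using RabbitMQ.Client;
            using RabbitMQ.Client.Events;

            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            channel.QueueDeclare("task_queue", durable: true, exclusive: false, autoDelete: false);

            // Prefetch 1: the broker sends this worker at most one unacked message at a time.
            channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (_, ea) =>
            {
                // ... long-running work (may take a minute or more) ...
                channel.BasicAck(ea.DeliveryTag, multiple: false); // manual ack when done
            };

            // autoAck: false, so the message is only removed after the explicit BasicAck above.
            channel.BasicConsume("task_queue", autoAck: false, consumer);

            Console.ReadLine(); // keep the worker alive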

            If you want to increase the throughput of message processing, you can increase the worker nodes count.

            Updating tables based on message status is not a good option; avoiding DB polling is one of the main reasons a system uses queues in the first place, and it would cause a scaling issue. At some point you have to update the tables, and you will bottleneck because of locking and isolation levels.

            Source https://stackoverflow.com/questions/71186974

            QUESTION

            Elixir release inside Docker container without Rabbit MQ Connection
            Asked 2022-Jan-15 at 00:02

            I am very new to Elixir. I have built an app which runs locally, and it works fine. But now I need to build a container for it with Docker.

            However, every attempt to do a release seems to try to connect to RabbitMQ (which is running locally as a Docker container).

            I don't want, and can't have, it try to connect to Rabbit each time this container is built, as it will be built by a CI/CD pipeline and will never have access to any Rabbit. I have set it up with an ENV, but this needs to be set within my YAML when deploying to my k8s cluster.

            So this is the Dockerfile:

            ...

            ANSWER

            Answered 2022-Jan-15 at 00:02

            I have tried to create a project such as yours.

            Source https://stackoverflow.com/questions/70712681

            QUESTION

            Using pod Anti Affinity to force only 1 pod per node
            Asked 2022-Jan-01 at 12:50

            I am trying to get my deployment to only place replicas on nodes that aren't running rabbitmq (this is working) and that don't already have the pod I am deploying (not working).

            I can't seem to get this to work. For example, if I have 3 nodes (2 with the label app.kubernetes.io/part-of=rabbitmq), then both replicas get deployed to the remaining node. It is as if the deployment isn't taking the pods it creates into account when determining anti-affinity. My desired state is for it to deploy only 1 pod; the other one should not get scheduled.

            ...

            ANSWER

            Answered 2022-Jan-01 at 12:50

            I think that's because of the matchExpressions part of your manifest, which requires pods to have both labels, app.kubernetes.io/part-of: rabbitmq and app: testscraper, to satisfy the anti-affinity rule.

            Based on the deployment YAML you have provided, these pods will have only app: testscraper but NOT app.kubernetes.io/part-of: rabbitmq, hence both replicas get scheduled on the same node.

            From the documentation: "The requirements are ANDed."
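
            Given that, a hedged sketch of a corrected affinity block (label keys and values taken from the question; topologyKey assumed to be the standard hostname label), with each label in its own anti-affinity term so a node is excluded if it runs a rabbitmq pod or already runs one of this deployment's pods:

            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: app.kubernetes.io/part-of
                          operator: In
                          values: ["rabbitmq"]
                    topologyKey: kubernetes.io/hostname
                  - labelSelector:
                      matchExpressions:
                        - key: app
                          operator: In
                          values: ["testscraper"]
                    topologyKey: kubernetes.io/hostname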

            Source https://stackoverflow.com/questions/70547587

            QUESTION

            RabbitMQ, Celery and Django - connection to broker lost. Trying to re-establish the connection
            Asked 2021-Dec-23 at 15:56

            Celery disconnects from RabbitMQ each time a task is passed to RabbitMQ; however, the task does eventually succeed.

            My questions are:

            1. How can I solve this issue?
            2. What improvements can you suggest for my celery/rabbitmq configuration?

            Celery version: 5.1.2, RabbitMQ version: 3.9.0, Erlang version: 24.0.4

            RabbitMQ error (sorry for the length of the log):

            ...

            ANSWER

            Answered 2021-Aug-02 at 07:25

            Same problem here. Tried different settings but with no solution.

            Workaround: Downgrade RabbitMQ to 3.8. After downgrading there were no connection errors anymore. So, I think it must have something to do with different behavior of v3.9.

            Source https://stackoverflow.com/questions/68602834

            QUESTION

            Project Reactor: buffer with parallel execution
            Asked 2021-Dec-10 at 14:44

            I need to copy data from one source (in parallel) to another in batches.

            I did this:

            ...

            ANSWER

            Answered 2021-Dec-04 at 19:50

            You need to do your heavy work in individual Publishers, which will be materialized in flatMap() in parallel. Like this:

            Source https://stackoverflow.com/questions/70083756

            QUESTION

            Password masking only works for JDBC Connectors
            Asked 2021-Dec-03 at 06:17

            We have set up our Kafka Connect to read credentials from a file instead of giving them directly in the connector config. This is what the login part of a connector config looks like:

            "connection.user": "${file:/kafka/pass.properties:username}",

            "connection.password": "${file:/kafka/pass.properties:password}",

            We also added these 2 lines to "connect-distributed.properties" file:

            config.providers=file

            config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

            Mind that it works perfectly for JDBC connectors, so there is no problem with the pass.properties file. But for other connectors, such as Couchbase, RabbitMQ, S3, etc., it causes problems. All these connectors work fine when we give the credentials directly, but when we make Connect read them from a file it gives errors. What could be the reason? I don't see any JDBC-specific configuration here.

            EDIT:

            An error about couchbase in connect.log:

            ...

            ANSWER

            Answered 2021-Dec-03 at 06:17

            Looks like the problem was quote marks in the pass.properties file. The interesting thing is that JDBC connectors work well whether the credentials are typed with or without quote marks. Maybe the reason is that it is the first line in the file, but that is just a small possibility.

            So, do NOT use quote marks in your password files, even if some of the connectors tolerate them.
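
            A minimal sketch of a pass.properties file that works across connectors (the values are hypothetical), with nothing wrapped in quote marks:

            # /kafka/pass.properties
            username=connect_user
            password=s3cr3t-value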

            Source https://stackoverflow.com/questions/70195687

            QUESTION

            Can I avoid using a SignalR backplane behind a load balancer?
            Asked 2021-Dec-02 at 02:53

            I use SignalR to expose RabbitMQ messages to browsers. This works fine with one app instance, obviously. The question is whether it could work with multiple instances too, without a backplane. I understand that a SignalR client could be disconnected from pod A and connected back to pod B, but what exactly is the issue here? I am fine with losing some messages during reconnection. Is that the only issue? Is reconnection to pod B treated as a regular new connection, so that the client is simply subscribed again as it would be on a normal initial connection? Or does the system no longer have the input parameters it had during the initial subscription, so it cannot resubscribe without hints?

            ...

            ANSWER

            Answered 2021-Dec-02 at 02:53

            As long as all of your SignalR servers are getting the same data from RabbitMQ or getting only the data for the clients connected to them, you don't need a backplane.

            You will need a backplane if you have one of the following:

            • Clients can communicate with one another.
            • Only one SignalR server is connected to RabbitMQ but clients can connect to multiple SignalR servers.
            • SignalR servers are connected to different queues or getting different data from the same queue.

            I have a similar setup with a database instead of RabbitMQ and need a backplane to either have only one of the SignalR servers access the database (and have data be sent to all clients) or to share the database load between servers (and have data be sent to all clients). This way, the server getting the data can have it sent to a client connected to a different server.

            I am using SignalR for ASP.NET and the servers do not know who is subscribed to the other servers. All messages are sent over the backplane and each server determines if they apply to their connected clients. This works well with broadcasts for example or if the same user has multiple clients to make sure they all get the same data regardless of the server.
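
            If a backplane does turn out to be necessary, a minimal sketch for ASP.NET Core SignalR with the Redis backplane package (Microsoft.AspNetCore.SignalR.StackExchangeRedis); the hub class and Redis address are hypothetical, and the classic ASP.NET SignalR used in the answer above has an equivalent Redis scaleout package instead:

            using Microsoft.AspNetCore.Builder;
            using Microsoft.AspNetCore.SignalR;
            using Microsoft.Extensions.DependencyInjection;

            var builder = WebApplication.CreateBuilder(args);

            // Redis backplane: messages published on any instance reach clients on every instance.
            builder.Services
                .AddSignalR()
                .AddStackExchangeRedis("redis-host:6379");

            var app = builder.Build();
            app.MapHub<RabbitMessagesHub>("/rabbit-messages"); // hypothetical hub
            app.Run();

            public class RabbitMessagesHub : Hub { }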

            Source https://stackoverflow.com/questions/70191868

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install RabbitMQ

            You can download it from GitHub.

            Support

            For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

            CLONE
          • HTTPS

            https://github.com/sheng-jie/RabbitMQ.git

          • CLI

            gh repo clone sheng-jie/RabbitMQ

          • SSH

            git@github.com:sheng-jie/RabbitMQ.git
