activemq | Docker image for ActiveMQ | Continuous Deployment library
kandi X-RAY | activemq Summary
Dockerfile to build an ActiveMQ container image.
Top functions reviewed by kandi - BETA
- Sets the activemq accounts
- Set the activemq configuration
- Replaces all occurrences of a regular expression in a file
- Set the jmx access command
- Set the activemq web access
- Remove default account
- Set the activemq credentials
- Set the activemq groups
- Sets the activemq wrapper
- Add a line to a file
- Set activemq users
- Set the activemq log4j configuration
activemq Key Features
activemq Examples and Code Snippets
docker run --name='activemq' -it --rm -P \
webcenter/activemq:latest
docker run --name='activemq' -d \
-e 'ACTIVEMQ_NAME=amqp-srv1' \
-e 'ACTIVEMQ_REMOVE_DEFAULT_ACCOUNT=true' \
-e 'ACTIVEMQ_ADMIN_LOGIN=admin' -e 'ACTIVEMQ_ADMIN_PASSWORD=your_password' \
webcenter/activemq:latest
docker pull webcenter/activemq:5.14.3
docker pull webcenter/activemq:latest
git clone https://github.com/disaster37/activemq.git
cd activemq
docker build --tag="$USER/activemq" .
docker run --name='activemq' -it --rm \
-e 'ACTIVEMQ_MIN_MEMORY=512' \
-e 'ACTIVEMQ_MAX_MEMORY=2048' \
-P \
webcenter/activemq:latest
Community Discussions
Trending Discussions on activemq
QUESTION
I took the sample code from Apache here: https://activemq.apache.org/components/cms/example
(the producer section specifically) and tried to rewrite it so it doesn't create any threads for producing. Instead, in my program's main thread, it creates a producer object and sets up the connection, session, destination, and so on. Then it sends messages using a message producer. This is all done in a singleton so that my program has just one Producer object and goes to it whenever it needs to dump a message to one of my queues. The example code seems to create a producer for every thread, set it up every time just to send a message, and then delete everything, and it does this every time you want to produce something from your program.
My program crashes as soon as I call send on a message producer with any given message. After some digging I found that, inside the send call, it tries to lock a mutex and enter a critical section. I guess this is for threading? I don't use threads at all in my code, so I guess it crashes because of that... Does anyone know a way to bypass this? I don't want to use multiple threads, and I won't need to worry about two threads calling send at the same time or whatever problem the mutex is meant to solve.
...ANSWER
Answered 2021-Jun-08 at 17:07 You don't need to create a thread to run the producer in, but internally the library is going to use a couple of threads, as that is necessary for meeting the API requirements. Also, just because you don't use multiple threads doesn't mean others won't, so the mutex is an internal requirement.
You are free to modify the example to only create a producer inside the main thread of the application; the example uses two threads because it is acting as both a producer and a consumer.
One likely cause of the error you are receiving is that you did not initialize the ActiveMQ-CPP library (by calling activemq::library::ActiveMQCPP::initializeLibrary() before creating any connection factory).
QUESTION
I have a queue which is being consumed by a Spring Boot application. Since my Spring Boot application waits for a response from a REST API, it is not able to process the incoming messages quickly, and as a result the number of pending messages on my queue keeps increasing.
I have done some R&D and came up with the solution mentioned below. Kindly help me by reviewing the solution so that I can know whether I am on the right track.
My current ActiveMQ configuration:
...ANSWER
Answered 2021-Jun-08 at 16:36 Generally speaking, more consumers (especially concurrent consumers) will process messages more quickly than fewer consumers, so this looks good from that perspective.
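For illustration, here is a minimal sketch of raising the number of concurrent listener consumers, assuming Spring Boot with spring-jms; the bean wiring and the 5-15 concurrency range are hypothetical values, not part of the original answer.

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

// Hypothetical Spring Boot configuration that raises the number of concurrent JMS consumers.
@Configuration
public class JmsConcurrencyConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Start with 5 consumers and scale up to 15 when the queue backs up (illustrative values).
        factory.setConcurrency("5-15");
        return factory;
    }
}

Listeners created through this factory (for example via @JmsListener(containerFactory = "jmsListenerContainerFactory")) will then drain the queue with several consumers in parallel.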
QUESTION
I would like to remove messages that are scheduled to be delivered to a specific queue, but I'm finding the process to be unnecessarily burdensome.
Here I am sending a blank message to a queue with a delay:
...ANSWER
Answered 2021-Jun-08 at 04:00 In order to remove specific messages you need to know the ID, which you can get via a browse of the scheduled messages. The only other option available is to use the start and stop time options in the remove operations to remove all messages inside a range.
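As a rough sketch of that browse-then-remove flow, assuming an ActiveMQ 5.x "classic" broker reachable at tcp://localhost:61616 (the URL and error handling here are illustrative, not taken from the question):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

// Browse the scheduler's management destination and remove one scheduled message by its ID.
public class RemoveScheduledMessage {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Destination management = session.createTopic(ScheduledMessage.AMQ_SCHEDULER_MANAGEMENT_DESTINATION);
        Destination browseDest = session.createTemporaryQueue();
        MessageProducer producer = session.createProducer(management);
        MessageConsumer browser = session.createConsumer(browseDest);

        // Ask the scheduler to send a copy of every scheduled message to our temporary queue.
        Message browseRequest = session.createMessage();
        browseRequest.setStringProperty(ScheduledMessage.AMQ_SCHEDULER_ACTION,
                ScheduledMessage.AMQ_SCHEDULER_ACTION_BROWSE);
        browseRequest.setJMSReplyTo(browseDest);
        producer.send(browseRequest);

        Message scheduled = browser.receive(5000);
        if (scheduled != null) {
            // Remove that one scheduled message by the ID the scheduler assigned to it.
            Message remove = session.createMessage();
            remove.setStringProperty(ScheduledMessage.AMQ_SCHEDULER_ACTION,
                    ScheduledMessage.AMQ_SCHEDULER_ACTION_REMOVE);
            remove.setStringProperty(ScheduledMessage.AMQ_SCHEDULED_ID,
                    scheduled.getStringProperty(ScheduledMessage.AMQ_SCHEDULED_ID));
            producer.send(remove);
        }
        connection.close();
    }
}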
QUESTION
Spring Integration - Producer Queue capacity limitations
We are using remote partitioning with MessageChannelPartitionHandler to send partition messages to a queue (ActiveMQ) for workers to pick up and process. The job has huge data to process; many partition messages are published to the queue, and the aggregator of responses from the replyChannel fails with a timeout because not all messages can be processed in the given time. We also tried to limit the messages published to the queue by using a queue capacity, which resulted in a server crash (with a heap dump generated) due to the memory cost of holding all these partition messages in internal memory.
We wanted to control the creation of the StepExecution split itself, so that the memory issue doesn't occur. As an example, around 4k partition messages are being published to the queue and the whole job takes around 3 hours.
Can we control the publishing of messages to the QueueChannel?
...ANSWER
Answered 2021-Jun-08 at 11:10 The job has huge data to process; many partition messages are published to the queue, and the aggregator of responses from the replyChannel fails with a timeout because not all messages can be processed in the given time.
You need to increase your timeout or add more workers. The Javadoc of MessageChannelPartitionHandler is clear about that.
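As a hedged illustration only (not the Javadoc text), this is roughly what giving the aggregator a longer reply timeout could look like; the step name, grid size, and 4-hour timeout are hypothetical values:

import org.springframework.batch.integration.partition.MessageChannelPartitionHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.core.MessagingTemplate;
import org.springframework.messaging.PollableChannel;

// Hypothetical wiring that waits longer for worker replies instead of timing out.
@Configuration
public class PartitionHandlerConfig {

    @Bean
    public PollableChannel replies() {
        return new QueueChannel();
    }

    @Bean
    public MessageChannelPartitionHandler partitionHandler(PollableChannel replies) {
        MessagingTemplate template = new MessagingTemplate();
        // Wait up to 4 hours for replies, longer than the roughly 3-hour job described above.
        template.setReceiveTimeout(4 * 60 * 60 * 1000L);

        MessageChannelPartitionHandler handler = new MessageChannelPartitionHandler();
        handler.setMessagingOperations(template);   // request channel wiring omitted in this sketch
        handler.setReplyChannel(replies);
        handler.setStepName("workerStep");          // hypothetical worker step name
        handler.setGridSize(4000);                  // roughly the 4k partitions mentioned in the question
        return handler;
    }
}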
QUESTION
JMS messages are available in ActiveMQ to be picked up by the worker, and the worker configuration is as above. We wanted to control how many messages the workers can process, but observed that the number of consumers is always three times the configured value: from the above configuration, 75 consumers are created and 75 task executions (workerPollerTaskExecutor) are running.
Basically, 3 sets of concurrent consumers are being created with the same name (3 threads of senExtractWorkerInGateway.container-1), and the same behaviour is observed with workerPollerTaskExecutor as well. Can someone help us understand why this is always multiplied by 3?
...ANSWER
Answered 2021-Jun-07 at 05:44 The application context was loading 3 times (due to a code issue on our end), causing this behaviour. Hope this will be helpful for others. Thank you.
QUESTION
I am trying to perform a unit test where I need my mock object to perform an action AFTER a sequence of EXPECT_CALLs, or as an action on one of them while allowing the mocked call to return first.
Here is my non working unit test:
...ANSWER
Answered 2021-Jun-05 at 19:21 A socket typically behaves asynchronously (i.e., signals are emitted at some indeterminate time after calling methods), but you are setting up the mock object such that it behaves synchronously (signals are emitted immediately as a result of calling the method). You should be attempting to simulate asynchronous behavior.
Typically, you would achieve this behavior by calling the signal manually (and not as part of an invoke clause):
QUESTION
I've been running Apache ActiveMQ Artemis 2.17.0 inside a VM for a month now and just noticed that after around 90 permanently connected MQTT clients, the Artemis broker stops accepting new connections. I need Artemis to support at least 200 MQTT clients.
What could be the reason for that? How can I remove this "limit"? Could VM resources, like low memory, be causing this?
After restarting the Artemis service, all connections are dropped and I'm able to connect again.
I was receiving this message in logs:
...ANSWER
Answered 2021-Jun-05 at 14:53 ActiveMQ Artemis has no default connection limit. I just wrote a quick test based on this which uses the Paho 1.2.5 MQTT client. It spun up 500 concurrent connections using both normal TCP and WebSockets. The test finished in less than 20 seconds with no errors. I'm just running this test on my laptop.
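The test was roughly of this shape; this is a hedged reconstruction using the Paho Java client, with the broker URL, client count, and client IDs as assumed values rather than the original test code:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

// Open many concurrent MQTT connections to see whether the broker imposes a ceiling.
public class MqttConnectionTest {
    public static void main(String[] args) throws Exception {
        String brokerUrl = "tcp://localhost:1883"; // assumed Artemis MQTT acceptor address
        MqttClient[] clients = new MqttClient[500];
        for (int i = 0; i < clients.length; i++) {
            clients[i] = new MqttClient(brokerUrl, "client-" + i, new MemoryPersistence());
            MqttConnectOptions options = new MqttConnectOptions();
            options.setCleanSession(true);
            clients[i].connect(options);
        }
        System.out.println("Connected " + clients.length + " clients");
        for (MqttClient client : clients) {
            client.disconnect();
        }
    }
}

If a test like this fails well below the target of 200 clients, the limit is more likely in the environment (file descriptors, memory, network) than in the broker itself.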
I noticed that your journal-buffer-timeout is 700000, which seems quite high and means you have a very low write speed of 1.43 writes per millisecond (the timeout is in nanoseconds, so 1,000,000,000 / 700,000 ≈ 1.43 writes per millisecond, i.e. a slow disk). The journal-buffer-timeout calculated on my laptop, for example, is 4000, which translates into a write speed of 250 writes per millisecond and is significantly faster than yours. My laptop is nothing special, but it does have an SSD. That said, SSDs are pretty common. If this low write speed is indicative of the overall performance of your VM, it may simply be too weak to handle the load you want. To be clear, this value isn't related directly to MQTT connections; it's just something I noticed while reviewing your configuration that may be indirect evidence of your issue.
The journal-buffer-timeout value is calculated and configured automatically when the instance is created. You can re-calculate this value later and configure it manually using the bin/artemis perf-journal command.
Ultimately, your issue looks environmental to me. I recommend you inspect your VM and network. TCP dumps may be useful to see perhaps how/why the connection is being reset. Thread dumps from the server during the time of the trouble would also be worth inspecting.
QUESTION
I have a question on which I am stuck and I am not quite sure how to resolve it.
In my work project I have an ActiveMQ queue and I want to send some metrics to Prometheus which will help me to create some alerts in Grafana. I know that for ActiveMQ Artemis I can use this plugin, but I don't understand 100% how to configure it.
My application is deployed on a Kubernetes cluster and the ActiveMQ broker is there too. So I have created an ActiveMQPrometheusMetricsPlugin class which implements org.apache.activemq.artemis.core.server.metrics.ActiveMQMetricsPlugin. Here is where I get confused: should I now just deploy my application and the metrics would be gathered by Prometheus, or do I need to do more configuration?
We usually do not build the application in a local environment; we use a pipeline which builds and deploys the app to various environments (dev, test, prod). Should I do a configuration similar to the GitHub plugin project, deploy it, and after that find those jars on Kubernetes and move them to the correct location? Also, dev-ops told me that we are using a default configuration; I don't know whether we have a broker.xml file.
ANSWER
Answered 2021-Jun-03 at 16:09 There are a couple of important points to understand before getting started:
- When using the Artemis Prometheus Metrics Plugin neither the broker nor the applications "send" metrics to Prometheus. Prometheus itself must retrieve or "scrape" metrics from the broker. This is why the plugin comes with a servlet. The servlet exposes an HTTP endpoint that Prometheus can use to scrape metrics.
- The Artemis Prometheus Metrics Plugin is part of the broker infrastructure. It is not to be deployed as part of an application. The plugin's jar and war files are deployed on the broker and configured in broker.xml and bootstrap.xml respectively.
The Artemis Prometheus Metrics Plugin provides integration with Prometheus using two modules:
- artemis-prometheus-metrics-plugin: This provides the actual implementation of org.apache.activemq.artemis.core.server.metrics.ActiveMQMetricsPlugin and packages it with the Micrometer and Prometheus dependencies in an "uber" jar.
- artemis-prometheus-metrics-plugin-servlet: This provides a war file containing a simple servlet which can be deployed to the broker's embedded web server and which Prometheus can then use to scrape metrics.
Once you clone the Artemis Prometheus Metrics Plugin repository simply run mvn install to build these two modules. The output will be in their respective target directories.
After building the modules follow these steps to deploy and configure the Artemis Prometheus Metrics Plugin. If you have some kind of dev-ops group which manages and configures your broker then they would follow these steps.
- Copy artemis-prometheus-metrics-plugin/target/artemis-prometheus-metrics-plugin-.jar to the broker instance's lib directory.
- Add this to the instance's etc/broker.xml:
QUESTION
In Camel,
...ANSWER
Answered 2021-Jun-04 at 18:09 The doc says the JMS MessageID is available as a header, so you should be able to return it from the route.
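A minimal sketch of such a route, assuming the camel-jms component and hypothetical endpoint names:

import org.apache.camel.builder.RouteBuilder;

// Hypothetical route: after sending to a JMS destination, the JMSMessageID header set by the
// component is copied into the body so the caller gets the broker-assigned MessageID back.
public class MessageIdRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:send")                             // hypothetical entry point
            .to("jms:queue:outbound")                   // hypothetical destination queue
            .setBody(simple("${header.JMSMessageID}")); // return the MessageID as the reply body
    }
}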
QUESTION
I'm trying to work out how to fix this ActiveMQ Artemis error.
It seems the occasional message is too big for SimpleString; it isn't sent and goes to the DLQ.
ANSWER
Answered 2021-Jun-03 at 17:19 The 2.6.3.redhat-00015 version corresponds to AMQ 7.2.3 which is quite old at this point. The current AMQ release is 7.8.1. I strongly recommend you upgrade as it's likely you're hitting a bug that's already been fixed.
You may be able to work around the issue by increasing the minimum large message size (e.g. using minLargeMessageSize on core client URLs or amqpMinLargeMessageSize on your AMQP acceptor). For what it's worth, the stack-trace indicates that the core JMS client (i.e. not AMQP) is in use when the exception is thrown.
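As a hedged sketch of the first workaround, assuming the Artemis core JMS client and an illustrative 1 MiB threshold on a local broker URL:

import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Hypothetical client-side setup: raise the minimum large-message size via the connection URL
// so messages under 1 MiB are no longer treated as large messages.
public class LargeMessageClientConfig {
    public static void main(String[] args) {
        ConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:61616?minLargeMessageSize=1048576");
        // create connections, sessions, and producers from cf as usual
    }
}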
Lastly, it's worth noting that the default minimum large message size is 100 KB not 2 GB as explained in the documentation.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Install activemq
You can launch the image using the docker command line. The admin account is "admin" and the password is "admin". All settings are the default ActiveMQ settings. Or you can use fig, assuming you have fig installed.
For test purposes:
For production purposes: