kandi X-RAY | dataSync Summary
Top functions reviewed by kandi - BETA
- Main method for testing
- Parses a tns file
- Search for a single connection detail if it exists
- Parse connection details
- Start insert record
- Flush buffer
- Build an INSERT statement
- Dump table
- Replace all occurrences of rchars with rchars
- Main implementation
- Start write
- Split plugin
- Init data exchange
- Start reader
- Report the schedule
- Generate log
- Main method
- Start insert
- Close all warnings
- Initialize parameters
- Convert Any object to Object array
- Split table
- Get insert statement
- Escape characters
- Start reading the file
- Initializes database connection
dataSync Key Features
dataSync Examples and Code Snippets
Trending Discussions on dataSync
I'm running Apache ActiveMQ Artemis 2.17.0 inside a VM for a month now, and I just noticed that after around 90 continuously connected MQTT clients the Artemis broker stops accepting new connections. I need Artemis to support at least 200 MQTT clients.
What could be the reason for that? How can I remove this "limit"? Could the VM resources like low memory be causing this?
After restarting the Artemis service, all connections are dropped, and I'm able to connect again.
I was receiving this message in logs:...
ANSWER: Answered 2021-Jun-05 at 14:53
ActiveMQ Artemis has no default connection limit. I just wrote a quick test based on this which uses the Paho 1.2.5 MQTT client. It spun up 500 concurrent connections using both normal TCP and WebSockets. The test finished in less than 20 seconds with no errors. I'm just running this test on my laptop.
I noticed that your journal-buffer-timeout is 700000, which seems quite high and means you have a very low write speed of 1.43 writes per millisecond (i.e. a slow disk). The journal-buffer-timeout calculated on my laptop, for example, is 4000, which translates into a write speed of 250 writes per millisecond, significantly faster than yours. My laptop is nothing special, but it does have an SSD. That said, SSDs are pretty common. If this low write speed is indicative of the overall performance of your VM it may simply be too weak to handle the load you want. To be clear, this value isn't related directly to MQTT connections. It's just something I noticed while reviewing your configuration that may be indirect evidence of your issue.
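The write-speed figures above follow directly from the timeout, which is expressed in nanoseconds; a small illustrative calculation (the helper function is hypothetical, not part of Artemis):

```javascript
// journal-buffer-timeout is in nanoseconds; the implied write speed
// is one flush per timeout, i.e. (1 ms in ns) / timeout writes per ms.
function writesPerMs(journalBufferTimeoutNs) {
  return 1e6 / journalBufferTimeoutNs;
}

console.log(writesPerMs(700000)); // ≈ 1.43 writes/ms (the slow disk above)
console.log(writesPerMs(4000));   // 250 writes/ms (the SSD laptop)
```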
The journal-buffer-timeout value is calculated and configured automatically when the instance is created. You can re-calculate this value later and configure it manually using the bin/artemis perf-journal command.
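For reference, journal-buffer-timeout lives in the core section of broker.xml; a value like the one from the laptop example above would look like this sketch (surrounding elements abbreviated, value illustrative):

```xml
<core xmlns="urn:activemq:core">
  <!-- flush timeout for the journal buffer, in nanoseconds;
       normally calculated for you by "bin/artemis perf-journal" -->
  <journal-buffer-timeout>4000</journal-buffer-timeout>
</core>
```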
Ultimately, your issue looks environmental to me. I recommend you inspect your VM and network. TCP dumps may be useful to see perhaps how/why the connection is being reset. Thread dumps from the server during the time of the trouble would also be worth inspecting.
I have a cluster of Artemis in Kubernetes with 3 groups of master/slave:...
ANSWER: Answered 2021-Jun-02 at 01:56
I've taken your simplified configuration with just 2 nodes using a non-wildcard queue with a redistribution-delay of 0, and I reproduced the behavior you're seeing on my local machine (i.e. without Kubernetes). I believe I see why the behavior is such, but in order to understand the current behavior you first must understand how redistribution works in the first place.
In a cluster, every time a consumer is created, the node on which the consumer is created notifies every other node in the cluster about the consumer. If other nodes in the cluster have messages in their corresponding queue but don't have any consumers then those other nodes redistribute their messages to the node with the consumer (assuming message-load-balancing is ON_DEMAND and the redistribution-delay is >= 0).
In your case however, the node with the messages is actually down when the consumer is created on the other node so it never actually receives the notification about the consumer. Therefore, once that node restarts it doesn't know about the other consumer and does not redistribute its messages.
I see you've opened ARTEMIS-3321 to enhance the broker to deal with this situation. However, that will take time to develop and release (assuming the change is approved). My recommendation to you in the meantime would be to configure client reconnection, which is discussed in the documentation, e.g.:
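Client reconnection is typically configured via retry parameters on the client's connection URL; a sketch with illustrative values (host and port are placeholders):

```
tcp://broker-host:61616?reconnectAttempts=-1&retryInterval=1000&retryIntervalMultiplier=2&maxRetryInterval=30000
```

Here reconnectAttempts=-1 means retry forever, with the interval between attempts doubling from 1 second up to a 30-second cap.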
I am setting up a cluster of Artemis in Kubernetes with 3 groups of master/slave:...
ANSWER: Answered 2021-May-11 at 23:49
First, it's important to note that there's no feature to make a client reconnect to the broker from which it disconnected after the client crashes/restarts. Generally speaking the client shouldn't really care about what broker it connects to; that's one of the main goals of horizontal scalability.
It's also worth noting that if the number of messages on the brokers and the number of connected clients is low enough that this condition arises frequently, that almost certainly means you have too many brokers in your cluster.
That said, I believe the reason your client isn't getting the messages it expects is that you're using the default redistribution-delay (i.e. -1), which means messages will not be redistributed to other nodes in the cluster. If you want to enable redistribution (which it seems like you do) then you should set it to >= 0, e.g.:
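An address-setting enabling immediate redistribution might look like the following sketch (the match value is an assumption; scope it to your addresses):

```xml
<address-settings>
  <address-setting match="#">
    <!-- 0 = redistribute as soon as the last local consumer is gone -->
    <redistribution-delay>0</redistribution-delay>
  </address-setting>
</address-settings>
```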
I have a RecyclerView for a list of languages which can be selected using checkboxes. It uses LiveData, so that when the user selects or deselects a language from another client the change is automatically reflected in all of them.
Every time I click on the checkbox, the record for the language is selected or deselected in the Room database, and on the server too. Other devices on which the user is logged in are automatically updated as well, which is why I am using LiveData. I am running into the RecyclerView bug that pushes the list back to the top every time it is updated, regardless of the current scroll position.
So, say that I have two phones with the language screen open, I scroll them both all the way down to the bottom, and select or deselect the last language. The RecyclerView on both devices scrolls back to the top, but I would like it to stay still.
I have seen plenty of workarounds to fix this behaviour, but I am relatively new to Android and Kotlin, and the suggested patches are not explained well enough for me to be able to implement them. What do I need to do, and where should I do it?
The RecyclerView is in a Fragment:...
ANSWER: Answered 2021-Feb-07 at 11:30
The solution was given by Teo in this more specific question:
Here is his solution:
// STEP 1 - make a function to notify the variable
I was faced with broker (ActiveMQ Artemis version 2.17.0) behavior that is unusual for me.
With a large number of messages that are sent quickly by the producer, some of the messages reach the queue only after the route has finished executing and the producer has stopped. This is especially evident when the hard drive is a regular HDD, not an SSD.
As an example, I use the following Apache Camel 2.25.3 route to send messages...
ANSWER: Answered 2021-Apr-14 at 12:10
This kind of behavior is expected when sending non-durable messages because non-durable messages are sent in a non-blocking manner. It's not clear whether or not you're sending non-durable messages, but you've also set blockOnDurableSend=false on your client's URL, so even durable messages will be sent non-blocking.
From the broker's perspective the messages haven't actually arrived so there's no way to see the number of messages that have been sent but are not yet in the queue.
If you want to ensure that when the Camel route terminates all messages have been written to the queue then you should send durable messages and set blockOnDurableSend=true (which is the default value).
Keep in mind that blocking will reduce performance (potentially substantially) based on the speed of your hard disk. This is because the client will have to wait for a response from the broker for every message it sends, and for every message the broker receives it will have to persist that message to disk and wait for the hard disk to sync before it sends a response back to the client. Therefore, if your hard disk can't sync quickly the client will have to wait a relatively long time.
One of the configuration parameters that influences this behavior is journal-buffer-timeout. This value is calculated automatically and set when the broker instance is first created. You'll see evidence of this logged, e.g.:
I am building a 3D virtual auction house in Unity for my semester's project where multiple users can join the host server, interact with the auction items in the game, and increase their bids. The updated bid values should be synced across all the users. I am able to sync players' movements across the network by adding my "character controller" to the "player prefab" of the network manager. Users are also able to interact with the other game objects to increase an item's bid locally. I am facing problems in syncing the updated bids of each auction item across the network for every client.
I am adding each auction item to the "registered spawnable prefabs" list of the network manager.
This is the error I am getting...
ANSWER: Answered 2021-Mar-25 at 08:12
As said, by default only the Host/Server has the network authority over spawned objects, unless you spawn them with a certain client having the authority over the object (using Networking.NetworkServer.SpawnWithClientAuthority). But this also makes no sense if multiple clients shall be able to interact with the object, as in your case.
SyncVar variables will have their values synchronized from the server to clients
.. and only in this direction!
Now there is the option to gain authority over an object via Networking.NetworkIdentity.AssignClientAuthority, so yes, you could send a [Command] to the Host/Server and tell it to assign the authority to you so that you can call a [Command] on that object.
However, this has some huge flaws:
- As this might change very often and quickly, you always have to make sure first that you currently have the authority, and the Host/Server also has to make sure the authority is assignable to you
- You don't know when exactly the Host/Server is done assigning the authority, so you would have to send a ClientRpc back to the client so he knows that he can now send a Command back to the host ... you see where this is going: why not simply tell the Host/Server what shall happen with the very first [Command] ;)
Instead you have to (should/ I would ^^) put your logic into your player script (or at least a script attached to the player object you have authority over) and rather do something like
ANSWER: Answered 2021-Mar-25 at 02:29
When you download the tensor using dataSync() it just keeps the values. If you wanted the object with a description of each of the results without the tensors, you would just have to console.log(result). When you expand the result from your log in the browser's console it should return something like this:
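Since dataSync() returns a flat typed array of raw numbers with no labels, a common next step is to pick out the top score yourself. A minimal sketch, using a hand-made Float32Array as a stand-in for the real result:

```javascript
// Stand-in for what result.dataSync() returns: a flat Float32Array.
const scores = Float32Array.from([0.1, 0.7, 0.2]);

// Find the index of the highest score (e.g. the predicted class).
let best = 0;
for (let i = 1; i < scores.length; i++) {
  if (scores[i] > scores[best]) best = i;
}
console.log(best); // index 1 holds the top score here
```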
I'm using the simplest model available to make this testcase in Node.js:...
ANSWER: Answered 2021-Mar-21 at 10:55
Using SGD in regression can be tricky, as the outputs don't have an upper bound and that can lead to NaN values in the loss, in other words exploding gradients, etc.
Changing the optimizer to RMSProp works most of the time.
I applied a transformation during the training phase in PyTorch, then converted my model to run in TensorFlow.js. It is working, but I got wrong predictions as I didn't apply the same transformation....
ANSWER: Answered 2021-Mar-03 at 09:34
Even though I am not too well acquainted with the pytorch documentation, a quick look at it shows that the first parameter of Normalize is the mean of the dataset and the second parameter is the standard deviation. To normalize using these two parameters with tensorflow.js, the following can be used:
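The arithmetic behind Normalize is just (value - mean) / std per channel; a plain-JS sketch of that computation (the function name and the interleaved flat-array layout are illustrative assumptions):

```javascript
// Normalize interleaved channel values: (v - mean[c]) / std[c],
// where c cycles through the channels.
function normalize(values, mean, std) {
  return values.map((v, i) => (v - mean[i % mean.length]) / std[i % std.length]);
}

// Example with typical ImageNet-style per-channel constants:
console.log(normalize([0.5, 0.5, 0.5], [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]));
```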
We're using ActiveMQ Artemis 2.8.1 with MQTT.
We integrated ActiveMQ Artemis with Wildfly. Consider that our server connects using client id SAM on 50 topics. On checking Artemis with JConsole we can see that each client subscription results in a queue whose name follows the pattern <client-id>_<topic>. In my case, for the topic com/api/output the subscription queue name will be SAM_com/api/output. Likewise there will be 50 other subscription queues using the same naming pattern.
Based on my research, each queue is used to store messages sent to that topic for each client's subscription. For example, when the same topic (e.g. 1/2/3) is subscribed to by 3 different clients (e.g. A, B and C) then there will be 3 subscription queues (i.e. A_1/2/3, B_1/2/3 and C_1/2/3). Therefore when a message is sent to the topic 1/2/3, Artemis will put that message in each of those subscription queues.
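The naming pattern described above can be sketched as a tiny helper (a hypothetical function for illustration; the broker builds these names itself):

```javascript
// Subscription queues follow the "<client-id>_<topic>" pattern
// described in the question.
function subscriptionQueueName(clientId, topic) {
  return `${clientId}_${topic}`;
}

console.log(subscriptionQueueName("SAM", "com/api/output")); // "SAM_com/api/output"
```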
Now the same client wants to connect to the broker with a different client id (e.g. TOM). My client initiates a connection drop and Artemis also recognizes the connection drop, then my client connects to the broker with the new client id (TOM) for the same 50 topics. Now there will be 100 subscription queues in total, with each topic having 2 (i.e. one for each client id, SAM and TOM). I find that the SAM queues are maintained because while initiating the connection we set the clean session flag to false. So all those subscription queues are durable, hence the queues are maintained even when the client is disconnected.
When a message is sent to a topic it will be put in two queues (SAM's and TOM's). Our client is connected to the broker with client id TOM, so the TOM queue has a consumer, which results in all the TOM queue messages being consumed by the client. However, the SAM queue accumulates messages and eats up all the JVM's heap space until the server dies.
The purpose of durable queues is to maintain messages even when the client disconnects, but is there any way to tell ActiveMQ Artemis to purge a client's queues and messages if the client doesn't show up for a certain time period, or to purge the messages from the client's subscription queue when the client drops the connection?
ANSWER: Answered 2021-Jan-11 at 15:46
It's really up to the subscriber to delete its subscription when it is done with it. In other words, the MQTT client should unsubscribe the existing subscriber before creating a new one. The broker has no way to know whether or not the subscriber plans to reconnect later to fetch the messages from its subscription.
If you really want the broker to delete the subscription queue you can use the following address-settings:
- auto-delete-queues: This must be true.
- auto-delete-queues-delay: This is the delay (in milliseconds) from the time the last consumer disconnects until the broker deletes the subscription queue.
- auto-delete-queues-message-count: This must be -1 in order to ignore any messages in the subscription queue.
To be clear, I would recommend against this configuration as the broker may inadvertently delete legitimate messages. For example, if a subscriber mistakenly disconnects (e.g. due to a network failure, hardware failure, JVM crash, etc.) for longer than the configured auto-delete-queues-delay then the broker will delete its subscription queue and all the subscriber's messages will effectively be lost.
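Put together, the address-settings described in the answer might look like this sketch (match and delay values are illustrative; as the caveat notes, a short delay risks deleting legitimate messages):

```xml
<address-settings>
  <address-setting match="#">
    <auto-delete-queues>true</auto-delete-queues>
    <!-- delete the subscription queue 5 minutes after the last consumer leaves -->
    <auto-delete-queues-delay>300000</auto-delete-queues-delay>
    <!-- -1 = delete regardless of how many messages remain in the queue -->
    <auto-delete-queues-message-count>-1</auto-delete-queues-message-count>
  </address-setting>
</address-settings>
```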
No vulnerabilities reported
You can use dataSync like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the dataSync component as you would with any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.