Movim - Decentralized social platform | Messaging library
Trending Discussions on Messaging
QUESTION
I'm facing an issue with Firebase phone authentication: when the user receives the Firebase OTP, the sender name shows as "CloudOTP". I would like the message to arrive showing a phone number instead.
Question: How do I change the sender name?
Any help would be greatly appreciated.
Thanks in advance.
ANSWER
Answered 2021-Nov-17 at 15:07
The name or number that is shown with the text message that contains the OTP is determined by the provider and your phone. As far as I know there is no way for you to control that.
QUESTION
I am trying to formulate a query to select a conversation based on the members passed in; there can be many participants.
Users can have multiple conversations with the same users in them, and they can rename conversations.
Here is my messages_members table:
CREATE TABLE `messages_members` (
`relation_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`date` datetime NOT NULL,
`seen` smallint(6) NOT NULL,
PRIMARY KEY (`relation_id`,`user_id`),
KEY `seen` (`seen`,`date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
And the messages relation table:
CREATE TABLE `messages_relation` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`sender_id` int(11) NOT NULL,
`date` datetime NOT NULL,
`name` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=49 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Messages table for reference:
CREATE TABLE `messages` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`relation_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`message` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
`date` datetime NOT NULL,
`updated` smallint(6) NOT NULL,
PRIMARY KEY (`id`),
KEY `messages` (`relation_id`,`user_id`),
KEY `date` (`date`)
) ENGINE=InnoDB AUTO_INCREMENT=135 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
My latest solution is not working correctly: it selects the wrong conversation based on the member count passed in.
The only thing I can think of at the moment is to take the limit off the query, loop through each result, and do some checks to see if the members match what was asked for.
I wonder if there is a more elegant solution using a single query, though.
Here is my current query (EDIT: updated the query, as it wasn't doing the count properly):
SELECT relation_id AS relation, GROUP_CONCAT(user_id) AS members, COUNT(user_id) AS selected_members,
(SELECT COUNT(user_id) FROM messages_members WHERE relation_id = relation) AS total_members
FROM messages_members
WHERE user_id IN( ". $inQuery ." )
GROUP BY relation_id
HAVING total_members = ? AND selected_members = ?
I am passing in each user_id from an array, and also the total count of the array to match total_members.
For more clarity, to try to explain what I'm trying to do: there could be multiple conversations with similar participants:
- Conversation A : user1, user2, user3
- Conversation B : user1, user2
- Conversation C : user1, user2, user5
I am trying to select just conversation A based on the three users passed in: user1, user2, user3.
Some sample data from the messages_members table (relation_id, user_id, date, seen):

relation_id  user_id  date                     seen
2    1   2020-11-18 19:16:54.000  0
5    4   2020-11-18 19:53:34.000  0
5    6   2020-11-18 19:53:34.000  0
14   1   2020-11-19 00:02:44.000  0
14   3   2020-11-19 00:02:44.000  0
19   1   2020-11-19 00:16:32.000  0
19   3   2020-11-19 00:16:32.000  0
20   3   2020-11-19 00:17:37.000  0
20   4   2020-11-19 00:17:37.000  0
21   1   2020-11-19 00:18:09.000  0
21   3   2020-11-19 00:18:09.000  0
21   6   2020-11-19 00:18:09.000  0
22   1   2020-11-19 00:18:45.000  0
22   4   2020-11-19 00:18:45.000  0
22   6   2020-11-19 00:18:45.000  0
23   1   2020-11-19 00:19:06.000  0
23   3   2020-11-19 00:19:06.000  0
23   4   2020-11-19 00:19:06.000  0
24   3   2020-11-19 00:19:42.000  0
24   4   2020-11-19 00:19:42.000  0
24   6   2020-11-19 00:19:42.000  0
25   3   2020-11-19 01:41:44.000  0
25   5   2020-11-19 01:41:44.000  0
43   1   2022-02-28 17:38:34.000  0
43   54  2022-02-28 17:38:35.000  0
46   1   2022-03-16 23:24:43.000  0
46   5   2022-03-16 23:24:43.000  0
47   1   2022-03-16 23:25:51.000  0
47   3   2022-03-16 23:25:51.000  0
47   5   2022-03-16 23:25:51.000  0
48   1   2022-03-17 00:19:26.000  0
2    5   2020-11-18 19:16:54.000  1
23   6   2020-11-19 00:19:06.000  1
47   6   2022-03-16 23:25:51.000  1
48   15  2022-03-17 00:19:26.000  1
54   3   2022-03-19 00:19:22.000  1
54   5   2022-03-19 00:19:22.000  1
55   1   2022-03-19 00:23:18.000  1
55   3   2022-03-19 00:23:18.000  1
55   5   2022-03-19 00:23:18.000  1

For some examples using the above data: one relation_id is one conversation.
So using the bottom 3 examples:
- If I want to look for a conversation just between user 1 and user 3, it should not return relation_id 55, because it also has user 5 in it.
- The same goes for relation_id 47, as that has user 5 in it too.
- It should just return relation_id 14, which only has user 1 and user 3. There are more conversations between just user 1 and user 3, but I would just return the first one it finds.
ANSWER
Answered 2022-Mar-19 at 04:08
I'm not a PHP guy, but something like this would return only conversations involving user_id 1 and user_id 3:
SELECT mm.relation_id
, GROUP_CONCAT(mm.user_id) AS User_List
FROM messages_members mm
WHERE mm.user_id IN (1,3) -- find user_id 1 or 3
AND NOT EXISTS (
-- Does not involve other user_id's
SELECT NULL
FROM messages_members ex
WHERE ex.relation_id = mm.relation_id
AND ex.user_id NOT IN (1,3) -- exclude user_id 1 or 3
)
GROUP BY mm.relation_id
-- Where "2" is the unique number of users
-- i.e. Find user 1 and user 3 = 2 distinct user_id's
HAVING COUNT(DISTINCT mm.user_id) = 2
;
Results:
relation_id | User_List
----------: | :--------
         14 | 1,3
         19 | 1,3
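The answer's pattern (an anti-join to exclude conversations with unwanted members, plus a distinct-member count) can be reproduced end to end. The following is a small, self-contained Python/sqlite3 sketch; the table layout is simplified to the two relevant columns and the rows are a hand-picked subset of the question's sample data:

```python
# A minimal, self-contained reproduction of the answer's pattern using
# sqlite3. Table layout is simplified to the two relevant columns and the
# rows are a hand-picked subset of the question's sample data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages_members (
    relation_id INTEGER NOT NULL,
    user_id     INTEGER NOT NULL,
    PRIMARY KEY (relation_id, user_id)
);
-- relation 14: users {1,3}   relation 47: {1,3,5}   relation 55: {1,3,5}
INSERT INTO messages_members VALUES
    (14, 1), (14, 3),
    (47, 1), (47, 3), (47, 5),
    (55, 1), (55, 3), (55, 5);
""")

members = [1, 3]  # the exact participant set we are searching for
marks = ",".join("?" * len(members))
query = f"""
SELECT mm.relation_id, GROUP_CONCAT(mm.user_id) AS user_list
FROM messages_members mm
WHERE mm.user_id IN ({marks})
  AND NOT EXISTS (              -- no participants outside the wanted set
        SELECT NULL FROM messages_members ex
        WHERE ex.relation_id = mm.relation_id
          AND ex.user_id NOT IN ({marks})
  )
GROUP BY mm.relation_id
HAVING COUNT(DISTINCT mm.user_id) = ?   -- every wanted member is present
"""
rows = conn.execute(query, members + members + [len(members)]).fetchall()
print(rows)  # only relation 14 has exactly {1, 3}
```

Relations 47 and 55 are rejected by the NOT EXISTS clause (they contain user 5), and the HAVING clause guarantees that every requested member is actually present.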
QUESTION
I am writing a service using Go and RabbitMQ for messaging. I need to add information to the header recording where the message flows through: the exchange name or the queue name should be added to the message header as and when the message enters one.
Can someone tell me how this can be done?
ANSWER
Answered 2022-Mar-08 at 18:03
Every delivered message has a set of properties. Two of these are the exchange used to route the message, and the routing key. Depending on the type of exchange, you can also figure out the queue name based on this information.
https://www.rabbitmq.com/amqp-0-9-1-quickref.html
If you need to know when a message is published you can use this plugin - https://github.com/rabbitmq/rabbitmq-message-timestamp
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
QUESTION
I am attempting to use Quarkus' AMQP extension (reactive-messaging-amqp) to decouple work from the original REST request. The idea is that the REST call kicks off a long-running action, and the caller can later come back to get the result.
However, it seems that in my code, Quarkus runs each step on the same thread, completing the work before returning from the original sendNewLRA() call. I would have assumed that the message would be sent through AMQP, thus decoupling the process after the message was sent. Why isn't this the case? I currently don't have any AMQP/messaging-specific configuration, and am just letting the default broker run from its own TestContainer (managed by Quarkus).
REST handler:
@Inject
LRAMessenger messenger;
@LRA(end = false)
@GET
@Path("start")
@Produces(MediaType.TEXT_PLAIN)
public Response hello(
@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId,
@QueryParam("processTime") int processTime,
@QueryParam("payload") String payload
) {
log.info("Start. LRA ID: {}", lraId);
StartMessage start = new StartMessage();
start.setLraId(lraId);
start.setProcessTime(processTime);
start.setPayload(payload);
this.messenger.sendNewLRA(start); // blocks here
log.info("Sent lra processing message.");
return Response.ok(lraId).build();
}
Messaging code:
@ApplicationScoped
@Slf4j
public class LRAMessenger {
@Inject
NarayanaLRAClient lraClient;
@Inject
@Channel("lra-out")
Emitter<StartMessage> startEmitter;
/**
* Method to kick off backend processing.
* @param startMessage The message to send
*/
@Incoming("lra-start")
public void sendNewLRA(StartMessage startMessage) {
startEmitter.send(startMessage);
}
@Incoming("lra-out")
public void processLRA(StartMessage startMessage) throws InterruptedException {
log.info("Got lra message in process step: {}", startMessage);
lraClient.setCurrentLRA(startMessage.getLraId());
int waitTime = startMessage.getProcessTime() / 10;
for (int percent = 10; percent <= 100; percent += 10) {
log.info("Waiting to simulate processing...");
Thread.sleep(waitTime);
log.info("Done waiting ({}%)", percent);
}
log.info("Waiting to simulate processing completed.");
lraClient.closeLRA(startMessage.getLraId());
log.info("Closed LRA.");
}
}
Output:
2022-02-21 11:48:28,723 INFO [org.acm.cus.dem.end.LRAResourceTest] (main) testing LRA.
2022-02-21 11:48:29,192 INFO [org.acm.cus.dem.end.LRAResource] (executor-thread-0) Start. LRA ID: http://localhost:49251/lra-coordinator/0_ffffac110006_b651_6213c25d_2
2022-02-21 11:48:29,200 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Got lra message in process step: StartMessage(processTime=10000, payload=null)
2022-02-21 11:48:29,200 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:30,201 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (10%)
2022-02-21 11:48:30,202 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:31,203 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (20%)
2022-02-21 11:48:31,204 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:32,205 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (30%)
2022-02-21 11:48:32,206 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:33,207 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (40%)
2022-02-21 11:48:33,207 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:34,208 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (50%)
2022-02-21 11:48:34,209 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:35,210 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (60%)
2022-02-21 11:48:35,211 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:36,211 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (70%)
2022-02-21 11:48:36,212 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:37,212 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (80%)
2022-02-21 11:48:37,213 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:38,214 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (90%)
2022-02-21 11:48:38,215 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing...
2022-02-21 11:48:39,216 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Done waiting (100%)
2022-02-21 11:48:39,216 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Waiting to simulate processing completed.
2022-02-21 11:48:39,235 INFO [org.acm.cus.dem.mes.LRAMessenger] (executor-thread-0) Closed LRA.
2022-02-21 11:48:39,237 INFO [org.acm.cus.dem.end.LRAResource] (executor-thread-0) Sent lra processing message.
Note: I would expect the "Sent lra processing message." log line to appear early in the process, probably before the "Got lra message in process step" log message.
ANSWER
Answered 2022-Feb-22 at 16:52
Found the answer, or at least a fix... Adding @Blocking to the second step in the messaging chain seems to have decoupled the process:
@Incoming("lra-out")
@Blocking
public void processLRA(StartMessage startMessage) throws InterruptedException {
log.info("Got lra message in process step: {}", startMessage);
// ...
QUESTION
I want to understand the behavior I'm seeing with the following code. Using the RabbitMQ.Client library version 6.2.2.
Expected behavior: Connections are created quickly and the process does not slow down.
Actual behavior: the first 6 connections are created quickly; after that there is a significant slowdown and connections are created one by one (1s apart).
Note: starting the program multiple times shows similar behavior. That leads me to believe that the bottleneck is per-process rather than RabbitMQ or system resources.
Note 2: system resources are not the bottleneck (AFAIK).
Does anybody know what is causing the observed behavior? RabbitMQ installed on Windows 10 with default settings.
using RabbitMQ.Client;
using System;
namespace ConsoleApp1
{
internal class Program
{
static void Main(string[] args)
{
for (int i = 0; i < 50; i++)
{
var factory = new ConnectionFactory() { HostName = "localhost" };
var connection = factory.CreateConnection();
var channel1 = connection.CreateModel();
var channel2 = connection.CreateModel();
Console.WriteLine(i);
}
Console.ReadLine();
}
}
}
EDIT: I know this violates every best practice regarding "Single connection per process". I'm just curious what is limiting the connection creation and if there is any setting that can control this behavior.
ANSWER
Answered 2022-Feb-21 at 15:25
The .NET client uses the ThreadPool, which probably doesn't have enough threads out of the box. You need to increase the number available:
See issues and discussion here:
https://github.com/rabbitmq/rabbitmq-dotnet-client/search?q=threadpool
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
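The thread-pool bottleneck can be illustrated by analogy (a hedged Python sketch, not the .NET client: ThreadPoolExecutor stands in for the .NET ThreadPool, and fake_connect for a connection attempt). The .NET pool additionally grows beyond its minimum only gradually, roughly one thread per second, which would match the 1s-apart pattern in the question:

```python
# A Python analogy for the answer: a fixed-size pool caps concurrency,
# so work beyond the worker count simply queues and runs later.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

active = 0          # "connections" currently in flight
peak = 0            # highest concurrency ever observed
lock = threading.Lock()

def fake_connect(i):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)            # stand-in for the connection handshake I/O
    with lock:
        active -= 1
    return i

# Only 2 workers are available for 8 "connections": the first 2 start
# immediately, the remaining 6 wait for a free worker.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(fake_connect, range(8)))

print(f"completed={len(results)} peak_concurrency={peak}")
```

In .NET the fix is analogous to raising max_workers: increasing the pool's minimum thread count so connection callbacks don't have to wait for thread injection.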
QUESTION
So I'm currently looking into updating our very simple service bus service to the latest version (Azure.Messaging.ServiceBus) and I'm running into a small problem here.
The thing is, I want to complete or abandon received or peeked messages manually by delegating the message back to methods in my service class to handle the job.
Here is my simple class so far, exposed by an interface.
using myProject.Interfaces;
using myProject.Utilities;
using Azure.Messaging.ServiceBus;
using System;
namespace myProject.Services
{
/// <summary>
/// Service bus service controller
/// </summary>
public class ServiceBusService: IServiceBusService
{
private IConfigurationUtility _configurationUtility;
static string connectionString;
static ServiceBusClient client;
static ServiceBusSender sender;
static ServiceBusReceiver receiver;
public ServiceBusService(IConfigurationUtility configurationUtility)
{
_configurationUtility = configurationUtility;
connectionString = _configurationUtility.GetSetting("ServiceBusConnectionString");
client = new ServiceBusClient(connectionString);
}
/// <summary>
/// Sending message.
/// </summary>
public void SendMessage(string messageContent, string queueName)
{
sender = client.CreateSender(queueName);
ServiceBusMessage message = new ServiceBusMessage(messageContent);
sender.SendMessageAsync(message).Wait();
}
/// <summary>
/// Receive message.
/// </summary>
public ServiceBusReceivedMessage ReceiveMessage(string queueName)
{
receiver = client.CreateReceiver(queueName);
ServiceBusReceivedMessage receivedMessage = receiver.ReceiveMessageAsync().Result;
return receivedMessage;
}
/// <summary>
/// Peek message.
/// </summary>
public ServiceBusReceivedMessage PeekMessage(string queueName)
{
receiver = client.CreateReceiver(queueName);
ServiceBusReceivedMessage peekedMessage = receiver.PeekMessageAsync().Result;
return peekedMessage;
}
/// <summary>
/// Complete message.
/// </summary>
public void CompleteMessage(ServiceBusReceivedMessage message)
{
receiver.CompleteMessageAsync(message).Wait();
}
/// <summary>
/// Abandon message.
/// </summary>
public void AbandonMessage(ServiceBusReceivedMessage message)
{
receiver.AbandonMessageAsync(message).Wait();
}
}
}
Everything works well when I'm handling one message at a time, but if I process two messages at a time, for example, I get this error:
"The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue, or was received by a different receiver instance"
And I think the "or was received by a different receiver instance" part is my problem.
Am I thinking about this the wrong way? Shouldn't I be able to handle, for example, abandoning multiple individually received messages at a time?
Thanks in advance.
UPDATE:
So I think I managed to get it working, in line with what Jesse Squire suggested and the new Service Bus documentation provided by Microsoft.
I made a new class called ServiceBusFactory, added the code provided by Jesse below to test it out, and changed the "= new();" part to explicit constructor calls for the ConcurrentDictionary fields at the top of the example, as initializing them the original way resulted in the error below:
CS8400 Feature 'target-typed object creation' is not available in C# 8.0. Please use language version 9.0 or greater.
Result:
public class ServiceBusFactory : IAsyncDisposable, IServiceBusFactory
{
private readonly ServiceBusClient _client;
private readonly ConcurrentDictionary<string, ServiceBusSender> _senders = new ConcurrentDictionary<string, ServiceBusSender>();
private readonly ConcurrentDictionary<string, ServiceBusReceiver> _receivers = new ConcurrentDictionary<string, ServiceBusReceiver>();
public ServiceBusFactory(string fullyQualifiedNamespace, TokenCredential credential) => _client = new ServiceBusClient(fullyQualifiedNamespace, credential);
public ServiceBusFactory(string connectionString) => _client = new ServiceBusClient(connectionString);
public ServiceBusSender GetSender(string entity) =>
_senders.GetOrAdd(entity, entity => _client.CreateSender(entity));
public ServiceBusReceiver GetReceiver(string entity) =>
_receivers.GetOrAdd(entity, entity => _client.CreateReceiver(entity));
public async ValueTask DisposeAsync()
{
await _client.DisposeAsync().ConfigureAwait(false);
GC.SuppressFinalize(this);
}
}
I then created an interface for my service bus factory and added this as a singleton at the bottom of the "ConfigureServices" method in my Startup.cs class, initializing it with my Service Bus connection string.
Interface:
public interface IServiceBusFactory
{
public ServiceBusSender GetSender(string entity);
public ServiceBusReceiver GetReceiver(string entity);
public ValueTask DisposeAsync();
}
Startup:
var serviceBusConnectionString = "[CONNECTION STRING]";
services.AddSingleton<IServiceBusFactory>(new ServiceBusFactory(serviceBusConnectionString));
Finally, it was a matter of dependency-injecting the IServiceBusFactory interface into my controller constructor, then using it to get a sender and send a message to my queue called "Add".
Constructor:
private readonly IServiceBusFactory _serviceBusFactory;
public ServicebusController(IServiceBusFactory serviceBusFactory)
{
_serviceBusFactory = serviceBusFactory;
}
Controller Method/Action implementation:
var test = new List<ServiceBusMessage>();
test.Add(new ServiceBusMessage("This is a test message."));
var sender = _serviceBusFactory.GetSender("Add");
sender.SendMessagesAsync(test);
ANSWER
Answered 2022-Feb-17 at 14:57
Service Bus associates a message lock with the AMQP link from which the message was received. For the SDK, this means that you must settle the message with the same ServiceBusReceiver instance that you used to receive it.
In your code, you're creating a new receiver for each ReceiveMessage call - so when you attempt to complete or abandon the message, you're using a link for which the message is not valid if any other call to ReceiveMessage has taken place.
Generally, you want to avoid the pattern of creating short-lived Service Bus client objects. They're intended to be long-lived and reused over the lifetime of the application. In your code, you're also implicitly abandoning the senders/receivers without closing them. This is going to orphan the AMQP link until the service force-closes it for being idle after 20 minutes.
I'd recommend pooling your senders/receivers and keeping each as a singleton for the associated queue. Each call to SendMessage or ReceiveMessage for a given queue should use the same sender/receiver instance.
When your application closes, be sure to close or dispose the ServiceBusClient, which will ensure that all of its child senders/receivers are also cleaned up appropriately.
I'd also very strongly recommend refactoring your class to be async. The sync-over-async pattern that you're using is going to put additional pressure on the thread pool and is likely to result in thread starvation and/or deadlocks under load.
UPDATE
To add some additional context, I'd advise not wrapping Service Bus operations but, instead, have a factory that focuses on managing clients and letting callers interact directly with them.
This ensures that clients are pooled and their lifetimes are managed correctly, while also giving flexibility to callers to hold onto the sender/receiver reference and use for multiple operations rather than paying the cost to retrieve it.
As an example, the following is a simple factory class that you'd create and manage as a singleton in your application. Callers are able to request a sender/receiver for a specific queue/topic/subscription and they'll be created as needed and then pooled for reuse.
// This class is intended to be treated as a singleton for most
// scenarios. Each instance created will open an independent connection
// to the Service Bus namespace, shared by senders and receivers spawned from it.
public class ServiceBusFactory : IAsyncDisposable
{
// If throughput needs scale beyond a single connection, the factory can
// manage multiple clients and ensure that child entities are evenly distributed
// among them.
private readonly ServiceBusClient _client;
private readonly ConcurrentDictionary<string, ServiceBusSender> _senders = new();
private readonly ConcurrentDictionary<string, ServiceBusReceiver> _receivers = new();
public ServiceBusFactory(string fullyQualifiedNamespace, TokenCredential credential) => _client = new ServiceBusClient(fullyQualifiedNamespace, credential);
public ServiceBusFactory(string connectionString) => _client = new ServiceBusClient(connectionString);
public ServiceBusSender GetSender(string entity) =>
_senders.GetOrAdd(entity, entity => _client.CreateSender(entity));
public ServiceBusReceiver GetReceiver(string entity) =>
_receivers.GetOrAdd(entity, entity => _client.CreateReceiver(entity));
public async ValueTask DisposeAsync()
{
await _client.DisposeAsync().ConfigureAwait(false);
GC.SuppressFinalize(this);
}
}
QUESTION
I am deciding whether I should use MSK (managed Kafka from AWS) or a combination of SQS + SNS to achieve a pub/sub model.
Background
Currently, we have a microservice architecture, but we don't use any messaging service and only use REST APIs (don't ask why - it's related to some 3rd-party vendors who designed the architecture). Now I want to revamp it and start using messaging for communication between microservices.
Initially, the plan is to start publishing entity events for any other micro service to consume - these events will also be stored in data lake in S3 which will also serve as a base for starting data team.
Later, I want to move certain features from REST to async communication.
Anyway, the main question I have is - should I decide to go with MSK or should I use SQS + SNS for the same? ( I already understand the basic concepts but wanted to understand from fellow community if there are some other pros and cons)?
Thanks in advance
ANSWER
Answered 2022-Feb-09 at 17:58
MSK vs SQS+SNS is not really a 1:1 comparison; the choice depends on the use case. Here are some specific differences between the two:
- Scalability -> MSK has better scalability because of its inherent design of partitions, which allows parallelism and ordering of messages. SNS has a limit of 300 publishes/second; to achieve the same performance as MSK, you would need a higher number of SNS topics for the same purpose.
Example: topic "Order Service". MSK -> one topic + 10 partitions. SNS -> 10 topics.
If the client/message producer uses 10 SNS topics for the same purpose, then the client needs to know about all 10 SNS topics and handle the distribution of messages. In MSK, it's pretty straightforward: a key is sent in the message and Kafka will allocate the partition based on the key value.
- Administration/Operation -> SNS+SQS setup is much simpler compared to MSK. The operational challenge is much greater with MSK (even though it is a managed service); MSK needs more in-depth skills to use optimally.
- SNS+SQS vs SQS -> I believe you have multiple subscriptions (fan-out) for the same message, which is why you refer to SNS+SQS. If you have one subscription per message, then SQS alone is sufficient.
- Replay of messages -> MSK can be used for replaying already-processed messages. That is tricky with SQS, though it can be achieved by keeping a duplicate queue to replay from.
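The key-to-partition routing mentioned in the scalability point can be sketched as follows (a hedged illustration of Kafka-style partitioning in Python, not the MSK client API; the function name and partition count are made up):

```python
# Hypothetical sketch of Kafka-style partition selection by key. Messages
# sharing a key always land on the same partition, preserving per-key
# ordering while partitions scale out the topic.
from hashlib import md5

NUM_PARTITIONS = 10

def partition_for(key: str) -> int:
    # A stable hash (unlike Python's per-process salted hash()) keeps
    # routing deterministic across producers.
    return int(md5(key.encode()).hexdigest(), 16) % NUM_PARTITIONS

orders = ["order-1", "order-2", "order-1", "order-3", "order-1"]
routed = [(key, partition_for(key)) for key in orders]

# All three "order-1" messages were routed to a single partition.
assert len({p for k, p in routed if k == "order-1"}) == 1
print(routed)
```

With SNS, by contrast, the producer itself would have to pick one of the 10 topics and keep that choice consistent per key.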
QUESTION
In my UML model I have a system and its subcomponents that talk to each other. For Example, I have a computer and a RC robot where they talk via Bluetooth. Currently in the diagrams the flow is something like:
"Computer" triggers "setVelocity()" function of "RC car".
At this point, I want to refine the communication by saying that
- computer sends "Movement" message
- with velocity field is set to 100 and direction field is set to 0
- which is acknowledged by RC car by sending ACK message
- with message id "Movement" and sequence number X.
How do I do that?
EDIT: Clarification
Normally this is what my diagram looks like without protocol details:
But when I tried to add messages, there are at least 2 problems:
- It seems like the Computer first triggers the setVelocity() function and then sendBluetoothMessage() sequentially, but they are not sequential: what follows setVelocity() is actually what happens inside it.
- sendBluetoothMessage() is actually a function of the Computer, but here it belongs to the RC Car (or am I wrong?). The same applies to ACK.
Thanks for the responses. You are gold!
ANSWER
Answered 2022-Jan-29 at 17:48
There are two main ways of representing the sending of a movement message between two devices:
- A movement() operation on the target device, with parameters for the velocity and direction. You would typically show the exchange in a sequence diagram, with a call arrow from the sender to the receiver. The return message could just be labeled as ACK.
- A «signal» Movement: signals correspond to event messages. In a class diagram, they are represented like a class but with the «signal» keyword. velocity and direction would be attributes of that signal. ACK would be another signal. The classes that are able to receive the signals show it as a reception (looks like an operation, but again with the «signal» keyword).
In both cases, you would show the interactions of your communication protocol with an almost identical sequence diagram. But signals are meant for asynchronous communication and, IMHO, better reflect the nature of the communication. Their semantics are more suitable for your needs.
If you prefer communication diagrams over interaction diagrams, the signal approach would be clearer, since communication diagrams don't show return messages.
Why signals are what you need (your edit)
With the diagrams, your edited question is much clearer. My position about the use of signals is unchanged: signals would correspond to the information exchanged between the computer and the car. So in a class diagram, you could document the «signal» Movement as having attributes id, velocity and direction:
In your sequence diagram, you'd then send an arrow with Movement(X,100,0). Signals allow you to show the high-level view of the protocol exchanges, without getting lost in the practical implementation details:
The implementation details could then be shown in a separate diagram. There are certainly several classes involved on the side of the computer (one diagram, the final action being some kind of sending) and on the side of the car (another diagram: how to receive and dispatch the message, and decode its content). I do not provide examples because it would very much look like your current diagram, but the send functions would probably be implemented by a communication controller.
If you try to put the protocol and its implementation in the same diagram, as in your second diagram, it gets confusing because of the lack of separation of concerns: here you say the computer is calling a send function on the car, which is not at all what you want. The reader then has difficulty seeing what's really required by the protocol versus what's an implementation detail. For instance, I still don't know, according to your diagram, whether setVelocity is supposed to directly send something to the car, or if it's a preparatory step for sending the movement message with a velocity.
Last but not least, keep in mind that the sequence diagram represents just one specific scenario. If you want to formally define a protocol in UML, you'd also need to create a protocol state machine that defines the valid succession of messages. When you use signals, you can use their names directly as state transition triggers/events.
QUESTION
Essentially what the subject says.
I'm wondering if JetStream can be queried in a way that allows us to refetch either the last 15 messages of subject "foo.*" or the messages that JetStream received on subject "foo.*" in the last 1.5 seconds.
If that's possible any code-samples or links to code-samples are appreciated.
ANSWER
Answered 2022-Jan-21 at 03:47
According to the official docs:
- It is possible to grab messages starting from a certain time (your "in the last 1.5 seconds" case) using DeliverByStartTime: "When first consuming messages, start with messages on or after this time. The consumer is required to specify OptStartTime, the time in the stream to start at. It will receive the closest available message on or after that time."
- The other requirement, the last 15 messages, I think is not possible.
QUESTION
I am trying to persist messages from ActiveMQ messaging in a Postgres database. The first step was easy: I added this CLI script
/subsystem=datasources/data-source=messagingDS:add(jndi-name="java:jboss/datasources/messagingDS",use-java-context=true, \
use-ccm=true,connection-url="{{ pg_db_connection_url_messaging }}",driver-name=postgres,transaction-isolation=TRANSACTION_READ_COMMITTED,min-pool-size=0, \
max-pool-size=20,user-name={{ pg_db_user_pg }},password={{ pg_db_password_pg }},blocking-timeout-wait-millis=10000,check-valid-connection-sql=select 1,validate-on-match=true, \
valid-connection-checker-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker",validate-on-match=true, \
exception-sorter-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter")
/subsystem=datasources/data-source=messagingDS:test-connection-in-pool
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-datasource, value=messagingDS)
But I want to use the same DB schema for more than one server in a cluster. Somewhere I read that I have to use a suffix, but I can't find information on how to configure a suffix per server in ActiveMQ. Any idea how?
ANSWER
Answered 2022-Jan-03 at 17:27
I don't believe WildFly supports a suffix or prefix for the JDBC table names, so you'll need to manually set the table names for the journal configuration. Here's the list of attributes:
messages-table
bindings-table
jms-bindings-table
large-messages-table
node-manager-store-table
page-store-table
These will need to be unique for each server.
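A sketch of what that could look like in the same CLI style (hedged: the journal-* attribute names below follow WildFly's management model naming for these journal table attributes and should be verified against your WildFly version; the per-node "_node1" suffixes are made-up examples):

```
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-messages-table, value=messages_node1)
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-bindings-table, value=bindings_node1)
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-jms-bindings-table, value=jms_bindings_node1)
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-large-messages-table, value=large_messages_node1)
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-node-manager-store-table, value=node_mgr_node1)
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-page-store-table, value=page_store_node1)
```

Repeat per cluster node with a different suffix, then reload each server so the journal picks up the new table names.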
Community Discussions, Code Snippets contain sources that include Stack Exchange Network