queues | A queue system for Vapor | Web Framework library
kandi X-RAY | queues Summary
A queue system for Vapor.
Community Discussions
Trending Discussions on queues
QUESTION
I have a map-style dataset, which is used for instance segmentation tasks. The dataset is very imbalanced, in the sense that some images have only 10 objects while others have up to 1200.
How can I limit the number of objects per batch?
A minimal reproducible example is:
...ANSWER
Answered 2022-Mar-17 at 19:22
If what you are trying to solve really is:
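One common way to cap objects per batch, sketched here as a hedged illustration rather than as this answer's own approach, is a custom batch_sampler that groups image indices under an object budget. The ToyDataset class, the budgeted_batches helper, and the 500-object budget are all assumptions for the sketch:

```python
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Stand-in for the real map-style instance-segmentation dataset."""
    def __init__(self, object_counts):
        self.object_counts = object_counts
    def __len__(self):
        return len(self.object_counts)
    def __getitem__(self, idx):
        return {"image_id": idx, "num_objects": self.object_counts[idx]}

def budgeted_batches(object_counts, max_objects_per_batch):
    """Yield lists of dataset indices whose summed object count stays under budget."""
    batch, total = [], 0
    for idx, count in enumerate(object_counts):
        if batch and total + count > max_objects_per_batch:
            yield batch
            batch, total = [], 0
        batch.append(idx)
        total += count
    if batch:
        yield batch

counts = [10, 1200, 45, 300, 8]  # per-image object counts (assumed known up front)
loader = DataLoader(ToyDataset(counts),
                    batch_sampler=list(budgeted_batches(counts, 500)),
                    collate_fn=list)  # keep samples as a list; no tensor stacking
for batch in loader:
    print([sample["image_id"] for sample in batch])
```

Note that an image exceeding the budget on its own (like the 1200-object one) still gets emitted as a singleton batch, so no data is dropped.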
QUESTION
I'm using TypeScript to make sure queues fulfill the IQueue interface:
ANSWER
Answered 2022-Mar-01 at 04:12
First, if you want the compiler to infer string literal types for the id properties without inferring a readonly tuple type for queues, then you can move the const assertion from the queues initializer to just the id properties in question:
QUESTION
I am developing some concurrent algorithms which deal with Reference objects. I am using Java 17.
The thing is, I don't know the memory semantics of operations like get, clear, or refersTo; it isn't documented in the Javadoc.
Looking into the OpenJDK source code, the referent field has no modifier such as volatile (while the next pointer for reference queues is volatile). Also, the get implementation is trivial, but it is an intrinsic candidate. clear and refersTo are native, so I don't know what they really do.
When the GC clears a reference, I have to assume that all threads will see it cleared, as otherwise they would see a reference to an object (in the process of being) garbage collected, but that's just an informal guess.
Is there any guarantee about the memory semantics of all these operations?
If there isn't, is there a way to obtain the same guarantees as a volatile access by invoking, for instance, a fence operation before and/or after calling one of these operations?
...ANSWER
Answered 2022-Feb-28 at 17:38
When you invoke clear() on a reference object, it will only clear this particular Reference object, without any impact on the rest of your application and with no special memory semantics. It's exactly like you have seen in the code: an assignment of null to a field which has no volatile modifier.
Mind the documentation of clear():
This method is invoked only by Java code; when the garbage collector clears references it does so directly, without invoking this method.
So this is not related to the event of the GC clearing a reference. Your assumption "that all threads will see it cleared" when the GC clears a reference is correct. The documentation of WeakReference states:
Suppose that the garbage collector determines at a certain point in time that an object is weakly reachable. At that time it will atomically clear all weak references to that object and all weak references to any other weakly-reachable objects from which that object is reachable through a chain of strong and soft references.
So at this point, not only will all threads agree that a weak reference has been cleared, they will also agree that all weak references to the same object have been cleared. A similar statement can be found at SoftReference and PhantomReference.
The Java Language Specification, §12.6.2 Interaction with the Memory Model, refers to points where such an atomic clear may happen as reachability decision points (di). It specifies interactions between these points and other program actions, in terms of "comes-before di" and "comes-after di" relationships, the most important ones being:
If r is a read that sees a write w and r comes-before di, then w must come-before di.
If x and y are synchronization actions on the same variable or monitor such that so(x, y) (§17.4.4) and y comes-before di, then x must come-before di.
So the GC action will be inserted into the synchronization order, and even a racy read cannot subvert it, but it's important to keep in mind that the exact location of the reachability decision point is not known to the application. It's obviously somewhere between the last point where get() returned a non-null reference or refersTo(null) returned false, and the first point where get() returned null or refersTo(null) returned true.
For practical applications, the fact that once the reference reports the object to be garbage collected, you can be sure it won't reappear anywhere¹, is enough. Just keep the reference object private, to be sure that no one has invoked clear() on it.
¹ Leaving things like "finalizer resurrection" aside.
QUESTION
We have a microservice which consumes (subscribes to) messages from 50+ RabbitMQ queues.
Producing messages for these queues happens in two places:
When the application process encounters short/immediate-execution business logic (like sending emails or notifying another service), it sends the message directly to the exchange (which in turn routes it to the queue).
When we encounter long/delayed-execution business logic, we have a messages table with entries for messages that have to be executed after some time. A cron worker runs every 10 mins, scans the messages table, and pushes the messages to RabbitMQ.
Let's say the messages table has 10,000 messages which will be queued in the next cron run:
- 9:00 AM - The cron worker runs and queues 10,000 messages to the RabbitMQ queue.
- Subscribers listening to the queue start consuming the messages, but due to some issue in the system or a third-party response-time delay, each message takes 1 min to complete.
- 9:10 AM - The cron worker runs again 10 mins later, sees that 9,000+ messages are still pending past their scheduled time, and pushes 9,000+ duplicate messages to the queue.
Note: the subscribers which consume the messages are idempotent, so duplicate processing is not an issue.
Design idea I had in mind (but not the best logic): I can have 4 statuses (RequiresQueuing, Queued, Completed, Failed).
- Whenever a message is inserted, I can set the status to RequiresQueuing.
- When the cron worker picks up the messages and pushes them successfully to the queue, I can set the status to Queued.
- When a subscriber completes, it marks the status as Completed / Failed.
There is an issue with the above logic: let's say RabbitMQ somehow goes down, or in some cases we purge the queue for maintenance. Now the messages which are marked as Queued are in the wrong state, because they have to be identified again and their status changed manually.
Let's say I have a RabbitMQ queue named events.
This events queue has 5 subscribers; each subscriber gets 1 message from the queue and posts the event via REST API to another microservice (event-aggregator). Each API call usually takes 50 ms.
Use Case:
- Due to high load, the number of events produced becomes 3x.
- The microservice (event-aggregator) which accepts the events has also become slow in processing; its response time increased from 50 ms to 1 min.
- The cron worker follows the design mentioned above and queues the messages on each run. Now the queue is becoming too large, but I also cannot increase the number of subscribers, because the dependent microservice (event-aggregator) is lagging.
Now the question is: if I keep sending messages to the events queue, it just bloats the queue.
https://www.rabbitmq.com/memory.html - while reading this page, I found out that RabbitMQ won't even accept connections once it reaches the high watermark fraction (default is 40%). Of course this can be changed, but that requires manual intervention.
So if the queue length increases, it affects RabbitMQ memory; that is the reason I thought of throttling at the producer level.
Questions:
- How can I throttle my cron worker to skip a particular run, or somehow inspect the queue and see that it is already heavily loaded, so it doesn't push the messages?
- How can I handle the use cases described above? Is there a design which solves my problem? Has anyone faced the same issue?
Thanks in advance.
Answer: check the accepted answer's comments for the throttling approach using queueCount.
...ANSWER
Answered 2022-Feb-21 at 04:45
You can combine QoS (quality of service) and manual ACK to get around this problem. Your exact scenario is documented in https://www.rabbitmq.com/tutorials/tutorial-two-python.html. This example is for Python; you can refer to the examples for other languages as well.
Let's say you have 1 publisher and 5 worker scripts, all reading from the same queue, and each worker script takes 1 min to process a message. You can set QoS at the channel level. If you set it to 1, each worker script will be allocated only 1 message at a time, so we are processing 5 messages at a time. No new messages will be delivered until one of the 5 worker scripts does a MANUAL ACK.
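A minimal sketch of this consumer pattern in Python with pika (the client library the linked tutorial uses); the queue name task_queue and the sleep standing in for the slow work are assumptions:

```python
import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="task_queue", durable=True)

# QoS: at most one unacknowledged message per consumer at a time.
channel.basic_qos(prefetch_count=1)

def handle(ch, method, properties, body):
    time.sleep(60)  # stand-in for the ~1 min of real work
    ch.basic_ack(delivery_tag=method.delivery_tag)  # manual ACK releases the next message

channel.basic_consume(queue="task_queue", on_message_callback=handle, auto_ack=False)
channel.start_consuming()
```

Run five copies of this script to get the five-worker setup the answer describes; prefetch_count=1 is what stops RabbitMQ from pre-loading messages onto a busy worker.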
If you want to increase the throughput of message processing, you can increase the worker node count.
The idea of updating tables based on message status is not a good option; avoiding DB polling is the main reason a system uses queues, and it would cause a scaling issue. At some point you have to update the tables, and you would bottleneck because of locking and isolation levels.
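For the producer-side throttling asked about in the question, a hedged sketch: in pika, a passive queue declaration returns the queue's current ready-message count, which the cron worker can check before publishing. The events queue name and the 5,000-message threshold are assumptions:

```python
import pika

MAX_BACKLOG = 5000  # assumed threshold; tune to your consumers' throughput

def cron_run(publish_pending):
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    # passive=True only inspects the queue (it raises if the queue doesn't exist).
    state = channel.queue_declare(queue="events", passive=True)
    if state.method.message_count >= MAX_BACKLOG:
        print("queue heavily loaded; skipping this run")
    else:
        publish_pending(channel)  # caller publishes the pending messages
    connection.close()
```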
QUESTION
I am trying to perform a series of network requests and would like to limit the number of concurrent tasks in the new Swift Concurrency system. With operation queues we would use maxConcurrentOperationCount. In Combine, flatMap(maxPublishers:_:). What is the equivalent in the new Swift Concurrency system?
E.g., it is not terribly relevant, but consider:
...ANSWER
Answered 2022-Feb-17 at 22:06
One can insert a group.next() call inside the loop after reaching a certain count, e.g.:
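As a language-agnostic sketch of the same bounded-concurrency idea (in Python asyncio rather than Swift, for consistency with the other sketches here); the MAX_CONCURRENT value and the fetch stub are assumptions:

```python
import asyncio

MAX_CONCURRENT = 4  # analogous to maxConcurrentOperationCount (assumed value)

async def fetch(url: str) -> bytes:
    await asyncio.sleep(0.1)  # stand-in for a real network request
    return b""

async def fetch_all(urls):
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)

    async def bounded(url):
        async with semaphore:  # at most MAX_CONCURRENT bodies run at once
            return await fetch(url)

    # gather schedules everything, but the semaphore throttles the actual work
    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(fetch_all([f"https://example.com/{i}" for i in range(20)]))
```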
QUESTION
I have an exchange of type topic that only redirects messages to the queue payments.
Somewhere in the future, I will decide to add another queue, payment_analyze, to analyze all old and new messages that have been enqueued.
Durable exchanges and queues survive RabbitMQ restarts, and persistent messages get written to disk, but when binding a new queue to an old durable exchange, old messages do not get redirected (only new ones do).
From my understanding, this is the intended behavior, as exchanges do not store messages and only act as a "proxy".
How do I achieve this?
Possible solution: create a queue named parking and add every enqueued message to it; whenever a new queue is added, consume messages from parking without acknowledging, to keep the new queue "semi" up to date.
ANSWER
Answered 2022-Feb-10 at 10:32
Even though you've configured persistent messages on the payments queue, this just means messages will survive a broker restart - once a message has been consumed and acknowledged, it is removed.
If you know you're going to need the payment_analyze queue at some point in the future, is it viable to just create this queue/binding upfront and route messages to both payment_analyze and payments? Messages on payment_analyze will bank up until you're ready to start consuming them. Note: if you're producing a large number of messages, this approach might result in storage issues...
As an alternative, you could write the messages to BLOB storage (or some other data store) as part of your payments queue consumer (or a different queue/consumer altogether), and then, when you're ready to introduce the payment_analyze queue, you could write a script to read all the old messages from BLOB storage and send them to the RabbitMQ exchange. With 'topic' exchanges - see here - you can probably be clever with wildcards and routing keys in your queue bindings to ensure that both old messages (from BLOB storage) and new messages are routed to the payment_analyze queue, but only new messages are routed to the payments queue (so that your payments queue consumer is not reprocessing old messages).
Another option (assuming you're not overly invested in RabbitMQ) could be to consider Apache Kafka instead, which deals with this scenario quite nicely, as messages aren't automatically removed from a partition once they've been processed by a subscriber.
Anyways, just a few options to consider...
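A minimal pika sketch of the "create the binding upfront" option; the exchange name payments_exchange and the payments.# routing pattern are assumptions, since the question only names the queues:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Durable topic exchange (the name is assumed; the question only names the queues).
channel.exchange_declare(exchange="payments_exchange", exchange_type="topic", durable=True)

# Declare and bind both queues now, even though payment_analyze has no consumer yet;
# messages will bank up on it until a consumer is attached.
for queue in ("payments", "payment_analyze"):
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(queue=queue, exchange="payments_exchange", routing_key="payments.#")

channel.basic_publish(
    exchange="payments_exchange",
    routing_key="payments.created",
    body=b'{"id": 1}',
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)
connection.close()
```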
QUESTION
I have tried extending IOUserNetworkEthernet and calling RegisterEthernetInterface(). This works perfectly for one ethernet interface, though the driver crashes when RegisterEthernetInterface is called a second time (it doesn't return an error code). I have tried registering with separate queues.
Another approach was extending IOUserClient instead, and calling IOService::Create to create child IOUserNetworkEthernet instances. Everything about this approach works (the children appear within ioreg). However, once I call RegisterEthernetInterface on just one of the children, macOS crashes.
How would I go about creating a dext with multiple ethernet interfaces? Have I been approaching it the right way?
Appreciate any help.
...ANSWER
Answered 2022-Jan-21 at 14:34
I haven't yet implemented an ethernet dext myself, but based on my experience with using DriverKit for other types of drivers and knowing its design goals, I have an idea what the solution might be.
First off, let's clarify: you're implementing either a virtual ethernet device, or you're building a driver for hardware that unites multiple independent ports in one physical device. In the former case (I'm guessing this is what you're doing based on your IOUserClient comment), your driver will be matching IOUserResources, and you create the ethernet driver instance when you receive the corresponding message from your user space component.
Now, DriverKit design: DriverKit is built around the goal of keeping the driver instance for each device in its own separate process, sort of circling back to the microkernel idea. Specifically, this means that multiple devices that use the same driver will each create an independent instance of that driver in its own process. This very likely means that NetworkingDriverKit was never designed to support more than one instance of IOUserNetworkEthernet, because they are seen as separate devices. Hence the crashes.
OK, what do we do about it? Use DriverKit the way it was intended. Put each virtual ethernet adapter in its own driver instance. This gets a bit tricky. I think this should work:
- Your dext has a "control" instance. This is the thing that matches IOUserResources. It's a simple IOService class which listens for the signal (presumably via IOUserClient) to create or destroy a virtual ethernet device. When creating a virtual ethernet device, you need that to run in its own driver instance, though. (For an actual device with multiple ports, this would match the USB/PCI device nub, manage bus communication with the device, and enumerate the ports.)
- Instead of creating an instance of an IOUserNetworkEthernet subclass, create an instance of a "nub" class and attach it to your central control class instance. Call RegisterService() in its startup code, so it's considered for IOKit matching.
- Set up a second IOKit matching dictionary in your Info.plist, which matches your new "nub" object. Here, use your IOUserNetworkEthernet-derived class to "drive" the nub. Once the virtual ethernet device is ready to use, call RegisterEthernetInterface().
2 extra things to note:
- The virtual device will run in a separate process from the control object, so they can only communicate via DriverKit inter-process calls, i.e. user clients. Hopefully, they don't really need to communicate much though, and the control client can pass all the information required via properties on the nub. If you're implementing support for multi-port hardware, you probably won't be able to avoid this part though.
- Your user space component (app, daemon) will need to open a new communication channel with the virtual device, so you'll probably need to implement user client support there too.
I'm not sure how you'd go about shutting down an individual virtual ethernet device; you could try calling Terminate() on either it or the nub it's matched on and see what happens.
QUESTION
I'm attempting to add ActiveMQ Artemis' REST interface to my Docker container, and for that I have been following the official guide. I generate an artemis-rest.war file and move it into my /opt/artemis/web folder. Now when I navigate to http://localhost:8161/artemis-rest/queues/queue_name, I get a 404. When I try to navigate to other resources listed in /opt/artemis/web, like /console/ or /artemis-plugin/, I get at least some sort of a response.
My folder structure looks like this:
...ANSWER
Answered 2021-Dec-16 at 20:26
You need to deploy artemis-rest.war in etc/bootstrap.xml, e.g.:
QUESTION
When the click event is fired from the mouse, it behaves as expected:
First, listener 1 is pushed onto the stack, where it queues promise 1 in the Microtask Queue (or Job Queue). When listener 1 is popped off, the stack becomes empty, and the promise 1 callback is executed before listener 2 (which is waiting in the Task Queue, or Callback Queue). After the promise 1 callback is popped off, listener 2 is pushed onto the stack. So the output is:
Listener 1
Microtask 1
Listener 2
Microtask 2
However when the click is triggered via JavaScript code, it behaves differently:
The callback is pushed onto the stack even before the click() function has completed (i.e. the call stack is not empty). The output here is:
Listener 1
Listener 2
Microtask 1
Microtask 2
Here's the code:
...ANSWER
Answered 2021-Dec-11 at 19:25
As long as the click() call is still running, the .then() callbacks won't be executed.
This snippet shows the difference in the execution order a bit better:
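As a general illustration of the principle (queued callbacks cannot run until the synchronous code on the stack finishes), a hedged sketch of the analogous behavior in Python asyncio, kept in Python for consistency with the other sketches here; the prints are placeholders:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    loop.call_soon(print, "queued callback")  # queued, like a task/microtask
    print("synchronous work starts")
    for i in range(3):
        print("still running synchronously", i)  # the callback cannot preempt this
    print("synchronous work ends")
    await asyncio.sleep(0)  # yield to the event loop; only now does the callback run

asyncio.run(main())
```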
QUESTION
I was writing an application and ran into this issue; looking over the code again and again, nothing seems to be wrong. I tested with the basic snippet below and the issue is reproducible: RabbitMQ says the queue is always empty when it is not.
The Golang snippet below shows a producer sending messages more often than the consumer consumes them. The consumer is always active but sleeps longer, so that the queue builds up a backlog of messages. The result? The consumer fetches messages each time it tries; however, the API always says there are no messages - the message count is 0.
...ANSWER
Answered 2021-Nov-27 at 15:36
The field q2.Messages is unreliable; it is the count of messages not awaiting acknowledgment, i.e. messages already ACK'ed.
Your consumer is declared with autoAck = true (i.e. noAck), which means that no acknowledgements are expected, which means that there are zero messages already ACK'ed.
When you comment out the consumer, the number of acknowledged messages likely depends on the publisher buffer.
Getting a precise number of messages programmatically on a given queue with AMQP 0.9.1 is basically not possible. You may use the message_stats field in the management API instead:
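A hedged sketch of querying the management HTTP API from Python; the queue name myqueue and the default guest credentials are assumptions, and %2F is the URL-encoded default vhost /:

```python
import requests

# Requires the rabbitmq_management plugin; defaults assumed below.
url = "http://localhost:15672/api/queues/%2F/myqueue"
resp = requests.get(url, auth=("guest", "guest"))
resp.raise_for_status()
queue = resp.json()

print(queue["messages"])               # queue depth as the broker sees it
print(queue.get("message_stats", {}))  # publish/deliver/ack counters and rates
```

The message_stats field only appears once the queue has seen some activity, hence the .get() with a default.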
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported