p-queue | Promise queue with concurrency control | Reactive Programming library
kandi X-RAY | p-queue Summary
Promise queue with concurrency control
p-queue Key Features
p-queue Examples and Code Snippets
Community Discussions
Trending Discussions on p-queue
QUESTION
After setting batchSize: 1, how should the worker indicate success, and how should it indicate a temporary failure / ask for a retry? I read https://docs.aws.amazon.com/lambda/latest/dg/invocation-retries.html, but it obviously doesn't cover a custom runtime, it's not at all clear what's happening, and it doesn't talk about SQS anyway. I suspect just throwing an exception might be enough, but I can't make heads or tails of what signals Lambda expects.
Tutorials like https://medium.com/cs-code/setup-queue-with-serverless-laravel-using-bref-92b2cd803bb7 make no mention of this. That tutorial talks about maxReceiveCount: 3 but not about how to make SQS retry later.
ANSWER
Answered 2021-Apr-12 at 04:03
I am not sure about custom runtimes, but in Node.js, if you just throw an error the message becomes available again in SQS; if no error is thrown, it is deleted from SQS.
Configure a dead-letter queue for SQS, so your message will go to the DLQ after the configured maxReceiveCount retries. After fixing the cause of the failures you can move the messages from the DLQ back to your main queue using another Lambda or the CLI.
SQS will not retry on its own, since Lambda long-polls the queue.
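For the Node.js case described above, a minimal sketch of that behaviour might look like this. The handler body and the processMessage helper are illustrative, not from the original thread; only the throw-versus-return behaviour is the point.

```javascript
// Hypothetical SQS-triggered Lambda handler (Node.js runtime).
exports.handler = async (event) => {
  for (const record of event.Records) {
    // processMessage is a placeholder for your own business logic.
    const ok = await processMessage(JSON.parse(record.body));
    if (!ok) {
      // Throwing signals a temporary failure: the message becomes visible in SQS
      // again and is retried until maxReceiveCount is reached, then lands in the DLQ.
      throw new Error(`Processing failed for message ${record.messageId}`);
    }
  }
  // Returning without throwing signals success, and the message(s) are deleted from SQS.
};
```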
QUESTION
I am trying to implement an HTTP request/reply using separate RabbitMQ queues in Spring Integration DSL. It's similar to Spring IntegrationFlow http request to amqp queue. The difference is that I want the response to go back to the original HTTP caller. I could see the test HTTP POST message successfully passed to the request queue and transformed (into upper case) into the response queue. The message was consumed from the response queue as well, but it was never returned to the caller (http://localhost:8080/Tunner). Eventually the call timed out with a 500 error. I am new to this, so there could be something I totally missed. Could someone provide a suggestion? The code is as follows:
...ANSWER
Answered 2021-Feb-26 at 21:16
You have probably misunderstood what the returnChannel on the Amqp.outboundGateway is and are trying to base your logic on it. Please make yourself familiar with the Publisher Confirms and Returns feature: https://docs.spring.io/spring-amqp/docs/current/reference/html/#cf-pub-conf-ret.
It is also not clear what the purpose of the replyBackToHttp flow is; at the moment it is confusing, with mixed references to other beans.
You probably need to investigate what a request-reply configuration looks like from the Spring AMQP perspective, and you probably shouldn't try to use a separate queue for replies. It is still possible, though: see the replyAddress property on RabbitTemplate: https://docs.spring.io/spring-amqp/docs/current/reference/html/#request-reply
QUESTION
Installing elasticdump throws a bunch of warnings like so:
ANSWER
Answered 2020-Dec-07 at 07:29
From the logs:
QUESTION
My question is basically a combination of
I'm aware of Promise.allSettled, but I'm failing to find a good way to also limit concurrency.
What I have so far:
Idea 1, using p-limit:
ANSWER
Answered 2020-Nov-20 at 16:29
It's simple enough to implement yourself: make an array of functions that, when called, return the Promise. Then implement a limiter function that takes functions from that array and calls them and, once one finishes, recursively calls the limiter again until the array is empty:
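The original snippet isn't included in this excerpt; a minimal sketch of the recursive limiter described above might look like this (the function and variable names are illustrative, not from the original answer).

```javascript
// Run an array of promise-returning functions with at most `concurrency` in flight,
// collecting allSettled-style results.
function allSettledLimited(taskFns, concurrency) {
  const results = new Array(taskFns.length);
  let nextIndex = 0;

  // Each runner takes the next function, runs it, records the outcome,
  // then recursively continues until the array is exhausted.
  async function run() {
    const current = nextIndex++;
    if (current >= taskFns.length) return;
    try {
      results[current] = { status: 'fulfilled', value: await taskFns[current]() };
    } catch (reason) {
      results[current] = { status: 'rejected', reason };
    }
    return run();
  }

  // Start `concurrency` runners in parallel and resolve with the collected results.
  const runners = Array.from({ length: Math.min(concurrency, taskFns.length) }, run);
  return Promise.all(runners).then(() => results);
}

// Usage: wrap each call in a function so nothing starts before the limiter picks it up.
// allSettledLimited(urls.map((u) => () => fetch(u)), 3).then(console.log);
```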
QUESTION
I have a large dataset stored in a Firestore collection and a Nodejs express app (exposed as a firebase functions.https.onRequest) with an endpoint which allows users to query this dataset and download large amounts of data.
I need to return the data in CSV format from the endpoint. Because there is a lot of data, I want to avoid doing large database reads each time the endpoint is hit.
My current endpoint does this:
- User hits endpoint with a database query requesting documents within a range
- Query is hashed into a filename, e.g. query_"startRange"_"endRange".csv
- Check Firebase storage to see if this query has been run before
if the csv already exists:
- return a 302 redirect to the csv file with a signed url
if the csv doesn't exist:
- Run the query on the Firestore collection
- Transform the data into the appropriate CSV format
- upload the new CSV to Firebase storage
- return a 302 redirect to the newly generated csv file with a signed url
This process is currently working really well, except I can already foresee an issue. The CSV generation stage takes roughly 20s for large queries and there is a high possibility of the same request being hit from multiple users at the same time.
I want to build in some sort of queuing system so that if X number of users hit the endpoint at once, only the first request triggers the generation of the new CSV and the other (X-1) requests will be queued and then resolved once the CSV is generated.
I have currently looked into firebase-queue, which appears to be deprecated and not intended to be used with Cloud Functions. I have also seen other libraries like p-queue, but I'm not sure I understand how that would work with Firebase Cloud Functions and how separate instances are booted for many requests.
...ANSWER
Answered 2020-Aug-14 at 10:01
I think that in your scenario the queue approach wouldn't work quite well with Cloud Functions. The queue cannot be implemented in a function, as multiple instances won't know about each other, so the queue would need to be implemented on some kind of dedicated server, which IMO defeats the purpose of using Cloud Functions since both the queue and the processing could be run on the same server.
I would suggest having a collection in Firestore that keeps track of the queries that have been requested. This way, even if the CSV file isn't yet saved to Storage, you could check whether some function instance is already creating it, then sleep the function until the operation completes and return the signed URL. Overall the algorithm might look somewhat like this:
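The answer's pseudocode isn't included in this excerpt; a rough sketch of the idea, assuming the firebase-admin SDK and an illustrative csvJobs tracking collection (the collection, field, and function names are not from the original answer), might look like this.

```javascript
const admin = require('firebase-admin');
const db = admin.firestore();

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// queryHash identifies the request; buildCsvAndGetSignedUrl is your existing
// "run query -> write CSV -> sign URL" logic, passed in as a callback.
async function getOrBuildCsv(queryHash, buildCsvAndGetSignedUrl) {
  const jobRef = db.collection('csvJobs').doc(queryHash);

  // Atomically claim the job: only the first request transitions it to "building".
  const claimed = await db.runTransaction(async (tx) => {
    const snap = await tx.get(jobRef);
    if (snap.exists) return false; // another instance is already building (or done)
    tx.set(jobRef, {
      status: 'building',
      startedAt: admin.firestore.FieldValue.serverTimestamp(),
    });
    return true;
  });

  if (claimed) {
    const url = await buildCsvAndGetSignedUrl();
    await jobRef.update({ status: 'done', url });
    return url;
  }

  // Another instance claimed it: poll (sleep the function) until it reports "done".
  for (let attempt = 0; attempt < 30; attempt++) {
    const snap = await jobRef.get();
    if (snap.data().status === 'done') return snap.data().url;
    await sleep(2000);
  }
  throw new Error('Timed out waiting for CSV generation');
}
```

A production version would also need to handle a builder instance that crashes and leaves the job stuck in "building", for example by expiring stale startedAt timestamps.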
QUESTION
I'm working on unit testing in my Angular app.
My version of Angular is 4.0.0.
My component looks like this:
component.ts:
ANSWER
Answered 2020-Apr-15 at 15:30
I had the same problem, just figured it out. Remove the line:
QUESTION
I wanted to scrape multiple URLs simultaneously, so I used p-queue to implement a Promise queue.
For example, the code below uses one browser and multiple pages to do this job.
...ANSWER
Answered 2020-Mar-24 at 08:31
I found out why the above code didn't work: I shouldn't await the instance outside of the worker function, but await it inside. See below:
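The corrected code from the original answer isn't reproduced in this excerpt; a minimal sketch of the pattern it describes, with all page work awaited inside the function passed to queue.add(), might look like this (URLs, concurrency, and an ESM setup are assumptions).

```javascript
import PQueue from 'p-queue';
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const queue = new PQueue({ concurrency: 3 });

  const urls = ['https://example.com', 'https://example.org'];

  const titles = await Promise.all(
    urls.map((url) =>
      queue.add(async () => {
        // All awaiting happens inside the worker, so the queue decides when it starts.
        const page = await browser.newPage();
        await page.goto(url);
        const title = await page.title();
        await page.close();
        return title;
      })
    )
  );

  console.log(titles);
  await browser.close();
})();
```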
QUESTION
I am using Supervisord to help keep my Laravel-based app's queue running, and I am wondering if the below configuration is correct.
In the Laravel docs, for example, numprocs is set to 8, which means that Supervisord will run queue:work 8 times. Is this a good thing, and why?
Also, should I be using --daemon in the queue:work command?
...ANSWER
Answered 2020-Jan-03 at 17:15
numprocs will spawn 8 processes that will then poll the queue every 3 seconds. When daemon is set, these processes will not restart unless told to, which has advantages in terms of server load and disadvantages in the form of some potential edge cases when updating your code base.
Having 8 processes means that you have potentially 8 times the throughput when running jobs.
Example:
There are many scenarios where having multiple processes running in parallel is advantageous.
For instance, say you are processing 1000 users and want to check how many comments each has made in the last month. Say each check takes a minute to process (extreme, but it makes the point better): it would take 1000 minutes to complete if you ran them in sequence by looping through an array or collection of 1000 users. That's over 16 hours!
If you queued these as jobs and have numprocs set to 16, then you are done in just over an hour!
QUESTION
I am trying to implement Spring Boot with Camel. I need to create a REST API, and the REST API will put an object/string onto ActiveMQ. I have done:
...ANSWER
Answered 2019-Dec-05 at 13:49
If you are not expecting a reply to the message you send to your foo queue, then you need to tell ActiveMQ that you are not expecting a reply by sending an inOnly type exchange, or it will create a temporary queue and await a reply:
QUESTION
There is a nodejs module that lets you limit the number of concurrent promises:
https://github.com/sindresorhus/p-queue
Does this make use of multiple threads?
Example from the official page:
...ANSWER
Answered 2019-Nov-16 at 20:38
Will it run each of the async functions above in different CPU threads?
No, of course not. That's not what async functions do. All they do is provide nicer syntax for sequential asynchronous code. They're still normal promises with promise callbacks under the hood, running on the same single-threaded event loop as always. See also Is promise.all useful given that javascript is executed in a single thread?
Of course, the got package would be free to use Node.js's worker-thread features or something else to do its job; all it needs to do to play nicely with await is return a promise. But that won't affect your asynchronous IIFEs or the p-queue on which the functions are scheduled.
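The official-page example referenced in the question isn't reproduced here; a minimal sketch of that style of usage, assuming recent ESM versions of p-queue and got (URLs and concurrency are illustrative), might look like this.

```javascript
import PQueue from 'p-queue';
import got from 'got';

const queue = new PQueue({ concurrency: 2 });

const urls = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c'];

for (const url of urls) {
  // Each task is an async function; awaiting got() just yields the event loop
  // while the network I/O is pending - no extra CPU thread is created for it.
  queue.add(async () => {
    const response = await got(url);
    console.log(url, response.statusCode);
  });
}

await queue.onIdle(); // resolves once all queued tasks have finished
```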
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install p-queue
Support