delayed | SQL-driven ActiveJob backend used at Betterment to process background jobs | Job Scheduling library
kandi X-RAY | delayed Summary
It supports Postgres, MySQL, and SQLite.
Top functions reviewed by kandi - BETA
- Handle an asynchronous operation
- Resume a worker thread
- Run a job
- Start the process
- Add a callback to the callback
- Reserve the job
- Create a new Worker instance
- Resume the job to be executed
- Run a job
- Return the name of the process
delayed Key Features
delayed Examples and Code Snippets
Community Discussions
Trending Discussions on delayed
QUESTION
We have a microservice which consumes (subscribes to) messages from 50+ RabbitMQ queues.
Messages are produced for these queues in two places:
- For short-delay business logic (like sending emails or notifying another service), the application sends the message directly to the exchange, which in turn routes it to the queue.
- For long/delayed business logic, we have a messages table with entries for messages that have to be executed after some time.
A cron worker runs every 10 minutes, scans the messages table and pushes the messages to RabbitMQ.
Let's say the messages table has 10,000 messages which will be queued in the next cron run:
- 9.00 AM - The cron worker runs and queues 10,000 messages to the RabbitMQ queue.
- Subscribers listening to the queue start consuming the messages, but due to some issue in the system or a slow 3rd-party response, each message takes 1 min to complete.
- 9.10 AM - The cron worker runs again 10 minutes later, sees that 9,000+ messages are still not completed and their time has passed, so it pushes 9,000+ duplicate messages to the queue.
Note: the subscribers which consume the messages are idempotent, so duplicate processing is not a problem.
Design idea I had in mind (but not the best logic): I can have 4 statuses (RequiresQueuing, Queued, Completed, Failed).
- Whenever a message is inserted, I can set the status to RequiresQueuing.
- When the cron worker picks it up and pushes it successfully to the queue, I can set it to Queued.
- When a subscriber completes it, it marks the status as Completed / Failed.
There is an issue with the above logic: let's say RabbitMQ somehow goes down, or in some cases we have to purge the queue for maintenance. Now the messages marked as Queued are in the wrong state, because they have to be identified again and their status changed manually.
Let's say I have a RabbitMQ queue named events. This events queue has 5 subscribers; each subscriber gets 1 message from the queue and posts this event using a REST API to another microservice (event-aggregator). Each API call usually takes 50ms.
Use case:
- Due to high load, the number of events produced becomes 3x.
- The microservice (event-aggregator) which accepts the events also becomes slow in processing; the response time increases from 50ms to 1 min.
- The cron worker follows the design mentioned above and queues messages on every run. Now the queue is becoming too large, but I also cannot increase the number of subscribers because the dependent microservice (event-aggregator) is lagging.
Now the question is: if I keep sending messages to the events queue, it just bloats the queue.
https://www.rabbitmq.com/memory.html - while reading this page, I found out that RabbitMQ won't even accept connections once it reaches the high watermark fraction (default is 40%). Of course this can be changed, but that requires manual intervention.
So if the queue length increases it affects RabbitMQ's memory; that is the reason I thought of throttling at the producer level.
Questions:
- How can I throttle my cron worker to skip a particular run, or somehow inspect the queue and detect that it is already heavily loaded, so it doesn't push the messages?
- How can I handle the use cases I described above? Is there a design which solves my problem? Has anyone faced the same issue?
Thanks in advance.
Answer: check the accepted answer's comments for the throttling approach using queueCount.
...ANSWER
Answered 2022-Feb-21 at 04:45 You can combine QoS (Quality of Service) and manual ACKs to get around this problem. Your exact scenario is documented in https://www.rabbitmq.com/tutorials/tutorial-two-python.html. That example is for Python; you can refer to the other language examples as well.
Let's say you have 1 publisher and 5 worker scripts, all reading from the same queue, and each worker script takes 1 min to process a message. You can set QoS at the channel level. If you set it to 1, each worker script will be allocated only 1 message at a time, so we are processing 5 messages at a time. No new messages will be delivered until one of the 5 worker scripts sends a MANUAL ACK.
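A minimal pika sketch of that prefetch-plus-manual-ACK setup, roughly following the linked tutorial (the queue name and the sleep standing in for real work are placeholders):

```python
import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)

# QoS: deliver at most one unacknowledged message to this worker at a time.
channel.basic_qos(prefetch_count=1)

def handle(ch, method, properties, body):
    time.sleep(60)  # stand-in for the ~1 minute of real processing
    ch.basic_ack(delivery_tag=method.delivery_tag)  # manual ACK releases the next message

channel.basic_consume(queue="events", on_message_callback=handle)
channel.start_consuming()
```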
If you want to increase the throughput of message processing, you can increase the number of worker nodes.
The idea of updating tables based on message status is not a good option: DB polling is one of the main reasons systems use queues in the first place, and it would cause scaling issues. At some point you have to update the tables, and you would hit a bottleneck because of locking and isolation levels.
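For the throttling question, a sketch along the lines of the queueCount idea mentioned above: have the cron worker ask RabbitMQ for the current queue depth before publishing, and skip the run above some threshold (the queue name and threshold here are illustrative):

```python
import pika

MAX_BACKLOG = 5000  # illustrative threshold for "heavily loaded"

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# passive=True only inspects the queue (it raises if the queue doesn't exist).
state = channel.queue_declare(queue="events", passive=True)
backlog = state.method.message_count

if backlog >= MAX_BACKLOG:
    print(f"Skipping this run: {backlog} messages still pending")
else:
    print("Queue has headroom; scan the messages table and publish as usual")

connection.close()
```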
QUESTION
I'm quite familiar with pandas dataframes, but I'm very new to Dask, so I'm still trying to wrap my head around parallelizing my code. I've already obtained my desired results using pandas and pandarallel, so what I'm trying to figure out is whether I can scale up the task or speed it up somehow using Dask.
Let's say my dataframe has datetimes as non-unique indices, a values column and an id column.
...ANSWER
Answered 2021-Dec-16 at 07:42 The snippet below shows that the syntax is very similar:
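The answer's snippet isn't reproduced above; as a rough illustration of how similar the syntax stays, assuming a frame shaped as the question describes (the aggregation itself is just an example):

```python
import pandas as pd
import dask.dataframe as dd

# A small pandas frame shaped like the question describes: a datetime index
# with repeats, a "values" column and an "id" column.
pdf = pd.DataFrame(
    {"values": [1.0, 2.0, 3.0, 4.0], "id": ["a", "b", "a", "b"]},
    index=pd.to_datetime(["2021-01-01", "2021-01-01", "2021-01-02", "2021-01-02"]),
)

ddf = dd.from_pandas(pdf, npartitions=2)

# Same groupby/aggregation call as in pandas; only the trailing .compute() is new.
result = ddf.groupby("id")["values"].mean().compute()
print(result)
```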
QUESTION
I've been working in Linux for the last 12 years, worked with Windows and command lines before that, and have recently had to resurrect those batch-file skills for a little easy-to-use / easy-to-edit utility. However, I'm having some issues finding out how to build up a string variable with newline characters (the equivalent of Linux's echo -e "Line1\nLine2").
Basically, my utility asks the user three questions and checks the validity of the inputs. Each input has a slightly different error message if validation fails. I then check whether the errMsg variable contains anything and, if it does, list the collated error messages from the 3 validity checks. This all works perfectly except that the error messages are on one line, and I'd like to put each error on its own line. So I "merely" add newlines to the string ... and that's the crux of this question.
I've used this link as a reference point, and with a basic string the new lines appear as expected. However, when I use a variable, the new lines don't appear, and I was hoping someone could explain why.
I have the following code snippet:
...ANSWER
Answered 2022-Jan-21 at 22:37 Creating a newline variable is a good start, but you should use it in a different way.
Percent expansion doesn't work well with newlines in variables; it can be done, but it's quite complex.
Delayed expansion, on the other hand, works flawlessly with any characters.
QUESTION
Client (nuxt) is up on http://localhost:3000 and the client sends requests to http://localhost:8080.
Server (django) is running on 0.0.0.0:50051.
Docker is also up.
ANSWER
Answered 2021-Dec-20 at 15:29 I needed a proxy in front of the server to receive the requests, so I used Envoy proxy. This way, nginx receives the request from the browser and then sends it to a port (for example 5000). Envoy listens on port 5000 and then sends the request to the server running on port 50051.
This is how I designed the handling of a gRPC connection.
QUESTION
I'm using the GetX package for state management in Flutter. I'm trying to show data based on whether a condition is true or not, but I get this error: `type 'bool' is not a subtype of type 'RxBool' in type cast`.
Below is the code I'm trying to run. Thanks for the help. :)
HomeScreen ...ANSWER
Answered 2022-Jan-07 at 06:08 Try isStatusSuccess.isTrue to obtain the bool.
QUESTION
I have the following code that runs two TensorFlow trainings in parallel using Dask workers implemented in Docker containers.
I need to launch two processes, using the same dask client, where each will train their respective models with N workers.
To that end, I do the following:
- I use joblib.delayed to spawn the two processes.
- Within each process I run with joblib.parallel_backend('dask'): to execute the fit/training logic. Each training process triggers N dask workers.
The problem is that I don't know whether the entire process is thread safe; are there any concurrency elements that I'm missing?
...ANSWER
Answered 2021-Dec-24 at 05:12 This is pure speculation, but one potential concurrency issue is the if client is None: part, where two processes could race to create a Client.
If this is resolved (e.g. by explicitly creating a client in advance), then the dask scheduler will rely on time of submission to prioritize tasks (unless a priority is explicitly assigned) and also on the graph (DAG) structure; there are further details available in the docs.
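A hedged sketch of the "create the client in advance" suggestion, mirroring the question's joblib.delayed structure (the scheduler setup and the fit_model placeholder are assumptions, not the asker's code):

```python
from dask.distributed import Client
from joblib import Parallel, delayed, parallel_backend

def fit_model(name):
    # Placeholder for the real TensorFlow fit/training logic.
    return f"trained {name}"

def train(name, scheduler_address):
    # Connect this process to the already-running scheduler instead of lazily
    # creating one inside the training code (the "if client is None" hazard).
    client = Client(scheduler_address)
    with parallel_backend("dask"):
        return fit_model(name)

if __name__ == "__main__":
    # Create the cluster/client once, up front, so the two training processes
    # never race to construct their own.
    client = Client()
    results = Parallel(n_jobs=2)(
        delayed(train)(name, client.scheduler.address)
        for name in ("model_a", "model_b")
    )
    print(results)
```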
QUESTION
I am a beginner in C programming. In the following code there are two pthreads. I want one of them to be delayed by an amount of the user's choice after the two pthreads are synchronized, and I want this delay to be as accurate as possible. In the following code I have attempted this, but the exact amount of delay does not occur.
I also have another question: how can I force a pthread to run a certain part of the program from start to finish without interruption?
Thank you in advance.
code:
...ANSWER
Answered 2021-Dec-16 at 13:54 One way that may increase accuracy is to busy-wait instead of sleeping.
I've made a function called mysleep that takes a struct timespec* containing the requested sleep time. It checks the current time, adds the requested sleep time to it, and then just spins until the current time >= the target point in time.
Note though: it's not guaranteed to stay within any accuracy. It will often be rather OK, but sometimes, when the OS puts the thread on hold, you'll see spikes in the measured time. If you are unlucky, the calibration will have one of these spikes in it and then all your sleeps will be totally off. You can run the calibration routine 100 times and pick the median value to make that unfortunate circumstance very unlikely.
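The answer's mysleep is written in C; purely to illustrate the busy-wait structure it describes, here is the same idea sketched in Python (a conceptual sketch, not a substitute for the C version):

```python
import time

def busy_sleep(duration_s: float) -> None:
    # Compute the target instant up front, then spin until the monotonic
    # clock reaches it, trading CPU time for a (usually) tighter wake-up.
    target = time.perf_counter() + duration_s
    while time.perf_counter() < target:
        pass

start = time.perf_counter()
busy_sleep(0.005)
print(f"slept for {time.perf_counter() - start:.6f} s")
```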
QUESTION
ANSWER
Answered 2021-Nov-22 at 21:20 I could not reproduce your code as some datasets are missing.
Below is an example with various methods to add header rows.
QUESTION
I have read lots of posts about using Python gettext, but none of them address the issue of changing languages at runtime.
Using gettext, strings are translated by the function _(), which is added globally to builtins. The definition of _ is language-specific and will change during execution when the language setting changes. At certain points in the code, I need the strings in an object to be translated to a certain language. This happens by:
1. (Re)defining the _ function in builtins to translate to the chosen language.
2. (Re)evaluating the desired object using the new _ function, guaranteeing that any calls to _ within the object definition are evaluated using the current definition of _.
3. Returning the object.
I am wondering about different approaches to step 2. I thought of several, but they all seem to have fundamental flaws.
- What is the best way to achieve step 2 in practice?
- Is it theoretically possible to achieve step 2 for any arbitrary object, without knowledge of its implementation?
If all translated text is defined in functions that can be called in step 2, then it's straightforward: calling the function will evaluate it using the current definition of _. But there are lots of situations where that's not the case; for instance, translated strings could be module-level variables evaluated at import time, or attributes evaluated when instantiating an object.
Minimal example of this problem with module-level variables is here.
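The linked example isn't reproduced here; a runnable sketch of the same pitfall (using a class attribute rather than a separate module, with a placeholder domain and locale directory) might look like this:

```python
import builtins
import gettext

# Install a translator for the starting language. With fallback=True and no
# compiled .mo files, _() just passes strings through, which is enough to
# show the caching problem.
builtins._ = gettext.translation("app", localedir="locale",
                                 languages=["en"], fallback=True).gettext

class Banner:
    title = _("Hello")   # evaluated once, when the class body runs

# Later, the language setting changes...
builtins._ = gettext.translation("app", localedir="locale",
                                 languages=["fr"], fallback=True).gettext

# ...but the already-evaluated attribute is not re-translated.
print(Banner.title)
```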
Re-evaluation: manually reload modules
Module-level variables can be re-evaluated at the desired time using importlib.reload. This gets more complicated if the module imports another module that also has translated strings: you have to reload every module that is a (nested) dependency.
With knowledge of the module's implementation, you can manually reload the dependencies in the right order: if A imports B, reload B before reloading A, as sketched below.
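A small helper sketching that reload order; the caller passes the modules leaf-first (e.g. refresh_translations(B, A) when A imports B):

```python
import importlib

def refresh_translations(*modules):
    # Reload dependencies before their dependents so that each dependent's
    # re-executed module body sees the freshly re-translated strings.
    for module in modules:
        importlib.reload(module)
```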
...ANSWER
Answered 2021-Nov-10 at 03:49 The only plausible, general approach is to rewrite all relevant code to not only use _ to request translation but to never cache the result. That's not a fun idea and it's not a new idea (you already list Refactoring and Deferred translation, which rely on the cooperation of the gettext clients), but it is the "best way […] in practice".
You can try to do a super-reload by removing many things from sys.modules and then doing a real reimport. This approach avoids understanding the import relationships, but works only if the relevant modules are all written in Python and you can guarantee that the state of your program will retain no references to any objects (including types and modules) that used the old language. (I've done this, but only in a context where the overarching program was a sort of supervisor utterly uninterested in the features of the discarded modules.)
You can try to walk the whole object graph and replace the strings, but even aside from the intrinsic technical difficulty of such an algorithm (consider __slots__ in base classes and co_consts for just the mildest taste), it would involve untranslating them, which changes from hard to impossible when some sort of transformation has already been performed. That transformation might just be concatenating the translated strings, or it might be pre-substituting known values to format, or padding the string, or storing a hash of it: it's certainly undecidable in general. (I've done this too for other data types, but only with data constructed by a file reader whose output used known, simple structures.)
Any approach based on partial reevaluation combines the problems of the methods above.
The only other possible approach is a super-LazyString that refuses to translate for longer by implementing operations like + to return objects that encode the transformations to eventually apply, but it's impossible to know when to force those operations unless you control all mechanisms used to display or transmit strings. It's also impossible to defer past, say, if len(_("…"))>80:.
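A very rough sketch of the lazy-string idea (not the answer's design, and deliberately minimal: only str() and + are deferred here):

```python
import gettext

class LazyString:
    # Defers the gettext lookup until the value is rendered with str(), so it
    # reflects whichever language is active at that moment.
    def __init__(self, render):
        self._render = render  # zero-argument callable producing the final text

    def __str__(self):
        return self._render()

    def __add__(self, other):
        # Keep deferring: record the concatenation instead of translating now.
        return LazyString(lambda: str(self) + str(other))

def lazy_gettext(message):
    return LazyString(lambda: gettext.gettext(message))

greeting = lazy_gettext("Hello") + ", world"
print(greeting)   # translation (or pass-through) happens only here
```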
QUESTION
I have a gradient exploding problem which I couldn't solve after trying for several days. I implemented a custom message passing graph neural network in TensorFlow which is used to predict a continuous value from graph data. Each graph is associated with one target value. Each node of a graph is represented by a node attribute vector, and the edges between nodes are represented by an edge attribute vector.
Within a message passing layer, node attributes are updated in a certain way (e.g., by aggregating other node/edge attributes), and these updated node attributes are returned.
Now, I managed to figure out where the gradient problem occurs in my code. I have the below snippet.
...ANSWER
Answered 2021-Oct-29 at 16:33 Looks great; you have already followed most of the solutions for resolving the exploding gradient problem. Below is a list of all the solutions you can try.
Solutions to avoid the exploding gradient problem:
- Appropriate weight initialization: use a weight initialization suited to the activation function:
  - He initialization: ReLU & variants
  - LeCun initialization: SELU
  - Glorot initialization: Softmax, Logistic, None, Tanh
- Redesign your neural network: use fewer layers and/or a smaller batch size.
- Choose a non-saturating activation function, with reduced learning rates:
  - ReLU
  - Leaky ReLU
  - Randomized leaky ReLU (RReLU)
  - Parametric leaky ReLU (PReLU)
  - Exponential linear unit (ELU)
- Batch normalisation: ideally use batch normalisation before/after each layer, based on what works best for your dataset (see the referenced paper).
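As a hedged illustration only (layer sizes, learning rate, and input shape are placeholders, not values from the question), a small Keras model combining He initialization, a non-saturating activation (ELU) and batch normalisation might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

# Illustrative only: He initialization paired with ELU, batch normalisation
# after each dense layer, and a reduced learning rate.
model = models.Sequential([
    layers.Input(shape=(64,)),
    layers.Dense(128, activation="elu", kernel_initializer="he_normal"),
    layers.BatchNormalization(),
    layers.Dense(64, activation="elu", kernel_initializer="he_normal"),
    layers.BatchNormalization(),
    layers.Dense(1),
])
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4), loss="mse")
model.summary()
```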
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install delayed
Add the following to your Gemfile. Then run bundle install.