delayed | ActiveJob backend used at Betterment | Job Scheduling library

by Betterment | Ruby | Version: v0.4.0 | License: MIT

kandi X-RAY | delayed Summary

delayed is a Ruby library typically used in Data Processing and Job Scheduling applications. It has no reported bugs or vulnerabilities, carries a permissive license, and has low support activity. You can download it from GitHub.

It supports Postgres, MySQL, and SQLite.

            kandi-support Support

              delayed has a low-activity ecosystem.
              It has 46 stars and 3 forks. There are 20 watchers for this library.
              It had no major release in the last 12 months.
              There are 2 open issues and 4 closed issues. On average, issues are closed in 2 days. There are 2 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of delayed is v0.4.0.

            kandi-Quality Quality

              delayed has 0 bugs and 0 code smells.

            kandi-Security Security

              delayed has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              delayed code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              delayed is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              delayed releases are available to install and integrate.
              Installation instructions, examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed delayed and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality delayed implements, and to help you decide whether it suits your requirements.
            • Handle an asynchronous operation
            • Resume a worker thread
            • Run a job
            • Start the process
            • Add a callback to the callback
            • Reserve the job
            • Create a new Worker instance
            • Resume the job to be executed
            • Run a job
            • Return the name of the process

            delayed Key Features

            No Key Features are available at this moment for delayed.

            delayed Examples and Code Snippets

            No Code Snippets are available at this moment for delayed.

            Community Discussions

            QUESTION

            How to throttle my cron worker from pushing messages to RabbitMQ?
            Asked 2022-Feb-21 at 09:22
            Context:

            We have a microservice which consumes (subscribes to) messages from 50+ RabbitMQ queues.

            Messages are produced for these queues in two places:

            1. When the application encounters short, delayed-execution business logic (like sending emails or notifying another service), it sends the message directly to the exchange (which in turn routes it to the queue).

            2. When we encounter long/delayed-execution business logic, we have a messages table with entries for messages that have to be executed after some time.

            A cron worker runs every 10 minutes, scans the messages table, and pushes the due messages to RabbitMQ.

            Scenario:

            Let's say the messages table has 10,000 messages which will be queued in the next cron run.

            1. 9:00 AM - The cron worker runs and queues 10,000 messages to the RabbitMQ queue.
            2. Subscribers listening to the queue start consuming the messages, but due to some issue in the system or a third-party response-time delay, each message takes 1 minute to complete.
            3. 9:10 AM - The cron worker runs again after 10 minutes, sees that 9,000+ messages are still not completed and that the time has passed, so it pushes 9,000+ duplicate messages to the queue.

            Note: The subscribers which consume the messages are idempotent, so duplicate processing is not an issue.

            Design idea I had in mind (but not the best logic)

            I can have 4 statuses (RequiresQueuing, Queued, Completed, Failed):

            1. Whenever a message is inserted, I can set the status to RequiresQueuing.
            2. When the cron worker picks up the messages and pushes them to the queue successfully, I can set the status to Queued.
            3. When a subscriber completes, it marks the status as Completed / Failed.

            There is an issue with the above logic: let's say RabbitMQ somehow goes down, or in some cases we have to purge the queue for maintenance.

            Now the messages marked as Queued are in the wrong state, because they have to be identified once again and their status changed manually.

            Another Example

            Let's say I have a RabbitMQ queue named (events).

            This events queue has 5 subscribers; each subscriber gets 1 message from the queue and posts the event via a REST API to another microservice (event-aggregator). Each API call usually takes 50 ms.

            Use Case:

            1. Due to high load, the number of events produced becomes 3x.
            2. The microservice (event-aggregator) which accepts the events also becomes slow at processing; its response time increases from 50 ms to 1 minute.
            3. The cron worker follows the design mentioned above and queues the messages every run. Now the queue is growing too large, but I also cannot increase the number of subscribers because the dependent microservice (event-aggregator) is lagging.

            Now the question is: if I keep sending messages to the events queue, it just bloats the queue.

            https://www.rabbitmq.com/memory.html - While reading this page, I found out that RabbitMQ won't even accept connections once it reaches the high memory watermark fraction (default is 40%). Of course this can be changed, but it requires manual intervention.

            So if the queue length increases, it affects RabbitMQ's memory; that is the reason I thought of throttling at the producer level.

            Questions
            1. How can I throttle my cron worker to skip a particular run, or somehow inspect the queue and identify that it is already heavily loaded so it doesn't push more messages?
            2. How can I handle the use cases described above? Is there a design which solves my problem? Has anyone faced the same issue?

            Thanks in advance.

            Answer

            Check the accepted answer's comments for the throttling approach using queueCount.
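            A minimal sketch of that queueCount idea, assuming a Python producer using the pika client (the broker host, queue name, and threshold below are illustrative, not from the original thread): the cron worker checks the queue's backlog with a passive declare and skips the run when it is already heavily loaded.

```python
import pika

MAX_BACKLOG = 1000  # illustrative threshold; tune for your setup

# Hypothetical connection details.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A passive declare returns the queue's current message count
# without creating or modifying the queue.
declare_ok = channel.queue_declare(queue="events", passive=True)
backlog = declare_ok.method.message_count

if backlog >= MAX_BACKLOG:
    print(f"{backlog} messages still pending; skipping this cron run")
else:
    # ...scan the messages table and publish the due messages here...
    channel.basic_publish(exchange="", routing_key="events", body=b"example")

connection.close()
```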

            ...

            ANSWER

            Answered 2022-Feb-21 at 04:45

            You can combine QoS (Quality of Service) and manual ACK to get around this problem. Your exact scenario is documented in https://www.rabbitmq.com/tutorials/tutorial-two-python.html. This example is for Python; you can refer to the other language examples as well.

            Let's say you have 1 publisher and 5 worker scripts, all reading from the same queue, and each worker script takes 1 minute to process a message. You can set QoS at the channel level. If you set it to 1, each worker script will be allocated only 1 message at a time, so we are processing 5 messages at a time. No new messages will be delivered until one of the 5 worker scripts does a manual ACK.
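            A hedged sketch of such a worker script using the pika client, following the prefetch pattern from the linked tutorial (queue name and host are illustrative):

```python
import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)

# QoS: give this worker at most 1 unacknowledged message at a time.
channel.basic_qos(prefetch_count=1)

def handle(ch, method, properties, body):
    time.sleep(60)  # stand-in for the slow processing / third-party call
    ch.basic_ack(delivery_tag=method.delivery_tag)  # manual ACK when done

channel.basic_consume(queue="events", on_message_callback=handle, auto_ack=False)
channel.start_consuming()
```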

            If you want to increase the throughput of message processing, you can increase the worker node count.

            The idea of updating the tables based on message status is not a good option; DB polling is the main thing queues are meant to replace, and it would cause a scaling issue. At some point you have to update the tables, and you would hit a bottleneck because of locking and isolation levels.

            Source https://stackoverflow.com/questions/71186974

            QUESTION

            Extracting latest values in a Dask dataframe with non-unique index column dates
            Asked 2022-Jan-24 at 23:36

            I'm quite familiar with pandas dataframes but I'm very new to Dask so I'm still trying to wrap my head around parallelizing my code. I've obtained my desired results using pandas and pandarallel already so what I'm trying to figure out is if I can scale up the task or speed it up somehow using Dask.

            Let's say my dataframe has datetimes as non-unique indices, a values column and an id column.

            ...

            ANSWER

            Answered 2021-Dec-16 at 07:42

            The snippet below shows that it's a very similar syntax:
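            The answer's snippet isn't included in this excerpt; as a rough, hypothetical illustration of how similar the pandas and Dask syntax can be (using the `id` and `values` column names from the question), extracting the latest value per id might look like this:

```python
import pandas as pd
import dask.dataframe as dd

# Toy frame shaped like the question: non-unique datetime index, values, id.
df = pd.DataFrame(
    {"values": [1.0, 2.0, 3.0, 4.0], "id": ["a", "b", "a", "b"]},
    index=pd.to_datetime(["2021-01-01", "2021-01-01", "2021-01-02", "2021-01-02"]),
)

# pandas: latest value per id (assuming the frame is sorted by time).
latest_pd = df.sort_index().groupby("id")["values"].last()

# Dask: nearly identical syntax, plus .compute() to materialize the result.
ddf = dd.from_pandas(df.sort_index(), npartitions=2)
latest_dd = ddf.groupby("id")["values"].last().compute()

print(latest_pd.equals(latest_dd.sort_index()))
```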

            Source https://stackoverflow.com/questions/70374896

            QUESTION

            Appending newlines to a variable to be "echo"ed out
            Asked 2022-Jan-22 at 00:48

            I've been working in Linux for the last 12 years, worked with Windows and command lines before that, and have had to recently resurrect those batch-file skills for a little easy-to-use / easy-to-edit utility. However, I'm having some issues finding out how to build up a string variable with newline characters (the equivalent of Linux's echo -e "Line1\nLine2").

            Basically my utility asks three questions of a user and checks the validity of the inputs. Each input has a slightly different "error message" if the validity check fails. I then check whether the errMsg variable contains anything and, if it does, list the collated error messages from the 3 validity checks. This all works perfectly except that the error messages are on one line and I'd like to put each error on its own line. So I "merely" add newlines to the string ... and that's the crux of this question.

            I've used this link as a reference point and with a basic string, the new lines appear as expected. However, when I use a variable, the new lines don't appear and I was hoping someone could explain to me why.

            I have the following code snippet

            ...

            ANSWER

            Answered 2022-Jan-21 at 22:37

            Creating a newline variable is a good start, but you should use it in a different way.
            Percent expansion doesn't work well with newlines in variables; it can be done, but it's quite complex.

            Delayed expansion, however, works flawlessly with any characters.

            Source https://stackoverflow.com/questions/70808216

            QUESTION

            Envoy proxy: 503 Service Unavailable
            Asked 2022-Jan-09 at 02:24
            State of Services:

            Client (nuxt) is up on http://localhost:3000 and the client sends requests to http://localhost:8080.

            Server (django) is running on 0.0.0.0:50051.

            Docker is also up.

            ...

            ANSWER

            Answered 2021-Dec-20 at 15:29
            Please see my long answer here; I resolved this issue. Short answer:

            I needed a proxy to receive requests from the server, so I used Envoy proxy. In this way, nginx receives the request from the browser and then sends it to a port (for example 5000). Envoy listens on port 5000 and then sends the request to the server running on port 50051.

            This is how I designed the tracking of a gRPC connection.

            Source https://stackoverflow.com/questions/70409706

            QUESTION

            Flutter: type 'bool' is not a subtype of type 'RxBool' in type cast
            Asked 2022-Jan-07 at 06:26

            I'm using the GetX package for state management in Flutter. I'm trying to show data based on whether a condition is true or not, but I get this error which says `type 'bool' is not a subtype of type 'RxBool' in type cast`.

            Below is the code I'm working with. Thanks for the help. :)

            HomeScreen ...

            ANSWER

            Answered 2022-Jan-07 at 06:08

            Try isStatusSuccess.isTrue to obtain the bool.

            Source https://stackoverflow.com/questions/68623020

            QUESTION

            Running two Tensorflow trainings in parallel using joblib and dask
            Asked 2021-Dec-28 at 15:50

            I have the following code that runs two TensorFlow trainings in parallel using Dask workers implemented in Docker containers.

            I need to launch two processes, using the same dask client, where each will train their respective models with N workers.

            To that end, I do the following:

            • I use joblib.delayed to spawn the two processes.
            • Within each process I run with joblib.parallel_backend('dask'): to execute the fit/training logic. Each training process triggers N dask workers.

            The problem is that I don't know if the entire process is thread-safe; are there any concurrency elements that I'm missing?

            ...

            ANSWER

            Answered 2021-Dec-24 at 05:12

            This is pure speculation, but one potential concurrency issue is the `if client is None:` part, where two processes could race to create a Client.

            If this is resolved (e.g. by explicitly creating a client in advance), then the dask scheduler will rely on the time of submission to prioritize tasks (unless a priority is explicitly assigned) and also on the graph (DAG) structure; further details are available in the docs.
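            A rough sketch of the "create the client explicitly in advance" suggestion, using joblib's dask backend (the scheduler address and model names are hypothetical; threads are used so both trainings can share the one client):

```python
import joblib
from dask.distributed import Client

# Create the client once, up front, so the two trainings never race
# through an `if client is None:` check.
client = Client("tcp://scheduler:8786")  # hypothetical scheduler address

def train(model_name):
    # Run the fit/training logic on the Dask workers via joblib's dask backend.
    with joblib.parallel_backend("dask"):
        pass  # model.fit(...) for `model_name` would go here

# Launch the two trainings; each reuses the already-created client.
joblib.Parallel(n_jobs=2, prefer="threads")(
    joblib.delayed(train)(name) for name in ("model_a", "model_b")
)
```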

            Source https://stackoverflow.com/questions/70465064

            QUESTION

            How to create accurate nanosecond delay in a pthread and how to run a pthread part of the program without interruption?
            Asked 2021-Dec-16 at 13:54

            I am a beginner in C programming. In the following code, we have two pthreads. I want one of them to be delayed at the user's choice after the two pthreads are synchronized. I want this delay to be as accurate as possible. In the following code I have done this but the exact amount of delay does not occur.

            But I also have another question: how can I force a pthread to run a certain part of the program from start to finish without interruption?

            Thank you in advance.

            code:

            ...

            ANSWER

            Answered 2021-Dec-16 at 13:54

            One way that may increase accuracy is to busy-wait instead of sleeping.

            I've made a function called mysleep that takes a struct timespec* containing the requested sleep time. It checks the current time and adds the requested sleep time to that - and then just spins until the current time >= the target point in time.

            Note though: It's not guaranteed to stay within any accuracy. It will often be rather ok, but sometimes when the OS puts the thread on hold, you'll see spikes in the measured time. If you are unlucky, the calibration will have one of these spikes in it and then all your sleeps will be totally off. You can run the calibration routine 100 times and then pick the median value to make that unfortunate circumstance very unlikely.
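            The same busy-wait idea, sketched here in Python for illustration (the original answer builds it in C around a struct timespec and repeated clock reads):

```python
import time

def busy_sleep_ns(delay_ns: int) -> None:
    """Spin until the requested number of nanoseconds has elapsed.

    Burns a CPU core instead of yielding to the scheduler, which usually
    gives tighter timing than time.sleep(), but still can't guarantee
    accuracy if the OS preempts the thread.
    """
    target = time.perf_counter_ns() + delay_ns
    while time.perf_counter_ns() < target:
        pass  # busy-wait

busy_sleep_ns(500_000)  # spin for roughly 500 microseconds
```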

            Source https://stackoverflow.com/questions/70378227

            QUESTION

            Reproduce a complex table with double headers
            Asked 2021-Nov-23 at 06:57

            I would like to create the following table

            Where

            ...

            ANSWER

            Answered 2021-Nov-22 at 21:20

            I could not reproduce your code as some datasets are missing.

            Below is an example with various methods for adding header rows.

            Source https://stackoverflow.com/questions/70059022

            QUESTION

            Approaches to changing language at runtime with python gettext
            Asked 2021-Nov-10 at 03:49

            I have read lots of posts about using Python gettext, but none of them addressed the issue of changing languages at runtime.

            Using gettext, strings are translated by the function _() which is added globally to builtins. The definition of _ is language-specific and will change during execution when the language setting changes. At certain points in the code, I need strings in an object to be translated to a certain language. This happens by:

            1. (Re)define the _ function in builtins to translate to the chosen language
            2. (Re)evaluate the desired object using the new _ function - guaranteeing that any calls to _ within the object definition are evaluated using the current definition of _.
            3. Return the object

            I am wondering about different approaches to step 2. I thought of several but they all seem to have fundamental flaws.

            • What is the best way to achieve step 2 in practice?
            • Is it theoretically possible to achieve step 2 for any arbitrary object, without knowledge of its implementation?
            Possible approaches

            If all translated text is defined in functions that can be called in step 2, then it's straightforward: calling the function will evaluate using the current definition of _. But there are lots of situations where that's not the case, for instance, translated strings could be module-level variables evaluated at import time, or attributes evaluated when instantiating an object.

            Minimal example of this problem with module-level variables is here.
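            The linked minimal example isn't included in this excerpt; a hypothetical version of the same problem looks like this:

```python
import gettext

_ = gettext.gettext  # identity translation until a language is installed

# Evaluated exactly once, at import time. Re-binding _ later (e.g. after
# switching languages) will NOT re-translate this value; the module would
# have to be re-evaluated.
GREETING = _("Hello, world")
```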

            Re-evaluation

            Manually reload modules

            Module-level variables can be re-evaluated at the desired time using importlib.reload. This gets more complicated if the module imports another module that also has translated strings. You have to reload every module that's a (nested) dependency.

            With knowledge of the module's implementation, you can manually reload the dependencies in the right order: if A imports B,

            ...

            ANSWER

            Answered 2021-Nov-10 at 03:49

            The only plausible, general approach is to rewrite all relevant code to not only use _ to request translation but to never cache the result. That’s not a fun idea and it’s not a new idea—you already list Refactoring and Deferred translation that rely on the cooperation of the gettext clients—but it is the “best way […] in practice”.

            You can try to do a super-reload by removing many things from sys.modules and then doing a real reimport. This approach avoids understanding the import relationships, but works only if the relevant modules are all written in Python and you can guarantee that the state of your program will retain no references to any objects (including types and modules) that used the old language. (I’ve done this, but only in a context where the overarching program was a sort of supervisor utterly uninterested in the features of the discarded modules.)
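            A minimal sketch of that super-reload, assuming a hypothetical top-level package name ("myapp"):

```python
import importlib
import sys

def super_reload(package_prefix: str):
    """Drop every loaded module under `package_prefix`, then re-import it.

    Only safe if nothing in the running program still holds references to
    objects (classes, functions, module globals) from the old modules.
    """
    stale = [name for name in sys.modules
             if name == package_prefix or name.startswith(package_prefix + ".")]
    for name in stale:
        del sys.modules[name]
    return importlib.import_module(package_prefix)

# After switching the language and re-binding builtins._:
# myapp = super_reload("myapp")
```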

            You can try to walk the whole object graph and replace the strings, but even aside from the intrinsic technical difficulty of such an algorithm (consider __slots__ in base classes and co_consts for just the mildest taste), it would involve untranslating them, which changes from hard to impossible when some sort of transformation has already been performed. That transformation might just be concatenating the translated strings, or it might be pre-substituting known values to format, or padding the string, or storing a hash of it: it’s certainly undecidable in general. (I’ve done this too for other data types, but only with data constructed by a file reader whose output used known, simple structures.)

            Any approach based on partial reevaluation combines the problems of the methods above.

            The only other possible approach is a super-LazyString that refuses to translate for longer by implementing operations like + to return objects that encode the transformations to eventually apply, but it’s impossible to know when to force those operations unless you control all mechanisms used to display or transmit strings. It’s also impossible to defer past, say, if len(_("…"))>80:.
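            A toy sketch of such a lazy string, just to make the idea concrete (as the paragraph notes, knowing when to force the translation is the unsolved part):

```python
import gettext

_ = gettext.gettext  # stand-in; real code would re-bind this per language

class LazyString:
    """Defers the gettext lookup until the string is actually rendered."""

    def __init__(self, render):
        self._render = render  # zero-argument callable producing the text

    def __str__(self):
        return self._render()  # uses whatever _ refers to at render time

    def __add__(self, other):
        # Encode the concatenation instead of performing it eagerly.
        return LazyString(lambda: str(self) + str(other))

title = LazyString(lambda: _("Report")) + ": 2021"
print(str(title))  # forces the deferred operations with the current language
```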

            Source https://stackoverflow.com/questions/69906944

            QUESTION

            Gradient exploding problem in a graph neural network
            Asked 2021-Oct-29 at 16:33

            I have a gradient exploding problem which I couldn't solve after trying for several days. I implemented a custom message passing graph neural network in TensorFlow which is used to predict a continuous value from graph data. Each graph is associated with one target value. Each node of a graph is represented by a node attribute vector, and the edges between nodes are represented by an edge attribute vector.

            Within a message passing layer, node attributes are updated in a certain way (e.g., by aggregating other node/edge attributes), and these updated node attributes are returned.

            Now, I managed to figure out where the gradient problem occurs in my code. I have the below snippet.

            ...

            ANSWER

            Answered 2021-Oct-29 at 16:33

            It looks like you have already tried most of the solutions for resolving the gradient exploding problem. Below is a list of all the solutions you can try (a short sketch of points 1 and 4 follows after the list).

            Solutions to avoid the gradient exploding problem

            1. Appropriate weight initialization: use an appropriate weight initialization based on the activation function used.

              Initialization: Activation function
              He: ReLU & variants
              LeCun: SELU
              Glorot: Softmax, Logistic, None, Tanh

            2. Redesigning your neural network: use fewer layers in the neural network and/or use a smaller batch size.

            3. Choosing a non-saturating activation function: choose the right activation function and use reduced learning rates, e.g.:

              • ReLU
              • Leaky ReLU
              • randomized leaky ReLU (RReLU)
              • parametric leaky ReLU (PReLU)
              • exponential linear unit (ELU)

            4. Batch normalisation: ideally use batch normalisation before/after each layer, based on what works best for your dataset.
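            A minimal, hypothetical Keras sketch of points 1 and 4 (He initialization plus batch normalisation); the question's custom message-passing layers are not reproduced here:

```python
import tensorflow as tf

# Dense block using He initialization (suited to ReLU) followed by batch norm.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_initializer="he_normal"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(1),  # continuous target, as in the question
])

# A reduced learning rate (point 3) also helps keep gradients in check.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")
```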

            Source https://stackoverflow.com/questions/69427103

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install delayed

            This gem is designed to work with Rails 5.2+ and Ruby 2.6+ on postgres 9.5+ or mysql 5.6+.
            Add the gem to your Gemfile, then run bundle install.

            Support

            We would love for you to contribute! Anything that benefits the majority of users, from a documentation fix to an entirely new feature, is encouraged. Before diving in, check our issue tracker (https://github.com/Betterment/delayed/issues) and consider creating a new issue to get early feedback on your proposed change.
            Find more information at:


            Try Top Libraries by Betterment

            • test_track (Ruby)
            • webvalve (Ruby)
            • backbone.blazer (JavaScript)
            • DaggerStarter (Java)
            • test_track_rails_client (Ruby)