cron | Parse and validate crontab expressions in PHP | Cron Utils library
kandi X-RAY | cron Summary
Standard (V7) compliant crontab expression parser/validator with support for time zones; see "man 5 crontab" for possible expressions.
Community Discussions
Trending Discussions on cron
QUESTION
I am trying to schedule a data-quality monitoring job in AWS SageMaker by following the steps mentioned in this AWS documentation page. I have enabled data capture for my endpoint, then trained a baseline on my training CSV file; the statistics and constraints are available in S3 like this:
...ANSWER
Answered 2022-Feb-26 at 04:38
This happens during the ground-truth-merge job when Spark can't find any data in either '/opt/ml/processing/groundtruth/' or '/opt/ml/processing/input_data/'. That can happen when you either haven't sent any requests to the SageMaker endpoint or have no ground truths.
I got this error because the folder /opt/ml/processing/input_data/ of the Docker volume mapped to the monitoring container had no data to process. That happened because the process that fetches captured data couldn't find any in S3, which in turn happened because there was an extra slash (/) in the directory to which the endpoint's captured data is saved. To elaborate: while creating the endpoint, I had specified the directory as s3://// when it should have just been s3:///. So when that hour's data was copied from S3 to the Docker volume, the directory it tried to read from was s3:////////// (notice the two slashes). When I recreated the endpoint configuration with the extra slash removed from the S3 directory, the error was gone and the ground-truth-merge operation succeeded as part of model-quality monitoring.
I am answering my own question because someone upvoted it, which means someone else has faced this problem too, so I have written down what worked for me.
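For reference, the capture destination is fixed when the endpoint configuration is created. Below is a minimal boto3 sketch of that call; the bucket, prefix, and resource names are hypothetical (not taken from the question), and the point is simply that DestinationS3Uri carries no trailing slash:

    import boto3

    sm = boto3.client("sagemaker")

    sm.create_endpoint_config(
        EndpointConfigName="my-endpoint-config",   # hypothetical name
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": "my-model",               # hypothetical model
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.large",
        }],
        DataCaptureConfig={
            "EnableCapture": True,
            "InitialSamplingPercentage": 100,
            "DestinationS3Uri": "s3://my-bucket/datacapture",  # no trailing slash
            "CaptureOptions": [{"CaptureMode": "Input"}, {"CaptureMode": "Output"}],
        },
    )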
QUESTION
We have a microservice which consumes (subscribes to) messages from 50+ RabbitMQ queues.
Producing messages for these queues happens in two places:
- When the application encounters short-delay business logic (like sending emails or notifying another service), it sends the message directly to the exchange (which in turn routes it to the queue).
- When we encounter long/delayed-execution business logic, we have a messages table holding entries for messages that must be executed after some time. A cron worker runs every 10 minutes, scans the messages table, and pushes the messages to RabbitMQ.
Let's say the messages table has 10,000 messages which will be queued in the next cron run:
- 9:00 AM - The cron worker runs and queues 10,000 messages to the RabbitMQ queue.
- Subscribers listening to the queue start consuming the messages, but due to some issue in the system or a third-party response-time delay, each message takes 1 minute to complete.
- 9:10 AM - The cron worker runs again, sees that 9,000+ messages are still not completed even though their time has passed, so it pushes 9,000+ duplicate messages to the queue.
Note: the subscribers which consume the messages are idempotent, so duplicate processing is not itself a problem.
Design idea I had in mind, though not the best logic: I can have 4 statuses (RequiresQueuing, Queued, Completed, Failed).
- Whenever a message is inserted, I set its status to RequiresQueuing.
- When the cron worker picks it up and pushes it to the queue successfully, I set it to Queued.
- When a subscriber completes it, it marks the status as Completed / Failed.
There is an issue with the above logic: suppose RabbitMQ goes down, or in some cases we purge the queue for maintenance. The messages marked as Queued are now in the wrong state, because they have to be identified again and their status changed manually.
Let's say I have a RabbitMQ queue named events. This events queue has 5 subscribers; each subscriber gets 1 message from the queue and posts the event via a REST API to another microservice (event-aggregator). Each API call usually takes 50ms.
Use case:
- Due to high load, the number of events produced becomes 3x.
- The microservice (event-aggregator) that accepts the events also becomes slow in processing; its response time increases from 50ms to 1 minute.
- The cron worker follows the design mentioned above and queues messages every run. The queue keeps growing, but I also cannot increase the number of subscribers, because the dependent microservice (event-aggregator) is lagging.
Now the question: if I keep sending messages to the events queue, it just bloats the queue.
https://www.rabbitmq.com/memory.html - while reading this page, I found out that RabbitMQ won't even accept connections once it reaches the high-watermark fraction (default 40%). Of course this can be changed, but that requires manual intervention.
So if the queue length increases, it affects RabbitMQ's memory; that is why I thought of throttling at the producer level.
Questions:
- How can I throttle my cron worker to skip a particular run, or inspect the queue and detect that it is already heavily loaded, so it doesn't push more messages?
- How can I handle the use case above? Is there a design which solves my problem? Has anyone faced the same issue?
Thanks in advance.
Edit: see the comments on the accepted answer for throttling based on the queue count.
...ANSWER
Answered 2022-Feb-21 at 04:45
You can combine QoS (quality of service) prefetch limits with manual acks to get around this problem. Your exact scenario is documented in https://www.rabbitmq.com/tutorials/tutorial-two-python.html. That example is for Python; you can refer to the other language examples as well.
Let's say you have 1 publisher and 5 worker scripts, all reading from the same queue, and each worker script takes 1 minute to process a message. You can set QoS at the channel level. If you set the prefetch count to 1, each worker script will be allocated only 1 message at a time, so we are processing 5 messages at a time. No new messages will be delivered until one of the 5 worker scripts sends a manual ack.
If you want to increase the throughput of message processing, you can increase the worker node count.
Updating tables based on message status is not a good option: avoiding DB polling is the main reason systems use queues, and it would cause a scaling issue. At some point you have to update the tables, and you would bottleneck because of locking and isolation levels.
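A minimal pika sketch of the pattern this answer describes, under stated assumptions: the queue name events comes from the question, while the broker host and the backlog threshold are hypothetical. The first function shows prefetch plus manual ack on the worker side; the helper at the end shows how a cron producer could read the queue depth and skip a run:

    import pika

    def run_worker():
        """Consumer sketch: QoS prefetch + manual ack, as in the linked tutorial."""
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.queue_declare(queue="events", durable=True)
        channel.basic_qos(prefetch_count=1)  # at most one unacked message per worker

        def handle(ch, method, properties, body):
            # ... slow work here, e.g. POST the event to event-aggregator ...
            ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only when done

        channel.basic_consume(queue="events", on_message_callback=handle)
        channel.start_consuming()

    def backlog(channel):
        """Producer-side sketch: passively declare the queue to read its depth."""
        declared = channel.queue_declare(queue="events", passive=True)
        return declared.method.message_count  # cron worker can skip the run if high

With prefetch_count=1 the broker holds undelivered messages itself rather than flooding consumers, and the backlog check gives the cron worker the signal it needs to throttle at the producer level.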
QUESTION
I had an R script configured to run via cron every day at 10am.
...ANSWER
Answered 2022-Feb-08 at 20:25
You can still use your original line; just change 10 to 10,22.
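For illustration, a sketch of the resulting crontab entry; the script path is hypothetical:

    # min hour dom mon dow  command
    0 10,22 * * * Rscript /path/to/script.R

The comma in the hour field makes cron fire the job at both 10:00 and 22:00.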
QUESTION
I have an app with Postgres as the DB, Sequelize, and Express. Whenever it receives a DB query, the application just hangs forever, with no logging or anything. I run Postgres in a container, which I can connect to normally through a GUI. When I swapped it for SQLite, the application worked perfectly.
Here is the relevant piece of code:
...ANSWER
Answered 2021-Dec-12 at 05:49
I think the problem is your "0.0.0.0:5432". If the database is local, it should be just "localhost:5432". If the deployed server is remote, it should be a concrete IP address, XXX.XXX.XXX.XXX:5432. If the deployed server is on your home network, it should be "192.168.0.XXX:5432".
Check your Postgres network configuration: https://youtu.be/Erqp4C3Y3Ds
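In other words, the client needs a reachable host, not a bind address: 0.0.0.0 is what a server listens on, not a destination to dial. A connection URI in the usual Postgres form, with hypothetical credentials and database name, would look like:

    postgres://app_user:app_password@localhost:5432/app_db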
QUESTION
I am trying to create a cron job programmatically in Cloud Scheduler on Google Cloud Platform using the following API explorer.
Reference: Cloud Scheduler Documentation
Even though I have given the user the Owner permission, and verified in Policy Troubleshooter that it has cloudscheduler.jobs.create, I am still getting the following error.
ANSWER
Answered 2021-Dec-16 at 14:42
The error is caused by using a service account that does not have an IAM role including the permission cloudscheduler.jobs.create. An example of such a role is roles/cloudscheduler.admin, aka Cloud Scheduler Admin. I have the feeling that you have mixed up the permissions of the service account that Cloud Scheduler uses at runtime (when a job triggers something) with the permissions of the account currently creating the job (your own account, for example).
You actually need two service accounts for the job to get created: one that you set up yourself (it can have whatever name you like and doesn't require any special permissions), and the default Cloud Scheduler service account (which is managed by Google).
Use an existing service account for the call from Cloud Scheduler to your HTTP target, or create a new service account for this purpose. It must belong to the same project as the one in which the Cloud Scheduler jobs are created. This is the client service account; use it when specifying the service account that generates the OAuth/OIDC tokens. If your target is part of Google Cloud, like Cloud Functions or Cloud Run, grant the client service account the necessary IAM role (Cloud Functions Invoker for Cloud Functions, Cloud Run Invoker for Cloud Run); the receiving service automatically verifies the generated token. If your target is outside Google Cloud, the receiving service must verify the token manually.
The other service account is the default Cloud Scheduler service account, which must also be present in your project and have the Cloud Scheduler Service Agent role granted to it, so that it can generate header tokens on behalf of your client service account to authenticate to your target. This account is set up automatically when you enable the Cloud Scheduler API, unless you enabled the API prior to March 19, 2019, in which case you must add the role manually.
Note: do not remove the service-YOUR_PROJECT_NUMBER@gcp-sa-cloudscheduler.iam.gserviceaccount.com service account from your project, or its Cloud Scheduler Service Agent role. Doing so will result in 403 responses to endpoints requiring authentication, even if your job's service account has the appropriate role.
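As a concrete illustration of the two-account split, here is a hedged CLI sketch rather than the exact API-explorer call from the question; the job name, target URL, and service-account address are hypothetical:

    gcloud scheduler jobs create http my-job \
      --schedule="*/10 * * * *" \
      --uri="https://my-service-abc123.a.run.app/run" \
      --http-method=POST \
      --oidc-service-account-email=client-sa@my-project.iam.gserviceaccount.com

The account running this command needs cloudscheduler.jobs.create (for example via roles/cloudscheduler.admin), while client-sa is the identity the job presents to the target at runtime.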
QUESTION
I have a configuration that successfully works: it loads cell-line data and publishes to various recipients on a cell-line topic. It works fine, but when I try to load JobLauncherTestUtils and JobRepositoryTestUtils, I get an error saying that the JobBuilderFactory is not found. As you will see from my configuration, I do load the JobBuilderFactory and StepBuilderFactory using Lombok, which delegates to Spring. Like I said, all of that works fine, but the test does not. Here is the test configuration YAML file,
application-test.yml
...ANSWER
Answered 2021-Dec-21 at 15:57
We encountered the same issue when we added a new scheduled-job configuration.
How it was addressed:
- Create the JobLaunchUtils (similar to yours)
QUESTION
I've created a task and verified that it exists using SHOW TASKS. I'm now trying to create a subtask using the AFTER parameter of CREATE TASK, but I'm getting the following error: invalid property 'AFTER' for 'TASK'. I can't find any documentation on why this is happening. I think my syntax is correct; it appears to match Snowflake's documentation. This is my code:
ANSWER
Answered 2021-Oct-31 at 06:16
The equals sign should be removed: write AFTER NIGHTLY_1 rather than AFTER = NIGHTLY_1:
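A minimal sketch of the corrected statement; the warehouse name and task body are hypothetical, while NIGHTLY_1 is the parent task from the question:

    CREATE TASK NIGHTLY_2
      WAREHOUSE = MY_WH
      AFTER NIGHTLY_1
    AS
      INSERT INTO audit_log (run_at) VALUES (CURRENT_TIMESTAMP);

Unlike properties such as WAREHOUSE or SCHEDULE, AFTER takes the predecessor task name directly, with no = sign.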
QUESTION
Ever since I upgraded my EKS cluster to v1.21, I get the following error when triggering CronJobs manually:
ANSWER
Answered 2021-Oct-11 at 13:31
Please upgrade your kubectl to 1.21 as well.
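For context, manually triggering a CronJob is usually done with kubectl create job --from. The likely cause of the error is that CronJobs graduated to the batch/v1 API in Kubernetes 1.21, so an older client can fail against the upgraded cluster. A sketch with a hypothetical CronJob name:

    kubectl version --client                                   # confirm the client is 1.21+
    kubectl create job manual-run-1 --from=cronjob/my-cronjob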
QUESTION
Experienced issue:
The command * * * * * $(command -v docker) docker run --rm -it --env-file=/home/ubuntu/.env ghcr.io/sebastix/crypto-dca:latest buy 10 ADA won't start. This was tested in the following files:
- crontab -e
- sudo crontab -e
- sudo nano /etc/crontab
The log e-mail always prints:
...ANSWER
Answered 2021-Oct-07 at 08:23
The solution is simple: I had to remove both -it and the second docker statement. Cron jobs run without a TTY, so -t fails, and since $(command -v docker) already expands to the docker binary, the second docker was a stray argument.
Working command: * * * * * $(command -v docker) run --rm --env-file=/home/ubuntu/.env ghcr.io/sebastix/crypto-dca:latest buy 10 ADA --yes
The appended --yes flag executes the purchase without asking for confirmation.
Kudos to @MarkoE for helping me solve my problem!
QUESTION
Based on this article, I am trying to set up a repo whose workflow runs every time I push, but also every 4 or 8 hours. As seen in the action history, only one of the runs was successful, and only on a Mac machine.
But my goal is to run it on ubuntu-latest, windows-latest, and macos-latest to check the code on all OSes. One final goal is to make the commit message of the scheduled pushes say "updated on xxx date". This is my latest code, edited on Sept 23, which can be seen in this repo:
...ANSWER
Answered 2021-Sep-28 at 19:10
Objective: to run it on ubuntu-latest, windows-latest, and macos-latest to check the code on all OSes:
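The usual pattern for this is a build matrix over the three runners, combined with push and schedule triggers. A minimal sketch; the job name, step, and 4-hour cadence are hypothetical stand-ins for the repo's actual workflow:

    on:
      push:
      schedule:
        - cron: '0 */4 * * *'   # every 4 hours, evaluated in UTC

    jobs:
      build:
        strategy:
          matrix:
            os: [ubuntu-latest, windows-latest, macos-latest]
        runs-on: ${{ matrix.os }}
        steps:
          - uses: actions/checkout@v2
          - run: echo "running on ${{ matrix.os }}"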
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install cron
PHP requires the Visual C runtime (CRT). The Microsoft Visual C++ Redistributable for Visual Studio 2019 is suitable for all these PHP versions, see visualstudio.microsoft.com. You MUST download the x86 CRT for PHP x86 builds and the x64 CRT for PHP x64 builds. The CRT installer supports the /quiet and /norestart command-line switches, so you can also script it.
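For the library itself, installation would normally go through Composer. A sketch, assuming the package name poliander/cron, which matches this page's description; verify the actual name on Packagist before installing:

    composer require poliander/cron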