TaskScheduler | Cooperative multitasking for Arduino, ESPx, STM32, nRF
kandi X-RAY | TaskScheduler Summary
A lightweight implementation of cooperative multitasking (task scheduling). An easier alternative to preemptive programming and frameworks like FreeRTOS. You mostly do not need to worry about the pitfalls of concurrent processing (races, deadlocks, livelocks, resource sharing, etc.); cooperative processing takes care of such issues by design. Scheduling overhead: between 15 and 18 microseconds per scheduling pass (Arduino UNO rev 3 @ 16 MHz clock, single scheduler w/o prioritization).
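The core idea, a single loop that gives each task a turn at its scheduled interval with no preemption, can be sketched in a few lines. This is a Python analogy for illustration only, not the library's actual C++ API:

```python
import time

class Task:
    """A cooperatively scheduled task: a callback run at a fixed interval."""
    def __init__(self, interval, callback, iterations=None):
        self.interval = interval      # seconds between runs
        self.callback = callback
        self.iterations = iterations  # None means run forever
        self.next_run = 0.0

    def due(self, now):
        return self.iterations != 0 and now >= self.next_run

    def run(self, now):
        self.callback()
        self.next_run = now + self.interval
        if self.iterations is not None:
            self.iterations -= 1

class Scheduler:
    def __init__(self):
        self.tasks = []

    def add_task(self, task):
        self.tasks.append(task)

    def execute_pass(self):
        """One scheduling pass: run every task that is due. Cooperative,
        so each callback must return quickly; nothing is preempted."""
        now = time.monotonic()
        for task in self.tasks:
            if task.due(now):
                task.run(now)

# Usage: tick a counter 3 times at 10 ms intervals, then stop
counts = []
sched = Scheduler()
sched.add_task(Task(0.01, lambda: counts.append(len(counts)), iterations=3))
for _ in range(10):
    sched.execute_pass()
    time.sleep(0.01)
print(counts)  # [0, 1, 2]
```

Because nothing ever interrupts a callback, shared state never needs locking; the trade-off is that one long-running callback stalls every other task.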
Community Discussions
Trending Discussions on TaskScheduler
QUESTION
Latest update (with an image to hopefully simplify the problem) (thanks for feedback from @Mahmoud)
Related issue reports for reference (after this original post was created, it seems someone filed issues against Spring Cloud on a similar problem, so updating here too):
https://github.com/spring-cloud/spring-cloud-task/issues/793 relates to approach #1
https://github.com/spring-cloud/spring-cloud-task/issues/792 relates to approach #2
I also found a workaround for the issue and posted it on the GitHub issue; I will update this once it is confirmed good by the developers: https://github.com/spring-cloud/spring-cloud-task/issues/793#issuecomment-894617929
I am developing an application involving multiple steps using a Spring Batch job, but I hit a roadblock. I tried researching the docs and different approaches, with no success, so I thought I would check whether the community can shed some light.
Spring Batch job 1 (receives job parameters with the settings for step 1 and step 2)
...ANSWER
Answered 2021-Aug-15 at 13:33
- Is the above setup even possible?
Yes, nothing prevents you from having two partitioned steps in a single Spring Batch job.
- Is it possible to use JobScope/StepScope to pass info to the partition handler?
Yes, the partition handler can be declared as a job-/step-scoped bean if it needs the late-binding feature to be configured.
Updated on 08/14/2021 by @DanilKo
The original answer is correct at a high level. However, to actually make the partition handler step-scoped, a code modification is required.
Below is the analysis plus my proposed workaround/fix (maybe the code maintainers will eventually have a better way to make it work, but so far the fix below is working for me).
The issue continues to be discussed at: https://github.com/spring-cloud/spring-cloud-task/issues/793 (multiple partition handler discussion) and https://github.com/spring-cloud/spring-cloud-task/issues/792 (which this fix is based on, to use a partition handler at step scope to configure different worker steps + resources + max workers)
Root cause analysis (hypothesis): the problem is that DeployerPartitionHandler uses the @BeforeTask annotation to force the task to pass in a TaskExecution object as part of task setup. But as this partition handler is now at @StepScope (instead of directly at the @Bean level with @EnableTask), or there are two partition handlers, that setup is no longer triggered, as @EnableTask seems unable to locate a single partition handler during creation. As a result, the created DeployerPartitionHandler hits a null taskExecution when trying to launch (as it was never set up).
Below is essentially a workaround that uses the current job execution id to retrieve the associated task execution id. From there, it fetches that task execution and passes it to the deployer handler to fulfill its need for a taskExecution reference. It seems to work, but it is still not clear whether there are other side effects (so far none were found during testing).
Full code can be found at https://github.com/danilko/spring-batch-remote-k8s-paritition-example/tree/attempt_2_partitionhandler_with_stepscope_workaround_resolution
In the partitionHandler method
QUESTION
The documentation of the ParallelOptions.MaxDegreeOfParallelism property states that:
The MaxDegreeOfParallelism property affects the number of concurrent operations run by Parallel method calls that are passed this ParallelOptions instance. A positive property value limits the number of concurrent operations to the set value. If it is -1, there is no limit on the number of concurrently running operations. By default, For and ForEach will utilize however many threads the underlying scheduler provides, so changing MaxDegreeOfParallelism from the default only limits how many concurrent tasks will be used.
I am trying to understand what "no limit" means in this context. Based on the above excerpt from the docs, my expectation was that a Parallel.Invoke operation configured with MaxDegreeOfParallelism = -1 would immediately start executing all the supplied actions in parallel. But this is not what happens. Here is an experiment with 12 actions:
ANSWER
Answered 2022-Mar-27 at 13:39
The definition deliberately states that -1 means the number of concurrent operations will not be artificially limited; it does not say that all actions will start immediately.
The thread pool manager normally keeps the number of available threads at the number of cores (or logical processors, which are 2x the number of cores), and this is considered the optimum number of threads (I believe it is actually [number of cores/logical processors + 1]). This means that when you start executing your actions, the number of threads available to immediately start work is this number.
The thread pool manager runs periodically (twice a second), and if none of the threads has completed, a new one is added (or removed, in the reverse situation when there are too many threads).
A good experiment to see this in action is to run your experiment twice in quick succession. In the first run, the number of concurrent jobs at the beginning should be around [number of cores/logical processors + 1], and in the second run it should be the full number of jobs, because those extra threads were created to service the first run.
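The bounded-versus-unbounded distinction itself is easy to observe with any pool that caps its workers. A Python sketch (using concurrent.futures as a rough stand-in for ParallelOptions; all names here are illustrative) that records the peak number of concurrently running actions:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def measure_peak_concurrency(max_workers, n_actions=12):
    """Run n_actions blocking actions on a pool and record the peak
    number running at the same time."""
    lock = threading.Lock()
    running = 0
    peak = 0

    def action():
        nonlocal running, peak
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.05)  # simulate blocking work
        with lock:
            running -= 1

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for _ in range(n_actions):
            pool.submit(action)
    return peak

print(measure_peak_concurrency(max_workers=2))   # bounded: 2
print(measure_peak_concurrency(max_workers=12))  # no effective bound: typically 12
```

The first call caps concurrency the way a positive MaxDegreeOfParallelism does; the second shows what "no limit" means in practice: as many actions run at once as the pool can supply threads for, which is exactly why the observed parallelism depends on the pool's thread-injection behavior rather than on the option itself.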
Here's a modified version of your code:
QUESTION
Sorry for a lengthy one, but I'm in dire straits - just trying to provide all details upfront.
This Fri (2021-Nov-12) after a restart of Visual Studio 2017 it began crashing without notice while opening existing solutions. This worked perfectly fine at least a week ago (after last Win10 Update KB5006670 on 2021-Nov-05 - followed by a reboot). Trying to load old solutions (which haven't been touched for 2+ years) results in exactly the same behavior:
you get a glimpse of "Loading Project .." windows (not sure if it goes through all projects in a solution), then suddenly the main VS window disappears and .. that's it.
Visual Studio's configuration has not been touched for at least a year. No explicit updates/patches or NuGet packages either. By itself, VS starts and shows the main window with the usual Start page. But I cannot load any solution or project.
The very first related Event Log entry:
...ANSWER
Answered 2021-Dec-21 at 16:18
Sorry it took so long. I was under the gun to finish a project..
The root cause of the problem turned out to be ICSharpCode.CodeConverter v.8.4.1.0!
Wow, of all the pieces installed (which aren't that many)..
On a hunch (since the problem was local to Visual Studio) I started looking at Tools and Extensions, and noticed that this component's Date Installed was later than the most recent Windows Update! The "Automatically update this extension" checkbox was checked (by default?). So it must have silently updated upon VS restart?!
Granted, updates are useful and sometimes necessary. But they also may introduce problems. Performing updates automatically is one thing. But not informing the user about it is bad!
Here's an excerpt from C:\TEMP\VSIXInstaller_f0335270-1a19-4b71-b74b-e50511bcd107.log:
QUESTION
I'd like to properly understand the consequences of failing to observe an exception thrown on a Task used in a fire-and-forget manner without exception handling.
Here's an extract from CLR via C#, Third Edition by Jeffrey Richter: "[...] when a Task object is garbage collected, its Finalize method checks to see if the Task experienced an unobserved exception; if it has, Task's Finalize method throws [an exception]. Since you cannot catch an exception thrown by the CLR's finalizer thread, your process is terminated immediately."
I am writing some test code to bring about a termination but am unable to cause one.
Using the test code here, I am able to see the TaskScheduler.UnobservedTaskException handler being called. However, if I comment out the event handler subscription, the exception appears to be swallowed and does not bring about termination of the program.
I've tried this using the .NET Framework on both versions 4 and 4.8 with a Release build.
How do I demonstrate that failing to observe an exception thrown on a Task does indeed cause a crash?
ANSWER
Answered 2021-Dec-14 at 17:20
The problem is correctly identified by Jon Skeet in his comment on the original post.
The best resource I found concerning this topic is by Stephen Toub.
tldr: "To make it easier for developers to write asynchronous code based on Tasks, .NET 4.5 changes the default exception behavior for unobserved exceptions. While unobserved exceptions will still cause the UnobservedTaskException event to be raised (not doing so would be a breaking change), the process will not crash by default. Rather, the exception will end up getting eaten after the event is raised, regardless of whether an event handler observes the exception. This behavior can be configured, though. A new CLR configuration flag may be used to revert back to the crashing behavior of .NET 4, e.g."
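The CLR configuration flag that Toub's post refers to is ThrowUnobservedTaskExceptions, set in the application's app.config; enabling it restores the .NET 4 crashing behavior, which is what the question is trying to reproduce:

```xml
<configuration>
  <runtime>
    <!-- Revert to the .NET 4 behavior: unobserved task exceptions
         escalate and terminate the process after the event is raised -->
    <ThrowUnobservedTaskExceptions enabled="true"/>
  </runtime>
</configuration>
```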
QUESTION
See following example:
...ANSWER
Answered 2021-Dec-13 at 22:07
I don't know much about the implications of using the debugging features of Visual Studio, but if you run your program without the debugger attached (with Ctrl+F5), the tasks are properly recycled by the garbage collector. The tasks created internally by the ForgottenTask method are never started, so they are never scheduled, and since you are not holding any explicit reference to them, there is nothing preventing the garbage collector from recycling them. Here is a minimal demonstration of this behavior:
QUESTION
I'm trying to understand async/await and was reading the source code of AsyncMethodBuilder. I thought there must be some code like xxx.Wait() or xxx.WaitOnce() waiting for a task to be completed. However, I didn't find such code in the AsyncMethodBuilder class.
system\runtime\compilerservices\AsyncMethodBuilder.cs https://referencesource.microsoft.com/#mscorlib/system/runtime/compilerservices/AsyncMethodBuilder.cs,96
So I kept digging and tried to read the source code of Task, TaskScheduler, ThreadPoolTaskScheduler, and ThreadPool. Finally I got to the class _ThreadPoolWaitCallback, but didn't find any caller.
https://referencesource.microsoft.com/#mscorlib/system/threading/threadpool.cs,d7b8a78b4dd14fd0
ANSWER
Answered 2021-Nov-27 at 07:20
In a correctly implemented async implementation there is no wait. Instead, at the bottom of the chain, some code exists that will create some async source, which could be a TaskCompletionSource, an IValueTaskSource, or something similar, which allows that code to store that token somewhere (for example, in a queue, a correlation dictionary, or an async state object for IOCP) and return the incomplete task to the caller. The caller then discovers that it is incomplete, and registers a "when you have the answer, do this to reactivate me" callback. That calling code now unwinds completely, with every step saying "when you're done, push here", and the thread goes on to do other things, such as service a different request.
At some point in the future (hopefully), the result will come back - again, via IOCP, or via a separate IO reader pulling a response from somewhere and taking the appropriate item out of the queue/correlation-dictionary, and says "the outcome was {...}" (TrySetResult, TrySetException, etc).
For all of that time no threads were blocked. That is, ultimately, the entire point of async/await: to free up threads, to increase scalability.
In incorrectly implemented async systems: anything and everything is possible, including async-over-sync, sync-over-async, and everything else.
QUESTION
I was reading this article and found this example:
...ANSWER
Answered 2021-Oct-05 at 18:18
If we're an async Task or async Task<T> method, then there's always work to do after the await: we need to ensure that any exception produced by the awaited task is properly propagated to our own Task, or that we pass the appropriate normal return value through.
If we're any kind of async method using structures such as using, which insert compiler-generated code at the end of an otherwise apparently empty epilogue to our method, then we'll be inserting code at the end of the method, even if it doesn't appear in the source.
If we're any normal async method that liberally uses awaits, then we'll already have a state machine built and running to execute our method, and there's no point in optimizing the "no code after the last await" possibility.
In the narrow case that we're an async void¹ method containing a single await at the end, then, other than some minutiae about where an unhandled exception from a task might be reported, we already had the opportunity to avoid excess code generation by not making the method async at all and just ignoring the awaitable.
So there's no reason to try to optimize this situation.
¹ We're already in a state of sin at that point anyway.
QUESTION
Let's say there is a library that handles events asynchronously, e.g. UDP broadcasting. I would like to be able to pass a delegate to this library and make sure that delegate is executed in the thread where it was defined.
...ANSWER
Answered 2021-Oct-03 at 12:26
After reading this article, it seems that the only way to have a synchronization context in ASP.NET Core is to create your own.
The way I imagine it is that there must be some data structure that supports concurrency, e.g. a ConcurrentDictionary. Any process can add values to this data structure, and another process will be responsible for reading and broadcasting the updated values from within the same thread.
Something like this.
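The original code snippet was not preserved here, but the pattern described, a dedicated thread draining a thread-safe structure and invoking the delegates itself, can be sketched like this (a Python sketch; all names are illustrative, not the answer's actual code):

```python
import queue
import threading
import time

class SingleThreadContext:
    """A minimal 'synchronization context': callbacks posted from any
    thread are executed one by one on a single dedicated thread."""
    def __init__(self):
        self._work = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def post(self, callback, *args):
        """Safe to call from any thread."""
        self._work.put((callback, args))

    def _loop(self):
        while True:
            callback, args = self._work.get()
            if callback is None:  # shutdown sentinel
                break
            callback(*args)

    def shutdown(self):
        self._work.put((None, ()))
        self._thread.join()

# Usage: delegates posted from several threads all run on one thread
ctx = SingleThreadContext()
seen_threads = set()
for _ in range(4):
    threading.Thread(
        target=lambda: ctx.post(lambda: seen_threads.add(threading.get_ident()))
    ).start()
time.sleep(0.2)  # let all posters run
ctx.shutdown()
print(len(seen_threads))  # 1: every callback ran on the context thread
```

Because every delegate is invoked from the same loop, the callbacks never race with each other, which is the property a synchronization context is meant to provide.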
QUESTION
I am looking to execute a bunch of ValueTask-returning functions on a custom thread pool, i.e. on a bunch of threads I'm spawning and handling myself, rather than the default ThreadPool.
Meaning, all synchronous bits of these functions, including any potential task continuations, should be executed on my custom thread pool.
Conceptually, something like:
...ANSWER
Answered 2021-Sep-17 at 14:58
Everything I've read about creating a custom ThreadPool says don't.
An alternative would be to use a custom TaskScheduler on the shared ThreadPool.
You can use this TaskScheduler class like this:
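The answer's actual TaskScheduler class was not preserved here, but the idea of keeping work and its continuations on threads you own can be sketched in Python (illustrative only; the pool name and helper names are assumptions, not the answer's code):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# A pool whose threads we name ourselves, standing in for a "custom thread
# pool". A callback registered via add_done_callback before the future
# completes runs on the pool thread that completed it, so the continuation
# also stays on our threads.
pool = ThreadPoolExecutor(max_workers=2, thread_name_prefix="my-pool")
thread_names = []

def work():
    time.sleep(0.05)  # ensure the continuation is registered before completion
    thread_names.append(threading.current_thread().name)
    return 42

def continuation(fut):
    # Runs on the same pool thread that completed `fut`
    thread_names.append(threading.current_thread().name)

fut = pool.submit(work)
fut.add_done_callback(continuation)  # the "continuation" for this task
pool.shutdown(wait=True)
print(thread_names)  # both entries carry the "my-pool" prefix
```

This mirrors the suggested approach of scheduling onto controlled threads rather than hand-rolling a whole thread pool: the pool machinery is reused, and only the placement of work is customized.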
QUESTION
I am using TPL Dataflow to load a video from a path (using the Emgu.CV library for loading) and, via TPL Dataflow, first plot it in a Windows Forms application (after that there will be a communication step with a board). I had another post that helped me very much with TPL Dataflow here: Asynchronous Task, video buffering
But after setting up the TPL Dataflow, only the first image is loaded in the GUI, and after that (while the block seems to be running, because prints are displayed in cmd) the image does not refresh... I cannot understand what is wrong. Does it have to do with the scheduler or with the TPL Dataflow? Below is the code:
...ANSWER
Answered 2021-Jul-15 at 13:24
Most probably the problem is that the PicturePlot2 component doesn't like being manipulated by non-UI threads. To ensure that the ActionBlock's delegate will be invoked on the UI thread, you can configure the TaskScheduler option of the block like this:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported