async-task | Symfony2 bundle built on top of RabbitMQ | Pub/Sub library
kandi X-RAY | async-task Summary
The AsyncTasksBundle allows you to send asynchronous messages in your Symfony2 application via RabbitMQ (using the php-amqplib library).
Top functions reviewed by kandi - BETA
- Executes the task
- Returns the configuration tree builder
- Registers event listeners
- Handles the configuration
- Rewinds the queue
- Declares an exchange
- Dispatches an async event
- Creates a new AMQP connection
- Serializes the object
- Defines the compiler
async-task Key Features
async-task Examples and Code Snippets
Community Discussions
Trending Discussions on async-task
QUESTION
I created a new .NET 5 project and want to use EF Core. I autogenerated multiple migration .cs files using dotnet ef migrations add MyMigration and want to apply them (for development and production). I know about the MigrateAsync method, so I read about how to call it on startup:
https://andrewlock.net/running-async-tasks-on-app-startup-in-asp-net-core-part-1/
But everywhere I read that this method should not be used in production, since those migrations won't be executed in a single transaction (no rollback on errors).
Unfortunately there are not many resources on how to do this regardless of the environment. I found this article:
"One option could be a console app calling the migrations"
but I wasn't able to see the advantage of this approach, since it doesn't solve the transactional problem either.
What are best practices for applying migrations during development and production?
After autogenerating the migrations (I'm a big fan of simplicity), does dotnet ef database update do the job, so that I don't need to work with additional tools? Or should I create a console app, generate .sql files from the migrations, install DbUp and use it for the migration part?
ANSWER
Answered 2021-Jun-01 at 23:42
What works best heavily depends on how your deployment pipeline works - how many environments there are before production, the release cycle, and which parts of deployment are automated. There are no universal "best practices" - each way of handling migrations has its own set of tradeoffs to be conscious of. Pick an upgrade procedure according to your needs and expectations.
When setting up EF Core migrations for a mid-sized project (around 70 tables), I tried out a few potential approaches. My observations from the process, and what worked out in the end:
- You want to get the migration SQL somewhere between changing your models and deploying to production, if only to look at it in case there are any breaking changes that may cause issues on rollback. We decided on having migrations directly in the project with the DbContext, and having a migration script (using dotnet ef migrations script --idempotent) generated for every build that can potentially be deployed to any environment - in our case, a CI step for each push to trunk or a release branch.
- Putting the migration SQL in version control and treating the SQL as the source of truth for the database structure gives you the ability to manually modify scripts when you want to keep some columns for backup or backwards-compatibility purposes. Another option is to consider your data model the reference for the database schema and treat the migration SQL as an intermediate step that is not preserved; this makes it easier to automate the whole process, but requires you to handle special cases directly in your data model.
- Using the --idempotent flag when generating the migration script gives you a script you can reapply to a database regardless of which schema version it is at; it executes only the steps that have not yet been executed. This means you can reapply the same migration script to an already migrated database without breaking the schema. If you have different versions of your application running in parallel in separate environments (development, staging and production), it saves you from manually tracking which migration script versions need to be applied, and in what order.
- Once you have the migration SQL, you can use your database's native tools to apply it to the target environment - such as sqlcmd for SQL Server or psql for Postgres. This also has the benefit of letting a separate user with higher privileges (schema modification) handle migrations, while your application runs with limited privileges that often can't touch the schema.
- Applying database migrations is part of application deployment, not application startup. If you have deployment automation of some sort, that is probably the best place to execute migrations against the target database; again, a database-native client is a good alternative to DbUp - pick whichever you prefer. Separating migrations from application startup also gives you the ability to run an application against a mismatched but still compatible database schema, which comes in handy when, for example, you're doing rolling deployments.
- Most problems with schema upgrades come from breaking schema compatibility between versions. Avoiding that requires being conscious of backwards/forwards compatibility when working on the data model, and splitting breaking changes into separate versions that each keep at least a single step of backwards/forwards compatibility - whether you need this depends on your project, and it's something you should decide on. We run the full integration test suite for the previous version against the current database schema, and for the current version against the previous database schema, to make sure no breaking changes are introduced between two subsequent versions; any deployment that moves across multiple versions rolls out migrations one by one, on the assumption that the migration script or application startup can transform data from the old model to the new one.
To sum up: generating migration SQL and using either native tools or DbUp on version deploy gives you a degree of manual control over the migration process, and ease of use can be achieved by automating your deployment process. For development purposes, you may as well add automatic migrations on application startup, preferably applied only when the environment is set to Development - as long as every person on the team has their own development database (local SQL, a personal database on a shared server, or a file-based DB) there are no conflicts to worry about.
QUESTION
Following are the properties I have set -
...ANSWER
Answered 2021-Feb-23 at 07:54
You specified a core size of 50 and a max size of 200. So your pool will normally run with 50 threads, and when there is extra work, it will spawn additional threads; you'll see "async-task-exec-51", "async-task-exec-52" and so on created. Later, if there is not enough work for all the threads, the pool will kill some threads to get back to just 50. So it may kill thread "async-task-exec-52". The next time it has too much work for 50 threads, it will create a new thread, "async-task-exec-53".
So the fact that you see "async-task-exec-7200" means that over the lifetime of the thread pool it has created 7200 threads, but it will still never have more than the max of 200 running at the same time.
If an @Async method is waiting 10 minutes for a thread, it means you have put so much work into the pool that it has already spawned all 200 threads, they are all busy, and you have filled up the queue capacity of 100 - so now the parent thread has to block (wait) until there is at least one free spot in the queue to put the task.
If you need to consistently handle more tasks, you will need a powerful enough machine and enough max threads in the pool. But if your work load is just very spiky, and you don't want to spend on a bigger machine and you are ok with tasks waiting longer sometimes, you might be able to get away with just raising your queue-capacity, so the work will queue up and eventually your threads might catch up (if the task creation gets slower).
Keep trying combinations of these settings to see what will be right for your workload.
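The core/max/queue interplay described in this answer can be seen with a plain JDK ThreadPoolExecutor, which Spring's ThreadPoolTaskExecutor wraps. The sizes below (core 2, max 4, queue 2) are scaled-down stand-ins for the question's 50/200/100:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // Same semantics as Spring's ThreadPoolTaskExecutor: core threads are
        // created first, then the queue fills, then extra threads up to max.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                        // core size 2, max size 4
                5, TimeUnit.SECONDS,         // idle threads above core die after 5s
                new ArrayBlockingQueue<>(2)  // queue capacity 2
        );
        CountDownLatch release = new CountDownLatch(1);
        // Submit 6 blocking tasks: 2 occupy core threads, 2 wait in the queue,
        // and the last 2 force the pool to grow to its max of 4 threads.
        for (int i = 0; i < 6; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        System.out.println("threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

A seventh task would be rejected outright, which is why raising queue-capacity (as the answer suggests) changes behavior from rejection to longer waits.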
QUESTION
I have an API POST endpoint creating a resource, that resource may have multiple relationships. To make sure the resource is created with valid relationships first I need to check if the given IDs exist. There are multiple such relations, and I don't want to await each sequentially. Here's my code:
...ANSWER
Answered 2021-Jan-03 at 18:14
You can avoid this by initializing master at its declaration site. The easiest way is to use the default keyword.
QUESTION
I have 2 projects; both use .NET 5, Entity Framework for .NET 5, and async. The only difference is that the project that gets blocked uses SQL Server and the other uses SQLite. But I guess the database is not the reason.
Project 1, which gets blocked:
...ANSWER
Answered 2020-Dec-10 at 16:57
OK, figured it out. The deadlock is because of the SynchronizationContext and its capturing, as stated many times in the comments to your question.
Usually the solution is either to use await all the way up, or ConfigureAwait(false), which you are doing. BUT!
As stated here (section "The Blocking Hack"): https://docs.microsoft.com/en-us/archive/msdn-magazine/2015/july/async-programming-brownfield-async-development
If somewhere down the line Microsoft/EF uses a conversion from the old event-based async pattern, then ConfigureAwait(false) will have no effect, because the context will already have been captured anyway.
And it turns out they do use such a conversion; here is the source of SqlCommand for SQL Server:
Look there for line 2996.
Now, why SQLite works fine: because SQLite doesn't support async I/O, all asyncs for SQLite are fake. You can clearly see it in the source of SqliteCommand:
https://github.com/dotnet/efcore/blob/main/src/Microsoft.Data.Sqlite.Core/SqliteCommand.cs
Line 406.
And the explanation for "fake" is here - https://docs.microsoft.com/en-us/dotnet/standard/data/sqlite/async
QUESTION
I am trying to upload multiple files to Firebase Storage and their URLs to Firestore using a for loop. If I try to upload 3 files, then all 3 are uploaded to Firebase Storage, but the URL of only the first file is added to Firestore. I don't see any problem in my for loop as such, so how should I fix this?
...ANSWER
Answered 2020-Jul-23 at 13:27
The problem is that you are doing multiple sequenced asynchronous operations (uploading to Storage and adding data to Firestore) in a synchronous way in your loop, and that causes the following behavior (example):
The second upload to Storage triggers before the first operation to Firestore has completed, and this causes the fileName variable used for the Firestore operation to be populated with the value of the next upload.
My suggestion to fix the issue would be to either not do the operations in sequence - e.g. upload everything to Storage and only then add everything to Firestore - or populate the part of your map that depends on the filename variable right after its value has been set, and not in the onSuccess handler.
QUESTION
Is there a way to intercept SimpleAsyncTaskExecutor? Basically I am trying to intercept every time SimpleAsyncTaskExecutor is invoked.
On top of that, what I am really trying to do is pass on a request-scoped bean. I found How to enable request scope in async task executor, but the problem is that I cannot reuse threads. I need a new thread created every time.
Any idea how I could forward request-scoped beans to an async thread, or how I could intercept @Async for a SimpleAsyncTaskExecutor?
Thanks, Brian
...ANSWER
Answered 2020-Jun-29 at 18:36
I'm not sure this is the best way to accomplish what I'm trying to do, but it seems to work.
Config:
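The answerer's actual Spring config is not preserved in this excerpt. The general idea, though - wrap every submitted task so the fresh thread receives a copy of the caller's context, in the spirit of Spring's TaskDecorator - can be sketched in plain Java (the REQUEST_CONTEXT thread-local is a hypothetical stand-in for a request-scoped bean):

```java
public class ContextPropagation {
    // Hypothetical stand-in for a request-scoped value.
    static final ThreadLocal<String> REQUEST_CONTEXT = new ThreadLocal<>();

    // Decorator in the spirit of Spring's TaskDecorator: capture the
    // submitting thread's context, restore it inside the new thread.
    static Runnable decorate(Runnable task) {
        String captured = REQUEST_CONTEXT.get();
        return () -> {
            REQUEST_CONTEXT.set(captured);
            try { task.run(); } finally { REQUEST_CONTEXT.remove(); }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        REQUEST_CONTEXT.set("user-42");
        // A fresh thread per task, as SimpleAsyncTaskExecutor creates.
        Thread worker = new Thread(decorate(
                () -> System.out.println("context=" + REQUEST_CONTEXT.get())));
        worker.start();
        worker.join();
    }
}
```

With Spring itself, the same decoration can be plugged in via SimpleAsyncTaskExecutor's setTaskDecorator, so it applies even though each @Async invocation gets a brand-new thread.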
QUESTION
The Task.Yield method "creates an awaitable task that asynchronously yields back to the current context when awaited." I am searching for something similar that guarantees that any code that follows will run on a ThreadPool thread. I know that I could achieve this by enclosing all the following code in a Task.Run, but I am searching for an inline solution that doesn't create an inner scope.
ANSWER
Answered 2020-Jun-29 at 15:56
Raymond Chen posted about this on his blog, The Old New Thing, in the post "C++/WinRT envy: Bringing thread switching tasks to C# (WPF and WinForms edition)".
Reproduced here in case the source goes down:
QUESTION
I'm in a situation where we have some code that is run by user input (a button click), runs through a series of function calls, and results in generating some data (quite a heavy operation, taking several minutes). We'd like to use async for this so that it doesn't lock up the UI during the operation.
But at the same time we also have a requirement that the functions be available through an API, which preferably should be synchronous.
Visualization/Example (pseudo-code):
...ANSWER
Answered 2020-Jun-09 at 13:35
You can utilize .GetAwaiter().GetResult(); as per your example, it would look like:
QUESTION
I have the following class in my code; you could say it is just a wrapper over the standard RestTemplate. So whenever we have to make an external request, instead of using RestTemplate we autowire our custom MyRestTemplate.
ANSWER
Answered 2020-May-18 at 19:57
If your custom RestTemplate looks exactly like what you put in your sample code, it is of zero utility - you can just use the standard RestTemplate. Also, since you pass in a (presumably) non-request-scoped RestTemplate to the constructor, I suspect it is not even doing what you think it is doing. If you really want it to be request-scoped and auto-proxied, you can get the same effect without subclassing it. For example:
QUESTION
I've been working on refactoring a process that iterates over a collection of FileClass objects - which have Filename, NewFilename, and string[] FileReferences properties - and replaces all FileReferences that reference the old filename with the new one. The code below is slightly simplified, in that the real file-references property is not just a list of the file names: the lines may contain a file name somewhere in them, or not. The current code is OK when the _fileClass collection is below around 1000 objects, but is painfully slow if there are more objects, or if the file-references property has thousands of entries.
Following the answers on this post, Run two async tasks in parallel and collect results in .NET 4.5 (and several like it), I've been trying to write an async method that takes a list of all the old and new file names as well as an individual FileClass, then builds an array of these Tasks and tries to process them in parallel via Task.WhenAll(). But I'm running into a "Cannot await void" error. I believe it's due to the Task.Run(() => ...); but removing the () => causes further problems.
It's an older code base, and I can't let the async propagate further than the calling code (in this case, Main), as I have found in some other examples. Nor can I use C# 8's async foreach, due to a .NET 4.5 limitation.
ANSWER
Answered 2020-Feb-14 at 05:32
To solve the initial problem: you should be using await Task.WhenAll, not Task.WaitAll.
"Creates a task that will complete when all of the supplied tasks have completed."
However, this looks like more of a job for Parallel.ForEach.
Another issue is that you are looping over the same list twice (nested), which is quadratic time complexity and is definitely not thread safe.
As a solution, you could create a dictionary of the changes, loop over the change set once (in parallel), and update the references in one go.
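The dictionary-plus-single-pass idea is language-agnostic; sketched here in Java (rather than the question's C#) with hypothetical data, since the original FileClass code is not reproduced in this excerpt:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RenameRefs {
    public static void main(String[] args) {
        // Hypothetical rename map standing in for the FileClass collection:
        // old file name -> new file name.
        Map<String, String> renames = Map.of(
                "a.txt", "a_v2.txt",
                "b.txt", "b_v2.txt");

        // Reference lines that may (or may not) mention a file name.
        List<String> references = List.of(
                "include a.txt here",
                "see b.txt and a.txt",
                "unrelated line");

        // Single parallel pass: each line consults the rename map directly,
        // replacing the nested quadratic loop over all file objects.
        List<String> updated = references.parallelStream()
                .map(line -> {
                    String result = line;
                    for (Map.Entry<String, String> e : renames.entrySet()) {
                        result = result.replace(e.getKey(), e.getValue());
                    }
                    return result;
                })
                .collect(Collectors.toList());

        updated.forEach(System.out::println);
    }
}
```

Because each line is transformed independently and the map is read-only, the parallel pass is thread safe, unlike the original nested mutation.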
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install async-task
Support