aiojobs | Jobs scheduler for managing background tasks (asyncio) | Reactive Programming library
kandi X-RAY | aiojobs Summary
Jobs scheduler for managing background tasks (asyncio)
Top functions reviewed by kandi - BETA
- Wrap a coroutine asynchronously
- Close the job
- Spawn a new job
- Wait for this task to finish
- Close the scheduler
- Call the exception handler
- Wait for the task to finish
- Start the task
- Return a Scheduler instance
- Spawn a coroutine
- Get Scheduler from request
- Called when the task is done
- Mark the given job as done
- Report an exception
- Set up asyncio
- Close all pending jobs
aiojobs Examples and Code Snippets
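A minimal usage sketch based on the aiojobs README (`create_scheduler` is the pre-1.0 entry point; newer releases construct `aiojobs.Scheduler()` directly):

```python
import asyncio
import aiojobs

async def coro(timeout):
    await asyncio.sleep(timeout)

async def main():
    # The scheduler caps concurrent jobs (100 by default) and queues the rest.
    scheduler = await aiojobs.create_scheduler()
    for i in range(100):
        await scheduler.spawn(coro(i / 10))  # fire-and-forget background job

    await asyncio.sleep(5.0)
    # Not all spawned jobs are finished at this point.
    await scheduler.close()  # gracefully close remaining jobs

asyncio.run(main())
```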
Community Discussions
Trending Discussions on aiojobs
QUESTION
I cannot understand why the aiohttp (and asyncio in general) server implementation does not provide a way to limit the maximum number of concurrent connections (number of accepted sockets, or number of running request handlers) (https://github.com/aio-libs/aiohttp/issues/675). Without this limit, it is easy to run out of memory and/or file descriptors.
At the same time, the aiohttp client by default limits the number of concurrent requests to 100 (https://docs.aiohttp.org/en/stable/client_advanced.html#limiting-connection-pool-size), aiojobs limits the number of running tasks and the size of the pending-task list, nginx has a worker_connections limit, and any sync framework is limited by its number of worker threads by design.
While aiohttp can handle a lot of concurrent requests, this number is still limited. The aiojobs docs say: "The Scheduler has implied limit for amount of concurrent jobs (100 by default). ... It prevents a program over-flooding by running a billion of jobs at the same time". And still, we can happily spawn a "billion" (well, until we run out of resources) aiohttp handlers.
So the question is, why is it implemented the way it is? Am I missing some important detail? I think we could pause request handlers using a Semaphore, but the socket is still accepted by aiohttp and a coroutine is spawned, in contrast with nginx. Also, when deploying behind nginx, the number of worker_connections and the desired aiohttp limit will certainly be different (because nginx may also serve static files).
ANSWER
Answered 2019-May-28 at 14:25
Based on the developers' comments on the linked issue, the reasons for this choice are the following:
- The application can return a 4xx or 5xx response if it detects that the number of connections is larger than what it can reasonably handle. (This differs from the Semaphore idiom, which would effectively queue the connection.)
- Throttling the number of server connections is more complicated than just specifying a number, because the limit might well depend on what your coroutines are doing, i.e. it should at least be path-based. Andrew Svetlov links to NGINX documentation about connection limiting to support this.
- It is anyway recommended to put aiohttp behind a specialized front server such as NGINX.
More detail than this can only be provided by the developer(s), who have been known to read this tag.
At this point, it appears that the recommended solution is either to use a reverse proxy for limiting, or an application-based limit like the decorator sketched below (untested):
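The answer's original snippet is not reproduced on this page; a minimal reconstruction under the answer's description (the `limit_concurrency` name and the 503 fail-fast behavior are assumptions, not the original code):

```python
import asyncio
import functools

from aiohttp import web

def limit_concurrency(max_concurrent):
    """Reject requests outright once max_concurrent handlers are running."""
    sem = asyncio.Semaphore(max_concurrent)

    def decorator(handler):
        @functools.wraps(handler)
        async def wrapped(request):
            if sem.locked():
                # Fail fast with 503 instead of queueing behind the semaphore.
                raise web.HTTPServiceUnavailable(text="Too many concurrent requests")
            async with sem:
                return await handler(request)
        return wrapped
    return decorator

@limit_concurrency(100)
async def handle(request):
    return web.Response(text="ok")
```

Returning an error immediately (rather than awaiting the semaphore) matches the first reason above: the connection is refused with a status code instead of being queued.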
QUESTION
I have a web app. That app has an endpoint to push some object data to a redis channel.
Another endpoint handles a websocket connection, where that data is fetched from the channel and sent to the client via ws.
When I connect via ws, only the first connected client gets the messages.
How can I read messages from a redis channel with multiple clients without creating a new subscription?
Websocket handler.
Here I subscribe to the channel and save it to the app (init_tram_channel). Then I run a job that listens to the channel and sends messages (run_tram_listening).
ANSWER
Answered 2019-Jan-14 at 00:29
I guess a message is received from one Redis subscription only once, and if there is more than one listener in your app, then only one of them will get it.
So you need to create something like a mini pub/sub inside the application to distribute the messages to all listeners (websocket connections in this case).
Some time ago I made an aiohttp websocket chat example - not with Redis, but at least the cross-websocket distribution is there: https://github.com/messa/aiohttp-nextjs-demo-chat/blob/master/chat_web/views/api.py
The key is to have an application-wide message_subcriptions, where every websocket connection registers itself, or perhaps its own asyncio.Queue (I've used Event in my example, but that's suboptimal), and whenever a message comes from Redis, it is pushed to all relevant queues.
Of course, when a websocket connection ends (client unsubscribes, disconnects, fails...), its queue should be removed (and possibly the Redis subscription cancelled if it was the last connection listening to it).
Asyncio doesn't mean we should forget about queues :) Also it's good to get familiar with combining multiple tasks at once (reading from a websocket, reading from a message queue, perhaps reading from some notification queue...). Using queues can also help you handle client reconnects more cleanly (without losing any messages).
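A minimal sketch of that fan-out pattern (the `Broadcaster` name and its methods are hypothetical, not taken from the linked example):

```python
import asyncio

class Broadcaster:
    """Application-wide fan-out: one Redis subscription feeds many listeners."""

    def __init__(self):
        self._queues = set()

    def subscribe(self):
        # Each websocket connection gets its own queue.
        q = asyncio.Queue()
        self._queues.add(q)
        return q

    def unsubscribe(self, q):
        self._queues.discard(q)

    def publish(self, message):
        # Push the message to every registered listener.
        for q in self._queues:
            q.put_nowait(message)

# The single Redis-listening job pushes into the broadcaster:
#     async for message in channel.iter():
#         broadcaster.publish(message)
#
# Each websocket handler drains its own queue:
#     q = broadcaster.subscribe()
#     try:
#         while True:
#             await ws.send_json(await q.get())
#     finally:
#         broadcaster.unsubscribe(q)
```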
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install aiojobs
You can use aiojobs like any standard Python library: install it from PyPI with pip install aiojobs. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.
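For aiohttp web apps, aiojobs also ships an aiojobs.aiohttp integration; a minimal sketch based on the documented setup/spawn helpers (the handler and route here are illustrative):

```python
from aiohttp import web
from aiojobs.aiohttp import setup, spawn

async def background_task():
    ...  # long-running work that should outlive the response

async def handler(request):
    # Schedule the coroutine on the app-wide scheduler and return immediately.
    await spawn(request, background_task())
    return web.Response(text="accepted")

app = web.Application()
setup(app)  # attach an aiojobs scheduler to the application
app.router.add_get("/", handler)
web.run_app(app)
```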