job-worker | Job/Worker pattern example in Go
kandi X-RAY | job-worker Summary
Job/Worker pattern example in golang
Top functions reviewed by kandi - BETA
- payloadHandler handles incoming payload requests
- Start starts the worker
- Run runs the job queue
- NewWorker creates a new Worker
- NewDispatcher creates a new Dispatcher
job-worker Key Features
job-worker Examples and Code Snippets
Community Discussions
Trending Discussions on job-worker
QUESTION
I'm working on a problem where I have a set of "warm workers". That means that they are maintained in memory, maintain their own contexts and are callable. I've been looking at various Go worker implementations but all depend on closures or simple calculation functions that return results.
I found an example of a worker that lets me spin up my contexts and distribute tasks to them based on a max queue and max routine limit: https://github.com/cahitbeyaz/job-worker/blob/master/main.go#L131
However this pattern doesn't allow me to return a result from the context and feed it back. I'm also using a web server and so the web handler has to receive the result and respond accordingly.
Is there a specific/better pattern I should/could follow or a way I could adapt the job-worker example?
PS. At first I thought I could create a ResultQueue where results are pushed back and consumed by the web handler. I don't think the ordering of the queue can be counted on though.
...ANSWER
Answered 2019-Oct-26 at 17:50 The solution is incredibly simple (I was definitely over-complicating it). I'm not sure how efficient this actually is, but I doubt it's too terrible. Suggestions for a better pattern are still welcome:
In the job definition declare a channel to feed back the result:
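A minimal sketch of that idea follows. The names (`Job`, `Result`, `worker`) are illustrative assumptions, not the repo's exact types; the point is that each job carries its own result channel, so the handler that submitted it reads only its own result and queue ordering no longer matters:

```go
package main

import "fmt"

// Result is what the worker sends back for one job (illustrative type).
type Result struct {
	Value string
	Err   error
}

// Job carries its own result channel so the HTTP handler that
// submitted it can block until the worker replies.
type Job struct {
	Payload string
	Done    chan Result // worker sends exactly one Result here
}

func worker(jobs <-chan Job) {
	for j := range jobs {
		// ... do the real work with the warm context here ...
		j.Done <- Result{Value: "processed: " + j.Payload}
	}
}

func main() {
	jobs := make(chan Job)
	go worker(jobs)

	// Inside a web handler: submit the job, then wait on its
	// private channel before writing the HTTP response.
	j := Job{Payload: "req-42", Done: make(chan Result, 1)}
	jobs <- j
	res := <-j.Done
	fmt.Println(res.Value) // prints "processed: req-42"
}
```

Because each handler blocks on its own buffered channel, no shared "ResultQueue" ordering guarantees are needed.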
QUESTION
I'm trying to build a worker pool / job queue system to handle as many HTTP requests as possible on each API endpoint. I looked into this example and got it working just fine, except that I stumbled upon the problem that I don't understand how to expand the pool / job queue to different endpoints.
For scenario's sake, let's sketch a Go HTTP server that handles a million requests per minute across different endpoints and request types (GET, POST, etc.).
How can I expand on this concept? Should I create different worker pools and jobs for each endpoint. Or can I create different jobs and enter them in the same queue and have the same pool handle these?
I want to maintain simplicity where if I create a new API endpoint I don't have to create new worker pools, so I can focus just on the api. But performance is also very much in mind.
The code I'm trying to build on is taken from the example linked earlier, here is a github 'gist' of somebody else with this code.
...ANSWER
Answered 2017-Oct-19 at 14:59 It's not clear why you need a worker pool at all. Wouldn't plain goroutines be enough?
If you are limited by resources, you can consider implementing rate limiting. If not, why not simply spawn goroutines as needed?
The best way to learn is to study how others do good stuff.
Have a look at https://github.com/valyala/fasthttp
Fast HTTP package for Go. Tuned for high performance. Zero memory allocations in hot paths. Up to 10x faster than net/http.
They are claiming:
serving up to 200K rps from more than 1.5M concurrent keep-alive connections per physical server
That is quite impressive, and I doubt you can do any better with a pool / job queue.
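If you do want to keep the pool approach from the original question, one shared queue can serve every endpoint by having jobs implement a common interface, so adding a new API route only means adding a new job type, not a new pool. All names below are illustrative, not from the linked gist:

```go
package main

import "fmt"

// Job is the common interface every endpoint's work item implements.
type Job interface {
	Process()
}

type GetJob struct{ Path string }
type PostJob struct{ Body string }

func (j GetJob) Process()  { fmt.Println("GET ", j.Path) }
func (j PostJob) Process() { fmt.Println("POST", j.Body) }

// worker drains the shared queue and signals done when it closes.
func worker(id int, jobs <-chan Job, done chan<- struct{}) {
	for j := range jobs {
		j.Process()
	}
	done <- struct{}{}
}

func main() {
	jobs := make(chan Job, 8)
	done := make(chan struct{})
	for i := 0; i < 2; i++ { // small shared pool
		go worker(i, jobs, done)
	}

	// Different endpoints enqueue different job types into the
	// same channel; the pool doesn't care which handler sent them.
	jobs <- GetJob{Path: "/users"}
	jobs <- PostJob{Body: `{"name":"a"}`}
	close(jobs)
	for i := 0; i < 2; i++ {
		<-done
	}
}
```

The trade-off is that one slow job type can delay the others; per-endpoint queues only become worth it if that head-of-line blocking shows up in practice.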
QUESTION
I have a website and a webjob, where the website is a one-way client and the webjob is the worker.
I use the Azure ServiceBus transport for the queue.
I get the following error:
InvalidOperationException: Cannot use ourselves as timeout manager because we're a one-way client
when I try to call Bus.Defer from the website bus.
Since Azure Service Bus has built-in support for a timeout manager, shouldn't this work even from a one-way client?
The documentation on Bus.Defer says: "Defers the delivery of the message by attaching a header to it and delivering it to the configured timeout manager endpoint (defaults to be ourselves). When the time is right, the deferred message is returned to the address indicated by the header."
Could I fix this by setting the ReturnAddress like this:
...ANSWER
Answered 2017-Jan-22 at 20:52 Could I fix this by setting the ReturnAddress like this: headers.Add(Rebus.Messages.Headers.ReturnAddress, "webjob-worker");
Yes :)
The problem is this: when you await bus.Defer a message with Rebus, it defaults to returning the message to the input queue of the sender.
When you're a one-way client, you don't have an input queue, and thus there is no way for you to receive the message after the timeout has elapsed.
Setting the return address fixes this, although I admit the solution does not exactly reek of elegance. A nicer API would be if Rebus had a Defer method on its routing API, which could be called like this:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install job-worker
Support