concurrentqueue | fast multi-producer, multi-consumer lock-free queue | Architecture library
kandi X-RAY | concurrentqueue Summary
An industrial-strength lock-free queue for C++. Note: If all you need is a single-producer, single-consumer queue, I have one of those too.
concurrentqueue Examples and Code Snippets
public static void main(String[] args) {
    ConcurrentLinkedQueue<Integer> clq = new ConcurrentLinkedQueue<>();
    clq.add(10);
    clq.add(20);
    clq.add(30);
    clq.add(40);
    clq.add(50);
    // Display the existing queue
    System.out.println("ConcurrentLinkedQueue: " + clq);
}
Community Discussions
Trending Discussions on concurrentqueue
QUESTION
I have window service that polls a web service for new items every 30 seconds. If it finds any new items, it checks to see if they need to be "processed" and then puts them in a list to process. I spawn off different threads to process 5 at a time, and when one finishes, another one will fill the empty slot. Once everything has finished, the program sleeps for 30 seconds and then polls again.
My issue is that while the items are being processed (which could take up to 15 minutes), new items are being created which may also need to be processed. My problem is that the main thread gets held up waiting for every last thread to finish before it sleeps and starts the process all over.
What I'm looking to do is have the main thread continue to poll the web service every 30 seconds, however instead of getting held up, add any new items it finds to a list, which would be processed in a separate worker thread. In that worker thread, it would still have say only 5 slots available, but they would essentially always all be filled, assuming the main thread continues to find new items to process.
I hope that makes sense. Thanks!
EDIT: updated code sample
I put together this as a worker thread that operates on a ConcurrentQueue. Any way to improve this?
...ANSWER
Answered 2021-May-21 at 20:31

One simple way to do it is to have 5 threads reading from a concurrent queue. The main thread queues items and the worker threads do blocking reads from the queue.
Note: The workers run in an infinite loop. They call TryDequeue, process the item if they got one, or sleep for one second if they didn't. They can also check for an exit flag.
To keep your service properly behaved, you might have an independent polling thread that queues the items, leaving the main thread free to respond to start, stop, and pause requests.
Pseudo code for worker thread:
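The pseudo code itself isn't reproduced above, so here is a rough Java analog of what the answer describes: non-blocking TryDequeue becomes poll(), with a backoff sleep on an empty queue and an exit flag checked each pass. The sleep is shortened from the suggested one second, and all names are illustrative:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class PollingWorkers {
    static final ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
    static final AtomicBoolean exit = new AtomicBoolean(false);
    static final AtomicInteger processed = new AtomicInteger();

    // Worker loop: try to dequeue; process on success, back off briefly on
    // failure, and re-check the exit flag on every pass.
    static void workerLoop() {
        while (!exit.get()) {
            Integer item = queue.poll();      // non-blocking, like TryDequeue
            if (item != null) {
                processed.incrementAndGet();  // stand-in for real processing
            } else {
                try { Thread.sleep(5); } catch (InterruptedException e) { return; }
            }
        }
    }

    // Start n workers, wait for the queue to drain, then signal exit and join.
    static int run(int nWorkers, int nItems) {
        for (int i = 0; i < nItems; i++) queue.add(i);
        Thread[] workers = new Thread[nWorkers];
        for (int i = 0; i < nWorkers; i++) {
            workers[i] = new Thread(PollingWorkers::workerLoop);
            workers[i].start();
        }
        try {
            while (!queue.isEmpty()) Thread.sleep(5);
            exit.set(true);
            for (Thread w : workers) w.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return processed.get();
    }
}
```

Because each item is dequeued exactly once, five workers can drain the queue concurrently while the polling thread keeps adding new items.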
QUESTION
I was wondering if someone can please help with the following situation:
I cannot solve a memory leak with a RabbitMQ Publisher written in C# and using .Net core 5.0.
This is the csproj file :
...ANSWER
Answered 2021-Apr-28 at 08:16

First, it seems you are clogging the event handling thread. So, what I'd do is decouple event handling from the actual processing:
(Untested! Just an outline!)
REMOVED FAULTY CODE
Then in serviceInstance1, I would have Publish enqueue the orders in a BlockingCollection, on which a dedicated thread is waiting. That thread does the actual send, so the orders are marshalled to it regardless of what you choose to do in Processor, and everything stays decoupled and in order.
You will probably want to set DataflowBlockOptions according to your requirements.
Mind that this is just a coarse outline, not a complete solution. You may also want to go from there and minimize string operations etc.
EDIT: Some more thoughts that came to me since yesterday, in no particular order:
- Might it be beneficial to ditch the first filter and filter out empty sets of JObjects later?
- Maybe it's worth trying System.Text.Json instead of Newtonsoft?
- Is there a more efficient way to get from XML to JSON? (I was thinking XSLT, but I'm really not sure.)
- I'd recommend rigging up BenchmarkDotNet with a MemoryDiagnoser to document / prove that your changes have positive effects.
- Don't forget to have a look into DataflowBlockOptions to tweak the pipeline's behavior.
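The C# outline was removed above, but the core idea of the answer (a dedicated sender thread blocking on a queue, as with a BlockingCollection consumer) can be sketched in Java with a LinkedBlockingQueue. The class name and the poison-pill shutdown sentinel are illustrative, not from the answer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class Publisher {
    private static final String POISON = "<poison>";           // shutdown sentinel
    private final LinkedBlockingQueue<String> orders = new LinkedBlockingQueue<>();
    private final List<String> sent = new ArrayList<>();       // stands in for the real send
    private final Thread sender;

    Publisher() {
        // Dedicated thread: blocks on take(), so sends happen off the event
        // thread, one at a time, and in the order they were published.
        sender = new Thread(() -> {
            try {
                while (true) {
                    String order = orders.take();  // blocks until an order arrives
                    if (order.equals(POISON)) return;
                    sent.add(order);               // the actual send would go here
                }
            } catch (InterruptedException ignored) {}
        });
        sender.start();
    }

    void publish(String order) { orders.add(order); }  // cheap; never blocks the caller

    List<String> shutdown() {
        orders.add(POISON);
        try { sender.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sent;
    }
}
```

The event handler only pays the cost of an enqueue; the slow network send is confined to one thread, which also guarantees in-order delivery.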
QUESTION
I have a high-speed stream of stock prices coming from a vendor... maybe 5000 per second (about 8000 different symbols).
I have a table (SymbolPrice) in my database that needs to be updated with the most recent last price.
I don't seem to be able to keep the database updates fast enough to process the queue of last prices.
I am on an Azure Sql Server database, so I was able to upgrade the database to a premium version that supports In-Memory tables and made my SymbolPrice table an In-Memory table... but still not good enough.
If it ends up skipping a price, this is not a problem, as long as the most recent price gets in there as quickly as possible... so if I get blasted with 10 in a row, only the last needs to be written. This sounds easy, except the 10 in a row might be intermixed with other symbols.
So my current solution is to use a ConcurrentDictionary to hold only the most recent price, plus a queue of symbols to push updates to the database (see code below)... but this still isn't fast enough.
One way to solve this would be to simply do repeated passes through the whole dictionary and update the database with the most recent prices, but this is somewhat wasteful: values that might only change every few minutes would be rewritten at the same rate as values that update many times a second.
Any thoughts on how this can be done better?
Thanks!
Brian
...
ANSWER
Answered 2021-Apr-23 at 19:19

You need to use something that enables you to query the stream; SQL is not the best tool for it. Search for Complex Event Processing, and look at Kafka / Event Hub + Stream Analytics.
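Independent of stream processing, the coalescing idea from the question (keep only the latest price per symbol, and track which symbols have changed since the last flush) can be sketched in Java. All names are illustrative and the database write is stubbed out:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class PriceCoalescer {
    // Latest price per symbol; rapid updates overwrite, so only the newest survives.
    final ConcurrentHashMap<String, Double> latest = new ConcurrentHashMap<>();
    // Symbols changed since the last flush; a set, so 10 rapid ticks = 1 entry.
    final Set<String> dirty = ConcurrentHashMap.newKeySet();

    void onTick(String symbol, double price) {
        latest.put(symbol, price);
        dirty.add(symbol);
    }

    // One flush pass: write only the symbols that actually changed.
    // Removing from dirty *before* reading the price means a tick that lands
    // mid-flush re-dirties the symbol and is picked up next pass.
    List<String> flush() {
        List<String> written = new ArrayList<>();
        for (String symbol : dirty) {
            dirty.remove(symbol);
            Double price = latest.get(symbol);
            written.add(symbol + "=" + price);   // the DB UPDATE would go here
        }
        return written;
    }
}
```

This avoids the wasteful full-dictionary pass: symbols that tick once every few minutes are only written when they actually change.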
QUESTION
In Objective-C and Swift, is there any guarantee of order of execution for concurrent calls being made inside of a serial queue's async block?
Pseudo-code:
...ANSWER
Answered 2021-Apr-22 at 19:40

I've numbered the blocks in your question, so I can reference them here:
Blocks 1 and 3 are both running on a serial queue, thus block 3 will only run once block 1 is done.
However, blocks 1 and 3 don't actually wait for task1/task2; they just queue off work to happen asynchronously in blocks 2 and 4, which finishes near-instantly.
From then on, both task1 and task2 will be running concurrently, and will finish in an arbitrary order. The only guarantee is that task1 will start before task2.
I always like to use the analogy of ordering a pizza vs making a pizza. Queuing async work is like ordering a pizza. It doesn't mean you have a pizza ready immediately, and you're not going to be blocked from doing other things while the pizzeria is baking your pizza.
Your blocks 1 and 3 are strongly ordered, so block 1 will finish before block 3 starts. However, all each block does is order a pizza, and that's fast. It doesn't mean pizza 1 (task1) is done before pizza 2 (task2); it just means you got off the first phone call before making the second.
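The same "strongly ordered submission, arbitrary completion" behavior can be sketched outside GCD. In this hedged Java analog (not the question's Swift code), a single-thread executor plays the serial queue and a thread pool plays the concurrent one:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerialDispatch {
    public static List<String> run() {
        ExecutorService serial = Executors.newSingleThreadExecutor(); // ~ serial queue
        ExecutorService global = Executors.newCachedThreadPool();     // ~ concurrent queue
        List<String> events = new CopyOnWriteArrayList<>();

        // Block 1: runs on the serial queue, but only *starts* task1 and returns.
        serial.execute(() -> {
            events.add("block1");
            global.execute(() -> { sleep(200); events.add("task1 done"); });
        });
        // Block 3: guaranteed to run after block 1 returns, yet task1 is still in flight.
        serial.execute(() -> {
            events.add("block3");
            global.execute(() -> { sleep(10); events.add("task2 done"); });
        });

        shutdown(serial);
        shutdown(global);
        return events;
    }

    static void shutdown(ExecutorService e) {
        e.shutdown();
        try { e.awaitTermination(10, TimeUnit.SECONDS); }
        catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}
```

Block 1 always precedes block 3, but the two "pizzas" finish on their own schedule.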
QUESTION
I'm trying to set up a shiny server on the free tier AWS EC2 to test my app but I can't get all the packages compiled and installed.
e.g. duckdb
in the terminal connected to my instance I paste:
...ANSWER
Answered 2021-Mar-09 at 10:11

This is indeed due to the lack of RAM on the free-tier VM. Binary packages would indeed solve this. But we will see whether we can do something about that as well.
QUESTION
I have studied GCD and thread safety. Apple's documentation says GCD is thread-safe, meaning multiple threads can access a queue. I understood thread safety to mean that an object always gives the same result no matter how many threads access it.
I think these two meanings can't be the same, because I tested the case below, which sums 0 to 9999.
The value of something.n is not the same each time I execute the code below. If GCD is thread-safe, why isn't something.n always the same?
I'm really confused by this. Could you help me? I really want to master thread safety!
...ANSWER
Answered 2021-Mar-07 at 10:19

Your current queue is concurrent, so the dispatched blocks run in parallel and their updates to something.n race with each other. GCD being thread-safe means the queue itself won't be corrupted by concurrent access; it does not serialize your writes to shared state. Use a serial queue instead, so the additions cannot overlap.
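The fix can be illustrated outside GCD as well. In this hedged Java analog, a single-thread executor plays the serial queue, so the 0-to-9999 sum comes out the same every run (all names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SafeSum {
    static int n = 0;  // the shared "something.n"

    // Confining all writes to one thread (the serial-queue analog) means the
    // read-modify-write of n += v can never interleave with another write.
    public static int sumSerially(int upTo) {
        n = 0;
        ExecutorService serial = Executors.newSingleThreadExecutor();
        for (int i = 0; i <= upTo; i++) {
            final int v = i;
            serial.execute(() -> n += v);  // safe: only one thread ever touches n
        }
        serial.shutdown();
        try { serial.awaitTermination(30, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return n;
    }
}
```

Submitting the same increments to a thread pool instead would lose updates nondeterministically, which is exactly the varying something.n the asker observed.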
QUESTION
I have a scenario where I have a queue of data to process. All the data gets read into memory and stored into a ConcurrentQueue<T> by one thread while another thread starts dequeueing and processing the data, T being a custom class with a large amount of data to process.
The reader thread fills one side of the queue, while the processor thread works down the other side. With two threads, this works perfectly. Processing the data takes about 4 times more work/time than loading it into memory, so I've been trying to increase the number of processing threads. The problem is that the processed data needs to be SAVED in the same order as it's read. Obviously, having multiple threads processing different data in parallel means they won't finish at the same time. The threads do read the data in order because of ConcurrentQueue, and they Dequeue the data in the correct order, but I haven't found a way to synchronize the threads' "save" step so that each thread saves in the same order it dequeued.
I know .NET contains a load of thread helpers, and I've looked at things like Monitor, and Barrier, but they're so wildly different that I'm not sure which helper class or which method would work best.
Anyone have any suggestions or ideas?
...ANSWER
Answered 2021-Feb-05 at 00:26

There are many ways to do this. Here is a TPL Dataflow example.
Dataflow has a few advantages:
- It can deal with both synchronous and asynchronous workloads.
- You can create larger pipelines.
- Supports task schedulers and cancellation tokens.
- Can run perpetually, or be forced to complete.
- Each block can support multiple producers and consumers
It does have some disadvantages, though:
- It's a bit of a learning curve for the uninitiated.
- It's designed around a pipeline, not linear collections per se, so using it can be a little unintuitive.
- Creating your own custom blocks will require a deep dive into Stephen Toub's twisted TPL brain.
- It's not as lightweight as other producer-consumer frameworks, but it makes up for that with flexibility.
Example
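The TPL Dataflow example itself isn't shown above. As a hedged analog of the order-preserving idea, here is a Java sketch that processes items in parallel on a pool but consumes the results in submission order, so the "save" step sees them exactly as they were read (the names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OrderedPipeline {
    public static List<String> run(List<String> items, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<String>> inFlight = new ArrayList<>();
        for (String item : items) {
            inFlight.add(pool.submit(() -> process(item)));  // runs in parallel
        }
        List<String> saved = new ArrayList<>();
        try {
            for (Future<String> f : inFlight) {
                saved.add(f.get());  // blocks until THIS item is done: order preserved
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        return saved;
    }

    static String process(String item) {
        return item.toUpperCase();  // stand-in for the expensive work
    }
}
```

This mirrors what a Dataflow TransformBlock does with ordered output: workers may finish out of order, but results are drained in the order the work was queued.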
QUESTION
I'm building a candle recorder (Binance crypto), interested in 1-minute candles, including intra-candle data for market-study purposes (but eventually I could use this same code to actually be my eyes on what's happening in the market).
To avoid eventual lag / EF / SQL performance issues etc., I decided to accomplish this using two threads.
One receives the subscribed (async) tokens from Binance and puts them in a ConcurrentQueue, while the other keeps trying to dequeue and save the data in MSSQL.
My question concerns the second thread, a while(true) loop. What's the best approach to saving 200+ updates/sec to SQL when the updates come in individually (sometimes 300 in a matter of 300 ms, sometimes fewer) using EF?
Should I open the SQL connection each time I want to save (performance)? What's the best approach to accomplish this?
-- EDITED -- At one point I got 600k+ items in the queue, so I'm facing problems inserting into SQL. I changed from LINQ to SQL to EF.
Here's my actual code:
...ANSWER
Answered 2021-Jan-24 at 21:08

I see one error in your code: you're sleeping the background thread after every insert. Don't sleep if there's more data. Instead of:
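The snippet the answer refers to isn't shown here. The suggested pattern, drain everything currently queued into one batch and save it in a single round trip, sleeping only when the queue is truly empty, might look like this in Java (a sketch with illustrative names; the SQL insert is stubbed out):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BatchSaver {
    // Drain the queue into batches of at most maxBatch items; each batch would
    // be one bulk INSERT instead of one INSERT (plus a sleep) per row.
    public static List<List<Integer>> drainAll(ConcurrentLinkedQueue<Integer> queue,
                                               int maxBatch) {
        List<List<Integer>> batches = new ArrayList<>();
        while (!queue.isEmpty()) {
            List<Integer> batch = new ArrayList<>();
            Integer item;
            while (batch.size() < maxBatch && (item = queue.poll()) != null) {
                batch.add(item);
            }
            if (!batch.isEmpty()) batches.add(batch);  // bulk save would go here
        }
        return batches;
        // In the real loop: sleep only when drainAll() comes back empty.
    }
}
```

With 300 items arriving in 300 ms, this turns hundreds of per-row round trips into a handful of batched ones, which is usually what keeps the queue from growing to 600k+.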
QUESTION
I have declared a ConcurrentQueue and added a list of GUIDs. Adding to the queue is fine, but when I access the queue from inside the TimerTrigger function it seems to be empty (updateQueue.Count is 0). This behavior happens in the cloud, but when I execute the same code locally it works fine.
ANSWER
Answered 2021-Jan-19 at 18:48

While Azure Functions may typically share a single backplane, that is not guaranteed. Resources can be spun down or up at any time, and new copies of functions may not have access to the original state. As a result, if you use static fields to share data across function executions, your code should be able to reload that data from an external source.
That said, this is also not necessarily preferable due to how Azure Functions are designed to be used. Azure Functions enable high-throughput via dynamic scalability. As more resources are needed to process the current workload, they can be provisioned automatically to keep throughput high.
As a result, doing too much work in a single function execution can actually interfere with overall system throughput, since there is no way for the function backplane to provision additional workers to handle the load.
If you need to preserve state, use a form of permanent storage. This could take the form of an Azure Durable Function, an Azure Storage Queue, an Azure Service Bus queue, or even a database. In addition, in order to best take advantage of your function's scalability, try to reduce the workload to manageable batches that allow for large amounts of parallel processing. While you may need to frontload your work in a single operation, you want the subsequent processing to be more granular where possible.
QUESTION
Step 1: Declare a concurrent queue with the .initiallyInactive attribute.
Step 2: Call a function having a sync closure.
...ANSWER
Answered 2020-Dec-16 at 17:43

You called it with sync, so the call will wait until the block is scheduled and completes. The queue is inactive, so it cannot schedule blocks. Therefore, the block can't complete, and the sync can't return. Is there a different behavior you're expecting from sync?
This construct is useful if you want all the processes to wait for some condition before starting. For example, you might make an inactive queue that guards access to something that needs to initialize (logging in, for example, or reading configuration from disk). Once that has initialized, it can call .activate(), and all of these other processes will start. If the system is already initialized, the .sync {} call will return immediately.
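The "inactive queue as a gate" idea can be sketched outside GCD too. In this hedged Java analog, a CountDownLatch plays the inactive queue and countDown() plays .activate(); all names are illustrative:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class InitGate {
    // Work submitted before activate() simply waits on the gate instead of running,
    // just like blocks queued onto an inactive dispatch queue.
    static final CountDownLatch activated = new CountDownLatch(1);
    static final AtomicInteger ran = new AtomicInteger();

    static Thread submit() {
        Thread t = new Thread(() -> {
            try {
                activated.await();        // like sync-ing onto an inactive queue
                ran.incrementAndGet();    // only runs after activation
            } catch (InterruptedException ignored) {}
        });
        t.start();
        return t;
    }

    static void activate() { activated.countDown(); }  // like queue.activate()

    static void join(Thread... threads) {
        for (Thread t : threads) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }
}
```

Once the gate is open, later submissions pass straight through, matching the "returns immediately if already initialized" behavior described above.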
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported