SPSC_Queue | A highly optimized single producer single consumer message queue
kandi X-RAY | SPSC_Queue Summary
A single-producer single-consumer lock-free queue C++ template for ultimate low latency, usable both for multi-threaded communication and for shared-memory IPC under Linux. The communication latency for a 10-200 B message is within 50-100 ns between two CPU cores on the same node. Sender and receiver never copy a single byte of the message: a message is allocated in the queue memory and written by the sender, then read directly by the receiver in the other thread/process.
Community Discussions
QUESTION
Trying to implement a producer-consumer scenario where one process feeds cv::Mat objects into a queue buffer and the consumer consumes them. cv::Mat has a settable allocator that can be implemented for custom memory management, but I had no success in making it work: popping from the queue on the consumer side led to segfaults. The closest I've got is this implementation, where cv::Mat is serialized and deserialized. Another downside of this implementation is that the buffer size is defined at compile time. So to reiterate the question: how to efficiently implement a cv::Mat lock-free queue in shared memory?
ANSWER
Answered 2020-Nov-18 at 20:24

The "settable" allocator for cv::Mat is NOT a Boost Interprocess allocator, and it looks like it would be "hard" to implement the cv::Mat allocator interface to wrap one, as well. This could be because the fancier allocators are intended for CUDA support, but I'm guessing a bit here.

So, I'd strongly suggest serializing. This should be okay unless you're dealing with giant matrices. See e.g. the Boost documentation: of course you can serialize to shared memory, https://www.boost.org/doc/libs/1_37_0/doc/html/interprocess/streams.html or https://www.boost.org/doc/libs/1_74_0/libs/iostreams/doc/quick_reference.html#devices

Now if you need large matrices (and they NEED to be OpenCV anyway), consider using the existing CV allocators to allocate from an already existing contiguous buffer in your shared memory. This could be as simple as a vector (or array) with a shared-memory allocator, constructed inside shared memory: either managed (managed_shared_memory) or unmanaged (a bip::mapped_region that works on top of a bip::shared_memory_object).
QUESTION
Let's imagine a lock-free concurrent SPSC (single-producer / single-consumer) queue.
- The producer thread reads head, tail, cached_tail and writes head, cached_tail.
- The consumer thread reads head, tail, cached_head and writes tail, cached_head.
Note that cached_tail is accessed only by the producer thread, just like cached_head is accessed only by the consumer thread. They can be thought of as private thread-local variables, so they are unsynchronized and thus not defined as atomic.
The data layout of the queue is the following:
ANSWER
Answered 2020-Apr-30 at 11:57

Thank you for providing the pseudocode - it is still lacking some details, but I think I get the basic idea. You have a bounded SPSC queue where the indexes can wrap around, and you use the cached_tail variable in push to check if there are free slots, so you can avoid loading tail from a potentially invalidated cache line (and vice versa for pop).

I would suggest putting head and cached_tail next to each other (i.e., on the same cache line), and tail and cached_head on a different one. push always reads both variables - head and cached_tail - so it makes sense to have them close together. cached_tail is only updated if there are no more free slots and we have to reload tail.
Your code is a bit thin on details, but it seems that there is some room for optimization.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported