SPSC_Queue | A highly optimized single-producer single-consumer message queue

 by MengRao | C++ | Version: Current | License: MIT

kandi X-RAY | SPSC_Queue Summary

SPSC_Queue is a C++ library with a permissive (MIT) license, no reported bugs or vulnerabilities, and low support activity. You can download it from GitHub.

A single-producer single-consumer lock-free queue C++ template for ultimate low latency, usable both for multithread communication and for shared-memory IPC under Linux. The communication latency for a 10-200 byte message is within 50-100 ns between two CPU cores on the same node. The sender and receiver never copy a single byte of the message: a message is allocated in the queue's own memory and written by the sender, then read directly by the receiver in another thread or process.

            Support

              SPSC_Queue has a low-activity ecosystem.
              It has 192 star(s) with 61 fork(s). There are 14 watchers for this library.
              It had no major release in the last 6 months.
              There is 1 open issue and 3 have been closed. On average, issues are closed in 280 days. There is 1 open pull request and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of SPSC_Queue is current.

            Quality

              SPSC_Queue has 0 bugs and 0 code smells.

            Security

              SPSC_Queue has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              SPSC_Queue code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              SPSC_Queue is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              SPSC_Queue releases are not available. You will need to build from source code and install.


            SPSC_Queue Key Features

            No Key Features are available at this moment for SPSC_Queue.

            SPSC_Queue Examples and Code Snippets

            No Code Snippets are available at this moment for SPSC_Queue.

            Community Discussions

            QUESTION

            How to construct boost spsc_queue with runtime size parameter to exchange cv::Mat objects between two processes using shared memory?
            Asked 2020-Nov-18 at 20:24

            Trying to implement a producer-consumer scenario where one process feeds cv::Mat objects into a queue buffer and the consumer consumes them. cv::Mat has a settable allocator that can be implemented for custom memory management, but I had no success in making it work. Popping from the queue on the consumer side led to segfaults. The closest I've got is this implementation, where cv::Mat is serialized and deserialized. Another downside of this implementation is that the buffer size is defined at compile time. So to reiterate the question: how to efficiently implement a lock-free queue of cv::Mat in shared memory?

            Related questions:

            ...

            ANSWER

            Answered 2020-Nov-18 at 20:24

            The "settable" allocator for cv::Mat is NOT a Boost Interprocess allocator.

            It looks like it's gonna be "hard" to implement the cv::MatAllocator interface to wrap one, as well.

            This could be because the fancier allocators are intended for CUDA support, but I'm guessing a bit here.

            So, I'd strongly suggest serializing. This should be okay unless you're dealing with giant matrices. See e.g.

            Of course you can serialize to shared memory: https://www.boost.org/doc/libs/1_37_0/doc/html/interprocess/streams.html or https://www.boost.org/doc/libs/1_74_0/libs/iostreams/doc/quick_reference.html#devices

            Now if you need large matrices (and they NEED to be OpenCV anyways) consider using existing CV allocators to allocate from an already existing contiguous buffer in your shared memory.

            This could be as simple as a vector (with a shared-memory allocator) or, indeed, an array constructed inside shared memory, either managed (managed_shared_memory) or unmanaged (a bip::mapped_region that works on top of a bip::shared_memory_object).

            Source https://stackoverflow.com/questions/64897387

            QUESTION

            Avoiding false sharing of SPSC queue indices
            Asked 2020-Apr-30 at 13:15

            Let's imagine a lock-free concurrent SPSC (single-producer / single-consumer) queue.

            • The producer thread reads head, tail, cached_tail and writes head, cached_tail.
            • The consumer thread reads head, tail, cached_head and writes tail, cached_head.

            Note that cached_tail is accessed only by the producer thread, just like cached_head is accessed only by the consumer thread. They can be thought of as private thread-local variables, so they are unsynchronized and thus not defined as atomic.

            The data layout of the queue is the following:

            ...

            ANSWER

            Answered 2020-Apr-30 at 11:57

            Thank you for providing the pseudocode - it is still lacking some details, but I think I get the basic idea. You have a bounded SPSC queue where the indexes can wrap around, and you use the cached_tail variable in push to check if there are free slots, so you can avoid loading tail from a potentially invalidated cache line (and vice versa for pop).

            I would suggest putting head and cached_tail next to each other (i.e., on the same cache line), and tail and cached_head on a different one. push always reads both variables - head and cached_tail, so it makes sense to have them close together. cached_tail is only updated if there are no more free slots and we have to reload tail.

            Your code is a bit thin on details, but it seems that there is some room for optimization:

            Source https://stackoverflow.com/questions/61507688

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install SPSC_Queue

            You can download it from GitHub.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/MengRao/SPSC_Queue.git

          • CLI

            gh repo clone MengRao/SPSC_Queue

          • SSH

            git@github.com:MengRao/SPSC_Queue.git
