lockfree | Lockfree data structures for Rust | Map library
kandi X-RAY | lockfree Summary
Lockfree data structures for Rust.
Community Discussions
Trending Discussions on lockfree
QUESTION
Trying to implement a producer-consumer scenario where one process feeds cv::Mat objects into a queue buffer and the consumer consumes them. cv::Mat has a settable allocator that can be implemented for custom memory management, but I had no success in making it work: popping from the queue on the consumer side led to segfaults. The closest I've got is this implementation where cv::Mat is serialized and deserialized. Another downside of this implementation is that the buffer size is defined during compilation. So to reiterate the question: how to efficiently implement a cv::Mat lockfree queue in shared memory?
...ANSWER
Answered 2020-Nov-18 at 20:24
The "settable" allocator for cv::Mat is NOT a Boost Interprocess allocator. It looks like it's gonna be "hard" to implement the cv::MatAllocator interface to wrap one, as well. This could be because the fancier allocators are intended for CUDA support, but I'm guessing a bit here.
So, I'd strongly suggest serializing. This should be okay unless you're dealing with giant matrices. And of course you can serialize to shared memory: https://www.boost.org/doc/libs/1_37_0/doc/html/interprocess/streams.html or https://www.boost.org/doc/libs/1_74_0/libs/iostreams/doc/quick_reference.html#devices
Now if you need large matrices (and they NEED to be OpenCV anyways) consider using existing CV allocators to allocate from an already existing contiguous buffer in your shared memory.
This could be as simple as a vector (with an interprocess allocator) or, indeed, an array constructed inside shared memory: either managed (managed_shared_memory) or unmanaged (bip::mapped_region on top of bip::shared_memory_object).
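A minimal sketch of that last option, assuming Boost.Interprocess is available; the segment name "FrameSegment", the object name "frame0", and the 640x480 BGR geometry are all placeholders:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <opencv2/core.hpp>

namespace bip = boost::interprocess;
using ShmAlloc  = bip::allocator<unsigned char, bip::managed_shared_memory::segment_manager>;
using ShmVector = bip::vector<unsigned char, ShmAlloc>;

int main() {
    // One contiguous pixel buffer living inside the managed shared segment
    bip::managed_shared_memory segment(bip::open_or_create, "FrameSegment", 10 << 20);
    ShmVector* pixels = segment.find_or_construct<ShmVector>("frame0")(
        segment.get_segment_manager());
    pixels->resize(480 * 640 * 3);
    // A cv::Mat header over the shared buffer: no copy, no ownership taken.
    // The consumer process opens the same segment, finds "frame0", and wraps
    // the same bytes in its own cv::Mat header.
    cv::Mat frame(480, 640, CV_8UC3, pixels->data());
}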
QUESTION
Hello great minds of stackoverflow. My project builds Boost via a CMake ExternalProject_Add command. The b2 build command is as follows:
ANSWER
Answered 2020-May-01 at 19:41
Apparently I've been doing it wrong this whole time. You shouldn't set --prefix and --stagedir to the same location. Previous versions of Boost didn't seem to care about this; one of the later versions started caring, which caused my problem.
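For illustration, with hypothetical paths, the staged and installed trees should simply live in different directories:

./b2 --stagedir=/opt/boost/stage stage
./b2 --prefix=/opt/boost/install install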
QUESTION
I have the following code:
...ANSWER
Answered 2020-Apr-21 at 06:24
You're leaking worker because you used new to construct it and never use delete to destroy it. The other ASan messages are there because, as part of constructing worker, its member queue is also constructed.
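A minimal sketch of the fix, with Worker standing in for the asker's type: give it automatic storage, or let a smart pointer own it.

#include <memory>

struct Worker {
    // ... owns the lock-free queue as a member ...
};

int main() {
    auto worker = std::make_unique<Worker>();  // replaces 'new Worker' + forgotten delete
    // ... use *worker ...
}   // unique_ptr destroys the Worker (and its member queue) here; ASan stays quiet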
QUESTION
I read C++ Standard (n4713)'s § 32.6.1 3:
Operations that are lock-free should also be address-free. That is, atomic operations on the same memory location via two different addresses will communicate atomically. The implementation should not depend on any per-process state. This restriction enables communication by memory that is mapped into a process more than once and by memory that is shared between two processes.
So it sounds like it is possible to perform a lock-free atomic operation on the same memory location. I wonder how it can be done.
Let's say I have a named shared memory segment on Linux (via shm_open() and mmap()). How can I perform a lockfree operation on the first 4 bytes of the shared memory segment for example?
At first, I thought I could just reinterpret_cast the pointer to std::atomic<int>*. But then I read this. It first points out that std::atomic<T> might not have the same size or alignment as T:
...When we designed the C++11 atomics, I was under the misimpression that it would be possible to semi-portably apply atomic operations to data not declared to be atomic, using code such as
ANSWER
Answered 2018-Jul-10 at 23:20
The C++ standard doesn't concern itself with multiple processes, so there can't be any formal answers. This answer will assume the program behaves more or less the same with processes as with threads in regards to synchronization.
The first solution requires C++20's std::atomic_ref.
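A sketch of that first solution, assuming C++20 and that shm_base stands for the suitably aligned pointer returned by mmap (both names here are placeholders):

#include <atomic>
#include <cstdint>

void increment_shared_counter(void* shm_base) {
    auto* p = static_cast<std::int32_t*>(shm_base);  // first 4 bytes of the segment
    // atomic_ref provides atomic access to a plain object, if it is aligned enough;
    // mmap returns page-aligned memory, so the first 4 bytes qualify
    static_assert(std::atomic_ref<std::int32_t>::is_always_lock_free);
    std::atomic_ref<std::int32_t> counter{*p};
    counter.fetch_add(1, std::memory_order_relaxed);
}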
QUESTION
The following command builds boost using VCPKG.
...ANSWER
Answered 2019-Oct-23 at 01:55
It turns out that it is possible to build all of Boost while using ICU for those components that support the ICU feature, as follows.
./vcpkg install boost-locale[icu] boost-regex[icu] boost --triplet x64-windows --recurse
Source: How do I build boost with ICU support without having to build most components of boost twice?
QUESTION
My program is comprised of a bunch of threads happily chugging along, and the only synchronization I have is a volatile global bool that tells them if the user exited. All other communication between the threads is lockfree. These threads constantly have work to do, in a very time-critical application, so I can't afford having locks between them. I recently came across a lot of information showing that volatile is bad for multi-threading, so I want to make my code better. I saw that std::atomic_flag is guaranteed lockfree, but I can't figure out how to use it for my case.
The basic setup is like this (omitting the different files that the code is in):
...ANSWER
Answered 2019-Oct-21 at 00:03
First, let's fix the bigger performance problem. Your main Win32 thread is spinning without waiting, which will negate any perceived performance difference between a lockless bool and a std::atomic<bool>: you'll burn an entire core just invoking PeekMessage redundantly on an empty queue. So instead of polling in a tight loop, let the thread block until a message actually arrives.
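The exit flag itself maps more naturally onto std::atomic<bool> than onto std::atomic_flag (which only supports test-and-set and clear, with no plain load before C++20). A minimal sketch, with hypothetical names, assuming C++11:

#include <atomic>

std::atomic<bool> user_exited{false};  // shared by all threads, no mutex needed

void worker_loop() {
    // relaxed ordering is enough for a pure quit flag: it guards no other data
    while (!user_exited.load(std::memory_order_relaxed)) {
        // ... time-critical work ...
    }
}

void on_quit() {                       // called from the UI thread
    user_exited.store(true, std::memory_order_relaxed);
}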
QUESTION
I'm wrapping a boost::lockfree::spsc_queue (as a member named queue) in a RingBuffer class, and want to be able to use this RingBuffer in my project. But I am having difficulty passing the capacity size to queue via the class constructor.
RingBuffer.hh
...ANSWER
Answered 2019-Jul-30 at 10:11
spsc_queue doesn't have operator()(int) in its interface. Your compiler complains about queue(capacity); because that expression calls operator()(int) on the queue instance. I assume your intention is to call the ctor of spsc_queue with capacity as the argument.
So add a helper method to calculate this capacity and pass it to the queue constructor in the initialization list:
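A sketch of that shape (the element type int and the sizing helper are assumptions):

#include <boost/lockfree/spsc_queue.hpp>
#include <cstddef>

class RingBuffer {
public:
    explicit RingBuffer(std::size_t capacity)
        : queue_(compute_capacity(capacity)) {}  // ctor call in the init list

private:
    static std::size_t compute_capacity(std::size_t requested) {
        return requested;  // placeholder for whatever sizing logic is needed
    }
    boost::lockfree::spsc_queue<int> queue_;     // runtime-sized variant
};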
QUESTION
When trying to insert an "Event" into a boost lock free queue, I receive: error: static assertion failed: (boost::has_trivial_destructor::value) and static assertion failed: (boost::has_trivial_assign::value). I know the requirements of the container are: T must have a copy constructor, T must have a trivial assignment operator, and T must have a trivial destructor. I am not sure why my event class does not meet these requirements.
I've read this question/answer: /boost/lockfree/queue.hpp: error: static assertion failed: (boost::has_trivial_destructor::value). I don't see why my class doesn't satisfy the requirements.
...ANSWER
Answered 2019-Jun-10 at 14:46
Your Event class has a member of type std::string, which has a non-trivial assignment operator and a non-trivial destructor. Those stop the auto-generated assignment operator and destructor of Event itself from being trivial. (Remember, "trivial" doesn't just mean "defaulted". It also means that the default behavior doesn't have to do anything interesting: a trivial assignment is just copying bits; a trivial destructor is a no-op.)
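To illustrate with a hypothetical Event layout, one common workaround is replacing the std::string member with a fixed-size character array:

#include <boost/lockfree/queue.hpp>
#include <string>

struct Event {
    int id;
    std::string name;   // non-trivial assignment and destructor
};
// boost::lockfree::queue<Event> bad(128);    // the static assertions would fire

struct PodEvent {
    int id;
    char name[64];      // trivial to copy and destroy
};
boost::lockfree::queue<PodEvent> good(128);   // compiles: PodEvent meets the requirements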
QUESTION
I am using a boost::lockfree::queue Foo(128). Before popping the queue I can check it for empty status with the Foo.empty() function. I was wondering if I can similarly check whether it is at full capacity before pushing. I couldn't find any resource online explaining how to do it.
Any suggestions?
...ANSWER
Answered 2019-Mar-29 at 05:58
It appears Boost's LF multi-producer multi-consumer queue implementation doesn't support this. Other MPMC queues might. boost::lockfree::spsc_queue (the single-producer single-consumer ring-buffer queue) does, with spsc.write_available() > 0.
boost::lockfree::queue is not fixed-size by default, only if you pass a capacity as a template arg, or fixed_sized<true>. If the data structure is configured as fixed-sized, the internal nodes are stored inside an array and they are addressed by array indexing. (But it's not a ring-buffer like some other MPMC queues.) Otherwise they're dynamically allocated and kept on a free-list.
For performance you probably should make it fixed-size. Or if you want to limit dynamic allocation, you can use bounded_push instead of push, so it will return false instead of going to the OS for more memory (which might not be lock-free).
If you are using a fixed-size queue, then it is possible for the queue to become full. But it wouldn't be very meaningful to check separately, because another producer could have made the queue full between the check and the push. Are you looking for a performance optimization, like avoiding constructing the object if the queue would probably still be full by the time you're ready to call push?
(Also, a consumer could make the queue non-full right after you check, so it really only makes sense to check as part of an attempted push. And perhaps there isn't even an efficient lock-free way to check. Otherwise they could have had the function always return true for non-fixed-size queues, and return a meaningful result for fixed-size.)
This is why push() returns bool: false means the queue was full (or a new node couldn't be allocated, for non-fixed-size queues).
Before popping the queue I can check it for empty status with the Foo.empty() function.
I hope you're not actually doing this; it has all the same problems of racing with other threads as push, and with fewer opportunities to optimize. There's no object to construct before the attempt: you just call pop and see if you get one or not. Another thread could have made the queue empty, or made it non-empty, between your check and your actual pop. Unless you're the only consumer, in which case seeing non-empty does mean you can definitely pop. (A multi-producer single-consumer use-case would not be compatible with spsc_queue.)
Anyway, this is why it's bool pop(T &); not T pop().
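Putting that together, a sketch of the resulting usage pattern (reusing Foo from the question; the element type int and the retry policies are assumptions):

#include <boost/lockfree/queue.hpp>

boost::lockfree::queue<int> Foo(128);

void producer(int value) {
    // bounded_push returns false when full instead of allocating more nodes
    while (!Foo.bounded_push(value)) {
        // queue full: back off, drop the item, or retry, as the application requires
    }
}

void consumer() {
    int value;
    while (Foo.pop(value)) {  // false simply means "empty right now"; no pre-check needed
        // ... process value ...
    }
}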
QUESTION
I am working on a program where 2+ (gstreamer) boost::thread instances and the same number of boost::thread instances of a dummy application are simultaneously using a queue. This queue is used for synchronization between a task on a gstreamer thread and its corresponding dummy-application thread.
The queue is an EVENT queue, where EVENT is a structure:
...ANSWER
Answered 2019-Mar-15 at 06:38
Why the name lockFREE? Is it suggesting it cannot be (mutex) locked?
Of course anything can be locked; you put the mutex outside the data structure and have every thread that touches the data structure use it. boost::lockfree::queue provides unsynchronized_pop and unsynchronized_push for use in cases where you've ensured that only one thread can be accessing the queue.
But the main purpose of lockfree::queue, and of lockless algorithms / data structures in general, is that they don't need to be locked: multiple threads can safely write and/or read at the same time.
"lock free" has 2 meanings in programming, leading to potentially confusing but true statements like "this lockless algorithm is not lock-free".
Casual usage: synonym for lockless - implemented without mutexes, using atomic loads, stores, and RMW operations like CAS or
std::atomic::atomic_fetch_add
. See for example An Introduction to Lock-Free Programming (Jeff Preshing). And maybe What every systems programmer should know about concurrency.std::shared_ptr
uses lockless atomic operations to manage its control block. C++11std::atomic<>
provides lockless primitives for custom algorithms. See stdatomic. Normally in C++11, unsynchronized access to the same variable by multiple threads is undefined behaviour. (Unless they're all read-only.) Butstd::atomic
gives you well-defined behaviour, with your choice of sequential-consistency, acquire/release, or relaxed memory ordering.Technical computer-science meaning: a thread sleeping forever or being killed won't block the rest of the threads. i.e. guaranteed forward progress for the program as a whole (at least one thread). (Wait-free is when threads never have to retry). See https://en.wikipedia.org/wiki/Non-blocking_algorithm. A CAS retry loop is a classic example of lock-free but not wait-free. Wait-free is stuff like RCU (read-copy-update) read threads, or depending on definitions, a
atomic_fetch_add
on hardware that implements it as a primitive (e.g. x86xadd
), not in terms of an LL/SC or CAS retry loop.
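To make that CAS-retry example concrete, a minimal sketch (the double accumulator is an arbitrary choice): lock-free because some thread's CAS always succeeds, but not wait-free because any given thread may retry indefinitely.

#include <atomic>

// Add 'inc' to an atomic double; C++11 has no fetch_add for double, so CAS-loop it
void atomic_add(std::atomic<double>& target, double inc) {
    double expected = target.load(std::memory_order_relaxed);
    // On failure, compare_exchange_weak reloads 'expected' with the current value
    while (!target.compare_exchange_weak(expected, expected + inc)) {
        // another thread won the race; retry with the fresh value
    }
}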
Most lockless multi-reader / multi-writer queues are not lock-free in the technical sense. Usually they use a circular buffer, and a writer will "claim" an entry somehow (fixing its order in the queue), but the entry can't be read until the writer finishes writing to it.
See Lock-free Progress Guarantees for an example with analysis of its possible blocking behaviour. A writer atomically increments a write-index, then writes data into the array entry. If the writer sleeps between doing those two things, other writers can fill up the later entries while readers are stuck waiting for that claimed-but-not-written entry. (I haven't looked at boost::lockfree::queue, but presumably it's similar; see footnote 1.)
In practice performance is excellent with very low contention between writers and readers. But in theory a writer could block at just the wrong moment and stall the whole queue.
Footnote 1: The other plausible option for a queue is a linked list. In that case you can fully construct a new node and then attempt to CAS it into the list, so if you succeed at adding it, other threads can read it right away, because you have its pointers set correctly.
But the reclamation problem (safely freeing nodes that other threads might still be reading) is extremely thorny outside of garbage-collected languages / environments (e.g. Java).
boost::lockfree::queue<int> queue(128);
Why 128?
That's the queue (max) size, in entries. Of int in this case, because you used queue<int>, duh. As mentioned above, most lockless queues use a fixed-size circular buffer; it can't realloc and copy like std::vector when it needs to grow, because other threads can be reading it simultaneously.
As documented in the manual (the first google hit for boost::lockfree::queue), the explicit queue(size_type) constructor takes a size.
You could also bake the capacity into the type, by using it as a template parameter. (So the capacity becomes a compile-time constant everywhere that uses the queue, not just in places that can do constant-propagation from the constructor call.) The class apparently doesn't enforce / require a power-of-2 size, so a template size parameter could maybe optimize significantly better, by letting % capacity operations compile into an AND with a mask instead of a division.
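For illustration, the template-parameter form (element type int taken from the example above):

#include <boost/lockfree/queue.hpp>

// Capacity baked into the type: a compile-time constant, no constructor argument
boost::lockfree::queue<int, boost::lockfree::capacity<128>> queue;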
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install lockfree
Rust is installed and managed by the rustup tool. Rust has a 6-week rapid release process and supports a great number of platforms, so there are many builds of Rust available at any time. Please refer to rust-lang.org for more information.