Semaphore | library provides an api for semaphore acquire | Database library
kandi X-RAY | Semaphore Summary
This library provides an API for semaphore acquire and release. Supported adapters: flock, pdo, doctrine/orm, apc, memcached, sem. An example model is provided for the Doctrine adapter; the Doctrine adapter works directly with an SQL table, and this model only creates the appropriate table.
Top functions reviewed by kandi - BETA
- Acquire a lock
- Execute a SQL query
- Delete expired records
- Get the cache key
- Invalidate expired sessions
- Execute a SQL query
- Release a semaphore via sem_release
Community Discussions
Trending Discussions on Semaphore
QUESTION
In C++20, we got the capability to sleep on atomic variables, waiting for their value to change. We do so by using the std::atomic::wait method. Unfortunately, while wait has been standardized, wait_for and wait_until are not, meaning that we cannot sleep on an atomic variable with a timeout. Sleeping on an atomic variable is anyway implemented behind the scenes with WaitOnAddress on Windows and the futex system call on Linux.
Working around the above problem (no way to sleep on an atomic variable with a timeout), I could pass the memory address of an std::atomic to WaitOnAddress on Windows and it will (kinda) work with no UB, as the function takes a void* parameter, and it's valid to cast a std::atomic to void*.
On Linux, it is unclear whether it's OK to mix std::atomic with futex. futex takes either a uint32_t* or an int32_t* (depending on which manual you read), and casting a std::atomic to u/int* is UB. On the other hand, the manual says:
The uaddr argument points to the futex word. On all platforms, futexes are four-byte integers that must be aligned on a four-byte boundary. The operation to perform on the futex is specified in the futex_op argument; val is a value whose meaning and purpose depends on futex_op.
hinting that alignas(4) std::atomic should work, and that it doesn't matter which integer type it is as long as the type has a size of 4 bytes and an alignment of 4.
Also, I have seen many places where this trick of combining atomics and futexes is implemented, including Boost and TBB.
So what is the best way to sleep on an atomic variable with a timeout in a non-UB way? Do we have to implement our own atomic class with OS primitives to achieve it correctly?
(Solutions like mixing atomics and condition variables exist, but are sub-optimal.)
...ANSWER
Answered 2021-Jun-15 at 20:48
You shouldn't necessarily have to implement a full custom atomic API; it should actually be safe to simply pull out a pointer to the underlying data from the atomic and pass it to the system.
Since std::atomic does not offer some equivalent of native_handle like other synchronization primitives offer, you're going to be stuck doing some implementation-specific hacks to try to get it to interface with the native API.
For the most part, it's reasonably safe to assume that the first member of these types in implementations will be the same as the T type -- at least for integral values [1]. This is an assurance that will make it possible to extract out this value.
... and casting std::atomic to u/int* is UB
This isn't actually the case. std::atomic is guaranteed by the standard to be a standard-layout type. One helpful but often esoteric property of standard-layout types is that it is safe to reinterpret_cast a T to a value or reference of the first sub-object (e.g. the first member of the std::atomic).
As long as we can guarantee that the std::atomic contains only the u/int as a member (or at least, as its first member), then it's completely safe to extract out the type in this manner:
QUESTION
I'm a beginner in Project Reactor and think it's pretty easy, but I can't find the solution. I have N expensive tasks to do, and I want to implement something like a bounded semaphore in Java (do not request the next element until the current count of running tasks is less than K).
Shortly: complete all tasks, but no more than K tasks at the same time.
ANSWER
Answered 2021-Jun-12 at 14:18
What about this solution? I removed parallel from the Flux in order to buffer 10 elements. Each element can then be handled in parallel:
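The Reactor code from the original answer is not reproduced in this excerpt. As a language-agnostic illustration of the underlying idea (never more than K tasks in flight at once), here is a hedged C++20 sketch using std::counting_semaphore; the names kMaxConcurrent and process_task are invented for the sketch and do not come from the discussion.

```cpp
// Hedged sketch: bound concurrency with a counting semaphore so that at most
// kMaxConcurrent tasks run at the same time.
#include <cstdio>
#include <semaphore>
#include <thread>
#include <vector>

constexpr int kMaxConcurrent = 4;                       // "K" in the question
std::counting_semaphore<kMaxConcurrent> slots{kMaxConcurrent};

void process_task(int id)                               // placeholder for the expensive work
{
    std::printf("task %d running\n", id);
}

int main()
{
    std::vector<std::jthread> workers;
    for (int i = 0; i < 20; ++i) {                      // N tasks in total
        slots.acquire();                                // blocks while K tasks are already running
        workers.emplace_back([i] {
            process_task(i);
            slots.release();                            // free a slot when the task finishes
        });
    }
}                                                       // jthreads join on destruction
```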
QUESTION
I'm using Google Analytics, and that service has a limit of 10 concurrent requests. I had to limit my API somehow, so I decided to use a semaphore, but it doesn't seem to work. All requests are triggered simultaneously, and I can't find the problem in my code.
...ANSWER
Answered 2021-Jun-07 at 21:38
All requests are triggered simultaneously.
Let's take a look here
QUESTION
I'm trying to import a Vulkan semaphore for use by CUDA on the Windows platform, but I always get a cudaErrorInvalidValue error. I can't figure out what causes the problem.
Considering that I already have a HANDLE for the Vulkan semaphore object, the CUDA side is in DLL1 as follows:
...ANSWER
Answered 2021-Jun-06 at 08:08
As @talonmies commented, the structure cudaExternalSemaphoreHandleDesc is indeed not fully initialized by the code shown. The problem is solved by initializing the structure to zero:
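The answer's code is not included in the excerpt. A hedged sketch of the fix it describes (zero-initializing the descriptor before importing the Vulkan semaphore) might look like the following; the names importVulkanSemaphore and vkSemaphoreHandle are invented for this sketch.

```cpp
// Hedged sketch: import an exported Vulkan semaphore into CUDA on Windows,
// zero-initializing the descriptor so no field is left undefined.
#include <cuda_runtime.h>
#include <windows.h>

cudaExternalSemaphore_t importVulkanSemaphore(HANDLE vkSemaphoreHandle)
{
    cudaExternalSemaphoreHandleDesc desc = {};          // zero-initialize every field
    desc.type = cudaExternalSemaphoreHandleTypeOpaqueWin32;
    desc.handle.win32.handle = vkSemaphoreHandle;       // HANDLE exported from Vulkan
    desc.flags = 0;

    cudaExternalSemaphore_t cudaSem = nullptr;
    if (cudaImportExternalSemaphore(&cudaSem, &desc) != cudaSuccess) {
        // cudaErrorInvalidValue typically points at a badly filled descriptor
        return nullptr;
    }
    return cudaSem;
}
```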
QUESTION
Below is the quote from a C# book:
The WaitHandle class is a simple class whose sole purpose is to wrap a Windows kernel object handle. Internally, the WaitHandle base class has a SafeWaitHandle field that holds a Win32 kernel object handle. This field is initialized when a concrete WaitHandle-derived class is constructed. Each method call on a kernel object causes the calling thread to transition from managed code to native user-mode code to native kernel-mode code and then return all the way back.
I'm confused about which kernel object is associated with the WaitHandle. Below is an example from MSDN:
...ANSWER
Answered 2021-Jun-06 at 01:26
How can a worker thread use another thread's kernel object to block itself?
The reason a worker thread can access a handle that isn't its own is that the handle is a reference to a Windows kernel handle.
Windows Kernel Handles
A Windows kernel handle is an operating-system-wide handle that can allow synchronization across domain boundaries. That is just a fancy way of saying that this handle is an object that can be seen by any thread, process, or program running on the same Windows operating system.
An easier way that I like to think about it is to consider a rudimentary lock for a shared resource. This lock traditionally works such that an array, using the lock keyword and a System.Object, can be protected from multiple threads changing its values at once.
Let's look at a short example of that:
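The C# example that followed in the original answer is not included in the excerpt. As a hedged, language-shifted analogue of the same idea (one lock object guarding shared data so only one thread mutates it at a time), a C++ sketch could look like this; sharedValues and sharedValuesLock are invented names.

```cpp
// Hedged analogue of the C# `lock` pattern: a single mutex guards the shared
// array so only one thread can modify it at a time.
#include <mutex>
#include <thread>
#include <vector>

std::vector<int> sharedValues;   // the shared resource
std::mutex sharedValuesLock;     // plays the role of the System.Object used with `lock`

void append(int v)
{
    std::lock_guard<std::mutex> guard(sharedValuesLock); // acquired here, released at scope exit
    sharedValues.push_back(v);
}

int main()
{
    std::thread t1([] { for (int i = 0; i < 1000; ++i) append(i); });
    std::thread t2([] { for (int i = 0; i < 1000; ++i) append(-i); });
    t1.join();
    t2.join();
}
```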
QUESTION
I came up with the following code, which calls a database paging function repeatedly with a page size of 5 and, for each item in a page, executes a function in parallel with a max concurrency of 4. It looks like it's working so far, but I'm unsure whether I need to use locking to enclose the parallelInvocationTasks.Remove(completedTask); line and Task.WhenAll(parallelInvocationTasks.ToArray());.
So do I need to use locking here, and do you see any other improvements?
Here's the code:
Program.cs
...ANSWER
Answered 2021-May-21 at 18:09
OK, based on the comments above and a little more research, I arrived at this answer, which gets the work done without all the complexity of having to write custom code to manage concurrency. It uses ActionBlock from TPL Dataflow.
PagingExtensions.cs
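The C# ActionBlock code itself is not part of the excerpt. To illustrate the general shape of the answer (a fixed degree of parallelism draining items that are fetched page by page) outside of TPL Dataflow, here is a hedged C++ sketch with a small worker pool; the page loop, the queue, and all names are invented for the sketch.

```cpp
// Hedged sketch: pages of 5 items are fetched sequentially and pushed into a
// queue; a pool of 4 workers drains the queue, giving a max concurrency of 4.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> items;
std::mutex m;
std::condition_variable cv;
bool finished = false;

void worker()
{
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !items.empty() || finished; });
        if (items.empty()) return;                 // no more work will arrive
        int item = items.front();
        items.pop();
        lock.unlock();
        std::printf("processing item %d\n", item); // stand-in for the per-item function
    }
}

int main()
{
    std::vector<std::jthread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker);    // max concurrency of 4

    for (int page = 0; page < 3; ++page) {                    // stand-in for the paging function
        std::lock_guard<std::mutex> lock(m);
        for (int i = 0; i < 5; ++i) items.push(page * 5 + i); // page size of 5
        cv.notify_all();
    }

    {
        std::lock_guard<std::mutex> lock(m);
        finished = true;                           // tell the workers no more pages are coming
    }
    cv.notify_all();
}                                                  // jthreads join; workers exit once the queue drains
```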
QUESTION
I am supposed to be using two custom Semaphore classes (binary and counting) to print off letters in an exact sequence. Here is the standard semaphore.
...ANSWER
Answered 2021-May-28 at 12:43
As @iggy points out, the issue is related to the fact that different threads are reading different values of value, because the way you access it isn't thread-safe. Some threads may be using an old copy of the value. Making it volatile means each thread reads a more consistent value:
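The Java code from the original answer is not reproduced here. The broader point, that reads and writes of the semaphore's counter must be properly synchronized, can be shown with a hedged C++ sketch of a hand-rolled counting semaphore; in C++ the visibility comes from a mutex and condition variable rather than volatile, and the class name CountingSemaphore is invented.

```cpp
// Hedged sketch: a hand-rolled counting semaphore whose counter is only ever
// read or written under a mutex, so every thread sees a consistent value.
#include <condition_variable>
#include <mutex>

class CountingSemaphore {
public:
    explicit CountingSemaphore(int initial) : value_(initial) {}

    void acquire() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return value_ > 0; }); // sleep until a permit is free
        --value_;
    }

    void release() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            ++value_;
        }
        cv_.notify_one();                              // wake one waiting thread, if any
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    int value_;   // the shared counter the question's code calls `value`
};
```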
QUESTION
It crashes when the class is destructed. How should I handle an unfinished semaphore?
...ANSWER
Answered 2021-May-27 at 06:28
It crashes because of the missing semaphore.signal() call and the wrong setup of the semaphore. You should call semaphore.signal() after you are done with the async method, and you should call semaphore.wait() outside of the async method.
In your case, it could be like this:
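The Swift code from the original answer is not shown in the excerpt. As a hedged, language-shifted illustration of the same pattern (signal inside the asynchronous work, wait outside of it), a C++20 sketch with std::binary_semaphore could look like this; doAsyncWork is an invented placeholder for the async method.

```cpp
// Hedged analogue of the signal-inside / wait-outside pattern: the background
// task releases the semaphore when it finishes, and the caller blocks on
// acquire() outside the asynchronous work.
#include <cstdio>
#include <semaphore>
#include <thread>

std::binary_semaphore done{0};        // starts unavailable, like DispatchSemaphore(value: 0)

void doAsyncWork()
{
    std::puts("working...");
    done.release();                   // counterpart of semaphore.signal()
}

int main()
{
    std::jthread worker(doAsyncWork); // kick off the asynchronous work
    done.acquire();                   // counterpart of semaphore.wait(), outside the async block
    std::puts("finished");
}
```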
QUESTION
I am new to FreeRTOS and have been reading the FreeRTOS documentation and writing simple code using FreeRTOS on an STM32F767 Nucleo board. In the simple program that I wrote, I used binary semaphores only: to signal certain tasks when LPTIM and GPIO interrupts occur, through xSemaphoreGiveFromISR(), and to signal a different task to perform certain operations from another task, through xSemaphoreGive().
Suppose that I have an I2C1 peripheral connected to two different devices:
- An accelerometer that triggers a GPIO interrupt to the microcontroller whenever an activity/movement occurs. This GPIO interrupt signals the microcontroller that a piece of data inside its Interrupt Event registers must be read so that the next activity/movement event can be signalled again.
- A device that must be read from periodically, which will be triggered through an LPTIM or TIM peripheral
Can I use a Mutex and a Binary Semaphore in the situation above?
The binary semaphores will indicate to the tasks that an operation needs to be performed based on the respective interrupts that were triggered, but the mutex will be shared between those two tasks, where Task1 will be responsible for reading data from the accelerometer, and Task2 will be responsible for reading data from the other device. I was thinking that a mutex would be used since these two operations should never occur together, so that there are no overlapping I2C transactions on the bus that could potentially lock up either of the I2C devices.
The code would look like the following:
...ANSWER
Answered 2021-May-26 at 18:40
In general, I think your design could work. The semaphores are signals for the two tasks to do work. And the mutex protects the shared I2C resource.
However, sharing a resource with a mutex can lead to complications. First, your operations tasks are not responsive to new semaphore signals/events while they are waiting for the I2C resource mutex. Second, if the application gets more complex and you add more blocking calls then the design can get into a vicious cycle of blocking, starvation, race conditions, and deadlocks. Your simple design isn't there yet but you're starting down a path.
As an alternative, consider making a third task responsible for handling all I2C communications. The I2C task will wait for a message (in a queue or mailbox). When a message arrives then the I2C task will perform the associated I2C communication. The operations tasks will wait for the semaphore signal/event like they do now. But rather than waiting for the I2C mutex to become available, now the operations tasks will send/post a message to the I2C task. With this design you don't need a mutex to serialize access to the I2C resource because the I2C task's queue/mailbox does the job of serializing the message communication requests from the other tasks. Also in this new design each task blocks at only one place, which is cleaner and allows the operations tasks to be more responsive to the signals/events.
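The answer above describes the alternative design in prose only. A hedged sketch of what such a dedicated I2C "gatekeeper" task could look like with the FreeRTOS C API is shown below; the message type I2cRequest, the queue handle i2cQueue, the stack size, and the priorities are all assumptions made for the sketch.

```c
/* Hedged sketch: a dedicated I2C task serializes bus access through a queue,
 * so the operations tasks no longer need an I2C mutex. */
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

typedef enum { READ_ACCELEROMETER, READ_PERIODIC_DEVICE } I2cRequestType;

typedef struct { I2cRequestType type; } I2cRequest;

static QueueHandle_t i2cQueue;                    /* created once at startup */

/* Operations tasks post a request instead of taking an I2C mutex. */
void RequestAccelerometerRead(void)
{
    I2cRequest req = { READ_ACCELEROMETER };
    xQueueSend(i2cQueue, &req, portMAX_DELAY);
}

/* The only task that ever touches the I2C peripheral. */
void I2cGatekeeperTask(void *params)
{
    I2cRequest req;
    (void)params;
    for (;;) {
        if (xQueueReceive(i2cQueue, &req, portMAX_DELAY) == pdTRUE) {
            switch (req.type) {
            case READ_ACCELEROMETER:
                /* read the accelerometer event registers over I2C */
                break;
            case READ_PERIODIC_DEVICE:
                /* read the periodically polled device over I2C */
                break;
            }
        }
    }
}

void CreateI2cInfrastructure(void)
{
    i2cQueue = xQueueCreate(8, sizeof(I2cRequest));
    xTaskCreate(I2cGatekeeperTask, "I2C", 256, NULL, tskIDLE_PRIORITY + 1, NULL);
}
```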
QUESTION
In my project I'm using JHipster Spring Boot, and I would like to start 2 instances of one microservice at the same time, but on different instances of a database (MongoDB).
In this microservice I have classes, services, and REST resources that are used for collections A, B, C, ... for which I would now also like to have history collections A_history, B_history, C_history (structured exactly the same as A, B, C) stored in a separate instance of the database. It makes no sense to me to create a "really separate" microservice, since I would have to copy all the logic from the first one and end up with duplicated code that is very hard to maintain. So the idea is to have 2 instances of the same microservice: one for the A, B, C collections stored in "MicroserviceDB", and a second for the A_history, B_history, C_history collections stored in "HistoryDB".
I've tried creating 2 profiles, but when I start the history microservice from the command line it starts OK; if I then also try to start the "original" microservice at the same time, it starts, but the history service immediately becomes the "original" microservice, as if they cannot work at the same time.
Is this concept even possible in a microservice architecture? Does anyone have an idea how to make this work, or some other solution for my problem?
Thanks.
application.yml
...ANSWER
Answered 2021-May-20 at 09:18
In general, this concept should be easily achievable with microservices and a suitable configuration. And yes, you should be able to use profiles to define different database connections so that you can have multiple instances running.
I assume you are overwriting temporary build artifacts, which is why it is not working, but that is hard to diagnose from a distance. You might consider using Docker containers with a suitable configuration to increase isolation in this regard.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install Semaphore
PHP requires the Visual C runtime (CRT). The Microsoft Visual C++ Redistributable for Visual Studio 2019 is suitable for all these PHP versions, see visualstudio.microsoft.com. You MUST download the x86 CRT for PHP x86 builds and the x64 CRT for PHP x64 builds. The CRT installer supports the /quiet and /norestart command-line switches, so you can also script it.