atomically | ActiveRecord extension | Object-Relational Mapping library
kandi X-RAY | atomically Summary
atomically is a Ruby gem for writing atomic queries with ease. All methods are defined in Atomically::QueryService instead of directly on ActiveRecord, so as not to pollute the model instance.
Top functions reviewed by kandi - BETA
- Sets the value for the given attribute.
- Clears the changed attributes of the set.
Trending Discussions on atomically
QUESTION
I'm trying to understand how the "fetch" phase of the CPU pipeline interacts with memory.
Let's say I have these instructions:
...ANSWER
Answered 2021-Jun-15 at 16:34
It varies between implementations, but generally, this is managed by the cache coherency protocol of the multiprocessor. In simplest terms, what happens is that when CPU1 writes to a memory location, that location will be invalidated in every other cache in the system. So that write will invalidate the line in CPU2's instruction cache as well as any (partially) decoded instructions in CPU2's uop cache (if it has such a thing). So when CPU2 goes to fetch/execute the next instruction, all those caches will miss and it will stall while things are refetched. Depending on the cache coherency protocol, that may involve waiting for the write to get to memory, or may fetch the modified data directly from CPU1's dcache, or things might go via some shared cache.
QUESTION
I have a relatively simple case where:
- My program will be receiving updates via Websockets, and will be using these updates to update its local state. These updates will be very small (usually 1-1000 bytes of JSON, so < 1ms to de-serialize) but very frequent (up to ~1000/s).
- At the same time, the program will be reading/evaluating from this local state and outputs its results.
- Both of these tasks should run in parallel and will run for the duration of the program, i.e. never stop.
- Local state size is relatively small, so memory usage isn't a big concern.
The tricky part is that updates need to happen "atomically", so that the reader never sees a local state with, for example, only half of an update written. The state is not constrained to primitives and could contain arbitrary classes as far as I can tell at the moment, so I cannot solve it with something simple like Interlocked atomic operations. I plan on running each task on its own thread, so a total of two threads in this case.
To achieve this goal I thought to use a double buffer technique, where:
- It keeps two copies of the state so one can be read from while the other is being written to.
- The threads communicate which copy they are using via a lock: the writer thread locks a copy while writing to it; the reader thread requests the lock once it's done with its current copy; the writer sees the reader is using that copy, so it switches to the other one.
- The writing thread keeps track of the state updates it has applied to the current copy, so that when it switches to the other copy it can "catch up".
That's the general gist of the idea, but the actual implementation will be a bit different of course.
I've tried to look up whether this is a common solution but couldn't really find much info, so it's got me wondering things like:
- Is it viable, or am I missing something?
- Is there a better approach?
- Is it a common solution? If so what's it commonly referred to as?
- (bonus) Is there a good resource I could read up on for topics related to this?
Pretty much I feel I've run into a dead end where I cannot find more resources and info (because I don't know what to search for) to judge whether this approach is "good". I plan on writing this in C# / .NET, but I assume the techniques and solutions translate to any language. All insights appreciated.
...ANSWER
Answered 2021-Jun-08 at 19:17
If I understand correctly, the writes themselves are synchronous. If so, then maybe it's not necessary to keep two copies or even to use locks.
Maybe something like this could work?
QUESTION
In one of his great videos, Jon Gjengset implements a mutex, notably to understand the effect of std::sync::atomic::Ordering. The code is very simple: create a mutex that holds an integer and start many threads that concurrently add 1 to the integer, then check the result. The code (I reproduce Jon's example strictly) is here: https://github.com/fmassot/atomics-rust
When using correct ordering, we expect the program to perform the additions atomically, with the final result being the sum of all added values. Each thread repeatedly performs the following actions:
- call compare_exchange_weak with Ordering::Acquire to get the lock
- on success increment the value by one
- release the lock with Ordering::Release
Unfortunately it does not seem to work on linux/x86_64 nor on macbook/arm64.
The results when running cargo r --release are sometimes correct, sometimes wrong, like this:
ANSWER
Answered 2021-Jun-06 at 22:47
Problem solved; the solution was given by @user4815162342. The same value was used for LOCKED and UNLOCKED, so there was no lock at all.
Conclusion: the error was a silly one, and it came from me...
QUESTION
Redis RPUSH docs here suggest that the return value of RPUSH is the length of the list after the push operation.
However, what's not clear to me is:
- Is the result of RPUSH the length of the list after the push operation atomically (so the result is definitely the index of the last item just added by RPUSH), or...
- Is it possible that other RPUSH operations from concurrent Redis clients could have executed before the RPUSH returns, so that you are indeed getting the new length of the list, but that length includes elements from other RPUSH commands?
Thanks!
...ANSWER
Answered 2021-Jun-03 at 22:18
The operation is atomic, so the result of the RPUSH is indeed the length of the list after the operation.
However, by the time you get the result on the client, the list could have changed in arbitrary ways, since other clients could have pushed items, popped items, etc. So that return value really doesn't tell you anything about the state of the list by the time that you receive it on the client.
If it's important to you that the return value match the state of the list, then that implies you have a sequence of operations that you want to be atomic, in which case you can use Redis' transaction facilities. For example, if you performed the RPUSH in a Lua script, you could be sure that the return value represented the state of the list, since the entire script would execute as a single atomic operation.
QUESTION
I am creating a Smart Contract (BEP20 token) based on the BEP20Token template (https://github.com/binance-chain/bsc-genesis-contract/blob/master/contracts/bep20_template/BEP20Token.template). The public constructor was modified to add some token details. However, all of the standard functions are giving compile-time errors like "Overriding function is missing".
** here is the source code **
...ANSWER
Answered 2021-May-11 at 13:28
Constructor public () - Warning: Visibility for constructor is ignored. If you want the contract to be non-deployable, making it "abstract" is sufficient.
The warning message says it all. You can safely remove the public visibility modifier because it's ignored anyway.
If you marked the BEP20Token contract abstract, you would need a child contract inheriting from it; you could not deploy BEP20Token itself but would have to deploy the child contract, which is not what you want in this case.
QUESTION
The problem: I'm implementing a non-blocking data structure, where threads alter a shared pointer using a CAS operation. As pointers can be recycled, we have the ABA issue. To avoid this, I want to attach a version to each pointer. This is called a versioned pointer. A CAS128 is considered more expensive than a CAS64, so I'm trying to avoid going above 64 bits.
I'm trying to implement a versioned pointer. In a 32b system, the versioned pointer is a 64b struct, where the top 32 bits are the pointer and the bottom 32 is its version. This allows me to use CAS64 to atomically alter the pointer.
I'm having issues with a 64b system. In this case, I still want to use CAS64 instead of CAS128, so I'm trying to allocate a pointer aligned to 4GB (i.e., with 32 low-order zero bits). I can then use masks to extract the pointer and the version.
The solutions I've tried use aligned_malloc, padding, and std::align, but these involve allocating very large amounts of memory, e.g., aligned_malloc(1LL << 32, (1LL << 32) * sizeof(void*)) allocates 4GB of memory. Another solution is using a memory-mapped file, but this involves synchronization that we're trying to avoid.
Is there a way to allocate 8B of memory aligned to 4GB that I'm missing?
...ANSWER
Answered 2021-May-27 at 17:32
First off, a non-portable solution that limits the code-complexity creep to the point of allocation (see below for another approach that makes the point of use more complicated, but should be portable). It only works on POSIX systems (not Windows), but it reduces your overhead to the size of a page rather than 8 bytes; in the context of a 64-bit system, wasting 4088 bytes isn't too bad if you're not doing it too often, and the nature of your problem means you can't waste more than sysconf(_SC_PAGESIZE) - 8 bytes per 4 GB. The mechanism:
- mmap 4 GB of memory anonymously (not file-backed; pass an fd of -1 and include the MAP_ANONYMOUS flag)
- Compute the address of the 4 GB aligned pointer within that block
- munmap the memory preceding that address, and the memory beginning sysconf(_SC_PAGE_SIZE) bytes after that address
This works because memory mappings aren't monolithic; they can be unmapped piecemeal, individual pages can be remapped without error, etc.
Note that if you're short on swap space, the brief request for 4 GB might cause problems (e.g. on a Linux system with heuristic overcommit disabled, it might fail to allocate the memory if it can't back it with swap, even though you never use most of it). You can experiment with passing MAP_NORESERVE to the original request, then performing the unmapping, then remapping that single page with MAP_FIXED (without MAP_NORESERVE) to ensure the allocation can be used without triggering a SIGSEGV at time of writing.
If you can't use POSIX mmap, and should it really be impossible to use CAS128, you may want to consider a segmented memory model like the old x86 scheme for these pointers. You block-allocate 4 GB segments (they don't need any special alignment) up front, and have your "pointers" be 32-bit offsets from the base address of the segment. You can't use the whole 64-bit address space (unless you allow for multiple selectors, possibly by repurposing part of the version-number field; you can probably make do with a few million versions rather than four billion, after all), but if you don't need to do so, this lets you have a base address that never changes after allocation (so no atomics needed), with offsets that fit within your desired 32-bit field. So instead of getting your data via:
QUESTION
So in Rust, I want to have an atomically reference-counted pointer (Arc), but I want the pointer it holds to also be atomic, meaning that for whatever pointer the Arc holds, it will free the memory pointed to when the reference count reaches zero.
Say I have the following code in C++
...ANSWER
Answered 2021-May-20 at 05:33
If you don't need to do manual deallocation or cleanup (are you using unsafe for allocation?), you don't need to do anything special. When the last Arc is dropped, it will call drop on the AtomicPtr. If you do need to do manual deallocation/cleanup, I think you could solve this by making a struct PtrGuard<T>(pub AtomicPtr<T>) that implements Drop (running your custom cleanup code), and your smart pointer type will be Arc<PtrGuard<T>>.
QUESTION
How do I atomically check-and-set something in an Ecto Repo? I want to make sure no other process changed any part of the struct in parallel, even if the two writes are non-overlapping and read the same data.
For example,
...ANSWER
Answered 2021-May-19 at 01:01
Ecto.Changeset.optimistic_lock/3 does exactly what you want. It requires a migration to the table in question (adds a version field, or whatever you want to call it).
I could give you an example but I would just be copying and pasting from the docs. They describe it perfectly.
QUESTION
I'm using MariaDB 10.3 (I presume MySQL behaves similarly), to which I make several concurrent connections.
From one of those connections, I insert the same value (let's say 'xxx' below) into several different tables, like so:
...ANSWER
Answered 2021-May-10 at 15:11
I've solved this by using LOCK TABLES ... READ around my SELECTs.
QUESTION
Dual write is a problem when we need to change data in 2 systems: a database (SQL or NoSQL) and Apache Kafka (for example). The database has to be updated and messages published reliably/atomically. Eventual consistency is acceptable but inconsistency is not.
Without 2 phase commit (2PC) dual write leads to inconsistency.
But in most cases 2PC is not an option.
Transactional Outbox is a microservice architecture pattern where a separate Message Relay process publishes the events inserted into the database to a message broker.
Multiple Message Relay processes running in parallel lead to publishing duplicates (2 processes read the same records in the OUTBOX table) or unordering (if every process reads only a portion of the OUTBOX table).
A single Message Relay process might publish messages more than once also. A Message Relay might crash after processing an OUTBOX record but before recording the fact that it has done so. When Message Relay restarts, it will then publish the same message again.
How to implement a Message Relay in Transactional Outbox patterns, so that risk of duplicate messages or unordering is minimal and the concept works with all SQL and NoSQL databases?
...ANSWER
Answered 2021-May-04 at 09:35
An exactly-once delivery guarantee, instead of at-least-once, can hardly be achieved with the Transactional Outbox pattern.
Consumers of messages published by a Message Relay have to be idempotent and filter duplicates and unordered messages.
Messages must include
- current state of an entity (instead of only changed fields aka change event, "delta"),
- ID header or field,
- version header or field.
ID header/field can be used to detect duplicates (determine that the message has been processed already).
Version header/field can be used to determine that a more recent version of the message has already been processed (if a consumer received msg_a as v1, v2, v4, then it has to drop v3 of msg_a when it arrives, because the more recent version v4 of msg_a has already been processed).
Extracting the Message Relay into a separate microservice, running it as a single replica (.spec.replicas=1 in Kubernetes), and updating it with the Recreate Deployment strategy (.spec.strategy.type=Recreate in Kubernetes), where all existing Pods are killed before new ones are created (instead of the RollingUpdate strategy), does not solve the problem with duplicates: a Message Relay might still crash after processing an OUTBOX record but before recording the fact that it has done so, and on restart it will publish the same message again.
Having multiple active-active Message Relay instances allows for higher availability but increases the probability of publishing duplicates and unordering.
For fast fail-over, an active-standby cluster of Message Relays can be implemented based on:
- Kubernetes Leader Election using sidecar k8s.io/client-go/tools/leaderelection
- Redis Distributed Lock (Redlock)
- SQL lock using SELECT ... FOR UPDATE NOWAIT
- etc.
As explained by Martin Kleppmann, distributed locks without fencing are broken, and leader election only minimizes the chance of multiple leaders (for a short time).
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported