dynamodb-lock-client | A general-purpose distributed locking library built on top of DynamoDB | Application Framework library
kandi X-RAY | dynamodb-lock-client Summary
The Amazon DynamoDB Lock Client is a general-purpose distributed locking library built for DynamoDB. It supports both fine-grained and coarse-grained locking, as lock keys can be any arbitrary string up to a certain length. The DynamoDB Lock Client is an open-source project that will be supported by the community; please create issues in the GitHub repository with questions.
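As an illustration only, here is a minimal acquire-and-release sketch in the style of the project's documented builder API; the table name ("lockTable"), lease and heartbeat settings, and the lock key ("my-resource") are placeholders, and the exact builder signatures may differ between releases:

```java
import java.util.Optional;
import java.util.concurrent.TimeUnit;

// Lock-client classes live in this package in the upstream project (an assumption
// worth verifying for the release you use).
import com.amazonaws.services.dynamodbv2.AcquireLockOptions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClient;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBLockClientOptions;
import com.amazonaws.services.dynamodbv2.LockItem;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

public class LockExample {
    public static void main(String[] args) throws Exception {
        // Newer releases take an AWS SDK v2 DynamoDbClient here; older ones take
        // the SDK v1 AmazonDynamoDB client instead.
        DynamoDbClient dynamoDb = DynamoDbClient.create();

        // Locks are leased for 10 seconds and renewed every 3 seconds by a
        // background heartbeat thread.
        AmazonDynamoDBLockClient client = new AmazonDynamoDBLockClient(
                AmazonDynamoDBLockClientOptions.builder(dynamoDb, "lockTable")
                        .withTimeUnit(TimeUnit.SECONDS)
                        .withLeaseDuration(10L)
                        .withHeartbeatPeriod(3L)
                        .withCreateHeartbeatBackgroundThread(true)
                        .build());

        // Try once to acquire a lock on the arbitrary string key "my-resource".
        Optional<LockItem> lockItem =
                client.tryAcquireLock(AcquireLockOptions.builder("my-resource").build());

        if (lockItem.isPresent()) {
            try {
                // ... work that must not run concurrently goes here ...
            } finally {
                client.releaseLock(lockItem.get());
            }
        }

        client.close();
    }
}
```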
Top functions reviewed by kandi - BETA
- Releases all the locks
- Releases a lock
- Removes the kill session monitor
- Gets the lock item
- Creates and returns the lock item from the DynamoDB cache
- Creates a DynamoDB table in AWS DynamoDB (see the table-creation sketch after this list)
- Loops until all locks are available
- Gets the locks by partition key
- Gets all lock items from DynamoDB table
- Asserts that the lock table exists
- Starts a background thread
- Gets the lock for this owner
- Returns the next item in the page
- Loads and returns the next page in the DynamoDB table
- Loads the next page into results
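The lock client stores each lock as an item keyed by a single string partition key. A hedged sketch of creating such a table with the plain AWS SDK v2 follows; the table name and the partition key attribute name ("key") are assumptions and must match whatever the lock client is configured to use:

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeDefinition;
import software.amazon.awssdk.services.dynamodb.model.BillingMode;
import software.amazon.awssdk.services.dynamodb.model.CreateTableRequest;
import software.amazon.awssdk.services.dynamodb.model.KeySchemaElement;
import software.amazon.awssdk.services.dynamodb.model.KeyType;
import software.amazon.awssdk.services.dynamodb.model.ScalarAttributeType;

public class CreateLockTable {
    public static void main(String[] args) {
        try (DynamoDbClient dynamoDb = DynamoDbClient.create()) {
            // One string partition key is enough; every lock becomes one item.
            dynamoDb.createTable(CreateTableRequest.builder()
                    .tableName("lockTable")               // assumption: the lock client's table name
                    .keySchema(KeySchemaElement.builder()
                            .attributeName("key")         // assumption: the client's partition key name
                            .keyType(KeyType.HASH)
                            .build())
                    .attributeDefinitions(AttributeDefinition.builder()
                            .attributeName("key")
                            .attributeType(ScalarAttributeType.S)
                            .build())
                    .billingMode(BillingMode.PAY_PER_REQUEST)
                    .build());
        }
    }
}
```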
Community Discussions
Trending Discussions on dynamodb-lock-client
QUESTION
I'm having an issue where two concurrent processes update a DynamoDB table within 5 ms of each other and both pass the conditional expression, when I expect one of them to throw a ConditionalCheckFailedException. The documentation states:
DynamoDB supports mechanisms, like conditional writes, that are necessary for distributed locks.
https://aws.amazon.com/blogs/database/building-distributed-locks-with-the-dynamodb-lock-client/
My table schema has a single Key attribute called "Id":
...

ANSWER
Answered 2022-Jan-19 at 09:32

The race you are suggesting is very surprising, because it is exactly what DynamoDB's conditional updates claim to avoid. So either Amazon has a serious bug in its implementation (which would be surprising, but not impossible), or the race is actually different from what you described in your question.
In your timeline you didn't say how your code resets "StartedRefreshingAt" to nothing. Does the same UpdateItem operation that writes the results of the work back to the table also delete the StartedRefreshingAt attribute? Because if it's a separate write, it's theoretically possible (even if not common) for the two writes to be reordered. If StartedRefreshingAt is deleted first, at that moment the second process can start its own work - before the first process's results were written - so the problem you described can happen.
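For illustration, a hedged sketch of doing the result write and the StartedRefreshingAt removal in one conditional UpdateItem, so the two writes can never be reordered; the table name and the "Result" attribute are hypothetical, only StartedRefreshingAt comes from the question:

```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

public class FinishRefreshAtomically {
    public static void main(String[] args) {
        try (DynamoDbClient dynamoDb = DynamoDbClient.create()) {
            dynamoDb.updateItem(UpdateItemRequest.builder()
                    .tableName("MyTable")   // hypothetical table name
                    .key(Map.of("Id", AttributeValue.builder().s("item-1").build()))
                    // Write the result and clear the in-progress marker in a single
                    // request, guarded by a condition, so a stale writer fails with
                    // ConditionalCheckFailedException instead of overwriting.
                    .updateExpression("SET #r = :result REMOVE StartedRefreshingAt")
                    .conditionExpression("attribute_exists(StartedRefreshingAt)")
                    .expressionAttributeNames(Map.of("#r", "Result"))   // hypothetical attribute
                    .expressionAttributeValues(Map.of(
                            ":result", AttributeValue.builder().s("done").build()))
                    .build());
        }
    }
}
```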
Another thing you didn't say is how your process reads the work from the item. If you accidentally used an eventually consistent read instead of a strongly consistent one, it is possible that execution 2 actually did start after execution 1 finished, but when it read the work it needed to do it read the old value again rather than what execution 1 wrote, so execution 2 ended up repeating execution 1's work instead of doing new work.
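And a hedged sketch of reading the item with strong consistency (table and key names hypothetical), so execution 2 is guaranteed to see what execution 1 wrote:

```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.GetItemResponse;

public class StrongRead {
    public static void main(String[] args) {
        try (DynamoDbClient dynamoDb = DynamoDbClient.create()) {
            GetItemResponse response = dynamoDb.getItem(GetItemRequest.builder()
                    .tableName("MyTable")   // hypothetical table name
                    .key(Map.of("Id", AttributeValue.builder().s("item-1").build()))
                    .consistentRead(true)   // strongly consistent: reflects all prior successful writes
                    .build());
            System.out.println(response.item());
        }
    }
}
```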
I don't know if either of these guesses makes sense because I don't know the details of your application, but I think the possibility that DynamoDB consistency simply doesn't work as promised is the last guess I would make.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install dynamodb-lock-client
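The library is distributed through Maven Central; based on the upstream awslabs/amazon-dynamodb-lock-client project (an assumption worth verifying against the current release), the coordinates are groupId com.amazonaws and artifactId dynamodb-lock-client, which can be added to a Maven or Gradle build like any other dependency.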