two-phase-commit | Implementation of the Two-phase commit protocol in Java
kandi X-RAY | two-phase-commit Summary
Implementation of the Two-phase commit protocol in Java
Top functions reviewed by kandi - BETA
- Main entry point
- Disort all participants
- Returns the latest string in the log
- Prints the commit
- Aborts running
- Returns a string representation of the socket
- Stops the TCP server
- Write a log message
- Close the connection
- Increment the global commit
- Sends a message
- Receive a message from the socket
- Start commit
- Start participant
- Start a Participant
- Run the server
- Handle incoming message
- Returns true if this object equals another object
Community Discussions
Trending Discussions on two-phase-commit
QUESTION
Assuming that I have a table called "t1" in "db1" and another table called "t2" in "db2", and I need to insert a record into both tables or have both fail.
Connected to db1, I guess I should type this
...ANSWER
Answered 2019-Nov-26 at 19:33
I think you misunderstood PREPARE TRANSACTION. That statement ends work on the transaction, that is, it should be issued after all the work is done. The idea is that PREPARE TRANSACTION does everything that could potentially fail during a commit except for the commit itself. That is to guarantee that a subsequent COMMIT PREPARED cannot fail.
The idea is that processing is as follows:
1. Run START TRANSACTION on all databases involved in the distributed transaction.
2. Do all the work. If there are errors, ROLLBACK all transactions.
3. Run PREPARE TRANSACTION on all databases. If that fails anywhere, run ROLLBACK PREPARED on those databases where the transaction was already prepared and ROLLBACK on the others.
4. Once PREPARE TRANSACTION has succeeded everywhere, run COMMIT PREPARED on all involved databases.
That way, you can guarantee “all or nothing” across several databases.
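The four steps above can be sketched as a coordinator loop. This is a minimal illustration in plain Java, not code from this repository; the Participant interface and all method names are invented for the sketch. A real implementation would issue the corresponding SQL (PREPARE TRANSACTION, COMMIT PREPARED, ROLLBACK PREPARED) over JDBC and, as the answer goes on to note, persist its progress so it can recover after a crash.

```java
import java.util.ArrayList;
import java.util.List;

public class TwoPhaseCommitSketch {

    /** One database in the distributed transaction (illustrative interface). */
    interface Participant {
        boolean prepare();        // PREPARE TRANSACTION: true = commit is now guaranteed to succeed
        void commitPrepared();    // COMMIT PREPARED
        void rollbackPrepared();  // ROLLBACK PREPARED (transaction was already prepared)
        void rollback();          // plain ROLLBACK (transaction not yet prepared)
    }

    /** Returns true if the distributed transaction committed on all participants. */
    static boolean run(List<Participant> participants) {
        // Phase 1: prepare everywhere.
        List<Participant> prepared = new ArrayList<>();
        for (Participant p : participants) {
            if (p.prepare()) {
                prepared.add(p);
            } else {
                // Prepare failed somewhere: ROLLBACK PREPARED where we already
                // prepared, plain ROLLBACK everywhere else (including p).
                for (Participant q : participants) {
                    if (prepared.contains(q)) q.rollbackPrepared();
                    else q.rollback();
                }
                return false;
            }
        }
        // Phase 2: every prepare succeeded, so COMMIT PREPARED cannot fail.
        for (Participant p : participants) p.commitPrepared();
        return true;
    }
}
```

Note that the commit loop in phase 2 relies entirely on the guarantee from phase 1: once every participant has prepared, nothing is allowed to fail.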
One important component here that I haven't mentioned is the distributed transaction manager. It is a piece of software that persistently memorizes where in the above algorithm processing currently is so that it can clean up or continue committing after a crash.
Without a distributed transaction manager, two-phase commit is not worth a lot, and it is actually dangerous: if transactions get stuck in the “prepared” phase but are not committed yet, they will continue to hold locks and (in the case of PostgreSQL) block autovacuum work even through server restarts, since such prepared transactions must be persistent.
This is difficult to get right.
QUESTION
I'm new to MongoDB, and the most difficult thing to understand is how to ensure data integrity.
I've got two collections, Post -> Comment (one to many).
Is there a way to store the number of comments for each post without using two-phase commit?
...ANSWER
Answered 2018-Mar-09 at 03:46
Since you mentioned embedding comments within the Post is not a viable option for your use case and you don't want to go with two-phase commit, I can think of the following options:
- Create a secondary index on the postId attribute of the Comment collection, and then use the count(...) function with postId on the Comment collection.
- Alternatively, have a map-reduce job that stores the commentCount and postId in a new collection every time a Comment document is added.
With both options you would not need to store commentNumbers in the Post document. One thing to note: since the commentsCount is not part of the Post document, this results in an extra query to Mongo to read the count.
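To make the first option concrete, here is a small plain-Java sketch of "derive the count on read", with an in-memory map standing in for the Comment collection indexed by postId; with the real MongoDB Java driver this would be an index on postId plus a countDocuments query filtered on postId. All class and method names here are illustrative, not from any driver.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CommentCountSketch {
    // postId -> comment bodies; the map key plays the role of the secondary index
    private final Map<String, List<String>> commentsByPost = new HashMap<>();

    public void addComment(String postId, String text) {
        commentsByPost.computeIfAbsent(postId, k -> new ArrayList<>()).add(text);
    }

    // No counter is maintained anywhere, so there is nothing to keep
    // consistent with the Post document; the count is derived per query.
    public long countComments(String postId) {
        return commentsByPost.getOrDefault(postId, List.of()).size();
    }
}
```

The trade-off matches the note above: writes touch only the Comment side (no cross-document update, hence no need for two-phase commit), at the cost of an extra query whenever the count is displayed.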
QUESTION
We have an application that has a number of entity classes for which there must be two tables. The tables are identical, with the only difference being the name. The common solutions offered here on SO are to use inheritance (a mapped superclass and a table-per-class strategy) or two persistence units with different mappings. We use the latter solution and the application is built on top of this approach, so it's now considered a given.
There are EJB methods which will do updates on both persistence contexts and must do so within one transaction. Both persistence contexts have the same data source, which is an XA-enabled connection to a Microsoft SQL Server database (2012 version). The only difference between the contexts is that one has a mapping XML to alter the table names for some entity classes and thus works on those tables.
One of the architecture leads would like to see XA transactions eliminated, since they cause a significant overhead on the database and apparently also make the logging and analysis of the queries that are executed more difficult, possibly also preventing some prepared statement caching. I don't know all the details, but for a lot of applications we've managed to eliminate XA. For this one, however, we currently can't because of the two persistence contexts.
Is there some way in this situation to get the updates to both contexts to happen in a transactional manner without XA? If so, how? If not, is there some architectural or configuration change possible to use one persistence context without having to turn to subclasses for the two tables?
I am aware of these questions: Is it possible to use more than one persistence unit in a transaction, without it being XA? and XA transaction for two phase commit
Before voting to close this as a duplicate, take note that the situations are different. We're not in a read-only situation like in the first question, both contexts operate on the same database, we're using MSSQL exclusively and we're on GlassFish, not Weblogic.
...ANSWER
Answered 2017-Feb-02 at 14:57After some experimenting, I've found that it is in fact possible to have two persistence units that use non-XA resources within a container-managed transaction. However, it may be implementation-dependent. TL;DR at the bottom.
JTA should require XA resources if more than one resource participates in a transaction. It uses X/Open XA for the purpose of allowing distributed transactions, for example over multiple databases, or a database and JMS queue. There is apparently some optimization (it may be GlassFish-specific, I'm not sure) that allows the last participant to be non-XA. In my use-case, however, both persistence units are for the same database (but a different set of tables, with some possible overlap) and both are non-XA. This means we'd expect an exception to be thrown when the second resource does not support XA.
Suppose this is our persistence.xml
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install two-phase-commit
You can use two-phase-commit like any standard Java library. Please include the jar files in your classpath. You can also use any IDE and run and debug the two-phase-commit component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.