transaction-log | Automatically rebalance or trade your crypto portfolio | Portfolio library
kandi X-RAY | transaction-log Summary
This repository will automatically rebalance your personal cryptocurrency portfolio.
transaction-log Key Features
transaction-log Examples and Code Snippets
Community Discussions
Trending Discussions on transaction-log
QUESTION
MSSQL V18.7.1: The transaction log on our databases is backed up every hour. The log file auto-grows in 128 MB increments, with a maximum size of 5 GB.
This runs smoothly, but sometimes our application gets the error: 'The transaction log for database Borculo is full due to LOG_BACKUP'.
We got this message at 8:15 AM, while the log backup had run (and emptied the log) at 8:01 AM.
I would really like a script or command to check what caused this exponential growth.
We could back up more often (every 30 minutes) or change the size, but that would not solve the underlying problem. Basically, this problem should not occur with the number of transactions we have.
Probably some task is running (in our ERP) which causes this.
This does not happen every day, but in the last month this is the 2nd time.
The transaction log to get this information from is a backed-up one, not the active one.
Can anyone point me in the right direction?
Thanks
...ANSWER
Answered 2021-Apr-12 at 14:02

An hourly transaction log backup means that in case of a disaster you could lose up to an hour's worth of data.
It is usually advised to take your transaction log backups as frequently as possible. Every 15 minutes is usually a good starting point, but if it is a business-critical database, consider a transaction log backup every minute.
Also, why would you limit the size of your transaction log file? If you have more space available on the disk, allow the file to grow if it needs to.
It is possible that the transaction log file is getting full because some maintenance task is running (index/statistics maintenance, etc.), and because the log file is not backed up for an entire hour, the log doesn't get truncated for an hour and the file reaches 5 GB in size. Hence the error message.
Things I would do to sort this out:
- Remove the file size limit, or at least increase it to allow the log to grow bigger than 5 GB.
- Take transaction log backups more frequently, maybe every 5 minutes.
- Set the log file growth increment to at least 1 GB instead of 128 MB (to reduce the number of VLFs); see the sketch after this list.
- Monitor closely what is running on the server when the log file gets full; it is very likely to be a maintenance task (or maybe a bad hung connection).
- Instead of setting a maximum size on the log file, set up alerts to inform you when the log file is growing too much. This will let you investigate the issue without any interference or potential downtime for the end users.
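A minimal T-SQL sketch of the first and third suggestions, plus a query to see why the log cannot currently be truncated. The logical log file name below is an assumption for illustration; check sys.database_files for the real one.

-- Remove the 5 GB cap, then grow in 1 GB increments
ALTER DATABASE [Borculo] MODIFY FILE (NAME = N'Borculo_log', MAXSIZE = UNLIMITED);
ALTER DATABASE [Borculo] MODIFY FILE (NAME = N'Borculo_log', FILEGROWTH = 1GB);

-- See what is preventing log truncation and how full the log currently is
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'Borculo';

SELECT total_log_size_in_bytes, used_log_space_in_percent
FROM sys.dm_db_log_space_usage;   -- run in the context of the database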
QUESTION
I've got some databases that are around 500 MB with log files that are 50+ GB.
Reading through, I've seen that transaction log backups were never taken, so I suspect that's the reason for this growth over the years. I'm setting up Always On replication and would like to empty out the transaction log files prior to the Always On setup.
I've followed some answers like: https://theitbros.com/truncate-sql-server-2012-transaction-logs/ https://www.sqlshack.com/sql-server-transaction-log-backup-truncate-and-shrink-operations/
But I'm not able to shrink the files to any moderate size. Is there a way to empty out the log files and bring them back down to roughly 100 MB?
I've set the DBs to the simple recovery model and ran the T-SQL below, but it's still not releasing space. When opening the reports, I see that there is no empty space to release, even after switching to the simple recovery model. I'm okay with losing point-in-time restore since, once the logs have been resized, I will flip everything back to full and take a full backup.
...ANSWER
Answered 2020-Aug-05 at 13:30

Remove the TRUNCATEONLY.
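The answer's own snippet is not included above; a minimal sketch of a DBCC SHRINKFILE call without the TRUNCATEONLY option, using a placeholder database and logical log file name, might look like this:

USE [YourDatabase];                          -- placeholder database name
DBCC SHRINKFILE (N'YourDatabase_log', 100);  -- shrink the log to a 100 MB target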
QUESTION
I am running SQL Server in Windows 10. I have gotten one of the following two errors in all recent queries:
The transaction log for database 'MyDB' is full due to 'ACTIVE_TRANSACTION'.
An error occurred while executing batch. Error message is: There is not enough space on the disk.
I have tried DBCC SQLPERF('logspace') to analyze disk space. The database has very little log space left after attempting to perform a query, as suggested here. I do not anticipate being able to resolve the issue by shrinking the log file. I tried CREATE DATABASE, then ...
ANSWER
Answered 2020-Mar-23 at 14:18

The first thing to understand is what kind of recovery mode the database is using.
If you are in FULL recovery mode, it's not enough to take regular backups. You must also take frequent (every 20 minutes, or even more often) transaction log backups. SQL Server will never recycle the transaction log unless you do this, and it will continue to grow until you run out of space.
After you have run a transaction log backup, you should be able to shrink the log file and reclaim that disk space.
If you are not in FULL recovery mode, you may be able to just manually clear or expand the transaction log in SQL Server Management Studio.
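A minimal sketch of the log-backup-then-shrink sequence described above. The backup path and logical file name are placeholders; 'MyDB' is the database named in the error message.

-- Back up the transaction log so the space inside it can be reused
BACKUP LOG [MyDB] TO DISK = N'D:\Backups\MyDB_log.trn';

-- Then shrink the physical log file back down (target size is in MB)
USE [MyDB];
DBCC SHRINKFILE (N'MyDB_log', 512);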
QUESTION
I have limited space on the server and I have to remove transactions periodically. Moreover, I use the query from the answer to this Stack Overflow question: How do you clear the SQL Server transaction log?
...ANSWER
Answered 2020-Mar-15 at 15:21

This is just an XY problem. The problem isn't the transaction log size; it's that you aren't taking transaction log backups and wondering why the transaction log is growing. It's growing because you aren't backing it up.
Either you need to add an agent task to regularly create transaction log backups, or change the recovery model. Considering your statement "actually the transaction logs are not very important", I suggest the latter, and then set the max size of the file:
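The answer's own snippet is not included above; a minimal sketch of switching to the SIMPLE recovery model and capping the log file size, with placeholder database and logical file names, might be:

ALTER DATABASE [YourDb] SET RECOVERY SIMPLE;

-- Cap the log file so it cannot outgrow the limited disk space
ALTER DATABASE [YourDb]
MODIFY FILE (NAME = N'YourDb_log', MAXSIZE = 1024MB);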
QUESTION
In relation to another post of mine, I realized that there is more we can say on Stack Overflow about distributed XA transactions and their internals. The common opinion is that distributed transactions are slow.
What are the internals of XA transactions, and how can we tune them?
...ANSWER
Answered 2019-Jun-06 at 07:40

First, let's establish some common vocabulary. We have two or more parties:
- Transaction Coordinator: this is where our business logic resides. This is the side that orchestrates the distributed transaction.
- Transaction Participant (XAResource): this can be any database supporting distributed transactions, or some other entity supporting the XA protocol, such as a messaging service.
Let's highlight the main API functions performed during an XA transaction:
- start(XID)
- end(XID)
- prepare(XID)
- commit(XID)
The first two operations are visible in our source code: this is when we initiate the transaction, do some work and then say commit. Once we send the commit message from the source code, the Transaction Coordinator and the Transaction Participant take over and do some work.
The XID parameter is used as a unique key identifying the transaction. Each transaction coordinator and each participant can take part in more than one transaction at any time, so the XID is needed to identify them. It has two parts: one identifies the global transaction, the other identifies the participant. This means that each participant in the same transaction has its own sub-identifier. Once we reach the prepare phase, each Transaction Participant (XAResource) writes its work to its transaction log and votes whether its part is OK or FAILED. Once all votes are received, the transaction is committed. If the power goes down, both the Transaction Coordinator and the Transaction Participant keep their transaction logs durable and can resume their work. If one of the participants votes FAILED, a subsequent rollback is initiated.
Implications in terms of performance
According to the CAP theorem, each application (functionality) falls somewhere within the triangle defined by Consistency, Availability and Partition tolerance. The main issue with XA/distributed transactions is that they require extreme consistency.
This requirement results in very high network and disk I/O activity.
Disk activity: both the transaction coordinator and the transaction participant need to maintain a transaction log. This log is held on disk, and each transaction needs to force information into it; this information is not buffered. High parallelism therefore results in a large number of small writes forced to disk in each transaction log. Normally, copying a 1 GB file from one hard disk to another is a very fast operation; if we split the file into 1,000,000 parts of a couple of bytes each, the transfer becomes extremely slow.
Disk forcing grows with the number of participants:
- 1 participant is treated as a normal transaction
- 2 participants require 5 disk forces
- 3 participants require 7 disk forces
Network activity: in order to evaluate a distributed XA transaction, we need to compare it to something. The network activity during a normal transaction is the following: 3 network trips (enlist the transaction, send some SQL, commit).
For an XA transaction it is a bit more complicated. If we have 2 participants: enlisting the resources in the transaction takes 2 network trips, sending the prepare message takes another 2 trips, and the commit takes another 2 trips.
The actual network activity grows even further as you enlist more participants in the transaction.
The conclusion on how to make a distributed transaction fast:
- Ensure you have a fast network with minimum latency.
- Ensure you have hard drives with minimum latency and maximum random write speed. A good SSD can do miracles.
- Try to enlist as few distributed resources as possible in the transaction.
- Try to divide your data into data that has strong requirements for consistency and availability (live data) and data that has low consistency requirements. For live data, use distributed transactions; for offline data, use normal transactions, or no transaction at all if your data does not require it.
My answer is based on what I have read in "XA Exposed" (and on personal experience), an article that appears to no longer be available on the internet, which is what triggered me to write this.
QUESTION
I'm working on optimising the indexes on a very large database. I need to know the frequency of insert/update statements compared to the frequency of selects. I'm wondering what is the best way to determine this.
Is it possible to do this by using the transaction-log? I'm looking for a method that is minimally invasive for the database performance.
Thank you
...ANSWER
Answered 2019-Mar-01 at 13:14

You should use the sys.dm_db_index_usage_stats DMV.
The user_updates column answers your first question (insert/update statements):
Number of updates by user queries. This includes Insert, Delete, and Updates representing number of operations done, not the actual rows affected. For example, if you delete 1000 rows in one statement, this count increments by 1.
Other columns will help you determine how useful this index was for user queries:
- user_seeks: Number of seeks by user queries.
- user_scans: Number of scans by user queries that did not use a 'seek' predicate.
- user_lookups: Number of bookmark lookups by user queries.
The only thing to remember here is that the DMV holds the statistics only for the time the database is online; if you restart your server, or simply take the database offline or restore it, the statistics are reset to 0.
As for "Is it possible to do this by using the transaction-log?": SELECT statements are not logged.
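A minimal sketch of a query over this DMV that compares read and write activity per index in the current database (the aliases are illustrative; the DMV and catalog views are standard):

-- Reads (seeks + scans + lookups) vs. writes (user_updates) per index
SELECT
    OBJECT_NAME(s.object_id)                      AS table_name,
    i.name                                        AS index_name,
    s.user_seeks + s.user_scans + s.user_lookups  AS reads,
    s.user_updates                                AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY writes DESC;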
QUESTION
edited terminology for accuracy:
We have large, daily flows of data within our data mart. Some of the largest, done with stored procedures managed by SSIS, take several hours. These long-running stored procedures are preventing the transaction log from clearing (which compounds the issue, because we have numerous SPs running at once, which are then all writing to the T-log with no truncation). Eventually this breaks our database and we're forced to recover from the morning snapshot.
We have explored doing "sub"-commits within the SP, but as I understand it you can't fully release the transaction log within an active stored procedure, because it is itself a transaction.
Without refactoring our large SP's to run in batches, or something to that effect, is it possible to commit to the transaction log periodically within an active SP, so that we release the lock on the transaction log?
edit / extension:
Perhaps I was wrong above: Will committing intermittently within the SP allow the transaction-log to truncate?
...ANSWER
Answered 2019-Jan-14 at 19:22

"Will committing intermittently within the SP allow the transaction-log to truncate?"
If the client starts a transaction, it's not recommended to COMMIT that transaction inside a stored procedure. It's not allowed to exit the stored procedure with a different @@trancount than it was entered with.
The following pattern is technically allowed, although I have never seen it used in the real world:
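The answer's own example is not reproduced above; one hedged sketch of such a pattern (committing the caller's transaction mid-procedure and immediately opening a new one so @@TRANCOUNT is unchanged on exit; the procedure name and the work it does are hypothetical) could look like this:

CREATE PROCEDURE dbo.usp_LongRunningLoad   -- hypothetical procedure name
AS
BEGIN
    -- ... first chunk of work runs inside the caller's transaction ...

    IF @@TRANCOUNT > 0
    BEGIN
        COMMIT TRANSACTION;    -- hardens the work done so far
        BEGIN TRANSACTION;     -- reopen so @@TRANCOUNT matches on exit
    END;

    -- ... remaining work ...
END;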
QUESTION
While deleting a large number of records, I get this error:
The transaction log for database 'databasename' is full
I found this answer very helpful; it recommends the following steps (a T-SQL sketch of the same sequence follows the list):
- Right-click your database in SQL Server Manager, and check the Options page.
- Switch Recovery Model from Full to Simple
- Right-click the database again. Select Tasks > Shrink > Files. Shrink the log file to a proper size (I generally stick to 20-25% of the size of the data files)
- Switch back to Full Recovery Model
- Take a full database backup straight away
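A rough T-SQL equivalent of those five GUI steps; the database name, logical log file name, target size, and backup path are placeholders:

ALTER DATABASE [databasename] SET RECOVERY SIMPLE;

USE [databasename];
DBCC SHRINKFILE (N'databasename_log', 1024);   -- target size in MB

ALTER DATABASE [databasename] SET RECOVERY FULL;

BACKUP DATABASE [databasename]
TO DISK = N'D:\Backups\databasename_full.bak';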
Question: in step 3, when I go to Shrink > Files and choose Log from the File type dropdown menu, it tells me that 99% of the allocated space is free.
Out of ~4500MB of allocated space, there is ~4400MB free (the data file size is ~3000MB).
Does that mean I'm good to go, and there is no need to shrink?
I don't understand this. Why would that be the case, given the warning I received initially?
...ANSWER
Answered 2018-Oct-24 at 21:53

I'm not one for hyperbole, but there are literally billions of articles written about SQL Server transaction logs.
Reader's Digest version: if you delete 1,000,000 rows at a time, the log is going to get large because SQL Server is writing those 1,000,000 deletes in case it has to roll back the transaction. The space needed to hold those records does not get released until the transaction commits. If your log is not big enough to hold 1,000,000 deletes, it will fill up, throw that error you saw, and roll back the whole transaction. Then all that space will most likely get released. Now you have a big log with lots of free space.
You probably hit a limit on your log file at 4.5 GB and it won't get any bigger. To avoid filling your logs in the future, chunk your transactions down into smaller amounts, like deleting 1,000 records at a time. A shrink operation will reduce the physical size of the file, for example from 4.5 GB down to 1 GB.
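A minimal sketch of deleting in small chunks, as suggested above; the table name and filter are placeholders:

DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    -- Each iteration is its own small transaction, keeping log usage low
    DELETE TOP (1000) FROM dbo.BigTable
    WHERE CreatedDate < '20180101';

    SET @rows = @@ROWCOUNT;
END;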
QUESTION
Is it a good option to use AbstractServerConnectionFactory.closeConnection(ClientConnId)? I am using this, but what happens is that after a few days of processing the server starts giving a "too many open files" error. I investigated the issue by running "lsof" and inspecting "/proc/pid/fd", which showed me around 280 file descriptors for sockets and pipes.
...ANSWER
Answered 2018-Aug-13 at 15:17

Closing the socket using its ID should close it completely.
However, simply set the soTimeout property and the operating system will notify the framework, which will close the socket if no data is received in that time.
Regarding throw new SoftEndOfStreamException: I am not sure where you are doing that, but I can't think of a scenario where it would close the server socket. That exception should only be thrown from a deserializer (when it detects the socket closed between messages).
QUESTION
So in my WebLogic application we are using a jtaWeblogicTransactionManager. There is a default timeout, which can be overridden with the annotation @Transactional(timeout = 60). I created an infinite loop to read data from the DB, which correctly times out:
ANSWER
Answered 2018-May-04 at 08:05

Setting com.atomikos.icatch.threaded_2pc=true in jta.properties fixed my problem. I don't know why this default value was changed to false in the web application.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install transaction-log
Support