The transaction log for the database is full

I have a long-running process that holds open a transaction for its full duration.

I have no control over how this is done.

Because the transaction stays open for the full duration, when the transaction log fills up, SQL Server cannot increase the size of the log file.

So the process fails with the error "The transaction log for database 'xxx' is full".

I tried to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.

Not sure what I should try next. The process runs for several hours, so it's not easy to experiment with trial and error.

Any ideas?

If anyone is interested, the process is importing an organization into Microsoft Dynamics CRM 4.0.

There is plenty of disk space, the database is in simple recovery mode, and we backed up the log before kicking off the process.

- = - = - = - = - UPDATE - = - = - = - = -

Thanks for the comments, everyone. The following is what led me to believe that the log was not growing because of the open transaction:

I get the following error ...

 Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception: System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases 

So, following that advice, I checked the log_reuse_wait_desc column in sys.databases, and it was holding the value ACTIVE_TRANSACTION.
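For reference, the value can be checked with a query along these lines (the database name here is just a placeholder):

    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'xxx';   -- placeholder database name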

According to Microsoft: http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx

This means the following:

A transaction is active (all recovery models).

• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions" later in that topic.

• A deferred transaction (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see "Deferred Transactions".

Am I misunderstanding something?

- = - = - = - UPDATE 2 - = - = - = -

The process has just kicked off with the initial log file size set to 30 GB. It will take a couple of hours to complete.

- = - = - = - Final UPDATE - = - = - = -

The problem was in fact caused by the log file consuming all available disk space. On the last attempt I freed up 120 GB, and it still used all of it and eventually failed.

I hadn't realised this was happening previously because when the process ran overnight, it rolled back on failure. This time I was able to check the log file size before the rollback.

Thank you all for your input.

+94
sql sql-server sql-server-2008 dynamics-crm
Jul 16 '13 at 11:09
11 answers

Is this a one-time script, or a regularly occurring job?

In the past, for one-off projects that temporarily needed a lot of log space, I created a second log file and made it huge. Once the project was complete, we removed the extra log file.
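Something along these lines, as a sketch — the database name, logical file name, path and sizes are all placeholders, not the exact commands we ran:

    -- Add a temporary second log file with plenty of headroom
    ALTER DATABASE YourDb
    ADD LOG FILE (
        NAME = YourDb_TempLog,
        FILENAME = N'E:\SQLLogs\YourDb_TempLog.ldf',
        SIZE = 50GB,
        FILEGROWTH = 1GB
    );

    -- Once the project is finished (and the extra file holds no active log), remove it
    ALTER DATABASE YourDb REMOVE FILE YourDb_TempLog;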

+18
Jul 16 '13 at 11:35

To fix this problem, change the Recovery Model to Simple, then shrink the log file:

1. Database Properties > Options > Recovery Model > Simple

2. Database Tasks > Shrink > Files > Log

Done.

Then check the size of the database log file at Database Properties > Files > Database Files > Path

To check the full SQL Server log: open the Log File Viewer in SSMS > Database > Management > SQL Server Logs > Current
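If you prefer T-SQL, a sketch of the same size check (run in the database in question):

    -- File sizes and paths for the current database (size is in 8 KB pages)
    SELECT name, physical_name, size / 128 AS size_mb
    FROM sys.database_files;

    -- Log size and percentage used for every database
    DBCC SQLPERF(LOGSPACE);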

+81
Jul 14 '14 at 3:16

I had this error once, and in the end it was because the server's hard drive had run out of free disk space.
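If you want to rule that out from inside SQL Server, a sketch of a check (sys.dm_os_volume_stats requires SQL Server 2008 R2 or later):

    -- Free space on every volume that holds a database file
    SELECT DISTINCT vs.volume_mount_point,
           vs.available_bytes / 1048576 AS free_mb
    FROM sys.master_files AS f
    CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs;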

+32
Jun 05 '14 at

Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS under "Database Properties > Files".
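The same settings can be inspected and changed from T-SQL; a sketch (the database and logical file names are placeholders):

    -- Check the current growth settings for the files of the current database
    SELECT name, type_desc, growth, is_percent_growth, max_size
    FROM sys.database_files;

    -- Allow the log file to grow without limit, in 512 MB increments
    ALTER DATABASE YourDb
    MODIFY FILE (NAME = YourDb_log, FILEGROWTH = 512MB, MAXSIZE = UNLIMITED);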

+15
Jul 16 '13 at 11:21

This is an old-school approach, but if you're performing an iterative update or insert operation in SQL that runs for a long time, it's a good idea to periodically (programmatically) call CHECKPOINT. Calling CHECKPOINT forces SQL Server to write all of those memory-only changes (dirty pages, as they're called) and items held in the transaction log to disk. This has the effect of periodically clearing out the transaction log, which prevents problems like the one described.
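A sketch of the pattern — the table, column and batch size are made up for illustration:

    -- Process rows in batches and checkpoint between batches so dirty pages and
    -- log records are flushed (with SIMPLE recovery the log space can then be reused).
    DECLARE @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        UPDATE TOP (10000) dbo.SomeTable   -- hypothetical table and condition
        SET Processed = 1
        WHERE Processed = 0;

        SET @rows = @@ROWCOUNT;

        CHECKPOINT;
    END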

+9
Jul 16 '13 at 12:17

The following will truncate the log:

    USE [yourdbname]
    GO

    -- TRUNCATE TRANSACTION LOG --
    -- Note: BACKUP LOG ... WITH TRUNCATE_ONLY was removed in SQL Server 2008;
    -- on 2008 and later, switch the database to the SIMPLE recovery model instead.
    DBCC SHRINKFILE(yourdbname_log, 1)
    BACKUP LOG yourdbname WITH TRUNCATE_ONLY
    DBCC SHRINKFILE(yourdbname_log, 1)
    GO

    -- CHECK DATABASE HEALTH --
    ALTER FUNCTION [dbo].[checker]() RETURNS int AS BEGIN RETURN 0 END
    GO
+2
Apr 05 '14 at

If your database recovery model is FULL and you don't have a transaction log backup maintenance plan, you will get this error because the transaction log fills up due to LOG_BACKUP.

That will prevent any action on this database (for example, shrinking), and the SQL Server Database Engine will raise error 9002.

To overcome this behavior, I advise you to check out "The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP", which shows detailed steps to solve the issue.
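In short, the fix in that situation is to start taking log backups (or switch to the SIMPLE recovery model if point-in-time recovery isn't needed). A sketch, with a placeholder database name and path:

    -- Back up the transaction log so the space inside it can be reused
    BACKUP LOG YourDb
    TO DISK = N'D:\Backups\YourDb_log.trn'   -- placeholder path
    WITH INIT;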

+1
Jul 06 '16 at 0:04

I got the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from my database tables in order to free up disk space. I realized that this error occurs if the number of rows to be deleted is greater than 1,000,000. So instead of using a single DELETE statement, I split the delete task up using DELETE TOP (1000000).

For example:

instead of using this statement:

 DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE()) 

I repeated the following statement:

 DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE()) 
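A sketch of the repetition, wrapped in a loop so it stops once nothing is left to delete:

    WHILE 1 = 1
    BEGIN
        DELETE TOP (1000000) FROM Vt30
        WHERE Rt < DATEADD(YEAR, -1, GETDATE());

        IF @@ROWCOUNT = 0 BREAK;
    END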
0
Aug 14 '18 at 4:14

My problem was solved by running a limited delete multiple times, like this:

Before

 DELETE FROM TableName WHERE Condition 

After

 DELETE TOP(1000) FROM TableName WHERE Condition 
0
Jun 30 '19 at 5:45

The answer to the question is not about deleting rows from a table, but about tempDB space being used up by the active transaction. This mainly happens when performing an upsert, where we try to insert, update, and delete within one transaction. The only option is to make sure the database is set to the simple recovery model and also to increase the file size to the maximum (add another filegroup). This has its pros and cons, but those are the only options.

Another option is to split the upsert into two operations: one that performs the insert, and another that performs the update and delete.

-1
Oct 07 '18 at 19:00

Try this:

    USE YourDB;
    GO

    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE YourDB
    SET RECOVERY SIMPLE;
    GO

    -- Shrink the truncated log file to 50 MB.
    DBCC SHRINKFILE (YourDB_log, 50);
    GO

    -- Reset the database recovery model.
    ALTER DATABASE YourDB
    SET RECOVERY FULL;
    GO

I hope this helps.

-1
Jun 10 '19 at 15:33


