Discussion:
The transaction log for the database is full
c***@hotmail.com
2007-10-03 07:36:23 UTC
When I see this error in DB2, what is the best solution to it?

Delete the log?
Increase the size of the log?
Or ...?

If I want to delete or clear the log,
what command can I use?

Thanks in advance.
Blair Kenneth Adamache
2007-10-03 08:12:35 UTC
Increase the size of the log, and archive your logs. If you delete the logs,
you will likely have problems in the event of a crash, or when restoring a
database and rolling forward to a point in time.
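As a rough sketch of what that looks like from the command line (SAMPLE is a placeholder database name and the values are only illustrative, so size them for your own workload):

    # look at the current log configuration
    db2 get db cfg for SAMPLE | grep -i log

    # larger log files (LOGFILSIZ is in 4 KB pages) and more of them;
    # LOGFILSIZ and LOGPRIMARY only take effect once all applications
    # have disconnected and the database is restarted
    db2 update db cfg for SAMPLE using LOGFILSIZ 10000
    db2 update db cfg for SAMPLE using LOGPRIMARY 10 LOGSECOND 20

    # switch from circular to archive logging; this makes the database
    # recoverable and puts it into backup pending, so a full backup is
    # required before applications can connect again
    db2 update db cfg for SAMPLE using LOGARCHMETH1 DISK:/db2/archive_logs/
    db2 backup db SAMPLE to /db2/backups

Archived logs then accumulate in that directory, so plan some housekeeping for them as well.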

To read more:
http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.admin.doc/doc/c0006086.htm
Kelly Schlamb
2007-10-03 13:33:16 UTC
In addition to following Blair's suggestion, you may want to look at the work being performed (if it's under your control). Committing changes more frequently and ensuring that old uncommitted transactions are not left around will allow DB2 to reclaim the log space more effectively, potentially alleviating the "transaction log full" situations that you are seeing.
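To make that concrete, here is a small CLP sketch (the schema, table, and predicate are invented). The +c option turns autocommit off for a statement, so the unit of work stays open, and all of its log records stay active, until the explicit COMMIT:

    db2 connect to SAMPLE

    # one large unit of work: no log space can be reclaimed until the commit
    db2 +c "INSERT INTO my_schema.target SELECT * FROM my_schema.staging"
    db2 commit

    # the same work split into smaller, separately committed pieces lets
    # DB2 reuse log space as the job runs
    db2 +c "INSERT INTO my_schema.target SELECT * FROM my_schema.staging WHERE part_no = 1"
    db2 commit
    db2 +c "INSERT INTO my_schema.target SELECT * FROM my_schema.staging WHERE part_no = 2"
    db2 commit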

Regards,
Kelly Schlamb
b***@gmail.com
2007-10-03 23:47:13 UTC
Should I commit more frequently to prevent the log from filling up in a circular logging environment?

When we IMPORT multiple records into a table, there is an option called COMMITCOUNT that performs a commit after every batch of records, which can prevent the log-full problem when importing a large number of records.

So I feel confused. Should we commit more or less frequently to prevent the log-full problem?

Thanks in advance...
Ian
2007-10-04 05:38:20 UTC
Post by b***@gmail.com
Should I commit more frequently to prevent the log from filling up in a circular logging environment?
Yes. More frequent commits result in smaller units of work, which
means that you are less likely to have a single unit of work that
exceeds the size of your transaction log.
Post by b***@gmail.com
When we IMPORT multiple records into a table, there is an option called COMMITCOUNT that performs a commit after every batch of records, which can prevent the log-full problem when importing a large number of records.
So I feel confused. Should we commit more or less frequently to prevent the log-full problem?
Commit more frequently (make the COMMITCOUNT parameter smaller). Note that if
you don't specify COMMITCOUNT, DB2 will try to import the entire file in
a single unit of work.
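For example (the file, table, and interval here are just placeholders):

    # no COMMITCOUNT: the whole file is inserted as one unit of work
    db2 "IMPORT FROM big_file.del OF DEL INSERT INTO my_schema.target"

    # commit after every 1000 rows so log space can be reused while the import runs
    db2 "IMPORT FROM big_file.del OF DEL COMMITCOUNT 1000 INSERT INTO my_schema.target"

If I remember right, recent releases also accept COMMITCOUNT AUTOMATIC, which lets IMPORT pick its own commit points.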
b***@gmail.com
2007-10-04 08:40:53 UTC
My understanding is this:

Transaction log files have two states: active and archived.

Active log files cannot be reused because they contain transactions whose changes have not yet been written to the containers.

Suppose that, in a circular logging environment with 3 primary and 2 secondary log files, we import 1,000,000 small records into a table and set COMMITCOUNT = 1. Is it possible for all 5 transaction logs to be occupied before those transactions' changes are written to the containers?

I am confused because I read somewhere that when we import multiple records into a table, we have to set COMMITCOUNT greater than 1 to prevent the log-full problem.
Any ideas?

Thanks in advance...
Kelly Schlamb
2007-10-04 13:41:02 UTC
Basically, the active logs are those logs that would be needed for crash recovery purposes, and the beginning of the active log space is determined by two things: 1) the oldest uncommitted transaction, and 2) the age of changed pages in the buffer pool(s) that haven't been persisted to disk yet (and these could be for already committed transactions).

With circular logging, the active space "wraps" through the logs and the current head of the active space moves as 1) the oldest transaction commits and/or 2) old pages in the buffer pool get flushed out. This is what I was referring to as log reclaim. In any case, the log space that gets used cannot wrap past this head point; otherwise, log records that would be required for recovery purposes would be overwritten.

Consider the case where you start a transaction and do some small amount of work, but you don't commit it. Then you have a bunch of other transactions doing a whole lot of work, committing as they go. Eventually, because of the one open transaction, the log space is going to run out and applications trying to do work will get SQL0964N errors. This is why we suggest committing your transactions freely.
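When the SQL0964N actually hits, one way to see who is pinning the head of the log is the database snapshot (SAMPLE and the application handle below are placeholders, and the exact field wording can vary by release):

    db2 get snapshot for database on SAMPLE | grep -i "oldest transaction"
    # "Appl id holding the oldest transaction" points at the culprit

    db2 list applications show detail
    # match that application to a user/program and ask them to commit;
    # forcing the connection off is the heavy-handed last resort
    db2 "force application (123)"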

In the case of IMPORT, as the other person suggested, it will try to insert each row and just do one commit at the end. If it's a large import then this could fill your logs. COMMITCOUNT tells import to commit after every X rows inserted, to help alleviate log full situations. So, it's definitely useful to use COMMITCOUNT but you have to choose the value wisely. Setting it to something really small means that you're protecting yourself from these log full issues, but it does add a bit of overhead.
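A back-of-the-envelope way to choose it: under circular logging the total log capacity is roughly (LOGPRIMARY + LOGSECOND) x LOGFILSIZ x 4 KB, and the log records for one committed batch have to fit comfortably inside that (the numbers below are made up):

    db2 get db cfg for SAMPLE | grep -iE "LOGFILSIZ|LOGPRIMARY|LOGSECOND"
    # e.g. LOGFILSIZ = 1000 pages, LOGPRIMARY = 3, LOGSECOND = 2
    #   => roughly (3 + 2) * 1000 * 4 KB = about 20 MB of log space
    # pick COMMITCOUNT so that one batch of rows (row data plus per-row
    # logging overhead) stays well below that figure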

Hope this helps.
Kelly
