Transactions are fundamental to SQL databases, and of course to Oracle. You cannot persist a write to an Oracle table without issuing a commit, and a commit means a transaction.
Oracle does allow us to specify tables as NOLOGGING, which suppresses redo generation. But this is intended only for bulk loads (with the INSERT /*+ APPEND */ hint), and the standard advice is to switch the table back to LOGGING as soon as possible, because unlogged data is not recoverable. And if you don't want to recover the data, why write it in the first place?
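For illustration, a direct-path bulk load into a NOLOGGING table might look like this (the table and source names here are hypothetical, and this is a sketch rather than a recommended pattern):

    -- suppress redo for the duration of the bulk load
    alter table staging_table nologging;

    -- direct-path insert: APPEND writes above the high-water mark
    insert /*+ APPEND */ into staging_table
    select * from external_source;

    commit;

    -- re-enable logging, then take a backup: the loaded
    -- data cannot be recovered from redo
    alter table staging_table logging;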
An alternative approach is to buffer records in memory and then write them out with a bulk insert. This is pretty fast.
Here is a simple log table and a proof-of-concept package:
create table log_table
    (ts timestamp(6)
     , short_text varchar2(128)
     , long_text varchar2(4000)
     )
/

create or replace package fast_log is
    procedure init;
    procedure flush;
    procedure write (p_short log_table.short_text%type
                     , p_long log_table.long_text%type);
end fast_log;
/
Log entries are stored in a PL/SQL collection, an in-memory structure with session scope. The INIT() procedure initializes the buffer. FLUSH() writes the buffer's contents to LOG_TABLE. WRITE() puts a record into the buffer and, when the buffer holds the target number of records, calls FLUSH().
create or replace package body fast_log is

    type log_buffer is table of log_table%rowtype;
    session_log log_buffer;

    write_limit constant pls_integer := 1000;
    write_count pls_integer;

    procedure init is
    begin
        session_log := log_buffer();
        session_log.extend(write_limit);
        write_count := 0;
    end init;

    procedure flush is
    begin
        dbms_output.put_line('FLUSH::'||to_char(systimestamp,'HH24:MI:SS.FF6')
                                      ||'::'||to_char(write_count));
        forall i in 1..write_count
            insert into log_table values session_log(i);
        init;
    end flush;

    procedure write (p_short log_table.short_text%type
                     , p_long log_table.long_text%type)
    is
        pragma autonomous_transaction;
    begin
        write_count := write_count + 1;
        session_log(write_count).ts := systimestamp;
        session_log(write_count).short_text := p_short;
        session_log(write_count).long_text := p_long;
        if write_count = write_limit then
            flush;
        end if;
        commit;
    end write;

begin
    init;
end fast_log;
/
The pragma AUTONOMOUS_TRANSACTION is used so that the COMMIT happens without affecting the surrounding transaction that triggered the flush.
The call to DBMS_OUTPUT.PUT_LINE() is there to track progress. So, let's see how fast it goes...
SQL> begin
  2      fast_log.flush;
  3      for r in 1..3456 loop
  4          fast_log.write('SOME TEXT', 'blah blah blah '||to_char(r));
  5      end loop;
  6      fast_log.flush;
  7  end;
  8  /
FLUSH::12:32:22.640000::0
FLUSH::12:32:22.671000::1000
FLUSH::12:32:22.718000::1000
FLUSH::12:32:22.749000::1000
FLUSH::12:32:22.781000::456

PL/SQL procedure successfully completed.

SQL>
Hmmm, 3456 records in 0.12 seconds: not too shabby. The main problem with this approach is the need to remember to flush the buffer so that leftover entries are written out, for example at the end of the session. If something causes the session to crash, any unflushed entries are lost. The other problem with working in memory is that it consumes memory (obviously), so we cannot make the cache too large.
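One way to mitigate the leftover-entries problem is to make the final FLUSH() explicit in the calling code, including in the error path. A sketch (the body of the block is hypothetical):

    begin
        -- ... application work that calls fast_log.write() ...
        fast_log.flush;  -- write out any remaining buffered entries
    exception
        when others then
            fast_log.flush;  -- best effort: salvage buffered entries before failing
            raise;
    end;

This still does not help if the server process dies outright, since the buffer only exists in session memory.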
For comparison, I added a procedure to the package that inserts a single record directly into LOG_TABLE each time it is called, again using an autonomous transaction:
procedure write_each (p_short log_table.short_text%type
                      , p_long log_table.long_text%type)
is
    pragma autonomous_transaction;
begin
    insert into log_table values (systimestamp, p_short, p_long);
    commit;
end write_each;
Here are its timings:
SQL> begin
  2      fast_log.flush;
  3      for r in 1..3456 loop
  4          fast_log.write_each('SOME TEXT', 'blah blah blah '||to_char(r));
  5      end loop;
  6      fast_log.flush;
  7  end;
  8  /
FLUSH::12:32:44.157000::0
FLUSH::12:32:44.610000::0

PL/SQL procedure successfully completed.

SQL>
Wall-clock timings are notoriously unreliable, but the batched approach is two to three times faster than inserting records one at a time. Even so, I could execute more than three thousand discrete transactions in under half a second, on a far-from-top-of-the-range laptop. So the question is: how much of a bottleneck is the logging, really?
For the avoidance of doubt:
@JulesLt posted his answer while I was working on my proof of concept. While our views are similar, the differences in the suggested workarounds make this worth posting as well.
"What is the timing for WRITE_EACH without the autonomous transaction, just a single commit at the end? My timings suggest it is unimportant, and that the bulk insert is the big win."
My timings suggest something a little different. Replacing the per-record COMMIT with a single COMMIT at the end roughly halves the elapsed time. Still slower than the bulk approach, but not by nearly as much.
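The variant Jules asked about might look like this: the same single-row insert, but without the pragma, so the inserts ride on the caller's transaction and one COMMIT at the end covers them all (this is a sketch of a procedure not shown above):

    procedure write_each_plain (p_short log_table.short_text%type
                                , p_long log_table.long_text%type)
    is
    begin
        -- no autonomous transaction: the insert joins the caller's transaction
        insert into log_table values (systimestamp, p_short, p_long);
    end write_each_plain;

    -- caller: one commit for the whole batch
    begin
        for r in 1..3456 loop
            fast_log.write_each_plain('SOME TEXT', 'blah blah blah '||to_char(r));
        end loop;
        commit;
    end;

The trade-off is that the log entries are no longer independent of the business transaction: a rollback discards them too, which may or may not be what you want from a logging routine.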
The key thing here is benchmarking. My proof of concept runs about six times faster than Jules's test (I have one index on the table). There are all sorts of possible reasons for that: machine specification, database version (I am using Oracle 11gR1), table structure, and so on. In other words, YMMV.
So the lesson is: first build what suits your application, then benchmark it in your environment. Only consider a different approach if your benchmark shows a serious performance problem. Knuth's warning about premature optimization applies.