How can I use Oracle without transactions?

MySQL has the MyISAM table type, which does not support transactions. Does Oracle have something similar? I would like to create a database for writes (for logging) which should be very fast (it will store a lot of data) and does not need transactions.

8 answers

Transactions are key to how SQL databases operate. They are certainly fundamental to Oracle. We cannot write to Oracle tables without eventually issuing a COMMIT, and lo! there is a transaction.

Oracle allows us to specify tables as NOLOGGING, in which case they do not generate redo. This is intended only for bulk loads (with the INSERT /*+ APPEND */ hint), and the advice is to switch back to LOGGING as soon as possible, because data that is not logged is not recoverable. And if you don't want to recover it, why write it in the first place?
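As a sketch of that pattern (STAGING_TABLE is a hypothetical source of pre-collected rows; direct-path loading only applies to bulk INSERT ... SELECT statements, so check the behaviour on your version):

```sql
alter table log_table nologging;

-- Direct-path insert: writes above the high-water mark and, on a
-- NOLOGGING table, generates minimal redo. A conventional single-row
-- INSERT VALUES would still be logged as normal.
insert /*+ APPEND */ into log_table
select systimestamp, stg.short_text, stg.long_text
from   staging_table stg;

commit;

alter table log_table logging;
```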

An alternative approach is to buffer records in memory and then write them out with bulk inserts. This is pretty fast.

Here is a simple log table and a proof-of-concept package:

    create table log_table
        (ts timestamp(6)
         , short_text varchar2(128)
         , long_text varchar2(4000)
        )
    /

    create or replace package fast_log is
        procedure init;
        procedure flush;
        procedure write (p_short log_table.short_text%type
                         , p_long log_table.long_text%type);
    end fast_log;
    /

Log entries are stored in a PL/SQL collection, an in-memory structure with session scope. The INIT() procedure initializes the buffer. The FLUSH() procedure writes the buffer's contents to LOG_TABLE. The WRITE() procedure inserts a record into the buffer and, once the buffer holds the required number of records, calls FLUSH().

    create or replace package body fast_log is

        type log_buffer is table of log_table%rowtype;
        session_log log_buffer;
        write_limit constant pls_integer := 1000;
        write_count pls_integer;

        procedure init is
        begin
            session_log := log_buffer();
            session_log.extend(write_limit);
            write_count := 0;
        end init;

        procedure flush is
        begin
            dbms_output.put_line('FLUSH::'
                    ||to_char(systimestamp,'HH24:MI:SS.FF6')
                    ||'::'||to_char(write_count));
            forall i in 1..write_count
                insert into log_table values session_log(i);
            init;
        end flush;

        procedure write
            (p_short log_table.short_text%type
             , p_long log_table.long_text%type)
        is
            pragma autonomous_transaction;
        begin
            write_count := write_count + 1;
            session_log(write_count).ts := systimestamp;
            session_log(write_count).short_text := p_short;
            session_log(write_count).long_text := p_long;
            if write_count = write_limit then
                flush;
            end if;
            commit;
        end write;

    begin
        init;
    end fast_log;
    /

The AUTONOMOUS_TRANSACTION pragma is used in the logging procedure so that the COMMIT happens without affecting the surrounding transaction that triggered the flush.

The call to DBMS_OUTPUT.PUT_LINE() is there to track progress. So, let's see how fast it goes...

    SQL> begin
      2      fast_log.flush;
      3      for r in 1..3456 loop
      4          fast_log.write('SOME TEXT', 'blah blah blah '||to_char(r));
      5      end loop;
      6      fast_log.flush;
      7  end;
      8  /
    FLUSH::12:32:22.640000::0
    FLUSH::12:32:22.671000::1000
    FLUSH::12:32:22.718000::1000
    FLUSH::12:32:22.749000::1000
    FLUSH::12:32:22.781000::456

    PL/SQL procedure successfully completed.

    SQL>

Hmmm, 3456 records in 0.12 seconds, that's not too shabby. The main drawback with this approach is the need to flush the buffer to write out any remaining entries; that's a pain at the end of a session, for instance. If something causes the server to crash, unflushed entries are lost. The other problem with working in memory is that it consumes memory (durrrr), so we cannot make the cache too large.
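One way to mitigate the end-of-session flush problem is a logoff trigger. This is only a sketch, assuming the FAST_LOG package above is in the same schema; the trigger is made autonomous so it can commit the flushed rows itself:

```sql
create or replace trigger trg_flush_log
before logoff on schema
declare
    pragma autonomous_transaction;
begin
    fast_log.flush;   -- write out any rows still sitting in the buffer
    commit;
end;
/
```

This only helps for clean logoffs, of course; a server crash still loses whatever is in the buffer.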

For comparison, I added a procedure to the package that inserts a single record directly into LOG_TABLE each time it is called, again using an autonomous transaction:

    procedure write_each
        (p_short log_table.short_text%type
         , p_long log_table.long_text%type)
    is
        pragma autonomous_transaction;
    begin
        insert into log_table
        values ( systimestamp, p_short, p_long );
        commit;
    end write_each;

Here are its timings:

    SQL> begin
      2      fast_log.flush;
      3      for r in 1..3456 loop
      4          fast_log.write_each('SOME TEXT', 'blah blah blah '||to_char(r));
      5      end loop;
      6      fast_log.flush;
      7  end;
      8  /
    FLUSH::12:32:44.157000::0
    FLUSH::12:32:44.610000::0

    PL/SQL procedure successfully completed.

    SQL>

Wall-clock timings are notoriously unreliable, but the batch approach is 2-3 times faster than inserting records one at a time. Even so, I could execute more than three thousand discrete transactions in under half a second, on a (far from top-of-the-range) laptop. So the question is: how likely is it that logging will be your bottleneck?


To avoid any misunderstanding:

@JulesLt posted his answer while I was working on my proof of concept. While there are similarities between our views, I think the differences in the suggested workaround make this worth posting anyway.


"What time is it for write_each without an autonomous, but the only thing to accomplish at the end? My timing is unimportant - that swelling insert is a big win."

My timings suggest something slightly different. Replacing the per-record COMMIT with a single COMMIT at the end roughly halves the elapsed time. That is still slower than the bulk approach, but not by nearly as much.
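For reference, the variant in question is just the per-row insert with the pragma and commit removed, leaving the caller to commit once at the end. This is my sketch of it, not Jules's exact code:

```sql
procedure write_simple
    (p_short log_table.short_text%type
     , p_long log_table.long_text%type)
is
begin
    -- No autonomous transaction and no commit here: the rows stay
    -- part of the caller's transaction until the caller commits.
    insert into log_table
    values ( systimestamp, p_short, p_long );
end write_simple;

-- Caller:
-- begin
--     for r in 1..3456 loop
--         fast_log.write_simple('SOME TEXT', 'blah blah blah '||to_char(r));
--     end loop;
--     commit;   -- a single commit at the end
-- end;
```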

The key thing here is benchmarking. My proof of concept runs about six times faster than Jules's test (mine has one index on the table). There are all sorts of reasons why this might be: machine spec, database version (I'm using Oracle 11gR1), table structure, and so on. In other words, YMMV.

So the lesson is: first decide what is appropriate for your application, then benchmark it in your environment. Only consider a different approach if your benchmark shows a serious performance problem. Knuth's warning about premature optimization applies.


The closest thing would be to create a NOLOGGING tablespace and create the table inside it with the NOLOGGING option - although that only takes effect for bulk operations (e.g. the INSERT /*+ APPEND */ hint).

This removes the REDO overhead, at the cost of losing integrity and data if the database goes down.

I don't know that it would actually be "faster", and you should also consider concurrency (if you have many processes trying to write to the same table, you may be better off with transactions that write pending changes to the redo logs than with everyone trying to update the "real" table).

I haven't really researched NOLOGGING, though - I've rarely got to the point where the application bottleneck was INSERT speed, and when I have, it was the cost of updating the indexes, not the table, that was the problem.

I just did a quick test on my rather underpowered development database (with REDO enabled). Using an autonomous transaction per row - so each row starts a new transaction and ends with a commit - I can write and commit over 1000 rows to an indexed log table in about 1 second, against about 0.875 seconds for 1000 inserts with no commits.

Doing the 1000-row insert in one hit using a bulk operation takes a fraction of a second, so if you can batch up your log writes, do so.

One other thought: would an external table do the job - that is, write to a log file, which is then mounted as an external table in Oracle when/if you need to read it?
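A minimal sketch of the external-table idea (the directory path, file name, and pipe delimiter are all assumptions; the application simply appends delimited lines to app_log.txt, and Oracle reads them on demand):

```sql
create or replace directory log_dir as '/var/log/myapp';

create table log_table_ext (
    ts          varchar2(32)
  , short_text  varchar2(128)
  , long_text   varchar2(4000)
)
organization external (
    type oracle_loader
    default directory log_dir
    access parameters (
        records delimited by newline
        fields terminated by '|'
    )
    location ('app_log.txt')
)
reject limit unlimited;

-- Reading it is just a query; no transaction is involved in the writes.
-- select * from log_table_ext where short_text = 'ERROR';
```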


My experience is that logging is best done to a flat file. My view is that logs are usually not particularly important - UNTIL something goes wrong, at which point they become critical. Because of that, I don't want my logging managed transactionally. If I have to roll back a transaction because of a problem, I really don't want the logging data rolled back with it, because that's exactly what I'm going to use to work out what the problem was. Besides, how do you log that there is a problem connecting to the database if the log is stored in the database you can't connect to?
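For completeness, here is a rough sketch of flat-file logging from inside the database using UTL_FILE (LOG_DIR is an assumed directory object the DBA has created and granted):

```sql
create or replace procedure file_log (p_text in varchar2) is
    fh utl_file.file_type;
begin
    -- 'a' opens the file for appending. The write happens immediately
    -- and survives any rollback of the calling transaction.
    fh := utl_file.fopen('LOG_DIR', 'app.log', 'a');
    utl_file.put_line(fh, to_char(systimestamp, 'YYYY-MM-DD HH24:MI:SS.FF')
                          || ' ' || p_text);
    utl_file.fclose(fh);
end file_log;
/
```

Note this still depends on a working database session, so it doesn't solve the "can't connect" case; for that, log from the application side.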

Share and enjoy.


"which should be very fast"

There is a tradeoff between fast and recoverable.

In Oracle, recoverability is achieved through the redo log files. Every time you commit, the log writer makes a synchronous call to write the outstanding changes to those files. By synchronous, I mean it waits for the file system to acknowledge that the write succeeded before the commit is reported as successful.

If you do a lot of logging (especially from many sessions at the same time) and each log row is committed independently (e.g. via autonomous transactions), this can become a bottleneck.

If you don't need that level of recoverability (i.e. you can afford to lose the last few lines of your log data in the event of a serious failure), look at the NOWAIT option for COMMIT.
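The asynchronous commit options (available from 10gR2 onwards; check the commit-related parameters for your exact version) look like this:

```sql
-- Per-statement: don't wait for the log writer to confirm the redo
-- write, and let it batch the redo. Fast, but a commit acknowledged
-- this way can be lost if the instance crashes at the wrong moment.
commit write batch nowait;

-- Or set the default behaviour for the whole session:
alter session set commit_write = 'BATCH,NOWAIT';
```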

If you cannot afford to lose anything, then your best bet is REALLY fast storage (which might be a battery-backed cache).


What I would do in a similar case is write the logs to a file (appending to a file is probably the fastest way to store logs), and then batch-insert those logs into the database at regular intervals. Unless, of course, inserting directly into the database is fast enough... but you have to test...
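The "regular intervals" part can be a scheduler job. A sketch, where LOAD_LOGS_FROM_FILE is a hypothetical procedure you would write to read the file and bulk-insert its rows:

```sql
begin
    dbms_scheduler.create_job(
        job_name        => 'load_logs',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'load_logs_from_file',
        repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',  -- every 5 minutes
        enabled         => true);
end;
/
```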


This sounds like a solution in search of a problem.

Have you benchmarked it? Is Oracle fast enough for you as it is? Transaction management is built into the way Oracle works, and trying to get around it sounds like creating work for yourself.

It sounds like you have decided transaction management is the problem without knowing whether there is a problem at all. What happens later when you have multiple writers on the table? Or readers blocking writers?


PRAGMA AUTONOMOUS_TRANSACTION

This will let you write and commit the log record without affecting the surrounding transaction. Logging is one of the few acceptable uses for autonomous transactions. The pragma does what it says: it lets you write a PL/SQL function or procedure that does its job without affecting whatever transaction it may or may not be part of. It is "autonomous".

au·ton·o·mous 1. (of a country or region) Having self-government. 2. Acting independently or having the freedom to do so: "an autonomous school board committee".

From the Oracle docs:

The AUTONOMOUS_TRANSACTION pragma changes the way a subprogram works within a transaction. A subprogram marked with this pragma can perform SQL operations and commit or roll back those operations, without committing or rolling back the data in the main transaction.

    CREATE OR REPLACE FUNCTION fnc_log (p_log_text IN VARCHAR2)
    RETURN NUMBER
    IS
        PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
        -- Your brief logging code goes here
        -- (don't abuse the evil feature that is autonomous transactions).
        COMMIT;
        RETURN 1;
    END;
    /

Another option, if you need extremely high performance, is to look at the Oracle TimesTen In-Memory database: http://www.oracle.com/technology/products/timesten/index.html


Source: https://habr.com/ru/post/1315625/

