Java JDBC clearBatch() and heap memory

I noticed the following behavior.

I have a file about 3 MB in size, containing several thousand lines. I split the lines apart and create prepared statements from the pieces (about 250,000 statements in total).

What I am doing:

prepareStatement
for each row: addBatch
every 200 rows {
 executeBatch
 clearBatch()
}

at the end

commit()
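
In code, that loop looks roughly like this (a sketch: the table target, the column col1, and using the raw line as the value stand in for the real parsing):

  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.SQLException;
  import java.util.List;

  void load(Connection connection, List<String> lines) throws SQLException {
      connection.setAutoCommit(false);
      PreparedStatement ps = connection.prepareStatement(
          "insert into target (col1) values (?)");  // placeholder table/column
      int count = 0;
      for (String line : lines) {
          ps.setString(1, line);     // value parsed from the line
          ps.addBatch();
          if (++count % 200 == 0) {
              ps.executeBatch();     // send 200 rows to the database
              ps.clearBatch();       // let the driver release the batched rows
          }
      }
      ps.executeBatch();             // flush the remainder
      connection.commit();           // single commit at the very end
  }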

Memory usage climbs to about 70 MB, although there is no out-of-memory error. Is it possible to keep memory usage down and still have transactional behavior (so that everything rolls back if one of the inserts fails)? I was able to reduce memory by calling commit() together with each executeBatch and clearBatch... but a failure then leaves a partial insert of the overall set.

2 answers

Insert the rows into a temp table first, without transactional guarantees. Once everything is loaded, move the whole set into the real table in a single atomic statement: insert into target (select * from temp). Afterwards, drop the temp table.
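
A sketch of that two-phase idea, reusing connection and lines from the question (temp, target and col1 are placeholder names):

  // Phase 1: load everything into the temp table. Committing every batch is
  // harmless here, because only the staging table is touched.
  connection.setAutoCommit(false);
  PreparedStatement ps = connection.prepareStatement(
      "insert into temp (col1) values (?)");
  int count = 0;
  for (String line : lines) {
      ps.setString(1, line);
      ps.addBatch();
      if (++count % 200 == 0) {
          ps.executeBatch();
          ps.clearBatch();
          connection.commit();   // keeps the heap (and the transaction) small
      }
  }
  ps.executeBatch();
  connection.commit();

  // Phase 2: one atomic statement moves the whole set into the real table.
  Statement st = connection.createStatement();
  try {
      st.executeUpdate("insert into target (select * from temp)");
      connection.commit();       // all rows appear in target, or none do
  } catch (SQLException e) {
      connection.rollback();     // target stays untouched on failure
      throw e;
  } finally {
      st.close();                // then drop or truncate temp
  }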


Use JDBC 2.0 batch updates inside a single transaction:

  • On the db connection, turn off auto-commit: connection.setAutoCommit(false)
  • Add your statements to the batch: statement.addBatch(sql_text_here)
  • When all statements are added, execute the batch: statement.executeBatch()
  • If everything succeeded, commit: connection.commit()
  • If anything failed, roll back: connection.rollback()
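
Put together, those steps look roughly like this (a minimal sketch; the table t and the inserted literals are placeholders):

  connection.setAutoCommit(false);              // 1: manual transaction control
  Statement statement = connection.createStatement();
  try {
      statement.addBatch("insert into t values (1)");  // 2: queue statements
      statement.addBatch("insert into t values (2)");
      int[] updateCounts = statement.executeBatch();   // 3: run the whole batch
      connection.commit();                             // 4: everything succeeded
  } catch( SQLException e ) {
      connection.rollback();                           // 5: undo the whole batch
  } finally {
      statement.close();
  }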

You can catch the errors in more detail like this:

  // bError and aiupdateCounts are assumed to be declared in the surrounding code
  catch( BatchUpdateException bue )
  {
    bError = true;                           // flag the failure for later
    aiupdateCounts = bue.getUpdateCounts();  // counts for the statements that ran

    // BatchUpdateException is an SQLException; walk the chain of causes
    SQLException SQLe = bue;
    while( SQLe != null )
    {
      // do exception stuff (message, SQL state, vendor error code)

      SQLe = SQLe.getNextException();
    }
  } // end BatchUpdateException catch
  catch( SQLException SQLe )
  {
    // handle errors raised outside of batch execution

  } // end SQLException catch

Source: http://java.sun.com/developer/onlineTraining/Database/JDBC20Intro/JDBC20.html#JDBC2015
