If the CSV file is 2 GB, then a plain in-memory database will need more than 4 GB of heap memory. The exact memory requirements depend largely on how redundant the data is. If the same values appear over and over again, the database will need less memory, because common objects are re-used (no matter whether it is a string, a long, a timestamp, ...).
Note that when using CREATE TABLE AS SELECT, the settings LOCK_MODE=0, UNDO_LOG=0, and LOG=0 are not needed. Also, CACHE_SIZE does not help when using the mem: prefix (but it does help for the in-memory file systems).
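As an illustration, loading the CSV in a single statement could look like this, using H2's built-in CSVREAD function (the file name data.csv and the table name test are placeholders):

    CREATE TABLE test AS SELECT * FROM CSVREAD('data.csv');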
I suggest first trying the in-memory file system (memFS: instead of mem:), which is slightly slower than mem: but usually requires less memory:
jdbc:h2:memFS:test;CACHE_SIZE=65536
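For example, here is a minimal Java sketch that connects to this URL and loads the CSV; the file name data.csv, the table name, and the sa / empty-password credentials are assumptions:

    // Requires the H2 jar on the classpath.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LoadCsv {
        public static void main(String[] args) throws Exception {
            // Connect to the in-memory file system database.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:h2:memFS:test;CACHE_SIZE=65536", "sa", "");
                 Statement stat = conn.createStatement()) {
                // CSVREAD is H2's built-in CSV import function;
                // 'data.csv' is a placeholder for the actual file.
                stat.execute(
                    "CREATE TABLE test AS SELECT * FROM CSVREAD('data.csv')");
            }
        }
    }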
If that is not enough, try the compressed in-memory mode (memLZF:), which is slower again but uses even less memory:
jdbc:h2:memLZF:test;CACHE_SIZE=65536
If that is still not enough, I suggest you try the regular persistent (on-disk) mode and see how fast it is:
jdbc:h2:~/data/test;CACHE_SIZE=65536
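A quick way to compare the three URLs is to wrap the import statement in a simple timer, for example (reusing the Statement from the sketch above; names are placeholders):

    long start = System.nanoTime();
    stat.execute("CREATE TABLE test AS SELECT * FROM CSVREAD('data.csv')");
    System.out.println("Load took "
        + (System.nanoTime() - start) / 1_000_000 + " ms");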