I am developing a multithreaded SQLite database application on an Ubuntu virtual machine that has 4 processors. I am using SQLite version 3.7.13. I created a test to verify that multiple threads/connections can read from the database at the same time.
I have two executables. The first executable simply creates a database, creates 1 table in it, inserts 50 rows into that table, and then closes the database. It has nothing to do with multithreading; it just provides a database with rows in it.
The second executable creates several threads that read from the database, waits for them all to complete, and records the total time taken. Each thread does the following:

- creates its own database connection with sqlite3_open_v2(), so that each thread has an individual connection to the database created by the first executable
- performs 100000 SELECTs against the one table in the database (each query selects a single row of the table)
- closes the database connection
When I run this test with SQLITE_OPEN_READWRITE specified as the flags for sqlite3_open_v2 in each thread, I get the following results for the total time to complete all the queries:
1 thread - 0.65 seconds
2 threads - 0.70 seconds
3 threads - 0.76 seconds
4 threads - 0.91 seconds
5 threads - 1.10 seconds
6 threads - 1.28 seconds
7 threads - 1.57 seconds
8 threads - 1.78 seconds
These results were as expected: the times increase only slightly as I add threads (perhaps from context switching and similar overhead), which suggests the reads are mostly being performed in parallel.
However, when I run the same test with SQLITE_OPEN_READWRITE | SQLITE_OPEN_SHAREDCACHE as the flags, I get the following results:
1 thread - 0.67 seconds
2 threads - 2.43 seconds
3 threads - 4.81 seconds
4 threads - 6.60 seconds
5 threads - 8.03 seconds
6 threads - 9.41 seconds
7 threads - 11.17 seconds
8 threads - 12.79 seconds
From these results it is clear that something in shared-cache mode is preventing the reads from running concurrently. I verified that the threads really do run interleaved (thread 4 reads, then thread 8 reads, then thread 2 reads, etc., rather than thread 1 doing all of its reads, then thread 2 doing all of its reads, then thread 3, and so on). However, it seems that each individual read transaction is executed sequentially, or something else is slowing the database down in shared-cache mode.
Why do I see such a large increase in time as I add threads in shared-cache mode, but not without it? Is there a way to fix this so I can still use shared-cache mode?
Thanks for any help; it is much appreciated.