How many rows can an SQLite table hold before queries become too time-consuming?

I am creating a simple SQLite database to store sensor readings. The tables will look something like this:

sensors
- id (pk)
- name
- description
- units

sensor_readings
- id (pk)
- sensor_id (fk to sensors)
- value (actual sensor value stored here)
- time (date/time the sensor sample was taken)
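
In SQLite DDL that would be roughly the following (the column types are simplified guesses on my part):

    CREATE TABLE sensors (
        id          INTEGER PRIMARY KEY,
        name        TEXT,
        description TEXT,
        units       TEXT
    );

    CREATE TABLE sensor_readings (
        id        INTEGER PRIMARY KEY,
        sensor_id INTEGER REFERENCES sensors(id),
        value     REAL,   -- actual sensor value
        time      TEXT    -- date/time the sample was taken
    );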

The application will collect about 100,000 sensor readings per month from about 30 different sensors, and I would like to keep all of the sensor readings in the database for as long as possible.

Most queries will be of the form

 SELECT * FROM sensor_readings WHERE sensor_id = x AND time > y AND time < z 

This query usually returns about 100-1000 results.

So the question is: how large can the sensor_readings table grow before the above query becomes too time-consuming (more than a couple of seconds on a standard PC)?

I know one workaround would be to create a separate sensor_readings table for each sensor, but I would like to avoid that if it is not necessary. Are there other ways to optimize this database schema?

+6
sqlite
4 answers

If you are going to use time in your queries, you should add an index on it. That is the only optimization I would suggest based on the information given.

100,000 inserts per month works out to roughly 2.3 per minute, so one more index will not be much of a burden and will speed up your queries. I am assuming that is 100,000 inserts across all 30 sensors, not 100,000 per sensor, but even if I am wrong, about 70 inserts per minute should still be fine.
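
For example, something along these lines (the index name is just a placeholder):

    -- Index on the time column used in the range part of the query.
    CREATE INDEX IF NOT EXISTS idx_sensor_readings_time
        ON sensor_readings (time);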

If performance does become a problem, you can move old data off into a historical table (for example, sensor_readings_old ) and run your queries only against the non-historical table ( sensor_readings ).

That way you still have all the data available without slowing down your regular queries. If you really need the older data you can still get to it, just knowing that those queries may take a while.
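
A rough sketch of what that archiving step could look like (sensor_readings_old is assumed to have the same columns as sensor_readings, and :cutoff is whatever boundary you pick):

    BEGIN;
    INSERT INTO sensor_readings_old
        SELECT * FROM sensor_readings WHERE time < :cutoff;
    DELETE FROM sensor_readings WHERE time < :cutoff;
    COMMIT;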

+4

Are you setting up your indexes right? Apart from that and reading http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html , the only answer is "you have to measure it yourself" - it will depend heavily on the hardware, on whether the database is in memory or on disk, and on whether you wrap your inserts in transactions or not.

Having said that, I have seen noticeable delays after a few tens of thousands of rows, but that was completely unoptimized - from reading around a bit I get the impression that people with 100 thousand rows and the correct indexes, etc., have no problems at all.

+2

SQLite now supports R-tree indexes ( http://www.sqlite.org/rtree.html ), which are ideal if you intend to run many queries over a time range.
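
For a one-dimensional time range it might look something like this (table and column names are just placeholders; this requires SQLite built with the R*Tree module, and note that R-tree coordinates are stored as 32-bit floats by default, which can matter for timestamp precision):

    -- Each reading is stored as a degenerate interval (min = max).
    CREATE VIRTUAL TABLE sensor_readings_rtree USING rtree(
        id,        -- same id as sensor_readings.id
        time_min,
        time_max
    );

    -- A range query then joins back to the main table:
    SELECT r.*
    FROM sensor_readings_rtree rt
    JOIN sensor_readings r ON r.id = rt.id
    WHERE r.sensor_id = :x
      AND rt.time_min >= :y AND rt.time_max <= :z;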

Tom

+1

I know I am coming to this late, but I thought it might be useful to anyone who finds this question later:

SQLite tends to be relatively fast for reads as long as it is only serving one application/user at a time. Concurrency and locking can become a problem with several users or applications accessing it at once, and more robust databases such as MS SQL Server do better in high-concurrency environments.

As others have said, I would definitely index the table if you are concerned about read query speed. For your specific case, I would probably create a single composite index covering both sensor_id and time.
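
For example (the index name is arbitrary):

    -- Composite index matching the query: equality on sensor_id,
    -- then a range scan on time.
    CREATE INDEX IF NOT EXISTS idx_sensor_readings_sensor_time
        ON sensor_readings (sensor_id, time);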

You should also pay attention to write speed. Inserts themselves can be fast, but commits are slow, so you probably want to batch multiple inserts into a single transaction before committing. This is discussed here: http://www.sqlite.org/faq.html#q19
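
A sketch of what that batching looks like (the values here are made up):

    BEGIN;
    INSERT INTO sensor_readings (sensor_id, value, time)
        VALUES (1, 20.4, '2009-03-01 12:00:00');
    INSERT INTO sensor_readings (sensor_id, value, time)
        VALUES (1, 20.6, '2009-03-01 12:01:00');
    -- ... many more inserts ...
    COMMIT;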

+1