I am sure I am missing something fairly obvious, but I cannot for the life of me stop my pysqlite scripts crashing with a "database is locked" error. I have two scripts: one loads data into the database and one reads data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I have the timeout on both scripts set to 30 seconds:
cx = sqlite.connect("database.sql", timeout=30.0)
And I think I can see some evidence of the timeouts, in that I occasionally get what looks like a timing stamp (e.g. 0.12343827e-06 0.1; how do I stop that being printed?) dumped in the middle of my curses-formatted output screen, yet the delay never gets remotely close to the 30-second timeout, and one of the two scripts still keeps crashing again and again. I am running RHEL 5.4 on a 64-bit IBM HS21 blade, and have heard some mention of multithreading issues, though I am not sure whether that is relevant here. The packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of Red Hat's official provisions is not a great option for me: possible, but undesirable because of the environment as a whole.
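For what it is worth, here is a minimal sketch of how I would expect the timeout to behave, written against the stdlib sqlite3 module (my setup uses the older pysqlite 1.x "sqlite" module, so names differ), with a hypothetical execute_with_retry helper as a fallback for when the lock error fires anyway:

import sqlite3
import time

def connect(path="database.sql"):
    # timeout=30.0 should make SQLite retry internally for up to 30 s
    # before raising "database is locked"
    return sqlite3.connect(path, timeout=30.0)

def execute_with_retry(cx, sql, params=(), retries=5, delay=0.5):
    # Hypothetical fallback helper: retry a bounded number of times when
    # the timeout alone does not prevent the lock error.
    for attempt in range(retries):
        try:
            return cx.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or attempt == retries - 1:
                raise
            time.sleep(delay)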
I previously had autocommit=1 in both scripts, but have since disabled it in both; I am now calling cx.commit() in the inserting script and not committing at all in the select script. Ultimately, since only one script ever actually makes modifications, I do not really see why this locking should happen at all. I have noticed that it gets significantly worse as the database grows: it was recently at 13 MB with 3 equally sized tables, about one day's worth of data, and creating a new file improved things considerably, which seems understandable. But in the end the timeout simply does not seem to be obeyed.
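To illustrate, this is roughly the shape of the commit handling in my inserting script; a sketch using the stdlib sqlite3 module, with a hypothetical samples table:

import sqlite3

cx = sqlite3.connect("database.sql", timeout=30.0)

def insert_rows(rows):
    # one short transaction per batch, committed immediately so the
    # write lock is released as soon as possible
    try:
        cx.executemany("INSERT INTO samples (ts, value) VALUES (?, ?)", rows)
        cx.commit()
    except sqlite3.OperationalError:
        cx.rollback()  # drop the partial transaction so a retry starts clean
        raise

(One thing I am not sure about: with autocommit off, I believe the old pysqlite may also open a transaction for the SELECTs in the reading script, which would hold a shared lock until that script commits or resets its cursor; if so, that might explain why a pure reader can still block the writer.)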
Any pointers are greatly appreciated.
EDIT: since asking, I have restructured my code slightly to use a signal to periodically write between 0 and 150 updates in a single transaction every 5 seconds. This has significantly reduced the occurrences of locking, to less than one per hour as opposed to roughly once per minute. I suppose I could go further by making sure the times at which I write data are offset by a few seconds from the reads in the other script, but fundamentally I am just working around the problem as I perceive it, which makes the timeout seem unnecessary, and that still does not feel right.
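In case it is useful, here is a sketch of that workaround, assuming the same hypothetical samples table and Unix signals (I am on RHEL, so SIGALRM is available):

import signal
import sqlite3

cx = sqlite3.connect("database.sql", timeout=30.0)
pending = []  # buffered (ts, value) rows; 0 to 150 accumulate per interval

def flush(signum, frame):
    # write the whole buffer in one short transaction, then re-arm the timer
    global pending
    if pending:
        rows, pending = pending, []
        cx.executemany("INSERT INTO samples (ts, value) VALUES (?, ?)", rows)
        cx.commit()
    signal.alarm(5)

signal.signal(signal.SIGALRM, flush)
signal.alarm(5)

Committing from inside a signal handler is a bit fragile, so checking the elapsed time in the main loop instead would achieve the same batching.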