Say I have a function that writes to a file, and another function that repeatedly reads from that same file. Each of them works fine in its own thread. (In reality I read/write registers over MDIO, so the two operations cannot run simultaneously, only one at a time, but for simplicity assume it is a file.)
When I run the write function on its own, it finishes quite quickly. However, when I run it in a thread and acquire a lock first, it becomes very slow. Is that because the second thread (the read function) is polling to acquire the lock? Is there a way around this?
Currently I just use a plain RLock, but I am open to any change that improves performance.
Edit: here is a simplified version of what happens. The read thread runs essentially all the time, but occasionally a separate thread triggers a call to load. Compared with running load by itself from the command prompt, running it inside the thread is at least 3 times slower.
write thread:
import usbmpc

def load(self, lock):
    # Hold the lock for the whole transfer so reads cannot interleave.
    with lock:
        # 'with' ensures the file is closed even if usbmpc.write raises.
        with open('file.txt', 'r') as f:
            for x in f.readlines():
                usbmpc.write(x)
read thread:
import usbmpc

def read(self, lock):
    addr = START_ADDR  # START_ADDR and BUF_SIZE are defined elsewhere
    while True:
        # Take the lock only for the single register read,
        # then release it immediately.
        with lock:
            data = usbmpc.read(addr)
        addr += 4
        if addr > BUF_SIZE:
            addr = START_ADDR
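For context, here is a minimal, self-contained sketch of the effect I suspect: a reader that releases a lock and immediately re-acquires it in a tight loop can starve the writer, and inserting a short sleep after each release gives the writer a chance to run. This is not my actual code; the usbmpc calls are replaced by counter increments, and the sleep duration is purely illustrative.

```python
import threading
import time

def reader(lock, stop, counter):
    # Tight loop: release the lock and immediately try to re-acquire it.
    while not stop.is_set():
        with lock:
            counter["reads"] += 1
        # Yield point: without this sleep, the writer may rarely win
        # the race to acquire the lock, making it run much slower.
        time.sleep(0.0001)

def writer(lock, n, counter):
    # Simulates the load() call: n short writes under the lock.
    for _ in range(n):
        with lock:
            counter["writes"] += 1

lock = threading.Lock()
stop = threading.Event()
counter = {"reads": 0, "writes": 0}

t = threading.Thread(target=reader, args=(lock, stop, counter))
t.start()
writer(lock, 1000, counter)
stop.set()
t.join()
```

With the sleep removed, the writer's loop takes dramatically longer on my understanding of CPython's scheduling, because the reader re-acquires the lock before the writer wakes up.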