I know that, because of the GIL, Python threads can only execute bytecode one at a time, so why does the threading library provide locks? I assume that race conditions cannot occur if only one thread executes at a time.
The library provides locks, conditions, and semaphores. Is their sole purpose to synchronize execution?
Update:
I did a little experiment:
from threading import Thread
from multiprocessing import Process

num = 0

def f():
    global num
    num += 1

def thread(func):
    # return Process(target=func)
    return Thread(target=func)

if __name__ == '__main__':
    t_list = []
    for i in xrange(1, 100000):
        t = thread(f)
        t.start()
        t_list.append(t)
    for t in t_list:
        t.join()
    print num
Basically, I start ~100k threads that each increment num by 1. The result was 99993.
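For comparison, here is a sketch of the same experiment guarded by a threading.Lock (my own variant, rewritten in Python 3 syntax with fewer threads doing more increments each); with the lock held around the increment, the final count is always exact:

```python
from threading import Thread, Lock

num = 0
lock = Lock()

def f():
    global num
    for _ in range(1000):
        with lock:  # serialize the read-modify-write on num
            num += 1

t_list = [Thread(target=f) for _ in range(50)]
for t in t_list:
    t.start()
for t in t_list:
    t.join()
print(num)  # 50000: no increments are lost when the update is locked
```

Without the `with lock:` line, the same program can print a number slightly below 50000, reproducing the effect in the experiment above.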
a) How can the result not be 99999 if the GIL synchronizes execution and excludes race conditions? b) Is it even possible to start 100k OS threads?
Update 2 after seeing the answers:
If the GIL really does not make even a simple operation such as an increment atomic, what is the point of having it? It does not help with the hard concurrency problems, so why was it done this way? I have heard there are use cases involving C extensions; can anyone explain this?
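To see concretely why the increment is not atomic (this disassembly is my own illustration, not part of the original post), one can inspect the bytecode; the exact opcode names vary by CPython version, but the increment always expands to separate load, add, and store instructions:

```python
import dis

num = 0

def f():
    global num
    num += 1  # compiles to load / add / store: not one atomic step

# Print each bytecode instruction of f; the interpreter can switch
# threads between any two of these, which is how increments get lost.
for ins in dis.get_instructions(f):
    print(ins.opname)
```

A thread can be preempted after the LOAD_GLOBAL but before the STORE_GLOBAL, so two threads can both read the same old value and one increment is lost, GIL notwithstanding.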
python multithreading python-multiprocessing gil
dani-h