Aiohttp, Asyncio: RuntimeError: event loop closed

I have two scripts: scraper.py and db_control.py. In scraper.py, I have something like this:

    ...
    def scrap(category, field, pages, search, use_proxy, proxy_file):
        ...
        loop = asyncio.get_event_loop()
        to_do = [get_pages(url, params, conngen) for url in urls]
        wait_coro = asyncio.wait(to_do)
        res, _ = loop.run_until_complete(wait_coro)
        ...
        loop.close()
        return [x.result() for x in res]
    ...

And in db_control.py:

    from scraper import scrap
    ...
    while new < 15:
        data = scrap(category, field, pages, search, use_proxy, proxy_file)
        ...
    ...

In theory, the scraper should be run over and over, an unknown number of times, until enough data has been collected. But when new does not reach 15 right away, so scrap() has to be called a second time, this error occurs:

  File "/usr/lib/python3.4/asyncio/base_events.py", line 293, in run_until_complete self._check_closed() File "/usr/lib/python3.4/asyncio/base_events.py", line 265, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed 

But the scripts work fine if I run scrap() only once. So I think there is some problem with recreating loop = asyncio.get_event_loop(); I tried this one, but nothing changed. How can I fix this? Of course, these are just fragments of my code; if you think the problem may be somewhere else, the full code is available here.

1 answer

The methods run_until_complete, run_forever, run_in_executor, create_task, and call_at explicitly check the loop and raise an exception if it is closed.

Quote from the docs for BaseEventLoop.close():

It is idempotent and irreversible
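
A small, self-contained sketch (not part of the scraper code) illustrates both properties. It uses the Python 3.4 coroutine decorator from the question's environment; on 3.5+ an async def coroutine behaves the same way:

    import asyncio

    @asyncio.coroutine              # Python 3.4 style; "async def" on 3.5+
    def nothing():
        return 42

    loop = asyncio.new_event_loop()
    print(loop.run_until_complete(nothing()))   # -> 42

    loop.close()
    loop.close()   # idempotent: closing an already-closed loop does nothing

    try:
        loop.run_until_complete(nothing())      # irreversible: the loop cannot be reused
    except RuntimeError as exc:
        print(exc)                              # "Event loop is closed"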


Unless you have a (good) reason to close the loop, you can simply omit that line:

    def scrap(category, field, pages, search, use_proxy, proxy_file):
        # ...
        loop = asyncio.get_event_loop()
        to_do = [get_pages(url, params, conngen) for url in urls]
        wait_coro = asyncio.wait(to_do)
        res, _ = loop.run_until_complete(wait_coro)
        # ...
        # loop.close()
        return [x.result() for x in res]
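With loop.close() removed, every call to scrap() gets the same default loop back from asyncio.get_event_loop(), so the while new < 15 loop in db_control.py can call it as many times as needed; the loop is simply left open between calls.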

If you want a brand-new event loop on every call, create it manually and set it as the default:

    def scrap(category, field, pages, search, use_proxy, proxy_file):
        # ...
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        to_do = [get_pages(url, params, conngen) for url in urls]
        wait_coro = asyncio.wait(to_do)
        res, _ = loop.run_until_complete(wait_coro)
        # ...
        return [x.result() for x in res]
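If you do want each call to clean up after itself, a possible variant (a sketch using the question's names get_pages, conngen, urls, params; not part of the original answer) closes the fresh loop in a finally block. This only works if nothing tied to a previous loop, such as an aiohttp connector, is reused across calls:

    def scrap(category, field, pages, search, use_proxy, proxy_file):
        # ...
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            to_do = [get_pages(url, params, conngen) for url in urls]
            res, _ = loop.run_until_complete(asyncio.wait(to_do))
            return [x.result() for x in res]
        finally:
            loop.close()   # safe here: the next call builds its own new loop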
