You wait for each request to complete before starting the next one, so you get the overhead of an event loop without any of its benefits.
Try the following:
import asyncio
import functools
import time

import requests

async def do_checks():
    loop = asyncio.get_running_loop()
    # Offload each blocking requests.get call to the default thread pool executor.
    futures = [
        loop.run_in_executor(
            None, functools.partial(requests.get, "http://google.com", timeout=3)
        )
        for _ in range(10)
    ]
    # Print the status codes as the requests finish, in completion order.
    for req in asyncio.as_completed(futures):
        resp = await req
        print(resp.status_code)

# Version A: ten requests running concurrently in worker threads driven by asyncio.
ts = time.time()
asyncio.run(do_checks())
te = time.time()
print("Version A: " + str(te - ts))

# Version B: the same ten requests made sequentially.
ts = time.time()
for i in range(10):
    r = requests.get("http://google.com", timeout=3)
    print(r.status_code)
te = time.time()
print("Version B: " + str(te - ts))
This is what I get when I run it:
$ python test.py
200
...
Version A: 0.43438172340393066
200
...
Version B: 1.6541109085083008
Much faster, but in reality this just spawns threads and waits for the HTTP library to finish its work; you do not need asyncio for that.
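For example, here is a minimal sketch of the same idea without asyncio, using concurrent.futures directly (same URL and timeout as the snippet above):

import time
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

ts = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    # Submit all ten blocking requests to the thread pool at once.
    futures = [
        pool.submit(requests.get, "http://google.com", timeout=3)
        for _ in range(10)
    ]
    # Print status codes in completion order.
    for future in as_completed(futures):
        print(future.result().status_code)
print("Thread pool version: " + str(time.time() - ts))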
You might want to check out aiohttp, as it was created for use with asyncio. requests is a fabulous library, but it was not made for asyncio.
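For comparison, here is a minimal sketch of the same ten-request check with aiohttp (assuming aiohttp 3.x is installed; the URL and timeout mirror the snippet above), which runs the requests concurrently without any threads:

import asyncio
import time

import aiohttp

async def fetch(session, url):
    # A single non-blocking GET that returns only the status code.
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=3)) as resp:
        return resp.status

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, "http://google.com") for _ in range(10)]
        # Run all ten requests concurrently on the event loop itself.
        for status in await asyncio.gather(*tasks):
            print(status)

ts = time.time()
asyncio.run(main())
print("aiohttp version: " + str(time.time() - ts))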