Python requests: don't wait for the request to complete

In Bash, you can run the command in the background by adding & . How can I do this in Python?

    while True:
        data = raw_input('Enter something: ')
        requests.post(url, data=data)  # Don't wait for it to finish.
        print('Sending POST request...')  # This should appear immediately.
+8
python python-requests
4 answers

I use multiprocessing.dummy.Pool . I create a singleton thread pool at the module level, and then use pool.apply_async(requests.get, [params]) to launch the task.

That call gives me a future, which I can add to a list with other futures indefinitely, until I want to collect all or some of the results.

multiprocessing.dummy.Pool is, against all logic and reason, a THREAD pool and not a process pool.

Example (works on both Python 2 and Python 3, as long as requests is installed):

    from multiprocessing.dummy import Pool
    import requests

    pool = Pool(10)  # Creates a pool with ten threads; more threads = more concurrency.
    # "pool" is a module attribute; you can be sure there will only
    # be one of them in your application,
    # as modules are cached after initialization.

    if __name__ == '__main__':
        futures = []
        for x in range(10):
            futures.append(pool.apply_async(requests.get, ['http://example.com/']))
        # futures is now a list of 10 futures.
        for future in futures:
            print(future.get())  # For each future, wait until the request is
                                 # finished and then print the response object.

The requests will be executed concurrently, so running all ten of them will take no longer than the slowest single request. This strategy uses only one CPU core, but that should not be a problem, because almost all of the time will be spent waiting for I/O.
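If you never need the responses at all, you can also skip collecting the futures entirely. A minimal fire-and-forget sketch along the same lines (here `do_request` is a stand-in for `requests.get`, so the example runs without a network; the `close`/`join` at the end exists only so the example can verify its own work):

```python
from multiprocessing.dummy import Pool  # a THREAD pool, despite the name

results = []

def do_request(url):
    # Stand-in for requests.get(url); swap in the real call.
    results.append(url)

pool = Pool(10)

for x in range(10):
    # apply_async returns immediately; we deliberately ignore the
    # AsyncResult, so the caller never blocks waiting for completion.
    pool.apply_async(do_request, ['http://example.com/%d' % x])

pool.close()  # no more tasks will be submitted
pool.join()   # only needed here so we can check that everything ran

print(len(results))  # → 10
```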

+12

According to the documentation, you need to use another library:

Blocking or non-blocking?

With the default transport adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you need more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.

If you are concerned about the use of blocking IO, there are many projects out there that combine Requests with one of Python's asynchronicity frameworks.

Two excellent examples: grequests and requests-futures .
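If you would rather not add a dependency, the standard library's concurrent.futures (built into Python 3; a backport exists for Python 2) gives you the same future-based pattern that requests-futures wraps. A sketch, where `fetch` is a hypothetical stand-in for `requests.get` so the example runs offline:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for requests.get(url).text; swap in the real call.
    return 'response from %s' % url

executor = ThreadPoolExecutor(max_workers=10)

# submit() returns immediately with a Future; fetch() runs in the background.
future = executor.submit(fetch, 'http://example.com/')

# ... do other work here, without waiting ...

# Only block when (and if) you actually want the result.
print(future.result())  # → response from http://example.com/
executor.shutdown()
```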

+1

If you can put the code that should run separately into a separate Python program, here is a possible solution based on subprocess.
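A minimal sketch of the subprocess idea: subprocess.Popen returns as soon as the child process has started, so the parent continues immediately and only waits if it cares about the outcome. The inline `-c` script here is a placeholder for your real worker program:

```python
import subprocess
import sys

# Popen returns as soon as the child is started; the parent does not wait.
proc = subprocess.Popen(
    [sys.executable, '-c', 'print("posting in the background")'],
    stdout=subprocess.PIPE,
)

# The parent continues immediately...
print('Parent keeps going')

# ...and can optionally collect the child's output later.
out, _ = proc.communicate()
print(out.decode().strip())
```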

Otherwise, you may find this question and its related answer useful: the trick is to use the threading library to start a separate thread that will execute the separated task.
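A sketch of that thread-based trick, with a placeholder `send` function standing in for `requests.post` so it runs without a network (the final `join` exists only so the example can verify the call happened; in fire-and-forget use you would simply not join):

```python
import threading

sent = []

def send(url, data):
    # Placeholder for requests.post(url, data=data).
    sent.append((url, data))

thread = threading.Thread(target=send,
                          args=('http://example.com/', 'payload'))
thread.daemon = True  # do not keep the interpreter alive for this thread
thread.start()        # returns immediately; send() runs in the background

# ... the main program continues here ...

thread.join()  # only joined so the example can check its work
print(sent)
```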

A caveat with both approaches is the number of items (and therefore the number of threads) you have to manage. If the number of items in the parent is too large, you could pause before each new batch of items until at least some threads have finished, but I believe that kind of control is nontrivial.
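That batching bookkeeping is exactly what a bounded executor handles for you: max_workers caps how many threads run at once, and the remaining tasks wait in an internal queue. A sketch with a placeholder task (concurrent.futures is standard in Python 3):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def task(n):
    # Placeholder for the real work, e.g. requests.get(...)
    return n * n

# At most 4 threads run at a time; the other 16 tasks queue up
# automatically, so there is no manual batch management.
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(task, n) for n in range(20)]
    results = sorted(f.result() for f in as_completed(futures))

print(results[:5])  # → [0, 1, 4, 9, 16]
```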

For a more sophisticated solution, you can use an actor-based approach; I have not used this library myself, but I think it could help in this case.

+1

Here is a hacky way to do it:

    import requests

    try:
        requests.get("http://127.0.0.1:8000/test/", timeout=0.0000000001)
    except requests.exceptions.Timeout:
        # The tiny timeout fires almost immediately (as ConnectTimeout or
        # ReadTimeout, both subclasses of Timeout), so the call returns
        # without blocking -- but the request may never actually be sent.
        pass
-1
