An addendum for using requests (which is built on urllib3):
import requests

def download_web_page_with_requests(url_website):
    r = requests.get(url_website)
    return r.text
It is much simpler than anything else and properly handles SSL verification regardless of the platform's own certificate list. If certifi is installed, requests will use it automatically. Otherwise, it quietly falls back to a more limited, possibly older set of built-in root certificates. If you want to make sure certifi is used, you can do this:
import certifi

r = requests.get(url_website, verify=certifi.where())
Please note that the above code does not perform any error checking, which you probably should do. requests.get() may throw a number of exceptions for invalid URLs, unreachable hosts, communication errors, and failed certificate verification, so you should be prepared to catch them and handle them. If the request successfully reaches the server but the server returns a non-OK status code (for example, for a page that does not exist), no exception is raised, so you also want to check that r.status_code == 200.
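For completeness, here is a minimal sketch of what that error handling could look like; the function name, the timeout value, and the print-based reporting are my own choices for illustration, not part of the original answer:

import requests

def download_web_page_checked(url_website):
    # Hypothetical wrapper showing one way to handle the failure modes above.
    try:
        r = requests.get(url_website, timeout=10)
    except requests.exceptions.RequestException as e:
        # Base class covering invalid URLs, connection failures, timeouts,
        # and SSL certificate verification errors.
        print("Request failed:", e)
        return None
    if r.status_code != 200:
        # The server answered, but not with an OK status (e.g. 404).
        print("Server returned status", r.status_code)
        return None
    return r.text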