Catch a specific HTTP error in Python

I want to catch a specific HTTP error, not the whole family of HTTP errors. What I tried to do:

import urllib2

try:
    urllib2.urlopen("some url")
except urllib2.HTTPError:
    <whatever>

but this catches every HTTP error, and I only want to catch the case where the specified web page does not exist (the HTTP error is most likely 404). I don't know how to restrict the handler to 404 only and let the system run the default handler for everything else. Any suggestions?

+55
python urllib2 urllib
Jul 07 '10 at 8:25
3 answers

Just catch urllib2.HTTPError, handle it, and if it is not a 404 error, use raise to re-raise the exception.

See the Python Tutorial.

So you can do:

import urllib2

try:
    urllib2.urlopen("some url")
except urllib2.HTTPError as err:
    if err.code == 404:
        <whatever>
    else:
        raise
+90
Jul 07 '10 at 9:14

For Python 3.x

import urllib.request
import urllib.error

try:
    urllib.request.urlretrieve(url, fullpath)
except urllib.error.HTTPError as err:
    print(err.code)
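To mirror the first answer in Python 3, a minimal sketch (the URL is a placeholder) that handles only a 404 and re-raises any other HTTP error could look like this:

import urllib.request
import urllib.error

try:
    urllib.request.urlopen("some url")
except urllib.error.HTTPError as err:
    if err.code == 404:
        pass  # the page does not exist; handle it here
    else:
        raise  # let any other HTTP error propagate as usual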
+25
04 Oct '13 at 2:27

Tim's answer seems misleading to me, especially when urllib2 does not return the expected code. For example, this error will be fatal (believe it or not, it is not uncommon when fetching URLs):

AttributeError: 'URLError' object has no attribute 'code'

A quick, though perhaps not the best, solution is to use a nested try/except block:

import urllib2

try:
    urllib2.urlopen("some url")
except urllib2.HTTPError, err:
    try:
        if err.code == 404:
            pass  # Handle the error
        else:
            raise
    except:
        pass  # ...

For more information on nested try/except blocks, see: Are nested try/except blocks in python a good programming practice?
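As a side note, since urllib2.HTTPError is a subclass of urllib2.URLError, one way to avoid the nesting is to catch the two exception types in separate handlers. This is only a rough sketch (the URL is a placeholder), not part of the original answer:

import urllib2

try:
    urllib2.urlopen("some url")
except urllib2.HTTPError as err:
    # HTTPError always carries an HTTP status code
    if err.code == 404:
        pass  # handle the missing page here
    else:
        raise
except urllib2.URLError as err:
    # a plain URLError (e.g. DNS failure, connection refused) has .reason but no .code
    print(err.reason)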

+3
May 19 '15


