I think the correct answer to this question depends on what you mean by an "error-causing URL".
Catching specific exceptions
If you think that any URL that raises an exception should be added to the missing queue, you can do:
```python
try:
    image = urllib2.urlopen(tmpurl).read()
except (httplib.HTTPException, httplib.IncompleteRead, urllib2.URLError):
    missing.put(tmpurl)
    continue
```
This will catch any of those three exceptions and add the URL to the missing queue. More simply, you could do:
```python
try:
    image = urllib2.urlopen(tmpurl).read()
except:
    missing.put(tmpurl)
    continue
```
This catches any exception at all, but a bare except is not considered Pythonic and may hide other bugs in your code.
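A minimal sketch of why the bare except is risky, using a hypothetical `fetch` helper and a deliberate typo (names here are stand-ins, not from the original code):

```python
def fetch(url, missing):
    try:
        # Typo: 'urlib2' is undefined, so this raises NameError,
        # which has nothing to do with the URL being bad.
        image = urlib2.urlopen(url).read()
    except:
        # The bare except swallows the NameError too, so the URL is
        # wrongly recorded as missing and the bug goes unnoticed.
        missing.append(url)

missing = []
fetch("http://example.com/pic.jpg", missing)
```

Here `missing` ends up containing the URL even though the real problem was a misspelled module name, not a failed download.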
If by "error-causing URL" you mean any URL that raises an httplib.HTTPException, but you still want to continue processing when other errors occur, you can do:
```python
try:
    image = urllib2.urlopen(tmpurl).read()
except httplib.HTTPException:
    missing.put(tmpurl)
    continue
except (httplib.IncompleteRead, urllib2.URLError):
    continue
```
This will add the URL to the missing queue if it raises an httplib.HTTPException; otherwise it catches httplib.IncompleteRead and urllib2.URLError and your script simply moves on without recording the URL.
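The routing logic can be sketched in isolation with stand-in exception classes (these names are invented for illustration; in the real code they would be the httplib/urllib2 exceptions):

```python
class HTTPError(Exception):
    """Stand-in for httplib.HTTPException."""

class TransientError(Exception):
    """Stand-in for httplib.IncompleteRead / urllib2.URLError."""

def process(url, missing, fail_with=None):
    try:
        if fail_with is not None:
            raise fail_with  # simulate a failed download
    except HTTPError:
        missing.append(url)  # record the URL and move on
    except TransientError:
        pass                 # skip silently, nothing recorded

missing = []
process("http://a", missing, HTTPError())
process("http://b", missing, TransientError())
```

Only the first URL is recorded; the second failure is tolerated without being added to the queue.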
Iterating over the queue
As an aside, while 1 loops always make me a little uneasy. You should be able to iterate over the contents of the queue with the following template, though you're free to keep doing it your own way:
```python
for tmpurl in iter(q.get, "STOP"):
```
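A self-contained sketch of how the two-argument form of iter() drives this loop: it calls q.get repeatedly until the call returns the sentinel "STOP". (This uses the Python 3 queue module name; the original code targets Python 2, where it is Queue.)

```python
from queue import Queue

q = Queue()
for url in ["http://a", "http://b"]:
    q.put(url)
q.put("STOP")  # sentinel that terminates the loop

# iter(callable, sentinel) yields q.get() results until "STOP" appears.
seen = [tmpurl for tmpurl in iter(q.get, "STOP")]
```

After the loop, `seen` holds the two URLs and the sentinel itself has been consumed from the queue.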
Safe file handling
As another aside, unless it's absolutely necessary to do otherwise, you should use context managers for opening and writing files. Your three lines of file handling would then become:
```python
with open(tmpurl[-35:] + ".jpg", "wb") as wf:
    wf.write(image)
```
The context manager takes care of closing the file and will do this even if an exception occurs when writing to the file.
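A small runnable sketch of that guarantee, using a temporary file and a byte string as a stand-in for the downloaded image:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.jpg")
data = b"not really a jpeg"  # stand-in for the downloaded image bytes

with open(path, "wb") as wf:
    wf.write(data)

# Once the with-block exits, the handle is closed automatically,
# even if wf.write had raised an exception inside the block.
```

After the block, `wf.closed` is True and the bytes are safely on disk.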