It is CPU time, not wall clock time, and it is based on the Linux setrlimit function.
Each scraper run has a limit of approximately 80 seconds of processing time. After that, in Python and Ruby you will get a ScraperWiki CPU time-out exception. In PHP, the run ends with "terminated by SIGXCPU".
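As a rough illustration (not ScraperWiki's actual code), this is how a CPU-time limit set with setrlimit behaves: the kernel delivers SIGXCPU once the process has consumed its CPU-second budget, no matter how much wall-clock time has passed. The 80-second figure here simply mirrors the limit described above.

```python
# Minimal sketch (assumed setup, not ScraperWiki's implementation) of a
# CPU-time limit via setrlimit: SIGXCPU fires after ~80 s of CPU usage.
import resource
import signal

def on_sigxcpu(signum, frame):
    raise RuntimeError("CPU time limit exceeded")

signal.signal(signal.SIGXCPU, on_sigxcpu)

# Soft limit of 80 CPU seconds, hard limit slightly above it.
resource.setrlimit(resource.RLIMIT_CPU, (80, 90))

try:
    while True:       # busy loop: burns CPU time, not just clock time
        pass
except RuntimeError as e:
    print(e)          # triggered after ~80 s of CPU, not 80 s on the clock
```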
In many cases, this happens when you first scrape a site and are catching up with the backlog of existing data. The best way to handle it is to make your scraper do a chunk at a time, using the save_var and get_var functions (see http://scraperwiki.com/docs/python/python_help_documentation/ ) to remember your place.
This also makes it easier to recover from other parsing errors.
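Here is a minimal sketch of that chunking pattern, assuming the scraperwiki library's sqlite.save_var/get_var helpers; the paginated URL, chunk size, and scrape_page parser are hypothetical placeholders.

```python
# Sketch: resume a paginated scrape across runs by remembering the last
# page processed, so each run stays within the ~80 s CPU budget.
import scraperwiki

BASE_URL = "http://www.example.com/listing?page=%d"  # hypothetical source
PAGES_PER_RUN = 50  # chunk size; tune so one run finishes in time

def scrape_page(page_number):
    # Placeholder parser: fetch the page and save rows from it, e.g.
    # scraperwiki.sqlite.save(unique_keys=["id"], data=row)
    html = scraperwiki.scrape(BASE_URL % page_number)
    return html is not None

# Pick up where the previous run stopped (defaults to page 1).
start = scraperwiki.sqlite.get_var("last_page", 1)

for page in range(start, start + PAGES_PER_RUN):
    if not scrape_page(page):
        break
    # Record progress so the next run resumes here even if this one
    # is cut off by the CPU limit.
    scraperwiki.sqlite.save_var("last_page", page + 1)
```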