How many items were scraped for each start_url

I use Scrapy to crawl 1000 URLs and store the scraped items in MongoDB. I would like to know how many items were found for each URL. In the crawl stats I can see 'item_scraped_count': 3500, but I need this count for each start_url separately. Each request also has a referer, which I could use to count the items per start URL manually:

2016-05-24 15:15:10 [scrapy] DEBUG: Crawled (200) <GET https://www.youtube.com/watch?v=6w-_ucPV674> (referer: https://www.youtube.com/results?q=billys&sp=EgQIAhAB)

But I am wondering whether Scrapy has built-in support for this.
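
For context, a minimal sketch of that manual approach (the spider name, start URL and selectors are illustrative placeholders). Note that the Referer header records the immediate parent page, not the original start URL, so this only maps items back to a start_url when they are scraped one hop away from it:

from collections import Counter

import scrapy


class SearchSpider(scrapy.Spider):
    # Illustrative spider; name, start URL and selectors are placeholders.
    name = "search"
    start_urls = ["https://www.youtube.com/results?q=billys&sp=EgQIAhAB"]

    referer_counts = Counter()

    def parse(self, response):
        # Scrapy's RefererMiddleware sets the Referer header on these
        # follow-up requests automatically.
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse_item)

    def parse_item(self, response):
        # Count one item against the page that linked here.
        referer = response.request.headers.get("Referer", b"").decode()
        self.referer_counts[referer] += 1
        yield {"url": response.url}

    def closed(self, reason):
        self.logger.info("items per referer: %r", dict(self.referer_counts))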

1 answer

challenge accepted!

There is nothing in Scrapy that directly supports this, but you can keep it separate from your spider code with a Spider Middleware:

middlewares.py:

from scrapy.http.request import Request


class StartRequestsCountMiddleware(object):

    start_urls = {}

    def process_start_requests(self, start_requests, spider):
        # Remember each start URL and tag its request with an index
        # so descendants can be traced back to it.
        for i, request in enumerate(start_requests):
            self.start_urls[i] = request.url
            request.meta.update(start_request_index=i)
            yield request

    def process_spider_output(self, response, result, spider):
        for output in result:
            if isinstance(output, Request):
                # Propagate the index to every follow-up request.
                output.meta.update(
                    start_request_index=response.meta['start_request_index'],
                )
            else:
                # Everything else is an item: count it against the
                # start_url it originated from.
                spider.crawler.stats.inc_value(
                    'start_requests/item_scraped_count/{}'.format(
                        self.start_urls[response.meta['start_request_index']],
                    ),
                )
            yield output

settings.py:

SPIDER_MIDDLEWARES = {
    ...
    'myproject.middlewares.StartRequestsCountMiddleware': 200,
}

Now the end-of-crawl stats will include one counter per start URL:

'start_requests/item_scraped_count/START_URL1': ITEMCOUNT1,
'start_requests/item_scraped_count/START_URL2': ITEMCOUNT2,
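
These counters live in the regular stats collector, so besides appearing in the log at the end of the crawl they can also be read programmatically, for example from the spider's closed() hook. A minimal sketch, assuming the middleware above is enabled (the spider name and URLs are illustrative):

import scrapy


class MySpider(scrapy.Spider):
    name = "myspider"
    start_urls = ["https://example.com/page1", "https://example.com/page2"]

    def parse(self, response):
        yield {"url": response.url}

    def closed(self, reason):
        # Dump the per-start_url counters collected by the middleware.
        prefix = "start_requests/item_scraped_count/"
        for key, value in self.crawler.stats.get_stats().items():
            if key.startswith(prefix):
                self.logger.info("%s: %d item(s)", key[len(prefix):], value)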