How to disable logging in Scrapy (Python)

I created a spider using Scrapy, but I cannot figure out how to disable logging by default. From the documentation it seems that I should disable it by doing

logging.basicConfig(level=logging.ERROR) 

But this has no effect. Looking at the code for logging.basicConfig(), I assume this is because "the root logger already has handlers configured", but maybe I'm wrong about that. Anyway, can someone explain what I need to do to stop Scrapy from printing the usual

  2015-10-18 17:42:00 [scrapy] INFO: Scrapy 1.0.3 started (bot: EF)
  2015-10-18 17:42:00 [scrapy] INFO: Optional features available: ssl, http11, boto

and so on?

EDIT: As suggested by sirfz below, the line

  logging.getLogger('scrapy').setLevel(logging.WARNING) 

can be used to set the logging level. However, it looks like you need to do this in the __init__ method (or later) of your spider.
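The effect of that one-liner can be checked without running a crawl; a minimal sketch using only the stdlib logging module (the child logger name "scrapy.core.engine" is just illustrative of how Scrapy's loggers are named):

```python
import logging

# Stand-in for what the spider's __init__ (or later) should do:
# Scrapy logs through loggers named "scrapy" and "scrapy.<module>",
# so raising the parent logger's level filters the INFO chatter
# for all of them at once.
scrapy_logger = logging.getLogger("scrapy")
scrapy_logger.setLevel(logging.WARNING)
```

Child loggers such as "scrapy.core.engine" are left at NOTSET, so they inherit the WARNING threshold from the parent.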

3 answers

You can simply change the logging level for scrapy (or any other logger):

 logging.getLogger('scrapy').setLevel(logging.WARNING) 

This disables all log messages below the WARNING level.

To disable all scrapy log messages, you can simply set propagate to False:

 logging.getLogger('scrapy').propagate = False 

This prevents scrapy's log messages from propagating to the root logger (which prints to the console when configured using basicConfig()).
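Both lines together can be sketched in plain stdlib code; this mimics the setup described above without needing a running spider:

```python
import logging

# basicConfig() attaches a StreamHandler to the root logger; every
# "scrapy.*" record would normally propagate up to it and get printed.
logging.basicConfig(level=logging.INFO)

# Cut the link: records logged under "scrapy" (and its children)
# no longer reach the root logger's console handler.
logging.getLogger("scrapy").propagate = False
```

Note that with propagate set to False, scrapy records are dropped entirely unless you attach a handler of your own to the "scrapy" logger.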


You can add -s LOG_ENABLED=False as a parameter when running the script. That should do the trick.

Note: as of version 1.1 this changed slightly: -s LOG_ENABLED=0
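If you would rather not pass the flag on every run, the same setting can live in the project's settings.py (LOG_ENABLED is a standard Scrapy setting; shown here as a sketch):

```python
# settings.py -- project-wide equivalent of the -s LOG_ENABLED=False flag.
# Disables Scrapy's logging entirely for every spider in the project.
LOG_ENABLED = False
```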


logging.basicConfig(**kwargs)

This function does nothing if the root logger already has handlers configured for it.

Scrapy configures handlers for the root logger, so this call will not work.
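This behavior is easy to reproduce with the stdlib alone; a minimal sketch that simulates a handler already installed on the root logger (as Scrapy's startup does) before basicConfig() is called:

```python
import logging

root = logging.getLogger()
# Simulate Scrapy's startup: a handler is attached to the root
# logger before your code runs.
root.addHandler(logging.StreamHandler())

# Because a handler is already present, this call silently returns
# without touching the root logger's level.
logging.basicConfig(level=logging.ERROR)
```

On Python 3.8+ you can pass force=True to basicConfig() to remove the existing handlers and apply the configuration anyway.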

