Why is Chrome requesting a robots.txt file?

In my logs, I noticed that Chrome requested robots.txt in addition to everything I expected it to request.

[...]
2017-09-17 15:22:35 - (sanic)[INFO]: Goin' Fast @ http://0.0.0.0:8080
2017-09-17 15:22:35 - (sanic)[INFO]: Starting worker [26704]
2017-09-17 15:22:39 - (network)[INFO][127.0.0.1:36312]: GET http://localhost:8080/  200 148
2017-09-17 15:22:39 - (sanic)[ERROR]: Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/sanic/app.py", line 493, in handle_request
    handler, args, kwargs, uri = self.router.get(request)
  File "/usr/local/lib/python3.5/dist-packages/sanic/router.py", line 307, in get
    return self._get(request.path, request.method, '')
  File "/usr/local/lib/python3.5/dist-packages/sanic/router.py", line 356, in _get
    raise NotFound('Requested URL {} not found'.format(url))
sanic.exceptions.NotFound: Requested URL /robots.txt not found

2017-09-17 15:22:39 - (network)[INFO][127.0.0.1:36316]: GET http://localhost:8080/robots.txt  404 42
[...]

I run Chromium:

60.0.3112.113 (Developer Build) Built on Ubuntu, running on Ubuntu 16.04 (64-bit)

Why is this happening? Can someone clarify?

+6
3 answers

It's possible that it is not your site that requested the robots.txt file, but one of your Chrome extensions (for example, the Wappalyzer you mentioned). That would explain why this only happened in Chrome.

To know for sure, you can check the Network tab in Chrome DevTools to see at what point the request is made and whether it comes from one of your scripts.
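Regardless of where the request comes from, you can also stop the traceback from cluttering your Sanic logs by serving the file yourself. A minimal sketch, assuming the Sanic API shown in your traceback (the handler name and the rules are just placeholders):

    from sanic import Sanic
    from sanic.response import text

    app = Sanic(__name__)

    @app.route("/robots.txt")
    async def robots_txt(request):
        # Serve a permissive robots.txt so the 404 traceback disappears.
        # "Disallow:" with no path allows all robots to crawl everything.
        return text("User-agent: *\nDisallow:\n")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)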

+5

The robots.txt file is something that web robots, such as Google's crawler, request from a site.

Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called the "Robots Exclusion Protocol".

It works like this: a robot wants to visit a site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt and finds:

    User-agent: *
    Disallow: /

The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the site.

There are two important considerations when using /robots.txt:

Robots can ignore your /robots.txt. In particular, malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers, will pay no attention to it. The /robots.txt file is also publicly available: anyone can see which sections of your server you do not want robots to visit, so don't try to use /robots.txt to hide information.

For more details, see http://www.robotstxt.org/robotstxt.html
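As an aside, this is roughly how a well-behaved crawler applies those rules. A minimal sketch using Python's standard-library urllib.robotparser (the example.com URLs are placeholders taken from the example above):

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("http://www.example.com/robots.txt")
    rp.read()

    # can_fetch() tells the robot whether a given user agent may
    # retrieve the URL under the site's robots.txt rules.
    print(rp.can_fetch("*", "http://www.example.com/welcome.html"))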

-3
