Robots.txt not working

I used robots.txt to block one of the folders on my site. The folder contains sites that are under construction, and Google indexed all of those test sites, which is why I added robots.txt. I submitted the site and robots.txt is enabled; www.mysite.com/robots.txt now shows a success status. But Google still lists the test links. Here is the code I wrote in robots.txt ...

User-agent: *
Disallow: /foldername/

Can someone tell me what the problem is? Thanks in advance.

2 answers

Have you tested your robots.txt following Google's instructions? http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=156449

To test the site's robots.txt file:

  • On the Webmaster Tools home page, select the site you want.
  • Under "Site configuration", click "Crawler access".
  • If it is not already selected, click the "Test robots.txt" tab.
  • Copy the contents of your robots.txt file and paste it into the first box.
  • In the "URLs" box, enter the URL you want to test against.
  • In the "User-agents" list, select the user agents you want.


Webmaster Tools: https://www.google.com/webmasters/tools/home?hl=en
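If you want to sanity-check the rules offline first, Python's standard-library `robotparser` applies the same matching logic a well-behaved crawler would. This is a minimal sketch; the domain `www.mysite.com` and the paths are hypothetical stand-ins for your own:

```python
# Check robots.txt rules locally with Python's stdlib robot parser.
from urllib import robotparser

# The rules from the question, as they should appear in robots.txt.
rules = """\
User-agent: *
Disallow: /foldername/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot falls under "User-agent: *" here, so the folder is blocked
# while the rest of the site remains crawlable.
print(rp.can_fetch("Googlebot", "http://www.mysite.com/foldername/test.html"))  # False
print(rp.can_fetch("Googlebot", "http://www.mysite.com/index.html"))            # True
```

If the parser already blocks the URL, the rules themselves are fine and the issue is timing (see the other answer about re-crawling).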


See "Request to remove content from our index" on the Google Webmaster Blog. You can speed up removal by submitting a removal request through Google Webmaster Tools; otherwise, the pages will eventually be dropped from the index when they are re-crawled (that is, updating the robots.txt file has no immediate effect; the change takes hold on subsequent crawls).

