No, this is wrong.
You cannot have a robots.txt file in a subdirectory. Your robots.txt must be placed in the document root of your host.
If you want to prevent crawling of URLs that start with /foo, use this record in your robots.txt file (http://example.com/robots.txt):
```
User-agent: *
Disallow: /foo
```
This allows everything to be crawled (so there is no need for an Allow line) except URLs such as:
- http://example.com/foo
- http://example.com/foo/
- http://example.com/foo.html
- http://example.com/foobar
- http://example.com/foo/bar
- ...
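
If you want to double-check which URLs that rule blocks, here is a minimal sketch using Python's standard-library urllib.robotparser. The example.com URLs are placeholders, and the rules are fed to the parser directly rather than fetched from a live robots.txt:

```python
# Sketch: verify which URLs "Disallow: /foo" blocks for all user agents.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse the record inline instead of fetching http://example.com/robots.txt
rp.parse([
    "User-agent: *",
    "Disallow: /foo",
])

for url in [
    "http://example.com/foo",
    "http://example.com/foo/",
    "http://example.com/foo.html",
    "http://example.com/foobar",
    "http://example.com/foo/bar",
    "http://example.com/other",  # not blocked: path does not start with /foo
]:
    status = "blocked" if not rp.can_fetch("*", url) else "allowed"
    print(url, "->", status)
```

Running this prints "blocked" for every URL whose path starts with /foo and "allowed" for the others, matching the prefix behavior described above.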