Well, since I finally got this working, I'd like to share what I found.
First of all, an HTML snapshot should be provided to the crawler at a specific URL, where
?_escaped_fragment_=
replaces the #! (hash-bang) part of the pretty URL.
So, if you have:
http://www.website.com/#!/eng/home
Your server should provide a snapshot at:
http://www.website.com/?_escaped_fragment_=/eng/home
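The mapping above can be sketched as a small function (not from the original answer, just an illustration of how the crawler derives the request URL; note that the real scheme also percent-escapes a few special characters such as %, #, & and +, which this sketch skips):

```javascript
// Turn a hash-bang URL into the "_escaped_fragment_" URL the crawler requests.
function escapedFragmentUrl(prettyUrl) {
  const i = prettyUrl.indexOf('#!');
  if (i === -1) return prettyUrl;            // no hash-bang: nothing to map
  const base = prettyUrl.slice(0, i);        // everything before "#!"
  const fragment = prettyUrl.slice(i + 2);   // everything after "#!"
  return base + '?_escaped_fragment_=' + fragment;
}

console.log(escapedFragmentUrl('http://www.website.com/#!/eng/home'));
// http://www.website.com/?_escaped_fragment_=/eng/home
```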
If someone is interested in the method I use to create snapshots: I use a Node module called judo ( https://npmjs.org/package/judo ); to use it you need PhantomJS ( http://phantomjs.org/ ) and Node ( http://nodejs.org/ ) on your server (more on installing PhantomJS on a server: How to configure and run PhantomJS on Ubuntu? ).
After you have installed everything, you just need to write a JS file that uses judo (for example, judo.js); following the documentation page I linked above, you will be ready in 5 minutes. Then upload the file to the server and run it with Node to create the snapshots and a sitemap.
After that, you need to serve the Google crawler the HTML snapshots when it requests ?_escaped_fragment_= URLs; the easiest way, in my opinion, is a .htaccess file. In my case, only 3 lines are needed:
RewriteEngine On
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=/(.*)$
RewriteRule ^$ /seo/snapshots/%1\.html [L]
(since my judo.js file saves the snapshots in the /seo/snapshots directory)
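If your server isn't Apache, the same routing can be done in Node itself; here is a minimal sketch (not from the original answer) using only the standard library, assuming snapshots are stored as /seo/snapshots/<path>.html to match the rewrite rule:

```javascript
// Map an incoming request URL to a snapshot file path, or null for normal traffic.
function snapshotPath(requestUrl) {
  const { searchParams } = new URL(requestUrl, 'http://localhost');
  const frag = searchParams.get('_escaped_fragment_');
  // Only rewrite when the crawler sent a fragment starting with "/"
  if (frag === null || !frag.startsWith('/')) return null;
  return '/seo/snapshots/' + frag.slice(1) + '.html';
}

console.log(snapshotPath('/?_escaped_fragment_=/eng/home')); // /seo/snapshots/eng/home.html
console.log(snapshotPath('/')); // null (normal visitor: serve the app)
```

A real server would then read that file and send it as the response body, falling back to the regular single-page app when the function returns null.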
Finally, you can verify that everything works with the "Fetch as Google" option in Google Webmaster Tools; if you did everything right, you will see that the result is the HTML snapshot.
Cereal killer