Making a single-page web application hosted on Parse.com crawlable

I am building a single-page website hosted on Parse.com and trying to make it crawlable by the Facebook and Google bots. The site does not use Angular.

I looked at the thread here and configured things accordingly on my end. I created a test endpoint:

app.get('/hello/:id', function(req, res) {
  res.render('hello');
});
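For reference, my understanding of the escaped-fragment convention for that route is roughly the following (a simplified sketch, not my exact code; serveSnapshot is a hypothetical helper sketched further down):

// Simplified sketch of the escaped-fragment convention (illustrative only).
// A crawler that follows the AJAX crawling scheme turns
//   /hello/#!posts/abcdef   into   /hello/?_escaped_fragment_=posts/abcdef
// so the handler can branch on that query parameter.
app.get('/hello/:id', function(req, res) {
  if (req.query._escaped_fragment_ !== undefined) {
    // Crawler following the convention: return a pre-rendered snapshot.
    serveSnapshot(req, res); // hypothetical helper, see below
  } else {
    // Regular browser (or a crawler ignoring the convention): serve the SPA shell.
    res.render('hello');
  }
});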

This is what I see in the log when the Google bot requests a URL such as

https://www.example.org/hello/#!a/b

 I2015-11-10T05:35:18.580Z]v642 Ran custom endpoint with: Input: {"method":"GET","url":"/hello/","headers":{"accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8","accept-encoding":"gzip,deflate","cache-control":"no-cache","from":"googlebot(at)googlebot.com","host":"www.example.org","user-agent":"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)","version":"HTTP/1.1","x-forwarded-for":"10.252.5.60","x-forwarded-proto":"https"}} Result: Success 

Since the Google crawler does not rewrite the URL into its escaped-fragment form, the wrong page snapshot is returned (the one corresponding to the base URL + path with no parameters). Interestingly, the facebookbot does replace the hashbang with the _escaped_fragment_ query parameter and requests the URL below:

 /hello/?_escaped_fragment_=posts/abcdef 

This is handled correctly on the Parse side, and prerender.io serves the correct cached page.
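For completeness, this is roughly how I picture the snapshot branch being proxied to prerender.io from Parse Cloud Code (a sketch under my assumptions: PRERENDER_TOKEN is a placeholder, and the service URL format and token header are my best recollection of prerender.io's documented API, not necessarily my production code):

// Sketch only: fetch the cached snapshot from prerender.io via Cloud Code.
function serveSnapshot(req, res) {
  // Forward the request (including ?_escaped_fragment_=...) to the prerender service.
  var target = 'https://service.prerender.io/https://' + req.headers.host + req.url;
  Parse.Cloud.httpRequest({
    url: target,
    headers: { 'X-Prerender-Token': 'PRERENDER_TOKEN' }, // placeholder token
    success: function(httpResponse) {
      res.status(200).send(httpResponse.text); // cached snapshot HTML
    },
    error: function(httpResponse) {
      res.status(httpResponse.status || 500).send('Snapshot unavailable');
    }
  });
}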

I’ve been banging my head against this issue for several hours now and cannot find where the problem is, even though it seems like it should be fairly simple, and it is important.

Has anyone ever managed to make a single-page Parse app crawlable by both the Google and Facebook bots, or is this just a lost cause?

thanks!

PS: It seems that the Google bot now tries to render AJAX pages itself, regardless of escaped fragments. In other words, rendering appears to be the default behavior now, while existing AJAX pages can still be crawled via escaped fragments... see the thread here for feedback from web developers.
