I have read quite a bit about client-side JavaScript applications and search engines, and I found two common approaches:
Workflow 1:
Prerequisite: The entire web application degrades gracefully and can be used without JavaScript, so search engine robots can crawl it as plain HTML.
- A user arrives from a Google search on a specific topic
- The topic loads as quickly as possible as plain HTML
- The JS app framework boots in the background
- Once it is ready, the JS app framework takes over all interactions, routing, etc. (a minimal sketch follows after this list)
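To make workflow 1 concrete, here is a minimal sketch of the client-side part, assuming a globals-style Ember.js app (all names here are hypothetical, not from the question): the server delivers the topic page as plain HTML inside a known element, and the script below is loaded asynchronously and takes over that element and the routing once Ember has booted.

```js
// app.js - loaded asynchronously after the plain-HTML page has already rendered.
// Globals-style Ember (as used around the time of the ajax-crawling scheme).
window.App = Ember.Application.create({
  rootElement: '#content'  // Ember takes over the server-rendered markup here
});

App.Router.map(function () {
  this.route('topic', { path: '/topics/:topic_id' });
});

App.TopicRoute = Ember.Route.extend({
  model: function (params) {
    // Fetch the same data the server already used for the plain-HTML version
    return Ember.$.getJSON('/api/topics/' + params.topic_id);
  }
});
```

Until this script has loaded and booted, users and crawlers both see the server-rendered HTML; afterwards Ember handles all further navigation on the client.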
Workflow 2:
Prerequisite: The server backend is built following Google's AJAX crawling guide (https://developers.google.com/webmasters/ajax-crawling) and returns plain HTML for _escaped_fragment_ URLs (e.g. www.example.com/ajax.html?_escaped_fragment_=key=value). As I understand it, something like http://phantomjs.org/ can be used for this so that no code has to be duplicated.
- Google shows the #! AJAX URL in its results.
- The request is made using the #! AJAX URL.
- The Ember.js application is initialized and the desired state is loaded depending on the URL (see the sketch after this list).
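The server-side half of workflow 2 could look roughly like the following sketch. It assumes a Node/Express backend and a helper PhantomJS script (both are my assumptions, not mentioned in the question): requests carrying _escaped_fragment_ are prerendered by PhantomJS running the very same Ember app, so nothing has to be implemented twice.

```js
// prerender.js - run by PhantomJS, e.g.:
//   phantomjs prerender.js "http://localhost:3000/#!/posts/42"
var page = require('webpage').create();
var url = require('system').args[1];

page.open(url, function (status) {
  if (status !== 'success') {
    phantom.exit(1);
  }
  // Give the Ember app a moment to boot and render its templates.
  setTimeout(function () {
    console.log(page.content);  // the fully rendered HTML snapshot
    phantom.exit(0);
  }, 1000);
});
```

```js
// server.js - assumed Express middleware mapping _escaped_fragment_ back to the #! URL
var express = require('express');
var execFile = require('child_process').execFile;
var app = express();

app.use(function (req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment === undefined) return next();  // normal users get the JS app as usual

  var url = 'http://localhost:3000/#!' + fragment;
  execFile('phantomjs', ['prerender.js', url], function (err, stdout) {
    if (err) return res.status(500).send('prerender failed');
    res.send(stdout);  // plain HTML snapshot for the crawler
  });
});
```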
Question:
What could an Ember.js crawling stack look like that offers server-side rendering to search engine bots and the JS framework experience to regular users, and what do Ember.js developers recommend here? (For example: Node + Ember.js + PhantomJS + X, or Rails + Ember.js + Y, or Play Framework + Z)?
I know there can be many ways, but I think it would be useful to use Stack Overflow to filter out the common approaches.
Sidenote:
I have already looked at some JS frameworks that aim to provide such a complete stack out of the box. I am asking specifically about Ember.js because I like its approach, and I think the team behind it is definitely capable of creating one of the best frameworks.
Bijan