I'm using React and React Router in my single-page web application. Since I'm doing client-side rendering, I'd like to serve all of my static files (HTML, CSS, JS) from a CDN. I'm using Amazon S3 to host the files and Amazon CloudFront as the CDN.
When a user requests /css/styles.css, the file exists, so S3 serves it. When a user requests /foo/bar, that's a dynamic URL, so S3 redirects to a hashbang version: /#!/foo/bar, which serves index.html. On the client side I remove the hashbang so that my URLs stay pretty.
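For reference, the S3 static-website redirection rule looks roughly like this (the hostname is a placeholder; my actual bucket configuration may differ slightly):

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <!-- Fire when S3 can't find an object for the requested key -->
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>example.com</HostName>
      <!-- Prepend #!/ to the key, so /foo/bar becomes /#!/foo/bar,
           which falls back to index.html -->
      <ReplaceKeyPrefixWith>#!/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```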
All this works great for 100% of my users.
- All static files are served from the CDN
- Dynamic URLs get redirected to /#!/{...}, which serves index.html (my single-page app)
- My client side removes the hashbang so the URLs look good again (see the sketch below)
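The client-side cleanup is just a few lines; a minimal sketch of the idea (simplified, not my exact code) is:

```js
// Minimal sketch: if the browser landed on /#!/foo/bar, rewrite the URL
// to /foo/bar before the router takes over. Assumes the HTML5 history API.
if (window.location.hash.indexOf('#!') === 0) {
  const path = window.location.hash.slice(2); // "#!/foo/bar" -> "/foo/bar"
  window.history.replaceState(null, '', path);
}
```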
Problem
The problem is that Google won't crawl my site. Here's why:
- Google requests /
- It sees a bunch of links, e.g. to /foo/bar
- Google requests /foo/bar
- It gets redirected to /#!/foo/bar (302 Found)
- It strips the hashbang and requests /
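So, roughly, the exchange the crawler sees looks something like this (domain is a placeholder):

```
GET /foo/bar HTTP/1.1
Host: example.com

HTTP/1.1 302 Found
Location: https://example.com/#!/foo/bar
```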
Why does it strip the hashbang? My application works great for 100% of my users, so why do I have to reconfigure it just so Google can crawl it correctly? It's 2016, just follow the hashbang...
</rant>
Am I doing something wrong? Is there a better way to get S3 to serve index.html when it doesn't recognize the path?
Setting up a Node server to handle these paths is not the right solution, because it completely defeats the purpose of the CDN.
In this thread, Michael Jackson, a top contributor to React Router, says: "Fortunately, the hashbang is not in widespread use anymore." How would you change my setup to avoid using the hashbang?