Google search engine architecture - how many concurrent users search on it

With millions of users searching for so many things on Google, Yahoo, and so on, how can a server handle so many concurrent requests? I have no idea how they made it so scalable. Any insight into their architecture would be welcome.

+6
search-engine yahoo
4 answers

One piece of it: DNS load balancing. If you look up the domain

several times, you will see different machines respond.
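The rotation can be illustrated with a toy resolver (a minimal sketch: the address pool and class name are invented for illustration, not real DNS behavior or any actual Google addresses):

```python
from itertools import cycle

# Hypothetical pool of front-end addresses that a round-robin DNS zone
# might return in rotating order for a single hostname.
FRONTENDS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

class RoundRobinDNS:
    """Toy resolver: each lookup hands out the next address in the pool,
    which is how round-robin DNS spreads clients across machines."""

    def __init__(self, addresses):
        self._pool = cycle(addresses)

    def resolve(self, hostname):
        return next(self._pool)

dns = RoundRobinDNS(FRONTENDS)
answers = [dns.resolve("www.example-search.com") for _ in range(6)]
print(answers)  # each front end appears twice, in rotation
```

Real resolvers add caching and TTLs on top of this, but the rotation of answers is the load-balancing mechanism.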

There are many resources on Google's architecture; this site has a nice list:

+6

I recently searched for information on this topic, and the Wikipedia article on the Google platform was the best source I found on how Google does it. The High Scalability blog also publishes unique scalability articles almost every day. Be sure to check out its article on Google's architecture.

+3

DNS load balancing is correct, but it is not a complete answer to the question. Google uses many methods, including but not limited to the following:

  • DNS load balancing (as mentioned above)
  • Clustering - as suggested, but note the following:
    • clustered databases (database storage and retrieval is spread across many machines)
    • clustered web services (similar to DNS LB here)
    • an internally developed clustered/distributed file system
  • Highly optimized search indexes and algorithms for storing and retrieving data quickly and efficiently across the cluster
  • Caching of requests (Squid), responses (Squid), and databases (in-memory; see the shards mentioned in the article above)
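As a sketch of the clustered-database point: spreading storage across machines usually means routing each key to a shard by a stable hash, so every client agrees on which machine holds the data. This is a minimal illustration; the shard names and hash choice are assumptions, not Google's actual scheme:

```python
import hashlib

# Hypothetical shard pool; each name stands for one database machine.
DB_SHARDS = ["db-0", "db-1", "db-2", "db-3"]

def shard_for(key: str) -> str:
    """Route a key to a shard via a stable hash, so reads and writes
    for the same key always go to the same machine."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return DB_SHARDS[int.from_bytes(digest[:8], "big") % len(DB_SHARDS)]

# The same key always lands on the same shard, and different keys
# spread across the pool.
print(shard_for("user:42"))
```

Adding or removing machines with this naive modulo scheme remaps most keys, which is why production systems typically use consistent hashing instead.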
+3

The core concept of most scalable applications is clustering.
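For a search engine, clustering typically means the index itself is split across machines and every query fans out to all of them. Here is a minimal scatter-gather sketch; the documents and shard layout are invented for illustration, and a real cluster would issue these lookups as parallel RPCs:

```python
# Each shard holds a slice of a toy inverted index (term -> doc ids).
INDEX_SHARDS = [
    {"google": ["doc1", "doc4"], "scalability": ["doc1"]},
    {"google": ["doc7"], "yahoo": ["doc2"]},
    {"yahoo": ["doc9"], "scalability": ["doc3"]},
]

def search(term):
    """Fan the query out to every shard and merge the partial hits.
    Sequential here for clarity; real systems query shards in parallel."""
    hits = []
    for shard in INDEX_SHARDS:
        hits.extend(shard.get(term, []))
    return sorted(hits)

print(search("google"))  # → ['doc1', 'doc4', 'doc7']
```

Because each shard only scans its own slice, adding machines lets the index grow while per-query latency stays roughly flat.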

Here are some resources on the cluster architectures of various search engines:

You can also read interesting scientific articles on Google Research and Yahoo Research.

+1
