Postgres has a richer feature set and, by most accounts, the better query optimizer; its ability to use hash joins often makes it much faster than MySQL for joins. MySQL is reportedly faster at simple table scans. The storage engine you choose also matters a great deal.
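To make the hash-join point concrete, here is a toy sketch (in Python, not Postgres internals) of what the planner is doing: build an in-memory hash table on the smaller relation, then probe it while streaming the larger one, so each table is scanned only once instead of once per outer row.

```python
def hash_join(small, large, key):
    """Toy hash join: one pass to build, one pass to probe."""
    # Build phase: index the smaller relation by the join key.
    table = {}
    for row in small:
        table.setdefault(row[key], []).append(row)
    # Probe phase: stream the larger relation and emit merged matches.
    out = []
    for row in large:
        for match in table.get(row[key], []):
            out.append({**match, **row})
    return out

users = [{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]
orders = [{"id": 1, "total": 10}, {"id": 1, "total": 7}, {"id": 3, "total": 5}]
print(hash_join(users, orders, "id"))
```

A nested-loop join over the same data would probe `users` once per order; the hash table makes each probe O(1), which is why hash joins win on large unindexed joins.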
At some point, scaling becomes a choice between two options: scaling up (buying bigger hardware) or scaling out (adding more machines, across which you can shard your data, hang read replicas off a master, or try master-master replication; both Postgres and MySQL have solutions of varying quality for all of these).
A table of a few million rows fits comfortably in the RAM of a standard server these days; if that's your scale, you don't need to worry about any of this. Just take whichever database you're most comfortable with and optimize it: make sure the right indexes exist, that things are cached (using something like memcached where appropriate), and so on.
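The caching advice above is usually the cache-aside pattern: check the cache first, and only hit the database on a miss. A minimal sketch, where a plain dict stands in for memcached and another dict stands in for an indexed table (both are stand-ins, not real client code):

```python
cache = {}
db_hits = {"count": 0}

def query_db(db, user_id):
    # Stand-in for a SELECT ... WHERE id = %s against an indexed column.
    db_hits["count"] += 1
    return db.get(user_id)

def get_user(db, user_id):
    if user_id in cache:            # cache hit: no database round trip
        return cache[user_id]
    row = query_db(db, user_id)     # cache miss: query, then populate
    cache[user_id] = row
    return row

db = {1: {"id": 1, "name": "ann"}}
get_user(db, 1)
get_user(db, 1)
print(db_hits["count"])  # prints 1: the second call is served from the cache
```

With real memcached the dict lookup becomes a `get`/`set` over the network, and you also need an expiry or invalidation strategy when rows change, which this sketch omits.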
People point out that Facebook uses MySQL, and that's true. But they actually run hundreds (thousands?) of MySQL databases, each responsible for its own small slice of the data. If you think you can fit Facebook onto a single instance of MySQL (or Postgres, or Oracle)... well, they would probably be happy to hear from you ;-).
Once you get into terabytes, everything becomes complicated. There are specialized solutions such as Vertica, Greenplum, and Aster Data, and various "NoSQL" data stores such as Cassandra, Voldemort, and HBase. But I doubt you need to go to such extremes. Just buy a little more RAM.