You currently have a quarter of a gig. This year you plan to double that (half a gig). Is it 1997? No, it's 2010, and people carry gigabytes of data around on their phones.
So the question is, what problem are you trying to solve? It can't be storage, because that's a trivial amount of data. If it's performance, I think splitting into multiple databases is likely to make things worse unless you plan to run a separate server for each database. There is an argument for separate databases on security grounds, but there are other ways of solving those problems.
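For instance, the usual alternative to per-customer databases is scoping every query to the authenticated tenant inside a single database. Here is a minimal sketch of that idea, assuming a hypothetical records table with a customer_id column (the names are mine, not from the question):

```python
# Tenant scoping inside a single database: a hypothetical `records`
# table with a `customer_id` column. Every query is parameterized on
# the caller's customer id, so one customer never sees another's rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE records (
           id INTEGER PRIMARY KEY,
           customer_id INTEGER NOT NULL,
           payload TEXT
       )"""
)
conn.executemany(
    "INSERT INTO records (customer_id, payload) VALUES (?, ?)",
    [(1, "alpha"), (1, "beta"), (2, "gamma")],
)

def records_for(customer_id: int) -> list[tuple]:
    # The customer_id filter is the security boundary: application
    # code only ever issues queries scoped to the current tenant.
    return conn.execute(
        "SELECT id, payload FROM records WHERE customer_id = ?",
        (customer_id,),
    ).fetchall()

print(records_for(1))  # only customer 1's rows
```

The same effect can be had with views or row-level security in a bigger RDBMS; the point is only that isolation doesn't require separate databases.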
Are you having problems with your current setup? Or at least seeing trends which suggest you might have problems in twelve months' time? If not, then just sit tight. If so, articulate them clearly, then work out how 300 databases would solve those problems and whether they would be worth the inevitable grief. Then recalibrate that grief to account for 10,000 users, and ask the question again.
There may be some questions that are best answered by "ten thousand databases", but not very many.
"Our largest customer adds about 12,000 records per year."
In other words, one record every ten business minutes (12,000 records spread over roughly 250 eight-hour working days, i.e. about 120,000 business minutes). That doesn't sound like a lot of write activity.
"The idea, rather, is that the client goes through all the data, he simply accesses their data."
But that's not much data, and certainly nothing that a decent indexing strategy can't handle.
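Concretely, an index on the tenant column means a client's queries touch only their own rows instead of scanning everything. A sketch, again using the hypothetical records table from above:

```python
# With an index on customer_id, per-client lookups become index
# searches rather than full table scans.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE records (id INTEGER PRIMARY KEY, customer_id INTEGER, payload TEXT)"
)
conn.execute("CREATE INDEX idx_records_customer ON records (customer_id)")

# EXPLAIN QUERY PLAN shows SQLite choosing the index over a scan,
# e.g. "SEARCH records USING INDEX idx_records_customer (customer_id=?)".
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM records WHERE customer_id = ?", (42,)
).fetchall()
print(plan)
```

At a few hundred megabytes and a dozen writes an hour, that index is the whole "scaling strategy" this workload needs.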
I still can't tell whether you have a real problem right now, or are just thinking about something that might become a problem at some point in the future.