In my experience, problems start at around 10 concurrent users. I'm sure there are examples with very small datasets that work fine with many more.
Access may be good for some applications. There seems to be a lot of passion in this thread.
The key concept to understand here is that there is no server: EACH QUERY pulls the ENTIRE table over the network. If it's a JOIN, every query pulls EVERY joined table over the network, because the JOIN engine runs on your desktop, not on a server.
It does not matter where the Access file lives. At best it sits on one user's desktop machine; everyone else still has to reach the data over the network.
If you have a 100k-row table and you want id #1042, you will pull 100k rows × the record length over the network, and then filter out everything except #1042 locally. The result cannot be cached, because a colleague may have changed the very record you want to look at next.
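To put rough numbers on the example above (the 200-byte record length is my own illustrative assumption, not a fact about any particular database):

```python
# Back-of-envelope cost of one "SELECT ... WHERE id = 1042" against a
# shared Access file, per the reasoning above: with no server-side
# filtering, the whole table crosses the wire before the filter runs
# locally on the client.
ROWS = 100_000          # table size from the example above
RECORD_BYTES = 200      # assumed average record length (illustrative)

bytes_per_lookup = ROWS * RECORD_BYTES
print(f"{bytes_per_lookup / 1e6:.0f} MB pulled to return one row")
```

Roughly 20 MB of traffic to fetch a single record, and that cost is paid again on every lookup because nothing can be cached.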
So I do not think the real limit is the number of simultaneous users of an Access database. It is the number of people simultaneously pulling a significant chunk of the data over the network every time they press a button.
Network load and network latency grow as the number of tables, the number of records, and the number of users grow. Perhaps a multiplier effect. Compound that with remote data centers (encryption), VPN users (encryption), users on different continents, etc. etc. etc.
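A minimal sketch of that multiplier effect, assuming traffic per button press scales with rows × record length and total load with users × presses (all numbers below are illustrative assumptions, not measurements):

```python
# Rough estimate of total MB/minute crossing the wire for a shared
# Access file, under the assumption that each button press re-pulls
# the whole table for the user who pressed it.
def network_load_mb(rows, record_bytes, users, presses_per_minute):
    """Estimated MB per minute of network traffic, all users combined."""
    per_press = rows * record_bytes          # bytes moved per button press
    total = users * presses_per_minute * per_press
    return total / 1e6

# 10 users, a 100k-row table of 200-byte records, 2 presses/minute each
print(network_load_mb(100_000, 200, users=10, presses_per_minute=2))
```

With those made-up but modest numbers the shared file is already pushing hundreds of MB per minute, which is why the pain shows up well before any documented user limit.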
greg