If all users are registered, I would go with a fully normalized solution.
USERS TABLE                 OBJECTS TABLE
---------------             -----------------
user_id (primary)           object_id (primary)

USERS_TO_OBJECTS TABLE
----------------------
user_id (index)
object_id (index)
time (index)
action (index)
object_type (index)   // optional; could be useful to speed things up
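A minimal MySQL sketch of the schema above (the table and column names come from the diagram; the column types, sizes, and the AUTO_INCREMENT keys are my assumptions):

```
CREATE TABLE users (
    user_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (user_id)
);

CREATE TABLE objects (
    object_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (object_id)
);

CREATE TABLE users_to_objects (
    user_id     INT UNSIGNED NOT NULL,
    object_id   INT UNSIGNED NOT NULL,
    time        DATETIME     NOT NULL,
    action      VARCHAR(16)  NOT NULL,   -- e.g. 'updated'
    object_type VARCHAR(16)  NOT NULL,   -- optional, e.g. 'city'
    INDEX (user_id),
    INDEX (object_id),
    INDEX (time),
    INDEX (action),
    INDEX (object_type)
);
```

In practice you would tune the indexes to your actual queries; a composite index such as `(object_type, action, time)` may serve the example query below better than five single-column indexes.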
This setup should give you maximum flexibility in querying, and it will also be pretty fast, since you can skip joining the users or objects tables when you don't need them.
Edit:
Say city X (id 9876) was updated by user Y (id 1234) at time xyz. The row in USERS_TO_OBJECTS would hold:

1234    - user_id (the user that performed the action)
9876    - object_id (the object the action was performed on)
xyz     - time
updated - action (so that you can select only specific actions)
city    - object_type (so that you can select only specific objects)
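As a sketch, logging that row would be a single insert (the literal values are the ones from the example above):

```
INSERT INTO users_to_objects (user_id, object_id, time, action, object_type)
VALUES (1234, 9876, NOW(), 'updated', 'city');
```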
I loaded this table with 40M rows and the results are pretty acceptable:
0.002 seconds for a simple COUNT of the cities UPDATED in the last WEEK. The data was inserted randomly.
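For reference, the timed query would look something like this (my reconstruction of the benchmark described above, not the exact statement that was run):

```
SELECT COUNT(*)
FROM users_to_objects
WHERE action = 'updated'
  AND object_type = 'city'
  AND time >= NOW() - INTERVAL 7 DAY;
```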
Edit 2
If the table grows really huge, you can resort to MySQL partitions, and this schema lends itself to them nicely. I don't know exactly how you are going to query the table, but you could:
PARTITION BY RANGE. Partition by date: every new month or so gets a new partition.
PARTITION BY LIST. Partition by the action column: each action goes to its own partition.
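A sketch of the first option, monthly RANGE partitions on the time column (the partition names and cutoff dates are placeholders; note that MySQL requires any primary or unique key to include the partitioning column):

```
ALTER TABLE users_to_objects
PARTITION BY RANGE (TO_DAYS(time)) (
    PARTITION p_2024_01 VALUES LESS THAN (TO_DAYS('2024-02-01')),
    PARTITION p_2024_02 VALUES LESS THAN (TO_DAYS('2024-03-01')),
    PARTITION p_max     VALUES LESS THAN MAXVALUE   -- catch-all for newer rows
);
```

With this layout, date-bounded queries like the weekly COUNT only touch the relevant partitions, and old months can be dropped cheaply with ALTER TABLE ... DROP PARTITION.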
You can read more about partitioning on the MySQL website, and this article gives some details on fine-tuning partitions.