It's a common mistake to worry about "large" tables and performance. If your queries can use indexes to access the data, it doesn't matter whether the table holds 1,000 or 1,000,000 records - at least not in any way you could measure. The design you describe is widely used, and it's a great design wherever time is a key part of the business logic.
For example, if you want to find out what the price of an item was when the customer placed the order, searching for the product record where valid_from < order_date and valid_until is either NULL or > order_date is by far the easiest solution.
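A minimal sketch of that lookup, using Python's sqlite3 with a hypothetical product_price table (the table and column names, dates, and prices are illustrative assumptions; a NULL valid_until marks the currently valid row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product_price (
        product_id  INTEGER,
        price       REAL,
        valid_from  TEXT,
        valid_until TEXT   -- NULL = still the current price
    )
""")
conn.executemany(
    "INSERT INTO product_price VALUES (?, ?, ?, ?)",
    [
        (1, 9.99,  "2023-01-01", "2023-06-01"),
        (1, 12.49, "2023-06-01", None),
    ],
)

def price_at(product_id, order_date):
    # The range predicate from the text: valid_from < order_date
    # AND (valid_until IS NULL OR valid_until > order_date).
    row = conn.execute(
        """
        SELECT price FROM product_price
        WHERE product_id = ?
          AND valid_from < ?
          AND (valid_until IS NULL OR valid_until > ?)
        """,
        (product_id, order_date, order_date),
    ).fetchone()
    return row[0] if row else None

print(price_at(1, "2023-03-15"))  # 9.99  (historical price)
print(price_at(1, "2023-07-01"))  # 12.49 (current price)
```

An index on (product_id, valid_from, valid_until) would keep this lookup fast regardless of how much history accumulates.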
This is not always the right choice - if you keep old data purely for archiving purposes, separate archive tables may make sense. However, you must be sure that time really is not part of the business logic, because otherwise the pain of querying multiple tables will be significant - imagine having to check both the product table and the product_archive table every time you want to find out the price of a product at the moment an order was placed.
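To illustrate that pain, here is a hedged sketch of the same lookup once the history has been split into a hypothetical product table and a product_archive table with identical columns - every historical query now has to search both:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical split: a live table plus an archive table with the
# same shape, as in the archive-table design discussed above.
for table in ("product", "product_archive"):
    conn.execute(f"""
        CREATE TABLE {table} (
            product_id INTEGER, price REAL,
            valid_from TEXT, valid_until TEXT
        )
    """)
conn.execute("INSERT INTO product VALUES (1, 12.49, '2023-06-01', NULL)")
conn.execute(
    "INSERT INTO product_archive VALUES (1, 9.99, '2023-01-01', '2023-06-01')"
)

def price_at(product_id, order_date):
    # The predicate is duplicated and glued together with UNION ALL,
    # and stays duplicated in every query that needs history.
    row = conn.execute(
        """
        SELECT price FROM product
        WHERE product_id = ? AND valid_from < ?
          AND (valid_until IS NULL OR valid_until > ?)
        UNION ALL
        SELECT price FROM product_archive
        WHERE product_id = ? AND valid_from < ?
          AND (valid_until IS NULL OR valid_until > ?)
        """,
        (product_id, order_date, order_date) * 2,
    ).fetchone()
    return row[0] if row else None

print(price_at(1, "2023-03-15"))  # 9.99, but only found via the archive
```

Compared with the single-table design, nothing is gained for queries like this one; the split only pays off when historical rows are genuinely never part of the business logic.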