For having a wide table:
- Reporting is fast, since the data is denormalized and therefore no joins are required.
- It's easy for end users to understand, because they don't need to keep the data model in their heads.
Against having a wide table:
- You probably need to have several composite indexes to get good query performance.
- It is harder to maintain data consistency: if the same data appears in several rows, every one of those rows must be updated when it changes.
- Because updates touch multiple rows and have to maintain multiple indexes, concurrent update performance can become a problem as lock contention increases.
- You may end up with rows containing many NULL columns where an attribute does not apply to the entity in that row, which can complicate processing of the results.
- If lazy developers do SELECT * against the table, you end up dragging a lot of data across the network, so you usually need to maintain views exposing suitable column subsets (see the sketch after this list).
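To make the last two points concrete, here is a rough sketch, with entirely made-up table, column, and index names: a wide table with a composite index for one common query path, plus a narrow view that spares the network from SELECT *.

```sql
-- Hypothetical wide, denormalized reporting table.
CREATE TABLE CustomerReport (
    CustomerId   INT           NOT NULL PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    Region       NVARCHAR(50)  NULL,
    OrderCount   INT           NULL,
    TotalSpend   DECIMAL(18,2) NULL,
    LastOrderOn  DATE          NULL
    -- ...plus dozens more columns, many NULL for some rows.
);

-- Composite index supporting one common query path:
-- filter by Region, order by TotalSpend.
CREATE INDEX IX_CustomerReport_Region_TotalSpend
    ON CustomerReport (Region, TotalSpend DESC);
GO

-- Narrow view so callers fetch only the columns they need
-- instead of SELECT * over the whole wide row.
CREATE VIEW CustomerSpendSummary AS
SELECT CustomerId, CustomerName, Region, TotalSpend
FROM CustomerReport;
```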
So it all depends on what you are doing. If the main purpose of the table is OLAP-style reporting, and updates are infrequent and only affect a few rows, then a wide, denormalized table may well be the right thing. In an OLTP environment it probably isn't, and you would prefer narrower tables. (I usually design in 3NF and then denormalize for query performance as I go.)
You can always take the approach of normalizing the data and providing a wide view for readers, if that is what they want to see.
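A minimal sketch of that approach, again with hypothetical names: keep the base tables normalized, and give readers a wide, read-only view that joins them back together, while writers keep updating the narrow tables.

```sql
-- Normalized base tables (roughly 3NF).
CREATE TABLE Users (
    UserId   INT           NOT NULL PRIMARY KEY,
    UserName NVARCHAR(100) NOT NULL
);

CREATE TABLE Orders (
    OrderId  INT           NOT NULL PRIMARY KEY,
    UserId   INT           NOT NULL REFERENCES Users (UserId),
    Amount   DECIMAL(18,2) NOT NULL,
    PlacedOn DATE          NOT NULL
);
GO

-- Wide view for report readers; each fact still lives in
-- exactly one base-table row, so consistency is easy to keep.
CREATE VIEW UserOrderReport AS
SELECT u.UserId, u.UserName, o.OrderId, o.Amount, o.PlacedOn
FROM Users AS u
JOIN Orders AS o ON o.UserId = u.UserId;
```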
Without knowing more about your situation, it is really impossible to say much more about the pros and cons in your specific circumstances.
Edit:
Given what you said in your comments, do you think you have a long and skinny table of names = value, so that you only have UserId, PropertyName, PropertyValue columns? You might want to add other meta attributes; timestamp, version, or something else. SQL Server is quite effective at working with these types of tables, so do not reduce a simple solution like this.
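For illustration, such a table might look like this. The UserId/PropertyName/PropertyValue columns come from the description above; the column types and the meta columns (UpdatedAt, Version) are assumptions to adjust as needed.

```sql
-- Long, skinny name = value table (types are assumptions).
CREATE TABLE UserProperties (
    UserId        INT           NOT NULL,
    PropertyName  NVARCHAR(100) NOT NULL,
    PropertyValue NVARCHAR(MAX) NULL,
    -- Optional meta attributes suggested above:
    UpdatedAt     DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    Version       INT           NOT NULL DEFAULT 1,
    PRIMARY KEY (UserId, PropertyName)
);

-- Fetching all properties for one user is a single range scan
-- on the clustered primary key.
DECLARE @UserId INT = 42;
SELECT PropertyName, PropertyValue
FROM UserProperties
WHERE UserId = @UserId;
```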