I need to add some features to the web application I support, and I have to decide how to modify the database (SQL Server 2010) to store and retrieve the new data the application needs.
The ugly but quick way:
The web application already has a “common domain table”, which stores data from several different domains and whose rows can be fetched and filtered through a stored procedure by specifying the domain field.
Something like this:
| Id  | Description | Domain |
|-----|-------------|--------|
| 001 | Apple       | Fruit  |
| 002 | Peach       | Fruit  |
| 003 | Banana      | Fruit  |
| A01 | Yellow      | Color  |
| A02 | Red         | Color  |
| A03 | Green       | Color  |

Queried with `SP_GetDomainValues @Domain='Fruit'`.
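For reference, a minimal sketch of what that table and procedure might look like. Only the column names, the procedure name `SP_GetDomainValues`, and the table name "GenericDomain" (mentioned further down) come from the question; the types and lengths are assumptions:

```sql
-- Minimal sketch of the generic domain table and its lookup procedure.
-- Column types and lengths are guesses; the names come from the question.
CREATE TABLE GenericDomain (
    Id          varchar(10)  NOT NULL PRIMARY KEY,
    Description varchar(100) NOT NULL,
    Domain      varchar(50)  NOT NULL
);
GO

CREATE PROCEDURE SP_GetDomainValues
    @Domain varchar(50)
AS
BEGIN
    SELECT Id, Description
    FROM GenericDomain
    WHERE Domain = @Domain;
END
```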
This table already has an application layer on top of it for storing and retrieving data effortlessly.
All I need to do is write a database script that populates the table with the new entries I need, under the appropriate new domains.
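Such a script could be as simple as the following; the "Size" domain and its values are invented purely for illustration:

```sql
-- Hypothetical script adding a brand-new 'Size' domain to the existing
-- table; the Ids follow the pattern visible in the sample data above.
INSERT INTO GenericDomain (Id, Description, Domain)
VALUES ('B01', 'Small',  'Size'),
       ('B02', 'Medium', 'Size'),
       ('B03', 'Large',  'Size');
```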
I must add that this application also has several domain tables, each of which stores only one domain.
The good but slow way:
Create the various tables, stored procedures, and DAL methods needed to store and retrieve the new data.
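As a sketch of what that entails for the same invented "Size" domain: a dedicated table with its own key, plus a matching procedure. All names here are assumptions, and the DAL method is omitted:

```sql
-- Dedicated-table alternative for the invented 'Size' domain: its own
-- table with a conventional surrogate key, and a matching procedure.
CREATE TABLE Sizes (
    SizeId      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Description varchar(100)      NOT NULL
);
GO

CREATE PROCEDURE SP_GetSizes
AS
BEGIN
    SELECT SizeId, Description
    FROM Sizes;
END
```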
Personally, I like the second approach, for two main reasons:
- Using the data in a query is much simpler, because you naturally join whole tables rather than filtered subsets of one large table.
- Data can be validated very naturally with foreign key constraints, something that is not feasible if you have one big table, possibly named "GenericDomain". It's not completely impossible, but instead of a plain foreign key you end up relying on awkward workarounds such as check constraints or triggers (see the sketch after this list).
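To make the second reason concrete, here is a hedged illustration: with the dedicated table the constraint is a one-line foreign key, while a foreign key against GenericDomain cannot restrict the referenced row to a single domain (the Products table is invented):

```sql
-- With a dedicated table, referential integrity is a plain foreign key
-- (the Products table is invented for illustration):
CREATE TABLE Products (
    ProductId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name      varchar(100)      NOT NULL,
    SizeId    int               NOT NULL
        REFERENCES Sizes (SizeId)  -- only existing sizes are accepted
);

-- A foreign key to GenericDomain(Id) cannot say "and Domain = 'Size'":
-- any Id from any domain would satisfy it, so validating the domain
-- requires extra machinery (triggers, or a composite key that repeats
-- the Domain value in every referencing table plus a check constraint).
```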
I am inclined to think that if you do not have some kind of hard rule to help you decide which way to go, you will end up taking the quick and dirty path every time.
In your experience, is the first option bad design, or are there occasions when you can use it without feeling too guilty?