Is a large domain table a bad design?

I need to add some features to the web application I support, and I have to decide how to modify the database (SQL Server 2010) so that the application can store and retrieve the new data it needs.

The ugly but quick way:
The web application already has a "common domain table" that stores data from several domains; values are retrieved and filtered through a stored procedure by specifying the domain field.
Something like this:

| Id  | Description | Domain |
|-----|-------------|--------|
| 001 | Apple       | Fruit  |
| 002 | Peach       | Fruit  |
| 003 | Banana      | Fruit  |
| A01 | Yellow      | Color  |
| A02 | Red         | Color  |
| A03 | Green       | Color  |

`SP_GetDomainValues @Domain='Fruit'`
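
The question does not show the actual schema, so for reference here is a minimal sketch of what such a table and procedure might look like; the name `CommonDomain` and all column types are assumptions:

```sql
-- Hypothetical sketch of the generic domain table and its lookup procedure.
-- The real schema is not shown in the question; names and types are assumed.
CREATE TABLE dbo.CommonDomain
(
    Id          varchar(10)   NOT NULL,
    Description nvarchar(100) NOT NULL,
    Domain      varchar(50)   NOT NULL,
    CONSTRAINT PK_CommonDomain PRIMARY KEY (Id)
);
GO

CREATE PROCEDURE dbo.SP_GetDomainValues
    @Domain varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    -- All lookups funnel through one table, filtered by a magic string.
    SELECT Id, Description
    FROM dbo.CommonDomain
    WHERE Domain = @Domain;
END
GO
```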

The table already has an application layer for storing and retrieving data with minimal effort.
All I would need to do is write a database script that populates the table with the new entries, under the appropriate new domains.
I should add that the application also has several dedicated domain tables, each of which stores a single domain.

The good but slow way:
I would need to create several tables, stored procedures, and DAL methods for storing and retrieving the data.
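
To make the trade-off concrete, here is a sketch of what the second approach might look like for one of the example domains; all table and column names are invented for illustration:

```sql
-- Sketch of the dedicated-table approach (all names are illustrative).
CREATE TABLE dbo.Fruit
(
    FruitId     int           IDENTITY(1,1) NOT NULL,
    Description nvarchar(100) NOT NULL,
    CONSTRAINT PK_Fruit PRIMARY KEY (FruitId)
);
GO

-- Any referencing table gets real referential integrity for free:
CREATE TABLE dbo.Product
(
    ProductId int IDENTITY(1,1) NOT NULL,
    FruitId   int NOT NULL,
    CONSTRAINT PK_Product PRIMARY KEY (ProductId),
    CONSTRAINT FK_Product_Fruit FOREIGN KEY (FruitId)
        REFERENCES dbo.Fruit (FruitId)
);
GO

-- Queries become natural joins rather than filtered subsets:
SELECT p.ProductId, f.Description
FROM dbo.Product AS p
JOIN dbo.Fruit   AS f ON f.FruitId = p.FruitId;
```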

Personally, I prefer the second approach, for two main reasons:

  • Using the data in a query is much simpler, because you join tables naturally instead of filtering subsets of one large table

  • Data can be validated with foreign key constraints very naturally, something that is not feasible if you have one big table, perhaps named "GenericDomain". It is not completely impossible, just clumsy, relying on ad-hoc constraints (see the sketch after this list)
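
To illustrate the second point: on the generic table, the closest you can get is a workaround like the following, which builds on the hypothetical `CommonDomain` sketch above. You pin the domain in every referencing table with a persisted computed column and a composite foreign key:

```sql
-- Clumsy workaround on the generic table (assumes the sketched schema above).
-- The foreign key must carry the domain, so the referenced pair must be unique:
ALTER TABLE dbo.CommonDomain
    ADD CONSTRAINT UQ_CommonDomain_Id_Domain UNIQUE (Id, Domain);
GO

CREATE TABLE dbo.Basket
(
    BasketId    int         IDENTITY(1,1) NOT NULL,
    FruitId     varchar(10) NOT NULL,
    -- Pin this column to the 'Fruit' domain via a persisted computed column:
    FruitDomain AS CAST('Fruit' AS varchar(50)) PERSISTED,
    CONSTRAINT PK_Basket PRIMARY KEY (BasketId),
    CONSTRAINT FK_Basket_CommonDomain FOREIGN KEY (FruitId, FruitDomain)
        REFERENCES dbo.CommonDomain (Id, Domain)
);
```

It works, but every referencing table needs its own pinned column and composite key, which is exactly the kind of arbitrary constraint juggling that dedicated tables avoid.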

I am inclined to think that unless you have some kind of hard rule to help you decide which way to go, you will end up taking the quick and dirty path every time.

In your experience, is the first choice bad design, or are there occasions when you can use it without feeling too guilty?

+6
2 answers

As usual, it depends.

If the application works and no further functional changes are expected, you can just go quick and dirty and get on with your life. The business will probably thank you, because it is the fastest and most predictable route. You may feel bad about not leaving the code in better shape than you found it, but hopefully you can move on to something more useful.

If the application has bugs, or you will probably need to add even more features in the future, or you are the long-term owner of this service, then you need to weigh the future pain of working with this model against the short-term pain of bringing it into a more maintainable state.

As @a_horse_with_no_name writes, the design you describe is a well-known anti-pattern (sometimes called the "One True Lookup Table"). It is fragile: changing data can break the application in all kinds of exciting ways. It relies on every layer of the system understanding the underlying storage mechanism and remembering the correct word for "Fruit" (including casing, spelling, and any rogue spaces). And it relies on application logic to validate the data, because there is no referential integrity; in most cases that logic falls short, so an innocent end user enters a value they believe is correct, it violates the rules in some way, and it is lost forever.
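
As an illustration of that fragility, and assuming the hypothetical procedure sketched in the question above, a misspelled domain string raises no error at all:

```sql
-- Assumes the hypothetical SP_GetDomainValues sketched earlier.
EXEC dbo.SP_GetDomainValues @Domain = 'Fruit';   -- the intended rows
EXEC dbo.SP_GetDomainValues @Domain = 'Fruits';  -- typo: zero rows, no error, no warning
```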

So if you have to live with this code for a while, I would think about refactoring it. I would start by writing unit and integration tests around the existing code, then try to gradually move parts of the application to a saner database model. Converting the entire application could take as long as the original build, and most business people will not be happy to hear that adding a few simple features requires a complete redesign, so look for pragmatic ways to get where you want to go!

+1

I would just go with what you already have: getting the job done simply by adding data is fine, even if it is not "excellent".

Also consider that your "good" design requires schema changes, new code, and testing, all of which are costly and risky.

+4
