In SQL Server Data Tools (SSDT) there is a deployment option, “Block incremental deployment if data loss might occur”, which in my opinion is a best-practice safeguard.
Let's say we have a table foo with a column bar that is now redundant: it has no dependencies, foreign keys, etc., and we have already removed every reference to it from our data layer and stored procedures, because it is simply not used. In other words, we are satisfied that dropping this column will have no negative consequences.
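For concreteness, here is a minimal sketch of the schema I am assuming; only the table name foo, the column bar and the default constraint DF_foo_bar come from the pre-deployment script further down, the other columns are purely illustrative:

    -- Hypothetical shape of the table; only foo, bar and DF_foo_bar are taken from the question.
    create table dbo.foo
    (
        id  int identity(1,1) primary key,
        bar varchar(50) null constraint DF_foo_bar default (''),  -- redundant column to be dropped
        baz varchar(50) null                                       -- columns that remain
    );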
There are a couple of flies in the ointment:
- The column has data in it.
- The database is deployed at hundreds of distributed customer sites, and it can take months for the change to ripple out to all of them.
Because the column contains data, publishing fails unless we turn off “Block incremental deployment if data loss might occur”. However, that setting applies at the database level, not the table level, so given the distributed nature of our customers we would have to disable the data-loss check months before the first databases are updated and only re-enable it once every customer has been upgraded (our databases carry version numbers set by our build).
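For context, when the block is left on, the generated deployment script guards a column drop with a rowcount check against the whole table, roughly of this shape (paraphrased from typical SSDT output, not copied verbatim):

    if exists (select top 1 1 from [dbo].[foo])
        raiserror (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127)
         with nowait;
    go
    alter table [dbo].[foo] drop constraint [DF_foo_bar];
    go
    alter table [dbo].[foo] drop column [bar];
    go

Because the check is against rows in the table rather than data in the specific column, any populated table trips it, which is why the option has to be disabled database-wide for the drop to go through.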
You might think that we could solve this with a pre-deployment script, for example:
    if exists (select * from information_schema.columns
               where table_name = 'foo' and column_name = 'bar')
    begin
        alter table foo drop constraint DF_foo_bar;
        alter table foo drop column bar;
    end
But again, this fails unless the data-loss option is disabled.
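If the pre-deployment route is attempted anyway, one wrinkle across hundreds of customer databases is that the default constraint may not be named DF_foo_bar everywhere (SQL Server auto-generates names for unnamed defaults). A sketch that looks the name up instead of hard-coding it, assuming schema dbo; note this still does not bypass the database-level data-loss block:

    -- Sketch only: drops whatever default constraint is bound to foo.bar, then the column.
    if exists (select * from information_schema.columns
               where table_name = 'foo' and column_name = 'bar')
    begin
        declare @constraint sysname, @sql nvarchar(max);

        select @constraint = dc.name
        from sys.default_constraints dc
        join sys.columns c
            on c.object_id = dc.parent_object_id
           and c.column_id = dc.parent_column_id
        where dc.parent_object_id = object_id('dbo.foo')
          and c.name = 'bar';

        if @constraint is not null
        begin
            set @sql = N'alter table dbo.foo drop constraint ' + quotename(@constraint) + N';';
            exec sp_executesql @sql;
        end

        alter table dbo.foo drop column bar;
    end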
I'm just curious what others have done in this scenario, as I would like a level of granularity that does not currently seem to be possible.
sql-server database-project ssdt
Fetchez la vache