Database change management: setting up initial creation scripts and subsequent migration scripts

I have a database change management workflow. It is based on SQL scripts (that is, it is not a code-driven solution).

The basic setup is as follows:

    Initial/
        Generate Initial Schema.sql
        Generate Initial Required Data.sql
        Generate Initial Test Data.sql
    Migration/
        0001_MigrationScriptForChangeOne.sql
        0002_MigrationScriptForChangeTwo.sql
        ...

Deploying the database then means running all of the Initial scripts followed by the sequential migration scripts. The tool handles version requirements, etc.
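For illustration, here is a minimal sketch of how a tool like this typically tracks which migrations have been applied. The SchemaVersion table and its columns are my assumption, not something from the question:

    -- Hypothetical version-tracking table; all names here are illustrative.
    IF OBJECT_ID(N'dbo.SchemaVersion', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.SchemaVersion
        (
            VersionNumber int           NOT NULL PRIMARY KEY, -- matches the 0001, 0002, ... prefixes
            ScriptName    nvarchar(260) NOT NULL,
            AppliedAtUtc  datetime      NOT NULL DEFAULT (GETUTCDATE())
        );
    END
    GO

    -- The tool runs each Migration/NNNN_*.sql in order and records it, e.g.:
    -- INSERT dbo.SchemaVersion (VersionNumber, ScriptName)
    -- VALUES (1, N'0001_MigrationScriptForChangeOne.sql');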

My question is: given this setup, is it useful to support this as well:

    Current/
        Stored Procedures/
            dbo.MyStoredProcedureCreateScript.sql
            ...
        Tables/
            dbo.MyTableCreateScript.sql
            ...
        ...

By "this" I mean a directory of scripts (separated by an object type) that represents the creation scripts for deploying the current / latest version of the database.

For some reason I really like the idea, but I cannot quite substantiate its necessity. Am I missing something?

Benefits:

  • For dev and source control, we get the usual benefits of one file per database object (per-object history and diffs)
  • For deployment, we can bring a new database instance to the latest version either by running Initial + Migration or by running the scripts from Current/
  • For developers, we would not need a live DB instance to develop against; we could work "offline" from the Current/ folder

The disadvantages would be as follows:

  • For each change, we need to update the scripts in the Current/ folder and also create a migration script in the Migration/ folder (an example of this double bookkeeping is sketched below)
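For example, adding a column would touch both places. The file names follow the layout above; the script number, table, and column are hypothetical:

    -- Migration/0003_AddStatusToMyTable.sql (hypothetical next script):
    ALTER TABLE dbo.MyTable ADD Status int NOT NULL DEFAULT (0);

    -- ...and Current/Tables/dbo.MyTableCreateScript.sql must be edited by hand
    -- so that its CREATE TABLE statement now also contains:
    --     Status int NOT NULL DEFAULT (0)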

Thanks in advance for any input!

+6
database sql-server redgate
3 answers

Actually, this is the best way. As cumbersome as it may seem, it is better than the alternatives of using SQL Compare-like tools or VSDB. I have argued for the same approach several times: Version Control and your Database. My applications deploy the v1 schema from the initial script and then run an upgrade script for each version. Each script knows how to upgrade version N-1 to N, and only that. The end result is the current version.
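As a sketch of that idea, a single N-1 to N script might look like this. It reuses the hypothetical SchemaVersion table sketched in the question above, and the ALTER is just a stand-in for a real change:

    -- 0002_MigrationScriptForChangeTwo.sql: upgrades version 1 to 2, and only that.
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = 1)
        RAISERROR (N'Database is not at version 1; cannot apply migration 0002.', 16, 1)
    ELSE
    BEGIN
        ALTER TABLE dbo.MyTable ADD NewColumn int NULL; -- the actual change for this version

        INSERT dbo.SchemaVersion (VersionNumber, ScriptName)
        VALUES (2, N'0002_MigrationScriptForChangeTwo.sql');
    END
    GO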

The biggest drawback is the lack of an authoritative .sql file to consult for the current version of any object (procedure, table, view, etc.). But the benefits of being able to deploy your application on top of any previous version, and of deploying with well-controlled and tested scripts, far outweigh that drawback.

If you feel bad about using this deployment process (deploy v1 from script, then apply v1.1, then v1.2 ... until finally you apply v4.5, the current one), then keep this in mind: it is the exact same process SQL Server itself uses to upgrade a database between releases. When you attach an older database you see the famous "database is upgrading from version 611 to 612" messages, and you can see that the upgrade happens in steps; it does not jump directly to the current version 651 (or whatever it is in your case). Nor does the upgrade run a diff tool to deploy the v651 schema against the v611 one. That is because the best approach is the one you are already using: upgrade one step at a time.
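You can check that internal version yourself. DATABASEPROPERTYEX is a documented SQL Server function (the 'Version' property is informational only, and the database name here is a placeholder):

    -- Returns the internal on-disk version discussed above,
    -- e.g. 611/612 for SQL Server 2005 databases.
    SELECT DATABASEPROPERTYEX(N'MyDatabase', 'Version') AS InternalVersion;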

And to add an actual answer to your question, after posting a rather tangential rant (this is a topic I have strong opinions on, can you tell?): I do consider it valuable to have scripts of the current version, but I think they should be a deliverable of the continuous integration build process. In other words, your build server should build the current database (using the upgrade scripts) and then, as a build step, script out the database and attach the resulting current-version schema scripts to the build drop. They should be used only as reference for searching and code review, not as deployment deliverables; my 2c.
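As a sketch of that build step, the definitions of programmable objects can be pulled straight out of the freshly built database via the catalog views (sys.sql_modules is a real SQL Server view; writing each row out to a Current/<type>/<schema>.<name>.sql file would be up to the build script):

    -- Dump the current definition of every procedure, view, function, trigger, etc.
    SELECT  s.name      AS SchemaName,
            o.name      AS ObjectName,
            o.type_desc AS ObjectType,
            m.definition
    FROM    sys.sql_modules AS m
    JOIN    sys.objects     AS o ON o.object_id = m.object_id
    JOIN    sys.schemas     AS s ON s.schema_id = o.schema_id
    ORDER BY o.type_desc, s.name, o.name;

Note that tables do not appear in sys.sql_modules; scripting those out needs a tool such as SMO or SSMS's Generate Scripts.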

+6

I think that in the end this will just complicate things further. All the changes for a version should live in a single script, so that you can test that script in one context and know it will run correctly in another context, such as production.

+1

Martin,

In the real world, your production database only ever accepts updates; you never "create" it from scratch. Therefore the most important things for you to store, review, test, etc. are your set of update scripts. Those are the scripts that will make it to production, so they are the only ones that are real.

You are doing the right thing by making them primary. But developers should still be able to get the "current picture" of what the schema looks like. Database administrators like to have this too, although (too often) they get it by logging into production servers and running some GUI tool. (Yikes!)

The only caveat I have about your approach is the breakdown of the current schema by object type. Those scripts should be generated automatically, by scripting out the live database itself. If you can automatically classify them by type, great! If not, do whatever makes them easiest to maintain, but the mantra should always be "auto-generated from the live database" (an illustrative query for the classification part follows below).
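If it helps, the classification itself is easy to get from the catalog. A minimal sketch; the folder mapping is my assumption:

    -- List every user object with its type, so an export tool can route each
    -- generated file to a per-type folder (Tables/, Stored Procedures/, ...).
    SELECT  o.type_desc AS FolderName,
            s.name + N'.' + o.name AS FileBaseName
    FROM    sys.objects AS o
    JOIN    sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE   o.is_ms_shipped = 0
    ORDER BY o.type_desc, s.name, o.name;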

0
