Database Deployment Strategies (SQL Server)

I am looking for a way to deploy and maintain database scripts in line with our releases.

We currently have a pretty decent way of deploying our source code: we have unit test code coverage, continuous integration and rollbacks.

The problem is keeping the database scripts in line with each release. The tendency is for everyone to run their scripts against a test database, and then run them against live when the ORM mappings are updated (i.e., when the changes go live), and then it blows up over a new column.

The first problem is that there is nowhere the scripts have to be recorded. By convention everyone "tries" to put them in the Subversion folder, but some of the lazier people just run their script directly against live, and most of the time nobody knows what they did to the database.

The second problem is that we have 4 test databases and they are ALWAYS out of date, and the only real way to bring them up to date is to restore them from the live database.

I really do believe that a process like this needs to be simple, straightforward and easy to use, in order to help the developers, not hinder them.

What I'm looking for is methods/ideas that make it easy for developers to write their database scripts so that they can be run as part of the release procedure. A process that developers will actually want to follow.

Any stories, use cases or even links are useful.

+52
c# svn sql-server-2005 visual-studio-2005
Feb 02 '09 at 21:09
15 answers

For this very problem I settled on using a migration tool: Migratordotnet.

With migrations (in any tool) you have a simple class with methods to apply your changes and to roll them back. Here is an example:

    [Migration(62)]
    public class _62_add_date_created_column : Migration
    {
        public void Up()
        {
            //add it nullable
            Database.AddColumn("Customers", new Column("DateCreated", DateTime));

            //seed it with data
            Database.Execute("update Customers set DateCreated = getdate()");

            //add not-null constraint
            Database.AddNotNullConstraint("Customers", "DateCreated");
        }

        public void Down()
        {
            Database.RemoveColumn("Customers", "DateCreated");
        }
    }

This example shows how you can handle tricky updates, such as adding a new non-null column to a table that already contains data. This can be automated easily, and you can move up and down between versions with ease.

It has been a really valuable addition to our build and has streamlined the process immensely.

I posted a comparison of the various migration frameworks in .NET here: http://benscheirman.com/2008/06/net-database-migration-tool-roundup

+32
Feb 05 '09 at 21:26

Read K. Scott Allen's series of posts on database versioning.
We built a tool for applying database scripts in a controlled manner based on the process he describes, and it works well.
This can then be used as part of a continuous integration process, with each test database having the changes deployed to it whenever a database update script is committed. I would suggest having a baseline script plus update scripts, so that you can always run a sequence of scripts to get a database from its current version to whatever new state is needed.
It does still require some process and discipline from the developers (all changes have to be rolled into a new version of the base install script and a patch script).
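To make the idea concrete, here is a minimal sketch of one way such a runner could work. The SchemaVersion table, its columns and the NNNN_description.sql naming convention are my assumptions for illustration only, not part of the tool described above:

    // Minimal sketch of a "baseline + numbered update scripts" runner.
    // Assumes the baseline script has already created the SchemaVersion table.
    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;

    class UpdateScriptRunner
    {
        static void Main()
        {
            const string connectionString = "Server=.;Database=MyApp;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Highest script number already applied (0 if none).
                int currentVersion;
                using (var cmd = new SqlCommand(
                    "select isnull(max(Version), 0) from SchemaVersion", connection))
                {
                    currentVersion = (int)cmd.ExecuteScalar();
                }

                // Apply every update script with a higher number, in order.
                var pending = Directory.GetFiles(@"Database\UpdateScripts", "*.sql")
                    .Select(path => new
                    {
                        path,
                        version = int.Parse(Path.GetFileNameWithoutExtension(path).Split('_')[0])
                    })
                    .Where(s => s.version > currentVersion)
                    .OrderBy(s => s.version);

                foreach (var script in pending)
                {
                    // Note: scripts containing GO separators need to be split into
                    // batches first (see the batch-splitting sketch further down).
                    using (var cmd = new SqlCommand(File.ReadAllText(script.path), connection))
                        cmd.ExecuteNonQuery();

                    using (var cmd = new SqlCommand(
                        "insert into SchemaVersion (Version, AppliedOn) values (@v, getdate())", connection))
                    {
                        cmd.Parameters.AddWithValue("@v", script.version);
                        cmd.ExecuteNonQuery();
                    }

                    Console.WriteLine("Applied " + Path.GetFileName(script.path));
                }
            }
        }
    }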

+7
Feb 02 '09 at 21:16

We have been using SQL Compare from RedGate for several years:

http://www.red-gate.com/products/index.htm

The pro version has a command line interface that you could probably use to configure your deployment procedures.
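For example, a build step could shell out to the command line to generate an upgrade script. This is only a rough sketch: the SQLCompare.exe switch names, paths and server/database names below are placeholders from memory and may differ between versions, so check the Red Gate documentation before relying on them:

    // Rough sketch of invoking SQL Compare's command line from a build step.
    // Executable path, server/database names and switches are placeholders.
    using System.Diagnostics;

    class SqlCompareStep
    {
        static void Main()
        {
            var info = new ProcessStartInfo
            {
                FileName = @"C:\Program Files\Red Gate\SQL Compare\SQLCompare.exe",
                Arguments = "/Server1:BUILDSRV /Database1:MyApp_Latest " +
                            "/Server2:PRODSRV /Database2:MyApp " +
                            "/ScriptFile:upgrade.sql",
                UseShellExecute = false
            };

            using (var process = Process.Start(info))
            {
                process.WaitForExit();
                // A non-zero exit code means differences were found or an error
                // occurred; decide whether that should fail the build.
            }
        }
    }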

+6
Feb 02 '09 at 21:44

We use a modified version of the database versioning approach described by K. Scott Allen. We use the Database Publishing Wizard to create the original baseline script, then a custom SQL SMO-based C# tool to drop stored procedures, views and user-defined functions. Change scripts that contain schema and data changes are generated with Red Gate tools. We end up with a structure like this:

    Database\
        ObjectScripts\        - contains stored procs, views and user funcs, 1 per file
        baseline.sql          - database snapshot which includes tables and data
        sc.01.00.0001.sql     - incremental change scripts
        sc.01.00.0002.sql
        sc.01.00.0003.sql

The custom tool creates the database if necessary, applies baseline.sql and adds the SchemaChanges table if they are missing, and then applies whatever change scripts are needed based on what is recorded in the SchemaChanges table. This happens as part of our NAnt build script every time we build a deployment through CruiseControl.NET.
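For anyone rolling their own version of such a tool, one fiddly detail is that baseline.sql and the change scripts usually contain GO batch separators, which a single SqlCommand will not accept. A minimal sketch of how a runner might handle this (my assumption, not the author's actual code):

    // Executes a .sql file that may contain GO batch separators by splitting it
    // into individual batches first (GO is a client-side keyword, not T-SQL).
    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Text.RegularExpressions;

    class SqlFileRunner
    {
        public static void Run(string connectionString, string scriptPath)
        {
            string sql = File.ReadAllText(scriptPath);
            string[] batches = Regex.Split(sql, @"^\s*GO\s*$",
                RegexOptions.Multiline | RegexOptions.IgnoreCase);

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                foreach (string batch in batches)
                {
                    if (batch.Trim().Length == 0)
                        continue;
                    using (var cmd = new SqlCommand(batch, connection))
                        cmd.ExecuteNonQuery();
                }
            }
        }

        static void Main(string[] args)
        {
            // e.g. SqlFileRunner "Server=.;Database=MyApp;Integrated Security=true" Database\baseline.sql
            Run(args[0], args[1]);
        }
    }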

If anyone wants the source code of the schema changer application, I can drop it on CodePlex, Google Code or wherever.

+5
Feb 05 '09 at 21:47

Go here:

http://www.codinghorror.com/blog/archives/001050.html

Scroll down a bit to the list of 5 links to odetocode.com. It's a fantastic five-part series. I would use it as a starting point for getting ideas and working out a process that will work for your team.

+4
Feb 02 '09 at 21:18

If you are trying to keep database schemas in sync, take a look at the Red Gate SQL Comparison SDK. Create a temporary database from the create script (newDb) - this is what you want your database to look like. Compare newDb with your old database (oldDb), get the changeset from that comparison, and apply it using Red Gate. You can build this upgrade process into your tests, and you can try to get all developers to agree that there is a single place where the create script for the database is kept. The same practice works well for upgrading your database across several versions and running data migration scripts between each step (using an XML document to map the create and migration scripts).

Edit: with the Red Gate approach you only maintain create scripts, not update scripts, since Red Gate generates the update script for you. It will also handle dropping and creating indexes, stored procedures, functions, etc.
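As a sketch of the first step only (building the scratch newDb from the checked-in create script; database and file names are illustrative, and the comparison itself is then left to SQL Compare or the SDK):

    // Build a scratch "newDb" from the checked-in create script. The comparison
    // and change-set application are then handed to the Red Gate tooling.
    using System.Data.SqlClient;
    using System.IO;

    class BuildScratchDatabase
    {
        static void Main()
        {
            using (var master = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            {
                master.Open();
                new SqlCommand("if db_id('newDb') is not null drop database newDb", master).ExecuteNonQuery();
                new SqlCommand("create database newDb", master).ExecuteNonQuery();
            }

            // Run the create script against the empty database (split it on GO
            // separators first if it contains them - see the runner sketch above).
            using (var newDb = new SqlConnection("Server=.;Database=newDb;Integrated Security=true"))
            {
                newDb.Open();
                new SqlCommand(File.ReadAllText(@"Database\create.sql"), newDb).ExecuteNonQuery();
            }

            // newDb now reflects what you want the database to look like; compare
            // it with oldDb and apply the resulting change set to oldDb.
        }
    }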

+4
Feb 02 '09 at 21:42

You should use a build tool like MSBuild or NAnt. We use a combination of CruiseControl.NET, NAnt, and SourceGear Fortress to handle our deployments, including the SQL objects. The NAnt db build task calls sqlcmd.exe to run the scripts against our dev and staging environments after they are checked into Fortress.
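For illustration (server, database and script names are placeholders), the call the build makes boils down to something like this; in NAnt it would be the same command wrapped in an exec task:

    // Illustrative only: invoking sqlcmd.exe to run one change script against a
    // target database.
    using System.Diagnostics;

    class RunSqlcmd
    {
        static void Main()
        {
            var psi = new ProcessStartInfo
            {
                FileName = "sqlcmd.exe",
                // -E = Windows auth, -b = abort and return an error code on failure
                Arguments = "-S DEVSQL01 -d MyApp_Dev -E -b -i sc.01.00.0004.sql",
                UseShellExecute = false
            };

            using (var p = Process.Start(psi))
            {
                p.WaitForExit();
                if (p.ExitCode != 0)
                    System.Console.Error.WriteLine("Script failed; failing the build.");
            }
        }
    }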

+2
Feb 02 '09 at 21:17

We use Visual Studio for Database Professionals and TFS to version and manage our database deployments. This lets us treat our databases just like code (check out, check in, lock, view version history, branch, build, deploy, test, etc.) and even include them in the same solution files if we wish.

Our developers can work against local databases to avoid stepping on each other in a shared environment. When they check database changes into TFS, continuous integration builds, tests, and deploys them to our integrated dev environment. We have a separate build on the release branches that creates differential deployment scripts for each subsequent environment.

Later, if an error is detected in the release, we can go to the release branch and simultaneously fix the code and the database.

This is a great product, but its adoption suffered early on because of a Microsoft marketing mistake. It was originally a separate product under Team System, which meant that in order to use the developer features and the database features at the same time you had to upgrade to the much more expensive Team Suite edition. We (and many other customers) gave Microsoft that feedback, and we were very pleased when they announced this year that DB Pro has been folded into the Developer edition, and that anyone already licensed for the Developer edition can install the Database edition straight away.

+1
Sep 11 '09 at 19:31

Gus mentioned DB Ghost in passing above - I'd like to second it as a potential solution.

A brief overview of how my company uses DB Ghost:

  • After the schema for a new database has more or less settled down during initial development, we use the DB Ghost "Data and Schema Scripter" to create script files (.sql) for all the database objects (and any static data), and we check these script files into source control (the tool separates objects into folders such as "Stored Procedures", "Tables", etc.). At this point we can use DB Ghost's "Packager" or "Packager Plus" tools to create a standalone executable that builds a new database from these scripts.
  • All changes to the database schema are checked in as changes to the relevant script files.
  • At any time we can use the Packager to create an executable that will (a) create a new database or (b) upgrade an existing database. Certain kinds of changes (such as those requiring data updates) need some extra configuration, but there are pre-upgrade and post-upgrade scripts that get run for those.

The "upgrade" process involves building a clean "source" database from the scripts and then (after the custom pre-upgrade scripts have run) comparing the schema of that source database with the target database. DB Ghost then updates the target database to match.

We roll changes out to production databases regularly (we have 14 clients across 7 different production environments), and we invariably deploy a fairly large set of changes with the DB Ghost executable (created during our build process). Any change made in production that has not been checked into source control (or into the corresponding release branch) is LOST. This has forced everyone to check their changes in consistently.

Summarizing:

  • If you enforce a policy whereby all database updates are deployed using the DB Ghost upgrade executable, you can force developers to check their changes in consistently, regardless of whether they are deployed manually in the interim.
  • Adding a step (or steps) to your build process to create the DB Ghost upgrade executable effectively gives you a test that the database can be built from the scripts (because DB Ghost builds the "source" database even when it is only creating the upgrade package), and if you add a step (or steps) to run the upgrade package against any of the four test databases you mentioned, you can keep those test databases in line with source control.

There are some caveats and some limitations to what changes are “easily” deployed with this tool (actually a set of related tools), but they are all pretty minor (at least for my company):

  • Renaming objects must be done in one of the custom scripts.
  • The entire database is always updated (objects in a single schema cannot be updated separately, for example), which makes it awkward to support client-specific objects in the main application database.
+1
Sep 18 '09 at 21:11

These answers contain a bunch of links I want to follow up on (I rolled my own system a few years back and should see how it compares). One thing you will need, and which I hope is covered in those links, is discipline. I don't quite see how any automated system can work if anybody can change anything at any time. (Your question implies this can happen on your production systems, but surely that can't be true.)

Having one person (the fabled DBA) dedicated to the task of managing changes to the databases, in particular the production databases, is a very common solution. As for keeping your X development and test databases consistent: if they are used by many people, you are again best served by having a single individual act as a "clearing house" for changes; if everyone has their own copy of the database, then they are responsible for keeping it in order themselves, and having a centralized "source" database will be crucial whenever they need a refreshed database.

The following post might be of interest here: how-to-refresh-a-test-instance-of-sql-server-with-production-data-without-using

0
Sep 10 '09 at 14:18

The book Database Refactoring addresses many of these issues at a conceptual level.

As for tools, I know that DB Ghost works well for SQL Server. I hear that "Data Dude" is actually included in the latest version of Visual Studio, but I have no experience with it.

As far as realistically pulling off continuous-integration-style database development goes, it gets resource-hungry very quickly because of the number of database copies you need. It is very feasible when the database can live on a developer's workstation, but impractical when the database is so large that it has to be deployed across a grid. To do it you need 1 copy of the database per developer (that is, developers who make DDL changes, not just changes to procs), plus 6 shared copies. The shared copies are:

  • INT DEV → developers push their refactorings to INT DEV for integration testing. When integration testing passes, this database is copied to DEV.
  • DEV → the "official" development copy of the database. INT DEV is regularly refreshed with a copy of DEV. Developers starting a new refactoring take a fresh copy of the database from DEV.
  • INT QA → the same idea as INT DEV, but for the QA team. When integration tests pass here, this database is copied to QA and DEV*.
  • QA
  • INT PROD → the same idea as INT QA, but for production. When integration tests pass here, this database is copied to PROD, QA* and DEV*.
  • PROD

* When copying databases down the DEV/QA/PROD lines, you will also need to run scripts to update environment-specific test data (for example, setting up users in QA that the QA team uses for testing but which do not exist in production).

0
Sep 10 '09 at 3:00 p.m.

One possible option is to look into implementing DML auditing on your test databases, and then simply rolling those audit logs up into a script for final testing and live deployment. SQL Server 2008 improves DML auditing significantly, but even SQL Server 2005 supports it via triggers.
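On SQL Server 2005 the trigger-based route looks roughly like this (the table, column and trigger names are purely illustrative, not from the answer above):

    // Illustrative trigger-based DML auditing: every change to Customers is
    // logged so the audit trail can later be mined when building the release script.
    using System.Data.SqlClient;

    class InstallDmlAudit
    {
        const string AuditTable = @"
    create table dbo.CustomerAudit
    (
        AuditId    int identity primary key,
        Action     char(1)  not null,            -- I, U or D
        CustomerId int      not null,
        AuditedAt  datetime not null default getdate(),
        AuditedBy  sysname  not null default suser_sname()
    )";

        const string AuditTrigger = @"
    create trigger dbo.trg_Customers_Audit on dbo.Customers
    after insert, update, delete
    as
    begin
        set nocount on;
        insert dbo.CustomerAudit (Action, CustomerId)
        select 'I', i.CustomerId from inserted i
            where not exists (select 1 from deleted d where d.CustomerId = i.CustomerId)
        union all
        select 'U', i.CustomerId from inserted i
            where exists (select 1 from deleted d where d.CustomerId = i.CustomerId)
        union all
        select 'D', d.CustomerId from deleted d
            where not exists (select 1 from inserted i where i.CustomerId = d.CustomerId);
    end";

        static void Main()
        {
            using (var conn = new SqlConnection("Server=.;Database=MyApp_Test;Integrated Security=true"))
            {
                conn.Open();
                new SqlCommand(AuditTable, conn).ExecuteNonQuery();
                new SqlCommand(AuditTrigger, conn).ExecuteNonQuery();
            }
        }
    }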

0
Sep 11 '09 at 19:49

Red Gate has a document that describes how to achieve build automation: http://downloads.red-gate.com/HelpPDF/ContinuousIntegrationForDatabasesUsingRedGateSQLTools.pdf

It is built around SQL Source Control, which integrates with SSMS and your existing version control system.

0
Dec 11 '10 at 18:23

I wrote a .NET-based tool for automatic database versioning. We use it in production to roll database updates (including patches) through several environments, to keep a log in each database of which scripts have been run, and to do it all automatically. It has a command-line console, so you can create batch scripts that use the tool. Check it out: https://github.com/bmontgomery/DatabaseVersioning

0
Apr 01

For what it's worth, this is a real-life example of a simple, low-cost approach used by my former employer (and one I am trying to impress upon my current employer as a bare-minimum first step).

Add a table called "DB_VERSION" or similar. In EVERY update script, add a row to that table; the row can hold as few or as many columns as you see fit to describe the update, but as a minimum I would suggest {VERSION, EXECUTION_DATE, DESCRIPTION, EXECUTION_USER}. Now you have a concrete record of what has been going on. If someone runs their own unauthorised script you would still have to follow the recommendations in the answers above, but this is a very easy way to dramatically improve on your existing version control (i.e. none).

Now, when you have an update script taking the database from v2.1 to v2.2 and you want to check whether the lone maverick guy has actually run it against his database, you can just look for rows where VERSION = 'v2.2', and if you get a result back, don't run that update script. This can be built into a console application if necessary.
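A minimal sketch of that (the column types, the version string format and the console check are my assumptions, not the original author's):

    // DB_VERSION sketch: the table, the row each update script appends, and the
    // check a console runner can make before applying the v2.1 -> v2.2 script.
    using System;
    using System.Data.SqlClient;

    class DbVersionCheck
    {
        const string CreateTable = @"
    create table dbo.DB_VERSION
    (
        VERSION        varchar(20)  not null primary key,
        EXECUTION_DATE datetime     not null default getdate(),
        DESCRIPTION    varchar(200) not null,
        EXECUTION_USER sysname      not null default suser_sname()
    )";

        // Every update script then ends with a line such as:
        //   insert DB_VERSION (VERSION, DESCRIPTION) values ('v2.2', 'Add DateCreated to Customers');

        static void Main()
        {
            using (var conn = new SqlConnection("Server=.;Database=MyApp;Integrated Security=true"))
            {
                conn.Open();

                // First deployment only: create the table if it is missing.
                new SqlCommand("if object_id('dbo.DB_VERSION') is null " + CreateTable, conn)
                    .ExecuteNonQuery();

                var check = new SqlCommand("select count(*) from dbo.DB_VERSION where VERSION = @v", conn);
                check.Parameters.AddWithValue("@v", "v2.2");

                if ((int)check.ExecuteScalar() > 0)
                    Console.WriteLine("v2.2 is already applied - skip the update script.");
                else
                    Console.WriteLine("v2.2 not found - run the v2.1 -> v2.2 update script.");
            }
        }
    }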

0
Jul 27 '11 at 11:57


