Getting started with versioning my MySQL schema without overkill. Good solutions?

I have come to the point where I realize I should start versioning my databases and their changes. I have read the existing posts on SO about this topic, but I am still not sure how to go about it.

I am basically a one-man company, and until recently I did not even use version control for my code. I work in a Windows environment, using Aptana (IDE) and SVN (with TortoiseSVN). I work on PHP/MySQL projects.

What is an efficient and sufficient (no overkill) way to version my database schema?

On some projects I have a freelancer or two working with me, but I do not expect a lot of branching and merging to happen. So basically I would just like to track schema versions in parallel with my code versions.

[edit] Interim solution: for now I have decided that I will simply create a dump of the schema, plus one of the necessary seed data, whenever I make a tag (stable release). That seems to be enough for me at this stage. [/edit]

[edit2] In addition, I now use a third file called increments.sql, where I record all changes with dates etc., to make it easier to follow the change history in one file. From time to time I integrate the changes into the other two files and empty increments.sql. [/edit2]
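To illustrate, an entry in such an increments.sql could look like this (the dates, table and column names are made up for the example):

 -- 2009-06-12: add a last_login column to users
 ALTER TABLE users ADD COLUMN last_login DATETIME NULL;

 -- 2009-06-20: seed data required by the new settings table
 INSERT INTO settings (name, value) VALUES ('site_offline', '0');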

+7
version-control php mysql svn database-versioning
11 answers

I think this question deserves a modern answer, so I am going to give one myself. When I wrote the question in 2009, I don't think Phinx existed yet, and Laravel definitely didn't.

Today the answer to this question is very clear: write incremental DB migration scripts, each with an up and a down method, and run all of these scripts (or the delta of them) when installing or updating your application. And, obviously, add the migration scripts to your VCS.

As mentioned at the beginning, the PHP world has excellent tools to help you manage your migrations easily. Laravel has DB migrations built in, including the corresponding shell commands. Everyone else gets a similarly powerful framework-agnostic solution with Phinx.

Both kinds of migration, Artisan (Laravel) and Phinx, work the same way. For every change to the database, create a new migration, use plain SQL or the built-in query builder to write the up and down methods, and run artisan migrate or phinx migrate, respectively, from the console.
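As a minimal sketch of what a Phinx migration could look like (the table, column and class name are invented for the example; such a file would typically be generated with phinx create AddStatusToUsers):

 <?php
 use Phinx\Migration\AbstractMigration;

 class AddStatusToUsers extends AbstractMigration
 {
     // Applied by `phinx migrate`
     public function up()
     {
         $this->execute("ALTER TABLE users ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'active'");
     }

     // Applied by `phinx rollback`
     public function down()
     {
         $this->execute("ALTER TABLE users DROP COLUMN status");
     }
 }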

+1

An easy way for a small company: dump your database to an SQL file and add it to your repository. Then, every time you change something, add the change to the dump file.

You can then use diff to see the changes between versions, not to mention the commit comments explaining your changes. This will also make you virtually immune to MySQL updates.

The only drawback I have seen with this is that you have to remember to manually add the SQL to your dump file. You can train yourself to always remember, but be careful when you work with others. Missing updates will hurt later on.

This could be mitigated by writing a script that does it for you on every Subversion commit, but that is a lot of machinery for a one-person show.
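If you do want to automate it, a small wrapper around the commit is enough. A minimal sketch, assuming placeholder credentials and a db/schema.sql path (both invented for the example):

 #!/bin/sh
 # Refresh the schema-only dump so it gets committed together with the code.
 mysqldump --no-data -u youruser -pyourpass yourdb > db/schema.sql
 svn commit -m "code + schema update" .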

Edit: In the year since writing this answer, I have had to implement a versioning scheme for MySQL for a small team. Manually adding each change was deemed a cumbersome solution, as the comments predicted, so we went with dumping the database and adding that file to version control.

What we found was that test data kept ending up in the dump, making it difficult to tell what had changed. This could have been solved by dumping only the schema, but that was not possible for our projects, since our applications depended on certain data being in the database to work. Eventually we returned to manually adding changes to the database dump.

Not only was this the simplest solution, it also solved certain problems that some versions of MySQL have with exporting/importing. Normally we would have had to dump the development database, delete any test data, logs and so on, delete/change certain names where applicable, and only then be able to create the production database. By manually adding changes we could control exactly what would end up in production, a little at a time, so that in the end everything was ready and the move to the production environment was as painless as possible.

+7

How about versioning a file created like this:

 mysqldump --no-data database > database.sql 
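If the schema also includes stored routines and triggers, mysqldump can put those into the versioned file too; for example (database is a placeholder name):

 mysqldump --no-data --routines --triggers database > database.sql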
+2

Where I work, we have a setup script for each new version of the application containing the SQL we need to run for the upgrade. This has worked well enough for 6 developers with some branches for maintenance releases. We are considering moving to AutoPatch (http://autopatch.sourceforge.net/), which keeps track of which patches have been applied to any given database. There seems to be a little added complexity in handling branches with AutoPatch, but it doesn't sound like that would be a problem for you.

+2

I would suggest that a batch file like this should do the job (untested)...

REM dump the schema only (no data), then commit the updated file
mysqldump --no-data -ufoo -pbar dbname > path/to/app/schema.sql
svn commit -m "schema update" path/to/app/schema.sql

Just run the batch file after changing the schema, or let cron / Task Scheduler do it (but I don't know... I think svn commits the file if only the timestamp has changed, even when the contents are the same. I don't know whether that would be a problem.)

+2

The main idea is to have a folder with this structure in your base project path.

 /__DB
     /changesets
         /1123
     /data
     /tables

Now, how it all works: you have 3 folders.

Tables: holds the query that creates each table. I recommend using the name "table_name.sql".

Data: holds the query that inserts each table's data. I recommend using the same name, "table_name.sql". Note: not all tables need a data file; you add one only for tables that need seed data when the project is installed.

Changesets: this is the main folder you will work with. It holds the changes made to the original structure, organized as changeset folders. For example, I added a folder 1123, which holds the modifications made in revision 1123 (the revision number from your source control); it may contain one or more sql files.

I like to name these files xx_tablename.sql, where xx is a number indicating the order in which they should be run, since modifications sometimes have to be applied in a specific order.

Note: when you modify a table, you also apply those changes to the table and data files, since these are the files that will be used for a fresh installation.

This is the main idea.
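For instance, a changeset file in /__DB/changesets/1123 could look like this (the table and column names are invented for the illustration):

 -- 01_orders.sql: change introduced in revision 1123
 ALTER TABLE orders ADD COLUMN shipped_at DATETIME NULL;

The same ALTER would then also be folded into /__DB/tables/orders.sql, so that fresh installations get the column directly.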

For more details you can check this blog post.

+2

In our company, we did it as follows:

We put each table / DB object in its own file, e.g. tbl_Foo.sql. The files contain several "parts", delimited by

 -- part: create 

where create is just a descriptive identifier for the part. The file looks like this:

 -- part: create
 IF NOT EXISTS ...
 CREATE TABLE tbl_Foo ...

 -- part: addtimestamp
 IF NOT EXISTS ...
 BEGIN
     ALTER TABLE ...
 END

Then we have an xml file that refers to each individual part that we want to execute when updating the database to a new schema. It looks something like this:

 <playlist>
     <classes>
         <class name="table" desc="Table creation" />
         <class name="schema" desc="Table optimization" />
     </classes>
     <dbschema>
         <steps db="a_database">
             <step file="tbl_Foo.sql" part="create" class="table" />
             <step file="tbl_Bar.sql" part="create" class="table" />
         </steps>
         <steps db="a_database">
             <step file="tbl_Foo.sql" part="addtimestamp" class="schema" />
         </steps>
     </dbschema>
 </playlist>

The <classes/> part is for the GUI, and <dbschema/> with its <steps/> is the schema-change part. The <step/>:s are executed sequentially. We have some other entities, such as sqlclr, for doing different things like deploying binaries, but that is pretty much it.

Of course, we have a component that takes the playlist file and a resource / file-system component, cross-references the playlist, pulls out the required parts and then runs them as admin against the database.

Since the "parts" in the .sql files are written so that they can be executed against any version of the database, we can run all the parts against any previous/old version of the database and bring it up to the current one. Of course, there are cases where SQL Server parses column names "early" and we have to rewrite the part into exec_sql:s later on, but this doesn't happen often.

+1

Take a look at SchemaSync. It will generate the patch and revert scripts (.sql files) needed to migrate and version your database schema over time. It is a command-line utility for MySQL that is language and framework independent.

+1

A few months ago I was searching for a schema version-control tool for MySQL. I found many useful tools, such as Doctrine migrations, RoR migrations, and some tools written in Java and Python.

But none of them satisfied my requirements.

My requirements:

  • No requirements except PHP and MySQL
  • No schema configuration files like the schema.yml used in Doctrine
  • The ability to read the current schema from the connection and create a new migration script that reproduces an identical schema in other application installations

I started writing my own migration tool, and today I have a beta version.

Please try it if you are interested in this topic, and please send me feature requests and bug reports.

Source code: bitbucket.org/idler/mmp/src
Overview in English: bitbucket.org/idler/mmp/wiki/Home
Overview in Russian: antonoff.info/development/mysql-migration-with-php-project

+1

Our solution is MySQL Workbench. We regularly reverse-engineer the existing database into a model with the appropriate version number. It is then easy to diff between versions as needed. On top of that, we get nice EER diagrams, etc.

+1

I am doing something similar to Manos, except that I have a "master" file (master.sql) that I update with some regularity (once every 2 months). Then, for each change, I create a versioned .sql file containing the changes. That way I can start with master.sql, apply each versioned .sql file in order until I reach the current version, and I can also update clients using just the versioned .sql files, which keeps things simpler.
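A sketch of this layout (the file names and statements are invented for the example):

 -- master.sql: consolidated schema, refreshed every couple of months
 CREATE TABLE customers (
     id INT AUTO_INCREMENT PRIMARY KEY,
     name VARCHAR(100) NOT NULL
 );

 -- v3.sql: changes made since master.sql was last refreshed
 ALTER TABLE customers ADD COLUMN email VARCHAR(255) NULL;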

0
