Getting a Phing dbdeploy job to automatically roll back on a delta error

I am using the Phing dbdeploy task to manage my database schema. This works fine as long as there are no errors in my delta files.

However, if there is an error, dbdeploy will simply run the delta files up to the one with the error and then abort. This is frustrating, because I then need to manually remove the entry from the change table. If I do not, dbdeploy will assume on the next run that the migration was successful, so subsequent attempts will not do anything.
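For reference, the manual cleanup looks something like this (a sketch only: `changelog` is dbdeploy's default change-table name, and 42 stands in for the number of the failed delta):

```sql
-- Remove the record of the failed delta so dbdeploy will run it again;
-- the table name "changelog" is dbdeploy's default, 42 is illustrative.
DELETE FROM changelog WHERE change_number = 42;
```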

So the question is: is there a way to force dbdeploy to use transactions, or can you suggest any other way to automatically roll back a migration when an error occurs?

Note: I am not an expert with Phing, so if this involves writing a custom task, any code sample or URL with additional information would be highly appreciated. Thanks.

+6
php mysql migration phing
6 answers

(if you're still there ...) Regarding a Phing task for database dumps: use your database's dump utility and wrap it in a Phing task. I mostly use Postgres and have this in my Phing build.xml:

 <target name="db-dump" depends="">
     <php expression="date('Ymd-Hi')" returnProperty="phing.dump.ts"/>
     <exec command="pg_dump -h ${db.host} -U ${db.user} -O ${db.name} | gzip > ${db.dumppath}/${db.name}-${phing.dump.ts}.gz" />
 </target>
+3

The easiest way to solve your problem is to use the pdosqlexec task, which by default runs the SQL script in a transaction. If an error occurs, the database engine automatically rolls back your changes (including those in the change log table, so the database is left in its previous state).

Example:

 <pdosqlexec url="pgsql:host=${db.host};dbname=${db.name}"
             userid="${db.user}"
             password="${db.pass}"
             src="${build.dbdeploy.deployfile}" />
+3

I know this is a very old thread, but maybe it will be useful to someone else. You can use a trycatch block to implement a solution for this. My example:

 <trycatch>
     <try>
         <exec command="${progs.mysql} -h${db.live.host} -u${db.live.user} -p${db.live.password} ${db.live.name} &lt; ${db.live.output}/${build.dbdeploy.deployfile}"
               dir="${project.basedir}" checkreturn="true" />
         <echo>Live database was upgraded successfully</echo>
     </try>
     <catch>
         <echo>Errors in upgrading database</echo>
         <exec command="${progs.mysql} -h${db.live.host} -u${db.live.user} -p${db.live.password} ${db.live.name} &lt; ${db.live.output}/${build.dbdeploy.undofile}"
               dir="${project.basedir}" checkreturn="true" />
     </catch>
 </trycatch>
+3

Why not write a series of undo deltas and add a Phing task that runs them when another task fails?

+1

You really should take a look at Capistrano. TomTom: you are missing something: yes, you make a backup before changing the schema, but what about the new data that was inserted between the backup and the moment you realized things were not in order? I am not saying there is a good tool for this problem, but the problem exists in real life.

+1

The "right" way to do this is to back up before the schema change, then restore the backup in case of an error.

You do not say which db you are using, but I would wonder whether schema changes inside transactions are supported at all. The major SQL databases (Oracle, DB2, SQL Server) do not support them in all cases, for really good reasons: transactional schema changes are REALLY difficult and REALLY resource-intensive.
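The backup-then-restore pattern can be sketched in shell. This is a hedged sketch of the control flow only: in real use, `backup`/`restore` would invoke mysqldump/mysql (or pg_dump/psql); here they are simulated with file copies so the flow is clear.

```shell
# Sketch of "back up, migrate, restore on failure" (commands are stand-ins).
backup()  { cp state.txt state.bak; }            # stand-in for mysqldump
migrate() { echo "v2" > state.txt; return 1; }   # a delta that fails halfway
restore() { cp state.bak state.txt; }            # stand-in for restoring the dump

echo "v1" > state.txt   # current schema "version"
backup
if ! migrate; then
    echo "migration failed, restoring backup"
    restore
fi
cat state.txt           # prints v1: the failed migration was rolled back
```

The same structure is what the trycatch answer above expresses in Phing XML.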

-1
