Having spent five years in the database version control space (as Director of Product Management at DBmaestro) and having worked as a database administrator for two decades, I can tell you a simple fact: you cannot handle database objects the same way you handle your Java, C#, or other files and just save the changes as plain DDL scripts.
There are many reasons, and I will name a few:
- Files are stored locally on the developer's PC, and changes he or she makes do not affect other developers; likewise, the developer is not affected by changes made by colleagues. In a database this is (usually) not the case: developers share the same database environment, so any change made to the database immediately affects everyone else.
- Code changes are committed using check-in / submit / etc. (depending on the source control tool used), which copies the code from the developer's local directory into the source control repository. A developer who wants the latest code must request it from source control. In the database, however, the change already exists and affects other users even if it has not been checked into the repository.
- When checking in a file, the source control tool performs a conflict check: it verifies whether the same file was modified and checked in by another developer while you were changing your local copy. Again, there is no such check in the database. If you alter a procedure from your local PC and, at the same time, I alter the same procedure using the code from my local PC, we simply overwrite each other's changes.
- The build process retrieves the label / latest version of the code into an empty directory and then compiles it. The output is a set of binaries that we copy over the existing ones; we do not care what was there before. We cannot recreate the database this way, because we need to preserve the data! Database deployment is instead done by executing SQL scripts that were generated during the build process.
- When executing SQL scripts (with DDL, DCL, and DML for static content), you assume that the current structure of the environment matches the structure it had when the scripts were created. If it does not, your scripts may fail, for example because you try to add a new column that already exists (see the guarded-script sketch after this list).
- Handling SQL scripts as code and writing them manually leads to syntax errors, database dependency errors, and scripts that are not reusable, all of which complicate developing and testing them. In addition, such scripts may assume an environment different from the one they will eventually run against.
- Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors will occur in production!
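To make the last few points concrete, here is a minimal sketch of the difference between a naive hand-written script and a guarded one. The table and column names are made up for illustration, and the guarded version assumes SQL Server (T-SQL) syntax; other databases have equivalent dictionary views.

```sql
-- Naive script: assumes the column does not exist yet.
-- Fails on any environment where it was already added (e.g. by an emergency fix).
ALTER TABLE customers ADD phone_number VARCHAR(20);

-- Guarded version: check the data dictionary before applying the change,
-- so the script can run against environments that are in different states.
IF NOT EXISTS (
    SELECT 1
    FROM   INFORMATION_SCHEMA.COLUMNS
    WHERE  TABLE_NAME  = 'customers'
      AND  COLUMN_NAME = 'phone_number'
)
BEGIN
    ALTER TABLE customers ADD phone_number VARCHAR(20);
END;
```

Writing and maintaining such guards by hand for every change is exactly the overhead described above.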
There are many more, but I think you got the picture.
What I found is that the following works:
- Use an enforced version control system that provides check-out / check-in on database objects. This ensures the version control repository matches the actual code of the object, because it reads the object's metadata at check-in rather than as a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's changes.
- Use impact analysis that relies on baselines as part of the comparison, both to identify conflicts and to determine whether a difference (when comparing the structure of an object between the source control repository and the database) is a real change originating from development, or a difference that came from another path and should therefore be skipped, for example a different branch or an emergency fix (a sketch of such a comparison follows this list).
- Use a solution that knows how to perform impact analysis for many schemas at once, from the user interface or via an API, so that the build and deployment process can ultimately be automated.
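As a rough illustration of the kind of information such a baseline comparison works from, here is a minimal sketch that pulls the live definition and last-change time of a stored procedure so an automated step can diff it against the copy kept in the repository. Oracle syntax is assumed and the object name CALC_DISCOUNT is made up; this is not any particular vendor's API.

```sql
-- Live status and time of the last DDL change, from the Oracle data dictionary.
SELECT object_name,
       status,
       last_ddl_time
FROM   user_objects
WHERE  object_type = 'PROCEDURE'
  AND  object_name = 'CALC_DISCOUNT';   -- hypothetical object name

-- Full current source of the object, to diff against the baseline in version control.
SELECT DBMS_METADATA.GET_DDL('PROCEDURE', 'CALC_DISCOUNT') FROM dual;
```

If the live definition differs from the repository baseline but matches a known branch or emergency fix, the difference can be skipped; otherwise it should be treated as a real, possibly out-of-process change.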
You can read more in an article I published here.
Uri