Is feature branching still (or was it ever) considered bad practice?

Coming from the TFS world and having become comfortable enough with Git, I'm about to suggest to my team that we adopt the Gitflow workflow, as described in the well-known article by Vincent Driessen.
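The heart of that model is the feature-branch cycle; stripped down, it amounts to commands along these lines (branch names follow Driessen's conventions):

    git checkout -b myfeature develop
    # ...commit work on the feature...
    git checkout develop
    git merge --no-ff myfeature    # --no-ff keeps the feature visible as a merge commit
    git branch -d myfeature
    git push origin develop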

Almost all of the recent literature on branching strategies praises the effectiveness of the Gitflow workflow, which is an extended form of feature branching, but older articles by influential engineers, such as Martin Fowler's Feature Branch (2009), discredit feature branching in general in favor of continuous integration.

Some of his critics claimed that Fowler's opposition to branching stemmed partly from his use of SVN as his VCS, a tool that was inefficient at merging and therefore led Fowler toward a branch-averse, merge-paranoid stance.

Fowler responded in 2011 that DVCSs can simplify the merging process, but that they still do not resolve semantic conflicts. Now, in 2014, we have merge tools, such as SemanticMerge, that aim to solve this very problem.
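For readers who haven't met the term: a semantic conflict is one that a textual merge cannot even see. A minimal sketch of my own (file, branch, and function names invented, not taken from either article): one branch renames a function, another adds a caller of the old name, Git merges both cleanly, and the result is broken.

    # Requires git >= 2.28 for `init -b`; adjust for older versions.
    git init -b main demo && cd demo
    printf 'def calculate_total(items):\n    return sum(items)\n' > util.py
    git add util.py && git commit -m "add util"

    git checkout -b rename-function
    sed -i 's/calculate_total/compute_total/' util.py    # GNU sed; use `sed -i ''` on macOS
    git commit -am "rename calculate_total to compute_total"

    git checkout -b add-caller main
    printf 'from util import calculate_total\nprint(calculate_total([1, 2, 3]))\n' > report.py
    git add report.py && git commit -m "call calculate_total"

    git checkout main
    git merge rename-function    # fast-forwards cleanly
    git merge add-caller         # also merges cleanly: no textual overlap
    python report.py             # ImportError -- a conflict Git never reported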

My questions

  • Are feature branching and continuous integration mutually exclusive?

  • How relevant is Fowler's article to modern development, given our access to tools such as SourceTree, Git, Jenkins, and other code review software that greatly facilitate feature branching?

+7
Tags: git, branch, dvcs, continuous-integration
4 answers

In my experience, it depends on where your feature branches are created. If you follow the fork-and-merge model, where feature branches are created on your fork, I don't see any problem. From the main project's point of view there is still only one (main) branch; the only place feature branches exist is on your fork, and the only reason to use them is to isolate the changes you submit (in the form of a pull request) against the main branch.
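As a sketch of that model (repository URLs and branch names here are hypothetical):

    git clone git@github.com:me/project.git      # your personal fork
    cd project
    git remote add upstream https://github.com/original/project.git

    git checkout -b fix-validation               # feature branch exists on the fork only
    # ...commit work...
    git push origin fix-validation
    # then open a pull request from me:fix-validation against upstream's main branch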

+2

If you look at the Wikipedia article on continuous integration (as of today), you will see that it calls for merging into a single mainline daily. Based on that, I would say the answer to your first question is yes, but that does not preclude the use of a feature branching strategy.
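In practice, "merging into the mainline daily" is nothing more exotic than something like this (branch and script names assumed):

    git checkout master
    git pull origin master
    git merge my-days-work    # or rebase the day's work onto master first
    ./run_tests.sh            # integrate only if the build stays green
    git push origin master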

As for your second question, I don't think the answer is entirely straightforward, but in my experience creating branches is easy; it's merging them afterwards that breeds entropy. I still find Fowler's article valid.

+1
  • No, you can set up CI on both the mainline and each feature branch without any problem (see the sketch after this list).

  • This is still very relevant. Although automatic merge algorithms keep getting better, including some semantics-based merge tools, it is still not possible for a computer to determine meaning. Until we have true machine intelligence, this will remain a problem. The question is what percentage of cases automatic merging produces an incorrect result, and in what percentage of those cases the tool knows the result will be incorrect. Essentially, if you could automatically detect every case where automatic merging fails, you could hand those cases to humans. But that, too, is a hard problem to solve. The worst case is not when the tool cannot merge the code, but when it can merge it and merges it incorrectly, usually through a semantic error, a race condition, or some other problem that is hard to identify without human understanding.
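On the first point, a branch-agnostic build script is one way to get there; a minimal sketch (the script name and make targets are assumptions, not from this answer):

    #!/bin/sh
    # ci.sh -- the same checks run for whatever branch was pushed,
    # so the mainline and every feature branch get identical treatment.
    # A CI server (e.g. a Jenkins job triggered on each push) would call this.
    set -e
    branch=$(git rev-parse --abbrev-ref HEAD)
    echo "CI run for branch: ${branch}"
    make build test    # stand-in for the project's real build entry point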

Feature branches are mostly useful for isolating the changes of a small team working on a feature when you are unsure of its quality, when the code could adversely affect the larger teams working on the project, or when you are not sure the feature will make it into the next release.

You want to limit the lifetime of feature branches to a minimum. Merging code between two branches with complex change sets can be difficult and time-consuming; the complexity grows faster than O(n), where n is the sum of the changes on both branches. As a rule of thumb, stay within one month, unless you have a really good version control system, good interfaces and code architecture, or OCD developers, or some combination of the three.
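One common way to keep that n small, which the answer doesn't spell out, is to merge the mainline into the feature branch regularly rather than letting the two drift apart for a month (branch names assumed):

    git checkout feature/reporting
    git fetch origin
    git merge origin/develop    # fold mainline changes in while they are still small,
                                # so the final merge back is routine rather than heroic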

About 25% of project time should be devoted to reducing technical debt, which mostly means refactoring code. Refactoring creates problems for many branches, since merging a branch created before the refactoring with the post-refactoring code can be extremely difficult. For this reason, you should ensure that all feature branches are merged back before a refactoring begins.

+1

The original definition of continuous integration has very little to do with a build server; it relates much more specifically to the practice of continuously integrating multiple streams of work.

The term was originally coined, or at least popularized, by Kent Beck as part of Extreme Programming.

http://www.extremeprogramming.org/rules/integrateoften.html

The rationale behind it:

  • Increased visibility into what others are working on and how it may overlap with or affect your work.
  • Reduced risk of a "big bang" merge, which can cause unexpected breakage (often under deadline pressure).
  • Improved "production readiness", since the mainline should always be releasable, even if some functionality is incomplete.

Git makes merging easier, but staunch XP proponents will still avoid feature branches. Instead, the focus shifts to "branching" within the code itself, using branch by abstraction and feature toggles.
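A feature toggle can be as small as a runtime flag choosing between two code paths that both live on the mainline. A minimal sketch (the flag and function names are invented for illustration):

    #!/bin/sh
    # Both implementations are merged into the mainline; an environment
    # flag decides which one runs, so no feature branch is needed.
    old_checkout() { echo "old checkout flow"; }
    new_checkout() { echo "new checkout flow (incomplete, hidden by default)"; }

    if [ "${FEATURE_NEW_CHECKOUT:-false}" = "true" ]; then
        new_checkout
    else
        old_checkout
    fi

Run it with FEATURE_NEW_CHECKOUT=true to exercise the new path while everyone else keeps getting the old one.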

0
