With continuous integration, why are tests run after committing instead of before?

When I have a GitHub repository that only I push to, I often forget to run the tests, forget to add all the relevant files, or rely on objects that exist only on my local machine. This breaks the build, but the breakage is only detected by Travis-CI after the erroneous commit. I know that TeamCity has a pre-tested commit feature (which works through the IDE), but my question is about the typical use of continuous integration, not about any one implementation. My question is:

Why are changes tested on a clean build machine, such as the ones Travis-CI uses, after the commit rather than before it?

Such a process would mean the build is never broken, so any new environment could pull any commit from the repository and be sure it works. Given that, I do not understand why CI is not implemented with pre-commit testing.

+7
3 answers

The assumption is that if you write code, it compiles, and the tests pass locally, then the build cannot break. That only holds if you are the only developer working on the code. But say I change an interface that you are using: my code will compile and pass the tests until I pull your updated code that uses my interface, and your code will compile and pass the tests until you pull my change to the interface. And when we both check in our code, the build machine blows up...

So CI is a process that basically says: integrate your changes as soon as possible and test them on the CI server (after first compiling and testing them locally). If all developers follow these rules, builds will still break occasionally, but we find out sooner rather than later.
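The "compile and test locally first, then push" discipline described above can be sketched as a small guard around pushing. This is a minimal sketch: `run_tests` is a stub standing in for your project's real test command (e.g. `make test`), and the actual `git push` is left commented out.

```shell
#!/bin/sh
# Sketch: only push to the shared repository when the local test suite passes.

run_tests() {
  # Stub -- replace with your real test command, e.g. `make test` or `npm test`.
  true
}

safe_push() {
  if run_tests; then
    echo "tests passed -- pushing"
    # git push origin my-branch   # uncomment in a real repository
  else
    echo "tests failed -- not pushing" >&2
    return 1
  fi
}

safe_push
```

The same check can live in a Git `pre-commit` or `pre-push` hook so it runs automatically, which is roughly what TeamCity's pre-tested commit feature automates on the server side.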

+3

The CI server is separate from the version control system; it checks the code out of the repository itself. So the code has already been committed by the time it is tested on the CI server.

More extensive tests may also be run periodically, rather than on every check-in, against whatever the current version of the code is at test time. Think of multi-platform tests or load tests.

Usually, of course, you will unit test your code on your development machine before committing it.
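What the CI server essentially does on every commit can be sketched as: clone a fresh copy of the repository into a clean workspace and run the tests there, so nothing depends on any developer's machine. To keep the sketch self-contained it builds a throwaway local "origin" repository with a trivial `run-tests.sh`; a real job would clone the project's hosted URL and run the real suite.

```shell
#!/bin/sh
# Sketch of a post-commit CI job: fresh checkout, then test.
set -e

# Throwaway stand-in for the hosted repository.
origin=$(mktemp -d)
git -C "$origin" init -q
git -C "$origin" config user.email ci@example.com
git -C "$origin" config user.name CI
echo 'echo ok' > "$origin/run-tests.sh"
git -C "$origin" add run-tests.sh
git -C "$origin" commit -qm "add tests"

# The CI job itself: clone into a clean workspace and run the tests there.
workspace=$(mktemp -d)/checkout
git clone -q "$origin" "$workspace"
cd "$workspace"
sh run-tests.sh    # placeholder for the project's real test command
```

Because the workspace is created fresh each time, forgotten files or objects that only exist on a developer's machine are exactly what this step catches.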

+2

I will preface my answer by noting that we run on GitHub and Jenkins.

Why should a developer have to run all the tests locally before committing? Especially in the Git paradigm, that is not a requirement. What if, for example, it takes 15-30 minutes to run the full test suite? Do you really want your developers sitting around waiting for the tests to finish locally before they can commit and push their changes?

Our process usually works as follows:

  • Make changes in a local branch.
  • Run any new tests that you created.
  • Commit the changes to the local branch.
  • Push the local branch to GitHub and create a pull request.
  • A build process picks up the changes and runs the unit tests.
  • If tests fail, fix them in the local branch and push again.
  • Have the changed code reviewed in the pull request.
  • After approval and all checks pass, merge to master.
  • All unit tests run again.
  • Push the artifact to a repository.
  • Push the changes into an environment (e.g. DEV, QA) and run any integration/functional tests that depend on a complete environment.
    • If you have a cloud, you can push your changes to a new node and flip the VIP over to the new node(s) only after all the environment tests pass.
  • Repeat step 11 until you have passed through all the pre-production environments.
  • If you practice continuous deployment, your changes go all the way to PROD, provided all the checks, gates, etc. pass.
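The environment-promotion steps above can be sketched as a simple loop over the pre-production environments. `deploy_to` and `run_env_tests` are hypothetical placeholders, not real commands; in a real setup each would call your deployment tooling and integration test suite, and the loop would abort on the first failure (here `set -e` does that).

```shell
#!/bin/sh
# Illustrative sketch of promoting one artifact through each environment.
set -e

deploy_to()     { echo "deploying to $1"; }          # placeholder
run_env_tests() { echo "env tests passed in $1"; }   # placeholder

for env in DEV QA; do
  deploy_to "$env"
  run_env_tests "$env"
done
echo "all pre-production environments passed"
```

With continuous deployment, a final `deploy_to PROD` would follow the loop, gated on the same checks.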

My point is that developers should not spend time running tests locally when it hinders their progress, when you can offload that work to the continuous integration server and get notified about problems to fix afterwards. In addition, some tests simply cannot be run until you have committed and deployed the artifact into an environment. If the environment breaks because you do not have a cloud, and perhaps you only have one server, then fix the problem locally and quickly push the change to stabilize the environment.

You can still run tests locally when necessary, but that should not be the norm.

Regarding the issue of multiple developers, open source projects have dealt with this for a long time. They use forks on GitHub to let contributors propose new fixes and features, but this is not really so different from a developer on a team creating a local branch, pushing it to the remote, and getting buy-in through code review before merging. If someone pushes changes that break yours, you first try to fix the breakage yourself and then ask them for help. You should follow the principle of "merge early and often", and periodically merge updates from master into your branch.
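"Merge early and often" boils down to periodically merging master into your feature branch so conflicts stay small. A self-contained demonstration in a throwaway repository follows; branch and file names are illustrative, and the default branch name is read from Git since it may be `master` or `main` depending on version.

```shell
#!/bin/sh
# Demonstration: keep a feature branch current by merging master into it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
main=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git

echo base > app.txt
git add app.txt
git commit -qm "base"

# Start a feature branch with its own work.
git checkout -qb my-feature
echo feature > feature.txt
git add feature.txt
git commit -qm "feature work"

# Meanwhile, the main branch moves on...
git checkout -q "$main"
echo upstream > upstream.txt
git add upstream.txt
git commit -qm "upstream change"

# Back on the feature branch: merge early, while the divergence is small.
git checkout -q my-feature
git merge -q --no-edit "$main"
git ls-files   # feature branch now contains both sets of changes
```

Doing this regularly means the eventual pull request merges cleanly instead of accumulating one large, risky conflict at the end.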

+1
