How to create a complex value stream with multiple pipelines using Jenkins Workflow

How do you implement a complex value stream with multiple pipelines in Jenkins Workflow? Just like you can with GoCD: How do I do CD with Go? Part 2: Pipelines and value streams.

For a distributed system, I would like each development team to start working with their own delivery pipeline. One change should only trigger the pipeline of the team that made the change. It should then trigger a fan-in pipeline, which should take the last successful artifacts from each of the team pipelines and go from there. This means that artifacts from other teams are not rebuilt or retested, since they were not changed. And after the fan-in, we can run a set of automated tests to verify the correct behavior of the distributed system with the change.

In the documentation, I can only find that you can pull from multiple VCSs, but I assume that everything will then be built and tested on each change, which is what I want to avoid.

If each delivery pipeline lives in its own Jenkins job, how can I visualize a complete pipeline, and what is the best way to pull the latest successful artifacts or versions from the other pipelines?

+6
3 answers

There is no direct equivalent of value streams in Jenkins, and Workflow jobs do not behave differently in this regard: you can have upstream jobs and downstream jobs correlated with triggers (in this case the build step, or the core ReverseBuildTrigger), and use (for example) the Copy Artifact plugin to transfer artifacts to downstream builds. Similarly, you could use an external repository manager as the “source of truth” and define job triggers based on snapshots deployed to the repository.
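
For illustration, here is a minimal sketch of such a fan-in job as a Workflow script. The job names and the test script are hypothetical, and it assumes a Copy Artifact plugin version that supports the Workflow step meta-step:

    node {
        // Pull the last successful artifacts of each team pipeline
        // (Copy Artifact defaults to the last successful build).
        step([$class: 'CopyArtifact', projectName: 'team-a-pipeline',
              filter: '**/*.jar', target: 'artifacts/team-a'])
        step([$class: 'CopyArtifact', projectName: 'team-b-pipeline',
              filter: '**/*.jar', target: 'artifacts/team-b'])
        // Fan in: verify the distributed system against the combined artifacts.
        sh './run-system-tests.sh artifacts'  // hypothetical test script
    }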

However, part of the goal of Workflow is to avoid the need for complex job chains in most situations¹, since it is usually easier to reason about, debug, and customize a single script with standard control flow operators and local variables than a bunch of interdependent jobs. If your main concern with a single flow is to avoid rebuilding unmodified parts, one solution would be to use something like JENKINS-30412 to check the changelog of individual repository checkouts and skip the corresponding build steps if it is empty. I suppose there would be more features needed for such a system to work in the general case, for example when workspaces are clobbered or discarded by other builds.
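
A rough sketch of that idea, assuming a hypothetical repo URL and that you persist the revision each checkout was last built at yourself (for example as an archived file); JENKINS-30412 itself was still open at the time of writing:

    node {
        dir('team-a') {
            git url: 'https://example.com/team-a.git'  // hypothetical URL
            sh 'git rev-parse HEAD > .current-rev'
            def current = readFile('.current-rev').trim()
            def lastBuilt = ''  // load from wherever you stored it
            if (current == lastBuilt) {
                echo 'team-a unchanged; skipping its build steps'
            } else {
                sh 'mvn -B install'  // rebuild only the changed part
            }
        }
    }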

¹ One case where you definitely need separate jobs is when, for security reasons, teams working on different projects must not be able to see each other's sources or build logs.

+2

Assuming that each of your development teams works on a different module of your project, and that "one change should only trigger the pipeline of the team that made the change", I would use Git submodules:

Submodules allow you to keep a Git repository as a subdirectory of another Git repository.

with one repo per team, each becoming a submodule of the main repo. This will be transparent to the teams, since they just keep working with their own repositories.
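
In a Workflow script, checking out the main repo together with its submodules could look roughly like this (the URL is hypothetical; the plain git step only checks out the superproject, so the submodules are pulled in with a shell step):

    node {
        git url: 'https://example.com/main-repo.git'  // hypothetical main repo
        // Fetch the team repositories registered as submodules.
        sh 'git submodule update --init --recursive'
    }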

The main module is also an aggregator project for your module projects in terms of the build tool. So you have the options (sketched below):

  • to build each repo/pipeline individually, or
  • to build the entire (main) project at once.
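
With a Maven aggregator, for example, the two options could look like this in a Workflow script (the module name is hypothetical):

    node {
        // Option 1: build one team's module individually,
        // plus whatever it depends on (-am = also make).
        sh 'mvn -B -pl team-a-module -am install'
        // Option 2: build the entire (main) project at once.
        sh 'mvn -B install'
    }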

A build pipeline consisting of one or more build jobs is associated with each team/repo/module.

The main pipeline is then just the collection of downstream jobs that form the starting points of the team/repo/module pipelines.
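
If the main pipeline is itself a Workflow job, it could simply trigger the team jobs, e.g. (job names and test script are hypothetical; the build step waits for each triggered job to finish):

    parallel(
        'team-a': { build 'team-a-pipeline' },
        'team-b': { build 'team-b-pipeline' }
    )
    node {
        sh './run-system-tests.sh'  // fan-in: test the combined system
    }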

Build triggers can be anything: manual, scheduled, or on source changes.

A decision must also be made:

  • whether you release your modules individually, so that other modules depend only on release versions.
    • Advantages:
      • Others depend on released, usually more stable versions.
      • Modules can decide which version of a dependency they want to use.
    • Disadvantages:
      • Releases have to be prepared for each module.
      • It may take longer until the latest changes are available to others.
      • Modules must decide which version of a dependency they want to use, and they have to bump it every time they need functionality that was added in a newer version.
  • or whether you use one version for the entire project (which is then inherited by the modules): ...-SNAPSHOT during the development cycle, the release version when the project is released.

    In this case, if there are modules that others depend on, e.g. a core module, a successful build of it should also trigger the builds of the dependent modules, so that incompatibilities are detected as soon as possible (see the sketch after this list).

    • Advantages:
      • The latest changes are immediately available to others.
      • A release only has to be prepared once, for the whole project.
    • Disadvantages:
      • The latest changes made available to others may contain not-so-stable (snapshot) code.
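
A sketch of such a trigger at the end of the core module's Workflow script (job names are hypothetical; wait: false fires the downstream builds off without blocking):

    node {
        sh 'mvn -B -pl core-module -am install'  // build the core module
    }
    // Only reached if the core build succeeded: kick off the dependent
    // modules so incompatibilities show up as soon as possible.
    build job: 'team-a-pipeline', wait: false
    build job: 'team-b-pipeline', wait: false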

Re "How can I visualize a complete pipeline"

I am not aware of any plugin that can do this with Workflows at the moment.

There is the Build Graph View Plugin, which was originally created for build flows, but it is more than two years old:

Downstream builds are identified using the DownStreamRunDeclarer extension point.

  • The default one uses the Jenkins dependencyGraph and UpstreamCause and as such can detect common build chains.
  • The build-flow plugin contributes one to render the flow as a graph.
  • Some other plugins may later contribute dedicated solutions.

(You know, "may" and "later" often turn out to be "won't" and "never". ;)

There is the Build Pipeline Plugin, but it also does not seem to support Workflows:

This plugin provides a Build Pipeline View of upstream and downstream connected jobs [...]


Re "way to pull the last successful artifacts"

Apparently this is not that straightforward with Gradle:

By default, Gradle does not define any repositories.

I am using Maven, and there are local and remote repositories, where the latter can also be:

[...] internal repositories set up on a file or HTTP server within your company, used to share private artifacts between development teams and for releases.

Have you considered using a binary repository manager like Artifactory or Nexus ?
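
With such a repository manager in place, a pipeline could resolve another team's latest published artifact instead of rebuilding it, e.g. with Maven (the coordinates are hypothetical):

    node {
        // Fetch the latest artifact another team deployed to the shared
        // repository manager, rather than rebuilding it here.
        sh 'mvn -B dependency:get -Dartifact=com.example:team-a-module:LATEST'
    }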

+1

From what I have seen, people are moving toward smaller, more independently deliverable pieces of code rather than monolithic deployments. But clearly there will still be dependencies between the various components. For example, at a minimum, if you had one script that provisioned your infrastructure and another that built and deployed your application, you would want to make sure the infrastructure script ran before the application was deployed. On the other hand, your infrastructure does not depend on your application code being deployed: it can be updated at its own pace, as long as it passes some amount of testing.

As mentioned in another answer, you really have two options to satisfy this dependency:

  • Have one pipeline (Workflow script) that checks out the code from both repositories and pushes them through the pipeline together. Any change to one requires a full rebuild of everything.
  • Have two pipelines, which lets each one go at its own pace regardless of what the other is doing. That is not a problem for the infrastructure code, but it very much is for the application code: if you pushed the application code out to release without first updating the infrastructure, the results could be unpleasant.

What I have started doing with Jenkins Workflow is setting up a dependency between my flows. Basically, I declare that one flow depends on a particular version (in this case, just the BUILD_NUM) of another, and so, before I deploy to production, I verify that the last successful build of the other pipeline is at least that build. I can do this with the Jenkins API as part of my Workflow script, waiting until that build or a later one has succeeded, for example:

    import hudson.EnvVars
    import hudson.model.*

    // Build number of the dependency that must have completed successfully.
    int independentBuildNum = 16

    waitUntil {
        verifyDependentPipelineCompletion("FLDR_CM/WorkflowDepedencyTester2", independentBuildNum)
    }

    // Returns true once the last successful build of the given job is at
    // least the requested build number.
    boolean verifyDependentPipelineCompletion(String jobName, int buildNum) {
        def hi = jenkins.model.Jenkins.instance
        Item dep2 = hi.getItemByFullName(jobName)
        hi = null
        def jobs = dep2.getAllJobs().toArray()
        def onlyJob = jobs[0] // always 1 job...I think?
        def targetedBuild = onlyJob.getLastSuccessfulBuild()
        EnvVars me = targetedBuild.getCharacteristicEnvVars()
        def es = me.entrySet()
        int targetBuildNum = 0
        def vars = es.iterator()
        while (vars.hasNext()) {
            def envVar = vars.next()
            if (envVar.getKey().equals("BUILD_ID")) {
                targetBuildNum = Integer.parseInt(envVar.getValue())
            }
        }
        return buildNum <= targetBuildNum
    }

Disclaimer: I am just getting started with this process, so I do not have much experience in this space yet, but I will update this thread as I have more relevant information. Any feedback is appreciated.

+1
