Workaround: Aggregate Test Results

As far as I can tell, the "Aggregate downstream test results" option does not work as expected (and it is very hard to find useful documentation for it). I would like to achieve something very similar:

A build job runs T1 and T2 in parallel (where T1 runs FindBugs and T2 runs PMD).

Scenario 1: Once T1 and T2 have finished (which I can achieve with the "Join" plugin), I want to collect their artifacts (T1/findbugs.xml and T2/pmd.xml), analyze them, and produce nice trend statistics.

Scenario 2 (which I prefer): Like scenario 1, but the analysis is performed as part of T1 and T2 (in parallel!). Once T1 and T2 have completed, only the analysis results are combined into the statistics.

My question for scenario 1: I do not know how to reference the downstream builds of T1 and T2. I could use the last successful build, but that seems fragile when many builds run concurrently.

For scenario 2, I have no idea how to import the data needed by the FindBugs/PMD/Checkstyle/SLOCCount/... plugins so that the corresponding graphs (also) appear outside of T1/T2.

Thanks, Karsten

3 answers

Two additions to malenkiy_scot's answer:

  • In fact, you do not need a script for step 3: the "Copy artifacts from another project" build step allows you to select the source build by parameter.

    For example, it can copy artifacts from the right run of job D by specifying D/PARENT_ID=EXPECTED_VALUE as the "project name".

  • Instead of manually concatenating $JOB_NAME and $BUILD_ID, you can use the predefined $BUILD_TAG (which amounts to pretty much the same thing). See https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables for the complete list of standard environment variables.
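A minimal sketch of the equivalence claimed above. Jenkins defines $BUILD_TAG as "jenkins-${JOB_NAME}-${BUILD_NUMBER}"; the `env` dict here is a stand-in for `os.environ` on a build node, with illustrative values:

```python
# Stand-in for os.environ as Jenkins would populate it on a build node.
# The values are illustrative; Jenkins sets BUILD_TAG for you.
env = {'JOB_NAME': 'P', 'BUILD_NUMBER': '42', 'BUILD_TAG': 'jenkins-P-42'}

# Manual concatenation of job name and build number...
manual = 'jenkins-%s-%s' % (env['JOB_NAME'], env['BUILD_NUMBER'])

# ...yields the same value Jenkins already provides as BUILD_TAG.
assert manual == env['BUILD_TAG']
```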


Here is an outline for a simpler scenario, but I think you can easily generalize it to your case with a few additional jobs. The trick is to use "marker" parameters in the downstream jobs.

Let P be the parent job and D the downstream job.

  • An instance (build) of P triggers D via the Parameterized Trigger plugin as a build step (not as a post-build step) and waits for D to complete. Along with any other parameters, P passes D a parameter, call it PARENT_ID, based on P's BUILD_ID.
  • D runs the tests and archives the results as artifacts (along with JUnit reports, if applicable).
  • P then runs an external Python (or internal Groovy) script that finds the matching build of D via PARENT_ID (you iterate over the builds of D and check the value of their PARENT_ID parameter). The script then copies the artifacts from D to P, and P publishes them.

If you are using Python (as I do), use the JenkinsAPI Python wrapper. If you use Groovy, use the Groovy plugin and run the script as a system Groovy script; you can then access Jenkins through its Java API.
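The lookup in the last step can be sketched as a small Python function. This is a hedged sketch: the `builds` list stands in for what you would actually fetch over the JenkinsAPI wrapper (roughly, iterating a job's build numbers and reading each build's parameters); the job names and the PARENT_ID parameter follow the scheme above and are otherwise illustrative:

```python
def find_build_by_parent_id(builds, parent_id):
    """Return the number of the build of D whose PARENT_ID parameter matches.

    builds: iterable of (build_number, params_dict) pairs, newest first,
    as you would assemble them from the Jenkins API. Returns None if no
    build matches.
    """
    for number, params in builds:
        if params.get('PARENT_ID') == parent_id:
            return number
    return None

# Example: three builds of D, each triggered by a different build of P.
builds = [
    (12, {'PARENT_ID': 'P-42'}),
    (11, {'PARENT_ID': 'P-41'}),
    (10, {'PARENT_ID': 'P-40'}),
]
print(find_build_by_parent_id(builds, 'P-41'))  # 11
```

Once the matching build number is known, the same script would download that build's archived artifacts into P's workspace so P can publish them.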


The Jenkins Build Flow plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin) should help with requirements like these.

