Is Cucumber-jvm thread safe?

I want to run the same Cucumber tests in multiple threads. More specifically, I have a set of feature files, and executing these features in a single thread works fine. I use the JSON formatter to record the execution time of each step. Now I want to perform a load test: I would like to know the running time of each feature / step in a multi-threaded environment. So I create several threads, and each thread runs one set of feature files. Each thread has its own JSON report. Is this possible in theory?

For reasons to do with how the project is set up, I cannot use the JUnit runner, so I have to resort to the CLI entry point:

    long threadId = Thread.currentThread().getId();
    String jsonFilename = String.format("json:run/cucumber%d.json", threadId);
    String argv[] = new String[]{
            "--glue", "com.some.package",
            "--format", jsonFilename,
            "d:\\features"};

    // Do not call Main.run() directly. It has a System.exit() call at the end.
    // Main.run(argv, Thread.currentThread().getContextClassLoader());

    // Copied the same code from Main.run().
    ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
    RuntimeOptions runtimeOptions = new RuntimeOptions(new Env("cucumber-jvm"), argv);
    ResourceLoader resourceLoader = new MultiLoader(classLoader);
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    Runtime runtime = new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
    runtime.writeStepdefsJson();
    runtime.run();
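Roughly, I launch the runs like this (a simplified sketch; the thread count is arbitrary and runCucumber() is just a placeholder wrapping the code above):

    import java.util.ArrayList;
    import java.util.List;

    // Simplified sketch: each thread executes the CLI-style run shown above,
    // writing its JSON report to a file named after its own thread id.
    public class ParallelCucumberLauncher {
        public static void main(String[] args) throws InterruptedException {
            List<Thread> threads = new ArrayList<Thread>();
            for (int i = 0; i < 4; i++) {                    // 4 concurrent runs, for example
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        runCucumber();                       // the Runtime code from above
                    }
                });
                threads.add(t);
                t.start();
            }
            for (Thread t : threads) {
                t.join();                                    // wait for every run to finish
            }
        }

        private static void runCucumber() {
            // ... the RuntimeOptions / Runtime construction shown above goes here
        }
    }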

I tried to create a separate thread for each Cucumber run. The problem is that only one of the threads produces a valid JSON report; all the other threads just create empty JSON files. Is this by design in Cucumber, or is there something I missed?

+7
java multithreading gradle cucumber-jvm gpars
4 answers

We run multithreaded Cucumber tests from Gradle and Groovy using the excellent GPars library. We have 650 UI tests and counting.

We did not encounter any obvious problems running Cucumber-JVM in multiple threads, but multithreading also did not improve performance as much as we had hoped.

We run each feature file in a separate thread. There are a few details to take care of, such as splicing together the Cucumber reports from the different threads and making sure our step code is thread safe. Sometimes we need to share values between steps, so we use a ConcurrentHashMap keyed by thread id to store such data:

    import java.util.concurrent.ConcurrentHashMap

    class ThreadedStorage {
        static private ConcurrentHashMap multiThreadedStorage = [:]

        static private String threadSafeKey(unThreadSafeKey) {
            def threadId = Thread.currentThread().toString()
            "$threadId:$unThreadSafeKey"
        }

        static private void threadSafeStore(key, value) {
            multiThreadedStorage[threadSafeKey(key)] = value
        }

        def static private threadSafeRetrieve(key) {
            multiThreadedStorage[threadSafeKey(key)]
        }
    }
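If your step definitions are plain Java rather than Groovy, the same pattern looks roughly like this (a sketch of the idea only; the class and method names here are made up, not part of Cucumber):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of the same idea for Java step definitions: one shared map, with every key
    // prefixed by the current thread's identity so that scenarios running on different
    // threads never see each other's values.
    public class ThreadScopedStorage {

        private static final Map<String, Object> STORAGE = new ConcurrentHashMap<>();

        private static String threadSafeKey(String key) {
            return Thread.currentThread().getName() + ":" + key;
        }

        public static void store(String key, Object value) {
            STORAGE.put(threadSafeKey(key), value);
        }

        public static Object retrieve(String key) {
            return STORAGE.get(threadSafeKey(key));
        }
    }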

And here is the gist of the Gradle task code that runs the tests in multiple threads using GPars:

    def group = new DefaultPGroup(maxSimultaneousThreads())
    def workUnits = features.collect { File featureFile ->
        group.task {
            try {
                javaexec {
                    main = "cucumber.api.cli.Main"
                    ...
                    args = [
                            ...
                            '--plugin', "json:$unitReportDir/${featureFile.name}.json",
                            ...
                            '--glue', 'src/test/groovy/steps',
                            "path/to/$featureFile"
                    ]
                }
            } catch (ExecException e) {
                ++noOfErrors
                stackTraces << [featureFile, e.getStackTrace()]
            }
        }
    }

    // ensure all tests have run before reporting and finishing the gradle task
    workUnits*.join()

We found that we need to feed in the feature files in descending order of execution time (longest-running first) for best results.

The result was about a 30% improvement on an i5 processor, with performance degrading beyond 4 simultaneous threads, which was a bit disappointing.

I think the threads were too heavyweight for multithreading on our hardware; beyond a certain number of threads there were too many CPU cache misses.

Running the work concurrently across multiple machine instances, fed by a work queue such as Amazon SQS, now seems like a better way forward, especially since it would not suffer from thread-safety issues (at least not on the test framework side).

We cannot test this multi-threaded approach on i7 hardware due to security restrictions at our workplace, but I would be very interested to hear how an i7, with its larger processor cache and additional physical cores, compares.

+2

Right now, what you are observing is a real limitation: I have not found any built-in ability to parallelize scenario runs.

Here is a poor man's approach to concurrency that works well: just run several commands, each of which selects a different subset of your tests, by feature or by tag. I would launch a new JVM for each run (as the JUnit driver does) instead of trying to embed it, since Cucumber was not designed for that. You have to balance the subsets yourself and then figure out how to combine the reports. (But at least the problem is then one of combining reports, not of corrupted reports.)
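A rough sketch of that approach, assuming the Cucumber-JVM 1.x CLI main class (cucumber.api.cli.Main) used elsewhere on this page; the tag names and report paths are made up:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Poor man's concurrency: fork one JVM per tag-defined subset of the tests,
    // each writing its own JSON report, then wait for all of them.
    public class ForkedCucumberRuns {
        public static void main(String[] args) throws Exception {
            List<String> tags = Arrays.asList("@subset1", "@subset2", "@subset3"); // hypothetical tags
            List<Process> processes = new ArrayList<Process>();

            for (String tag : tags) {
                ProcessBuilder pb = new ProcessBuilder(
                        "java", "-cp", System.getProperty("java.class.path"),
                        "cucumber.api.cli.Main",
                        "--glue", "com.some.package",
                        "--tags", tag,
                        "--plugin", "json:run/cucumber-" + tag.substring(1) + ".json",
                        "d:\\features");
                pb.inheritIO();              // stream each forked run's output to this console
                processes.add(pb.start());
            }

            for (Process p : processes) {
                p.waitFor();                 // wait for every forked JVM to finish
            }
        }
    }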

+1

Apparently you can run your Cucumber-JVM tests in parallel using the Maven POM configuration described here: https://opencredo.com/running-cucumber-jvm-tests-in-parallel/

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.14</version>
        <executions>
            <execution>
                <id>acceptance-test</id>
                <phase>integration-test</phase>
                <goals>
                    <goal>test</goal>
                </goals>
                <configuration>
                    <forkCount>${surefire.fork.count}</forkCount>
                    <reuseForks>false</reuseForks>
                    <argLine>-Duser.language=en</argLine>
                    <argLine>-Xmx1024m</argLine>
                    <argLine>-XX:MaxPermSize=256m</argLine>
                    <argLine>-Dfile.encoding=UTF-8</argLine>
                    <useFile>false</useFile>
                    <includes>
                        <include>**/*AT.class</include>
                    </includes>
                    <testFailureIgnore>true</testFailureIgnore>
                </configuration>
            </execution>
        </executions>
    </plugin>

In the above snippet you can see that the maven-surefire-plugin is used to run the acceptance tests: any class whose name ends with *AT will be run as a JUnit test class. Thanks to JUnit, running the tests in parallel is now a simple matter of setting the forkCount configuration option. In the example project this value is 5, which means we can run up to 5 threads (i.e. 5 runner classes) simultaneously.
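For reference, each such runner class is just an ordinary JUnit-driven Cucumber runner; a minimal sketch (the class name, feature path and glue package are made up, and the annotations shown are the cucumber.api ones from Cucumber-JVM 1.x):

    import cucumber.api.CucumberOptions;
    import cucumber.api.junit.Cucumber;
    import org.junit.runner.RunWith;

    // One runner per subset of features; Surefire picks it up because the name ends in "AT".
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features/checkout",   // hypothetical subset of features
            glue = "com.some.package",
            plugin = {"json:target/cucumber/checkout.json"}
    )
    public class CheckoutAT {
    }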

0

Well, if you can find a way to make Cucumber print the location of every scenario (i.e. feature_file_path:line_number_in_feature_file) that matches a given tag, then you can use GPars and Gradle to run the scenarios in parallel.

Step 1: In a first Gradle task, use the approach above to generate a text file (say, scenarios.txt) containing the locations of all the scenarios we want to execute.

Step 2: Extract the contents of the scenarios.txt file generated in step 1 into a Groovy list, say scenariosList.

Step 3: Create another task (a javaExec one); here it works well to use GPars withPool in combination with scenariosList.eachParallel, together with Cucumber's CLI main class and the other Cucumber options, to run these scenarios in parallel.

PS: here we pass the scenario location as the value of the "features" argument, so that Cucumber runs only that scenario. There is also no need to specify a tag name, since we already have the list of scenarios we need to execute.

Note: you should use a powerful machine, such as a Linux server, because a new JVM instance is created for each scenario, and preferably a cloud service such as Sauce Labs to run the scenarios, so that you do not need to worry about that infrastructure.

Step 4: This is the last step. Each scenario run in step 3 generates its own JSON output file. You then have to merge the outputs, grouped by feature name, to end up with a single JSON file per feature file.
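One way to do the merging in step 4 (a sketch only; it assumes Gson on the classpath and the standard Cucumber JSON layout of an array of feature objects with "uri" and "elements" fields; the directory name is a placeholder):

    import com.google.gson.JsonArray;
    import com.google.gson.JsonElement;
    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;
    import java.io.File;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Merges the per-scenario Cucumber JSON reports into one feature object per feature,
    // keyed by the feature's "uri" field, and writes a single combined report.
    public class JsonReportMerger {
        public static void main(String[] args) throws Exception {
            File reportDir = new File("build/parallel-reports");   // placeholder directory
            Map<String, JsonObject> featuresByUri = new LinkedHashMap<>();

            for (File report : reportDir.listFiles((dir, name) -> name.endsWith(".json"))) {
                JsonArray features = new JsonParser().parse(new FileReader(report)).getAsJsonArray();
                for (JsonElement el : features) {
                    JsonObject feature = el.getAsJsonObject();
                    String uri = feature.get("uri").getAsString();
                    JsonObject existing = featuresByUri.get(uri);
                    if (existing == null) {
                        featuresByUri.put(uri, feature);
                    } else {
                        // append this run's scenarios to the feature we have already seen
                        existing.getAsJsonArray("elements").addAll(feature.getAsJsonArray("elements"));
                    }
                }
            }

            JsonArray merged = new JsonArray();
            featuresByUri.values().forEach(merged::add);
            try (FileWriter out = new FileWriter(new File(reportDir, "combined.json"))) {
                out.write(merged.toString());
            }
        }
    }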

This solution may sound a little complicated, but with the right effort it can give significant results.

-1
