Selenium Grid in multiple browsers: should each test case have a separate class for each browser?

I am trying to build my first data-driven test environment, which runs tests through Selenium Grid / WebDriver in multiple browsers. Right now, each test case lives in its own class, and I parameterize the browser so that each test case runs once per browser.

Is this common on large test platforms? Or is each test case copied and configured for each browser in its own class? So, if I test Chrome, Firefox, and IE, should there be one class per browser, for example "TestCase1Chrome", "TestCase1FireFox", and "TestCase1IE"? Or just "TestCase1", parameterized to run three times, once per browser? Just wondering how others do it.

Parameterizing the tests in one class per test case simplifies maintaining the non-browser-specific code, while duplicating classes, one per browser, makes browser-specific code easier to maintain. By browser-specific code I mean things like clicking an element: with ChromeDriver you cannot click the middle of certain elements, whereas with FirefoxDriver you can. So you may need two different blocks of code to click an element (if its clickable point is not at the middle).

For those of you who are QA engineers working with Selenium: what would be best here?

selenium selenium-webdriver selenium-grid webdriver

4 answers

I am currently working on a project that runs 75k-90k tests daily. We pass the browser as a parameter to the tests. Reasons:

  • As you mentioned in your question, this helps with maintenance.
  • We do not see much browser-specific code. If you have a lot of browser-specific code, I would say there is a problem with WebDriver itself, since one of the advantages of Selenium / WebDriver is that you write the code once and run it against any supported browser.

The difference I see between my code structure and the one you mention in the question is that I do not have a test class per test case. Tests are grouped by the feature they exercise, and each feature gets a class. That class holds all of its tests as methods. I use TestNG so that these methods can be run in parallel. This may not fit your AUT.
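
For context, a TestNG suite file that fans the same feature class out across browsers in parallel might look like this (class and parameter names are hypothetical, just to illustrate the shape):

```xml
<!-- testng.xml: run the same test class once per browser, in parallel -->
<suite name="cross-browser" parallel="tests" thread-count="2">
  <test name="login-on-firefox">
    <parameter name="browser" value="firefox"/>
    <classes>
      <class name="tests.LoginFeatureTest"/>
    </classes>
  </test>
  <test name="login-on-chrome">
    <parameter name="browser" value="chrome"/>
    <classes>
      <class name="tests.LoginFeatureTest"/>
    </classes>
  </test>
</suite>
```

The test class would then read the value through a setup method annotated with TestNG's @Parameters("browser") and create the matching driver there.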


If you keep the code structure you mention in the question, maintaining it will sooner or later become a nightmare. Try to stick to the rule: the same test code (written once) for all browsers (environments).

Sticking to this rule forces you to solve two problems:

1) how to run the tests against all the selected browsers

2) how to apply browser-specific workarounds without polluting the test code

which, in fact, seems to be your question.

This is how I solved the first problem. First, I defined all the environments I am going to test against. I call an "environment" the set of conditions under which I want to run my tests: browser name, version number, OS, and so on. So, apart from the test code, I created an enumeration like this:

 public enum Environments {
     FF_18_WIN7("firefox", "18", Platform.WINDOWS),
     CHR_24_WIN7("chrome", "24", Platform.WINDOWS),
     IE_9_WIN7("internet explorer", "9", Platform.WINDOWS);

     private final DesiredCapabilities capabilities;
     private final String browserName;
     private final String version;
     private final Platform platform;

     Environments(final String browserName, final String version, final Platform platform) {
         this.browserName = browserName;
         this.version = version;
         this.platform = platform;
         capabilities = new DesiredCapabilities();
     }

     public DesiredCapabilities capabilities() {
         capabilities.setBrowserName(browserName);
         capabilities.setVersion(version);
         capabilities.setPlatform(platform);
         return this.capabilities;
     }

     public String browserName() {
         return browserName;
     }
 }

It is easy to modify and add environments as needed. As you can see, I use this to create and return the DesiredCapabilities that will later be used to create the corresponding WebDriver.

To make the tests run against all the defined environments, I used JUnit's org.junit.experimental.theories (JUnit 4.10 in my case):

 @RunWith(MyRunnerForSeleniumTests.class)
 public class MyWebComponentTestClassIT {

     @Rule
     public MySeleniumRule selenium = new MySeleniumRule();

     @DataPoints
     public static Environments[] environments = Environments.values();

     @Theory
     public void sample_test(final Environments environment) {
         Page initialPage = LoginPage.login(selenium.driverFor(environment),
                 selenium.getUserName(), selenium.getUserPassword());
         // your test code here
     }
 }

Tests are annotated with @Theory (not with @Test as in regular JUnit tests) and take a parameter. Each test will then be executed for all defined values of this parameter, which must be an array annotated with @DataPoints. In addition, you must use a runner that extends org.junit.experimental.theories.Theories. I use org.junit.rules to prepare my tests, putting all the necessary plumbing there. As you can see, I also obtain a driver with the required capabilities through the rule, although you could use the following code directly in your test:

 RemoteWebDriver driver = new RemoteWebDriver(new URL(some_url_string), environment.capabilities()); 

The point is that, by having it in the rule, you write the code once and use it in all your tests. As for the Page class, that is where I put all the code that uses the driver's functionality (finding an element, navigating, etc.). That way, again, the test code stays neat and readable, and, again, you write it once and use it in all of your tests. So that is the solution to the first problem. (I know you can do something similar with TestNG, but I have not tried it.)

To solve the second problem, I created a dedicated package in which I keep all the browser workarounds. It consists of an abstract class, e.g. BrowserSpecific, which contains the common code for behaviour that differs (or is buggy) in some browser. In the same package I have one class per browser used in the tests, and each of them extends BrowserSpecific.

Here is how it works for the ChromeDriver issue you mention. I create a clickOnButton method in BrowserSpecific with the common code for the affected behaviour:

 public abstract class BrowserSpecific {

     protected final RemoteWebDriver driver;

     protected BrowserSpecific(final RemoteWebDriver driver) {
         this.driver = driver;
     }

     public static BrowserSpecific aBrowserSpecificFor(final RemoteWebDriver driver) {
         BrowserSpecific browserSpecific = null;
         if (Environments.FF_18_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
             browserSpecific = new FireFoxSpecific(driver);
         }
         if (Environments.CHR_24_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
             browserSpecific = new ChromeSpecific(driver);
         }
         if (Environments.IE_9_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
             browserSpecific = new InternetExplorerSpecific(driver);
         }
         return browserSpecific;
     }

     public void clickOnButton(final WebElement button) {
         button.click();
     }
 }

and then I override this method in the browser-specific class, e.g. ChromeSpecific, where I put the workaround:

 public class ChromeSpecific extends BrowserSpecific {

     ChromeSpecific(final RemoteWebDriver driver) {
         super(driver);
     }

     @Override
     public void clickOnButton(final WebElement button) {
         // This is the Chrome workaround
         String script = MessageFormat.format("window.scrollTo(0, {0});", button.getLocation().y);
         driver.executeScript(script);
         // Followed by the behaviour common to all browsers
         super.clickOnButton(button);
     }
 }

When I have to account for browser-specific behaviour, I write:

  aBrowserSpecificFor(driver).clickOnButton(logoutButton); 

instead of:

  button.click(); 

That way I can easily see in my common code where a workaround has been applied, and the workarounds stay isolated from the common code. I find this easy to maintain, since the underlying bugs eventually get fixed and the workarounds can then be changed or removed.
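
The dispatch idea behind aBrowserSpecificFor can be modelled without any Selenium types. Here is a minimal, self-contained sketch (all names hypothetical): a factory picks the browser-specific subclass, and the override applies its workaround before delegating back to the common behaviour:

```java
// Minimal stdlib-only model of the BrowserSpecific pattern:
// a factory chooses a browser-specific subclass, whose override
// runs the workaround and then delegates to the common behaviour.
public class BrowserSpecificDemo {

    static class Browser {
        String perform(String action) {
            return action; // common behaviour, shared by all browsers
        }
    }

    static class ChromeBrowser extends Browser {
        @Override
        String perform(String action) {
            // Chrome workaround first, then the common behaviour
            return "scrollIntoView -> " + super.perform(action);
        }
    }

    // Stand-in for aBrowserSpecificFor(driver)
    static Browser forName(String browserName) {
        return "chrome".equals(browserName) ? new ChromeBrowser() : new Browser();
    }

    public static void main(String[] args) {
        System.out.println(forName("chrome").perform("click"));  // workaround applied
        System.out.println(forName("firefox").perform("click")); // plain common code
    }
}
```

The calling code never branches on the browser itself; it just asks the factory and invokes the method, exactly as in the aBrowserSpecificFor(driver).clickOnButton(...) call above.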

A last word on test execution. Since you are going to use Selenium Grid, you will want to take advantage of running tests in parallel, so be sure to configure that for your JUnit tests (available since version 4.7).
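
If you build with Maven, one common way to get that parallelism is through the Surefire plugin; a sketch (the version number is just an example):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.2</version>
  <configuration>
    <!-- run test classes concurrently -->
    <parallel>classes</parallel>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```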


We use TestNG in our organization, and we use TestNG's parameters to specify the environment: the browser to use, the machine to run on, and any other required configuration. The browser name is passed via an XML file that controls what runs and where, and it is set as a global variable. As an extra, we have our own custom annotations that can override these global variables: if a test is meant to run only on Chrome and no other browser, we indicate that with the custom annotation. So regardless of whether the parameter says Firefox, if the test is annotated for Chrome, it will always run on Chrome.
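
The override mechanism described above can be sketched with a custom annotation and a reflection check. This is a self-contained illustration, not TestNG's own API; all names are hypothetical:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class BrowserOverrideDemo {

    // Custom annotation pinning a test to one browser
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface RunOnlyOn {
        String value();
    }

    static class SomeTests {
        @RunOnlyOn("chrome")
        public void chromeOnlyTest() {}

        public void anyBrowserTest() {}
    }

    // The annotation, if present, overrides the globally configured browser.
    static String effectiveBrowser(Method test, String globalBrowser) {
        RunOnlyOn pin = test.getAnnotation(RunOnlyOn.class);
        return pin != null ? pin.value() : globalBrowser;
    }

    // Convenience lookup by method name
    static String browserFor(String methodName, String globalBrowser) {
        try {
            return effectiveBrowser(SomeTests.class.getMethod(methodName), globalBrowser);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(browserFor("chromeOnlyTest", "firefox")); // pinned: chrome
        System.out.println(browserFor("anyBrowserTest", "firefox")); // global: firefox
    }
}
```

A real test runner would consult this check when dispatching each test method, ignoring the suite-wide browser parameter for pinned tests.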

For these reasons, I think creating one class per browser is not a good idea. Imagine the flow changes, or there is a small tweak here and there, and you have three classes to update instead of one. And if the number of browsers grows, that is yet another class.

What I would suggest is to extract the code that is browser-specific. So if the click behaviour is browser-specific, override it to perform the appropriate checks or workarounds based on the browser.


I do it this way, but keep in mind that this is pure WebDriver, without using Grid or RC:

 // Utility class snippet
 // Test classes import this with: import static utility.*;

 public static WebDriver driver;

 public static void initializeBrowser(String type) {
     if (type.equalsIgnoreCase("firefox")) {
         driver = new FirefoxDriver();
     } else if (type.equalsIgnoreCase("ie")) {
         driver = new InternetExplorerDriver();
     }
     driver.manage().timeouts().implicitlyWait(10000, TimeUnit.MILLISECONDS);
     driver.manage().window().setPosition(new Point(200, 10));
     driver.manage().window().setSize(new Dimension(1200, 800));
 }

Now, using JUnit 4.11+, your parameters file should look something like this:

 firefox, test1, param1, param2
 firefox, test2, param1, param2
 firefox, test3, param1, param2
 ie, test1, param1, param2
 ie, test2, param1, param2
 ie, test3, param1, param2

Then, in one CSV-parameterized test class (which you are going to run against several browser types), do the following in the @Before-annotated method:

  • If the current parameterized test is the first one for this browser type and no window is open yet, open a new browser window of that type.
  • If a browser is already open and its type matches, simply reuse the same driver object.
  • If the open browser's type does not match the current test, close it and start a browser of the matching type.
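
That reuse logic can be sketched independently of WebDriver. In this self-contained model (names hypothetical) a string stands in for the driver instance, so the open/reuse/restart decisions are easy to follow:

```java
// Sketch of the @Before reuse logic: keep one "driver" per browser type,
// restarting only when the requested type changes.
public class BrowserReuseDemo {

    static String driver;      // stands in for the WebDriver instance
    static String currentType; // browser type the open driver was created with
    static int restarts;       // how many times a browser was (re)started

    static void ensureBrowser(String type) {
        if (driver != null && type.equals(currentType)) {
            return;            // same type already open: reuse it
        }
        if (driver != null) {
            driver = null;     // a different type is open: close it first
        }
        driver = "driver-for-" + type; // open a browser of the requested type
        currentType = type;
        restarts++;
    }

    public static void main(String[] args) {
        ensureBrowser("firefox"); // opens firefox
        ensureBrowser("firefox"); // same type: reused, no restart
        ensureBrowser("ie");      // firefox closed, ie opened
        System.out.println(restarts);
    }
}
```

With the CSV sorted by browser, as in the example file above, each browser is started exactly once for its whole batch of tests.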

Of course, my answer does not tell you how to handle the parameters: I leave that for you to figure out.

