End-to-End Web Application Testing with New TestCafe Studio

A recent beta release of DevExpress's TestCafe Studio gave us a good opportunity to implement end-to-end testing for our new SQL Studio application. I will explain why TestCafe Studio is a good tool for quickly automating QA for any website or web application, and why we ended up using the open-source CLI version of TestCafe.

A Few Notes on End-to-End Testing

One definition I've found on the internet: end-to-end testing is a methodology that assesses the working order of a complex product in a start-to-finish process. In the scope of web application development, E2E testing means tests that emulate user interaction with a web application as it happens in the wild. In that sense, end-to-end testing is a mix of integration and user-acceptance testing. Among the many levels of web application testing, I believe end-to-end testing is the hardest to implement and sustain, and at the same time the most rewarding.

Making a Choice

Despite the obvious boom of the JavaScript ecosystem and the explosive growth in the number of front-end libraries and tools, the choice of E2E testing tools is not that broad. Anyone looking for a good tool would probably end up with a shortlist of the same few products.

After carefully studying the features and weighing opinions from QA engineers and developers experienced in E2E testing with various frameworks, we decided to use TestCafe. I'll try to explain some of the reasons behind that decision.

Ability to start testing quickly without much setup and configuration. Some might think that E2E testing is the quirkiest area of testing, and they're probably right. Before headless Chrome, the only reliable solution for automated browser testing was Selenium, and it is confusing from the start: which of Selenium's tools should you use, WebDriver, IDE, or Remote Control? Most experienced developers would set up a WebDriver project with one of the available bindings for popular languages, or even write their own implementation. Selenium definitely lacks good documentation and has a steep learning curve. By contrast, TestCafe is extremely easy to set up, as I will explain in more detail later.

Be able to test SPA applications built on Vue.js or React. Automated testing of a web application that constantly re-renders the DOM is tricky. The ability to correctly produce interactions and observe changes in a complex SPA was crucial for our needs, and TestCafe proved to be the right tool, partly thanks to its built-in waiting mechanism (see the sketch right after these criteria).

Have all the test logic as code. I personally don't like messing with configuration files and prefer explicit test definitions as code blocks. That makes it possible to quickly copy-paste tests and introduce new scenarios the smart way. It also aligns with the recent everything-as-code trend.

Command-line interface and CI integration ready. We wanted to integrate automated end-to-end testing into our DevOps pipelines, so we required the ability to run tests on a CI server in headless mode.

Not Selenium-based. Again, Selenium has its downsides, and there is definitely some uncertainty about its future, as the idea behind it seems obsolete now that popular browsers ship official headless modes.

Not dependent on ChromeDriver. This may not be a big issue, especially considering Microsoft's recent abandonment of its own browser engine, but Firefox is still a major player here, and we should not ignore it.

Cross-platform and cross-browser. Aiming for 100% coverage of all platforms and browser engines is a dubious endeavor, but advanced features implemented on top of auxiliary Web APIs do require more thorough cross-browser and cross-platform testing. For example, after cross-browser testing, we had to reimplement some UI features that depended on the HTML Drag and Drop API.
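To illustrate the waiting mechanism mentioned above: TestCafe selectors and assertions retry automatically until they match or a timeout expires, so SPA re-renders don't require explicit sleeps. Here is a minimal sketch; the .run-button and .result-panel selectors are hypothetical.

import { Selector } from 'testcafe';

fixture `Built-in waiting demo`
  .page `http://localhost:8080`;

test('result panel appears after running a query', async t => {
  // The selector retries until it matches or the timeout expires;
  // no explicit sleeps are needed while the SPA re-renders.
  const panel = Selector('.result-panel', { timeout: 10000 });

  await t
    .click(Selector('.run-button'))
    .expect(panel.visible).ok({ timeout: 10000 });
});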

Writing the First End-to-End Test

Luckily, we started the implementation right after the TestCafe Studio beta release. After downloading and installing the application, you can immediately start recording test scenarios.

TestCafe Studio Application

The app simply opens a link in the browser of your choice through the TestCafe proxy, which records all your actions on the page. You don't have to be a technical person to perform that task, and you don't need any special setup to record a test, just a working link.

Capturing an element during recording

I finished my first test recording in just 10 minutes. The first problem I encountered was some redundant selector expressions that I had to correct manually. Because of the complexity of the components we use (like the Monaco editor), capturing a correct selector is not a trivial task. Another issue was that the test was recorded inside my development browser, and Studio accidentally captured tags and props belonging to the installed extensions.
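To give a sense of the cleanup involved, here is a simplified, hypothetical example of the kind of selector the recorder produced versus the hand-corrected version (the wrapper chain and attribute name are made up):

import { Selector } from 'testcafe';

// Roughly what the recorder produced: a brittle chain pinned to wrapper
// divs, plus an attribute injected by a browser extension.
const recorded = Selector('#app > div.wrapper > div:nth-child(3) .monaco-editor')
  .withAttribute('data-extension-id');

// The hand-corrected version: one stable class is enough, since TestCafe
// accepts any CSS selector and resolves it lazily at action time.
const editor = Selector('.monaco-editor');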

Running a test

After a test execution, you can check the logs and screenshots captured during browser playback.

Converting to Code

One of the most valuable features of TestCafe is the conversion of a recorded test to JavaScript code. After playing with simple scenarios, I got stuck on a more complex test case in which I needed to perform many repetitive actions in the application under test. I found it much more convenient to implement that in code. The TestCafe API has good documentation with examples, and you can use any JavaScript library you want inside your tests.

import { Selector, RequestLogger } from 'testcafe';
import { VueSelector } from 'testcafe-vue-selectors';

// Captures every request the app sends during the test run
const logger = RequestLogger();

fixture `localhost-dev new PostgreSQL DB`
  .page `http://localhost:8080`
  .requestHooks(logger)
  .beforeEach(async t => {
    // Open the connection menu and start the 'create database' flow
    // before each test in the fixture
    await t
      .click(Selector('.conn-info .button'))
      .click(Selector('a').withText('Create new database'))
      ...
  });
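Continuing the fixture above, a plain async helper composes naturally with the test controller and absorbs the repetition. This is a sketch only; the input field name and button captions are assumptions:

import { Selector } from 'testcafe';

// Hypothetical helper that performs one 'create database' round trip
async function createDatabase(t, name) {
  await t
    .click(Selector('.conn-info .button'))
    .click(Selector('a').withText('Create new database'))
    .typeText(Selector('input[name="db-name"]'), name)
    .click(Selector('button').withText('Create'));
}

test('create several databases in a row', async t => {
  for (const name of ['dev', 'staging', 'qa']) {
    await createDatabase(t, name);
  }
});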

Still, there were some issues that I would attribute to automated browser testing in general.

First, timeouts. In my experience, dealing with timeouts is the most difficult and annoying part of an end-to-end testing implementation. You have async code everywhere, and the more authentic your test environment is, the more unstable the operation timings become. That forces you to increase timeout/wait intervals (rather than complicate the code with ad-hoc synchronization), which makes these tests very lengthy overall.
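TestCafe at least keeps this tuning in one place: you can widen timeouts per selector or per assertion (and globally via the --selector-timeout and --assertion-timeout CLI options). A sketch, with made-up selectors and values:

import { Selector } from 'testcafe';

fixture `Slow environment`
  .page `http://localhost:8080`;

test('long-running query eventually renders a grid', async t => {
  // Per-selector timeout: retry matching for up to 15 s
  const grid = Selector('.result-grid', { timeout: 15000 });

  await t
    .click(Selector('.run-query'))
    // Per-assertion timeout: keep re-checking for up to 30 s
    .expect(grid.exists).ok({ timeout: 30000 });
});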

Second, inconsistent element detection in complex scenarios. For example, to test whether an element appears upon a particular user action, I used a combination of two selectors, one for a parent element and one for a child. The first test pass failed. Later, I figured out that the parent element was detected at a moment when no child elements had been rendered by the Vue component yet. I replaced the two selectors with a single one, and that solved the issue. Because the DOM in a complex web application undergoes many changes, putting automation points in the correct order is not a trivial task.
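A simplified reconstruction of that situation, with hypothetical class names:

import { Selector } from 'testcafe';

fixture `Element detection`
  .page `http://localhost:8080`;

test('child node is detected together with its parent', async t => {
  // The failing pattern: checking the parent first left a window in which
  // it matched while the Vue component had not rendered any children yet.
  //   await t.expect(Selector('.tree-view').exists).ok();
  //   await t.click(Selector('.tree-node'));  // could fire too early
  //
  // The fix: one combined selector, so the action waits until the whole
  // rendered structure is in place.
  await t.click(Selector('.tree-view .tree-node'));
});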

Another practical feature that anyone implementing end-to-end tests should pay attention to is the request logging module. With it, you can intercept any request sent to the backend during test execution and check its payload and headers.
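A minimal sketch of RequestLogger usage; the API path, button selector, and logged header are assumptions for illustration:

import { RequestLogger, Selector } from 'testcafe';

// Log POST requests to the backend, keeping bodies and headers so the
// test can assert on the payload
const logger = RequestLogger({ url: /\/api\//, method: 'post' }, {
  logRequestBody: true,
  logRequestHeaders: true
});

fixture `Request inspection`
  .page `http://localhost:8080`
  .requestHooks(logger);

test('running a query sends it to the backend', async t => {
  await t
    .click(Selector('.run-query'))
    .expect(logger.contains(r => r.request.method === 'post')).ok();

  const { request } = logger.requests[0];
  console.log(request.headers['content-type']);
  console.log(request.body.toString()); // body is a Buffer unless stringified
});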

In conclusion, I would say that modern tooling offers a convenient and quick way to implement an end-to-end testing workflow. You can engage your non-technical team members in the QA process, but you'll benefit most if you spend some time mastering browser automation code and then integrate it into your DevOps pipeline.
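As a closing sketch of that last step, the same tests can be launched headlessly from a CI job through TestCafe's programmatic API (the test path and browser list below are assumptions):

const createTestCafe = require('testcafe');

(async () => {
  const testcafe = await createTestCafe();
  try {
    const failedCount = await testcafe
      .createRunner()
      .src(['tests/'])
      .browsers(['chrome:headless'])
      .run();
    // Fail the CI job if any test failed
    process.exitCode = failedCount ? 1 : 0;
  } finally {
    await testcafe.close();
  }
})();

From there, any CI system that can run Node.js can execute the suite as a pipeline step.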