How to create a master test case

You can create a master test case that runs multiple test cases without restarting your application each time. Squish does not currently support nested test cases, but a master test case lets you approximate them.

Squish test suites normally contain multiple test cases. Each test case consists of a test script and possibly test data. When a test suite is executed, for each test case the application under test (AUT) is started, the test is executed, and then the AUT is terminated.


In theory, the only way to be sure that the AUT is launched just once is to put everything into a single test case. But if we do this, we lose the benefit of splitting our tests into multiple test cases, and adding new test cases becomes more difficult, since they would somehow have to be fitted into the single test case.

We can get around this problem by having multiple test cases as usual, but unchecking the run checkbox for every one of them except the one we designate as the master test case. This allows us to record and replay as many individual test cases as we like, provided that once they are working we uncheck their run checkboxes and comment out their startApplication() calls.

When the test suite is run, the only test case executed will be the master test case, since it is the only one whose run checkbox is still checked. The master test case must be hand coded to iterate over all the other test cases and replay each one of them.

Below is an example of a suitable master test case. It runs all the test cases in the same test suite, except itself and any it is told to ignore.

The Master Test Case — Python

import os
import os.path

def main():
    ignore = ("tst_specialcase1", "tst_specialcase2")
    for test_path in os.listdir(".."):
        if (test_path in ignore or
                not test_path.startswith("tst_") or
                test_path == os.path.basename(os.getcwd())):
            continue
        test.log("Executing: %s" % test_path)
        source(os.path.join("..", test_path, "test.py"))

        # Start the application, if not running; useful
        # to ignore application crashes or application
        # having been stopped by previous test case:
        if len(applicationContextList()) == 0:
            startApplication("addressbook") # Edit: use your AUT's name
        try:
            eval("main()") # Executes the source'd test case's main() function
        except Exception as e:
            test.log("Error occurred in test case: %s: %s" % (test_path, e))

If the above code is saved as the test script of a test case called, say, tst_master, it will execute every other test case in the test suite, apart from itself and the tst_specialcase1 and tst_specialcase2 test cases. If you don't need to ignore any test cases, just set ignore to be the empty tuple, (). The startApplication() and ignore lines are the only ones that must be edited for each test suite.
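The master's selection logic (skip ignored entries, skip anything not starting with tst_, skip the master itself) can be exercised as plain Python outside Squish. The following is a minimal sketch; select_test_cases and the hard-coded entries list are hypothetical, standing in for the loop body above and a real os.listdir("..") result:

```python
def select_test_cases(entries, ignore, master):
    """Return the test case directories the master should execute."""
    selected = []
    for test_path in entries:
        if (test_path in ignore or
                not test_path.startswith("tst_") or
                test_path == master):
            continue  # skipped for the same three reasons as in the master
        selected.append(test_path)
    return selected

# A hypothetical test suite directory listing:
entries = ["suite.conf", "tst_master", "tst_login", "tst_specialcase1"]
print(select_test_cases(entries,
                        ("tst_specialcase1", "tst_specialcase2"),
                        "tst_master"))  # ['tst_login']
```

Note that non-test-case files such as suite.conf are filtered out by the tst_ prefix check, so the master does not need an explicit list of them.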

The Master Test Case — JavaScript

function main()
{
    var ignore = ["tst_specialcase1", "tst_specialcase2"];
    var paths = OS.listDir("..");
    for (var i in paths) {
        var test_path = paths[i];
        if (inArray(ignore, test_path) ||
            test_path.indexOf("tst_") != 0 ||
            test_path == basename(OS.cwd())) {
            continue;
        }
        test.log("Executing: " + test_path);
        source("../" + test_path + "/test.js");

        // Start the application, if not running; useful
        // to ignore application crashes or application
        // having been stopped by previous test case:
        if (applicationContextList().length == 0) {
            startApplication("addressbook"); // Edit: use your AUT's name
        }
        try {
            main(); // Executes the source'd test case's main() function
        } catch (e) {
            test.log("Error occurred in test case: " + test_path + ": " + e);
        }
    }
}

function inArray(array, item)
{
    for (var i = 0; i < array.length; ++i) {
        if (array[i] == item) {
            return true;
        }
    }
    return false;
}

function basename(path)
{
    for (var i = path.length - 1; i >= 0; --i) {
        if (path[i] == "/" || path[i] == "\\") {
            return path.substring(i + 1);
        }
    }
    return path;
}

The JavaScript code does the same job in the same way as the Python code. However, we need to provide a couple of auxiliary functions, inArray() and basename(), to make up for gaps in JavaScript's default functionality. If you have no test cases to ignore, just set ignore to the empty array, []. The startApplication() and ignore lines are the only ones that must be edited for each test suite.


This approach isn't recommended, because it has several drawbacks.

No clean start

When the very first test case is run, the AUT is in a clean state, but this is usually not true for subsequent test cases. One solution is to record a "clean-up" test case that restores the AUT to its start-up state, and then include the clean-up code at the end of each test case, or call it from the master test case's main() function after each test case's main() has been executed.
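The clean-up idea can be sketched in plain Python. Here aut_state, tst_addentry_main and restore_clean_state are hypothetical stand-ins for the AUT's state, a sourced test case's main(), and a shared clean-up function; in a real suite the clean-up would be recorded Squish actions:

```python
# Stand-in for AUT state that test cases mutate:
aut_state = {"open_dialogs": 0, "entries": []}

def tst_addentry_main():
    # Stand-in for a sourced test case's main(): it leaves the AUT dirty.
    aut_state["entries"].append("Jane Doe")
    aut_state["open_dialogs"] = 1

def restore_clean_state():
    # Shared clean-up: return the AUT to its start-up state so the
    # next test case does not inherit leftovers from this one.
    aut_state["open_dialogs"] = 0
    aut_state["entries"].clear()

tst_addentry_main()
restore_clean_state()  # master calls this after each test case's main()
print(aut_state)  # {'open_dialogs': 0, 'entries': []}
```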

Misguiding test developers to avoid shared functions

Some test developers confuse test cases with functions. They would have one test case that, for example, performs a log-in, the next test case would perform the next "step", such as adding an entry, and the one after that would verify the newly added entry.

These "steps" consist of code that should very likely be put into shared functions instead. So the approach in this article might misguide developers away from well-established software development practice: in this case, factoring reusable code into functions.

A crash stops all the tests

If the AUT crashes during one of the test cases, the entire test suite execution will stop, unless exception/error handling is being used. The examples above apply exception/error handling.

Recorded test cases must be slightly hand-edited

Once a test case has been recorded, tested, and is ready to be used, you must comment out its startApplication() call to avoid the AUT being started more than once. This means that the test case won't work stand-alone unless you uncomment the call.
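One way to soften this, mirroring the guard the master test case itself uses, is to wrap the call so the AUT is only started when no application context exists yet. The sketch below is runnable plain Python: the two stub functions are hypothetical stand-ins for Squish's built-ins (which already exist in a real test case and must not be redefined there); only ensure_aut_running illustrates the pattern:

```python
_contexts = []  # stand-in for Squish's internal list of running AUTs

def applicationContextList():
    return list(_contexts)

def startApplication(name):
    _contexts.append(name)

def ensure_aut_running(aut_name):
    # Only start the AUT if no application context exists yet, so the
    # test case works both stand-alone and under the master test case.
    if len(applicationContextList()) == 0:
        startApplication(aut_name)

ensure_aut_running("addressbook")  # starts the AUT
ensure_aut_running("addressbook")  # no-op: AUT already running
print(len(applicationContextList()))  # 1
```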

Lookup Errors

Individual test cases sometimes use different symbolic names for the same AUT object. This can lead to lookup errors in the test suite's object map.