You might want to nest test cases. Squish does not currently support nested test cases, but you can approximate them with a master test case that runs multiple tests without restarting your application each time.
Squish test suites normally contain multiple test cases. Each test case consists of a test script and possibly test data. When executing a test suite, for each test case the application under test (AUT) is started, the test is executed, and then the AUT terminates.
In theory, the only way to be sure that the AUT is launched exactly once is to put everything into a single test case. But doing so loses the benefit of splitting our tests into multiple test cases, and it makes adding new test cases harder, since they would somehow have to be fitted into the single test case.
We can get around this problem by having multiple test cases as usual, but unchecking the run checkbox for every one of them except the one we designate as the master test case. This allows us to record and replay as many individual test cases as we like, provided that once they are working we uncheck their run checkboxes and comment out their startApplication() calls.
When the test suite is run, the only test case executed will be the master test case, since it is the only one whose run checkbox is left checked. This test case must be hand-coded to iterate over all the other test cases and replay each one of them.
Below is an example of a suitable master test case. It runs every test case in the same test suite, except for itself and any test cases it is told to ignore.
The Master Test Case — Python
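The following is a sketch of what such a master test case might look like, assuming Squish's Python bindings (squishinfo, startApplication(), source(), and the test object) are available at replay time. The AUT name "addressbook" and the ignored test case names are placeholders to adapt for your own suite.

```python
# tst_master/test.py -- sketch of a master test case (assumes Squish bindings)
import os

# Test cases that must never be replayed by the master.
ignore = ("tst_master", "tst_specialcase2")

def collect_test_cases(suite_dir, ignore):
    """Return the sorted tst_* sub-directories of suite_dir, minus ignore."""
    return sorted(
        name for name in os.listdir(suite_dir)
        if name.startswith("tst_") and name not in ignore
        and os.path.isdir(os.path.join(suite_dir, name))
    )

def main():
    # squishinfo.testCase is the directory of the running (master) test
    # case; its parent directory is the test suite directory.
    suite_dir = os.path.dirname(squishinfo.testCase)
    startApplication("addressbook")  # placeholder AUT name
    for name in collect_test_cases(suite_dir, ignore):
        test.log("Running test case: %s" % name)
        try:
            # source() executes the sub-test case's script in the current
            # context; its startApplication() call must be commented out.
            source(os.path.join(suite_dir, name, "test.py"))
        except Exception as e:
            test.fail("Test case %s raised: %s" % (name, e))
```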
If the above code is saved in a test.py file in a test suite's test case called, say, tst_master, it will execute every other test case in the test suite, apart from itself and tst_specialcase2. If you don't need to ignore any test cases, just set ignore to be the empty tuple. The ignore tuple is the only part that must be edited for each test suite.
However, this approach is not recommended, because it has several drawbacks.
No clean start
When the very first test case is run the AUT is in a clean state, but this is not usually true for each subsequent test case. One solution is to record a "clean-up" test case that restores the AUT to its start-up state, and then include the clean-up code at the end of each test case, or call it from the master test case's main() function after it has executed each test case's script.
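The second option can be sketched as a small wrapper used inside the master loop. Here run_case and clean_up are hypothetical callables standing in for sourcing one test case's script and for the recorded clean-up steps:

```python
def run_with_cleanup(run_case, clean_up):
    """Run one test case's script, then always restore the AUT's
    start-up state, even if the test case failed or raised."""
    try:
        run_case()
    finally:
        clean_up()
```

In the master test case, run_case would typically be a source() call on a test case's test.py, and clean_up the shared clean-up routine.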
Misguiding test developers to avoid shared functions
Some test developers confuse test cases with functions: one test case performs, for example, a log-in, the next test case does the next "step", such as adding an entry, and a further test case verifies the newly added entry. These "steps" consist of code that should very likely be put into shared functions instead, so the approach in this article might lead developers away from well-established software development practice (reusable code, in the form of functions, in this case).
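As a sketch, such steps could instead live as functions in a shared script and be composed inside a single test case. The function names and the app_state dictionary below are illustrative placeholders, not Squish API; in a real suite the bodies would contain the recorded interactions:

```python
# steps.py -- hypothetical shared script; in Squish it could be loaded
# with source(findFile("scripts", "steps.py")).

def log_in(app_state, user):
    # Placeholder for the recorded log-in interactions.
    app_state["user"] = user

def add_entry(app_state, entry):
    # Placeholder for the recorded "add entry" interactions.
    app_state.setdefault("entries", []).append(entry)

def verify_entry(app_state, entry):
    # Placeholder for the recorded verification.
    assert entry in app_state.get("entries", [])

def main():
    # One test case composes the shared steps instead of three
    # test cases each doing one "step".
    app_state = {}
    log_in(app_state, "alice")
    add_entry(app_state, "Jane Doe")
    verify_entry(app_state, "Jane Doe")
```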
A crash stops all the tests
If the AUT crashes during one of the test cases, execution of the entire test suite will stop unless exception/error handling is used. The example master test case above applies such handling.
Recorded test cases must be slightly hand-edited
Once a test case has been recorded, tested, and is ready to be used, you must comment out its startApplication() call to avoid the AUT being started more than once. This means that the test case won't work stand-alone unless you uncomment the call again.
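One possible workaround is to guard the start-up call so a test case runs both stand-alone and under the master. In this sketch, start and running are injected stand-ins for Squish's startApplication() and for a "is an AUT already running" check (for example one based on applicationContextList(), an assumption to verify against your Squish version):

```python
def ensure_aut_started(start, running, aut_name):
    """Start the AUT only if one is not already running.

    start   -- callable launching the AUT (stand-in for startApplication)
    running -- callable returning a truthy value if an AUT is running
    Returns True if the AUT was started by this call, False otherwise.
    """
    if not running():
        start(aut_name)
        return True
    return False
```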
Different symbolic names for the same objects

Test cases that are recorded individually sometimes end up using different symbolic names for the same AUT objects. This can lead to lookup errors in the test suite's object map.