19.1. Automated Batch Testing

19.1.1. Automated Test Runs
19.1.2. Distributed Tests
19.1.3. Processing Test Results

This section discusses all aspects of automating testing, also known as batch testing: executing tests automatically, distributing tests to different machines, and processing the results produced by the test runs. (See also How to Do Automated Batch Testing (Section 17.24).)

19.1.1. Automated Test Runs

Squish provides command line tools that make it possible to completely automate the running of tests. The tool for executing tests is squishrunner, but for it to work properly a squishserver must be running—the squishrunner makes use of the squishserver to start AUTs and communicate with them.

Automated batch tests can be created on any of the platforms that Squish supports, including Windows and Unix-like platforms.

For example, here is a simple Unix shell script to execute the complete test suite /home/squish/suite_myapp and save its results to /home/squish/results-<date>.xml:

#!/bin/sh

# start the squishserver in the background
squishserver &

# give the squishserver a moment to start listening
sleep 5

# create a dated filename for the logfile
LOGFILE=/home/squish/results-`date +%Y-%m-%d`.xml

# execute the test
squishrunner --testsuite /home/squish/suite_myapp \
--reportgen xml2.1,$LOGFILE

# stop the squishserver
squishserver --stop

Of course, if the tests were run more than once a day, we would use an extended date format that included the time. Notice also that we have used the output format xml2.1 rather than plain xml. Squish supports several different XML output formats: the xml format is retained for backwards compatibility, the xml2 format is recommended for Squish 3, and the xml2.1 format is recommended for Squish 4. (Note also that if any of the paths contain spaces they must be double-quoted, for example, --testsuite "/home/squish 4.1/suite_myapp".)
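
For instance, to include the time of day in the log file's name, the Unix script's LOGFILE line could be extended like this (the exact format is a matter of taste):

LOGFILE=/home/squish/results-`date +%Y-%m-%dT%H.%M.%S`.xml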

Here is a similar example, but this time written for Windows using the standard cmd.exe shell:

REM start the squishserver
start "Squishserver Window" /B ^
 "C:\Program Files\squish 4.1\squishserver" --verbose

REM create a dated filename for the logfile:
REM assumes MM/DD/YYYY format date
@set TODAY=%date%
@set YEAR=%TODAY:~6,4%
@set MONTH=%TODAY:~0,2%
@set DAY=%TODAY:~3,2%
set LOGFILE=C:\squish results\results-%YEAR%-%MONTH%-%DAY%.xml

REM execute the test
"C:\Program Files\squish 4.1\squishrunner" ^
  --testsuite "C:\squish results\suite_myapp" ^
  --reportgen "xml2.1,%LOGFILE%"

REM stop the squishserver
"C:\Program Files\squish 4.1\squishserver" --stop

The Unix example assumes that the squishserver and squishrunner executables can be found in the system search path, i.e., they are in a directory that is in the PATH environment variable, whereas the Windows example specifies the full paths—and uses double-quotes for those paths that include spaces. (Also, we have split lines using \ on Unix and ^ on Windows to make them easier to read in this manual.)

One disadvantage of using shell scripts and batch files like this is that for cross-platform testing we must maintain at least one Unix shell script and one Windows batch file. (In fact, we may need several Unix shell scripts, or one quite complicated script, to cope with various Unix variants such as Linux and Mac OS X.) We can avoid this problem by using a cross-platform scripting language, which lets us write a single script and run it on all the platforms we are interested in. Here is an example of such a script written in Python:

#!/usr/bin/env python

import os, sys, subprocess, time

if sys.platform.startswith("win"): # Windows
    HOME = "C:/squish results" # Python understands Unix paths even on Windows
elif sys.platform.startswith("darwin"): # Mac OS X
    HOME = "/Users/squish"
else: # Other Unix-like, e.g. Linux
    HOME = "/home/squish"

# start the squishserver; Popen returns immediately without waiting.
# (This works on Windows too, so no "start /b" wrapper is needed;
# passing "start" to Popen would actually fail, because start is a
# cmd.exe built-in, not an executable.)
server = subprocess.Popen(["squishserver"])

# give the squishserver a moment to start listening
time.sleep(5)

# create a dated filename for the logfile
LOGFILE = os.path.join(HOME, "results-%s.xml" % time.strftime("%Y-%m-%d"))

# execute the test (and wait for it to finish)
subprocess.call(["squishrunner", "--testsuite",
    os.path.join(HOME, "suite_myapp"), "--reportgen",
                 "xml2.1,%s" % LOGFILE])

# stop the squishserver
subprocess.call(["squishserver", "--stop"])

This script does the same job as the Unix shell script and Windows batch file shown earlier, and assumes that squishrunner and squishserver are in the PATH. It should run on Windows, Mac OS X, and other Unix-like systems without needing any changes. (The reason for preferring the subprocess module over the os.system function is that the former automatically handles escaping, e.g., for arguments that contain spaces.)

Whatever language we write our automated test script in, the squishrunner will run the specified test with all the required initializations and cleanups. The resulting report can then be post-processed as necessary—see Processing Test Results (Section 19.1.3) for details.

Once we have the script set up (again, no matter what language we use for it), to make it fully automatic we must ensure that it is run automatically, say once a day. How to do this is beyond the scope of this manual, but if you require help you can always contact froglogic's commercial support. If you would rather set it up on your own first, there is a lot of information on the Internet; on Unix it is simply a matter of setting up a cron job. (Since Windows Services don't support a display for running GUI applications, it is not possible to execute the squishserver as a Windows Service.)
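
For example, on Unix, a crontab entry along the following lines would run the test script every night at 01:00 (the script's path and name are illustrative; use the location of your own script):

# minute hour day-of-month month day-of-week command
0 1 * * * /home/squish/run_batch_tests.sh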

19.1.2. Distributed Tests

Throughout the manual it is generally assumed that all testing takes place locally, that is, that the squishserver, squishrunner, and the AUT are all running on the same machine. This is not the only possible scenario, though, and in this section we will see how to run tests remotely on a different machine. For example, let's assume that we work and test on computer A, and that we want to test an AUT located on computer B.

The first step is to install Squish and the AUT on the target computer (computer B); note, though, that this step is not needed for Squish for Web. Next, unless we are using Squish for Web on computer B, we must tell the squishserver the name of the AUT's executable and where the executable is located. This is achieved by running the following command:

squishserver --config addAUT <name_of_aut> <path_to_aut>
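
For example, to register an AUT named addressbook whose executable lives in /home/squish/bin (both the name and the directory are illustrative):

squishserver --config addAUT addressbook /home/squish/bin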

Later we will connect from computer A to the squishserver on computer B. By default the squishserver only accepts connections from the local machine, since accepting arbitrary connections from elsewhere could compromise security. So before we can connect to the squishserver from another machine, we must register the machine that will establish the connection for executing the tests (computer A in this example) with the machine running the AUT and the squishserver (computer B). Doing this ensures that only trusted machines can communicate with the squishserver.

To perform the registration, on the AUT's machine (computer B) we create a plain ASCII text file called /etc/squishserverrc (on Unix or Mac) or c:\squishserverrc (on Windows). If you don't have write permission for /etc or c:\, you can instead put this file in SQUISH_ROOT/etc/squishserverrc on either platform. (On Windows the file can be called either squishserverrc or squishserverrc.txt.) The file should have the following contents:

ALLOWED_HOSTS = <ip_addr_of_computer_A>

<ip_addr_of_computer_A> must be the IP address of computer A. (An actual IP address is required; using a hostname won't work.) For example, on our network the line is:

ALLOWED_HOSTS = 192.168.0.3 

This will almost certainly be different on your network.

If you want to allow several machines to connect to the squishserver, you can put as many IP addresses on the ALLOWED_HOSTS line as you like, separated by spaces. And if you want to allow a whole group of machines with similar IP addresses, you can use wildcards. For example, an entry of 192.168.0.* allows every machine whose IP address starts with 192.168.0 to connect to this squishserver.
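
For example, the following line (with illustrative addresses) allows two specific machines, plus every machine on the 192.168.1 subnet, to connect:

ALLOWED_HOSTS = 192.168.0.3 192.168.0.7 192.168.1.*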

Once we have registered computer A, we can run the squishserver on computer B, ready to listen for connections, which can now come from computer B itself or from any of the allowed hosts, for example, from computer A.

We are now ready to create test cases on computer A and have them executed on computer B. First, we must start squishserver on computer B (calling it with the default options starts it on port 4322—see squishserver (Section 19.4.2) for a list of available options):

squishserver

For convenience, by default, the Squish IDE starts a squishserver locally on startup and connects to this local squishserver to execute the tests. But it is also possible to connect to a squishserver on a remote machine, such as computer B, from within the Squish IDE. We can control this behavior through the preferences dialog. Click Window|Preferences to invoke the Preferences dialog (Section 20.3.12), then click Squish in the tree of preferences and choose the Remote Testing item to show the Remote Testing preferences page. Uncheck the Start local Squish server automatically checkbox, and enter the IP address of the machine running the remote squishserver (computer B) in the squishserver host line edit. The port number only needs to be changed if the squishserver was started with a non-standard port number, in which case it should be set to match whichever port is used on the remote machine (computer B).

The Squish IDE's Preferences Dialog

Now we can execute the test suite as usual. One immediately noticeable difference is that the AUT is started not locally but on computer B. After the test has finished, the results become visible in the Squish IDE on computer A as usual.

It is also possible to do remote testing using the command line. The command is the same as described earlier, only this time we must also specify a host name using the --host option:

squishrunner --host computerB.froglogic.com --testsuite suite_addressbook

The host can be specified as an IP address or as a name.

This makes it possible to create, edit, and run tests on a remote machine via the Squish IDE. And by adding the --host option to the shell script, batch file, or other script file used to automatically run tests, it is possible to automate the testing of applications located on different machines and platforms, as we saw earlier in Automated Test Runs (Section 19.1.1).
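
For example, the squishrunner invocation in the Unix shell script shown earlier could be adapted as follows (computerB.froglogic.com stands in for the real name or IP address of computer B):

squishrunner --host computerB.froglogic.com \
    --testsuite /home/squish/suite_myapp \
    --reportgen xml2.1,$LOGFILE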

[Note]Squish License Key

When Squish tools are executed they always check their license key. This shouldn't matter when using a single machine, but it might cause problems when using multiple machines. If the default license key directory is not convenient for use with automated tests, it can be changed by setting the SQUISH_LICENSEKEY_DIR environment variable to a directory of your choice, and this can of course be done in a shell script or batch file. (See Environment Variables (Section 19.5).)
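
For example, in a Unix shell script (the directory is, of course, illustrative):

export SQUISH_LICENSEKEY_DIR=/home/squish/licenses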

See also the Remote Testing pane among the Squish pane's child panes (Section 20.3.12.7.1).

19.1.3. Processing Test Results

In the previous sections we saw how to execute an AUT and run its tests on a target machine under the control of a separate machine, and how to execute test runs automatically using scripts and batch files. In this section we will look at processing the test results produced by automatic test runs.

By default, squishrunner prints test results to stdout as plain text. Although this output is not difficult to parse, squishrunner also includes a report generator that can output the results as XML or in the Excel™ file format. XML parsing modules are available for nearly every scripting language, so it is quite easy to post-process the test results and convert them into whatever format you require.

For example, to make squishrunner use the XML report generator, specify --reportgen xml2.1 on the command line. If you want to get the XML output written into a file instead of stdout, specify --reportgen xml2.1,<filename>, e.g.:

squishrunner --host computerB.froglogic.com --testsuite suite_addressbook_py --reportgen xml2.1,/tmp/results.xml

To get Excel™ output, the command is almost the same, only this time use the xls option for --reportgen. For example:

squishrunner --host computerB.froglogic.com --testsuite suite_addressbook_py --reportgen xls,/tmp/results.xls

Reports in Excel™ format are not only readable by Excel™, but by many other applications, including, for example, OpenOffice.

Squish 3.x supports xml (old XML format kept for backwards compatibility), xml2 (recommended XML format for Squish 3), xls (Excel™ format), and stdout (plain text). Squish 4 supports the same formats, and in addition xml2.1 (recommended XML format for Squish 4) and xmljunit (same output as JUnit tests; less informative than Squish's native XML formats).

19.1.3.1. The xml2.1 XML Report Format

The document starts with the <?xml?> tag which identifies the file as an XML file and specifies the encoding as UTF-8. Next comes the Squish-specific content, starting with the SquishReport tag which has a version attribute set to 2.1. This tag may contain one or more test tags. The test tags themselves may be nested—i.e., there can be tests within tests—but in practice Squish uses top-level test tags for test suites, and nested test tags for test cases within test suites. (If we export the results from the Test Results view (Section 20.2.15) there will be no outer test tag for the test suite, but instead a sequence of test tags, one per test case that was executed.)

The test tag has a name attribute used to store the name of the test suite or test case. Every test tag must contain a prolog tag as its first child with a time attribute set to the time the test execution started in ISO 8601 format, and must contain an epilog tag as its last child with a time attribute set to the time the test execution finished, again in ISO 8601 format. In between the prolog and epilog there must be at least one verification tag, and there may be any number of message tags (including none).

Every verification tag has four attributes.

The name attribute is used to specify the verification point name. It is empty for verifications that are created purely in script code (such as calls to the test.compare function) when no message parameter is passed to the respective test function; when the message parameter is provided, it is used as the value of name.

The file attribute contains the path and filename of the test script that was executed, and the line attribute contains the number of the line in the file where the verification was executed.

The type attribute's value is “screenshot” for screenshot verifications, “properties” for property verifications (e.g., calls to the test.vp function), or an empty string for any other kind of verification (such as calls to the test.verify function). In addition to its own attributes, every verification tag contains one or more result tags.

Every result tag has two attributes: a time attribute set to the time the result was generated in ISO 8601 format, and a type attribute whose value is one of PASS, FAIL, XPASS, XFAIL, FATAL, or ERROR. In addition the result tag should contain at least one description tag whose text describes the result. Normally, two description tags are present, one that describes the result and the other with an attribute called type with a value of DETAILED whose text gives a more detailed description of the result. For screenshot verifications there will be additional description tags, one with a type attribute with a value of object whose content is the symbolic name of the relevant GUI object, and one with a type attribute with a value of failedImage whose content is either the text “Screenshots are considered identical” (for passes), or the full path to the actual image (for fails, i.e., where the actual image is different from the expected image).

In addition to verification tags, and at the same level (i.e., as children of a test tag), there can be zero or more message tags. These tags have two attributes, a time attribute set to the time the message was generated in ISO 8601 format, and a type attribute whose value is one of LOG, WARNING, or FATAL. The message tag's text contains the message itself.

Here is an example report of a test suite run. This test suite had just one test case, and one of the screenshot verifications failed. We have changed the line-wrapping and indentation for better reproduction in the manual.

<?xml version="1.0" encoding="UTF-8"?>
<SquishReport version="2.1">
  <test name="tst_case1">
    <prolog time="2011-04-28T15:30:44+01:00"/>
      <message line="9" type="LOG" time="2011-04-28T15:30:44+01:00"
	file="/squish/examples/qt/addressbook/suite_js/tst_case1/test.js">
	<description>Successfully passed regression #13248</description>
      </message>
    <verification line="33" type="" name=""
	file="/squish/examples/qt/addressbook/suite_js/tst_case1/test.js">
      <result type="PASS" time="2011-04-28T15:31:46+01:00">
        <description>Verified</description>
        <description type="DETAILED">True expression</description>
      </result>
    </verification>
    <verification line="48" type="screenshot"  name="VP1"
	file="/squish/examples/qt/addressbook/suite_js/tst_case1/test.js">
      <result type="PASS" time="2011-04-28T15:30:48+01:00">
	<description>VP1: Screenshot comparison of
':Address Book - MyAddresses.adr.File_QToolBar' passed</description>
	<description type="DETAILED">Screenshots are considered identical
</description>
	<description type="object">
:Address Book - MyAddresses.adr.File_QToolBar</description>
	<description type="failedImage">Screenshots are considered identical
</description>
      </result>
    </verification>
    <verification line="52" type="" name=""
	file="/squish/examples/qt/addressbook/suite_js/tst_case1/test.js">
      <result type="PASS" time="2011-04-28T15:44:34+01:00">
        <description>Comparison</description>
	<description type="DETAILED">'Crisp' and 'Crisp' are equal
</description>
      </result>
    </verification>
    <verification line="56" type="screenshot" name="VP2"
	file="/squish/examples/qt/addressbook/suite_js/tst_case1/test.js">
      <result type="FAIL" time="2011-04-28T15:30:49+01:00">
	<description>VP2: Screenshot comparison of
':Address Book - Add_Dialog' failed</description>
	<description type="DETAILED">Screenshots do not match.
Differing screenshot saved as '/squish/examples/qt/addressbook/suite_js/
tst_case1/verificationPoints/failedImages/failed_2.png'</description>
        <description type="object">:Address Book - Add_Dialog</description>
	<description type="failedImage">/squish/examples/qt/addressbook/
	    suite_js/tst_case1/verificationPoints/failedImages/failed_2.png
</description>
      </result>
    </verification>
    <epilog time="2011-04-28T15:30:50+01:00"/>
  </test>
</SquishReport>

In examples/regressiontesting you can find example scripts that execute the addressbook test suite on different machines and present the daily output on a Web page by post-processing the XML and generating HTML. The How to Do Automated Batch Testing (Section 17.24) section explains how to automate test runs and process the test results to produce HTML that can be viewed in any web browser.