4.8. Squish for iOS Tutorials

4.8.1. Tutorial: Starting to Test iOS Applications
4.8.2. Tutorial: Designing Behavior Driven Development (BDD) Tests
4.8.3. Tutorial: Migration of existing tests to BDD

Learn how to test iOS applications.

4.8.1. Tutorial: Starting to Test iOS Applications

[Warning]iOS Application Testing

Note that iOS apps can only be tested on Apple hardware—either on the devices themselves or inside the iOS Simulator that runs on Mac OS X.

[Note]Web Testing

If you want to test web applications on iOS (e.g., Safari applications), you will need to use the Squish for Web edition and do some additional setup. After installing Squish for Web, see the iOS web-specific installation instructions (Installation for Web Testing using browsers on mobile devices (Section 3.1.8)). Testing web applications on iOS is the same as for any other web platform, and is introduced in the Tutorial: Starting to Test Web Applications (Section 4.5.1) chapter.

This tutorial will show you how to create, run, and modify tests for an example iOS application. In the process you will learn about Squish's most frequently used features so that by the end of the tutorial you will be able to start writing your own tests for your own applications.

This chapter presents many of the major concepts behind Squish and provides the information you need to get started using Squish for testing your own applications. This tutorial does not discuss all of Squish's features, and those that it does cover are not covered in full detail. After reading this tutorial we recommend reading the User Guide (Chapter 5), and at least skimming the API Reference Manual (Chapter 6) and the Tools Reference Manual (Chapter 7), so that you are familiar with all the features that Squish has to offer, even if you don't need to use them all straight away.

This tutorial is divided into several sections. If you are new to Squish (or to the new IDE introduced in Squish 4), it is best to read all of them. If you are already using Squish you might want to just skim the tutorial, stopping only to read those sections that cover any new features that you haven't used before—or you could just skip straight to the User Guide (Chapter 5).

Whenever we show how to achieve something using the IDE we will always follow with an explanation of how to do the same thing using the command line tools. Using an IDE is the easiest and best way to start, but once you build up lots of tests you will want to automate them (e.g., doing nightly runs of your regression test suite), so it is worth knowing how to use the command line tools since they can be run from batch files or shell scripts.

For this chapter we will use a simple Elements application as our AUT. The application is shipped with Squish in squish/examples/ios/elements. This is a very basic application that shows information about the elements (Hydrogen, Helium, etc.), and that allows users to scroll through the elements by name or by category or to search for an element by typing in some search text. Despite the application's simplicity, it has many of the key features that most standard iOS applications have: buttons to click, a list to scroll, and an edit box for entering text. All the ideas and practices that you learn to test this application can easily be adapted to your own applications. And naturally, the User Guide (Chapter 5) has many more examples.

The screenshot shows the application in action; the left hand image shows an element being displayed and the right hand image shows the application's main window.

The iOS Elements.app example in the simulator.
[Note]Using the Examples

The first time you try running a test for one of the example AUTs you might get a fatal error that begins “Squish couldn't find the AUT to start...”. If this occurs, click the Test Suite Settings toolbar button, and in the Application Under Test (AUT) section choose the AUT from the combobox if it is available, or click the Browse... button and choose the AUT's executable via the file open dialog that pops up. (Some versions of Squish will automatically pop up this dialog if no AUT is specified.) This only needs to be done once per example AUT. (This doesn't arise when testing your own AUTs.)

In the following sections we will create a test suite and then create some tests, but first we will very briefly review some key Squish concepts.

4.8.1.1. Squish Concepts

To perform testing, two things are required:

  1. an application to test—known as the Application Under Test (AUT), and

  2. a test script that exercises the AUT.

One fundamental aspect of Squish's approach is that the AUT and the test script that exercises it are always executed in two separate processes. This ensures that even if the AUT crashes, it should not crash Squish. (In such cases the test script will fail gracefully and log an error message.) In addition to insulating Squish and test scripts from AUT crashes, running the AUT and the test script in separate processes brings other benefits. For example, it makes it easier to store the test scripts in a central location, and it also makes it possible to perform remote testing on different machines and platforms. The ability to do remote testing is particularly useful for testing AUTs that run on multiple platforms, and also when testing AUTs that run on embedded devices.

Squish runs a small server (squishserver) that handles the communication between the AUT and the test script. The test script is executed by the squishrunner tool, which in turn connects to the squishserver. The squishserver starts the AUT and injects the Squish hook into it. The hook is a small library that makes the AUT's live running objects accessible and that can communicate with the squishserver. With the hook in place, the squishserver can query AUT objects regarding their state and can execute commands—all on behalf of the squishrunner. And the squishrunner itself requests that the AUT perform whatever actions the test script specifies. All the communication takes place using network sockets, which means that everything can be done on a single machine, or the test script can be executed on one machine and the AUT can be tested over the network on another machine.

The following diagram illustrates how the individual Squish tools work together.

From the test engineer's perspective this separation is not noticeable, since all the communication is handled transparently behind the scenes.

Tests can be written and executed using the Squish IDE, in which case the squishserver is started and stopped automatically, and the test results are displayed in the Squish IDE's Test Results view (Section 8.2.15). The following diagram illustrates what happens behind the scenes when the Squish IDE is used.

The Squish tools can also be used from the command line without the Squish IDE—this is useful for those testers who prefer to use their own tools (for example, their favorite editor), and also for performing automatic batch testing (for example, when running regression tests overnight). In these cases, the squishserver must be started manually, and stopped when all the testing is complete (or, if preferred, started and stopped for each test).
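For unattended batch runs, the start/run/stop cycle can be scripted. The sketch below is a minimal Python wrapper, assuming the Squish tools are on the PATH and that squishserver accepts the --stop option (see squishserver (Section 7.4.2)); the suite and test case names are the tutorial's examples.

```python
import subprocess

SUITE = "suite_py"              # tutorial example suite
CASES = ["tst_general"]         # tutorial example test case(s)

def server_cmd(*args):
    """Build a squishserver invocation (no arguments = start it)."""
    return ["squishserver", *args]

def runner_cmd(suite, case):
    """Build a squishrunner invocation for one test case."""
    return ["squishrunner", "--testsuite", suite, "--testcase", case]

def run_all():
    server = subprocess.Popen(server_cmd())        # start squishserver
    try:
        for case in CASES:
            subprocess.run(runner_cmd(SUITE, case))
    finally:
        subprocess.run(server_cmd("--stop"))       # stop squishserver

# run_all()   # uncomment to execute the batch run
```

A wrapper like this can then be invoked from cron or any other scheduler to run the regression suite overnight.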

To make it possible for test scripts to query and control an AUT, Squish must be able to access the AUT's internals, and this is made possible by the use of bindings. Bindings are in effect libraries that provide access to the objects—and in turn to the objects' properties and methods—that are available from a GUI toolkit, or from the AUT itself.

There are two sets of bindings that are of interest when developing tests using Squish.

  1. GUI toolkit bindings—Squish provides bindings for all the GUI toolkits it supports, including Qt, Java AWT/Swing, Java SWT, Web, etc. This means that all the standard objects (including the GUI widgets) provided by these toolkits can be queried and controlled by Squish test scripts.

  2. AUT-specific bindings—it is possible to create bindings that provide access to the AUT's own API for those cases where the toolkit's bindings don't provide sufficient functionality for proper testing. (Note that for Java- and Qt-based AUTs Squish automatically creates bindings to the AUT's objects—including custom classes; see How to Create and Access Application Bindings (Section 5.25).)

AUT-specific bindings are rarely needed in practice, but if they really are necessary, Squish provides a tool to make the process as simple as possible. The tool, squishidl (Section 7.4.3), is used to instrument the AUT (and any additional components) to generate AUT-specific bindings. The generated bindings library is seamlessly integrated with the standard GUI toolkit bindings and in the same way will automatically be loaded on demand by the Squish test tools.

When Squish automatically creates bindings to AUT classes, for Qt applications this means that the properties and slots of the AUT's custom widgets can be accessed without having to take any special action, and for Java AUTs this means that objects of custom classes are automatically available in test scripts without needing to be registered.

[Note]Terminology

The Squish documentation mostly uses the term widget when referring to GUI objects (i.e., buttons, menus, menu items, labels, table controls, etc.). Windows users might be more familiar with the terms control and container, but here we use the term widget for both. Similarly, Mac OS X users may be used to the term view; again, we use the term widget for this concept.

4.8.1.1.1. Making an Application Testable

In most cases, nothing special needs to be done to make an application testable, since the toolkit's API (e.g., Qt) provides enough functionality to implement and record test scripts. The connection to the squishserver is also established automatically, when the Squish IDE starts the AUT.

[Note]The Squish Directory

Throughout the manual, we often refer to the SQUISH directory. This means the directory where Squish is installed, which might be C:\Squish, /usr/local/squish, /opt/local/squish, or somewhere else, depending on where you installed it. The exact location doesn't matter, so long as you mentally translate the SQUISH directory to whatever the directory really is when you see paths and filenames in this manual.

4.8.1.2. Creating a Test Suite

A test suite is a collection of one or more test cases (tests). Using a test suite is convenient since it makes it easy to share test scripts and test data between tests.

Here, and throughout the tutorial, we will start by describing how to do things using the IDE, with the information for command line users following.

To begin with start up the Squish IDE, either by clicking or double-clicking the squishide icon, or by launching squishide from the taskbar menu or by executing open squishide.app on the command line—whichever you prefer. Once Squish starts up it will look similar to the screenshot—but probably slightly different depending on the Mac OS X version, colors, fonts, and theme that you use, and so on.

The Squish IDE with no Test Suites

Once Squish has started, click File|New Test Suite... to pop up the New Test Suite wizard shown below.

The New Test Suite wizard's Name and Directory page

Enter a name for your test suite and choose the folder where you want the test suite to be stored. In the screenshot we have called the test suite suite_py and will put it inside the squish-ios-test folder; the actual example code is in Squish's examples/ios/elements folder. (For your own tests you might use a more meaningful name such as "suite_elements"; we chose "suite_py" because for the sake of the tutorial we will create several suites, one for each scripting language that Squish supports.) Naturally, you can choose whatever name and folder you prefer. Once the details are complete, click Next to go on to the Toolkit (or Scripting Language) page.

[Note]Toolkits

Different versions of Squish support different toolkits—if your version only supports one toolkit, this page may not appear, and you may be taken directly to the Scripting Language page instead. And if you do get this page, the toolkits listed on it might be different from those shown here, depending on what options you built Squish with.

The New Test Suite wizard's Toolkit page

If you get this wizard page, click the toolkit your AUT uses. For this example, you must click iOS since we are testing an iOS application. Then click Next to go to the Scripting Language page.

[Note]Scripting Languages

Squish supports several different scripting languages, and different installations may include support for some or all of these—so the scripting languages shown in the screenshot may be different from those shown by your version of Squish.

The New Test Suite wizard's Scripting Language page

Choose whichever scripting language you want—the only constraint is that you can only use one scripting language per test suite. (So if you want to use multiple scripting languages, just create multiple test suites, one for each scripting language you want to use.) The functionality offered by Squish is the same for all languages. Having chosen a scripting language, click Next once more to get to the wizard's last page.

The New Test Suite wizard's AUT page

If you are creating a new test suite for an AUT that Squish already knows about, simply click the combobox to pop-down the list of AUTs and choose the one you want. If the combobox is empty or your AUT isn't listed, click the Browse button to the right of the combobox—this will pop-up a file open dialog from which you can choose your AUT. In the case of iOS programs, the AUT is the application's executable (e.g., Elements on iOS). Once you have chosen the AUT, click Finish and Squish will create a sub-folder with the same name as the test suite, and will create a file inside that folder called suite.conf that contains the test suite's configuration details. Squish will also register the AUT with the squishserver. The wizard will then close and Squish's IDE will look similar to the screenshot below.

The Squish IDE with the suite_py test suite

We are now ready to start creating tests. Read on to learn how to create test suites without using the IDE, or skip ahead to Recording Tests and Verification Points (Section 4.8.1.3) if you prefer.

[Note]For command-line users

To create a new test suite from the command line, three steps are necessary: first, create a directory for the test suite; second, create a test suite configuration file; and third, register the AUT with squishserver.

  1. Create a new directory to hold the test suite—the directory's name should begin with suite. In this example we have created the squish/examples/ios/elements/suite_py directory for Python tests. (We also have similar subdirectories for other languages but this is purely for the sake of example, since normally we only use one language for all our tests.)

  2. Create a plain text file (ASCII or UTF-8 encoding) called suite.conf in the suite subdirectory. This is the test suite's configuration file, and at the minimum it must identify the AUT, the scripting language used for the tests, and the wrappers (i.e., the GUI toolkit or library) that the AUT uses. The format of the file is key = value, with one key–value pair per line. For example:

    AUT      = Elements
    LANGUAGE = Python
    LAUNCHER = iphonelauncher
    WRAPPERS = iOS
    

    The AUT is the iOS executable. The LANGUAGE can be set to whichever one you prefer—currently Squish supports JavaScript, Python 2, Perl, Ruby, and Tcl, though the precise availability may vary depending on how Squish was installed. The WRAPPERS should be set to iOS. Make sure you set the LAUNCHER to iphonelauncher.

  3. Register the AUT with the squishserver. [13] This is done by executing the squishserver on the command line with the --config option and the addAUT command. For example, assuming we are in the squish directory on Mac OS X:

    squishserver --config addAUT Elements \
    squish/examples/ios/elements
    

    We must give the addAUT command the name of the AUT's executable and—separately—the AUT's path. In this case the path is to the executable that was added as the AUT in the test suite configuration file. (For more information about application paths, see AUTs and Settings (Section 7.3) in the User Guide (Chapter 5), and for more about the squishserver's command line options see squishserver (Section 7.4.2) in the Tools Reference Manual (Chapter 7).)
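As an aside, the key = value format used by suite.conf in step 2 is simple enough to read with a few lines of Python. This is just a sketch of the format for illustration, not Squish's own parser:

```python
def parse_suite_conf(text):
    """Parse the simple 'key = value' format used by suite.conf."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue                      # skip blank lines
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

# The tutorial's suite.conf from step 2:
conf = parse_suite_conf("""\
AUT      = Elements
LANGUAGE = Python
LAUNCHER = iphonelauncher
WRAPPERS = iOS
""")
```

After parsing, conf["AUT"] is "Elements" and conf["LAUNCHER"] is "iphonelauncher"—the same values squishserver and squishrunner read from the file.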

We are now ready to record our first test.

4.8.1.3. Recording Tests and Verification Points

Squish records tests using the scripting language that was specified for the test suite, rather than using a proprietary language. Once a test has been recorded we can run the test and Squish will faithfully repeat all the actions that we performed when recording the test, but without all the pauses that humans are prone to but which computers don't need. It is also possible—and very common—to edit recorded tests, or to copy parts of recorded tests into manually created tests, as we will see later on in the tutorial.

Recordings are always made into existing tests, so we must begin by creating a new "empty" test. There are two ways we can do this. One way is to click File|New Test Case.... This will pop up the New Squish Test Case wizard (Section 8.3.6)—simply enter the name for the test case and then click Finish. Another way is to click the New Test Case toolbar button (to the right of the "Test Cases" label in the Test Suites view); this will create a new test case with a default name (which you can easily change). Use one of these methods and give the new test case the name “tst_general”. Squish automatically creates a sub-folder inside the test suite's folder with this name and also a test file, in this case test.py. (If we had chosen JavaScript as our scripting language the file would be called test.js, and equivalently for Perl, Ruby, or Tcl.)

The Squish IDE with the empty tst_general test case

To make the test script file (e.g., test.py) appear in an Editor view (Section 8.2.6), click—or double-click depending on the Window|Preferences|General|Open mode setting—the test case. (Incidentally, the checkboxes are used to control which test cases are run when the Run Test Suite toolbar button is clicked; we can always run a single test case by clicking its Run Test button.) Initially, the script is empty. If we create a test manually, we must define a main function. The name "main" is special to Squish—tests may contain as many functions and other code as we like (providing it is legal for the scripting language), but when the test is executed (i.e., run), Squish always executes the main function. This is actually very convenient since it means we are free to create other functions, import libraries, and so on, without problems. It is also possible to share commonly used code between test scripts—this is covered in the User Guide (Chapter 5). (In fact, two other function names are special to Squish, cleanup and init; see Tester-Created Special Functions (Section 6.1) for details.)
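A minimal hand-written test.py therefore only needs a main function (init and cleanup are optional). The sketch below uses object names taken from the script Squish records later in this tutorial; the startApplication, waitForApplicationLaunch, waitForObject, clickObject, findObject, and test names are supplied by Squish at run time, so this file only runs under squishrunner:

```python
def init():
    pass                         # optional: runs before main()

def main():
    # Squish always starts execution here.
    startApplication("Elements")
    waitForApplicationLaunch()
    clickObject(waitForObject(":Elements by name_UITableViewCell"))
    clickObject(waitForObject(":Argon (Ar)_UITableViewCell"))
    test.compare(findObject(":Noble Gases_UILabel").text.stringValue,
                 "Noble Gases")

def cleanup():
    pass                         # optional: runs after main()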

Once the new empty test case has been created we are now free to write test code manually, or to record a test. If we choose to record we can either replace all the test's code with the recorded code, or insert recorded code into the middle of some existing test code. We will only concern ourselves with recording and replacing in the tutorial.

[Note]For command-line users

Creating a new test case from the command line is an easy two-step process: first, create a test case directory; and second, create an empty test case script.

  1. Create a new subdirectory inside the test suite directory. For example, inside the squish/examples/ios/elements/suite_py directory, create the tst_general directory.

  2. Inside the test case's directory create an empty file called test.py (or test.js if you are using the JavaScript scripting language, and similarly for the other languages).

Before we dive into recording let's briefly review our very simple (and far from thorough) test plan:

  1. Click the Elements by name option.

  2. Click the Argon element.

  3. Verify that the Category is “Noble Gases”.

  4. Return to the main window.

  5. Click Search.

  6. Enter a search term of “pluto” and click the Search button.

  7. Verify that element 94, Plutonium, is found.

  8. Finish.

To verify some aspect of a widget's state, you must choose the widget and add a verification point. Squish provides two ways of doing this, so we will show both.

We are now ready to record our first test. Click the Record Test Case toolbar button that's to the right of the tst_general test case shown in the Test Cases list of the Test Suites view (Section 8.2.16). This will cause Squish to run the AUT so that you can interact with it. Once the simulator has started and the Elements AUT is running, perform the following actions—and don't worry about how long it takes since Squish doesn't record idle time:

  1. Click the Elements by name item. Once the list of elements appears, click the Argon (Ar) item.

  2. When the Argon screen appears you want to verify that it has the correct Category. For this verification you will take a slightly long-winded approach. First, click the Insert Object Properties Verification Point toolbar button in the Squish Control Bar.

    This makes the Squish IDE reappear. In the Application Objects view, expand the Elements item (by clicking its gray triangle), then the UI_Window_0 item, then the UILayoutContainerView_0 item, then the UINavigationTransitionView_0 item, then the UIViewControllerWrapperView_0 item, and then the UITableView_0 item. Now the table's items should be visible. Now expand the Category_UITableViewCell_8 item and then the UIView_1 item. Now click the Noble Gases_UILabel_1 item. At last we've found the item we want. (Don't worry, when you do the next verification you'll make Squish find the item for you!)

  3. In the Properties view expand the label's text property. Now click the checkbox beside the stringValue subproperty. Squish should now look similar to the screenshot.

    The Squish IDE showing a verification point about to be inserted

    Now click the Insert button (at the top-right of the Verification Point Creator view (Section 8.2.19)). This will insert the Category verification into the recorded script. The Squish IDE will disappear and you can continue to record interactions with the AUT.

  4. Back in the Elements AUT, click Name to return to the list of elements by name, then click Main to return to the main window.

  5. Click the Search item and in the Search window enter the text “pluto” in the Name Contains line edit. Then click the Search button.

  6. When the Search Results window appears you want to verify that element 94, Plutonium, was found. This time, you will make Squish find the relevant object for you. Once again click the Insert Object Properties Verification Point toolbar button in the Squish Control Bar. As before, this will make the Squish IDE appear.

  7. In the Application Objects view click the Pick toolbar button (it looks like an eye dropper). This will make the Squish IDE disappear. Move the mouse over the “94: Plutonium (Pu)” text in the Search Results window and click this text. The Squish IDE will now reappear and Squish will have found and highlighted the relevant widget.

  8. In the Properties view expand the widget's text property. Now click the checkbox beside the stringValue subproperty. Squish should now look similar to the screenshot.

    The Squish IDE showing a verification point about to be inserted

    Now click the Insert button (at the top-right of the Verification Point Creator view (Section 8.2.19)). This will insert the verification into the recorded script. The Squish IDE will disappear and you can continue to record interactions with the AUT.

  9. We have now finished our test plan and inserted the verifications. Click the Stop Recording toolbar button in the Squish Control Bar. The Elements AUT and the simulator will stop and the Squish IDE will reappear.

Once you stop recording, the recorded test will appear in Squish's IDE as the screenshot illustrates. (Note that the exact code that is recorded will vary depending on how you interact with the AUT and which scripting language you have chosen.)

When the recording is finished you can immediately play it back to see that it works as expected by clicking the tst_general's Play button in the Test Cases view.

The Squish IDE showing the results of playback with two verification points

If the recorded test doesn't appear, click (or double-click depending on your platform and settings) the tst_general test case—or click the test.py file in the Test Case Resources list—this will make Squish show the test's test.py file in an editor window as shown in the screenshot.

Now that we've recorded the test we are able to play it back, i.e., run it. This is useful in itself: if playback fails, it might mean that the application has been broken. Furthermore, the two verifications we put in will be checked on playback, as the screenshot shows.

Inserting verification points during test recording is very convenient. Here we inserted two separately, but we can insert as many as we like as often as we like during the test recording process. However, sometimes we might forget to insert a verification, or later on we might want to insert a new verification. We can easily insert additional verifications into a recorded test script as we will see in the next section, Inserting Additional Verification Points (Section 4.8.1.4).
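Since a verification point is ultimately just scripted code, one can also be written by hand. The helper below is a sketch equivalent to what the Verification Point Creator inserts, using an object name from the recorded script; waitFor, findObject, and test are supplied by Squish at run time, so it only runs under squishrunner:

```python
def verify_label(object_name, expected, timeout=20000):
    """Wait for the object to exist, then compare its displayed text."""
    waitFor("object.exists('%s')" % object_name, timeout)
    test.compare(findObject(object_name).text.stringValue, expected)

# Example, with the object name Squish recorded:
# verify_label(":Noble Gases_UILabel", "Noble Gases")
```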

Before going further we will look at how to record a test from the command line. Then we will see how to run a test, and we will also look at some of the code that Squish generated to record the test and discuss some of its features.

[Note]For command-line users

First and foremost, the squishserver must always be running when recording or running a test. This is handled automatically by the Squish IDE, but for command line users the squishserver must be started manually. (See squishserver (Section 7.4.2) for further details.)

To record a test from the command line we execute the squishrunner program and specify the test suite we want to record inside and the name we want to give to the test case. For example (assuming we are in the directory that contains the test suite's directory):

squishrunner --testsuite suite_py --record tst_general --useWaitFor

It is always best to record using the --useWaitFor option since this records calls to the waitForObject function, which is more reliable than the snooze function that is the default for historical reasons. (Note that the Squish IDE automatically uses the waitForObject function.)
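The difference is easy to see side by side. This sketch contrasts the two styles (snooze, waitForObject, clickObject, and findObject are Squish script functions; the two-second delay is an illustrative guess, which is exactly the problem with fixed pauses):

```python
# Fragile: a fixed pause that breaks whenever the AUT is slower
# than expected, and wastes time whenever it is faster.
def click_search_with_snooze():
    snooze(2)
    clickObject(findObject(":Search_UINavigationButton"))

# Robust: waits until the object is visible and enabled (up to a
# timeout), then clicks it as soon as it is ready.
def click_search_with_wait():
    clickObject(waitForObject(":Search_UINavigationButton"))
```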

To run a test case in the IDE just click the Run Test Case toolbar button (the green right-pointing triangle that appears when the test case is selected in the Test Suites view (Section 8.2.16)). This will cause Squish to run the AUT and replay every action (omitting human idle time, but allowing just enough time for the GUI toolkit to keep up). It is worth trying out since it has quite an impressive effect, especially if you haven't seen it in action before.

When we have two or more test cases we can run them individually by clicking the test case we want to run to select it and then clicking the Run Test button, or we can run them all (one after the other) by clicking the Run Test Suite toolbar button (which is above and slightly to the left of the Run Test button). (Actually, only those test cases that are checked are run by clicking the Run Test Suite toolbar button, so we can easily run a particular group of tests.)

[Note]For command-line users

As noted earlier, the squishserver must always be running when recording or running a test. (See squishserver (Section 7.4.2) for further details.)

To play back a recorded test from the command line we execute the squishrunner program and specify the test suite our recorded script is in and the test case we want to play. For example (assuming we are in the directory that contains the test suite's directory):

squishrunner --testsuite suite_py --testcase tst_general

If you look at the code in the screenshot (or the code snippet shown below) you will see that it consists of lots of waitForObject calls as parameters to various other calls such as clickObject and type. The waitForObject function waits until a GUI object is ready to be interacted with (i.e., becomes visible and enabled), and is then followed by some function that interacts with the object. The typical interactions are clicking a button or typing in some text. (For a complete overview of Squish's script commands see the User Guide (Chapter 5), the API Reference Manual (Chapter 6), and the Tools Reference Manual (Chapter 7).) Objects are identified by names that Squish generates. (See How to Identify and Access Objects (Section 5.1) for full details.)

[Note]Scripting Language Support

Although the screenshots only show the Python test suite in action, for the code snippets quoted here and throughout the tutorial, we show the code for all the scripting languages that Squish supports. In practice you would normally only use one of them of course, so feel free to just look at the snippets in the language you are interested in and skip the others. (In the HTML version of this manual you can use the combobox at the top of the page to select the language you use—this will hide the code snippets in other languages.)

Python

def main():
    startApplication("Elements")
    ctx_1 = waitForApplicationLaunch()
    clickObject(waitForObject(":Elements by name_UITableViewCell"))
    clickObject(waitForObject(":Argon (Ar)_UITableViewCell"))
    waitFor("object.exists(':Noble Gases_UILabel')", 20000)
    test.compare(findObject(":Noble Gases_UILabel").text.stringValue, "Noble Gases")
    clickObject(waitForObject(":Name_UINavigationItemButtonView"))
    clickObject(waitForObject(":Main_UINavigationItemButtonView"))
    clickObject(waitForObject(":Search_UITableViewCell"))
    clickObject(waitForObject(":Name Contains_UITextField"), 113, 14)
    type(waitForObject(":Name Contains_UITextField"), "pluto")
    clickObject(waitForObject(":Search_UINavigationButton"))
    waitFor("object.exists(':94: Plutonium (Pu)_UITableViewCell')", 20000)
    test.compare(findObject(":94: Plutonium (Pu)_UITableViewCell").text.stringValue, "94: Plutonium (Pu)")


JavaScript
function main()
{
    startApplication("Elements");
    var ctx_1 = waitForApplicationLaunch();
    clickObject(waitForObject(":Elements by name_UITableViewCell"));
    clickObject(waitForObject(":Argon (Ar)_UITableViewCell"));
    waitFor("object.exists(':Noble Gases_UILabel')", 20000);
    test.compare(findObject(":Noble Gases_UILabel").text.stringValue,
            "Noble Gases");
    clickObject(waitForObject(":Name_UINavigationItemButtonView"));
    clickObject(waitForObject(":Main_UINavigationItemButtonView"));
    clickObject(waitForObject(":Search_UITableViewCell"));
    clickObject(waitForObject(":Name Contains_UITextField"), 113, 14);
    type(waitForObject(":Name Contains_UITextField"), "pluto");
    clickObject(waitForObject(":Search_UINavigationButton"));
    waitFor("object.exists(':94: Plutonium (Pu)_UITableViewCell')", 20000);
    test.compare(findObject(":94: Plutonium (Pu)_UITableViewCell").text.stringValue,
            "94: Plutonium (Pu)");
}

Perl
sub main {
    startApplication("Elements");
    my $ctx_1 = waitForApplicationLaunch();
    clickObject(waitForObject(":Elements by name_UITableViewCell"));
    clickObject(waitForObject(":Argon (Ar)_UITableViewCell"));
    waitFor("object::exists(':Noble Gases_UILabel')", 20000);
    test::compare(findObject(":Noble Gases_UILabel")->text->stringValue,
        "Noble Gases");
    clickObject(waitForObject(":Name_UINavigationItemButtonView"));
    clickObject(waitForObject(":Main_UINavigationItemButtonView"));
    clickObject(waitForObject(":Search_UITableViewCell"));
    clickObject(waitForObject(":Name Contains_UITextField"), 113, 14);
    type(waitForObject(":Name Contains_UITextField"), "pluto");
    clickObject(waitForObject(":Search_UINavigationButton"));
    waitFor("object::exists(':94: Plutonium (Pu)_UITableViewCell')", 20000);
    test::compare(findObject(":94: Plutonium (Pu)_UITableViewCell")->text->stringValue,
        "94: Plutonium (Pu)");
}

Ruby
# encoding: UTF-8
require 'squish'
include Squish

def main
    startApplication("Elements")
    ctx_1 = waitForApplicationLaunch
    clickObject(waitForObject(":Elements by name_UITableViewCell"))
    clickObject(waitForObject(":Argon (Ar)_UITableViewCell"))
    waitFor("Squish::Object.exists(':Noble Gases_UILabel')", 20000)
    Test.compare(findObject(":Noble Gases_UILabel").text.stringValue,
                 "Noble Gases")
    clickObject(waitForObject(":Name_UINavigationItemButtonView"))
    clickObject(waitForObject(":Main_UINavigationItemButtonView"))
    clickObject(waitForObject(":Search_UITableViewCell"))
    clickObject(waitForObject(":Name Contains_UITextField"), 113, 14)
    type(waitForObject(":Name Contains_UITextField"), "pluto")
    clickObject(waitForObject(":Search_UINavigationButton"))
    waitFor("Squish::Object.exists(':94: Plutonium (Pu)_UITableViewCell')", 20000)
    Test.compare(findObject(":94: Plutonium (Pu)_UITableViewCell").text.stringValue,
                 "94: Plutonium (Pu)")
end

Tcl
proc main {} {
    startApplication "Elements"
    set ctx_1 [waitForApplicationLaunch]
    invoke clickObject [waitForObject ":Elements by name_UITableViewCell"]
    invoke clickObject [waitForObject ":Argon (Ar)_UITableViewCell"]
    waitFor {object exists ":Noble Gases_UILabel"} 20000
    test compare [property get [property get \
        [findObject ":Noble Gases_UILabel"] text] stringValue] "Noble Gases"
    invoke clickObject [waitForObject ":Name_UINavigationItemButtonView"]
    invoke clickObject [waitForObject ":Main_UINavigationItemButtonView"]
    invoke clickObject [waitForObject ":Search_UITableViewCell"]
    invoke clickObject [waitForObject ":Name Contains_UITextField"] 113 14
    invoke type [waitForObject ":Name Contains_UITextField"] "pluto"
    invoke clickObject [waitForObject ":Search_UINavigationButton"]
    waitFor {object exists ":94: Plutonium (Pu)_UITableViewCell"} 20000
    test compare [property get [property get \
        [findObject ":94: Plutonium (Pu)_UITableViewCell"] text] stringValue] \
        "94: Plutonium (Pu)"
}

We have quoted the entire test script here since it is so short. Every Squish test must have a main function, which is what Squish calls to begin the test. Here the recorded test script begins in the standard way by calling the startApplication function. Normally this alone is sufficient to start the AUT, but because this AUT (Elements) is executed inside a simulator, Squish wants to be sure that the simulator has started and launched the AUT, so it includes a call to the waitForApplicationLaunch function, which returns only once the AUT has started successfully. The function's return value is the AUT's context object (see Application Context (Section 6.3.11)); it is not used in this test.

The rest of the function calls are concerned with replaying the interactions that were recorded, in this case, clicking widgets and typing in text using the clickObject and type functions. In addition, the verifications we made have been recorded as a call to the waitFor function followed by a call to the test.compare function. When we write test scripts by hand we normally use the waitForObject function rather than the waitFor function, as we will see in the next section.
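The hand-written style looks like this (a Python sketch based on the Category verification above; the other scripting languages follow the same pattern):

Python

    # Hand-written style: waitForObject blocks until the label is
    # visible and enabled (or raises a catchable exception on timeout),
    # so no separate waitFor call is needed before the comparison.
    label = waitForObject(":Noble Gases_UILabel")
    test.compare(label.text.stringValue, "Noble Gases")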

[Note]Note

In the code snippet it looks as though Squish has to look up the same objects by name time after time. In fact Squish caches names, so after a name has been looked up once, subsequent lookups are very fast. Another thing to notice is that there are no explicit delays. (It is possible to force a delay using Squish's snooze function.) This is because the waitForObject function delays until the object it is given is ready, thus allowing Squish to run as fast as the GUI toolkit can cope with, but no faster.
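To illustrate the difference, compare a fixed delay with an event-driven wait (a Python sketch; the two-second value is arbitrary):

Python

    snooze(2)  # always pauses for two seconds, even if the GUI is ready sooner
    clickObject(waitForObject(":Search_UITableViewCell"))  # waits only as long as needed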

Another point to notice is that all the object names begin with a colon. This identifies them as symbolic names. Squish supports several naming schemes, all of which can be used—and mixed—in scripts. The advantage of using symbolic names is that if the application changes in a way that results in different names being generated, we can simply update Squish's Object Map (which relates symbolic names to real names), and thereby avoid the need to change our test scripts. (See the Object Map (Section 7.10) and the Object Map view (Section 8.2.9) for more about the Object Map.)

4.8.1.4. Inserting Additional Verification Points

In the previous section we saw how easy it is to insert verification points during the recording of test scripts. Verification points can also be inserted into existing test scripts, either by setting a breakpoint and using the Squish IDE, or simply by editing a test script and putting in calls to Squish's test functions such as test.compare and test.verify.

Squish supports three kinds of verification points: those that verify that a particular condition holds—known as "Object Property Verifications"; those that verify that an entire table has the contents we expect—known as "Table Verifications"; and those that verify that two images match—known as "Screenshot Verifications". Although the screenshot verifications are very impressive, by far the most commonly used kind are object property verifications, and it is these that we will cover in the tutorial. (See also How to Create and Use Table Verifications (Section 5.22.2) and How to Do Screenshot Verifications (Section 5.22.3).)

In fact, object property verification points (which we'll just call "verification points" in the rest of the tutorial) are simply calls to the test.compare function with two arguments: the value of a particular property for a particular object, and an expected value. We can manually insert calls to the test.compare function in a recorded or hand-written script, or we can get Squish to insert them for us using the IDE. In the previous section we showed how to use the Squish IDE to insert verifications during recording. Here we will first show how to use the Squish IDE to insert a verification into an existing test script, and then we will show how to insert a verification by hand.

Before asking Squish to insert verification points, it is best to make sure that we have a list of what we want to verify and when. There are many potential verifications we could add to the tst_general test case, but since our concern here is simply to show how to do it, we will only do two—we will verify that the Argon element's Symbol is “Ar” and that its Number is 18. We will put these verifications immediately after the one we inserted during recording that verified its Category.

To insert a verification point using the IDE we start by setting a breakpoint in the script (whether recorded or manually written, it does not matter to Squish) at the point where we want to verify.

For clarity we have created a new test called tst_argon. First we clicked the Squish IDE's New Test Case toolbar button, then we renamed the test, and finally we copied and pasted the entire tst_general's code into the new test. So, at this point both tests have the same code, but we will modify the tst_argon test by adding new verifications to it. (In practice you would just add the verifications to an existing test.)

The Squish IDE showing the tst_argon test case with a breakpoint

As the above screenshot shows, we have set a breakpoint at line 9. This is done simply by Ctrl+Clicking the line number and then clicking the Add Breakpoint menu item in the context menu. We chose this line because it follows the first verification point we added during recording, so at this point the details of Argon will be visible on the screen. (Note that your line number may be different if you recorded the test in a different way.)

Having set the breakpoint, we now run the test as usual by clicking the Run Test button, or by clicking the Run|Run Test Case menu option. Unlike a normal test run, the test will stop when the breakpoint is reached (i.e., at line 9, or at whatever line you set it), and Squish's main window will reappear (probably obscuring the AUT). At this point the Squish IDE will automatically switch to the Squish Test Debugging Perspective (Section 8.1.2.3).

[Note]Perspectives and Views

The Squish IDE works just like the Eclipse IDE. This provides a much more sophisticated user interface than the old Squish Classic IDE. If you aren't used to Eclipse it is crucial to understand one key concept: Views and Perspectives. In Eclipse (and therefore in the new Squish IDE), a View is essentially a child window (perhaps a dock window, or a tab in an existing window). And a Perspective is a collection of Views arranged together. Both are accessible through the Window menu.

The Squish IDE is supplied with three Perspectives—the Squish Test Management Perspective (Section 8.1.2.2) (which is the Perspective that the Squish IDE starts with, and the one we have seen in all previous screenshots), Squish Test Debugging Perspective (Section 8.1.2.3), and Squish Spy Perspective (Section 8.1.2.1). You can change these Perspectives to include additional Views (or to get rid of any Views that you don't want), and you can create your own Perspectives with exactly the Views you want. So if your windows change dramatically it just means that the Perspective changed; you can always use the Window menu to change back to the Perspective you want. In practice, Squish will automatically change perspective to reflect the current situation, so it isn't really necessary to change perspective manually. Other than this, the Squish IDE works in a very similar way to the Classic IDE, although it has a lot more features, and is easier to use once you've got used to it.

As the screenshot below shows, when Squish stops at a breakpoint the Squish IDE automatically changes to the Squish Test Debugging Perspective (Section 8.1.2.3). The perspective shows the Variables view (Section 8.2.18), the Editor view (Section 8.2.6), the Debug view (Section 8.2.5), the Application Objects view (Section 8.2.1), the Properties view (Section 8.2.11), the Methods view (Section 8.2.8), and the Test Results view (Section 8.2.15).

To insert a verification point we can expand items in the Application Objects view until we find the object we want to verify. In this example we want to verify the Symbol's UILabel's text, so we expand items all the way to the UITableView, and then the Symbol's UITableViewCell. Once we have selected the appropriate UILabel we expand its text property in the Properties view (Section 8.2.11) and check the stringValue subproperty.

Finding an object to verify in the Application Objects view

To add the verification point we must click the verification point editor's Insert button. After the insertion the test replay remains stopped, to allow us to enter more verifications: we can either continue by clicking the Resume toolbar button in the Debug view (or press F8), or stop by clicking the Terminate toolbar button. In this example we have finished for now, so either resume or terminate the test.

Incidentally, the normal Squish Test Management Perspective (Section 8.1.2.2) can be returned to at any time by choosing it from the Window menu (or by clicking its toolbar button), although the Squish IDE will automatically return to it if you stop the script or run it to completion.

Once we have finished inserting verifications and stopped or finished running the test, we should disable the breakpoint. Just Ctrl+Click the breakpoint and click the Disable Breakpoint menu option in the context menu. We are now ready to run the test without any breakpoints but with the verification points in place. Click the Run Test button. This time we will get some test results, as the screenshot shows, all of which we have expanded to show their details. (We have also selected the lines of code that Squish inserted to perform the verification; notice that the code is structurally identical to the code inserted during recording.)

The newly inserted verification point in action

Another way to insert verification points is to insert them in code form. In theory we can just add our own calls to Squish's test functions such as test.compare and test.verify anywhere we like in an existing script. In practice it is best to first make sure that Squish knows about the objects we want to verify so that it can find them when the test is run. The procedure is very similar to using the Squish IDE. First we set a breakpoint where we intend to add our verifications. Then we run the test script until it stops. Next we navigate in the Application Objects view (Section 8.2.1) until we find the object we want to verify. At this point it is wise to Ctrl+Click the object we are interested in and click the Add to Object Map context menu option. This ensures that Squish can access the object. Then Ctrl+Click again and click the Copy to clipboard (Symbolic Name) context menu option; this gives us the name that Squish will use to identify the object. Now we can edit the test script to add our own verification and finish or stop the execution. (Don't forget to disable the breakpoint once it isn't needed any more.)

Although we can write our test script code to be exactly the same style as the automatically generated code, it is usually clearer and easier to do things in a slightly different style, as we will explain in a moment.

For our manually added verification we want to check that Argon's number is “18” in the relevant UILabel. The screenshot shows the two lines of code we entered to get this new verification, plus the results of running the test script.

Manually entered verification point in action

When writing scripts by hand, we use Squish's test module's functions to verify conditions at certain points during our test script's execution. As the screenshot (and the code snippets below) show, we begin by retrieving a reference to the object we are interested in. Using the waitForObject function is standard practice for manually written test scripts. This function waits for the object to be available (i.e., visible and enabled), and then returns a reference to it. (Otherwise it times out and raises a catchable exception.) We then use this reference to access the item's properties and methods—in this case the UILabel's stringValue subproperty—and verify that the value is what we expect it to be using the test.compare function.

Here is the code for all the Argon verifications for all the scripting languages that Squish supports. Naturally, you only need to look at the code for the language that you will be using for your own tests.

Python

    waitFor("object.exists(':Noble Gases_UILabel')", 20000)
    test.compare(findObject(":Noble Gases_UILabel").text.stringValue, "Noble Gases")
    waitFor("object.exists(':Ar_UILabel')", 20000)
    test.compare(findObject(":Ar_UILabel").text.stringValue, "Ar")
    label = waitForObject(":18_UILabel")
    test.compare(label.text.stringValue, "18")
JavaScript

    waitFor("object.exists(':Noble Gases_UILabel')", 20000);
    test.compare(findObject(":Noble Gases_UILabel").text.stringValue,
            "Noble Gases");
    waitFor("object.exists(':Ar_UILabel')", 20000);
    test.compare(findObject(":Ar_UILabel").text.stringValue, "Ar");
    var label = waitForObject(":18_UILabel");
    test.compare(label.text.stringValue, "18");
Perl

    waitFor("object::exists(':Noble Gases_UILabel')", 20000);
    test::compare(findObject(":Noble Gases_UILabel")->text->stringValue,
        "Noble Gases");
    waitFor("object::exists(':Ar_UILabel')", 20000);
    test::compare(findObject(":Ar_UILabel")->text->stringValue, "Ar");
    my $label = waitForObject(":18_UILabel");
    test::compare($label->text->stringValue, "18");
Ruby

    waitFor("Squish::Object.exists(':Noble Gases_UILabel')", 20000)
    Test.compare(findObject(":Noble Gases_UILabel").text.stringValue,
                 "Noble Gases")
    waitFor("Squish::Object.exists(':Ar_UILabel')", 20000)
    Test.compare(findObject(":Ar_UILabel").text.stringValue, "Ar")
    label = waitForObject(":18_UILabel")
    Test.compare(label.text.stringValue, "18")
Tcl

    waitFor {object exists ":Noble Gases_UILabel"} 20000
    test compare [property get [property get \
        [findObject ":Noble Gases_UILabel"] text] stringValue] "Noble Gases"
    waitFor {object exists ":Ar_UILabel"} 20000
    test compare [property get [property get \
        [findObject ":Ar_UILabel"] text] stringValue] "Ar"
    set label [waitForObject ":18_UILabel"]
    test compare [property get [property get $label text] stringValue] "18"

The coding pattern is very simple: we retrieve a reference to the object we are interested in and then verify its properties using one of Squish's verification functions. (Recall that we copied the UILabel's symbolic name to the clipboard earlier.) And we can, of course, call methods on the object to interact with it if we wish.

For complete coverage of verification points, see How to Create and Use Verification Points (Section 5.22) in the User Guide (Chapter 5).

4.8.1.4.1. Test Results

After each test run finishes, the test results—including those for the verification points—are shown in the Test Results view at the bottom of the Squish IDE.

This is a detailed report of the test run and would also contain details of any failures or errors, etc. If you click on a Test Results item, the Squish IDE highlights the script line which generated the test result. And if you expand a Test Results item, you can see additional details of the test.

4.8.1.5. Learning More

We have now completed the tutorial! Squish can of course do much more than we have shown here, but the aim has been to get you started with basic testing as quickly and easily as possible. The User Guide (Chapter 5) provides many more examples, including those that show how tests can interact with particular widgets, as well as how to do data-driven and keyword-driven testing.

The API Reference Manual (Chapter 6) and Tools Reference Manual (Chapter 7) give full details of Squish's testing API and the numerous functions it offers to make testing as easy and efficient as possible. It is well worth reading the User Guide (Chapter 5) and at least skimming the API Reference Manual (Chapter 6) and Tools Reference Manual (Chapter 7)—especially since the time invested will be repaid because you'll know what functionality Squish provides out of the box and can avoid reinventing things that are already available.

4.8.1.6. Notes on Testing iOS Apps in the iOS Simulator

Squish for iOS allows you to test your iOS apps in the iOS Simulator that is included in Xcode installations. This makes it much easier and more convenient to test iOS AUTs without having to use an actual iOS device.

[Important]Important
  • The iOS Simulator is part of Xcode. So you need an Xcode installation in order to run Squish tests in the simulator.

  • In the iOS Simulator you can only run applications that were built for the simulator, not applications that were built for running on a device. So please make sure that you choose the correct build of the iOS app as the AUT in the test suite wizard.

  • You have to use the application (i.e. the file with the .app extension) in Squish as the AUT. The .xcodeproj is the Xcode project that contains the information to build the application. The .xcodeproj can't be used as the AUT.

    So use Xcode to open the .xcodeproj and build the application (for the iOS Simulator target). In newer Xcode versions it might be hard to locate the built app: by default Xcode places the app in a subdirectory of Library/Application Support/iPhone Simulator in your home directory.

There are further options in the test suite settings that allow you to control how the iOS Simulator is started. To use them, open the test suite settings in the Squish IDE and enter one or more of the following options into the Launcher Arguments: line edit:

  • --device-id=<uuid> If you are using Xcode 6 or later, you can specify the device ID of the simulated device to be used.

    Use Window|Devices in Xcode to see the device IDs for the simulated devices available. Or run Squish's iphonelauncher command with the option --list-devices in the terminal to determine the device ID.

    You can't use the --device or --sdk options in conjunction with this option, since the device ID already defines the simulated hardware and SDK and these values can't be overridden.

  • --device=<device-family> If your application is a universal application (i.e. one that runs on both the iPhone and the iPad), you can use this option to specify whether Squish starts the application in a simulated iPhone or iPad. For <device-family> you can use either iPhone or iPad.

    If you are using Xcode 5.0 or newer, you have more fine-grained control over the exact device type, and you can also specify iPhone-retina-3.5-inch, iPhone-retina-4-inch and iPad-retina as the <device-family>.

  • --sdk=<version> Squish tries to automatically determine the iOS SDK version that was used to compile the app. If this fails, or if you want to start the simulator with a different SDK, use this option to override the automatically determined version.

    For example, if you want to force the app to start with SDK 4.2, specify the option --sdk=4.2.

  • --xcode-root=<directory> Squish uses the iOS Simulator in the default Xcode installation directory (/Developer). Use this option if you want to use an Xcode installation in a different directory.

    For example, if your Xcode is installed in the directory /Developer4.0, specify the option --xcode-root=/Developer4.0.
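For example, to start a universal app in a simulated iPad against SDK 4.2 from an Xcode installed in /Developer4.0, the Launcher Arguments: line edit would contain (the values here are only illustrative):

    --device=iPad --sdk=4.2 --xcode-root=/Developer4.0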

4.8.1.7. Notes on Testing iOS Apps in an iPhone or iPad

It is perfectly possible—albeit slightly less convenient—to test iOS Apps on an actual iPhone or iPad device. To do this you must add a Squish-specific wrapper library to Xcode, make a small modification to your application's main function, and make sure that your Mac is set up correctly.

[Important]Important

Your desktop computer and the iOS device communicate through a TCP/IP network connection, so each must be reachable from the other. In particular, the iOS device connects to the squishserver running on the desktop computer. So if you have an active firewall, you have to disable it, or at least allow connections from the iOS device to the squishserver.

4.8.1.7.1. Modify the AUT's main Function

First you must modify your application's main function so that it calls Squish's squish_allowAttaching function when running for testing. Here is a typical main function for iOS applications with the necessary modifications (the modifications are shown in bold). Note that your main function may differ from this one depending on your source code, so you should not simply copy the code below; rather, modify your existing source code to add the highlighted lines at the appropriate places.

#import <UIKit/UIKit.h>
#import "AppDelegate.h"

#if defined(SQUISH_TESTING) && !TARGET_IPHONE_SIMULATOR
extern bool squish_allowAttaching(unsigned short port);
#endif

int main(int argc, char *argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

#if defined(SQUISH_TESTING) && !TARGET_IPHONE_SIMULATOR
    squish_allowAttaching(11233);
#endif

    int retVal = UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
    [pool release];
    return retVal;
}

The defined(SQUISH_TESTING) check means that the Squish-specific modifications are compiled only if SQUISH_TESTING is defined. We will later set up the Xcode project with a special build configuration that sets this compiler define, so you can easily switch between building a version of the app for Squish testing and a normal version that you can also submit to the App Store.

And the !TARGET_IPHONE_SIMULATOR check means that the Squish-specific modifications are compiled into the iOS app only when building for the device (and not for simulator builds).

We need to call the squish_allowAttaching function later on in the main function. This function is implemented in a static library provided by Squish, so we declare it here so that the compiler knows about it when we later call it. Add this declaration before the main function in which you actually call the function.

Add a call to squish_allowAttaching(11233) after creating the autorelease pool and before entering the event loop. The argument 11233 is the TCP/IP port number that Squish will use to connect to the application running on the device.

4.8.1.7.2. Add the Wrapper to Xcode

After the modifications to the application's main function, we also have to link the app against the static library libsquishioswrapper.a that is shipped with the Squish package and can be found in the package's lib directory.

[Note]Note

The following steps use Xcode 4.2. Different Xcode versions might vary slightly in the exact user interface steps (especially with respect to the screenshots), but the overall process is the same for all Xcode versions.

First we create a new build configuration in the Xcode project. This allows us to easily switch between Squish builds of the application and normal builds (without the modifications required by Squish). Click on the project to open the project settings. In the Info tab of the project's settings, you can choose to duplicate an existing build configuration. You can base your builds on any of the existing build configurations; in our example we duplicate the "Release" build configuration (i.e. we base the Squish-specific configuration on release builds).

Duplicate the "Release" build configuration

Give the new build configuration a name; for our example we simply choose "Squish".

Name the new build configuration "Squish"

Next, we have to make sure that the compiler defines SQUISH_TESTING when we build the project with the "Squish" build configuration (this is the define we are checking for in our modified main function):

  1. Switch to the Build Settings tab in the project settings.

  2. Search for the "Other C Flags" build settings.

  3. Make sure to expand the Other C Flags and select the Squish build configuration.

  4. Double click on the Other C Flags entry of the Squish build configuration in the column for your project (in the example the Elements-ios5 column).

  5. Click the + button in the popup to add a new flag.

  6. Enter the flag -DSQUISH_TESTING and press the Done button.

Extend the compiler flags for the "Squish" configuration

Then, we also have to add the Squish static library to the linker flags:

  1. Search for the "Other Linker Flags" build settings.

  2. Make sure to expand the Other Linker Flags and select the Squish build configuration.

  3. Double click on the Other Linker Flags entry of the Squish build configuration in the column for your project (in the example the Elements-ios5 column).

  4. Click the + button in the popup to add the new flags.

  5. Enter the following flags (the order is important):

    • -lstdc++

    • -lz

    • -force_load

    • <squishdir>/lib/arm/libsquishioswrapper.a

    and press the Done button. Make sure that you replace <squishdir> with the full path (or relative path) to the directory of your Squish installation. Alternatively, you can copy the library into the project directory of your application and specify libsquishioswrapper.a without any path.
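Taken together, and assuming a hypothetical Squish installation directory of /opt/squish, the complete set of Other Linker Flags would read:

    -lstdc++ -lz -force_load /opt/squish/lib/arm/libsquishioswrapper.a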

[Important]Important

When you are updating your Squish installation to a new version, you have to make sure that you are using the libsquishioswrapper.a library of the new package, and that you rebuild your application with the new version of the library.

[Note]Note

If your app uses libc++ instead of libstdc++, the linker flag should be /usr/lib/libstdc++.dylib instead of -lstdc++.

Extend the linker flags for the "Squish" configuration

The last step is to actually build the iOS app with the newly created "Squish" configuration. For this, we create a separate scheme in Xcode, which allows us to quickly switch between building the application for Squish testing and for other purposes.

Choose New Scheme... from the scheme popup in Xcode.

Create a new scheme for "Squish" builds

Give the newly created scheme a descriptive name; in the example we use "Elements (Squish)", stressing that this builds the Elements app for Squish testing.

Name the new scheme

The newly created scheme has default settings, so we now need to edit it and change the build configuration to be used. Make sure that the new scheme is the active one and choose Edit Scheme... from the popup.

Edit the new scheme

In the dialog to edit the schemes, make sure that you select the Run <appname>.app action. Then change the Build Configuration: setting to Squish. You should do the same for the other actions that take a build configuration (i.e. for Test, Profile <appname>, Analyze, and Archive). That way, all builds done with the "Elements (Squish)" scheme build the app so that it is suitable for testing with Squish.

Let the new scheme build the "Squish" configuration

Now you only have to build the app for your device and install it there, and you can start testing it on a physical device (after you follow the remaining steps of setting up a test suite in Squish on your desktop computer).

As a quick test to see whether all the above modifications are correct, execute the app on the device through Xcode's debugger and take a close look at the debugger console in Xcode: if you see the message Listening on port 11233 for incoming connections when the app starts, the modifications were correct. If you don't see this message, you missed one of the above steps.

4.8.1.7.3. Setting Up a Computer for iOS Device Testing

Although the iOS application you want to test will run on the iOS device, Squish itself runs on a computer. Here is how to set up the computer to support iOS testing.

  • You have to turn off the firewall on the computer. Naturally it is very important that you turn the firewall back on after the testing is finished!

  • Register the host and port number of the iOS device as an attachable AUT. This is done inside the Squish IDE; click the Edit|Server Settings|Manage AUTs... menu item, then click the Attachable AUTs item. Now click the Add... button. Give the configuration a name, for example, “iPhoneDevice”. Enter the iOS device's IP address as the host and for the port give the number used when calling the squish_allowAttaching function (e.g., 11233).

Now that the computer is set up you can play back or create tests for your iOS applications.

If you want to play back tests you created with the simulator, you have to do the following changes:

  • Change startApplication("iPhoneApp") in your test script to attachToApplication("iPhoneDevice") (or whatever configuration name you chose).

  • If your test script contains a waitForApplicationLaunch() call after the startApplication() call, you have to remove it as well.

  • If you interacted with the iOS Simulator application itself, you have to remove those interactions as well.
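For example, the opening lines of the recorded tst_general script (Python) would change as follows; "iPhoneDevice" is the attachable AUT name registered above:

Python

    # Recorded against the iOS Simulator:
    # startApplication("Elements")
    # ctx_1 = waitForApplicationLaunch()

    # Adapted for playback on the device:
    attachToApplication("iPhoneDevice")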

Now you can start the application on the device and then replay the test script you recorded on the iOS Simulator.

You can also record the test directly on the device. In this case, open the test suite settings of your iOS test suite and make sure that the AUT selection is <No Application>. Then start the application on the device; when you choose to record a test case in the Squish IDE, you are asked which application to record. Choose iPhoneDevice (attachable) (or whatever name you used when registering the attachable AUT). Now all user interactions you perform on the device are recorded until you end the recording in the Squish IDE's control bar.

[Important]Important

iOS devices are locked down to the extent that it is not possible for Squish to start (or quit) the AUT. The application therefore has to be started manually, and when you execute a test script, make sure that the application is running in the foreground and that the device is not locked or sleeping.

If you keep the application running, you can execute multiple test cases one after another; each test case then connects to the same application. This means each test case must leave the application in a state from which the next test case can run successfully (or each test case must begin by bringing the application into a well-known state).

4.8.2. Tutorial: Designing Behavior Driven Development (BDD) Tests

This tutorial will show you how to create, run, and modify Behavior Driven Development (BDD) tests for an example application. You will learn about Squish's most frequently used features. By the end of the tutorial you will be able to write your own tests for your own applications.

For this chapter we will use the Elements app as our Application Under Test (AUT). This app searches for and displays information about chemical elements. You can find it in Squish's examples/ios directory. The screenshot shows the application in action.

The iOS Elements example.

4.8.2.1. Introduction to Behavior Driven Development

Behavior-Driven Development (BDD) is an extension of the Test-Driven Development approach which puts the definition of acceptance criteria at the beginning of the development process, as opposed to writing tests after the software has been developed, with possible cycles of code changes after testing.

BDD process

Behavior Driven Tests are built out of a set of Feature files, which describe product features through the expected application behavior in one or many Scenarios. Each Scenario is built out of a sequence of Steps which represent actions or verifications that need to be tested for that Scenario.

BDD focuses on expected application behavior, not on implementation details. Therefore BDD tests are described in a human-readable Domain Specific Language (DSL). As this language is not technical, such tests can be created not only by programmers, but also by product owners, testers or business analysts. Additionally, during product development, such tests serve as living product documentation. In Squish, BDD tests are written using Gherkin syntax, so a previously written product specification (BDD tests) can be turned into executable tests. This step-by-step tutorial shows how to automate BDD tests with Squish IDE support.

4.8.2.2. Gherkin syntax

Gherkin files describe product features through the expected application behavior in one or many Scenarios. Here is an example describing the search feature of the Elements example application:


Feature: Searching for elements
    As a user I want to search for elements and get correct results.

    Scenario: Initial state of the search view
        Given elements application is running
        When I switch to the search view
        Then the search field has zero entries

    Scenario: State after searching with exact match
        Given elements application is running
        When I switch to the search view
        And I enter 'helium' into the search field and tap Search
        Then '1' entries should be present

    Scenario: State after searching with multiple matches
        Given elements application is running
        When I switch to the search view
        And I enter 'he' into the search field and tap Search
        Then the following entries should be present
            | Number | Symbol | Name          |
            | 2      | He     | Helium        |
            | 44     | Ru     | Ruthenium     |
            | 75     | Re     | Rhenium       |
            | 104    | Rf     | Rutherfordium |
            | 116    | Uuh    | Ununhexium    |

    Scenario: State of the details when searching
        Given elements application is running
        When I switch to the search view
        And I enter 'Carbon' into the search field and tap Search
        And I tap on the first search result
        Then the previously entered search term is the title of the view

    Scenario Outline: Doing a search with exact match multiple times
        Given elements application is running
        When I switch to the search view
        And I enter '<Name>' into the search field and tap Search
        Then the entry '<Number>: <Name> (<Symbol>)' should be present
        Examples:
            | Name     | Number | Symbol |
            | Hydrogen | 1      | H      |
            | Helium   | 2      | He     |
            | Carbon   | 6      | C      |

Most of the above is free-form text (it does not have to be English). It is just the Feature/Scenario structure and the leading keywords like "Given", "And", "When" and "Then" that are fixed. Each of those keywords marks a step defining preconditions, user actions, or expected results. The above behavior description can be passed to software developers to implement these features, and at the same time the same description can be passed to software testers to implement automated tests.
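
To make the fixed-keyword structure concrete, here is a small standalone Python sketch (our own illustration, not part of Squish) that outlines a feature file purely by looking at the Scenario headings and the leading step keywords:

```python
STEP_KEYWORDS = ("Given", "When", "Then", "And", "But")

def outline_feature(text):
    """Group step lines under their Scenario headings (illustrative parser)."""
    scenarios = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("Scenario"):   # matches "Scenario:" and "Scenario Outline:"
            current = line.split(":", 1)[1].strip()
            scenarios[current] = []
        elif current and line.startswith(STEP_KEYWORDS):
            scenarios[current].append(line)
    return scenarios

feature = """
Feature: Searching for elements
    As a user I want to search for elements and get correct results.

    Scenario: Initial state of the search view
        Given elements application is running
        When I switch to the search view
        Then the search field has zero entries
"""
print(outline_feature(feature))
```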

4.8.2.3. Test implementation

4.8.2.3.1. Creating Test Suite

First, we need to create a Test Suite, which is a container for all Test Cases. Start the squishide and select File|New Test Suite.... Follow the New Test Suite wizard: provide a Test Suite name, choose the iOS Toolkit and a scripting language of your choice, and finally register the Elements app as the AUT. Please refer to Creating a Test Suite (Section 4.8.1.2) for more details about creating a new Test Suite.

4.8.2.3.2. Creating Test Case

Squish offers two types of Test Cases: "Script Test Case" and "BDD Test Case". As "Script Test Case" is the default, to create a new "BDD Test Case" we need to use the context menu, by clicking on the expander next to the New Test Case button and choosing "New BDD Test Case". The Squish IDE will remember your choice, and "BDD Test Case" will become the default when clicking the button in the future.

Creating new BDD Test Case

The newly created BDD Test Case consists of a test.feature file (filled with a Gherkin template while creating a new BDD test case), a file named test.(py|js|pl|rb|tcl) which will drive the execution (there is no need to edit this file), and a Test Suite Resources file named steps/steps.(py|js|pl|rb|tcl) where Step implementation code will be placed.

We need to replace the Gherkin template with a Feature for the Elements example application. To do this, copy the Feature description below and paste it into the Feature file.

Feature: Searching for elements
    As a user I want to search for elements and get correct results.

    Scenario: Initial state of the search view
        Given elements application is running
        When I switch to the search view
        Then the search field has zero entries

After saving the test.feature file, a Feature file warning "No implementation found" is displayed for each Step. This means that no Step implementation was found in the steps subdirectory, in Test Case Resources, or in Test Suite Resources. Running our Feature test now would fail at the first Step with a "No Matching Step Definition" error, and the following Steps would be skipped.

4.8.2.3.3. Recording Step implementation

In order to record the Scenario, press the Record button next to the respective Scenario that is listed in the Scenarios tab in Test Case Resources view.

Record Scenario

This will cause Squish to run the AUT so that we can interact with it. Additionally, the Control Bar is displayed with a list of all Steps that need to be recorded. Now all interaction with the AUT or any verification points added to the script will be recorded under the first step Given elements application is running (which is bolded in the Step list on the Control Bar). Since Squish automatically records the start of the application, we are already done with our first step.

In order to record the next Step in the sequence, we click on the button in the Control Bar that is located right in front of the current Step. The Control Bar now shows that the Step being recorded is "When I switch to the search view". So we click on Search in the AUT and are done with recording the second Step.

Clicking on the button in front of the current step in the Control Bar again proceeds to recording the last step, "Then the search field has zero entries". To record this verification, click on Insert Verifications while recording, select Properties, and use the Picker tool to point to the search text field. Choose the text property from the Object Properties list and insert the verification point. Finally, click on the Stop recording button in the Control Bar.

Control Bar
Inserting Verification Point

As a result, Squish will generate the following Step definitions in the steps.* file (at Test Suites+Test Suite Resources):

Python
@Given("elements application is running")
def step(context):
    startApplication("Elements")
    waitForApplicationLaunch()

@When("I switch to the search view")
def step(context):
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9)

@Then("the search field has zero entries")
def step(context):
    waitFor("object.exists(':Name Contains_UITextField')", 20000)
    test.compare(findObject(":Name Contains_UITextField").text, "")
JavaScript
Given("elements application is running", function(context) {
    startApplication("Elements");
    waitForApplicationLaunch();
});

When("I switch to the search view", function(context) {
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9);
});

Then("the search field has zero entries", function(context) {
    waitFor("object.exists(':Name Contains_UITextField')", 20000);
    test.compare(findObject(":Name Contains_UITextField").text, "");
});
Perl
Given("elements application is running", sub {
    my $context = shift;
    startApplication("Elements");
    waitForApplicationLaunch();
});

When("I switch to the search view", sub {
    my $context = shift;
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9);
});

Then("the search field has zero entries", sub {
    my $context = shift;
    waitFor("object::exists(':Name Contains_UITextField')", 20000);
    test::compare(findObject(":Name Contains_UITextField")->text, "");
});
Ruby
Given("elements application is running") do |context|
  startApplication("Elements")
  waitForApplicationLaunch()
end

When("I switch to the search view") do |context|
  clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9)
end

Then("the search field has zero entries") do |context|
  waitFor("Squish::Object.exists(':Name Contains_UITextField')", 20000)
  Test.compare(findObject(":Name Contains_UITextField").text, "")
end
Tcl
Given "elements application is running" {context} {
    startApplication "Elements"
    waitForApplicationLaunch
}

When "I switch to the search view" {context} {
    invoke clickObject [waitForObject ":Search_UITableViewLabel"] 179 9
}

Then "the search field has zero entries" {context} {
    waitFor {object exists ":Name Contains_UITextField"} 20000
    test compare [property get [findObject ":Name Contains_UITextField"] text] ""
}

The application is automatically started at the beginning of the first Step due to the recorded startApplication call. At the end of each Scenario, Squish detaches from the application but leaves it running. Detaching is done in a function called OnScenarioEnd. This function is a so-called hook, and you can find it in the file bdd_hooks.(py|js|pl|rb|tcl), which is located in the Scripts tab of the Test Suite Resources view. You can define additional hooks. For a list of all available hooks please refer to Performing Actions During Test Execution Via Hooks (Section 6.19.9).

Python
@OnScenarioEnd
def OnScenarioEnd():
    for ctx in applicationContextList():
        ctx.detach()

JavaScript
OnScenarioEnd(function(context) {
    applicationContextList().forEach(function(ctx) { ctx.detach(); });
});

Perl
OnScenarioEnd(sub {
    foreach (applicationContextList()) {
        $_->detach();
    }
});
Ruby
OnScenarioEnd do |context|
    applicationContextList().each { |ctx| ctx.detach() }
end
Tcl
OnScenarioEnd {context} {
    foreach ctx [applicationContextList] {
        applicationContext $ctx detach
    }
}
4.8.2.3.4. Manual Step implementation

An alternative approach to recording Step implementations is to implement them manually. This gives us the opportunity to modularize our test scripts (e.g., put common code into shared functions, or keep test data separate from test scripts). Squish can help with that by creating skeletons of step definitions. To generate a Step implementation, right-click the given Scenario in the Feature file and choose Create Missing Step Implementations from the context menu.

Python
@Given("elements application is running")
def step(context):
    test.warning("TODO implement elements application is running")

@When("I switch to the search view")
def step(context):
    test.warning("TODO implement I switch to the search view")

@Then("the search field has zero entries")
def step(context):
    test.warning("TODO implement the search field has zero entries")
JavaScript
Given("elements application is running", function(context) {
    test.warning("TODO implement elements application is running");
});

When("I switch to the search view", function(context) {
    test.warning("TODO implement I switch to the search view");
});

Then("the search field has zero entries", function(context) {
    test.warning("TODO implement the search field has zero entries");
});
Perl
Given("elements application is running", sub {
    my $context = shift;
    test::warning("TODO implement elements application is running");
});

When("I switch to the search view", sub {
    my $context = shift;
    test::warning("TODO implement I switch to the search view");
});

Then("the search field has zero entries", sub {
    my $context = shift;
    test::warning("TODO implement the search field has zero entries");
});
Ruby
Given("elements application is running") do |context|
  Test.warning "TODO implement elements application is running"
end

When("I switch to the search view") do |context|
  Test.warning "TODO implement I switch to the search view"
end

Then("the search field has zero entries") do |context|
  Test.warning "TODO implement the search field has zero entries"
end

When("I enter 'helium' into the search field and tap Search") do |context|
  Test.warning "TODO implement I enter 'helium' into the search field and tap Search"
end
Tcl
Given "elements application is running" {context} {
    test warning "TODO implement elements application is running"
}

When "I switch to the search view" {context} {
    test warning "TODO implement I switch to the search view"
}

Then "the search field has zero entries" {context} {
    test warning "TODO implement the search field has zero entries"
}

Next, implement the Step definitions, taking full advantage of the Squish API (API Reference Manual (Chapter 6)), and remove the generated test.warning calls when you are done.

4.8.2.3.5. Step parameterization

So far, our Steps did not use any parameters and all values were hardcoded. Squish has different types of parameters, like any, integer or word, allowing our Step definitions to be more reusable. Let us add a new Scenario to our Feature file which will provide Step parameters for both the test data and the expected results. Copy the section below into your Feature file.

Scenario: State after searching with exact match
    Given elements application is running
    When I switch to the search view
    And I enter 'helium' into the search field and tap Search
    Then '1' entries should be present

After auto-saving the Feature file, the Squish IDE provides a hint that only 2 Steps need to be implemented: And I enter 'helium' into the search field and tap Search and Then '1' entries should be present. The remaining Steps already have a matching Step implementation.

To record the missing Steps, hit the record button next to the test case name in the Test Suites view. The script will play until it reaches the first missing Step and then prompt you to record it. For And I enter 'helium' into the search field and tap Search, perform those interactions in the AUT (click the search field, type 'helium', and tap Search). Click on the button in the Control Bar to move to the next step. For the second missing step, we could record an object property verification as we did for the Step Then the search field has zero entries. Or we could copy that step's implementation in the steps.(py|js|pl|rb|tcl) file and increment the number at the end of the test.compare line: instead of testing for zero items, we are testing for one item.

Now we parameterize the generated step implementation by replacing the values with parameter types. Since we want to be able to search for different names, replace 'helium' with '|word|'. Note that each parameter is passed to the step implementation function in the order of appearance in the descriptive name of the step. Finish parameterizing by editing the implementation to look like this example Step:

Python
@When("I enter '|word|' into the search field and tap Search")
def step(context, search):
    clickObject(waitForObject(":Name Contains_UITextField"), 25, 13)
    type(waitForObject(":Name Contains_UITextField"), search)
    clickObject(waitForObject(":Search_UINavigationButton"))

    # synchronization: wait until search result view is visible
    waitForObject(":Search Results_UINavigationItemView")

@Then("'|integer|' entries should be present")
def step(context, numOfEntries):
    test.compare(findObject(":_UITableView").numberOfRowsInSection_(0), numOfEntries)
JavaScript
When("I enter '|word|' into the search field and tap Search", function(context, search) {
    clickObject(waitForObject(":Name Contains_UITextField"), 25, 13);
    type(waitForObject(":Name Contains_UITextField"), search);
    clickObject(waitForObject(":Search_UINavigationButton"));

    // synchronization: wait until search result view is visible
    waitForObject(":Search Results_UINavigationItemView");
});

Then("'|integer|' entries should be present", function(context, numOfEntries) {
    test.compare(findObject(":_UITableView").numberOfRowsInSection_(0), numOfEntries);
});
Perl
When("I enter '|word|' into the search field and tap Search", sub {
    my $context = shift;
    my $search = shift;
    clickObject(waitForObject(":Name Contains_UITextField"), 25, 13);
    type(waitForObject(":Name Contains_UITextField"), $search);
    clickObject(waitForObject(":Search_UINavigationButton"));

    # synchronization: wait until search result view is visible
    waitForObject(":Search Results_UINavigationItemView");
});

Then("'|integer|' entries should be present", sub {
    my $context = shift;
    my $numOfEntries = shift;
    test::compare(findObject(":_UITableView")->numberOfRowsInSection_(0), $numOfEntries);
});
Ruby
When("I enter '|word|' into the search field and tap Search") do |context, search|
  clickObject(waitForObject(":Name Contains_UITextField"), 25, 13)
  type(waitForObject(":Name Contains_UITextField"), search)
  clickObject(waitForObject(":Search_UINavigationButton"))

  # synchronization: wait until search result view is visible
  waitForObject(":Search Results_UINavigationItemView")
end

Then("'|integer|' entries should be present") do |context, numOfEntries|
  Test.compare(findObject(":_UITableView").numberOfRowsInSection_(0), numOfEntries)
end
Tcl
When "I enter '|word|' into the search field and tap Search" {context search} {
    invoke clickObject [waitForObject ":Name Contains_UITextField"] 25 13
    invoke type [waitForObject ":Name Contains_UITextField"] $search
    invoke clickObject [waitForObject ":Search_UINavigationButton"]

    # synchronization: wait until search result view is visible
    waitForObject ":Search Results_UINavigationItemView"
}

Then "'|integer|' entries should be present" {context numOfEntries} {
    test compare [invoke [findObject ":_UITableView"] numberOfRowsInSection_ 0] $numOfEntries
}
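
Squish performs this placeholder matching internally; the plain-Python sketch below merely illustrates the idea of how a pattern like '|word|' can be matched against a concrete step and its captured value handed to the step function. The regexes are our own rough stand-ins for Squish's parameter types, not its actual implementation:

```python
import re

# Rough stand-ins for Squish's placeholder types (illustrative only).
PLACEHOLDERS = {
    "|word|": r"(\w+)",
    "|integer|": r"(\d+)",
    "|any|": r"(.*?)",
}

def match_step(pattern, step_text):
    """Turn a descriptive step pattern into a regex and extract parameters."""
    regex = re.escape(pattern)
    for placeholder, group in PLACEHOLDERS.items():
        regex = regex.replace(re.escape(placeholder), group)
    m = re.fullmatch(regex, step_text)
    # Captured groups are passed to the step function in order of appearance.
    return list(m.groups()) if m else None

print(match_step("I enter '|word|' into the search field and tap Search",
                 "I enter 'helium' into the search field and tap Search"))
# -> ['helium']
```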
4.8.2.3.6. Provide parameters for Step in table

The next Scenario will test a search result with multiple elements found. Instead of using multiple steps for verifying this, we use a single step and pass a table as an argument to the step.

Scenario: State after searching with multiple matches
    Given elements application is running
    When I switch to the search view
    And I enter 'he' into the search field and tap Search
    Then the following entries should be present
        | Number | Symbol | Name          |
        | 2      | He     | Helium        |
        | 44     | Ru     | Ruthenium     |
        | 75     | Re     | Rhenium       |
        | 104    | Rf     | Rutherfordium |
        | 116    | Uuh    | Ununhexium    |

The Step implementation to handle such tables looks like this:

Python
@Then("the following entries should be present")
def step(context):
    table = context.table
    table.pop(0) # Drop initial row with column headers

    tableView = waitForObject(":_UITableView")
    dataSource = tableView.dataSource
    numberOfRows = tableView.numberOfRowsInSection_(0)

    test.compare(numberOfRows, len(table))
    for i in range(numberOfRows):
        number = table[i][0]
        symbol = table[i][1]
        name = table[i][2]
        expectedText = number + ": " + name + " (" + symbol + ")"

        indexPath = NSIndexPath.indexPathForRow_inSection_(i, 0)
        cell = dataSource.tableView_cellForRowAtIndexPath_(tableView, indexPath)
        test.compare(cell.text, expectedText)
JavaScript

Then("the following entries should be present", function(context) {
    var table = context.table;
    table.shift(); // Drop initial row with column headers

    var tableView = waitForObject(":_UITableView");
    var dataSource = tableView.dataSource;
    var numberOfRows = tableView.numberOfRowsInSection_(0);

    test.compare(numberOfRows, table.length);
    for (var i = 0; i < table.length; ++i) {
        var number = table[i][0];
        var symbol = table[i][1];
        var name = table[i][2];
        var expectedText = number + ": " + name + " (" + symbol + ")";

        var indexPath = NSIndexPath.indexPathForRow_inSection_(i, 0);
        var cell = dataSource.tableView_cellForRowAtIndexPath_(tableView, indexPath);
        test.compare(cell.text, expectedText);
    }
});

Perl

Then("the following entries should be present", sub {
    my $context = shift;
    my $table = $context->{'table'};
    shift(@{$table}); # Drop initial row with column headers

    my $tableView = waitForObject(":_UITableView");
    my $dataSource = $tableView->dataSource;
    my $numberOfRows = $tableView->numberOfRowsInSection_(0);

    test::compare($numberOfRows, scalar @{$table});
    for (my $i = 0; $i < @{$table}; $i++) {
        my $number = @{@{$table}[$i]}[0];
        my $symbol = @{@{$table}[$i]}[1];
        my $name = @{@{$table}[$i]}[2];
        my $expectedText = $number . ": " . $name . " (" . $symbol . ")";

        my $indexPath = NSIndexPath::indexPathForRow_inSection_($i, 0);
        my $cell = $dataSource->tableView_cellForRowAtIndexPath_($tableView, $indexPath);
        test::compare($cell->text, $expectedText);
    }
});

Ruby
Then("the following entries should be present") do |context|
  table = context.table
  table.shift # Drop initial row with column headers

  tableView = waitForObject(":_UITableView")
  dataSource = tableView.dataSource
  numberOfRows = tableView.numberOfRowsInSection_(0)

  Test.compare(numberOfRows, table.length)
  for i in 0...numberOfRows do
    number = table[i][0]
    symbol = table[i][1]
    name = table[i][2]
    expectedText = number + ": " + name + " (" + symbol + ")"

    indexPath = NSIndexPath.indexPathForRow_inSection_(i, 0)
    cell = dataSource.tableView_cellForRowAtIndexPath_(tableView, indexPath)
    Test.compare(cell.text, expectedText)
  end
end
Tcl

Then "the following entries should be present" {context} {
    # Drop initial row with column headers
    set table [$context table]
    set table [lrange $table 1 end]

    set tableView [waitForObject ":_UITableView"]
    set dataSource [property get $tableView dataSource]
    set numberOfRows [invoke $tableView numberOfRowsInSection_ 0]

    test compare $numberOfRows [llength $table]
    for {set i 0} {$i < $numberOfRows} {incr i} {
        set number [lindex $table $i 0]
        set symbol [lindex $table $i 1]
        set name [lindex $table $i 2]
        set expectedText "$number: $name ($symbol)"

        set indexPath [invoke NSIndexPath indexPathForRow_inSection_ $i 0]
        set cell [invoke $dataSource tableView_cellForRowAtIndexPath_ $tableView $indexPath]
        test compare [property get $cell text] $expectedText
    }
}
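
Stripped of the Squish object lookups, the core of this step is plain table handling: drop the header row, then build the expected cell text for each data row. A standalone sketch (illustrative only, with a shortened version of the table):

```python
def expected_cell_texts(table):
    """Build '<Number>: <Name> (<Symbol>)' strings from a context.table-style
    list of rows, where the first row holds the column headers."""
    rows = table[1:]  # drop the initial row with column headers
    return ["%s: %s (%s)" % (number, name, symbol)
            for number, symbol, name in rows]

table = [
    ["Number", "Symbol", "Name"],
    ["2", "He", "Helium"],
    ["44", "Ru", "Ruthenium"],
]
print(expected_cell_texts(table))  # ['2: Helium (He)', '44: Ruthenium (Ru)']
```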

4.8.2.3.7. Sharing data between Steps and Scenarios

If you execute the whole feature file in one go, you might notice that the second or third scenario fails to execute because the iOS Simulator fails to boot. The problem here is that the default OnScenarioEnd hook simply kills the iOS Simulator and does not give it enough time to do a full shutdown before we try to start the AUT for the next scenario.

A quick fix would be to simply wait a short amount of time after detaching from the iOS Simulator and before starting the AUT again. But this is rather fragile, since we do not know exactly how long to wait. The better solution is to quit the iOS Simulator via its menu entry.

In order to implement this, we have to switch to the context of the iOS Simulator in the hook. So we have to remember the application context object returned by the startApplication call in our first step and use it in the hook. For this we can use the context object passed to each step and hook implementation.

[Note]Application Context and Context Argument

The term "context" is used in two different and distinct cases that are independent from each other:

  • The application context is needed to control multiple AUTs in one test at the same time; in the case of iOS testing these are the iOS Simulator and the iOS app you want to test.
  • The context argument is passed to each step and hook implementation in BDD testing. The context object contains useful information about the step being executed, and through its userData property you can share data between steps, scenarios, and hooks.

So let's remember the iOS Simulator application context in the userData:

Python
@Given("elements application is running")
def step(context):
    context.userData = {}
    context.userData['ctx_simulator'] = startApplication("Elements")
    waitForApplicationLaunch()
JavaScript
Given("elements application is running", function(context) {
    context.userData = {};
    context.userData['ctx_simulator'] = startApplication("Elements");
    waitForApplicationLaunch();
});
Perl
Given("elements application is running", sub {
    my $context = shift;
    $context->{userData}{"ctx_simulator"} = startApplication("Elements");
    waitForApplicationLaunch();
});
Ruby
Given("elements application is running") do |context|
  context.userData = Hash.new
  context.userData[:ctx_simulator] = startApplication("Elements")
  waitForApplicationLaunch()
end
Tcl
Given "elements application is running" {context} {
    set ctx [startApplication "Elements"]
    $context userData [dict create "ctx_simulator" $ctx]
    waitForApplicationLaunch
}

And in the OnScenarioEnd hook we use the context to cleanly shut down the iOS Simulator:

Python

@OnScenarioEnd
def hook(context):
    setApplicationContext(context.userData['ctx_simulator'])
    type(findObject("{type='NSView'}"), '<Command+q>')

JavaScript

OnScenarioEnd(function(context) {
    setApplicationContext(context.userData['ctx_simulator']);
    type(findObject("{type='NSView'}"), '<Command+q>');
});

Perl

OnScenarioEnd(sub {
    my $context = shift;
    setApplicationContext($context->{userData}{"ctx_simulator"});
    type(findObject("{type='NSView'}"), "<Command+q>");
});

Ruby

OnScenarioEnd do |context|
  setApplicationContext(context.userData[:ctx_simulator])
  type(findObject("{type='NSView'}"), '<Command+q>')
end

Tcl

OnScenarioEnd {context} {
    setApplicationContext [getApplicationContext [dict get [$context userData] "ctx_simulator"]]
    invoke type [findObject "{type='NSView'}"] "<Command+q>"
}

But the userData property can also be used for other purposes. So let's add a new Scenario to the Feature file. This time we would like to check that, in the detailed search results, the title of the detail view is the same as our search term.

Scenario: State of the details when searching
    Given elements application is running
    When I switch to the search view
    And I enter 'Carbon' into the search field and tap Search
    And I tap on the first search result
    Then the previously entered search term is the title of the view

To share this data, we use the userData property of the context object again.

Python
@When("I enter '|word|' into the search field and tap Search")
def step(context, search):
    clickObject(waitForObject(":Name Contains_UITextField"), 25, 13)
    type(waitForObject(":Name Contains_UITextField"), search)
    clickObject(waitForObject(":Search_UINavigationButton"))

    # synchronization: wait until search result view is visible
    waitForObject(":Search Results_UINavigationItemView")

    context.userData["search"] = search
JavaScript
When("I enter '|word|' into the search field and tap Search", function(context, search) {
    clickObject(waitForObject(":Name Contains_UITextField"), 25, 13);
    type(waitForObject(":Name Contains_UITextField"), search);
    clickObject(waitForObject(":Search_UINavigationButton"));

    // synchronization: wait until search result view is visible
    waitForObject(":Search Results_UINavigationItemView");

    context.userData["search"] = search;
});
Perl
When("I enter '|word|' into the search field and tap Search", sub {
    my $context = shift;
    my $search = shift;
    clickObject(waitForObject(":Name Contains_UITextField"), 25, 13);
    type(waitForObject(":Name Contains_UITextField"), $search);
    clickObject(waitForObject(":Search_UINavigationButton"));

    # synchronization: wait until search result view is visible
    waitForObject(":Search Results_UINavigationItemView");

    $context->{userData}{"search"} = $search;
});
Ruby
When("I enter '|word|' into the search field and tap Search") do |context, search|
  clickObject(waitForObject(":Name Contains_UITextField"), 25, 13)
  type(waitForObject(":Name Contains_UITextField"), search)
  clickObject(waitForObject(":Search_UINavigationButton"))

  # synchronization: wait until search result view is visible
  waitForObject(":Search Results_UINavigationItemView")

  context.userData[:search] = search
end
Tcl
When "I enter '|word|' into the search field and tap Search" {context search} {
    invoke clickObject [waitForObject ":Name Contains_UITextField"] 25 13
    invoke type [waitForObject ":Name Contains_UITextField"] $search
    invoke clickObject [waitForObject ":Search_UINavigationButton"]

    # synchronization: wait until search result view is visible
    waitForObject ":Search Results_UINavigationItemView"

    set userData [$context userData]
    dict set userData "search" $search
    $context userData $userData
}

All data stored in context.userData can be accessed from all Steps and Hooks in all Scenarios of the given Feature. Finally, we need to implement the Step Then the previously entered search term is the title of the view.

Python
@Then("the previously entered search term is the title of the view")
def step(context):
    waitFor("object.exists(':_UINavigationItemView')", 20000)
    test.compare(findObject(":_UINavigationItemView").title, context.userData["search"])
JavaScript
Then("the previously entered search term is the title of the view", function(context) {
    waitFor("object.exists(':_UINavigationItemView')", 20000);
    test.compare(findObject(":_UINavigationItemView").title, context.userData["search"]);
});
Perl
Then("the previously entered search term is the title of the view", sub {
    my $context = shift;
    waitFor("object::exists(':_UINavigationItemView')", 20000);
    test::compare(findObject(":_UINavigationItemView")->title, $context->{userData}{"search"});
});
Ruby
Then("the previously entered search term is the title of the view") do |context|
  waitFor("Squish::Object.exists(':_UINavigationItemView')", 20000)
  Test.compare(findObject(":_UINavigationItemView").title, context.userData[:search])
end
Tcl
Then "the previously entered search term is the title of the view" {context} {
    waitFor {object exists ":_UINavigationItemView"} 20000
    test compare [property get [findObject ":_UINavigationItemView"] title] [dict get [$context userData] "search"]
}
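
The userData mechanism behaves like a plain dictionary that Squish attaches to the context object passed into every Step and Hook. The following stand-alone sketch illustrates the idea in plain Python; the Context class and the step functions are hypothetical stand-ins for Squish's machinery, not the Squish API itself:

```python
# Hypothetical, stand-alone illustration of the userData mechanism.
# Plain Python only -- not the Squish BDD API.

class Context:
    """Mimics the context object Squish passes to BDD Steps and Hooks."""
    def __init__(self):
        self.userData = {}

def when_enter_search(context, search):
    # A When step stores the entered search term for later Steps.
    context.userData["search"] = search

def then_title_is_search_term(context, title):
    # A later Then step reads back the value stored by the When step.
    return context.userData["search"] == title

context = Context()
when_enter_search(context, "Hydrogen")
print(then_title_is_search_term(context, "Hydrogen"))  # True
```

Because the same context object is handed to every Step of a Scenario, anything one Step stores in userData is visible to all Steps and Hooks that run afterwards.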
4.8.2.3.8. Scenario Outline

Assume our Feature contains the following two Scenarios:

Scenario: State after searching with exact match
    Given elements application is running
    When I switch to the search view
    And I enter 'Hydrogen' into the search field and tap Search
    Then the entry '1: Hydrogen (H)' should be present

Scenario: State after searching with exact match
    Given elements application is running
    When I switch to the search view
    And I enter 'Helium' into the search field and tap Search
    Then the entry '2: Helium (He)' should be present

As we can see, these Scenarios perform the same actions using different test data. The same result can be achieved with a Scenario Outline (a Scenario template with placeholders) and Examples (a table of parameters).


Scenario Outline: Doing a search with exact match multiple times
  Given elements application is running
  When I switch to the search view
  And I enter '<Name>' into the search field and tap Search
  Then the entry '<Number>: <Name> (<Symbol>)' should be present
  Examples:
     | Name     | Number | Symbol |
     | Hydrogen | 1      | H      |
     | Helium   | 2      | He     |
     | Carbon   | 6      | C      |

Please note that the OnScenarioEnd hook will be executed at the end of each loop iteration in a Scenario Outline.
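
Conceptually, the runner expands the Scenario Outline into one concrete Scenario per Examples row by substituting every <placeholder> with the value from the matching column, running the OnScenarioEnd hook after each row. The following stand-alone Python sketch illustrates that expansion; it is a simplified model, not how Squish implements Scenario Outlines internally:

```python
# Stand-alone sketch of Scenario Outline expansion.
# Plain Python only -- a simplified model, not Squish internals.

def expand_outline(steps, examples):
    """Yield one concrete step list per Examples row, replacing each
    <placeholder> with the value from the corresponding column."""
    header, *rows = examples
    for row in rows:
        values = dict(zip(header, row))
        concrete = []
        for step in steps:
            for name, value in values.items():
                step = step.replace("<%s>" % name, value)
            concrete.append(step)
        yield concrete

steps = ["When I enter '<Name>' into the search field and tap Search",
         "Then the entry '<Number>: <Name> (<Symbol>)' should be present"]
examples = [["Name", "Number", "Symbol"],
            ["Hydrogen", "1", "H"],
            ["Helium", "2", "He"]]

for scenario in expand_outline(steps, examples):
    for step in scenario:
        print(step)
    # In Squish, the OnScenarioEnd hook fires here, once per row.
    print("-- end of iteration --")
```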

4.8.2.4. Test execution

In the Squish IDE, you can execute all Scenarios in a Feature, or only one selected Scenario. To execute all Scenarios, run the corresponding Test Case by clicking the Play button in the Test Suites view.

Execute all Scenarios from Feature

To execute only one Scenario, open the Feature file, right-click the given Scenario and choose Run Scenario. Alternatively, click the Play button next to the respective Scenario in the Scenarios tab of the Test Case Resources.

Execute one Scenario from Feature

After a Scenario is executed, the Feature file is colored according to the execution results. More detailed information (such as logs) can be found in the Test Results view.

Execution results in Feature file

4.8.2.5. Test debugging

Squish lets you pause the execution of a Test Case at any point in order to check script variables, spy on application objects, or run custom code in the Squish script console. To do this, set a breakpoint before starting the execution, either on a line of the Feature file containing a Step, or on any line of executed script code (i.e., in the middle of a Step definition).

Breakpoint in Feature file

After the breakpoint is reached, you can inspect all application objects and their properties. If a breakpoint placed in a Step definition or in a hook is reached, you can additionally add Verification Points or record code snippets.

4.8.2.6. Re-using Step definitions

BDD test maintainability can be increased by reusing Step definitions. For example, the following call imports all Step definitions from the steps directories in the Test Case Resources and the Test Suite Resources, in that order.

Python
collectStepDefinitions('./steps', '../shared/steps')
JavaScript
collectStepDefinitions('./steps', '../shared/steps');
Perl
use Squish::BDD;
collectStepDefinitions("./steps", "../shared/steps");
Ruby
include Squish::BDD
collectStepDefinitions "./steps", "../shared/steps"
Tcl
source [findFile "scripts" "tcl/bdd.tcl"]
Squish::BDD::collectStepDefinitions "./steps" "../shared/steps"

If the same Step definition is provided in multiple directories, the first occurrence found is used. Hence, in the above example, the definition from the Test Case Resources is used; if no definition is found in that directory, the definition from the Test Suite Resources is used.
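
This first-occurrence-wins lookup can be pictured as scanning the directories in the order they were passed to collectStepDefinitions and keeping only the first definition registered for each step pattern. A stand-alone Python sketch of that precedence rule follows; the directory contents are hypothetical, and this is a model of the behavior, not the actual implementation:

```python
# Stand-alone sketch of first-occurrence-wins Step lookup.
# Plain Python -- a model of the behavior, not Squish's implementation.

def collect_definitions(*directories):
    """Merge step definitions from several sources; an earlier source
    wins over a later one for the same step pattern."""
    registry = {}
    for definitions in directories:
        for pattern, func in definitions.items():
            # setdefault only registers the pattern if no earlier
            # directory already provided a definition for it.
            registry.setdefault(pattern, func)
    return registry

# Hypothetical contents of ./steps (Test Case Resources) and
# ../shared/steps (Test Suite Resources):
case_steps = {"I switch to the search view": "case-level definition"}
suite_steps = {"I switch to the search view": "suite-level definition",
               "the search field is empty": "suite-level definition"}

registry = collect_definitions(case_steps, suite_steps)
print(registry["I switch to the search view"])  # case-level definition
print(registry["the search field is empty"])    # suite-level definition
```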

4.8.3. Tutorial: Migration of existing tests to BDD

This chapter is aimed at users who have existing standard Squish tests and would like to introduce Behavior Driven Testing. The first section describes how to keep the existing tests and create new tests using the BDD approach. The second section describes how to convert existing standard tests into BDD tests.

4.8.3.1. Extend existing tests to BDD

The first option is to keep any existing standard Squish tests and extend them by adding new BDD tests. It is possible to have a Test Suite containing both standard Test Cases and BDD Test Cases. Simply open an existing Test Suite with standard Test Cases and choose the New BDD Test Case option from the drop-down list.

Creating new BDD Test Case

Assuming your existing standard Test Cases make use of a library and you call shared functions to interact with the AUT, those functions can still be used in both existing standard Test Cases and newly created BDD Test Cases. In the example below, a function is used in multiple standard Test Cases:

Python
def switchToSearchView():
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9)
JavaScript
function switchToSearchView(){
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9);
}
Perl
sub switchToSearchView{
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9);
}
Ruby
def switchToSearchView
  clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9)
end
Tcl
proc switchToSearchView {} {
    invoke clickObject [waitForObject ":Search_UITableViewLabel"] 179 9
}

New BDD Test Cases can easily use the same function:

Python
@When("I switch to the search view")
def step(context):
    switchToSearchView()
JavaScript
When("I switch to the search view", function(context) {
    switchToSearchView();
});
Perl
When("I switch to the search view", sub {
    my $context = shift;
    switchToSearchView();
});
Ruby
When("I switch to the search view") do |context|
  switchToSearchView()
end
Tcl
When "I switch to the search view" {context} {
    switchToSearchView
}

4.8.3.2. Convert existing tests to BDD

The second option is to convert an existing Test Suite that contains standard Test Cases into behavior driven tests. Since a Test Suite can contain both standard Test Cases and BDD Test Cases, the migration can be done gradually. A Test Suite containing a mix of both Test Case types can be executed and its results analyzed without any extra effort.

The first step is to review all Test Cases of the existing Test Suite and group them by the Feature they test. Each standard Test Case will be transformed into a Scenario, which is a part of a Feature. For example, assume we have 5 standard Test Cases. After review, we realize that those standard Test Cases examine two Features. Therefore, when migration is completed, our Test Suite will contain two BDD Test Cases, each of them containing one Feature. Each Feature will contain multiple Scenarios. In our example the first Feature contains three Scenarios and the second Feature contains two Scenarios.

Conversion Chart

To begin, open a Test Suite in the Squish IDE containing the standard Squish tests that are planned to be migrated to BDD tests. Next, create a new Test Case by choosing the New BDD Test Case option from the context menu. Each BDD Test Case contains a test.feature file that can hold at most one Feature. Open the test.feature file to describe the Feature using the Gherkin language. Following the syntax from the template, edit the Feature name and optionally provide a short description. Then analyze which actions and verifications are performed in the standard Test Case that is going to be migrated. This is how an example Test Case for the elements application could look:

Python
def main():
    startApplication("Elements")
    ctx_1 = waitForApplicationLaunch()
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9)
    waitFor("object.exists(':Name Contains_UITextField')", 20000)
    test.compare(findObject(":Name Contains_UITextField").text, "")
JavaScript
function main(){
    startApplication("Elements");
    ctx_1 = waitForApplicationLaunch();
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9);
    waitFor("object.exists(':Name Contains_UITextField')", 20000);
    test.compare(findObject(":Name Contains_UITextField").text, "");
}
Perl
sub main {
    startApplication("Elements");
    waitForApplicationLaunch();
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9);
    waitFor("object::exists(':Name Contains_UITextField')", 20000);
    test::compare(findObject(":Name Contains_UITextField")->text, "");
}
Ruby
def main
  startApplication("Elements")
  ctx_1 = waitForApplicationLaunch()
  clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9)
  waitFor("object.exists(':Name Contains_UITextField')", 20000)
  test.compare(findObject(":Name Contains_UITextField").text, "")
end
Tcl
proc main {} {
    startApplication "Elements"
    waitForApplicationLaunch
    invoke clickObject [waitForObject ":Search_UITableViewLabel"] 179 9
    waitFor {object exists ":Name Contains_UITextField"} 20000
    test compare [property get [findObject ":Name Contains_UITextField"] text] ""
}

After analyzing the above standard Test Case, we can create the following Scenario and add it to the test.feature file:

Scenario: Initial state of the search view
   Given elements application is running
   When I switch to the search view
   Then the search field is empty

Next, right-click the Scenario and choose Create Missing Step Implementations from the context menu. This creates a skeleton of Step definitions:

Python
@Given("elements application is running")
def step(context):
    test.warning("TODO implement elements application is running")

@When("I switch to the search view")
def step(context):
    test.warning("TODO implement I switch to the search view")

@Then("the search field is empty")
def step(context):
    test.warning("TODO implement the search field is empty")
JavaScript
Given("elements application is running", function(context) {
    test.warning("TODO implement elements application is running");
});

When("I switch to the search view", function(context) {
    test.warning("TODO implement I switch to the search view");
});

Then("the search field is empty", function(context) {
    test.warning("TODO implement the search field is empty");
});
Perl
Given("elements application is running", sub {
    my $context = shift;
    test::warning("TODO implement elements application is running");
});

When("I switch to the search view", sub {
    my $context = shift;
    test::warning("TODO implement I switch to the search view");
});

Then("the search field is empty", sub {
    my $context = shift;
    test::warning("TODO implement the search field is empty");
});
Ruby
Given("elements application is running") do |context|
  Test.warning "TODO implement elements application is running"
end

When("I switch to the search view") do |context|
  Test.warning "TODO implement I switch to the search view"
end

Then("the search field is empty") do |context|
  Test.warning "TODO implement the search field is empty"
end
Tcl
Given "elements application is running" {context} {
    test warning "TODO implement elements application is running"
}

When "I switch to the search view" {context} {
    test warning "TODO implement I switch to the search view"
}

Then "the search field is empty" {context} {
    test warning "TODO implement the search field is empty"
}

Now we move the code snippets from the standard Test Case into the respective Step definitions and remove the lines containing test.warning. If your standard Test Cases make use of shared scripts, you can call those functions inside the Step definitions as well. For example, the final result could look like this:

Python
@Given("elements application is running")
def step(context):
    startApplication("Elements")
    waitForApplicationLaunch()

@When("I switch to the search view")
def step(context):
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9)

@Then("the search field is empty")
def step(context):
    waitFor("object.exists(':Name Contains_UITextField')", 20000)
    test.compare(findObject(":Name Contains_UITextField").text, "")
JavaScript
Given("elements application is running", function(context) {
    startApplication("Elements");
    waitForApplicationLaunch();
});

When("I switch to the search view", function(context) {
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9);
});

Then("the search field is empty", function(context) {
    waitFor("object.exists(':Name Contains_UITextField')", 20000);
    test.compare(findObject(":Name Contains_UITextField").text, "");
});
Perl
Given("elements application is running", sub {
    my $context = shift;
    startApplication("Elements");
    waitForApplicationLaunch();
});

When("I switch to the search view", sub {
    my $context = shift;
    clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9);
});

Then("the search field is empty", sub {
    my $context = shift;
    waitFor("object::exists(':Name Contains_UITextField')", 20000);
    test::compare(findObject(":Name Contains_UITextField")->text, "");
});
Ruby
Given("elements application is running") do |context|
  startApplication("Elements")
  waitForApplicationLaunch()
end

When("I switch to the search view") do |context|
  clickObject(waitForObject(":Search_UITableViewLabel"), 179, 9)
end

Then("the search field is empty") do |context|
  waitFor("Squish::Object.exists(':Name Contains_UITextField')", 20000)
  Test.compare(findObject(":Name Contains_UITextField").text, "")
end
Tcl
Given "elements application is running" {context} {
    startApplication "Elements"
    waitForApplicationLaunch
}

When "I switch to the search view" {context} {
    invoke clickObject [waitForObject ":Search_UITableViewLabel"] 179 9
}

Then "the search field is empty" {context} {
    waitFor {object exists ":Name Contains_UITextField"} 20000
    test compare [property get [findObject ":Name Contains_UITextField"] text] ""
}

Additionally, when a standard Test Case's execution ends, Squish terminates the AUT. After converting standard Test Cases into Scenarios, we must ensure that the AUT is terminated at the end of each Scenario as well. This can be done by implementing an OnScenarioEnd hook.

Python
@OnScenarioEnd
def hook(context):
    for ctx in applicationContextList():
        ctx.detach()
JavaScript
OnScenarioEnd(function(context) {
    applicationContextList().forEach(function(ctx) { ctx.detach(); });
});
Perl
OnScenarioEnd(sub {
    foreach (applicationContextList()) {
        $_->detach();
    }
});
Ruby
OnScenarioEnd do |context|
    applicationContextList().each { |ctx| ctx.detach() }
end
Tcl
OnScenarioEnd {context} {
    foreach ctx [applicationContextList] {
        applicationContext $ctx detach
    }
}

The above example was simplified for this tutorial. To take full advantage of Behavior Driven Testing in Squish, please familiarize yourself with the Behavior Driven Testing section (Section 6.19) in the API Reference Manual (Chapter 6).




[13] Each AUT must be registered with the squishserver so that test scripts do not need to include the AUT's path, thus making the tests platform-independent. Another benefit of registering is that AUTs can be tested without the Squish IDE—for example, when doing regression testing.