Squish Coco

Code Coverage Measurement for Tcl, C# and C/C++

Part I
Setup and Tutorials

Chapter 1 Installation and Setup

This chapter describes the installation and setup of SquishCoco on your machine, as well as later updates of the software and licenses.

1.1 Choosing a license

There are two kinds of licenses, node-locked and floating licenses.

A node-locked license is bound to a specific machine and a specific user account. On that account, you can run as many SquishCoco processes as you like. Note that if you run SquishCoco from a Continuous Integration Server, the server has a separate account; in this case SquishCoco needs an additional node-locked license.

Floating licenses are bound to processes, not to computers. One needs as many floating licenses as there are SquishCoco processes running at the same time. A SquishCoco process can be

  1. an instance of CoverageScanner or of the compiler wrappers,
  2. an instance of the CoverageBrowser,
  3. an instance of one of the command line tools cmmerge, cmcsexeimport or cmreport.

Every compiler call that is instrumented by SquishCoco is counted separately because it requires one instance of the CoverageScanner to run. Builds with several compilations in parallel should however cause no real problem since a process that cannot get a license will wait until one is freed.

Floating licenses are managed by a license server (see Chapter 26). The server must be running on a machine that is reachable from the machine on which SquishCoco runs. froglogic will provide a license file which determines the maximum number of SquishCoco processes that can be active at the same time. (The license server itself is not counted among the processes that need a license.)

1.2 Installation

An installer for SquishCoco can be downloaded via http://www.froglogic.com/secure.

The installers for the various platforms supported by SquishCoco have a common name scheme, but the way they must be executed differs. The naming scheme is SquishCocoSetup_x.y.z_⟨platform⟩.⟨suffix⟩, where x.y.z is the program version, ⟨platform⟩ describes the operating system and other details of the installation, and ⟨suffix⟩ is an operating-system-specific suffix. (In the noncommercial version, SquishCoco is replaced by SquishCocoNoncommercial.)

Microsoft® Windows
The installer is a file SquishCocoSetup_x.y.z_Windows_x86.exe or SquishCocoSetup_x.y.z_Windows_x64.exe and must be executed.
Linux
The installer is a file SquishCocoSetup_x.y.z_⟨platform⟩.run and must be executed with bash, as in
$ bash SquishCocoSetup_3.3.2_Linux_x86_64.run

Note that the CentOS version of the installer is also valid for RedHat Linux.

Apple® Mac OS X
The installation package is a file of the form SquishCocoSetup_x.y.z_⟨platform⟩.pkg. Click on it and the installer will be started.

If no valid license is present for your account, the installer will run the License Wizard (see Chapter 25) at the end of the installation. The License Wizard will then allow you to configure the license.

1.2.1 Installation of a license server

If you have chosen a floating license, you need to specify a machine on your local network on which the license server program runs.

The license server needs a configuration file to run. It specifies the number of licenses served, the machine on which the server runs, and the port it uses. To generate the data needed for the configuration file, SquishCoco must be installed first.

Therefore, install SquishCoco on the license server machine. After the installation, cocolicwizard will be started; ignore it. Instead, run the command cocolicserver --server-identifier and redirect its output to a file. How this is done depends slightly on the operating system.

Microsoft® Windows
In a command window, type
c:\project>"C:\Program Files\squishcoco\cocolicserver.exe" --server-identifier > machine.txt
Linux
Open a command shell and type
$ /opt/SquishCoco/bin/cocolicserver --server-identifier > machine.txt
Apple® Mac OS X
Open a command shell and type
$ /Applications/SquishCoco/cocolicserver --server-identifier > machine.txt

This will generate a file machine.txt in the directory in which the command was executed. Send it to froglogic, together with the number of requested licenses (and, if necessary, the port number).

1.2.2 Running the license server

froglogic sends you a configuration file, which is usually called licserver.cfg. With it, the license server can run. In the simplest case, write

$ cocolicserver -c licserver.cfg

The server then provides licenses to other machines in the same network. For additional options of cocolicserver, see Chapter 26.

1.3 Updates

A new license can be installed with cocolic or cocolicwizard.

To update to a newer version of SquishCoco, download and install it. It will then overwrite the previous version. The license will not be touched; you can continue to use it.

Chapter 2 Instrumenting a simple project

In this chapter we present a small project with unit tests and show how it can be instrumented. The project is a simple expression parser, and it has few requirements besides a C++ compiler.

The project replicates in miniature an existing project that has been extended with unit tests and for which we will now use SquishCoco to find out how good the test coverage is. Instrumentation should therefore be non-intrusive and should not change the project very much.

The procedure to set up a project for instrumentation under UNIX® and Apple® Mac OS X differs from the one for Microsoft® Windows. The setup description therefore has two versions, which appear in the following sections.

2.1 UNIX® and Apple® Mac OS X setup

2.1.1 Setup

The parser example can be found in SquishCoco’s installation directory. Under UNIX®, this is the directory /opt/SquishCoco/ or, if you have installed SquishCoco locally, the subdirectory SquishCoco/ of your home directory.1 Under Apple® Mac OS X, the installation directory is /Applications/SquishCoco/.

We will refer to it as the SquishCoco/ directory, wherever it is located. The SquishCoco examples, together with their supporting programs, are in SquishCoco/samples/, and the parser is in SquishCoco/samples/parser/. This directory contains three versions of the program, in the directories parser_v1/ to parser_v3/. They represent the parser in different stages of its development.

The example uses CppUnit as its unit test framework. There is a version of CppUnit in the SquishCoco/samples/ directory, and the parser example is prepared to use it.

Therefore, copy the whole samples/ directory to your workspace now and make parser_v1/ your working directory. If SquishCoco is installed in /opt/SquishCoco/, this is done in the following way:

$ cp -a /opt/SquishCoco/samples .
$ cd samples/parser/parser_v1

Also make sure that the SquishCoco tools are in your search path. If they are not, you can add them now by writing

$ . cocopath.sh

(Don’t forget the dot at the beginning!) Now programs like coveragebrowser can be called from the command line.

2.1.2 Structure of the parser directories

We will use samples/parser/parser_v1/ as our working directory. It contains C++ source files and header files, together with a unit test file, unittests.cpp. The makefile has been prepared for unit tests, but not for instrumentation. The instrumentation is done with the help of the bash script instrumented. (There are also some files that are needed in the Microsoft® Windows version. We will ignore them here.)

The makefile is called gmake.mak to distinguish it from the Microsoft® Windows makefile, nmake.mak. Therefore one has to write make -f gmake.mak in places where otherwise a make would be enough.

2.1.3 Compiling and testing

Run “make -f gmake.mak” to compile the program. It is a simple expression parser and calculator.

$ ./parser
Enter an expression and press Enter to calculate the result.
Enter an empty expression to quit.

> 2+2
Ans = 4
> Pi
Ans = 3.14159
> sin(Pi)
Ans = 1.22465e-16
> sinn(90)
Error: Unknown function sinn (col 9)
> sin(90)
Ans = 0.893997
> cos(pi)
Ans = -1
>
$

We have added some unit tests for the main class, Parser. Look into the file unittests.cpp to see the tests that have been included. Execute it with make -f gmake.mak tests. You will see that eight tests have been executed.

2.1.4 Instrumentation

We have kept the instrumentation separate from the main project. The core of the instrumentation is a short shell script, instrumented. It is a simple wrapper, and calling “instrumented ⟨command⟩” executes ⟨command⟩ with a few environment variables set. We will do this now. Enter

$ make -f gmake.mak clean
$ ./instrumented make -f gmake.mak tests

The first command removes all object files, since we need everything to be recompiled. The second command then compiles the program with instrumentation and runs the tests. That’s all!

We now have a look at what the script has done and how it has done it. List the contents of your parser directory:

$ ls
constants.h     functions.o.csmes   parser.cpp        unittests.o
error.cpp       instrumented        parser.h          unittests.o.csmes
error.h         LICENSE             parser.o          variablelist.cpp
error.o         main.cpp            parser.o.csmes    variablelist.h
error.o.csmes   main.o              unittests         variablelist.o
functions.cpp   main.o.csmes        unittests.cpp     variablelist.o.csmes
functions.h     Makefile            unittests.csexe
functions.o     NOTICE              unittests.csmes

You see two kinds of files that do not appear as a result of normal compilation. The .csmes files contain the information that is needed for coverage measurement, and the .csexe files contain the results of code execution. The files that end in .o.csmes are temporary files and are only used during compilation.

This time, the only program that was actually executed was unittests, so the only .csexe file is unittests.csexe. To see the coverage results, you can therefore start the CoverageBrowser with the command

$ coveragebrowser -m unittests.csmes -e unittests.csexe

Then the CoverageBrowser will start with a modal window, "Load Execution Result". Click on the "Import" button to load the data.

By default, CoverageBrowser automatically deletes the .csexe file after loading it. You can switch this behavior off by deselecting the "Delete after loading" checkbox. If you keep it on, select the "File->Save" menu item after the import so that the imported execution data are saved in the unittests.csmes database.

For the use of CoverageBrowser, see Part III. Here we will instead describe how the instrumentation is done.

2.1.5 How the project is instrumented

The file instrumented is a short bash script:

#!/bin/bash

. getcoco.sh                     # Get Coco variables

export PATH=$COCO_WRAPPER_DIR:$PATH
export COVERAGESCANNER_ARGS='--cs-on'

"$@"

At its beginning, the script sources the shell script getcoco.sh, which sets the shell variable COCO_WRAPPER_DIR. It contains the name of the directory in which SquishCoco’s compiler wrappers are installed. Then there are two export statements, and the final cryptic statement executes the command line parameters of instrumented. So if you call ./instrumented make -f gmake.mak tests, the command make -f gmake.mak tests is executed by the script, but in a different environment than usual.

The important parts are therefore the two export statements. In the first one, the search path is manipulated so that the programs in /opt/SquishCoco/wrapper/bin/ are searched first. This directory contains a lot of files with the same names as the compilers2 that are supported by SquishCoco:

$ ls /opt/SquishCoco/wrapper/bin
ar         g++-4.9    x86_64-linux-gnu-ar
c89-gcc    gcc        x86_64-linux-gnu-g++
c99-gcc    gcc-4.6    x86_64-linux-gnu-g++-4.6
...

These programs are the compiler wrappers. With the new PATH, they are executed instead of the real compilers. The compiler wrappers are actually symbolic links to a single program, coveragescanner (see Part IV). When executed to compile a source file, they create an instrumented version of the source and then run the original compiler to compile it.

In the second export statement, additional flags for the compiler wrappers (see Chapter 15) are set. Here we set only one option, --cs-on. If it is not present, the compiler wrappers are inactive and just call the compilers they represent.
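
If further CoverageScanner options are needed later, they can simply be appended to COVERAGESCANNER_ARGS in this script. The following is a minimal sketch (not part of the sample project) of a variant of instrumented that additionally switches on execution counting with --cs-count, an option described later in Chapter 3:

#!/bin/bash

. getcoco.sh                     # Get Coco variables

export PATH=$COCO_WRAPPER_DIR:$PATH
# --cs-on activates the compiler wrappers; further options are simply appended.
export COVERAGESCANNER_ARGS='--cs-on --cs-count'

"$@"

It is used exactly like the original script, for example with ./instrumented make -f gmake.mak tests.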

2.1.6 Additional changes

It is also convenient to add make targets to handle the files generated by CoverageScanner. In the parser directory, the Makefile has been changed in the following way:

clean: testclean
        ...
        -$(DEL_FILE) *.o.csmes          # (added)

distclean: clean
        ...
        -$(DEL_FILE) *.csmes *.csexe    # (added)

Since the .o.csmes files are needed only for compilation, they can be deleted whenever the .o files are deleted (which is what make clean does). The .csmes and .csexe files are more precious and should only be deleted when all generated files are removed. Therefore we have added their deletion statements to the distclean target.
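
A full cleanup that also removes the coverage data can then be done with the distclean target; under UNIX® this is simply

$ make -f gmake.mak distclean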

2.2 Microsoft® Windows setup

2.2.1 Setup

The parser example can be found in the directory C:\Program Files\squishcoco\parser. This directory contains three versions of the program, in the subdirectories parser_v1 to parser_v3. They represent the parser in different stages of its development.

The example uses CppUnit as its unit test framework. There is a version of CppUnit in the C:\Program Files\squishcoco directory, and the parser example is prepared to use it.

Since these directories are write-protected, you need to create your own working copies. Therefore, copy the two directories C:\Program Files\squishcoco\parser and C:\Program Files\squishcoco\cppunit-1.12.1 to a directory of your choice. Then remove the write protection of the directories and of all the files contained in them.

2.2.2 Structure of the parser directories

We will use parser\parser_v1 as our working directory. It contains C++ source files and header files, together with a unit test file, unittests.cpp. The makefile has been prepared for unit tests, but not for instrumentation. The instrumentation is done with the help of the batch file instrumented.bat. (There are also some files that are needed in the UNIX® version. We will ignore them here.)

The makefile is called nmake.mak to distinguish it from the UNIX® makefile, gmake.mak. Therefore one has to write nmake /F nmake.mak in places where otherwise an nmake would be enough.

2.2.3 Compiling and testing

We will do the compilation of the example on the command line. To get a command window, execute the batch file CocoCmd.bat that is located in the parser_v1 directory. In this window, the Microsoft® Visual Studio® command line tools (like nmake) are accessible, and also the main SquishCoco programs (like CoverageBrowser).

Run “nmake /F nmake.mak” to compile the program. It is a simple expression parser and calculator.

C:\code\parser\parser_v1>parser.exe
Enter an expression and press Enter to calculate the result.
Enter an empty expression to quit.

> 2+2
Ans = 4
> Pi
Ans = 3.14159
> sin(Pi)
Ans = 1.22465e-16
> sinn(90)
Error: Unknown function sinn (col 9)
> sin(90)
Ans = 0.893997
> cos(pi)
Ans = -1
>
C:\code\parser\parser_v1>

We have added some unit tests for the main class, Parser. Look into the file unittests.cpp to see the tests that have been included. Execute it with nmake /F nmake.mak tests. You will see that eight tests have been executed.

2.2.4 Instrumentation

We have kept the instrumentation separate from the main project. The core of the instrumentation is a short batch file, instrumented.bat. It is a simple wrapper, and calling “instrumented.bat ⟨command⟩” executes ⟨command⟩ with a few environment variables set. We will do this now. Enter

C:\code\parser\parser_v1>nmake /F nmake.mak clean
C:\code\parser\parser_v1>instrumented.bat nmake /F nmake.mak tests

The first command removes all object files, since we need everything to be recompiled. The second command then compiles the program with instrumentation and runs the tests. That’s all!

We now have a look at what the script has done and how it has done it. List the contents of your parser directory:

C:\code\parser\parser_v1>dir /D

 Directory of C:\code\parser\parser_v1

[.]                    LICENSE                unittests.exe
[..]                   main.cpp               unittests.exe.csexe
constants.h            main.obj               unittests.exe.csmes
error.cpp              main.obj.csmes         unittests.exp
error.h                Makefile               unittests.lib
error.obj              nmake.mak              unittests.obj
error.obj.csmes        NOTICE                 unittests.obj.csmes
functions.cpp          parser.cpp             variablelist.cpp
functions.h            parser.h               variablelist.h
functions.obj          parser.obj             variablelist.obj
functions.obj.csmes    parser.obj.csmes       variablelist.obj.csmes
instrumented           README.squishcoco
instrumented.bat       unittests.cpp
              35 File(s)      1,079,919 bytes
               2 Dir(s)  35,159,457,792 bytes free

You see two kinds of files that do not appear as a result of normal compilation. The .csmes files contain the information that is needed for coverage measurement, and the .csexe files contain the results of code execution. The files that end in .obj.csmes are temporary files and are only used during compilation.

This time, the only program that was actually executed was unittests, so the only .csexe file is unittests.exe.csexe. To see the coverage results, you can therefore start the CoverageBrowser with the command

C:\code\parser\parser_v1>coveragebrowser -m unittests.exe.csmes -e unittests.exe.csexe

Then the CoverageBrowser will start with a modal window, "Load Execution Result". Click on the "Import" button to load the data.

By default, CoverageBrowser automatically deletes the .csexe file after loading it. You can switch this behavior off by deselecting the "Delete after loading" checkbox. If you keep it on, select the "File->Save" menu item after the import so that the imported execution data are saved in the unittests.exe.csmes database.

For the use of CoverageBrowser, see Part III. Here we will instead describe how the instrumentation is done.

2.2.5 How the project is instrumented

The file instrumented.bat is a short batch file:

@echo off
setlocal

set PATH=%SQUISHCOCO%\visualstudio;%PATH%
set COVERAGESCANNER_ARGS=--cs-on

call %*

endlocal

The variable SQUISHCOCO contains the name of the directory in which SquishCoco is installed. It is set by SquishCoco during installation.

At the beginning, the setlocal command ensures that the following commands change the environment variables only temporarily. Then there are two set statements, and the final call statement executes the command line parameters of instrumented.bat. So if you call “instrumented.bat nmake /F nmake.mak tests”, the command “nmake /F nmake.mak tests” is executed by the batch file, but in a different environment than usual. At the end, endlocal undoes the changes in the environment variables.

The important parts of the script are therefore the two set statements. In the first one, the search path is manipulated so that the programs in C:\Program Files\squishcoco\visualstudio are searched first.3 This directory contains files with the same names as the compilers and the linker:

C:\code\parser\parser_v1>dir /d "\Program Files\squishcoco\visualstudio"

 Directory of C:\Program Files\squishcoco\visualstudio

[.]             cl.exe          link.cspro      msvcr100.dll
[..]            lib.cspro       link.exe
cl.cspro        lib.exe         msvcp100.dll
               8 File(s)      5,662,299 bytes
               2 Dir(s)  35,053,502,464 bytes free

The .exe files in this directory are the compiler wrappers. With the new PATH, they are executed instead of the real compilers. The compiler wrappers are actually copies of a single program, coveragescanner.exe (see Part IV). When executed to compile a source file, they create an instrumented version of the source and then run the original compiler to compile it.

In the second set statement, additional flags for the compiler wrappers (see Chapter 15) are set. Here we set only one option, --cs-on. If it is not present, the compiler wrappers are inactive and just call the compilers they represent.

The resulting script should work without changes for many simple projects. If more customization is needed, it can often be achieved by adding more options to COVERAGESCANNER_ARGS.

2.2.6 Additional changes

It is also convenient to add make targets to handle the files generated by CoverageScanner. In the parser_v1 directory, the file nmake.mak has been changed in the following way:

clean: testclean
        ...
        -$(DEL_FILE) *.obj.csmes        # (added)

distclean: clean
        ...
        -$(DEL_FILE) *.csmes *.csexe    # (added)

Since the .obj.csmes files are needed only for compilation, they can be deleted whenever the .obj files are deleted (which is what nmake /F nmake.mak clean does). The .csmes and .csexe files are more precious and should only be deleted when all generated files are removed. Therefore we have added their deletion statements to the distclean target.

2.3 Beyond the minimal instrumentation

In the following sections we will show additional abilities of SquishCoco. They will require small changes in the code of the project.

2.3.1 Excluding code from instrumentation

The coverage information generated so far has a problem: It covers too many files. The problematic files are those that belong to the testing framework and not to the tested program. Including them would create artificially low coverage rates.

With SquishCoco, one can exclude files from coverage with additional command line options. In parser_v2, this has been done. Look into parser_v2/instrumented (or parser_v2\instrumented.bat under Microsoft® Windows). In it, three additional command line options have been set, which we will now explain (a sketch of the resulting script follows the list):

  • The option --cs-exclude-path=../../cppunit-1.12.1 excludes the source files of a directory and all its subdirectories. Here we use it to exclude all the files of the CppUnit framework.

    You can use slashes or backslashes with this option—SquishCoco normalizes them internally.

  • The options --cs-exclude-file-wildcard=unittests.cpp and --cs-exclude-file-wildcard=CppUnitListener.cpp exclude specific files. We use them to exclude the files unittests.cpp and CppUnitListener.cpp. (The second file is described below.)
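
The resulting wrapper plausibly looks like the following sketch; it keeps the structure of the parser_v1 script and only appends the three exclusion options (the exact wording of parser_v2/instrumented may differ slightly):

#!/bin/bash

. getcoco.sh                     # Get Coco variables

export PATH=$COCO_WRAPPER_DIR:$PATH
# Activate instrumentation, then exclude the test framework and the test drivers.
export COVERAGESCANNER_ARGS='--cs-on'
COVERAGESCANNER_ARGS="$COVERAGESCANNER_ARGS --cs-exclude-path=../../cppunit-1.12.1"
COVERAGESCANNER_ARGS="$COVERAGESCANNER_ARGS --cs-exclude-file-wildcard=unittests.cpp"
COVERAGESCANNER_ARGS="$COVERAGESCANNER_ARGS --cs-exclude-file-wildcard=CppUnitListener.cpp"

"$@"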

2.3.2 Making the test names visible

For the next modification, we want to change the project such that we know not only whether a line of code is covered by tests, but also by which tests it is covered. For this we will add calls of the CoverageScanner C/C++ library (see Chapter 16.1) to the code, to tell SquishCoco the names of the tests and where they begin and end.

An updated version of the project can be found in the directory parser_v2. The greatest difference from the version in parser_v1 is that the file CppUnitListener.cpp has been added. It is copied almost verbatim from Chapter 32.3.1. The file contains a class CppUnitListener and a new main() function. The main() function in unittests.cpp has been removed, but the file is otherwise unchanged.

CppUnitListener.cpp provides a unit test listener which makes it possible to hook into the framework before and after the execution of each test. One can thus record additional test information, like the name and the result of a test, in the code coverage data without modifying the test code itself. (For a listing of CppUnitListener.cpp and an explanation of how it works, see Chapter 32.3.1.)

Now you can execute this program in the same way as its previous version. View the results in the CoverageBrowser and see the code coverage for each single test item.
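
Under UNIX®, a minimal command sequence for this is sketched below; it assumes that you start in parser_v1/ and that parser_v2/ uses the same makefile names (under Microsoft® Windows, use instrumented.bat and nmake /F nmake.mak instead):

$ cd ../parser_v2
$ ./instrumented make -f gmake.mak tests
$ coveragebrowser -m unittests.csmes -e unittests.csexe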

2.3.3 Patch file analysis

Now consider the following scenario: In a large project, a last-minute patch has to be evaluated. There is not enough time to run the full test suite, but some risk assessment needs to be done. For situations like this, SquishCoco provides the feature of patch analysis. With it, one can specifically display the code coverage for the changed lines of code and find the tests in a large suite that cover them. One can then see how risky the changes are.

We will simulate this situation in our example. In a new version of the parser, the character classification functions in the code, isWhiteSpace(), isAlpha(), etc., have been changed and use the standard C classification functions, like isspace(), instead of strchr(). The new version of the parser can be found in the directory parser_v3.

We will now compare it with the version in parser_v2, but neither run the tests nor even compile it. Instead we need the following two pieces of information:

  1. The coverage data from parser_v2, as generated in the previous section.
  2. A patch file showing the differences between the two directories. There is already a diff file in the parser directory that you can use, parser.diff.

    The diff file must be in the “unified” difference format. This is the standard output format of the diff functionality of many version control systems, e.g. of git diff (see Chapter 28). Under UNIX®-like systems, the patch file can also be generated by the diff utility. It would be invoked from the parser directory in the following way:

    $ diff -u parser_v2 parser_v3 > parser.diff

    There is also a Microsoft® Windows version of GNU diff in the parser directory, therefore the same command works in a Windows command shell too.

Start the CoverageBrowser. Then load the instrumentation database parser_v2/unittests.csmes via the menu entry "File->Open…", and the measurements file parser_v2/unittests.csexe with "File->Load Execution Report…".

Now select the menu entry "Reports->Patch File Analysis…". When the "Patch File Analysis" dialog appears:

  • Enter a title in the "Title" box.
  • Enter the path to the patch file in the "Patch File" box.
  • Enter a path (including the file name) for the report in the "Type" box.
  • Set the "Tooltip Maximal Size" to 5 (or any value greater than zero).
  • Make any other option adjustments.

Then click "Open" to view the report in the browser.

2.3.4 The patch analysis report

The report consists of three tables that summarize the influence that the patch has on code coverage, and then an annotated version of the patch file.

The two tables in the section “Overview” contain statistics about the number and kind of the lines that were influenced by the patch.

The first table groups the patched lines in the code by the results of the tests that executed them. One can see here how much influence the patch has on tests that have passed (and now could fail) or failed (and could now succeed). There are also entries for manually checked tests and for those whose status is unknown. In our example, we did not register the test results and all our tests are counted as “Unknown”.

The second table shows the kind of changes to expect in the test coverage after the patch has been applied. It consists of three columns, containing the statistics about removed and inserted lines and their sum. From the first two columns one can see whether the test coverage for the patched code grows or falls. (In the parser example, it stays the same.) The last line in the table is also important: It shows the number of lines which SquishCoco could not classify as inserted or removed. Patch analysis is a heuristic, after all.

The section “List of tests influenced by the modifications” is a list with the names of the tests that executed the patched code, together with their results. It is helpful for a qualitative analysis of the patch. In our example, we can see that all tests execute code that is affected by the patch.

pictures/manual.tmp001.png
Figure 2.1: Coverage of the patched lines by the tests

More details can be found in the “Patch File” section of the report. It is an annotated version of the original patch file, with the old version of the text in red and the new version in green. Lines that did not change are shown in gray. The most important column is “Tests”, which shows for each code line the number of tests that executed it (if it is removed) or will probably execute it after the patch is applied. A tooltip shows the names of these tests.

Chapter 3 Getting started with Qt

The following example is more complex. We will take Qt’s TextEdit example and use it to illustrate how Squish Coco can be used at different stages of the development process. To cover the whole coding cycle, we will first show how an instrumented application is created, perform manual tests and analyze their results. Then we will create an instrumented unit test.

In a second step (see Chapter 3.2), we will cover those aspects which are more interesting to product managers: analysing the impact of code changes (e.g., bug fixes)—in particular, tracking their test progress, externalizing testing, and collecting the code coverage analysis of a complete testing team.

The Qt framework can directly be downloaded from http://qt-project.org.

The modified TextEdit sample is available in the doc directory of the Squish Coco installation. Copy the directory doc/textedit to your working directory.

3.1 Compiling the example application

In this section, we will work with the files in the directory textedit/textedit_v1.

We want to be able to build our application both normally and with generated test coverage instrumentation code, without having to change our source code. This can be achieved by making a small change to the application’s project (.pro) file. We can then use a command line option for qmake to generate an instrumented build instead of a normal one.

To make the project file suitable both for normal and for instrumented builds, we create a set of definitions that can be activated by a command line switch; in qmake’s terminology this is called a scope. The following listing (see Figure 3.1) shows a minimal scope for code instrumentation.

The following must be done:

  • We must ensure that precompiled headers are disabled when the code is instrumented. qmake allows us to do this by setting the PRECOMPILED_HEADER variable to an empty value.
  • It is also necessary to increase the value of QMAKE_LINK_OBJECT_MAX in order to disable the usage of the linker script. We set it here to 10000.
  • Finally, qmake must be instructed to use CoverageScanner’s wrappers for compilation. This is done by prefixing the names of the compilation tools with cs.
CodeCoverage {
    PRECOMPILED_HEADER =
    QMAKE_LINK_OBJECT_MAX = 10000

    QMAKE_CC = cs$$QMAKE_CC
    QMAKE_CXX = cs$$QMAKE_CXX
    QMAKE_LINK = cs$$QMAKE_LINK
    QMAKE_LINK_SHLIB = cs$$QMAKE_LINK_SHLIB
    QMAKE_AR = cs$$QMAKE_AR
    QMAKE_LIB = cs$$QMAKE_LIB
}
Figure 3.1: Minimal qmake configuration

These modifications are sufficient for most standard C and C++ applications. For Qt applications we must use additional settings in order to ensure that CoverageScanner does not instrument the source code that is generated by Qt’s tools (e.g., by uic, qrc and the moc).

To exclude qrc resource files from instrumentation, we must tell CoverageScanner not to instrument any file with a name that begins with qrc_. This can be done with the command line option --cs-exclude-file-regex=qrc_.*. Since we don’t want to have to enter this option manually, we will put it in the .pro file. Similarly, to let CoverageScanner ignore the files generated by uic we can use the same command line option, only this time with a different file matching regular expression: --cs-exclude-file-regex=ui_.*.

Squish Coco also provides a command line option that is specific to applications built using the Qt4 toolkit: --cs-qt4. This option ensures that CoverageScanner:

  • does not instrument the Q_OBJECT and Q_DECLARE_PLUGIN macros.
  • does not instrument code generated by the moc, except that signal emissions and slot receives are instrumented, since they are vital to a Qt program’s logic.

We can also exercise some control over the level of instrumentation and what information is reported. For example, we can switch on the counting of code executions with the --cs-count command line option, or we can enable full instrumentation at decision/condition level with the --cs-full-instrumentation option. With the --cs-output option we can specify the file the execution report is written to when the application terminates. (By default the output is written to the file ⟨appname⟩.csexe, where ⟨appname⟩ is the name of the program that has been executed.)

So, for a Qt4-based application, the final Squish Coco scope in the application’s .pro file will look something like this.

CodeCoverage {
    COVERAGE_OPTIONS = --cs-count --cs-full-instrumentation
    COVERAGE_OPTIONS += --cs-qt4
    COVERAGE_OPTIONS += --cs-output=textedit.exe
    COVERAGE_OPTIONS += --cs-exclude-file-regex=qrc_.*

    QMAKE_CFLAGS += $$COVERAGE_OPTIONS
    QMAKE_CXXFLAGS += $$COVERAGE_OPTIONS
    QMAKE_LFLAGS += $$COVERAGE_OPTIONS

    QMAKE_CC = cs$$QMAKE_CC
    QMAKE_CXX = cs$$QMAKE_CXX
    QMAKE_LINK = cs$$QMAKE_LINK
    QMAKE_LINK_SHLIB = cs$$QMAKE_LINK_SHLIB
    QMAKE_AR = cs$$QMAKE_AR
    QMAKE_LIB = cs$$QMAKE_LIB
}
Figure 3.2: Final qmake configuration

In this form, the scope has been added to the project file textedit_v1/textedit_v1.pro. When we now run qmake without options, a Makefile for a normal build is generated; but we can also build an instrumented version of the program in the following way:

Linux/Apple® Mac OS X:
    qmake CONFIG+=CodeCoverage
    make

Microsoft® Windows:
    qmake CONFIG+=CodeCoverage
    nmake

In either case we still end up with a textedit.exe executable, but with an instrumented build we will also get an additional file, textedit.exe.csmes. It contains the instrumentation database.

3.1.1 The First Code Coverage Results

For our very first exercise we will simply execute TextEdit and then quit the application straight away. This will cause a file called textedit.exe.csexe to be generated. The file is in a binary format, so it is not human readable. It is an execution report that contains a snapshot of the most recent execution that we have just done.

To see the results we must run the CoverageBrowser tool and load the textedit.exe.csmes instrumentation database (Menu: "File->Open…"). After the file has been opened, no coverage information is available because no execution reports have been imported. The instrumented code lines are shown grayed-out and no coverage statistics have been computed (see Figure 3.3).

pictures/manual.tmp002.png
Figure 3.3: CoverageBrowser after loading the TextEdit’s instrumentation database

In order to see an execution report, click "File->Load Execution Report…", which invokes the import dialog. As a minimum you should enter the filename (including the full path) of the textedit.exe.csexe file (field: ‘File Name’), and give the test a name (field: ‘Name’), e.g. “Start and Quit”. Switch the ‘Delete after loading’ option on because the report is no longer needed after our import. It is also helpful to set the ‘When file becomes modified’ option to “Open this dialog”, because then the file import dialog is automatically opened after each run and the new .csexe file that it creates can be added to the database.

After the import has finished, the code coverage information is visible:

  • Coverage statistics for the functions and methods of all source files of the application are shown.
  • The source window is now colored and shows executed code on a green background and unexecuted code on a red background.
  • The execution list now contains one selected item called “Start and Quit”, which is the only test execution report we have so far created.

3.1.2 Interactive testing

CoverageBrowser correctly reveals, for example, that the TextEdit::fileSave() function is not executed. We will try to validate this function interactively, guided by the code coverage analysis.

In the source window, all unexecuted source code lines are shown with a red background (see Listing 3.4).

bool TextEdit::fileSave()
{
    if (fileName.isEmpty())
    {
        QMessageBox::warning(this, tr("No filename specified"),
            tr("Save first your document using 'Save As...' from the menu"),
            QMessageBox::Ok);
        return false;
    }

    QTextDocumentWriter writer(fileName);
    bool success = writer.write(textEdit->document());
    if (success)
        textEdit->document()->setModified(false);
    return success;
}
Figure 3.4: CoverageBrowser source view of the function TextEdit::fileSave()

To test this function we must perform the following steps:

  1. Start the TextEdit application.
  2. Click on the ‘Save’ button: TextEdit should display the error message "Save first your document using ‘Save As…’ from the menu".
  3. Quit the application.

After these steps have been done and the coverage report imported, CoverageBrowser shows that the return false; line just after the call to QMessageBox::warning() has been executed (as indicated by the green background). However, the line if (fileName.isEmpty()) is shown as partially executed, indicated by an orange background (see Listing 3.5).

bool TextEdit::fileSave()
{
    if (fileName.isEmpty())
    {
        QMessageBox::warning(this, tr("No filename specified"),
            tr("Save first your document using 'Save As...' from the menu"),
            QMessageBox::Ok);
        return false;
    }

    QTextDocumentWriter writer(fileName);
    bool success = writer.write(textEdit->document());
    if (success)
        textEdit->document()->setModified(false);
    return success;
}
Figure 3.5: CoverageBrowser source view after clicking TextEdit’s ‘Save’ button.

The explanation window (see Listing 3.6) tells us that the value of the expression fileName.isEmpty() was true during one execution but was never false (hence, it is considered only partially executed). In order to fully test this expression we must click on the ‘Save As…’ button, then choose a filename, and finally click on the ‘Save’ button.

partially executed: fileName.isEmpty()

TRUE:   yes
        Execution Count: 1
        Executed by:
        - Save Clicked
FALSE:  no
        Execution Count: 0
Figure 3.6: CoverageBrowser explanation window after clicking on the ’Save’ button of TextEdit.

After rerunning the application and doing a “Save as”, the new execution report now has only one source code line that is partially untested (see Listing 3.7). In this case, CoverageBrowser reveals that the Boolean variable success was never false, which means that saving the document has never failed.

We could force a write failure, and this would ensure that we had 100% code coverage for this function. But we will use a different test strategy to get complete code coverage: we will use a unit test and import the execution result into the TextEdit instrumentation database.

bool TextEdit::fileSave()
{
    if (fileName.isEmpty())
    {
        QMessageBox::warning(this, tr("No filename specified"),
            tr("Save first your document using 'Save As...' from the menu"),
            QMessageBox::Ok);
        return false;
    }

    QTextDocumentWriter writer(fileName);
    bool success = writer.write(textEdit->document());
    if (success)
        textEdit->document()->setModified(false);
    return success;
}
Figure 3.7: CoverageBrowser source view after clicking TextEdit’s ‘Save As…’ button and then the ‘Save’ button.

3.1.3 Writing unit tests

The unit test infrastructure can be found in the directory textedit_v1_tests/. It contains just one test, which sets an illegal filename and then tries to execute the TextEdit’s fileSave() function. To do this we use the QTestLib unit testing library that is supplied with Qt. The test is contained in the file textedit_v1_tests/tst_textedit.cpp (see Listing 3.8).

#include"tst_textedit.h"

voidTestTextEdit::tst_saveFile(){
TextEdittextEdit;
textEdit.fileName="/";
QVERIFY(!textEdit.fileSave());
}

QTEST_MAIN(TestTextEdit);
Figure 3.8: Unit test for the TextEdit application, tst_textedit.cpp

To import this test’s instrumentation result into TextEdit’s instrumentation database, the following infrastructure is necessary:

  1. A qmake project file with code coverage configured identically to that of the TextEdit project.
  2. A post-build rule which automatically executes the test and collects the coverage information.
  3. A unit test listener which saves the code coverage data (and the test status—passed or failed) for every executed unit into the unit test’s own instrumentation database.
  4. A way to import the code coverage report into the TextEdit’s instrumentation database.

The unit test is compiled together with textedit_v1/textedit.cpp. To make its results importable into the TextEdit instrumentation database, it is necessary that both executables (TextEdit and the unit test) are instrumented in exactly the same way. So, for this example, we must use the instrumentation options --cs-count, --cs-full-instrumentation and --cs-qt4.

Unfortunately, these command line options alone are insufficient, because Squish Coco’s default behavior is only to instrument header and source files in the current directory. But here we need to instrument the TextEdit application’s sources in addition to the unit test, so we must use another command line option to specify an additional path for files to instrument: --cs-include-path.

As before, we don’t want to have to remember these command line arguments every time, so we set them in the qmake project file, textedit_v1_tests.pro (see Figure 3.9). With these lines in the unit test’s project file, the result will be that the qmake-generated Makefile creates the tst_textedit.exe executable which, when run, produces the tst_textedit.exe.csexe execution report. We can then use the CoverageBrowser to import this report into the file tst_textedit.exe.csmes.

HEADERS = ../textedit_v1/textedit.h tst_textedit.h
SOURCES = ../textedit_v1/textedit.cpp tst_textedit.cpp

CodeCoverage {
    COVERAGE_OPTIONS = --cs-count --cs-full-instrumentation
    COVERAGE_OPTIONS += --cs-qt4
    COVERAGE_OPTIONS += --cs-output=tst_textedit.exe
    COVERAGE_OPTIONS += --cs-include-path=../textedit_v1
    COVERAGE_OPTIONS += --cs-exclude-file-regex=qrc_.*

    QMAKE_CXXFLAGS += $$COVERAGE_OPTIONS
    QMAKE_CFLAGS += $$COVERAGE_OPTIONS
    QMAKE_LFLAGS += $$COVERAGE_OPTIONS

    QMAKE_CC = cs$$QMAKE_CC
    QMAKE_LINK = cs$$QMAKE_LINK
    QMAKE_CXX = cs$$QMAKE_CXX
}
Figure 3.9: An extract from the unit test qmake project file

We can also execute the unit test automatically and import the execution report with a post-build rule. Squish Coco provides an extra command line tool, cmcsexeimport (see Chapter 22), which imports an execution report into an instrumentation database. The post-build rule first deletes any previous execution report, then executes the test itself, and finally imports the results into the unit test’s instrumentation database, tst_textedit.exe.csmes (see Listing 3.10).

CodeCoverage {
    win32:MAINDIR = $$replace(PWD, "/", "\\")
    !win32:MAINDIR = $$PWD

    unix {
        QMAKE_POST_LINK = rm $$MAINDIR/tst_textedit_v1.exe.csexe;
        QMAKE_POST_LINK += $$MAINDIR/tst_textedit_v1.exe;
        QMAKE_POST_LINK += cmcsexeimport -m $$MAINDIR/tst_textedit_v1.exe.csmes \
            -e $$MAINDIR/tst_textedit_v1.exe.csexe -t UnitTest
    }
    win32 {
        QMAKE_POST_LINK = del /F $$MAINDIR\\tst_textedit_v1.csexe &
        QMAKE_POST_LINK += $$MAINDIR\\tst_textedit_v1.exe &
        QMAKE_POST_LINK += cmcsexeimport -m $$MAINDIR\\tst_textedit_v1.exe.csmes \
            -e $$MAINDIR\\tst_textedit_v1.csexe -t UnitTest
    }
}
Figure 3.10: Post-build rules for the import of the execution report into the unit test’s instrumentation database

By default, the coverage data is imported without any information about the executed tests: instead, an execution report called ‘UnitTest’ is created, which does not describe which test was executed or whether its execution was successful. To provide the missing information, we must use the CoverageScanner API and generate an execution report for each test that is executed. An example that shows how this is done is available in Chapter 32.3.2. In the example, the API is used in the following way:

  • Two Squish Coco source files are added to the qmake project file (see Listing 3.11).
  • The unit test class, TestTextEdit, inherits from TestCoverageObject instead of directly from QObject (see Listing 3.12).
HEADERS += testcoverageobject.h
SOURCES += testcoverageobject.cpp
Figure 3.11: Including the CoverageScanner listener in textedit_v1_tests.pro
#include"testcoverageobject.h"
#include"../textedit_v1/textedit.h"
#include<QtTest/QtTest>

classTestTextEdit:publicTestCoverageObject
{
Q_OBJECT
privateslots:
voidtst_saveFile();
};
Figure 3.12: The TextEdit unit test header file, tst_textedit.h

The testcoverageobject.cpp file’s source code (see Listing 3.13) is simple to understand. The file adds a single cleanup() function to QTestLib, which is executed after each unit test item. The code between #ifdef __COVERAGESCANNER__ and #endif is only compiled when CoverageScanner is invoked1. This extra code creates a test name by combining the test class’s object name and the test function’s name. In the example, an execution item called unittest/TestTextEdit/tst_saveFile is generated. The slash is used to support the organization of tests in a tree view. The current test status (“PASSED” or “FAILED”) is also recorded and added to the execution report by the __coveragescanner_save() function.

...
void TestCoverageObject::cleanup()
{
    cleanupTest();
#ifdef __COVERAGESCANNER__
    QString test_name = "unittest/";
    test_name += metaObject()->className();
    test_name += "/";
    test_name += QTest::currentTestFunction();
    __coveragescanner_testname(test_name.toLatin1());
    if (QTest::currentTestFailed())
        __coveragescanner_teststate("FAILED");
    else
        __coveragescanner_teststate("PASSED");
    __coveragescanner_save();
#endif
}
Figure 3.13: An extract from the TestCoverageObject’s source code

At this stage we could start CoverageBrowser, load the TextEdit instrumentation database, and import the unit test’s instrumentation database by clicking "File->Import Unit Tests…". An even more convenient alternative is to use the cmmerge tool to automate this step. The cmmerge program is designed to import one instrumentation database’s execution report into another instrumentation database. This means that we can extend our post-build rules to use the cmmerge program to import the coverage information automatically from the unit test into the TextEdit program’s instrumentation database (see Listing 3.14).

CodeCoverage {
    # Merge coverage database into TextEdit database
    unix {
        QMAKE_POST_LINK += ;
        QMAKE_POST_LINK += cmmerge -o $$MAINDIR/../textedit_v1/textedit.tmp \
            -i $$MAINDIR/../textedit_v1/textedit.exe.csmes \
            $$MAINDIR/./tst_textedit_v1.exe.csmes &&
        QMAKE_POST_LINK += rm $$MAINDIR/../textedit_v1/textedit.exe.csmes &&
        QMAKE_POST_LINK += mv $$MAINDIR/../textedit_v1/textedit.tmp \
            $$MAINDIR/../textedit_v1/textedit.exe.csmes
    }
    win32 {
        QMAKE_POST_LINK += &
        QMAKE_POST_LINK += echo Merging unit test result into the main application &
        QMAKE_POST_LINK += cmmerge -o $$MAINDIR\\..\\textedit_v1\\textedit_unit.exe.csmes \
            -i $$MAINDIR\\..\\textedit_v1\\textedit.exe.csmes \
            $$MAINDIR\\tst_textedit_v1.exe.csmes &
        QMAKE_POST_LINK += COPY /Y $$MAINDIR\\..\\textedit_v1\\textedit_unit.exe.csmes \
            $$MAINDIR\\..\\textedit_v1\\textedit.exe.csmes &
        QMAKE_POST_LINK += DEL /F $$MAINDIR\\..\\textedit_v1\\textedit_unit.exe.csmes
    }
}
Figure 3.14: Post build rules: merging instrumentation results into the TextEdit instrumentation database

With all these changes to the .pro file in place, we can once again build and run the unit test. Now the CoverageBrowser shows the fileSave() function to be 100% covered, with the execution list containing our three original manual tests and the single unit test (see Figure 3.15).

pictures/manual.tmp003.png
Figure 3.15: The execution list after all the tests have been executed

3.2 Working with code coverage data

The most common uses of code coverage are for developers to find untested code and for managers to produce test status reports (e.g., as diagrams).

In addition to fully supporting the common use cases, Squish Coco also provides additional features which make it possible to go beyond these fundamentals and extend what can be achieved with code coverage. This will be discussed in the current subsection.

3.2.1 Post mortem analysis

Recording each test’s coverage data makes it possible to compare their data to answer the question, “What does this test cover that the others don’t?” This is particularly useful if just one test fails, since it can help us to identify which part of the code is involved.

To see how this works in practice, let us return to the TextEdit example. If we click ‘Save’, we will get an error message that no filename is defined. This is not very convenient for users—we should have designed TextEdit to handle this particular case by opening the ‘Save As…’ dialog rather than by producing an error.

To identify where in the code this problem arises, we simply compare the ‘Save Clicked’ execution with all other executions that involve the ‘Save’ button. To do this we must first switch to the “Execution Comparison Analysis” mode (Menu: "Tools->Execution Comparison Analysis"). Select the checkboxes in the “Reference” column for the “tst_saveFile” and “SaveAs clicked before Save clicked” tests. This will make the execution comparison symbol appear in front of the affected names (see Figure 3.16). In the “Executions” column, click on the “Save clicked” checkbox.

pictures/manual.tmp004.png
Figure 3.16: The execution list being used to compare different executions

In this mode, the coverage analysis is based only on source code lines which are not executed by “tst_saveFile” and “SaveAs clicked before Save clicked”. This is why the overall coverage decreases to 1.29%: It means that “Save clicked” executes 1.29% more code than the selected tests.

Using cmreport it is possible to generate an HTML report which displays the same information:

cmreport --csmes=textedit_v1/textedit.exe.csmes \
         --html=textedit.html \
         --section=execution \
         --select-reference=".*tst_saveFile" \
         --select-reference="SaveAs clicked before Save clicked"

If we now look at the source code itself, we will see that only two lines of the TextEdit::fileSave() function (see Listing3.17) are not grayed: the lines which pop up the error message. These are the lines that must be modified to change the “Save” button’s behavior.

bool TextEdit::fileSave()
{
    if (fileName.isEmpty())
    {
        QMessageBox::warning(this, tr("No filename specified"),
            tr("Save first your document using 'Save As...' from the menu"),
            QMessageBox::Ok);
        return false;
    }

    QTextDocumentWriter writer(fileName);
    bool success = writer.write(textEdit->document());
    if (success)
        textEdit->document()->setModified(false);
    return success;
}
Figure 3.17: CoverageBrowser source view of the comparison of the “Save clicked” execution.

In this case, changing the fileSave() function is easy—we simply replace the QMessageBox::warning() call with a call to the fileSaveAs() method (see Listing 3.18).

bool TextEdit::fileSave()
{
    if (fileName.isEmpty())
        return fileSaveAs();

    QTextDocumentWriter writer(fileName);
    bool success = writer.write(textEdit->document());
    if (success)
        textEdit->document()->setModified(false);
    return success;
}
Figure 3.18: The TextEdit’s improved fileSave() function.

3.2.2 Evaluating the impact of a hot fix

Before committing a change or starting to test a hot fix, it is possible to estimate the impact of the code modification. CoverageBrowser is able to perform an analysis on the difference between two source sets and can list the tests that will be affected (and those which won’t).

Start CoverageBrowser and load the modified TextEdit example’s instrumentation database. Now click on "Tools->Compare with…" and select the original version of the TextEdit instrumentation database. CoverageBrowser will now display the source code like a text comparison application does (e.g., diff).

Click on "Tools->Analysis of Modified Methods" to exclude all unmodified functions from the coverage analysis. In the TextEdit case, doing this will mean that only one function, TextEdit::fileSave(), will be treated as being instrumented since that is the only method we have changed (see Figure3.19). This also affects the statistic calculations since execution coverage statistics will now be limited to just this function. The test executions whose coverage statistic is not zero are the ones that are affected by the code modifications we have made.

In our case we have:

  • “Save clicked”,
  • “SaveAs clicked before Save clicked” and
  • “tst_saveFile” (our unit test).

The “Start and Exit” case has a coverage of 0% and so does not execute our modified code. For this reason it is no longer visible in the execution list. All entries in the ‘Execution’ column are struck through to inform us that these tests were not executed with the newest version and are only present in the reference database.

In other words, only the two manual tests and the unit test listed above must be re-executed to ensure that no regressions have been introduced by our code changes.

pictures/manual.tmp005.png
Figure 3.19: List of tests affected by a code modification

3.2.3 Black-box testing/distributed testing

Up to now we have done white-box testing, i.e. testing where we have access to the source code and which makes use of our knowledge of the code. It is also possible to use Squish Coco for black-box testing. In other words, we can still do code coverage analysis without having access to or even knowledge of the source code. If we use this approach, the generated instrumentation database will, of course, contain no source code.

To use black-box testing we must create a suitable instrumentation database by clicking "File->Generate Black-Box Configuration…". This database, along with the TextEdit executable, can be given to the test team which can then use them with a simplified version of CoverageBrowser (see Figure 3.20). This version of CoverageBrowser only supports the importing and managing of execution reports since it does not have access to the application’s source code.

pictures/manual.tmp006.png
Figure 3.20: Black-box testing results as shown by CoverageBrowser

Once all the tests are finished, the black-box database can be merged into the original TextEdit instrumentation database using CoverageBrowser’s merge facilities (Menu: "File->Merge with…").

3.2.4 Verifying if a bug fix is correctly tested

Often, when a small bug fix is made, the effects are very localized and leave most of the source code unchanged. In view of this, it is often unnecessary to retest the entire application with the whole test suite.

Squish Coco makes it possible to avoid unnecessary testing. We can tell it to restrict itself to the source code that has changed between the original and fixed version of the application (see Chapter 3.2.2). This allows us to focus purely on the analysis of the fix. To achieve this, simply load the fixed application’s freshly generated instrumentation database (e.g., for the modified TextEdit application), and compare it with the earlier database for the unfixed version, using Squish Coco’s facility for analyzing modified functions.

pictures/manual.tmp007.png
Figure 3.21: Coverage of the patched function

We have done just such a comparison and the results are shown in Figure 3.21: The two tests, “Save clicked” and “Start and Exit”, cover 85% of the TextEdit::fileSave() function, the only method that was modified for our fix. From this we know exactly what additional testing is necessary to achieve 100% code coverage for our tests of the fixed version of the application. CoverageBrowser continues to display the list of missing tests (which were only executed with the first version of TextEdit) in strikeout style. This gives a hint of the testing effort that remains.

Using cmreport it is possible to generate an HTML report which displays the same information:

cmreport --csmes=textedit_v2/textedit.exe.csmes \
         --csmes-reference=textedit_v1/textedit.exe.csmes \
         --html=textedit.html \
         --section=execution

3.3 Conclusion

Squish Coco provides code coverage analysis which can be applied to all the usual testing techniques: unit, manual, and black-box testing. Squish Coco can easily be told to ignore the generated code produced by the Qt library’s tools (moc, qrc, and uic), so that only the code written by developers is instrumented. Test results can be collected into a database and can be used to evaluate how much code coverage our tests achieve and to show which statements are not currently tested. With this information we can target our testing efforts towards 100% test code coverage. In addition, Squish Coco makes it possible to see what effect a code modification would have in terms of test code coverage without having to test the entire application.

Overall, Squish Coco can help us target our tests to ensure that our applications have as much test coverage as possible, while avoiding or minimizing test coverage duplication. Furthermore, Squish Coco can help us see what effects changes to our code have on test coverage, so that we can adapt our test suites accordingly.