Squish Coco

Code Coverage Measurement for Tcl, QML, C# and C/C++

Part VII
CoverageBrowser Reference

Chapter 17  Introduction

CoverageBrowser is a graphical user interface program which enables the user to analyze the test coverage.

CoverageBrowser is typically used in the following way:

  1. Load an instrumentation database (a .csmes file) that was generated by CoverageScanner.
  2. Load a corresponding execution report (a .csexe file). There may be several reports to choose from: CoverageBrowser displays them in a tree view where they can be selected or deselected individually for coverage analysis.
  3. Search for untested code segments.
  4. Mark dead code or code which cannot be tested as “manually validated”.
  5. Add comments to the instrumented code.

CoverageBrowser saves all these data (execution reports, comments, etc.) to the instrumentation database.

17.1  Command Line Arguments


coveragebrowser -m ⟨csmes_file⟩ ...
coveragebrowser ⟨csmes_file⟩ ...


⟨csmes_file⟩ | -m ⟨csmes_file⟩ | --csmes=⟨csmes_file⟩
Load an instrumentation database from the ⟨csmes_file⟩.

This option can be used at most once.

-e csexe_file | --csexe=csexe_file
After CoverageBrowser has started, an import dialog is opened to import the ⟨csexe_file⟩.

This option can be used only once, and only if a ⟨csmes_file⟩ is given.

Read command line options from the file at ⟨path⟩ and insert them at the position of this option. The option file is a text file with one option per line. Leading and trailing blanks and empty lines are ignored.

If no option is given, CoverageBrowser tries to reopen the ⟨csmes_file⟩ that it had shown in its previous run.
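For example, to open an instrumentation database and immediately import an execution report (the file names below are hypothetical):

```shell
# Open an instrumentation database and import an execution report.
# The file names are hypothetical.
coveragebrowser -m myapp.csmes -e myapp.csexe

# Equivalent long-option form:
coveragebrowser --csmes=myapp.csmes --csexe=myapp.csexe
```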

Chapter 18  Black box and white box testing

CoverageBrowser can be used both for white box testing and black box testing: If no source code information is available in the instrumentation database (i.e., in the .csmes file), CoverageBrowser will switch to black box testing mode. In this mode, CoverageBrowser has a simpler user interface that does not provide the functionality that is possible only with access to the source code. Nevertheless, even with this reduced functionality, it is still possible to import and manage executions.

A black box instrumentation database can be generated by clicking on "File->Generate Black-Box Configuration…". It is possible to merge such a database into a white box configuration at a later stage.

Figure 18.1: CoverageBrowser’s simplified user interface for black box tests

Chapter 19  The windows of CoverageBrowser

19.1  The Executions Window

This window is present in both black box and white box testing mode. It shows all application executions that have been done, including details of their code coverage.

19.1.1  Principles

Executions of the instrumented application are displayed in a tree view in the "Executions" window. CoverageBrowser uses a slash ‘/’ as a separator for grouping measurements together.

For example, the tests shown in Figure 19.1 have the following names:

Figure 19.1: The Executions View

The checkbox next to each item can be used to select executions. The "Sources", "Functions" and "Source Viewer" windows only display the code coverage status of the selected executions.

The code coverage percentage of the tests is visualized by gray horizontal bars in the "Coverage" column.

The input field pictures/filter.png allows filtering the output with regular expressions (see Chapter 20.1). Click the pictures/select.png button to select all the visible executions (i.e., all those that have not been filtered out). Or click the pictures/deselect.png button to unselect all the executions.

For a more finely controlled filter, click the "..." button: This will pop up a dialog by which it is possible to set filtering depending on the execution state and the comments.

Note that the text which is filtered is a test execution’s full name—for example, SubTest/test1.

Click the pictures/testPerformance.png button to switch to the execution comparison analysis mode (see Chapter 19.1.3).

The user can set the state of an executed test by clicking into the "State" field of the test. The new state can be any of the following:

"Unknown"
Default state.
"Passed"
This state is used to mark the test as passed. The background colour of the "State" field is then green.
"Failed"
This state is used to mark the test as failed. The background colour of the "State" field is then red.
"Need Manual Check"
This state is used to indicate that the test must be done manually. The background colour of the "State" field is then orange.

The name of the test item and its state can also be defined by an external test suite (see Chapter 34).

It is possible to rename, delete or merge executions, or add comments to them, through CoverageBrowser's context menus and dock windows. One can use regular expressions to identify the executions to which these modifications are applied. (The regular expression syntax is described in Chapter 20.1.) Before regular expression-driven actions are carried out, CoverageBrowser shows a preview of what effects the change would have.

To delete executions, right-click into the "Executions" window and select "Delete multiple executions..." from the context menu. A window appears into which the name of the executions can be entered. Here are some examples:

To rename executions, select "Rename multiple executions..." from the context menu. A window appears (see Figure 19.2) into which expressions for the old and new names of the executions can be entered. Here are some examples:

Figure 19.2: Renaming with regular expressions

19.1.2  Loading an Execution Report

An execution report is produced when an instrumented application finishes its execution. It contains the list of all executed code segments for each application that was run. The execution report is never overwritten; execution data are always appended. Its file name is defined by the initialization function __coveragescanner_install() of the CoverageScanner library (see Chapter 10.1.1) and always ends with ".csexe".

To load an execution report, click on "File->Load Execution Report…" or the icon pictures/csexeopen.png on the toolbar. A dialog like that in Figure 19.3 appears.

Figure 19.3: Execution report loading dialog
Import from a file

The report can be imported directly or via a script. To load it directly, select "File" in the top left menu. Enter the path of the .csexe file into the free form input box, or use the browse button.

The "Name" field specifies the name of the imported instrumentation if it is not already specified in the execution report (see Chapter 10.1.2). It is also possible to set the status (“Passed”, “Failed” or “Requires Manual Checking”) of all imported executions for which it was not set at runtime. By default the status is “Unknown”.

Invalid executions are not imported. If more than one instrumentation is imported with the same name, an index number is appended to the name to make it unique. An execution with the name “Main Test” may then become “Main Test (2)”.

To see which executions will be imported, press the "Next" button. It shows a list of all executions in the file to be loaded, together with an indication of whether each will be imported and why (see Figure 19.4).

Figure 19.4: Import overview

At this point, no import has yet taken place. One still needs to press the "Import" (or "Import & Delete") button to load the .csexe data. Alternatively, one can press the "Back" button to change the settings, or "Cancel" to abort the import.

Import by a script

If the execution report is not accessible through the file system, a script can be used to import it. To do this, select "Script:" in the menu at the top left of the dialog.

Two input fields become visible:

"Fetch Command"
Enter here the script that imports the execution report, together with its parameters.

The script must print the content of the execution report to the standard output (stdout). If it writes to the standard error stream (stderr), the output occurs in a window and can be used for debugging. On success, the script must return the value 0.

"Delete Command"
Enter here the script that deletes the execution report, together with its parameters.

This field needs only to be filled in when the "Delete execution report after loading" mode is selected.

In both input fields, quotation marks (single and double quotes) can be used to group arguments, and backslashes can be used as escape characters.

Pressing the "Next" button executes the fetch command. If it fails, an error report is shown. If it is successful, a preliminary list of imported executions is shown as before. To finish the import, it is again necessary to press the "Import" (or "Import & Delete") button.
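A minimal fetch command could be a shell script along these lines (a sketch only; the report path and file names are assumptions for illustration):

```shell
#!/bin/sh
# Sketch of a fetch command for the "Fetch Command" field. As described
# above, the command must print the raw .csexe content to stdout and
# return 0 on success; stderr output appears in a debug window.

fetch_report() {
    report="$1"
    if [ -r "$report" ]; then
        cat "$report"                    # .csexe content -> stdout
        return 0
    else
        echo "cannot read $report" >&2   # diagnostics -> stderr
        return 1
    fi
}

# Demonstration with a dummy report file:
printf 'dummy .csexe content\n' > /tmp/demo.csexe
fetch_report /tmp/demo.csexe
```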

Additional options grouped in the advanced parameters section can also be specified:

19.1.3  The Execution Comparison Analysis Mode

The Execution Comparison Analysis mode is activated by clicking the button pictures/testPerformance.png . In this mode, one or more executions must be selected as references and others as executions to analyze. CoverageBrowser then shows as covered only those lines that were executed in the analyzed executions but not in the reference executions. Similarly, the coverage statistics displayed in the "Sources" list contain only the percentage of instrumented statements that were executed by the analyzed executions but not by the reference executions.

With Execution Comparison Analysis mode it is therefore possible to find changes in the coverage between different runs of a program.

To select the reference executions, use the checkboxes in the "Reference" column at the right of the "Executions" window. They must be selected first. Then select the executions to analyze as before in the "Execution" column.

If an execution to analyze is also present in the list of reference executions, it is implicitly removed from that list. So if execution A is compared to executions A and B, CoverageBrowser actually compares execution A only with B, since comparing A with itself would provide no useful information.
Figure 19.5: Execution Comparison Analysis Mode

19.2  The Source Browser Window

This feature is not available for black box testing.

The "Sources" window can be displayed by selecting the menu entry "View->Sources".

Figure 19.6: Source Browser Window

In each line, the "Source" column contains the name of a source file. When a line is clicked, the "Source Window" is displayed.

For C and C++, there are sub-entries for included header files which have been instrumented.

If a file has been compiled more than once but with different preprocessor options, the file gets more than one entry. To distinguish them, the entries get numbers attached. In the example, the file context.h could then become the two entries “context.h #1” and “context.h #2”. When coverage percentages are computed, each variant counts as a separate file.

The "Coverage" column displays rudimentary code coverage statistics for each source code file. There is an underlying horizontal bar in each field whose length represents the fraction of the code that is covered. The color of the main part of the bar is selected according to the code coverage statistics for each file and the value of the thresholds (see Chapter 22.3).

If parts of the code have been manually validated, then there is a second, gray, horizontal bar at the left of the "Coverage" column; its width represents the number of tests that have been only manually validated. The full length of the column still represents the fraction of all tests that have been validated. When the window however also contains a "Manual Validations" column, the gray horizontal bar is only displayed there.

The checkbox in front of each source file lets you exclude or include the source file to the statistics computation. If excluded, the file is treated as if it and its functions had not been instrumented.

The input field pictures/filter.png allows filtering the content of the window with regular expressions (see Chapter 20.1). The filter expression refers to the full path of the source file. (Example: c:\directory\file.cpp)

pictures/previousModule.png   Ctrl+Shift+F   Previous source file
pictures/nextModule.png       Ctrl+F         Next source file
Table 19.1: Source Browser - Shortcuts

19.3  The Function Browser Window

This feature is not available for black box testing.

The "Functions" window can be opened by selecting the menu entry "View->Functions".

Figure 19.7: Function Browser Window

It displays the code coverage statistics for all functions, classes and namespaces. A click on an item in the window shows the code for the corresponding object in the "Source Viewer" window and highlights it.

The "Method" column contains the name of the function or class method whose coverage is shown. If hierarchical display is enabled, it also displays classes and namespaces. If a function is defined in a file that is compiled more than once with different preprocessor options, it gets several entries. To distinguish them, the name of the file and a number are attached. A function get_context() in the file context.h could then have the two entries “get_context() [context.h #1]” and “get_context() [context.h #2]”. The names and the numbers of the files are the same as those in the "Sources" window (see Chapter 19.2).

The "Coverage" column displays rudimentary code coverage statistics for each function. There is an underlying horizontal bar in each field whose length represents the fraction of the code that is covered. The color of the main part of the bar is selected according to the code coverage statistics for each function and the value of the thresholds (see Chapter 22.3).

If parts of the code have been manually validated, then there is a second, gray, horizontal bar at the left of the "Coverage" column; its width represents the number of tests that have been only manually validated. The full length of the column still represents the fraction of all tests that have been validated. When the window however also contains a "Manual Validations" column, the gray horizontal bar is only displayed there.

The input field pictures/filter.png lets you filter the output with regular expressions (see Chapter 20.1). The filter expression refers to the full names of the items, including the class name and the namespace. (Example: MyNamespace::MyClass::MyProc)

19.4  The Source Viewer Window

This feature is not available for black box testing.

19.4.1  Source Display

The "Source Viewer" window can be displayed by clicking on "View->New Source Window".

Figure 19.8: Source Window

The "Source Viewer" window displays the source file or, for C and C++, its preprocessed view. Clicking on pictures/preprocessorview.png toggles between the two views.

The source code is colored according to the coverage state of its instrumentations. The colors used are described in Section 19.4.2. Selecting an area with the mouse highlights the corresponding instrumentations and displays a detailed description of them in the "Explanation" window (see Chapter 19.5). It is possible to navigate between instrumentations using the navigation buttons pictures/nextInstrumentation.png and pictures/previousInstrumentation.png . The yellow, blue, red and green navigation buttons jump to the next or previous comment, manually validated instrumentation, non-executed code part or executed code part, respectively. Clicking on the source code selects the nearest instrumentation.

If a comment is entered for an instrumentation, the icon pictures/comments.png is displayed in the margin.

On the right side, CoverageBrowser displays the test coverage count or the code coverage count for each line. If a source code line contains more than one instrumentation, CoverageBrowser displays the range of their counts.

Mouse Wheel    Description
Wheel          Scroll up/down
Ctrl+Wheel     Zoom in/out
Shift+Wheel    Next/previous instrumentation
Table 19.2: Source Display - Mouse Wheel
pictures/previousInstrumentationComment.png             Ctrl+Shift+B   Previous comment
pictures/nextInstrumentationComment.png                 Ctrl+B         Next comment
pictures/previousInstrumentationUnTested.png            Ctrl+Shift+U   Previous unexecuted code
pictures/nextInstrumentationUnTested.png                Ctrl+U         Next unexecuted code
pictures/previousInstrumentationTested.png              Ctrl+Shift+T   Previous executed code
pictures/nextInstrumentationTested.png                  Ctrl+T         Next executed code
pictures/previousInstrumentationManuallyValidated.png   Ctrl+Shift+V   Previous manually validated instrumentation
pictures/nextInstrumentationManuallyValidated.png       Ctrl+V         Next manually validated instrumentation
pictures/previousInstrumentation.png                    Ctrl+Shift+I   Previous instrumentation
pictures/nextInstrumentation.png                        Ctrl+I         Next instrumentation
pictures/previousModule.png                             Ctrl+Shift+F   Previous source file
pictures/nextModule.png                                 Ctrl+F         Next source file
pictures/newview.png                                    Ctrl+J         Open a new source window
pictures/preprocessorview.png                           Ctrl+Shift+J   Switch between the preprocessor view and the original source
pictures/comments.png                                   Ctrl+K         Add/Edit Comments
pictures/no_comments.png                                Ctrl+Shift+K   Remove Comments
pictures/validation.png                                 Ctrl+D         Mark as Validated
pictures/no_validation.png                              Ctrl+Shift+D   Clear Validation Flag
pictures/commentundo.png                                Ctrl+Z         Undo
pictures/commentredo.png                                Ctrl+Shift+Z   Redo
Table 19.3: Source Display - Shortcuts

19.4.2  Color Convention

Instrumentations are displayed in a source window using different colors:

Green - "Executed"
An instrumentation is displayed in green when the code has been executed.
Orange - "Partially Executed"
An instrumentation is marked as "Partially Executed" when it is not completely executed. This occurs, for example, when a Boolean expression was only true or only false. In the case of a source code line which contains more than one instrumentation, the line is marked as "Partially Executed" when one of its instrumentations has not been "Executed". Detailed information is displayed in the "Explanation" window (see Chapter 19.5).
Red - "Never Executed" or "Execution count too low"
An instrumentation is displayed in red when the code is never executed or when the execution count is lower than the execution count requested.
Magenta - "Dead-Code"
An instrumentation is displayed in magenta when the code cannot be executed.
Blue - "Manually Set To Be Executed"
The user has the possibility to mark an instrumentation as "Manually Validated". This is usually done to exclude dead code, or code which cannot be tested, from the code coverage statistics. This state is only relevant for instrumentations in a "Never Executed" or "Partially Executed" state.
Gray - "Unknown" or "Hidden"
Gray is used when no information about instrumentation is available. This occurs when no executions are selected or when comparing executions of tests (see Chapter 19.1.3).


Editing Comments

This feature is not available for black box testing.

It is possible to add a comment by selecting an instrumentation and clicking on the context menu entry "Add/Set Comment", the main menu entry "Instrumentation->Add/Set Comment" or the icon pictures/comments.png on the toolbar.

The "Comment" window (see Figure 19.9) appears and allows a comment to be edited. The most recently entered comments can be retrieved from the "Last Comments" selection field. Basic text formatting is possible using the integrated toolbar buttons (see Table 19.4).

Figure 19.9: Comment Editing
If a minimal length for a comment is set, the comment can only be entered if this length is reached (see Chapter 22.2).

The comment is printed in the explanation in a yellow box and the icon (pictures/comments.png ) is displayed in the source window near the line number.

pictures/commentstrikethrough.png   Ctrl+S         Strikeout
pictures/commentbold.png            Ctrl+B         Bold
pictures/commentitalic.png          Ctrl+I         Italic
pictures/commentunderline.png       Ctrl+U         Underline
pictures/commentjustify.png         Ctrl+J         Justify
pictures/commentright.png           Ctrl+R         Align Right
pictures/commentleft.png            Ctrl+L         Align Left
pictures/commentcenter.png          Ctrl+M         Center
pictures/commentundo.png            Ctrl+Z         Undo
pictures/commentredo.png            Ctrl+Shift+Z   Redo
Table 19.4: Comments - Shortcuts

Removing Comments

It is possible to remove a comment by selecting an instrumentation and clicking on the context menu entry "Clear Comments", the main menu entry "Instrumentation->Clear Comment" or the icon pictures/no_comments.png on the toolbar.

19.5  The Explanation Window

This feature is not available for black box testing.

The "Explanation" window (see Figure 19.10) is a docking window which is automatically updated with a detailed description of the selected instrumentations of the source window. For each instrumentation, the following information is displayed:

  1. A short description of the instrumentation state (see Chapter 19.4.2).
  2. The preprocessed source code which is concerned by the instrumentation.
  3. For Boolean expressions, the truth-table which shows executed and unexecuted states.
  4. The list of executions which are executing the portion of code.
  5. User comments.
Figure 19.10: Explanation Window

CoverageBrowser displays the truth-table in the case of a Boolean expression which is partially executed. The truth-table indicates which value the expression has or has not reached during execution.

Example: the truth-table in Table 19.5 indicates that the expression was false but never true.

Table 19.5: Truth-Table Example

19.6  The Statistics Window

The "Statistics" window (see Figure 19.11) is a docking window which is automatically updated with the code coverage statistics for the whole project.

Figure 19.11: Statistics Window

If parts of the code are manually validated, their percentage is also displayed in the coverage statistics. The bar chart then has two regions: the percentage of the manually validated code at the left, followed by the percentage of the code that is covered by the automatic tests. The numbers in the bar chart refer to all validated code, i.e. the manually validated code together with the code covered by tests.

By clicking the "..." button, the two kinds of validation are split into two separate bars.

Figure 19.12: Statistics Window with Manual Validation, Split

Chapter 20  Working with CoverageBrowser

20.1  Filtering with wildcards or regular expressions

CoverageBrowser provides a generic mechanism for filtering rows using wildcard or regular expressions. Wildcard expressions are active by default; regular expressions are selected when the expression starts with an equals sign ('='). Clicking on the filter icon converts the expression between wildcard and regular expression form, as far as this is possible.

pictures/filterregexp.png     The filter uses regular expression syntax.
pictures/filterwildcard.png   The filter uses wildcard syntax.
pictures/filterinvalid.png    Syntax error. More information is displayed in the status bar.
Table 20.1: Filter States

20.1.1  Wildcard Expressions

*       any sequence of characters (0 or more)
?       any single character
[...]   a set of characters

Example: foo*bar matches any test whose name contains the string foo followed by bar.
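The "contains" behaviour of the wildcard filter can be approximated with Python's fnmatch module (an illustration only; fnmatch matches the whole string, so enclosing * wildcards are added, and the test names are hypothetical):

```python
import fnmatch

# Hypothetical execution names, as shown in the "Executions" window.
tests = ["run/foo_fast_bar", "run/foobar", "run/other"]

# Emulate the filter "foo*bar": any name containing foo followed by bar.
selected = [t for t in tests if fnmatch.fnmatch(t, "*foo*bar*")]
print(selected)  # -> ['run/foo_fast_bar', 'run/foobar']
```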

20.1.2  Regular Expression

The first character of the filter must be '=' to activate regular expression matching.

Pattern matching

c       Any character represents itself unless it has a special regexp meaning. Thus c matches the character c.
\c      A character that follows a backslash matches the character itself, except where mentioned below. For example, to match a literal caret at the beginning of a string, write \^.
\a      Matches the ASCII bell character (BEL, 0x07).
\f      Matches the ASCII form feed character (FF, 0x0C).
\n      Matches the ASCII line feed character (LF, 0x0A, Unix newline).
\r      Matches the ASCII carriage return character (CR, 0x0D).
\t      Matches the ASCII horizontal tab character (HT, 0x09).
\v      Matches the ASCII vertical tab character (VT, 0x0B).
\xhhhh  Matches the Unicode character corresponding to the hexadecimal number hhhh (between 0x0000 and 0xFFFF).
\0ooo   (i.e., zero ooo) Matches the ASCII/Latin1 character corresponding to the octal number ooo (between 0 and 0377).
. (dot) Matches any character (including newline).
\d      Matches a digit.
\D      Matches a non-digit.
\s      Matches a whitespace character.
\S      Matches a non-whitespace character.
\w      Matches a word character.
\W      Matches a non-word character.
^       Within a character set, the caret negates the set if it occurs as the first character, i.e. immediately after the opening square bracket. For example, [abc] matches 'a', 'b' or 'c', but [^abc] matches anything except 'a', 'b' or 'c'.
-       Within a character set, the dash indicates a range of characters. For example, [W-Z] matches 'W', 'X', 'Y' or 'Z'.
E?      Matches zero or one occurrence of E. This quantifier means "the previous expression is optional", since it matches whether or not the expression occurs in the string. It is the same as E{0,1}. For example, dents? matches 'dent' and 'dents'.
E+      Matches one or more occurrences of E. This is the same as E{1,}. For example, 0+ matches '0', '00', '000', etc.
E*      Matches zero or more occurrences of E. This is the same as E{0,}. The * quantifier is often used by mistake: since it matches zero or more occurrences, it also matches when the expression does not occur at all. For example, the regexp \s*$ matches every string, not only strings ending in whitespace, because even a string without trailing whitespace contains zero whitespace characters before its end. To match strings that have at least one whitespace character at the end, use \s+$.
E{n}    Matches exactly n occurrences of the expression. This is the same as repeating the expression n times; for example, x{5} is the same as xxxxx. It is also the same as E{n,n}, e.g. x{5,5}.
E{n,}   Matches at least n occurrences of the expression.
E{,m}   Matches at most m occurrences of the expression. This is the same as E{0,m}.
E{n,m}  Matches at least n and at most m occurrences of the expression.
()      Groups expressions into sub-expressions.
|       Alternative. Example: "aaa|bbb" matches the string "aaa" or "bbb".

String substitution

&       The matched expression.
\n      Sub-expression number n. Example: the regular expression "(.*):([0-9]*)" matches the string "joe:18"; the replacement string "\1 is \2" then produces "joe is 18".
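As an illustration, the substitution above can be reproduced with Python's re module (an assumption for demonstration only; Python's regexp dialect is close to, but not identical with, the filter syntax described here):

```python
import re

# The pattern captures everything before the last ':' as group 1 and the
# trailing digits as group 2; the replacement references both groups.
result = re.sub(r"(.*):([0-9]*)", r"\1 is \2", "joe:18")
print(result)  # -> joe is 18
```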

20.2  Code/Test Coverage Level

The menu entry "Instrumentation->Level:x" sets the targeted code coverage count or, if the project was compiled with instrumentation hit support, the targeted test coverage count.

The level determines how many executions or test coverage runs are necessary before an instrumented piece of code counts as executed.
Example: if the level is set to 10 and the project was compiled with code coverage count support, each line of the source code must be executed 10 times. If compiled with code coverage hit support, each line must be executed by 10 execution runs. The menu entry "Tools->Test Coverage Count Mode" and the button pictures/testCountMode.png switch between code coverage count and test coverage count analysis. This provides the behaviour of code coverage hit analysis when the project is compiled with code coverage count support.

20.3  Code Coverage Algorithm

CoverageBrowser displays the code coverage analysis (statement block, decision or condition) generated by CoverageScanner. But "Instrumentation->Coverage Method->Statement Block" lets you reduce the analysis to the code coverage of statement blocks. This produces the same result as compiling with the --cs-statement-block option of CoverageScanner. Similarly, "Instrumentation->Coverage Method->Decision" shows the code coverage analysis at the decision level.

Here is a short overview of the command line options necessary for each code coverage analysis method:

Coverage analysis                        CoverageScanner command line option
Statement Block                          --cs-statement-block
Decision with full instrumentation       --cs-decision
Decision with partial instrumentation    --cs-decision --cs-partial-instrumentation
Condition with full instrumentation      (default)
Condition with partial instrumentation   --cs-partial-instrumentation

20.4  Optimized execution order

CoverageBrowser can calculate an execution order for tests in which high code coverage is reached quickly with a small number of tests.

In this execution order, the test with the highest coverage comes first. The second test is the one that adds as much additional code coverage as possible, and so on.

This feature is meant for cases where the full test suite cannot be executed, e.g. because there is not enough time or there are many manual tests. Then one can run a number of tests from the beginning of the list, say the first 20, and still get a high coverage fast.

To calculate the execution order proceed as follows:

  1. Select a set of executions in the "Executions" window.
  2. Click on "View->Optimized Execution Order…". The "Optimized Execution Order" window (see Figure 20.1) is then displayed.
  3. Click on the "Compute" button to start the analysis.
Figure 20.1: Optimized execution order
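The ordering described above amounts to a greedy heuristic: at each step, pick the test that adds the most not-yet-covered code. A minimal sketch in Python (test names and coverage sets are hypothetical):

```python
# Greedy sketch of the optimized execution order: repeatedly pick the
# test that adds the most not-yet-covered lines. Test names and the
# coverage sets are hypothetical.
def optimized_order(coverage):
    order = []
    covered = set()
    remaining = dict(coverage)
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            break                     # no remaining test adds coverage
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {
    "smoke":  {1, 2, 3},              # line numbers covered by each test
    "login":  {1, 2, 3, 4, 5},
    "logout": {4, 6},
}
print(optimized_order(coverage))      # -> ['login', 'logout']
```

Note that "smoke" never appears in the result: after "login" and "logout" it adds no new coverage, which is exactly why running only the first tests of the list already yields high coverage.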

20.5  Bug Location

20.5.1  Theory

To locate a bug, CoverageBrowser simulates the behaviour of a human programmer searching for a single error in the source code. It simplifies the programmer’s behaviour to a stochastic process that goes from source code line to source code line. After each step, the process tries to jump to the next better error candidate. After an infinite time, we can then look at the probability that a source code line was chosen as the best location of the failure.

Because the instructions in a program are strongly dependent on each other, CoverageBrowser forms groups of instructions that are always executed together, since such instructions cannot be distinguished from the code coverage data. The bug location algorithm works with these groups of instructions instead of individual statements.

At the beginning of the process, a covered source code line is selected at random. Then we select another instrumented source code line with the following rules:

  1. Select a test which covers the current line.
  2. Then select the next source line as follows:
    1. If the test had passed, the line that caused the failure is not expected to be among the source lines executed by the selected test. Therefore select any instrumented line that is not executed by this test.
    2. If the test had failed, select any source code line that is executed by this test.

We repeat this process until a set of pertinent source code lines is identified.

An example

We will use a trivial example to illustrate how the algorithm works. The following function computes the inverse of a number:

float inv( float x )
{
    if ( x != 0 )
        x = 1 * x; // <- here is the bug
    else
        x = 0;
    return x;
}

The bug itself is easy to understand; a multiplication is used instead of a division.

Our test suite is:

INV(0)   inv(0) == 0          Passed
INV(1)   inv(1) == 1          Passed
INV(2)   inv(2) == 0.5        Failed
INV(3)   inv(3) == 0.3333333  Failed
INV(4)   inv(4) == 0.25       Failed

We will now simulate the bug location algorithm step by step.

The following is a simplified version of the algorithm that Squish Coco uses. It would return the same results as the actual algorithm but would be too slow in practice. For better precision and better performance, CoverageBrowser computes the probabilities directly instead of using a sampling method as below.

First we note that it is not possible to distinguish between the lines “if ( x != 0 )” and “return x;” with a test: if one of these lines is executed, the other one is too. We group them together and view them as a single line. This means that if we estimate that these lines are a good error candidate, we cannot determine which of them is responsible for the bug. To simplify the explanation, we omit the “return x;” statement from now on.

The algorithm starts by randomly selecting a source code line as an error candidate. We will use here the line “if ( x != 0 )” as our starting point. The algorithm then searches the list of the tests that execute this line and chooses one at random; assume that it selects INV(2):

Figure 20.2: Bug location of INV example – Step 1

The test INV(2) has failed, and we suppose that one of the source code lines executed by this test is responsible for the error. The algorithm then selects as the next error candidate a line that is executed by INV(2); we assume it is “x = 1 * x”:

Figure 20.3: Bug location of INV example – Step 2

The algorithm then randomly selects the test INV(1) from the set of tests that execute the line “x = 1 * x” (these are INV(1), INV(2), INV(3) and INV(4)):

Figure 20.4: Bug location of INV example – Step 3

The test INV(1) has passed, so we suppose that a source code line which is not executed by this test is responsible for the error. We then select “x = 0;” as the next candidate:

Figure 20.5: Bug location of INV example – Step 4

We then iterate the process indefinitely and compute the probability that each source line is chosen as an error candidate:

Line             Probability
x = 1 * x        0.528
if ( x != 0 )    0.283
x = 0;           0.188

As expected, the line “x = 1 * x” has the highest probability of having a bug.

20.5.2  Usage

To calculate the bug location proceed as follows:

  1. Select a set of executions in the "Executions" window. At least one execution should have failed.
  2. Click on "View->Bug Location…". The window (see Figure 20.6) will be displayed.
  3. Click on the "Compute" button to start the analysis.
Figure 20.6: Bug Location

20.6  Patch Analysis

With the menu entry "Tools->Patch File Analysis…" one can generate a report about the influence of a patch on the test coverage of a project, without running the test suite for the patched version.

Prerequisites are a project for which .csmes and .csexe files exist and are loaded into CoverageBrowser, and a diff file. Patch analysis works best with programs that have automatic tests and which are instrumented in such a way that the names of the tests are known to Squish Coco (see Chapter 34.1). Line coverage (--cs-line) and statement block coverage (--cs-statement-block) should not be disabled. (They are on by default.)

The diff file must be in the “unified” format. It is generated by the Linux diff utility with the option -u, and is also the default output format of several version control systems (see Chapter 40).
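As an illustration, a patch in the unified format looks like the following. The file name src/inv.c and the change itself are hypothetical; the example fixes the bug from the inv() function discussed in the bug-location section.

```diff
--- src/inv.c
+++ src/inv.c
@@ -1,8 +1,8 @@
 float inv( float x )
 {
     if ( x != 0 )
-        x = 1 * x;
+        x = 1 / x;
     else
         x = 0;
     return x;
 }
```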

Clicking the menu entry "Tools->Patch File Analysis…" opens the dialog "Patch File Analysis". It has the following entries:

The title of the report, both for HTML and CSV.
Patch File
Path of the patch file that contains the changes to the project.
To select the output file and its type:
Either HTML or the Excel CSV format.
The field to the right of the "Type" field contains the name and path of the report file that is generated.
Source Code Parameter
For HTML reports, to select the display of the annotated source code:
Tooltip Maximal Size
The annotated patch file in the HTML report has a tool tip which displays the tests that have executed a certain line of code. This parameter sets the maximal number of tests that can appear in a tooltip. With a value of 0, no tooltips appear.
CSV Parameter
To select the format of the CSV report:
Column Separator
The column separator symbol can be either a comma or a semicolon.
To choose the content of the report:
Execution Statistics
Create a table that groups the tests by their results. It shows how many of the tests have passed, failed, need manual testing, and whose execution status is unknown.
Create a list of the tests that execute code which is affected by the patch. For each test, the name and the execution result is shown.
Source Code Statistics
Create a table that shows the influence of the patch on the test coverage. It shows how many lines in the patch are covered by a test, how many are not, and for how many lines Squish Coco could not find out whether they are covered. These numbers are shown for the lines that were removed, or added, and for all lines of the patch.
Annotated Patch Source
Create an annotated version of the patch file. Each line of code in the patch is shown in red if it is removed, green if it is added, or otherwise gray. Also shown are the line numbers of the code lines, both before and after the patch is applied, and the number of tests that cover a line. This last field also has a tooltip that shows which tests cover the specific line. (The tooltip is only visible if "Tooltip Maximal Size" is set to a non-zero value.)

To generate the report, one clicks either the button "OK" or "Show": The second button also opens a browser window to show the generated report. Clicking "Apply" saves the values of the dialog entries without generating a report, while "Cancel" closes the dialog without saving anything.

20.7  Comparing Code Coverage of Two Software Releases

CoverageBrowser is able to compare two instrumentation databases in order to:

  1. check whether the modified/unmodified code is correctly tested.
  2. find which tests are impacted by a source code modification.

This feature is particularly suited to comparing two releases that differ only by small modifications (bug fixes only) and to limiting the tests to the modified code.

In this mode CoverageBrowser uses the following typographic rules:

Rule             Source Window          Method List         Source List        Execution List
Normal font      Identical source part  Identical methods   Identical files    Executions available in both releases
Bold             Modified text          Modified methods    Modified files
Bold+Underline   New text inserted      New methods         New files          New executions
Bold+Strike      Deleted text           Deleted methods     Deleted files      Missing executions

The comparison and difference algorithm of CoverageBrowser is designed particularly for C and C++ source code; it ignores white space and modifications in comments.

20.7.1  Reference Database

The reference database is the base instrumentation database that is used for the comparison. To select it, click on "Tools->Compare with…" and select a .csmes database. The working database and the reference database can be switched by clicking on "Tools->Switch databases".

Once the reference file is loaded, additional filter possibilities are available in the "Executions", "Sources" and "Methods" windows. These filters let you show or hide modified, new, deleted or identical procedures and source files.

The "Executions" window displays a mix between the executions of the reference and the current release:

20.7.2  Coverage analysis of modified/unmodified source code

CoverageBrowser is able to limit the code coverage analysis to the modified (resp. unmodified) functions. When the coverage analysis is restricted to the modified (resp. unmodified) functions only, CoverageBrowser treats all unmodified (resp. modified) functions as if they were not instrumented. Limiting the code coverage analysis to modified functions can be a practical way to verify that the new features are tested and to identify the list of tests which are impacted by a modification.
To limit the code coverage to modified functions (resp. unmodified functions), click on "Tools->Analysis of Modified Methods" (resp. "Tools->Analysis on Identical Methods").

20.8  Changing the Instrumentation Database

20.8.1  Merging Instrumentations

Clicking on the menu entry "File->Merge with…" lets you import the executions, the source code, and the instrumentations from other .csmes files. Comments and code marked as validated are merged together.

20.8.2  Importing Unit Tests

Clicking on the menu entry "File->Import Unit Tests…" lets you import the execution report of unit tests into the current application. Only execution reports of source files present in the main application are imported. Executions of other source files (for example test code) are ignored.

20.8.3  Importing Reviewer Comments

Clicking on the menu entry "File->Import Reviewer Comments…" lets you import comments and manual validations of a previous version of the current instrumentation database. Comments and manual validations of unmodified functions will be imported even if the source code is modified.

20.9  Function Profiler

The function profiler is activated as soon as the option --cs-function-profiler=⟨option⟩ (where ⟨option⟩ can be ’all’ or ’skip-trivial’) is added to the compiler and linker command line arguments. It lets you view in CoverageBrowser the time spent in each function.

As with code coverage, the profiler lets you analyze the time consumed by each procedure for each selected group of tests. It also lets you compare the timings between two product versions or between executions.

The profiling information is displayed in the docking window "Function Profiler", which shows:

Total Duration
the cumulated execution time of the function.
the number of function calls.
Mean Duration
the mean execution time of a single call.

All timing information is for the selected executions. Tests can be interactively included in or excluded from the profiling analysis by selecting them in the "Executions" window. Clicking on the title bar of a column sorts it, which makes it easy to find the highest values.

The clock ticks used by the profiler are different from those used for computing the execution time of the application. The former can measure short durations but do not have the same absolute precision. For this reason, the displayed timings may differ slightly between the profiler window and the execution window.

20.9.1  Comparing Executions Together

CoverageBrowser also permits comparing the profiling information of two sets of tests. The principle is simple: a reference set of tests is selected and compared to another set.

The comparison is realized by a difference and a ratio computed in additional columns:

the difference (selected − reference) lets you compare the absolute times and counts between the two sets.
the ratio (selected / reference) lets you compare the relative difference between the two sets.

For all three kinds of measurements provided by the function profiler (count, duration and mean duration), three additional columns are provided with the values of the reference set, the difference and the ratio. Since each of these columns can be sorted, it is possible to quickly identify the differences in terms of computation resources between two tests.

20.9.2  Comparing Two Software Versions Together

Exactly as for the comparison of the executions of one binary, it is also possible to compare the executions of two different binaries. This lets you analyze the differences in performance between two software versions.

CoverageBrowser then provides the same computations as for the execution comparison in the "Profiler" window. It also provides some additional columns which show the functions that are instrumented differently from one version to the other.

Chapter 21  Generation of Reports

This feature is not available for black box testing.

21.1  HTML/CSV Report

The menu entry "Reports->Generate HTML/CSV Report…" allows exporting the code coverage statistics (per method, source file, execution, …) of the selected executions in HTML or CSV format. It also allows listing the manually validated and unexecuted code parts.

21.2  EMMA-XML Report

Selecting the menu entry "Reports->Export Statistics in EMMA-XML Format…" allows exporting code coverage statistics in EMMA-XML format. The output contains global statistics in a format that is compatible with EMMA. This allows using Squish Coco in tools that provide support for EMMA, notably giving an easy way to use Squish Coco with continuous integration servers like Jenkins CI.

EMMA defines four categories for coverage: classes, methods, blocks, and lines. With Squish Coco they have the following meaning:

EMMA category   Meaning
classes         A class is considered executed if one of its methods is called.
                Code which is not in a class is located in the class "" (empty).
methods         A method is covered if it was called.
blocks          Code coverage at statement block level.
lines           Line code coverage (if compiled with line coverage support).
conditions      Decision/condition coverage (if compiled with decision/condition
                coverage support).

21.3  Cobertura XML Report

The menu entry "Reports->Export Statistics in Cobertura-XML Format…" allows exporting code coverage statistics in Cobertura-XML format. The output contains global statistics in a format that is compatible with Cobertura. This allows the use of Squish Coco in tools that provide support for Cobertura, notably giving an easy way to use Squish Coco with continuous integration servers like Jenkins CI and SonarQube.

The statistics in the Cobertura report are computed a little differently from the usual way, due to limitations of the report format. The report is in fact a combination of line and condition coverage.

Every statement block, decision and condition instrumentation is counted as a Cobertura condition. The end of a function, for example, is marked as one condition to fulfill.

The report format also requires that each line with a condition is also instrumented at line coverage level. This is not always the case (e.g. with empty functions), so in these cases some instrumentation values are added artificially. Finally, if an instrumented statement spans more than one line, it is necessary to generate instrumentation data for each line.

All this has an impact on the computation of the statistics for classes, methods and sources. But the resulting values are comparable to those in the other report formats.

21.4  JUnit Report

The menu entry "Reports->Export JUnit Report…" allows exporting the test results as a JUnit report. This report does not contain coverage data; it only lists the test execution result (i.e. “passed” or “failed”) for each test item.
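For orientation, a JUnit report generally has the following XML shape. This is a generic sketch of the JUnit format with hypothetical test names taken from the inv() example; the exact element attributes written by CoverageBrowser may differ.

```xml
<testsuite name="inv_tests" tests="5" failures="3">
  <testcase name="INV(0)"/>
  <testcase name="INV(1)"/>
  <testcase name="INV(2)">
    <failure message="inv(2) == 0.5 expected"/>
  </testcase>
  <testcase name="INV(3)">
    <failure message="inv(3) == 0.3333333 expected"/>
  </testcase>
  <testcase name="INV(4)">
    <failure message="inv(4) == 0.25 expected"/>
  </testcase>
</testsuite>
```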

21.5  Text Report

Clicking on the menu entry "Reports->Generate Text Report…" generates a small text report in the form of one line per executed/unexecuted item. A distinct line format can be specified for executed or unexecuted lines.

The following escape sequences are recognized:

Absolute source code file name
Relative to the current build directory source code file name
Line number
Column number

Example: Setting the field "Unexecuted Code Fragments" to "%f:%l: %m" will create a text file which contains all unexecuted code parts. Each line will look as follows:

foo.cpp:55: Unexecuted: 'return;'

Chapter 22  Preferences

22.1  Save/Load Project

"Save/Restore window position"
If this option is selected, the position of all windows and toolbars will be restored upon application restart.
"Reload automatically the last project"
If this option is selected, the last project opened will automatically be reloaded upon application restart.
"Saves project automatically on exit"
Saves the project file automatically without asking on application exit.


"Minimum Comment Size"
The minimum length required for a comment.
"Do not request a comment when setting an item to the ’manually validated’ state"
This option allows the user to manually modify the state of an instrumentation without entering a comment.
Enabling this option should be avoided: the state of an instrumentation should only be modified for a valid reason, and that reason should be recorded as a comment.

22.3  Thresholds

Thresholds are trigger values that control the background color of:


"Medium/High Coverage Level"
If the statistic is above this value, the background color is set to green. Otherwise, the color is orange.
"Low/Medium Coverage Level"
If the statistic is below this value, the background color is set to red. Otherwise, the color is orange.

22.4  Cache


Maximum number of executions loaded into RAM.
Maximum number of source files loaded into RAM.