VDB-3 Unit Testing

Revision History:
2012-Jan-09  boshkina  edited, added requirements
2012-Jan-03  boshkina  rewrote as a list of requirements
2011-Dec-28  boshkina  free form draft

Unit Testing

Introduction

The goal of unit testing is to isolate each part of the system and show that the individual parts are correct. This distinguishes it from other kinds of testing, such as integration, acceptance, and performance testing. This document describes requirements for the components involved in unit testing: the testing infrastructure, the software being tested, the build process, and the development conventions.

Overview

For the purposes of this text, a Unit refers to an interface together with an implementation. The interface is called the Interface Under Test (IUT). The implementation is called the Code Under Test (CUT).

A Test Case is a function that executes and verifies one or more operations from the corresponding Unit.

A Test Suite is a collection of Test Cases associated with the same Unit. There may be multiple Suites testing different aspects of a Unit's operation. Ideally, the union of all Suites for a Unit should provide 100% test coverage of the Unit's code.
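
To make these terms concrete, here is a minimal sketch in C++ (all names below, TestCase, TestSuite, and the sample case, are invented for illustration and are not part of any particular framework):

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical minimal framework: a Test Case is a named function,
    // a Test Suite is a named collection of Test Cases.
    struct TestCase {
        std::string name;
        std::function<bool()> run;   // returns true on pass
    };

    struct TestSuite {
        std::string name;
        std::vector<TestCase> cases;

        // Execute all cases, collecting pass/fail statistics.
        void Run() const {
            unsigned passed = 0, failed = 0;
            for (const TestCase & tc : cases)
                (tc.run() ? passed : failed)++;
            std::printf("%s: %u passed, %u failed\n",
                        name.c_str(), passed, failed);
        }
    };

    int main() {
        TestSuite s { "StringSuite", {
            { "EmptyStringHasZeroLength",
              [] { return std::string().length() == 0; } },
        } };
        s.Run();
    }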

Test Suites can form a hierarchy, where, for example, the leaf Suites are associated with individual units within a library, and the higher-level Suite represents a module test for the entire library.

A Test Suite is responsible for executing all its Test Cases and sub-suites and collecting the results in the form of statistics, usually the number of passed and failed test cases, execution time, etc.

When some Test Cases share common prologue/epilogue code, the Test Suite should support automatic execution of that code before/after each test case. This is done by introducing a special object called a Test Fixture, which encapsulates a pair of Setup/Teardown methods and a collection of Test Cases relying on them. Test Fixtures can be reused by test cases from different Test Suites.
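
A Test Fixture in this sense might look like the following sketch (hypothetical names; a real framework would invoke Setup/Teardown around each case automatically rather than requiring the manual calls shown in main):

    #include <cassert>
    #include <string>

    // Hypothetical fixture: shared context plus Setup/Teardown, run
    // around every test case that relies on it.
    struct TempFileFixture {
        std::string path;
        void Setup()    { path = "/tmp/vdb-test.tmp"; /* create the file */ }
        void Teardown() { /* remove the file */ path.clear(); }
    };

    // A test case using the fixture; the framework would call Setup()
    // before and Teardown() after each such case.
    void TestCase_WriteThenRead(TempFileFixture & fx) {
        assert(!fx.path.empty());
        // ... exercise the Unit against fx.path ...
    }

    int main() {
        TempFileFixture fx;
        fx.Setup();
        TestCase_WriteThenRead(fx);
        fx.Teardown();
    }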

Requirements

Framework

The purpose of a Unit Testing Framework is to support creation and mass execution of unit tests.

RF1. Support creation of Test Cases

RF1.1. Test Case must be identifiable by name

RF1.2. Test Case can be a C/C++ function, a C++ method, or a shell script

RF2. Support creation of test suites as collections of Test Cases

RF2.1. Test Suite must be identifiable by a globally unique name

RF2.2. Test Cases within one Test Suite must have unique names

RF2.3. Every Test Case must be assigned to a Test Suite

RF3. Support a hierarchy of Test Suites. A Test Suite should be able to contain a number of sub-suites, in addition to the suite's own test cases.

RF4. Support creation of Test Fixtures to contain common setup/teardown code and/or a context object shared by multiple test cases.

RF4.1. A Test Fixture can be reused by Test Cases from different Suites.

RF5. Support application of the same test code to different implementations of an IUT (a template-based sketch follows this list)

RF6. Support execution of test code using different data sources

RF7. Test framework must intercept abnormal termination signals raised by CUT, report them, and continue execution of the remaining tests

RF7.1. Test framework should not interfere with signal-handling logic inside CUT

RF7.2. Each test case should run in its own address space, so as not to interfere with the test framework and other test cases (a process-isolation sketch follows this list)

RF8. Support versioning of Test Cases with respect to versions of operations in the IUT

RF9. Collect test execution statistics: unit tests passed, failed, elapsed time

RF10. Provide code coverage information: for each IUT, mark which messages (operations) were and were not processed in the current run

RF11. Support error reporting from test code. Error reports must identify the location in the test code that raised the error condition (suite, test case, source file, line) and provide a user-specified or standard message (a macro sketch follows this list)

RF11.1. The test execution trace stream must be separate from the log stream used by CUT. The destination of the stream should be configurable.

RF11.2. Upon reporting an error, a test case should have the option to continue or terminate

RF12. Support memory allocation monitoring. Report leaked memory blocks separately for CUT and test code

RF13. Support execution of selected test suite(s) from a hierarchy

RF14. Performance overhead introduced by the test framework should be minimal

RF15. Provide trace of test execution in normal, verbose and quiet modes

RF16. Test framework must protect itself from infinite loops in CUT: it must terminate test cases that take too long to run, report the timeout, and continue execution of the remaining tests

RF16.1. A test case must be able to specify its own timeout value if it requires more time than usual.
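
One common way to satisfy RF5 in C++ is to write the test body as a template and instantiate it once per implementation of the IUT. A minimal sketch (the counter types are invented for illustration):

    #include <cassert>

    // Two hypothetical implementations of the same counter IUT.
    struct ArrayCounter  { int n = 0; void inc() { ++n; } int value() const { return n; } };
    struct AtomicCounter { int n = 0; void inc() { ++n; } int value() const { return n; } };

    // The same test code, applied to any implementation of the IUT (RF5).
    template <typename Impl>
    void TestIncrement() {
        Impl c;
        c.inc();
        c.inc();
        assert(c.value() == 2);
    }

    int main() {
        TestIncrement<ArrayCounter>();
        TestIncrement<AtomicCounter>();
    }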
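RF7, RF7.2 and RF16 are often satisfied together by running each test case in a child process with a bounded lifetime. The following POSIX-only sketch (hypothetical runner code, not taken from any existing framework) reports crashes and timeouts without taking down the test runner, and leaves signal handling inside CUT untouched (RF7.1):

    #include <csignal>
    #include <cstdio>
    #include <sys/wait.h>
    #include <unistd.h>

    // Run one test case in its own address space (RF7.2). A crash or an
    // infinite loop in CUT kills only the child process; the runner
    // reports the outcome and moves on (RF7, RF16). Signal handlers
    // installed by CUT inside the child are left alone (RF7.1).
    static bool RunIsolated(void (*testCase)(void), unsigned timeoutSec) {
        pid_t pid = fork();
        if (pid == 0) {            // child: arm the timeout, run the case
            alarm(timeoutSec);     // SIGALRM kills a runaway case (RF16)
            testCase();
            _exit(0);              // normal completion
        }
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return true;           // passed
        if (WIFSIGNALED(status))
            std::fprintf(stderr, "test killed by signal %d%s\n",
                         WTERMSIG(status),
                         WTERMSIG(status) == SIGALRM ? " (timeout)" : "");
        return false;              // crashed, timed out, or exited non-zero
    }

    static void PassingCase(void) { /* exercise CUT here */ }

    int main() {
        std::printf("passed: %s\n", RunIsolated(PassingCase, 5) ? "yes" : "no");
    }

A per-case timeout (RF16.1) falls out naturally here: the runner simply passes a different timeoutSec for that case.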
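The location information required by RF11 is typically captured with a preprocessor macro. A minimal sketch (the REPORT_ERROR name is invented):

    #include <cstdio>

    // Hypothetical reporting macro: records suite/case context plus the
    // exact source location of the failed check (RF11). A real framework
    // would route this to a configurable trace stream (RF11.1) and let
    // the test case choose to continue or terminate (RF11.2).
    #define REPORT_ERROR(suite, testcase, msg) \
        std::fprintf(stderr, "%s.%s: %s (%s:%d)\n", \
                     (suite), (testcase), (msg), __FILE__, __LINE__)

    int main() {
        if (2 + 2 != 5)   // a deliberately failing "expectation"
            REPORT_ERROR("MathSuite", "Addition", "expected 2 + 2 == 5");
    }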

Interfaces and Code Under Test

RI1. IUT should allow verification of the results of the operations being tested, for operations that are expected to change the state of the object under test. One way to achieve this is to include a standard set of testability-oriented methods (e.g. parse/toString/equal) in the IUT.
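
For example, a state-changing operation can be verified by a round trip through such methods. A sketch (the Range type is invented; the parse/toString/equal names follow the text above):

    #include <cassert>
    #include <cstdio>
    #include <string>

    // Hypothetical IUT with testability-oriented methods (RI1).
    struct Range {
        long lo = 0, hi = 0;
        bool parse(const std::string & s) {       // accepts "lo-hi"
            return std::sscanf(s.c_str(), "%ld-%ld", &lo, &hi) == 2;
        }
        std::string toString() const {
            return std::to_string(lo) + "-" + std::to_string(hi);
        }
        bool equal(const Range & r) const { return lo == r.lo && hi == r.hi; }
    };

    int main() {
        Range a, b;
        assert(a.parse("10-20"));        // state-changing operation under test
        assert(b.parse(a.toString()));   // round trip through toString
        assert(a.equal(b));              // resulting state verified via equal
    }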

RI2. CUT should be isolated from the implementations of other modules it may depend on, at least when those modules involve external resources and/or introduce execution delays. The common technique for achieving this is Dependency Injection: the CUT exposes the interfaces it depends on and accepts externally instantiated implementations of them. The test code is then responsible for instantiating and configuring mock (or, where practical, real) dependencies of the CUT before initializing it.
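
A minimal Dependency Injection sketch (all class names invented for illustration): the CUT receives its dependency through an abstract interface, and the test supplies a mock:

    #include <cassert>
    #include <string>

    // Dependency exposed by CUT as an interface (RI2).
    struct Storage {
        virtual std::string read(const std::string & key) = 0;
        virtual ~Storage() {}
    };

    // Code Under Test: receives the dependency, never creates it itself.
    struct ConfigReader {
        explicit ConfigReader(Storage & s) : storage(s) {}
        std::string get(const std::string & key) { return storage.read(key); }
        Storage & storage;
    };

    // Mock dependency: no external resources, no execution delays.
    struct MockStorage : Storage {
        std::string read(const std::string & key) override {
            return key == "host" ? "localhost" : "";
        }
    };

    int main() {
        MockStorage mock;            // instantiated by the test code
        ConfigReader cut(mock);      // injected before CUT is used
        assert(cut.get("host") == "localhost");
    }

Because the mock involves no external resources, the test stays fast and deterministic.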

Build Process and Source Control

RB1. Test sources should be kept in a separate source sub-tree that mirrors the main code tree

RB2. Build command must provide means to build all test suites defined in the source tree, combined into one executable

RB3. Build command must provide means to build selected test suites, combined into one executable

RB4. Build command should support building a minimal retest executable based on file update timestamps and dependencies between source files

RB5. An auto-build (Continuous Integration) platform should periodically check out the latest sources, build and run the tests, and report the results via email and/or a web page update. This should be done on all supported platforms, in all supported configurations (debug/release/etc.)

Miscellaneous

M1. Before checking in, developers should merge their changes with the latest code base, and make sure 100% of unit tests pass.

M2. There should be a tool that parses an interface definition and generates code for a test suite and stubs for test cases for all operations in the interface.

M2.1. The test generation tool should not overwrite existing test code.

Continuous Integration

A roadmap:

1. Revive tests under test/ and put them into one "executable"

2. Include test projects in the standard build targets or add a new target ("tests"). Cover debug, release, etc. (Linux only for now)

3. Set up a cron job to run make periodically. Check results manually.

4. E-mail build results to a buildmaster

5. Rotate the buildmaster duty (2 people/week)

6. Add test execution to the cron job(s)

7. E-mail test results to buildmasters

8. Publish build/test results (intranet, lava lamps, desktop widget)

9. Port to Mac, Windows, ...