Project Test Plan

This document was produced by the SchemeStation project during the Tik-76.115 course.

The terminology used in this document is specified in the Terminology specifications.

1 Introduction

This document describes the test planning of the SCHEMESTATION project. It is structured like the [IEEE 829] test plan template, with appropriate modifications for this particular case.

The purpose of the testing is both to verify and to validate the SCHEMESTATION simulator so that the final product meets the needs and expectations of the client; in short, to produce quality software. This is done by evaluating the system: the evaluation should determine whether the implementation meets the specified definitions and requirements.

The scope of the testing is the whole SCHEMESTATION simulator, as described in the Project requirements specification, the SchemeStation Functional Specification, and the SchemeStation Technical Specification.

The testing process is divided into four levels:

  1. Unit testing
  2. Integration testing [Integration Testing Plan]
  3. System testing [System test specification]
  4. Acceptance testing [Acceptance testing specification]

In addition to testing the system against its direct definitions, the interoperability and performance of the system are tested, together with stress testing and system monitoring. The testing process defined in this document has been designed to be as extensive as the available resources allow; it is not exhaustive, and the result of the process should be a reasonable understanding of the functionality of the system.

2 Components to be tested

All components of the SCHEMESTATION simulator will be tested, with the exception of the event loop (the event loop module is not the product of this project; it has been acquired from SSH Communications Security Oy -- the module can be freely used for academic purposes and thus places no restrictions on the project).

The components include:

  • The heap (Heap Module Specification)
  • The virtual machine (Virtual machine layout and interface definition)
  • The scheduler (Scheduler definition)
  • The addressing module (Addressing module specification)
  • The messaging system (Messaging System Specification)
  • The networking module (Networking Specification)
  • The external agent interface (External Agent Interface Specification)
  • The cross-compiler and assembler (Compiler technical specification)

The testing is performed for the smaller software components first (unit testing), then for compositions of subsystems (integration testing), and finally for the whole software product (system and acceptance testing). The main goal of the testing phase is to pinpoint failures to as small a scope in the actual code as possible (and as early as possible) in order to keep the development process feasible.

3 Features to be tested

Each component is tested separately (unit testing), as described in the component's unit test plan:

  • Heap Unit Testing Plan
  • VM Unit Testing
  • Scheduler Unit Testing
  • Addressing System Unit Testing Plan
  • Messaging System Unit Testing Plan
  • Networking Testing Plan
  • External Agent Interface Testing Plan
  • Unit testing of the SchemeStation cross-compiler and assembler

The integration tests can be divided into separate categories. The most important is the core category: the integration of the virtual machine, the scheduler, and the heap. The tests for this triple mainly involve loading agent linearizations into the virtual machine and verifying that the execution yields correct results. The other significant category is the compiler-assembler pair: while the core-category tests use (presumably hand-made) agent linearizations, the compiler and assembler produce them. Once it has been ensured that the core category works as intended, it can be used to execute agent linearizations produced by the compiler; the results of the execution are then compared with those of other Scheme implementations. The addressing and messaging modules form the next category to be tested; their integration tests are mainly concerned with the consistency of the shared data structures. Finally, the integration of the addressing/messaging pair with the networking module is tested. The phases of integration testing are described more precisely in [Integration Testing Plan].
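
As an illustration of a core-category test, the following C sketch loads one agent linearization into the virtual machine, runs it, and compares the result with a recorded expected value. The vm_* names are hypothetical placeholders; the actual interface is defined in the Virtual machine layout and interface definition.

    #include <string.h>

    /* Hypothetical virtual machine interface; the real entry points
     * are defined in the "Virtual machine layout and interface
     * definition" document. */
    typedef struct vm vm_t;
    extern vm_t       *vm_create(void);
    extern int         vm_load_agent(vm_t *vm, const char *lin_file);
    extern int         vm_run(vm_t *vm);
    extern const char *vm_result_string(vm_t *vm);
    extern void        vm_destroy(vm_t *vm);

    /* Run one (hand-made) agent linearization and check that the
     * execution yields the expected result.  Returns 1 on success. */
    static int run_core_test(const char *lin_file, const char *expected)
    {
        vm_t *vm = vm_create();
        int ok = 0;

        if (vm == NULL)
            return 0;
        if (vm_load_agent(vm, lin_file) == 0 && vm_run(vm) == 0)
            ok = (strcmp(vm_result_string(vm), expected) == 0);
        vm_destroy(vm);
        return ok;
    }

The same harness can later be pointed at linearizations produced by the compiler-assembler pair, with the expected values taken from another Scheme implementation.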

The system testing (System test specification) consists mainly of running different applications on top of the system; based on the behaviour of the applications, the tests are either accepted or rejected.

The acceptance testing (Acceptance testing specification) is to be negotiated with the client; this testing determines whether the client is satisfied with the software as a whole (the validity of the software), from viability and usability to robustness. As the result of this test, the client either accepts or rejects the software.

4 Features not to be tested

The features not to be tested are very limited: the event loop mechanism will not be extensively tested, due to the (previously mentioned) fact that we can trust the correctness of the implementation.

The compiler is not a direct part of the environment itself, so its testing is relaxed (generally speaking, testing a compiler is not a trivial task) -- however, as it is very important for the usefulness of the system as a whole, it will be tested as thoroughly as is convenient from our perspective (Unit testing of the SchemeStation cross-compiler and assembler).

5 Approach to testing

Testing an operating system is a complex and demanding task - it is impossible to test every combination and situation that might arise, so choosing the right targets for the testing is very important.

One factor in the testing process is to automate the testing as much as possible; this means that after changing something in a previously tested system, it is possible to run all the tests and be reasonably sure that no new errors have been introduced into the previously working system (regression testing). This applies to every test process of the system.

Some guidelines for the testing process, which involves the four classic test steps (unit, integration, system and acceptance testing):

  • The enforcement of the pre- and postconditions of public interfaces (unit testing)
  • The status mechanism of public interfaces; erroneous input must also be provided, as the sketch after this list illustrates (unit testing)
  • The basic functionality of each module (unit/integration testing)
  • The basic overall system functionality (integration/system testing)
  • The system functionality in abnormal situations (system testing)
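
As a sketch of the status-mechanism guideline above, a unit test should also feed erroneous input to a public interface function and verify that a defined error status comes back instead of a crash. The msg_send function and the status codes here are invented for illustration; the actual interface is defined by the module under test.

    #include <assert.h>
    #include <stddef.h>

    /* Illustrative status codes and interface, not the actual ones. */
    typedef enum { ST_OK, ST_INVALID_ARG } status_t;
    extern status_t msg_send(const char *address, const void *data,
                             size_t len);

    static void test_erroneous_input(void)
    {
        /* Invalid arguments must fail cleanly with a status code,
         * never crash or silently succeed. */
        assert(msg_send(NULL, "hello", 5) == ST_INVALID_ARG);
        assert(msg_send("node-1:agent-1", NULL, 5) == ST_INVALID_ARG);
    }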

5.1 Test coverage

As there is no tool for measuring the exact coverage of the tests (in relation to the produced source code), the unit tests have to be designed carefully and reviewed by someone other than the designer of the tests. The aim is to cover as much of the core system code as possible, but in practice the coverage observations will be focused on a few particularly important modules (VM, scheduler, messaging). How the coverage is measured will be decided later.

5.2 Interoperability testing

The purpose of interoperability testing is to address the portability of the SCHEMESTATION simulator; several SCHEMESTATIONs running on different platforms must be able to interoperate and function correctly.

5.3 Performance testing

Performance testing is performed for individual units when reasonable, as well as for the whole system. Resource (CPU, memory, disk) usage profiles can also be generated with the help of test units if necessary. The results enable the comparison of the system performance with the plans, make it easier to optimize the system, and, later, possibly allow different implementations to be compared.
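
As a minimal sketch, the CPU time consumed by a unit under test can be measured with the standard clock() facility; run_test_workload below stands for whatever operation is being profiled:

    #include <stdio.h>
    #include <time.h>

    extern void run_test_workload(void);   /* the operation being profiled */

    int main(void)
    {
        clock_t start, end;

        start = clock();
        run_test_workload();
        end = clock();

        /* CLOCKS_PER_SEC converts the tick count to seconds of CPU time. */
        printf("CPU time: %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }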

5.4 Usability testing

Testing the usability of an operating system simulation is somewhat unreasonable, as there is no objective way to measure the usability of operating system interfaces. There are components, such as the SCHEMESTATION terminal, that could be tested for usability, but in our view this is not in the scope of this project; the interfaces of the system are in a much more dominant position. Possible usability testing will be considered again later, when there is more information about the system as a whole.

5.5 System Monitoring

The monitoring of the system must be adequate; it must be tested that there is enough information available from all modules that can be monitored. At this point (the design phase) this has not yet been defined, as there is not yet enough first-hand experience of the functionality of the system as a whole.

5.6 Stress testing

Stress testing is done in particular where state is involved; the heap, for instance, must be stress tested in order to be sure that there are no memory leaks. The system as a whole will also be stress tested (see System test specification).
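
For instance, a heap stress test might repeatedly allocate and release objects and then check that the amount of memory in use has returned to its initial level. The heap_* names below are illustrative assumptions; the actual interface is given in the Heap Module Specification.

    #include <assert.h>
    #include <stddef.h>

    /* Illustrative heap interface; the actual one is defined in the
     * Heap Module Specification. */
    extern void  *heap_alloc(size_t size);
    extern void   heap_free(void *ptr);
    extern size_t heap_bytes_in_use(void);

    #define ROUNDS 100000

    static void stress_heap(void)
    {
        size_t before = heap_bytes_in_use();
        int i;

        for (i = 0; i < ROUNDS; i++) {
            void *p = heap_alloc((size_t)(i % 512) + 1);

            assert(p != NULL);
            heap_free(p);
        }

        /* After every allocation has been released, the usage count
         * must be back where it started; otherwise memory is leaking. */
        assert(heap_bytes_in_use() == before);
    }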

5.7 Tools

A few tools are used in the testing process: test automation involves scripting (perl, Unix shell scripts, GNU make, etc.), and memory allocation is checked with the Electric Fence malloc debugger. In some testing situations it is most convenient to inspect the state of the system with gdb - this way one can be certain of the exact line on which a variable's value changed.

5.8 Miscellaneous

As a part of this testing (or quality assurance) process, the coding guidelines (Project coding policy) enforce some quite useful practices:

  • Every function is declared following a standard template for parameters and return values; that is, every public interface function returns a status as its direct return value, and all actual information is passed via arguments.
  • Each public interface function of a module must have (if possible) pre- and postconditions for its parameters; a sketch of this template follows this list.
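
A minimal sketch of the template under these guidelines, with assert-based pre- and postconditions (the function, module, and type names are invented for illustration):

    #include <assert.h>
    #include <stddef.h>

    typedef enum { ST_OK, ST_INVALID_ARG } status_t;

    /* A public interface function following the coding policy: the
     * status is the direct return value, and the actual result is
     * passed back through an out-parameter. */
    status_t sched_next_agent(const int *ready_queue, size_t len,
                              int *agent_out)
    {
        /* Preconditions on the parameters. */
        assert(ready_queue != NULL);
        assert(agent_out != NULL);

        if (len == 0)
            return ST_INVALID_ARG;

        *agent_out = ready_queue[0];

        /* Postcondition: a valid agent identifier was produced. */
        assert(*agent_out >= 0);
        return ST_OK;
    }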

6 Conditions on accepting/rejecting a test

Each test scenario defines the success criteria of the test so that an individual test can be either accepted or rejected. In case a performed test is not accepted, a bug report has to be produced (except in the case of unit testing). The test units should be constructed so that they automatically report the status of the test after the testing has finished. The guideline for all tests is that every time a test case finds something unexpected, it should fail.
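
A sketch of such a self-reporting test unit: it runs its cases, prints a PASS/FAIL line, and returns the same verdict through its exit status so that the surrounding automation can act on it (the case names are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative test cases; each returns nonzero on success. */
    extern int test_alloc_free(void);
    extern int test_erroneous_input(void);

    int main(void)
    {
        int failures = 0;

        failures += !test_alloc_free();
        failures += !test_erroneous_input();

        /* Report the verdict both in the log and through the exit
         * status, so that the make/script machinery can act on it. */
        if (failures == 0) {
            printf("PASS: all cases succeeded\n");
            return EXIT_SUCCESS;
        }
        printf("FAIL: %d case(s) failed\n", failures);
        return EXIT_FAILURE;
    }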

7 Conditions on interrupting and continuing a test

Individual test units contain the necessary information for interrupting and continuing a particular test. The combined execution of all test units will be built so that the test process can be interrupted and continued later with the aid of a traditional makefile approach based on dependencies of the targets and the results of the tests.

8 Materials produced by the tests

Each test unit should produce a report document in which the indications of failures and successes can be observed. The test units are independent, so the inputs, outputs, and other side-products they might produce are their own concern and are specified in their particular documentation.

A bug report should provide at least the following information:

  • Title
  • Name of the reporter
  • Status of the bug (under inspection/delayed repair/fixed/..)
  • Proposed person responsible for fixing the bug
  • Priority of repair
  • Failed component
  • Test configuration
  • Description of the bug
  • How to repeat the behaviour

An automatic bug report facility is already in use for this project.

9 Test tasks

The tasks in the testing process for creating each individual test unit are:

  • Analysis of the requirements of the tests
  • Designing the verification methods
  • Test environment definition and maintenance requirements
  • Test report definition
  • Creating the test document (with test plan)
  • Consulting the testing manager to accept the test unit

Using a test unit implies the following:

  • Acquiring the test unit and reading its documentation
  • Creating the test environment
  • Running the tests
  • Possibly reporting the results of the tests to the testing manager
  • If there were problems, reporting them (possibly a bug report)

10 Environment requirements

The test environment depends on the level of testing to be done. Unit and integration testing is done within the individual implementor's development environment. For system and acceptance testing a special production environment has to be set up; this means that in the version control system (CVS) a base configuration must be frozen for the particular test, so that it can be reproduced later.

The specific test environment requirements are part of the individual test documents.

11 Responsibilities

The testing manager takes the main responsibility for the testing process. This means that the testing policies are enforced and the individual test units accepted by the testing manager. Control over test reporting and the actions to be taken in case of difficulties are also the manager's responsibility.

Unit testing is the responsibility of the implementor of the unit. Integration tests are done in co-operation by the implementors of the units to be integrated. System testing is the responsibility of the testing manager and the project manager. The definition of acceptance testing is the responsibility of the client and will be negotiated later.

12 Resources

The resources for the testing consist of the whole project group. Everyone has at least one component to test as a unit, and everyone participates in the integration testing. System testing is the responsibility of the testing manager and the project manager; if necessary, they can appoint other project members to assist them with the system testing.

13 Schedule

The testing process as a whole should proceed in step with the overall development of the SCHEMESTATION simulator. The basic schedule is:

  1. Unit testing ready 15.1.1998
  2. Integration testing units ready 15.2.1998
  3. System testing units ready 15.3.1998
  4. Acceptance testing with the delivery of the system (by 15.4.1998)

There is no reason to schedule a particular test unit for a specific date; the testing manager ensures that the tests are ready by the given dates. Also, the integration and system testing units are used extensively whenever a tested system is rebuilt, so it is not possible to state when, for example, integration testing is finished; it ends when the software development is completed.

14 Risk management

The risks involved in the testing process are:

  • Too fine grained testing; wasted effort
  • Too coarse grained testing; tests do not reflect the functionality of the item to be tested
  • Wrong test targeting; testing something that is unnecessary or of little or no importance
  • Underestimated scheduling; creating the test units is more time consuming than expected

In case of serious troubles with the testing process, the testing manager has to decide the following:

  • How does this affect scheduling; is rescheduling necessary?
  • What kind of tests can be skipped if there is no time to finish all planned testing?
  • How to retarget the testing?
  • Are there available resources that could be used to help in testing?

15 Acceptance

Each test is accepted when two persons agree on the success of a test unit (they "sign" the test unit as accepted for a particular configuration); this acceptance information is then delivered to the testing manager, who keeps track of the accepted tests and the configurations they were run with. This method ensures that the testing manager is able to tell whether a particular configuration has been tested and, if not, what kind of tests remain to be made. This signing process applies only to system testing (testing the functionality of the system), due to the unnecessary overhead it would add to unit or integration testing.

16 References

[IEEE 829]
IEEE Standard for Software Test Documentation, Std 829-1983
Terminology specifications
Project requirements specification
SchemeStation Functional Specification
SchemeStation Technical Specification
Integration Testing Plan
System test specification
Acceptance testing specification
Heap Module Specification
Virtual machine layout and interface definition
Scheduler definition
Addressing module specification
Messaging System Specification
Networking Specification
External Agent Interface Specification
Compiler technical specification
Heap Unit Testing Plan
VM Unit Testing
Scheduler Unit Testing
Addressing System Unit Testing Plan
Messaging System Unit Testing Plan
Networking Testing Plan
External Agent Interface Testing Plan
Unit testing of the SchemeStation cross-compiler and assembler
Project coding policy


© SchemeStation project 1997-1998