
Software Testing and Analysis Reading Notes

2020-09-09 11:00 CST

2020-09-16 19:13 CST

Book: Software Testing and Analysis: Process, Principles and Techniques

Chapter 1: Software Test and Analysis in a Nutshell

Verification activities steer the process toward the construction of a product that satisfies the requirements by checking the quality of intermediate artifacts as well as the ultimate product.

Validation activities check the correspondence of the intermediate artifacts and the final product to users’ expectations.

  • When do verification and validation (V&V) start and complete?
    • V&V starts as soon as we decide to build a software product, or even before.
    • Feasibility study: qualities, risks, impacts, division of software, etc.
    • Architectural design divides work and separates qualities that can be verified independently in the different subsystems.
    • If the feasibility study leads to a project commitment, V&V activities commence alongside the other development activities.
    • V&V activities continue through each small or large change to the system.
  • What techniques should be applied?
    • Feasibility study
    • Requirement specifications
    • Analysis and test plan
      • Functional unit tests, test scaffolding and test oracles
        • Scaffolding: additional code needed to execute the tests
        • Oracle: checks actual results against expected outputs (a minimal sketch follows this list)
      • Integration and system tests
    • Quality Plan
      • Performance
      • Usability
      • Security
  • How to assess readiness of a product for release?
    • Products must be delivered when they meet an adequate level of functionality and quality
    • Availability and reliability: mean time between failures (MTBF)
    • It is hard to generate large, statistically valid sets of tests based on operational profiles
    • Alpha/beta testing: estimate reliability with a sample of real users (alpha testing in a controlled environment, beta testing by real users in the field)
  • How can we control the quality of successive releases?
    • Adapt to environment changes: device drivers, operating system, etc.
    • Serve new and changing user requirements.
    • Test and analyze new and modified code, re-execute existing tests, and keep extensive records.
    • Major revisions (point releases) and smaller revisions (patch level releases).
    • Each point release must undergo complete regression testing before release, while patch-level revisions may be released with only a subset of regression tests.
  • How can the development process itself be improved over the course of the current and future projects to improve products and make verification cost-effective?
    • Defining data to be collected and implementing procedures to collect it;
    • Analyzing collected data to identify important fault classes;
    • Analyzing selected fault classes to identify weaknesses in development and quality measures;
    • Adjusting the quality and development process.
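
On scaffolding and oracles: the sketch below shows the two roles with an invented unit under test (the Pricing class, its rounding rule, and the expected values are assumptions made up for illustration, not taken from the book).

```java
// Hypothetical unit under test (invented for this sketch).
final class Pricing {
    static double priceWithTax(double net, double taxRate) {
        return Math.round(net * (1.0 + taxRate) * 100.0) / 100.0;
    }
}

// Scaffolding: a small driver that builds inputs and exercises the unit in
// isolation. Oracle: the expected values and the comparison against the
// actual results.
public class PricingTestDriver {
    public static void main(String[] args) {
        check(Pricing.priceWithTax(100.00, 0.20), 120.00, "20% tax on 100.00");
        check(Pricing.priceWithTax(19.99, 0.07), 21.39, "7% tax on 19.99");
    }

    private static void check(double actual, double expected, String label) {
        if (Math.abs(actual - expected) > 1e-9) {
            System.out.println("FAIL " + label + ": expected " + expected + ", got " + actual);
        } else {
            System.out.println("PASS " + label);
        }
    }
}
```

A test framework such as JUnit packages the same pattern; the division of roles between scaffolding (driving the unit) and oracle (judging the result) stays the same.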

The quality process has three distinct goals:

  • Improving a software product;
  • Assessing the quality of the software product;
  • Improving the long-term quality and cost-effectiveness of the quality process itself.

Chapter 2: A Framework for Test and Analysis

2.1 Validation and Verification

Definitions:

  • Validation: assessing the degree to which a software system actually fulfills its requirements, in the sense of meeting the user’s real needs.
    • A system that meets its actual goals is useful.
    • A system that is consistent with its specification is dependable.
  • Verification: checking the consistency of an implementation with a specification.
    • Check a detailed design (implementation) with an overall design (specification).
    • Check source code (implementation) with a detailed design (specification).

Verification is a check of consistency between two descriptions, whereas validation compares a description against actual needs.

Dependability properties include correctness, reliability, robustness, and safety.

2.2 Degrees of Freedom

Program testing is a verification technique and is as vulnerable to undecidability as other techniques. For most programs, exhaustive testing cannot be completed in any finite amount of time.

A technique for verifying a property can be inaccurate in one of two directions:

  • Pessimistic: not guaranteed to accept a program even if the program does possess the property being analyzed;
  • Optimistic: may accept some programs that do not possess the property.

Other terminology:

  • Safe: a safe analysis has no optimistic inaccuracy; it accepts only correct programs.
  • Sound: an analysis of a program $P$ with respect to a formula $F$ is sound if the analysis returns true only when the program actually does satisfy the formula. If satisfaction of a formula $F$ is taken as an indication of correctness, then a sound analysis is the same as a safe and conservative analysis.
  • Complete: an analysis of a program $P$ with respect to a formula $F$ is complete if the analysis always returns true when the program actually does satisfy the formula. If satisfaction of a formula $F$ is taken as an indication of correctness, then a complete analysis is one that admits only optimistic inaccuracy. An analysis that is sound but incomplete is a conservative analysis.
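
Stated symbolically (a restatement of the definitions above; $P \models F$ abbreviates "program $P$ satisfies formula $F$", and $A(P, F)$ denotes the verdict of the analysis, notation introduced here for convenience):

$$\text{sound:}\quad A(P, F) = \text{true} \;\Rightarrow\; P \models F$$

$$\text{complete:}\quad P \models F \;\Rightarrow\; A(P, F) = \text{true}$$

A conservative analysis keeps the first implication and gives up the second.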

A software verification technique that errs only in the pessimistic direction is called a conservative analysis. A conservative analysis will often produce a very large number of spurious error reports in addition to a few accurate reports.
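
As a concrete illustration (an example of mine, not one from the book): Java's definite-assignment check is a conservative analysis. The program below can never read x before assigning it, yet the compiler rejects it with an error along the lines of "variable x might not have been initialized", a spurious report in the pessimistic direction.

```java
public class ConservativeDemo {
    public static void main(String[] args) {
        boolean verbose = args.length > 0;
        int x;
        if (verbose) {
            x = 42;
        }
        // Both if statements are guarded by the same condition, so x is
        // always assigned before this read. The conservative analysis does
        // not track that correlation and rejects the program at compile time.
        if (verbose) {
            System.out.println(x);
        }
    }
}
```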

A third dimension of compromise is possible: substituting a property that is more easily checked, or constraining the class of programs that can be checked.

Chapter 3: Basic Principles

Six principles that characterize various approaches and techniques for analysis and testing (A&T): sensitivity, redundancy, restriction, partition, visibility and feedback.

3.1 Sensitivity

Faults may lead to failures, but faulty software may not fail on every execution. The sensitivity principle states that it is better to fail every time than sometimes.

  • C/C++: the choice between strcpy and strncpy (strncpy truncates silently and may leave the buffer unterminated, masking a fault that strcpy would more likely expose as a visible failure)
  • Java: the fail-fast property of iterators (modifying a collection while iterating over it throws ConcurrentModificationException rather than silently producing inconsistent results; see the sketch below)
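
A minimal sketch of the fail-fast behavior, using only standard java.util collections; running it terminates with a ConcurrentModificationException:

```java
import java.util.ArrayList;
import java.util.List;

public class FailFastDemo {
    public static void main(String[] args) {
        List<String> items = new ArrayList<>(List.of("a", "b", "c"));

        // Structurally modifying the list while a for-each loop iterates over
        // it violates the iterator's contract. The fail-fast iterator detects
        // the change on its next operation and throws
        // ConcurrentModificationException instead of continuing with a
        // possibly inconsistent view of the data.
        for (String s : items) {
            if (s.equals("a")) {
                items.remove(s);
            }
        }
    }
}
```

One caveat documented by the JDK: the check is best-effort. Removing the second-to-last element, for example, simply ends the loop without an exception, which is itself a reminder of why failing every time is preferable to failing only sometimes.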

3.2 Redundancy

Redundancy is introduced to detect faults that could lead to differences between intended and actual behavior, so its most valuable form is an explicit, redundant statement of intent.

Static type checking is a classic application of the redundancy principle: the type declaration is a statement of intent that is at least partly redundant with the use of a variable in the source code. The type declaration constrains other parts of the code, so a consistency check can be applied.

  • Introduce additional ways to declare intent and automatically check for consistency.
  • Java: each checked exception that a method can throw must be declared explicitly (see the sketch below).
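
A minimal sketch covering both examples: the declared type and the throws clause are redundant statements of intent that the compiler checks against the way the code is actually used (the file name and method are invented for illustration):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class RedundancyDemo {
    // The throws clause redundantly states the intent "callers must be
    // prepared for an IOException"; the compiler checks every call site
    // against this declaration.
    static String readFirstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) {
        try {
            // The declared type String likewise constrains how the result can
            // be used elsewhere (static type checking).
            String line = readFirstLine("notes.txt");
            System.out.println(line);
        } catch (IOException e) {
            // Dropping this handler (without re-declaring the exception)
            // would be rejected at compile time: declared intent and actual
            // use are checked for consistency.
            System.err.println("could not read file: " + e.getMessage());
        }
    }
}
```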

3.3 Restriction

When there are no cheap and effective ways to check a property, one can change the problem by checking a different, more restrictive property or by limiting the check to a smaller, more restrictive class of programs.

Additional restrictions may be imposed in the form of programming standards.

  • C: restricting the use of type casts or pointer arithmetic.
  • Java: every variable must be initialized along every control path before it is used.

Other forms of restriction can apply to architectural and detailed design.

  • Serializability: the accesses made to a data structure by one process appear to the outside world as if they had occurred atomically; a set of such transactions appears to have been applied in some serial order, even if it was not.
  • Stateless component interfaces do not remember anything about previous requests and are therefore easier to test (see the sketch below).
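
A small sketch of that last restriction; both interfaces are hypothetical, invented only to contrast the two styles:

```java
// Stateful interface: the result of discount() depends on the hidden history
// of earlier recordPurchase() calls, so each test must build up and reason
// about that history.
interface StatefulPricer {
    void recordPurchase(double amount);
    double discount();
}

// Stateless interface (the restriction applied): every call is self-contained,
// so a test case is a single call with explicit inputs and a directly
// checkable output.
interface StatelessPricer {
    double discount(double totalPurchases);
}
```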

3.4 Partition

Divide and conquer.

  • At the process level, divide complex activities into sets of simple activities that can be attacked independently.
    • Unit, integration, subsystem and system testing.
  • At the technique level, many static analysis techniques first construct a model of a system and then analyze the model.
    • The analysis is divided into two subtasks: first simplify the system so that proving the desired property becomes feasible, then prove the property with respect to the simplified model (see the sketch below).
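
A toy sketch of those two subtasks, under the assumption that the system's behavior has already been abstracted into a handful of named states and transitions (all names invented for illustration); the analysis step is then a simple reachability check over the model:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ModelCheckDemo {
    public static void main(String[] args) {
        // Subtask 1 (simplification): the system abstracted into a small
        // finite-state model.
        Map<String, List<String>> transitions = Map.of(
                "IDLE", List.of("CONNECTED"),
                "CONNECTED", List.of("AUTHENTICATED", "IDLE"),
                "AUTHENTICATED", List.of("TRANSFER", "IDLE"),
                "TRANSFER", List.of("IDLE"));

        // Subtask 2 (analysis of the model): check the property "TRANSFER is
        // unreachable without passing through AUTHENTICATED" by blocking
        // AUTHENTICATED and asking whether TRANSFER is still reachable.
        Set<String> reachable = reachableFrom("IDLE", transitions, "AUTHENTICATED");
        System.out.println(reachable.contains("TRANSFER")
                ? "property violated on the model"
                : "property holds on the model");
    }

    // Breadth-first reachability over the model, skipping a blocked state.
    static Set<String> reachableFrom(String start,
                                     Map<String, List<String>> transitions,
                                     String blocked) {
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(start);
        while (!queue.isEmpty()) {
            String state = queue.remove();
            if (state.equals(blocked) || !seen.add(state)) {
                continue;
            }
            queue.addAll(transitions.getOrDefault(state, List.of()));
        }
        return seen;
    }
}
```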

3.5 Visibility

Visibility means the ability to measure progress or status against goals.

Visibility is closely related to observability, the ability to extract useful information from a software artifact.

  • Internet protocols: use simple, human-readable textual commands rather than a more compact binary encoding.
    • Small cost in performance;
    • Large payoff in observability (see the sketch below).
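
A small sketch of that trade-off with a made-up command (the command syntax and the binary layout are assumptions for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class VisibilityDemo {
    public static void main(String[] args) {
        // Textual encoding: directly readable in logs, traces, and packet
        // captures without any decoding tool.
        String textual = "STORE lang java\r\n";
        byte[] textBytes = textual.getBytes(StandardCharsets.US_ASCII);

        // Equivalent compact binary encoding (opcode plus two length-prefixed
        // fields): a few bytes smaller, but opaque to a human reading a trace.
        ByteBuffer binary = ByteBuffer.allocate(16);
        binary.put((byte) 0x01);
        binary.put((byte) 4).put("lang".getBytes(StandardCharsets.US_ASCII));
        binary.put((byte) 4).put("java".getBytes(StandardCharsets.US_ASCII));

        System.out.println("textual: " + textBytes.length + " bytes, readable");
        System.out.println("binary : " + binary.position() + " bytes, opaque");
    }
}
```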

3.6 Feedback

Feedback applies both to the process itself and to individual techniques.

The six principles described in this chapter are:

  • Sensitivity: better to fail every time than sometimes,
  • Redundancy: making intentions explicit,
  • Restriction: making the problem easier,
  • Partition: divide and conquer,
  • Visibility: making information accessible, and
  • Feedback: applying lessons from experience in process and techniques.

Chapter 4: Test and Analysis Activities Within a Software Process

4.1 The Quality Process

The quality process should be structured for completeness, timeliness and cost-effectiveness.

  • Completeness: appropriate activities are planned to detect each important class of faults.
  • Timeliness: faults are detected at a point of high leverage, which in practice almost always means that they are detected as early as possible.
  • Cost-effectiveness: subject to the constraints of completeness and timeliness, one chooses activities depending on their cost as well as their effectiveness.

4.2 Planning and Monitoring

