Fault coverage

Fault coverage refers to the percentage of faults of a given type that can be detected during the test of an engineered system. High fault coverage is particularly valuable during manufacturing test, and techniques such as design for test (DFT) and automatic test pattern generation are used to increase it.

In electronics, for example, stuck-at fault coverage is measured by sticking each pin of the hardware model at logic '0' and logic '1', respectively, and running the test vectors. If at least one of the outputs differs from the expected value, the fault is said to be detected. Conceptually, the total number of simulation runs is twice the number of pins (each pin is stuck in one of two ways, and both faults should be detected). However, many optimizations can reduce the needed computation. In particular, many non-interacting faults can often be simulated in one run, and each simulation can be terminated as soon as a fault is detected.
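
The following is a minimal sketch of this procedure, written in Python for illustration. The toy circuit, its net names, the exhaustive test vectors, and the fault list are all assumptions made for the example; a production fault simulator works on a gate-level netlist and applies the optimizations mentioned above.

# Minimal stuck-at fault simulation sketch. The circuit (out = (a AND b)
# OR (NOT c)), the net names, and the exhaustive test vectors are
# hypothetical examples, not taken from the article.

from itertools import product

def circuit(inputs, fault=None):
    # Evaluate the toy circuit. `fault` is an optional (net, stuck_value)
    # pair; whenever a faulted net is driven, its value is overridden.
    def force(net, value):
        return fault[1] if fault and fault[0] == net else value
    a = force("a", inputs["a"])
    b = force("b", inputs["b"])
    c = force("c", inputs["c"])
    n1 = force("n1", a & b)        # internal net: a AND b
    n2 = force("n2", 1 - c)        # internal net: NOT c
    return force("out", n1 | n2)   # primary output

nets = ["a", "b", "c", "n1", "n2", "out"]
test_vectors = [dict(zip("abc", bits)) for bits in product((0, 1), repeat=3)]
fault_list = [(net, v) for net in nets for v in (0, 1)]  # stuck-at-0 and stuck-at-1

detected = set()
for fault in fault_list:
    for vec in test_vectors:
        if circuit(vec) != circuit(vec, fault):  # output differs from the fault-free run
            detected.add(fault)
            break  # stop simulating this fault as soon as it is detected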

A fault coverage test passes when at least a specified percentage of all possible faults can be detected. If it does not pass, at least three options are possible. First, the designer can augment or otherwise improve the vector set, perhaps by using a more effective automatic test pattern generation tool. Second, the circuit may be redesigned for better fault detectability (improved controllability and observability). Third, the designer may simply accept the lower coverage.
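
Continuing the sketch above, the pass/fail decision reduces to comparing the coverage percentage against a chosen threshold; the 95% figure below is an arbitrary example, not a requirement stated here.

# Fault coverage as a percentage, checked against a hypothetical 95% pass
# threshold (values continue from the sketch above).
coverage = 100.0 * len(detected) / len(fault_list)
required = 95.0
print(f"stuck-at fault coverage: {coverage:.1f}%")
if coverage >= required:
    print("fault coverage test passes")
else:
    print("fault coverage test fails: improve the vectors, redesign, or accept the lower coverage")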

Test coverage (computing)

The term test coverage, used in the context of programming and software engineering, refers to measuring how much of a software program has been exercised by tests. Coverage is a means of determining the rigour with which the question underlying the test has been answered. There are many kinds of test coverage:

  • code coverage,
  • feature coverage,
  • scenario coverage,
  • screen item coverage,
  • model coverage.

Each of these coverage types assumes that some kind of baseline exists which defines the system under test. The number of types of test coverage therefore varies with the number of ways of defining the system.

For example, in code coverage:

  • has a particular statement ever been executed?
  • how many times has a statement been executed?
  • have all the statements in a program been executed at least once?
  • have all the decision points in the code been exercised such that every decision path has been taken?
  • has the last optimization reduced the instruction path length significantly?
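
The first two of these questions can be answered by recording which statements execute while a test runs. The following minimal sketch uses Python's built-in sys.settrace for illustration; the function under test (clamp) and the single test input are hypothetical, and real projects would normally use a dedicated tool such as coverage.py.

# Minimal statement-coverage sketch using sys.settrace. clamp() and the
# test input below are hypothetical examples.

import sys

def clamp(x, lo, hi):  # function under test (illustrative)
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

executed = set()

def tracer(frame, event, arg):
    # Record every line executed inside clamp(); ignore all other frames.
    if event == "line" and frame.f_code is clamp.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
clamp(-5, 0, 10)    # a single test input: only the "x < lo" path runs
sys.settrace(None)

first = clamp.__code__.co_firstlineno
body = set(range(first + 1, first + 6))  # the five body lines of clamp()
coverage = 100.0 * len(executed & body) / len(body)
print(f"statement coverage of clamp(): {coverage:.0f}%")  # 40% for this one test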
