Testability, a property applying to an empirical hypothesis, involves two components:

  1. The logical property that is variously described as contingency, defeasibility, or falsifiability, which means that counterexamples to the hypothesis are logically possible.
  2. The practical feasibility of observing a reproducible series of such counterexamples if they do exist.

In short, a hypothesis is testable if there is a realistic prospect of deciding whether it is true or false of actual experience. The ability to decide whether a theory is supported or falsified by the data of actual experience rests on this property of its constituent hypotheses. When hypotheses are tested, initial results may also turn out to be inconclusive.


Further reading

  • Popper, Karl (2008). The Logic of Scientific Discovery (Reprint ed.). London: Routledge. ISBN 0415278449.
Active record pattern

In software engineering, the active record pattern is an architectural pattern found in software that stores in-memory object data in relational databases. It was named by Martin Fowler in his 2003 book Patterns of Enterprise Application Architecture. The interface of an object conforming to this pattern would include functions such as Insert, Update, and Delete, plus properties that correspond more or less directly to the columns in the underlying database table.

The active record pattern is an approach to accessing data in a database. A database table or view is wrapped into a class. Thus, an object instance is tied to a single row in the table. After creation of an object, a new row is added to the table upon save. Any object loaded gets its information from the database. When an object is updated, the corresponding row in the table is also updated. The wrapper class implements accessor methods or properties for each column in the table or view.

This pattern is commonly used by object persistence tools and in object-relational mapping (ORM). Typically, foreign key relationships will be exposed as an object instance of the appropriate type via a property.
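As a minimal sketch of the pattern (using Python's built-in sqlite3 module; the people table, its columns, and the Person class are hypothetical, chosen for illustration), one instance wraps one row:

```python
import sqlite3

# In-memory database standing in for a real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

class Person:
    """Active record: one instance is tied to one row of the 'people' table."""

    def __init__(self, name, email, id=None):
        self.id, self.name, self.email = id, name, email

    def insert(self):
        cur = conn.execute(
            "INSERT INTO people (name, email) VALUES (?, ?)",
            (self.name, self.email),
        )
        self.id = cur.lastrowid  # instance is now tied to this new row

    def update(self):
        conn.execute(
            "UPDATE people SET name = ?, email = ? WHERE id = ?",
            (self.name, self.email, self.id),
        )

    def delete(self):
        conn.execute("DELETE FROM people WHERE id = ?", (self.id,))

    @classmethod
    def find(cls, id):
        row = conn.execute(
            "SELECT id, name, email FROM people WHERE id = ?", (id,)
        ).fetchone()
        return cls(id=row[0], name=row[1], email=row[2]) if row else None

p = Person("Ada", "ada@example.org")
p.insert()               # saving creates the row
p.name = "Ada Lovelace"
p.update()               # updating the object updates the row
print(Person.find(p.id).name)  # prints "Ada Lovelace"
```

The accessor properties here are plain attributes; a fuller implementation would also expose foreign-key relationships as object-valued properties.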


In reliability theory and reliability engineering, the term availability has the following meanings:

The degree to which a system, subsystem or equipment is in a specified operable and committable state at the start of a mission, when the mission is called for at an unknown, i.e. a random, time. Simply put, availability is the proportion of time a system is in a functioning condition. This is often described as a mission capable rate. Mathematically, this is expressed as 100% minus unavailability.

The ratio of (a) the total time a functional unit is capable of being used during a given interval to (b) the length of the interval. For example, a unit that is capable of being used 100 hours in a week (168 hours) would have an availability of 100/168. Typical availability values are specified in decimal form (such as 0.9998). In high-availability applications, a metric known as "nines", corresponding to the number of nines following the decimal point, is used. With this convention, "five nines" equals 0.99999 (or 99.999%) availability.


DFT may refer to:

Discrete Fourier transform

Decision field theory

Density functional theory

Demand flow technology

Department for Transport in the United Kingdom

Design for testing (design for testability)

Deaerating feed tank

Digital Film Technology, maker of the Spirit DataCine

DuPont Fabros Technology, a US data center company with NYSE ticker symbol DFT

Design for testing

Design for testing or design for testability (DFT) consists of IC design techniques that add testability features to a hardware product design. The added features make it easier to develop and apply manufacturing tests to the designed hardware. The purpose of manufacturing tests is to validate that the product hardware contains no manufacturing defects that could adversely affect the product's correct functioning.

Tests are applied at several steps in the hardware manufacturing flow and, for certain products, may also be used for hardware maintenance in the customer's environment. The tests are generally driven by test programs that execute using automatic test equipment (ATE) or, in the case of system maintenance, inside the assembled system itself. In addition to finding and indicating the presence of defects (i.e., the test fails), tests may be able to log diagnostic information about the nature of the encountered test fails. The diagnostic information can be used to locate the source of the failure.

In other words, the responses of a known-good circuit to a set of test vectors (patterns) are compared with the responses of the device under test (DUT) to the same patterns. If the responses match, the circuit is good; otherwise, the circuit was not manufactured as intended.
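This golden-response comparison can be sketched as follows; the circuit functions and test vectors here are hypothetical stand-ins for a reference model and a fabricated device:

```python
def golden_xor(a, b):
    """Reference ("golden") model of the intended circuit."""
    return a ^ b

def dut_response(a, b):
    """Stand-in for measuring the fabricated part; a defect-free part matches the model."""
    return a ^ b

# Exhaustive test vectors for a two-input gate.
test_vectors = [(0, 0), (0, 1), (1, 0), (1, 1)]

def passes(dut, golden, vectors):
    """The DUT passes when its response matches the golden response on every pattern."""
    return all(dut(*v) == golden(*v) for v in vectors)

print(passes(dut_response, golden_xor, test_vectors))  # True: DUT is good
```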

DFT plays an important role in the development of test programs and as an interface for test application and diagnostics. Automatic test pattern generation, or ATPG, is much easier if appropriate DFT rules and suggestions have been implemented.

Experimental psychology

Experimental psychology refers to work done by those who apply experimental methods to psychological study and the processes that underlie it. Experimental psychologists employ human participants and animal subjects to study a great many topics, including (among others) sensation and perception, memory, cognition, learning, motivation, emotion, developmental processes, social psychology, and the neural substrates of all of these.


FURPS is an acronym representing a model for classifying software quality attributes (functional and non-functional requirements):

Functionality - Capability (Size & Generality of Feature Set), Reusability (Compatibility, Interoperability, Portability), Security (Safety & Exploitability)

Usability (UX) - Human Factors, Aesthetics, Consistency, Documentation, Responsiveness

Reliability - Availability (Failure Frequency (Robustness/Durability/Resilience), Failure Extent & Time-Length (Recoverability/Survivability)), Predictability (Stability), Accuracy (Frequency/Severity of Error)

Performance - Speed, Efficiency, Resource Consumption (power, ram, cache, etc.), Throughput, Capacity, Scalability

Supportability (Serviceability, Maintainability, Sustainability, Repair Speed) - Testability, Flexibility (Modifiability, Configurability, Adaptability, Extensibility, Modularity), Installability, Localizability

The model, developed at Hewlett-Packard, was first publicly elaborated by Grady and Caswell. FURPS+ is now widely used in the software industry. The + was added to the model later, after various campaigns at HP to extend the acronym to emphasize additional attributes.
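As an illustration only, the FURPS categories can be represented as a simple lookup for tagging requirements during review; the attribute keywords below are a hypothetical subset of the list above:

```python
# Hypothetical subset of the FURPS attributes, keyed by category.
FURPS = {
    "Functionality": ["capability", "reusability", "security"],
    "Usability": ["human factors", "aesthetics", "consistency", "documentation"],
    "Reliability": ["availability", "predictability", "accuracy"],
    "Performance": ["speed", "throughput", "capacity", "scalability"],
    "Supportability": ["testability", "flexibility", "installability", "localizability"],
}

def classify(keyword):
    """Return the FURPS category containing the given quality keyword, if any."""
    for category, attributes in FURPS.items():
        if keyword.lower() in attributes:
            return category
    return None

print(classify("testability"))  # Supportability
print(classify("speed"))        # Performance
```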

Fault grading

Fault grading is a procedure that rates testability by relating the number of fabrication defects that can in fact be detected with a test vector set under consideration to the total number of conceivable faults.

It is used for refining both the test circuitry and the test patterns iteratively, until a satisfactory fault coverage is obtained.
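The fault-coverage ratio described above can be sketched as follows; the fault identifiers are hypothetical stand-ins for a fault simulator's output:

```python
def fault_coverage(detected_faults, total_faults):
    """Detected faults over the total number of conceivable faults."""
    if not total_faults:
        raise ValueError("fault universe must be non-empty")
    return len(detected_faults & total_faults) / len(total_faults)

total = {"f1", "f2", "f3", "f4", "f5"}   # all conceivable faults
detected = {"f1", "f2", "f4"}            # detected by the current test vector set
print(fault_coverage(detected, total))   # 0.6
```

Iterating on the test patterns (and any test circuitry) aims to raise this ratio until the fault coverage is satisfactory.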

Growth of knowledge

Growth of knowledge is a term coined by Karl Popper in his work The Logic of Scientific Discovery to denote what he regarded as the main problem of methodology and the philosophy of science: to explain and promote the further growth of scientific knowledge. To this end, Popper advocated his theory of falsifiability, testability and testing.

List of system quality attributes

Within systems engineering, quality attributes are realized non-functional requirements used to evaluate the performance of a system. These are sometimes named "ilities" after the suffix many of the words share. They are usually Architecturally Significant Requirements that require architects' attention.

Martin Woodward

Dr Martin R. Woodward (28 May 1948 – 27 October 2006) was a British computer scientist who made leading contributions in the field of software testing. Woodward was an academic in the Department of Computer Science at the University of Liverpool in England. For 13 years, until shortly before his death, he was the Chief Editor of the journal Software Testing, Verification & Reliability (STVR), a major international journal in the field. Woodward undertook software testing research in areas such as mutation testing, maturity models and testability.

Non-functional requirement

In systems engineering and requirements engineering, a non-functional requirement (NFR) is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. They are contrasted with functional requirements, which define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design; the plan for implementing non-functional requirements is detailed in the system architecture, because they are usually architecturally significant requirements.

Broadly, functional requirements define what a system is supposed to do, and non-functional requirements define how a system is supposed to be. Functional requirements usually take the form "the system shall do X": an individual action or part of the system, perhaps described explicitly as a mathematical function, or as a black-box description of input, output, process and control (an IPO model). In contrast, non-functional requirements take the form "the system shall be Y": an overall property of the system as a whole, or of a particular aspect, rather than a specific function. The system's overall properties commonly mark the difference between whether a development project has succeeded or failed.

Non-functional requirements are often called "quality attributes" of a system. Other terms for non-functional requirements are "qualities", "quality goals", "quality of service requirements", "constraints", "non-behavioral requirements", or "technical requirements". Informally these are sometimes called the "ilities", from attributes like stability and portability. Qualities—that is non-functional requirements—can be divided into two main categories:

Execution qualities, such as safety, security and usability, which are observable during operation (at run time).

Evolution qualities, such as testability, maintainability, extensibility and scalability, which are embodied in the static structure of the system.

Objectivity (science)

Objectivity in science is an attempt to uncover truths about the natural world by eliminating personal biases, emotions, and false beliefs. It is often linked to observation as part of the scientific method. It is thus intimately related to the aim of testability and reproducibility. To be considered objective, the results of measurement must be communicated from person to person, and then demonstrated for third parties, as an advance in a collective understanding of the world. Such demonstrable knowledge has ordinarily conferred demonstrable powers of prediction or technology.

The problem of philosophical objectivity is contrasted with that of personal subjectivity, sometimes exacerbated by the overgeneralization of a hypothesis to the whole. For example, Newton's law of universal gravitation appeared to govern the attraction between celestial bodies, but it was later superseded by the more general theory of relativity.

Reliability engineering

Reliability engineering is a sub-discipline of systems engineering that emphasizes dependability in the lifecycle management of a product. Dependability, or reliability, describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.

Reliability is theoretically defined as the probability of success, or as one minus the probability of failure; it can also be expressed as the frequency of failures or, in terms of availability, as a probability derived from reliability, testability and maintainability. Testability, maintainability and maintenance are often defined as part of "reliability engineering" in reliability programs. Reliability plays a key role in the cost-effectiveness of systems.
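As an illustration not drawn from the text above, a common special case assumes a constant failure rate, giving reliability R(t) = exp(-t/MTBF) and steady-state availability MTBF/(MTBF + MTTR); the figures below are hypothetical:

```python
import math

def reliability(t_hours, mtbf_hours):
    """Probability of surviving t hours under a constant failure rate (exponential model)."""
    return math.exp(-t_hours / mtbf_hours)

def steady_state_availability(mtbf_hours, mttr_hours):
    """Long-run fraction of time the system is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(round(reliability(100, 1000), 4))               # ~0.9048
print(round(steady_state_availability(1000, 10), 4))  # ~0.9901
```

As the surrounding text cautions, such formulas are easy to write down but the true magnitudes of their parameters are very hard to predict in practice.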

Reliability engineering deals with the estimation, prevention and management of high levels of "lifetime" engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, reliability is not (solely) achieved by mathematics and statistics. One cannot really find a root cause (needed to effectively prevent failures) by only looking at statistics. "Nearly all teaching and literature on the subject emphasize these aspects, and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods for prediction and measurement." For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massively multivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability.

Reliability engineering relates closely to safety engineering and to system safety, in that they use common methods for their analysis and may require input from each other. Reliability engineering focuses on costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims. Safety engineering normally focuses more on preserving life and nature than on cost, and therefore deals only with particularly dangerous system-failure modes. High reliability (safety factor) levels also result from good engineering and from attention to detail, and almost never from only reactive failure management (using reliability accounting and statistics).

Software testability

Software testability is the degree to which a software artifact (i.e. a software system, software module, requirements- or design document) supports testing in a given test context. If the testability of the software artifact is high, then finding faults in the system (if it has any) by means of testing is easier.

Formally, some systems are testable and some are not. This classification can be made by noticing that, for a functionality of the system under test S that takes input I, the system is testable if a computable functional predicate V exists such that V is true when S, given input I, produces a valid output, and false otherwise. This function V is known as the verification function for the system with input I.
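A minimal sketch of such a verification function, for a hypothetical system whose specification is "the output is double the input":

```python
def system_under_test(i):
    """S: the functionality being tested (hypothetical)."""
    return i * 2

def V(S, i):
    """Computable predicate: true exactly when S produces a valid output for input i."""
    return S(i) == 2 * i  # the specification: output is double the input

# Because V is computable, the system is testable on these inputs.
print(all(V(system_under_test, i) for i in range(10)))  # True
```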

Many software systems are untestable, or not immediately testable. For example, Google's reCAPTCHA, without any metadata about the images, is not a testable system. It can, however, be tested immediately if, for each image shown, a tag is stored elsewhere. Given this meta information, one can test the system.

Therefore, testability is often thought of as an extrinsic property which results from the interdependency between the software to be tested and the test goals, test methods and test resources used (i.e., the test context). Even though testability cannot be measured directly (as software size can), it should be considered an intrinsic property of a software artifact, because it is highly correlated with other key software qualities such as encapsulation, coupling, cohesion and redundancy.

The correlation of 'testability' to good design can be observed by seeing that code that has weak cohesion, tight coupling, redundancy and lack of encapsulation is difficult to test.

A lower degree of testability results in increased test effort. In extreme cases, a lack of testability may prevent the testing of parts of the software or of software requirements altogether.

To link testability with the difficulty of finding potential faults in a system (if they exist) by testing it, a relevant measure is how many test cases are needed to form a complete test suite, i.e. a test suite such that, after applying all of its test cases to the system, the collected outputs unambiguously determine whether the system is correct with respect to some specification. If this number is small, the testability is high. Based on this measure, a testability hierarchy has been proposed.


Solipsism (from Latin solus, meaning 'alone', and ipse, meaning 'self') is the philosophical idea that only one's own mind is sure to exist. As an epistemological position, solipsism holds that knowledge of anything outside one's own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind. As a metaphysical position, solipsism goes further, to the conclusion that the world and other minds do not exist. This extreme position is claimed to be irrefutable, as the solipsist believes himself to be the only true authority, all others being creations of his own mind.

Test-driven development

Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the software is improved just enough to pass the new tests. This is opposed to software development that allows software to be added that is not proven to meet requirements.

American software engineer Kent Beck, who is credited with having developed or "rediscovered" the technique, stated in 2003 that TDD encourages simple designs and inspires confidence. Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999, but has more recently generated broader interest in its own right. Programmers also apply the concept to improving and debugging legacy code developed with older techniques.
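A minimal red-green sketch of the cycle, using Python's standard unittest module (the function and test names are hypothetical): the test case is written first, then just enough code is added to make it pass:

```python
import unittest

def add(a, b):
    # Implementation written *after* the test below, and only as much
    # as the test requires.
    return a + b

class TestAdd(unittest.TestCase):
    # In TDD this test is written first; it fails ("red") until the
    # implementation above exists, then passes ("green").
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print(outcome.wasSuccessful())  # True
```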

Test effort

In software development, test effort refers to the expense of the tests still to be performed. It is related to test costs and failure costs (direct and indirect costs, and the cost of fault correction). Factors that influence test effort include: maturity of the software development process, quality and testability of the test object, test infrastructure, skills of staff members, quality goals and test strategy.

Test engineer

A test engineer is a professional who determines how to create a process that would best test a particular product in manufacturing and related disciplines in order to assure that the product meets applicable specifications. Test engineers are also responsible for determining the best way a test can be performed in order to achieve adequate test coverage. Often test engineers also serve as a liaison between manufacturing, design engineering, sales engineering and marketing communities as well.

Test engineers can have different expertise depending on which test processes they are most familiar with, ranging from PCB-level processes (such as ICT, JTAG, and AXI) to PCBA- and system-level processes (such as board functional test (BFT or FT), burn-in test, and system-level test (ST)). Some of the manufacturing processes for which a test engineer is needed are:

In-circuit test (ICT)

Stand-alone JTAG test

Automated x-ray inspection (AXI) (also known as X-ray test)

Automated optical inspection (AOI) test

Continuity or flying probe test

(Board) functional test (BFT/FT)

Burn-in test

Environmental stress screening (ESS) test

Highly Accelerated Life Test (HALT)

Highly accelerated stress screening (HASS) test

Ongoing reliability test (ORT)

System test (ST)

Final quality audit process (FQA) test

Theistic science

Theistic science, also referred to as theistic realism, is the pseudoscientific proposal that the central scientific method of requiring testability, known as methodological naturalism, should be replaced by a philosophy of science that allows occasional supernatural explanations, which are inherently untestable. Proponents propose supernatural explanations for topics raised by their theology, in particular evolution. Supporters of theistic realism or theistic science include the intelligent design creationism proponents J. P. Moreland, Stephen C. Meyer and Phillip E. Johnson. Instead of the relationship between religion and science being a dialogue, theistic science seeks to alter the basic methods of science. As Alvin Plantinga puts it, this is a "science stopper", and these concepts lack any mainstream credence.

This page is based on Wikipedia articles written by their contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.