Signal detection theory (SDT) was developed in the 1950s by mathematicians and engineers at the University of Michigan and Bell Labs, and introduced to psychology by John Swets, Wilson Tanner, and Theodore Birdsall. It revolutionized the study of perception by providing a principled way to separate two fundamentally different aspects of detection performance: the observer's ability to discriminate signal from noise (sensitivity) and the observer's willingness to report detecting a signal (criterion, or bias).
The Basic Framework
SDT assumes that on each trial, the observer receives an internal response along a decision axis. The internal response comes from one of two probability distributions: one for noise-alone trials and one for signal-plus-noise trials. Both distributions are typically assumed to be Gaussian (normal) with equal variance. The observer sets a criterion — a threshold on the decision axis — and responds "signal present" when the internal response exceeds the criterion.
Given the hit rate (HR) and false-alarm rate (FAR), the standard equal-variance measures are:

d' = z(HR) − z(FAR) [sensitivity]
c = −0.5 × [z(HR) + z(FAR)] [criterion location]
β = f(z(HR)) / f(z(FAR)) [likelihood ratio at criterion]

where z() is the inverse normal CDF and f() is the standard normal PDF.
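In practice these quantities are computed directly from observed hit and false-alarm rates. The short Python sketch below (the function name sdt_measures and the trial counts are illustrative, not part of any standard library) recovers all three measures from raw counts:

    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        # Equal-variance Gaussian SDT measures from raw trial counts.
        # Note: rates of exactly 0 or 1 send the z-transform to infinity;
        # empirical work applies a small correction (e.g. log-linear) first.
        hr = hits / (hits + misses)                                # hit rate
        far = false_alarms / (false_alarms + correct_rejections)  # false-alarm rate
        z_hr, z_far = norm.ppf(hr), norm.ppf(far)                 # inverse normal CDF
        d_prime = z_hr - z_far                                    # sensitivity
        c = -0.5 * (z_hr + z_far)                                 # criterion location
        beta = norm.pdf(z_hr) / norm.pdf(z_far)                   # likelihood ratio at criterion
        return d_prime, c, beta

    # Example: 80 hits, 20 misses, 30 false alarms, 70 correct rejections
    print(sdt_measures(80, 20, 30, 70))   # d' ≈ 1.37, c ≈ -0.16, β ≈ 0.80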
Sensitivity and Criterion
The key insight of SDT is that these two aspects of performance, sensitivity (d') and criterion (c or β), are independent. An observer can be highly sensitive (good at distinguishing signal from noise) yet liberal (reporting "signal" even on weak evidence) or conservative (requiring strong evidence before reporting it). Changes in motivation, payoffs, or base rates shift the criterion without affecting sensitivity, while changes in stimulus intensity or attentional state affect sensitivity without necessarily changing the criterion.
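A small simulation makes this independence concrete. The sketch below assumes a synthetic observer with a true d' of 1.5 and compares two arbitrary criterion placements; both runs recover the same sensitivity even though the hit and false-alarm rates differ sharply:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    d_true, n = 1.5, 100_000

    noise = rng.normal(0.0, 1.0, n)       # noise-alone trials
    signal = rng.normal(d_true, 1.0, n)   # signal-plus-noise trials

    for criterion in (0.25, 1.25):        # liberal vs. conservative placement
        hr = (signal > criterion).mean()
        far = (noise > criterion).mean()
        d_est = norm.ppf(hr) - norm.ppf(far)
        c_est = -0.5 * (norm.ppf(hr) + norm.ppf(far))
        print(f"c={c_est:+.2f}  HR={hr:.3f}  FAR={far:.3f}  d'={d_est:.2f}")

The liberal criterion yields c ≈ −0.5 and the conservative one c ≈ +0.5, while the estimated d' stays near 1.5 in both cases.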
This separation resolved a longstanding problem in psychophysics: the classical threshold concept assumed a fixed boundary between detection and non-detection, but empirical data showed that observers' "thresholds" changed systematically with payoffs and instructions. SDT explained these changes as criterion shifts rather than true changes in sensory capability.
The receiver operating characteristic (ROC) curve plots the hit rate against the false alarm rate across all possible criterion settings. A more sensitive observer produces an ROC curve that bows further toward the upper-left corner. The area under the ROC curve (AUC) provides a criterion-free measure of sensitivity. ROC analysis has become standard not only in perception research but in medical diagnosis, weather forecasting, memory research, and machine learning.
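Under the equal-variance model, the entire ROC curve follows from d' alone, and the AUC has the closed form Φ(d'/√2). The brief sketch below (parameter values illustrative) traces the curve by sweeping the criterion and checks the numerical area against that formula:

    import numpy as np
    from scipy.stats import norm

    d_prime = 1.0
    criteria = np.linspace(-4, 6, 500)    # sweep the criterion along the decision axis
    far = norm.sf(criteria)               # P(noise > criterion), noise ~ N(0, 1)
    hr = norm.sf(criteria - d_prime)      # P(signal > criterion), signal ~ N(d', 1)

    # far decreases as the criterion rises, so reverse both arrays
    # before integrating to get the area under the ROC curve.
    auc = np.trapz(hr[::-1], far[::-1])
    print(auc, norm.cdf(d_prime / np.sqrt(2)))   # both ≈ 0.76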
Applications Beyond Perception
SDT has been applied far beyond its origins in sensory detection. In recognition memory, old items correspond to signals and new items to noise; d' measures memory strength, while the criterion measures the observer's willingness to say "old." In medical diagnosis, diseased patients are the signals and healthy patients the noise; ROC analysis reveals the trade-off between sensitivity and specificity. In eyewitness identification, SDT helps distinguish the ability to discriminate guilty from innocent suspects from the willingness to make a positive identification.
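The translation between diagnostic vocabulary and SDT measures is direct: sensitivity is the hit rate, and specificity is one minus the false-alarm rate. A short illustration with made-up test characteristics:

    from scipy.stats import norm

    # Hypothetical diagnostic test; the numbers are illustrative only.
    sensitivity = 0.90    # P(positive test | disease)  = hit rate
    specificity = 0.80    # P(negative test | healthy)  = 1 - false-alarm rate

    far = 1 - specificity
    d_prime = norm.ppf(sensitivity) - norm.ppf(far)
    c = -0.5 * (norm.ppf(sensitivity) + norm.ppf(far))
    print(d_prime, c)     # d' ≈ 2.12, c ≈ -0.22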
Extensions and Limitations
The basic equal-variance Gaussian model has been extended in numerous ways: unequal-variance models (where signal and noise distributions differ in spread), multiple-observation models, models with multiple criteria for confidence ratings, and multidimensional SDT for tasks involving multiple stimulus dimensions. These extensions have proven essential for accurate modeling of recognition memory, where the old-item distribution is typically broader than the new-item distribution.
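The standard diagnostic for unequal variance is the zROC: plotting z(HR) against z(FAR) across criteria yields a line whose slope equals σ_noise/σ_signal, so a slope of 1 indicates equal variance, while recognition-memory data typically yield slopes near 0.8. A sketch with illustrative parameters:

    import numpy as np
    from scipy.stats import norm

    # Unequal-variance model: old-item (signal) distribution broader
    # than new-item (noise). Parameter values are illustrative.
    mu_s, sigma_s = 1.5, 1.25
    criteria = np.linspace(-1.0, 3.0, 7)    # multiple criteria, as in confidence ratings

    z_far = norm.ppf(norm.sf(criteria))                      # z(FAR): noise ~ N(0, 1)
    z_hr = norm.ppf(norm.sf((criteria - mu_s) / sigma_s))    # z(HR): signal ~ N(mu_s, sigma_s)

    slope, intercept = np.polyfit(z_far, z_hr, 1)
    print(slope, 1 / sigma_s)    # zROC slope = sigma_noise / sigma_signal = 0.8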
SDT assumes that decision-making can be modeled as a threshold applied to a continuous internal representation. While this assumption is widely supported, some detection tasks may involve fundamentally discrete representations (as in high-threshold theory), and the field continues to debate whether threshold or SDT models better describe certain domains of performance.
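The two accounts make contrasting ROC predictions, which is the classic way the debate is tested: a simple high-threshold model predicts a straight-line ROC from (0, p) to (1, 1), whereas equal-variance SDT predicts a curved one. A sketch of both predictions (parameter values illustrative):

    import numpy as np
    from scipy.stats import norm

    far = np.linspace(0.001, 0.999, 200)

    # High-threshold model: detect with probability p, otherwise guess;
    # this yields a straight-line ROC from (0, p) to (1, 1).
    p = 0.5
    ht_hr = p + (1 - p) * far

    # Equal-variance SDT yields a curved ROC (d' = 1.35, illustrative).
    sdt_hr = norm.sf(norm.isf(far) - 1.35)

    # At FAR = 0.5 the straight line gives HR = 0.75 while the SDT curve
    # gives HR ≈ 0.91; this difference in curvature is what empirical
    # ROC data are used to adjudicate.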