Computational models of learning use mathematical formalism and computer simulation to specify, test, and refine theories of how organisms learn. By translating verbal theories into precise quantitative predictions, computational models make theories testable, reveal hidden assumptions, and often generate novel predictions that guide new experiments.
Associative Models
The Rescorla-Wagner model and its descendants formalize classical conditioning as error-driven learning: associative strength changes in proportion to the discrepancy between the outcome that occurred and the outcome that was predicted. Temporal difference (TD) learning extends this idea to predictions unfolding over time: predictions about future events are updated based on the difference between successive predictions, not just the final outcome. TD learning connects animal conditioning to reinforcement learning in computer science and to dopaminergic prediction-error signals in the brain.
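As a concrete illustration, here is a minimal Python sketch of both update rules, assuming a single conditioned stimulus, an arbitrary learning rate of 0.1, and a reward delivered only at the final time step of a trial; the function names and parameter values are illustrative choices, not part of any standard library.

```python
import numpy as np

def rescorla_wagner(outcomes, alpha=0.1, lam=1.0, v0=0.0):
    """Rescorla-Wagner: associative strength V moves toward the outcome
    in proportion to the prediction error (target - V)."""
    V = v0
    history = []
    for us_present in outcomes:        # 1 = US delivered, 0 = US omitted
        target = lam if us_present else 0.0
        V += alpha * (target - V)      # delta rule: error-driven update
        history.append(V)
    return history

def td_trial(values, rewards, alpha=0.1, gamma=1.0):
    """TD(0): each time step's prediction is updated toward the reward
    received plus the prediction at the next step, so errors propagate
    backward to earlier predictors across repeated trials."""
    values = values.copy()
    for t in range(len(rewards)):
        next_v = values[t + 1] if t + 1 < len(values) else 0.0
        delta = rewards[t] + gamma * next_v - values[t]   # TD error
        values[t] += alpha * delta
    return values

acq = rescorla_wagner([1] * 20)                 # acquisition: CS followed by US
ext = rescorla_wagner([0] * 20, v0=acq[-1])     # extinction: CS presented alone

values = np.zeros(5)                            # 5 time steps within a trial
for _ in range(50):                             # reward arrives at the last step
    values = td_trial(values, [0, 0, 0, 0, 1])
print(values)                                   # prediction spreads to earlier steps
```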
Connectionist Models
Connectionist (neural network) models represent knowledge as patterns of activation across interconnected processing units, with learning occurring through changes in connection weights. The back-propagation algorithm, which adjusts weights to minimize error, enabled networks to learn complex mappings from input to output. Rumelhart and McClelland's (1986) PDP models demonstrated that connectionist networks could learn rule-like behavior (e.g., past tense formation) from examples without explicitly representing rules, sparking intense debate about the nature of cognitive representations.
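The following is a minimal NumPy sketch of error back-propagation, assuming an illustrative two-layer sigmoid network trained on the XOR mapping; the architecture, learning rate, and iteration count are arbitrary choices for demonstration, not the networks Rumelhart and McClelland studied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mapping: XOR, which a single-layer network cannot learn but a network
# with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # Forward pass: activation flows through the weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through the network,
    # and each weight changes in proportion to its contribution to the error.
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print(np.round(out, 2))   # typically approaches [0, 1, 1, 0]
```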
Bayesian Models
Bayesian models frame learning as rational inference: learners combine prior knowledge with observed data to compute posterior beliefs about the world. These models have been applied to word learning, causal learning, category learning, and motor adaptation. They naturally capture phenomena like one-shot learning (when strong priors exist), the role of ambiguity in slowing learning, and the interaction between prior knowledge and evidence. Their success suggests that human learning approximates optimal inference under uncertainty.
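A small sketch of this kind of belief updating, using a grid approximation of Bayes' rule; the priors and the single observation are invented for illustration and are not drawn from any particular study.

```python
import numpy as np

# Grid approximation of Bayesian updating about an unknown probability theta
# (e.g., how reliably a cue predicts an outcome).
theta = np.linspace(0.01, 0.99, 99)

def posterior(prior, successes, failures):
    # Bayes' rule: posterior is proportional to likelihood x prior, normalized.
    likelihood = theta ** successes * (1 - theta) ** failures
    post = likelihood * prior
    return post / post.sum()

flat_prior = np.ones_like(theta) / theta.size          # weak prior knowledge
strong_prior = theta ** 20 * (1 - theta) ** 2          # prior belief that theta is high
strong_prior /= strong_prior.sum()

# After a single positive observation, the strong prior already yields a
# confident estimate ("one-shot learning"), while the flat prior does not.
for name, prior in [("flat", flat_prior), ("strong", strong_prior)]:
    post = posterior(prior, successes=1, failures=0)
    print(name, "posterior mean:", round(float((theta * post).sum()), 3))
```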
Deep Learning and Cognitive Modeling
Deep neural networks (DNNs) have achieved remarkable performance in perception, language, and game-playing tasks that were previously thought to require human-level intelligence. While primarily engineering achievements, DNNs have also served as models of cognitive and neural processes. Comparisons between DNN representations and brain activity (using representational similarity analysis) have revealed striking correspondences in visual processing, auditory processing, and language understanding, suggesting that task-optimized networks converge on representations similar to those used by the brain.
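Below is a sketch of a representational similarity analysis of the kind described above, using random placeholder arrays in place of real DNN activations and brain measurements; with real data, a positive rank correlation between the two dissimilarity matrices would indicate correspondence between model and brain representations.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Placeholder data: 'model_features' stands in for DNN layer activations to a
# set of stimuli, 'brain_responses' for voxel (or sensor) patterns to the same
# stimuli. Both are random here, so the expected correlation is near zero.
rng = np.random.default_rng(0)
n_stimuli, n_units, n_voxels = 40, 256, 100
model_features = rng.normal(size=(n_stimuli, n_units))
brain_responses = rng.normal(size=(n_stimuli, n_voxels))

# Build each representational dissimilarity matrix (RDM) as pairwise
# correlation distances between stimulus patterns, then compare the two RDMs
# with a rank correlation.
model_rdm = pdist(model_features, metric="correlation")
brain_rdm = pdist(brain_responses, metric="correlation")
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```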