Reliability of reinforcement learning models in psychopathology
Reinforcement learning and similar computational models are increasingly used to describe disrupted processes in psychopathology. These models promise precise, falsifiable insights that bridge levels of analysis. However, whether such models actually measure stable constructs in humans, including in clinical populations, remains unclear. In addition, existing work suggests that methodological choices strongly influence psychometric properties such as the validity of reinforcement learning models, but how these choices relate to reliability has not been investigated. In the current study, we investigated the reliability (within and across sessions) of a reinforcement learning task (the Daw ‘two-step’ task) commonly used in psychopathology research, particularly in studies of compulsivity. Participants with clinically elevated compulsive symptoms (n = 38) completed the task twice, one week apart. The reliability of two constructs, (1) model-based versus model-free learning and (2) value updating, was measured across different modeling approaches, model parameterizations, and types of data cleaning. Reliability values ranged from nonexistent to excellent. The type of modeling approach had the largest effect on reliability, with hierarchical Bayesian approaches generally yielding acceptable to excellent reliability. Implications for interpreting results from computational models in psychopathology and suggestions for maximizing the reliability of computational modeling studies will be discussed.
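For readers unfamiliar with the two constructs named above, the following is a minimal illustrative sketch of the kind of computations typically fit to two-step task data: a model-free (Rescorla-Wagner style) value update governed by a learning rate, and a weighted mixture of model-based and model-free values whose weight indexes the degree of model-based control. The function names and the simplified two-state structure here are assumptions for illustration, not the study's exact model specification.

```python
import numpy as np

def model_free_update(q, state, action, reward, alpha):
    """Move the Q-value for (state, action) toward the observed reward.

    alpha is the learning rate -- the 'value updating' construct whose
    reliability is examined in the abstract above.
    """
    q = q.copy()
    q[state, action] += alpha * (reward - q[state, action])
    return q

def hybrid_value(q_mb, q_mf, w):
    """Weighted mixture of model-based (q_mb) and model-free (q_mf) values.

    w indexes the balance of model-based versus model-free control,
    the other construct measured in the study.
    """
    return w * q_mb + (1.0 - w) * q_mf

# Toy example: one update of a 2-state, 2-action Q table.
q = np.zeros((2, 2))
q = model_free_update(q, state=1, action=0, reward=1.0, alpha=0.3)
print(q[1, 0])  # 0.3
print(hybrid_value(q_mb=1.0, q_mf=0.0, w=0.7))  # 0.7
```

In hierarchical Bayesian fitting, parameters such as `alpha` and `w` are estimated per participant while being constrained by group-level distributions, which is one plausible reason that approach produced higher reliability in the study.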