Accounting for outcome and process measures in dynamic decision-making tasks through model calibration
Abstract
Computational models of learning, and the theories they represent, are often validated by calibrating them to human data on decision outcomes. However, only a few models explain the process by which those decision outcomes are reached. We argue that models of learning should reflect the process through which decision outcomes are reached, and that validating a model on a process measure is likely to help explain both the process and the decision outcome simultaneously. To demonstrate the proposed validation, we use a large dataset from the Technion Prediction Tournament and an existing Instance-Based Learning model. We present two ways of calibrating the model's parameters to human data: on an outcome measure and on a process measure. In agreement with our expectations, we find that calibrating the model on the process measure explains both the process and outcome measures better than calibrating it on the outcome measure. These results hold when the model is generalized to a different dataset. We discuss the implications for explaining both the process and the decision outcomes in computational models of learning.
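To make the calibration idea concrete, the sketch below shows one common way such calibration is done: a grid search over free parameters that minimizes the mean squared deviation (MSD) between a model-produced curve and the corresponding human curve for a single measure. Everything here is illustrative rather than the paper's actual implementation: the parameter names d (decay) and sigma (noise), the grids, the toy simulator, and the example measures (a per-trial choice proportion as the outcome measure, an alternation rate as the process measure) are assumptions for the sketch.

import itertools
import numpy as np

def msd(model_curve, human_curve):
    """Mean squared deviation between model and human curves for one measure."""
    model_curve = np.asarray(model_curve, dtype=float)
    human_curve = np.asarray(human_curve, dtype=float)
    return float(np.mean((model_curve - human_curve) ** 2))

def calibrate(simulate, human_curve, d_grid, sigma_grid):
    """Grid-search the (d, sigma) pair whose simulated curve best fits the
    human curve. `simulate(d, sigma)` must return the model's per-trial
    values for the chosen measure (outcome or process)."""
    best_error, best_params = float("inf"), None
    for d, sigma in itertools.product(d_grid, sigma_grid):
        error = msd(simulate(d, sigma), human_curve)
        if error < best_error:
            best_error, best_params = error, (d, sigma)
    return best_params, best_error

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Placeholder for a run of the learning model across trials (hypothetical).
    def toy_simulate(d, sigma, trials=100):
        curve = 0.5 * np.exp(-d * np.arange(trials) / trials)
        return np.clip(curve + rng.normal(0.0, sigma, trials), 0.0, 1.0)

    # Stand-in human data for one measure (e.g., an alternation-rate curve).
    human_process_curve = rng.uniform(0.1, 0.4, 100)

    params, error = calibrate(toy_simulate, human_process_curve,
                              d_grid=np.linspace(0.1, 2.0, 20),
                              sigma_grid=np.linspace(0.05, 0.5, 10))
    print(f"best (d, sigma) = {params}, MSD = {error:.4f}")

Under this framing, calibrating on the outcome measure versus the process measure amounts to passing a different human curve (and the matching simulated measure) to the same routine; generalization then means evaluating the parameters calibrated on one dataset against the held-out dataset without refitting.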
Supplementary files
Model data, plots, and model file
Description: IBLModel example for table.xls; Graphs.xls; EstimationSet; CompetitionSet
Creator (or owner) of file: Varun Dutt
Publisher: Journal of Dynamic Decision Making
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.