Accounting for outcome and process measures in dynamic decision-making tasks through model calibration

Varun Dutt, Cleotilde Gonzalez

Abstract


Computational models of learning, and the theories they represent, are often validated by calibrating them to human data on decision outcomes. However, only a few models explain the process by which these decision outcomes are reached. We argue that models of learning should reflect the process through which decision outcomes are reached, and that validating a model on the process is likely to help explain both the process and the decision outcome. To demonstrate the proposed validation, we use a large dataset from the Technion Prediction Tournament and an existing Instance-based Learning model. We present two ways of calibrating the model’s parameters to human data: on an outcome measure and on a process measure. In agreement with our expectations, we find that calibrating the model on the process measure explains both the process and the outcome measures better than calibrating the model on the outcome measure. These results hold when the model is generalized to a different dataset. We discuss implications for explaining the process and the decision outcomes in computational models of learning.
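The calibration idea described in the abstract can be sketched with a toy example. The snippet below is a minimal illustration, not the authors' Instance-based Learning model: it fits a single learning-rate parameter of a simple reinforcement learner by minimizing squared deviation from a human value on either an outcome measure (overall proportion of risky choices) or a process measure (trial-by-trial alternation rate). The learner, the measures' target values, and the parameter grid are all assumptions made for illustration.

```python
import random

def simulate_choices(learning_rate, n_trials=100, seed=0):
    """Toy learner choosing between a safe option (payoff 0) and a
    risky option (payoff +1 or -1 with equal probability).
    `learning_rate` controls how strongly the last payoff sways the
    propensity to choose risky again. Purely illustrative."""
    rng = random.Random(seed)
    p_risky = 0.5
    choices = []
    for _ in range(n_trials):
        risky = rng.random() < p_risky
        choices.append(risky)
        if risky:
            payoff = 1 if rng.random() < 0.5 else -1
            # Nudge the risky-choice propensity toward 1 after a gain,
            # toward 0 after a loss.
            target = 1.0 if payoff > 0 else 0.0
            p_risky += learning_rate * (target - p_risky)
    return choices

def r_rate(choices):
    """Outcome measure: overall proportion of risky choices."""
    return sum(choices) / len(choices)

def a_rate(choices):
    """Process measure: proportion of trials on which the choice
    switched relative to the previous trial."""
    switches = sum(a != b for a, b in zip(choices, choices[1:]))
    return switches / (len(choices) - 1)

def calibrate(measure_fn, human_value, grid):
    """Pick the parameter on `grid` that minimizes the squared
    deviation between the model's measure and the human value."""
    return min(grid,
               key=lambda lr: (measure_fn(simulate_choices(lr)) - human_value) ** 2)
```

The paper's contrast then amounts to comparing the parameter returned by `calibrate(r_rate, ...)` (outcome calibration) with the one returned by `calibrate(a_rate, ...)` (process calibration), and checking which fitted model reproduces both measures better.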


Supplementary files

Supplementary files are available here.


Keywords


Outcome and process measures, Computational models of learning, Instance-based learning, Dynamic decisions, Binary choice, Calibration

Full Text:

PDF



DOI: https://doi.org/10.11588/jddm.2015.1.17663

URN (PDF): http://nbn-resolving.de/urn:nbn:de:bsz:16-jddm-176634



Copyright (c) 2015 Journal of Dynamic Decision Making