Whereas experts may predict less accurately than models, and only slightly more accurately than novices, they seem to have better self-insight about the accuracy of their predictions. Such self-insight is called “calibration.” Most people are poorly calibrated, offering erroneous reports of the quality of their predictions, and these reports systematically err in the direction of overconfidence: When they say a class of events is 80% likely, those events occur less than 80% of the time.

There is some evidence that experts are less overconfident than novices. For instance, Levenberg had subjects look at “kinetic family drawings” to detect whether the children who drew them were normal. The results were, typically, a small victory for training: Psychologists and secretaries got 66% and 61% right, respectively (a coin flip would get half right). On those cases about which subjects were “positively certain,” the psychologists and secretaries got 76% and 59% right, respectively. The psychologists were better calibrated than the novices: they used the phrase “positively certain” more cautiously (and appropriately), but they were still overconfident.
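The overconfidence pattern described above can be made concrete with a small numerical sketch. The following Python fragment (using invented data, not figures from the studies cited) groups judgments by their stated confidence level and compares each level with the observed hit rate; a hit rate below the stated confidence is overconfidence.

```python
from collections import defaultdict

def calibration_table(predictions):
    """Given (stated_confidence, outcome) pairs, where outcome is
    1 if the predicted event occurred and 0 otherwise, return a
    mapping from each stated confidence level to the observed
    proportion correct at that level."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[confidence].append(correct)
    return {conf: sum(v) / len(v) for conf, v in buckets.items()}

# Illustrative judge: says "80%" on ten occasions but is right
# only six times -> hit rate 0.6 < 0.8, i.e. overconfident.
preds = [(0.8, outcome) for outcome in [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]]
table = calibration_table(preds)
print(table[0.8])  # 0.6
```

Perfect calibration would mean the observed proportion equals the stated confidence at every level; plotting one against the other gives the familiar calibration curve.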
The process-performance paradox in expert judgment 203
Better calibration of experts has also been found in some other studies. Expert calibration is better than novice calibration in bridge (Keren, in press), but not in blackjack. Doctors’ judgments of pneumonia and skull fracture are badly calibrated. Weather forecasters are extremely well calibrated. Experiments with novices showed that training improved calibration, reducing extreme overconfidence in estimating probabilities and numerical quantities.