Human decision-makers frequently override the recommendations generated by predictive algorithms, but it is unclear whether these discretionary overrides add valuable private information or reintroduce the human biases and mistakes that motivated the adoption of the algorithms in the first place. We develop new quasi-experimental tools to measure the impact of human discretion over an algorithm, even when the outcome of interest is only selectively observed, in the context of bail decisions. We find that three-quarters of the judges in our setting underperform the algorithm on average when they make a discretionary override, with most judges making override decisions that are no better than random. Yet the remaining one-quarter of judges substantially outperform the algorithm in terms of both accuracy and fairness when they make a discretionary override. We provide suggestive evidence on the behavior underlying these differences in judge performance, showing that the high-performing judges are more likely to use relevant private information and less likely to overreact to highly-salient adverse events compared to the low-performing judges.
Paper link (coming soon)
If you would like to be added to the distribution list or for further details regarding this seminar, please contact Ryan Bubb at email@example.com