QED Working Paper No. 1510

Predictive AI is increasingly used to guide decisions about agents. I show that even a bias-neutral predictive AI can amplify exogenous (human) bias in settings where the AI delivers a cost-adjusted precision gain over unbiased predictions and final judgments are made by biased human evaluators. Absent perfect and instantaneous belief updating, the expected victims of bias become less likely to be saved by randomness as predictions grow more precise. If this effect dominates, aggregate discrimination rises. Ignoring this mechanism may lead to AI being unduly blamed for creating bias.
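The "saved by randomness" mechanism can be illustrated with a minimal Monte Carlo sketch (not from the paper; the threshold `t`, bias penalty `b`, and noise scale `sigma` are illustrative assumptions): an unbiased predictor estimates each agent's quality with noise, a biased evaluator raises the approval bar for one group, and the approval-rate gap between equally qualified groups is compared under noisy versus precise predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
t, b = 0.0, 0.5  # approval threshold; evaluator's bias penalty against group B (assumed values)

def approval_gap(sigma):
    """Approval-rate gap between equally qualified group-A and group-B agents."""
    s = rng.standard_normal(n)                  # true quality, same distribution for both groups
    qualified = s > t                           # agents who merit approval
    pred = s + sigma * rng.standard_normal(n)   # bias-neutral AI prediction with noise scale sigma
    appr_a = (pred > t)[qualified].mean()       # group A judged on the prediction alone
    appr_b = (pred > t + b)[qualified].mean()   # biased evaluator raises the bar for group B
    return appr_a - appr_b

gap_noisy = approval_gap(1.0)     # imprecise predictions: noise sometimes lifts victims over the bar
gap_precise = approval_gap(0.2)   # precise predictions: fewer victims saved by randomness
```

Under these assumptions `gap_precise` exceeds `gap_noisy`: with a more precise (yet still unbiased) predictor, qualified group-B agents near the margin are rarely pushed past the biased threshold by noise, so aggregate discrimination widens even though the AI itself adds no bias.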

Keywords
artificial intelligence
AI
algorithm
human-machine interactions
discrimination
bias
algorithmic bias
financial institutions