Brattle Experts Highlight the Unintended Risks of “Black-Box” Artificial Intelligence Models in a New Law Journal Article
Published in the University of Illinois Journal of Law, Technology & Policy
Financial services firms, including lenders, are increasingly leveraging algorithmic models driven by machine learning and artificial intelligence (AI). While these models do not rely on subjective assessment processes, they are often proprietary “black-box” models, meaning their internal workings are unknown to users (and sometimes even to developers). Such models may produce biased or inconsistent outcomes and violate regulatory or legal restrictions.
In a recent University of Illinois Journal of Law, Technology & Policy article, Brattle experts present a framework for evaluating unintended discrimination in black-box AI models when race is unobserved, and highlight the complexities of conducting disparate impact analysis on such models. The authors illustrate how econometric analyses can be applied to detect and evaluate disparate impact in credit scoring, a key factor in determining who gets credit and how it is priced.
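As a general illustration of how such an analysis can proceed without reported race data (a minimal sketch, not the authors’ specific framework), the Python example below shows one common building block: weighting model outcomes by proxy-based probabilities of group membership (in the spirit of BISG-style surname/geocoding proxies) and comparing probability-weighted approval rates. All column names, sample data, and the four-fifths threshold are hypothetical assumptions for illustration only.

```python
# Illustrative sketch: disparate impact screening when race is unobserved,
# using proxy probabilities of group membership. Hypothetical columns and
# data; not drawn from the article's framework.
import pandas as pd

def approval_rate(df: pd.DataFrame, prob_col: str, approved_col: str = "approved") -> float:
    """Probability-weighted approval rate for the group proxied by prob_col."""
    weights = df[prob_col]
    return (weights * df[approved_col]).sum() / weights.sum()

def adverse_impact_ratio(df: pd.DataFrame, protected_prob: str, reference_prob: str) -> float:
    """Ratio of the proxied protected group's approval rate to the reference
    group's. Values below ~0.8 are often treated as a flag for further
    scrutiny (the 'four-fifths rule' heuristic)."""
    return approval_rate(df, protected_prob) / approval_rate(df, reference_prob)

# Hypothetical applicant data: proxy probabilities of belonging to each group
# (e.g., from a BISG-style estimate) plus the black-box model's decision.
data = pd.DataFrame({
    "p_group_a": [0.9, 0.8, 0.2, 0.1, 0.7, 0.3],  # proxied protected group
    "p_group_b": [0.1, 0.2, 0.8, 0.9, 0.3, 0.7],  # proxied reference group
    "approved":  [0,   1,   1,   1,   0,   1],    # model's lending decision
})

air = adverse_impact_ratio(data, "p_group_a", "p_group_b")
print(f"Adverse impact ratio: {air:.2f}")  # a value below 0.8 would warrant closer review
```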
The article “Evaluating Discrimination of AI and Algorithmic Lending Decisions When Race Data Are Unavailable,” authored by Brattle Principal Dr. Shastri Sandy, Associate Dr. Joe Chance, Senior Research Analyst Daniel Wang, and Keystone Strategy’s Christine Polek, is available below.
View Article