Thu May 6, 2021 from 4 to 5:30 pm: Joshua Loftus (London School of Economics)
Bias in Artificial Intelligence and Data Science
The session will be followed by two parallel debrief sessions (5:30-6 pm): a general one in the same Zoom as the lecture, and one for trainees organized by NeuroPIL.
We have seen a rapid increase in the use of machine learning methods to automate decisions in areas such as healthcare, insurance, and predictive policing. But the training data in such cases may encode biases against people belonging to population subgroups based on race or other ethically or legally sensitive categories. Organizations and researchers must account for this to avoid perpetuating discriminatory practices and possibly even running afoul of civil rights law. In this talk, I will introduce algorithmic fairness with several real-world examples focusing on race in healthcare and scientific research, and summarize and classify some of the work in this area, with particular attention to the new or increased risks accompanying advances in data methodology and technology.
Suggested readings:
- Data Feminism, Catherine D'Ignazio and Lauren Klein, chapter 2 https://data-feminism.mitpress.mit.edu/pub/ei7cogfn/release/2
- Fairness and machine learning, Solon Barocas, Moritz Hardt, Arvind Narayanan, chapter 1 https://fairmlbook.org/introduction.html
- How our data encodes systematic racism, Deborah Raji https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/
- Anatomy of an AI System, Kate Crawford and Vladan Joler https://anatomyof.ai/
It will be helpful to know a few basic concepts and terms in machine learning, which can be reviewed on Wikipedia.
References for some examples:
- ProPublica article on COMPAS https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- To predict and serve?, Kristian Lum and William Isaac https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x
- Bloomberg article on Amazon same-day delivery https://www.bloomberg.com/graphics/2016-amazon-same-day/
- Dissecting racial bias in an algorithm used to manage the health of populations, Obermeyer et al. https://science.sciencemag.org/content/366/6464/447.editor-summary
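The examples above all involve comparing an algorithm's behavior across population subgroups. As a minimal illustration (not drawn from the talk or the papers cited, with invented toy data), here is a sketch of one common group-fairness check, the demographic parity difference: the gap in positive-prediction rates between two groups.

```python
# Illustrative sketch only: the function name and data are invented,
# not taken from the talk or the references above.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between groups "A" and "B".

    predictions: list of 0/1 model outputs (e.g., 1 = flagged high risk)
    groups: list of group labels ("A" or "B"), aligned with predictions
    """
    rate = {}
    for g in ("A", "B"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)  # positive rate per group
    return rate["A"] - rate["B"]

# Toy data: group A is flagged three times as often as group B.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.75 - 0.25 = 0.5
```

A value near zero means the two groups receive positive predictions at similar rates; this is only one of several competing fairness criteria, some of which cannot all be satisfied at once.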
Joshua's research interests involve high-dimensional statistics and causal inference to improve practices in data science and machine learning, particularly by reducing biases associated with social harms and threats to scientific reproducibility. This includes developing methods and software for statistical inference after model selection, and using causality to analyse the fairness and interpretability of algorithms in machine learning and artificial intelligence. Joshua earned his PhD in Statistics at Stanford University, was a Research Fellow at the Alan Turing Institute and the University of Cambridge, and then an Assistant Professor at New York University from 2017 until joining the London School of Economics in 2021.