(Un)Objective Machines: A Look at Historical Bias in Machine Learning

A deep dive into biases in machine learning, with a focus on historical (or social) biases.

Humans are biased. To anyone who has had to deal with bigoted individuals, unfair bosses, or oppressive systems — in other words, all of us — this is no surprise. We should thus welcome machine learning models which can help us to make more objective decisions, especially in crucial fields like healthcare, policing, or employment, where prejudiced humans can make judgements which severely affect the lives of others… right? Well, no. Although we might be forgiven for thinking that machine learning models are objective and rational, biases can be built into models in a myriad of ways. In this blog post, we will be focusing on historical biases in machine learning (ML).

What is a Bias?

In our daily lives, when we invoke bias, we often mean “judgement based on preconceived notions or prejudices, as opposed to the impartial evaluation of facts”. Statisticians also use “bias” to describe almost anything which leads to a systematic disparity between a model’s estimates and the ‘true’ values it is trying to estimate.

ML models suffer from statistical biases because they are built on statistical methods. However, these models are also designed by humans and trained on data generated by humans, making them vulnerable to learning and perpetuating human biases as well. Thus, perhaps counterintuitively, ML models are arguably more susceptible to biases than humans, not less.

Experts disagree on exactly how to categorise algorithmic biases, but there are at least seven potential sources of harmful bias (Suresh & Guttag, 2021), each arising at a different point in the machine learning pipeline:

  1. Historical bias, which arises from the world, in the data generation phase;
  2. Representation bias, which comes about when we take samples of data from the world;
  3. Measurement bias, where the metrics we use or the data we collect might not reflect what we actually want to measure;
  4. Aggregation bias, where we apply the same approach to our whole data set, even though there are subsets which need to be treated differently;
  5. Learning bias, where the ways we have defined our models cause systematic errors;
  6. Evaluation bias, where we ‘grade’ our models’ performance on data which does not actually reflect the population we want to use the models on; and finally,
  7. Deployment bias, where the model is not used in the way the developers intended for it to be used.
Photo by Hunter Harritt on Unsplash

While all of these biases are important, and any budding data scientist should consider them, today I will be focusing on historical bias, which occurs at the first stage of the pipeline.

Historical Bias

Unlike the other types of biases, historical bias does not originate from ML processes, but from our world. Our world has historically been, and still is, peppered with prejudices, so even data which perfectly reflects the world we live in can capture these discriminatory patterns. This is where historical bias arises. Historical bias may also manifest where our world has made strides towards equality, but our data has not caught up, reflecting past inequalities instead.

Why Should We Care?

Most societies have anti-discrimination laws, which aim to protect the rights of vulnerable groups who have been historically oppressed. If we are not careful, previous acts of discrimination might be learned and perpetuated by our ML models through historical bias. With the rising prevalence of ML models in practically every area of our lives, from the mundane to the life-changing, this poses a particularly insidious threat: historically biased ML models have the potential to perpetuate inequality on a never-before-seen scale. Data scientist and mathematician Cathy O’Neil calls such models ‘weapons of math destruction’, or WMDs for short: models whose workings are a mystery, which generate harmful outcomes their victims cannot dispute, and which often penalise the poor and oppressed in our society while benefiting those who are already well off (O’Neil, 2017).

Photo by engin akyurt on Unsplash

Such WMDs are already impacting vulnerable groups worldwide. Although we might expect Amazon, a company which profits from recommending us items we have never heard of yet suddenly desperately want, to have mastered machine learning, an algorithm it used to screen CVs was found to have learned a bias against women, owing to the historically low number of women in tech. Perhaps more chillingly, predictive policing tools have also been shown to have racial biases, as have algorithms used in healthcare and even the courtroom. The mass proliferation of such tools matters because they can further entrench the already deep-rooted inequalities in our society. I would argue that these WMDs are a far greater hindrance to our collective efforts to stamp out inequality than biased humans are, for two main reasons:

Firstly, it is hard to get insight into why ML models make certain predictions. Deep learning seems to be the buzzword of the season, with complicated neural networks taking the world by storm. While these models are exciting because they can capture very complex patterns which humans struggle to model directly, they are considered black-box models: their workings are often opaque, even to their creators. Without concerted efforts to test for historical (and other) biases, it is difficult to tell whether they are inadvertently discriminating against protected groups.

Secondly, the scale of damage which might be done by a historically biased model is, in my opinion, unprecedented and overlooked. Since humans have to rest, and need time to process information effectively, the damage a single prejudiced person can do is limited. However, just one biased ML model can pass thousands of discriminatory judgements in a matter of minutes, without resting. Dangerously, many also believe that machines are more objective than humans, leading to reduced oversight over potentially rogue models. This is especially concerning to me, since with the massive success of large language models like ChatGPT, more and more people are developing an interest in integrating ML models into their workflows, potentially automating the rise of WMDs in our society, with devastating consequences.

What Can We Do About It?

While the impacts of biased models might be scary, this does not mean that we have to abandon ML models entirely. Artificial Intelligence (AI) ethics is a growing field, and researchers and activists alike are working towards solutions which eliminate, or at least reduce, the biases in our models. Notably, there has been a recent push for FAT or FATE AI (fair, accountable, transparent and ethical AI), which might help in the detection and correction of biases, among other ethical issues. While the following is not a comprehensive list, I will provide a brief overview of some ways to mitigate historical biases in models, which will hopefully help you on your own data science journey.

Statistical Solutions

Since the problem arises from disproportionate outcomes in the real world’s data, why not fix it by making our collected data more proportional? This is one statistical approach to dealing with historical bias, suggested by Suresh and Guttag (2021). Put simply, it involves collecting more data from some groups and less from others (systematic over- or under-sampling), resulting in a more balanced distribution of outcomes in our training dataset.
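
To make this concrete, here is a minimal sketch of one way such resampling could look in practice, assuming a pandas DataFrame with illustrative column names (“gender” and “hired”). It is an illustration of the general idea, not the specific procedure from the paper.

```python
# A minimal resampling sketch: oversample each (group, outcome) cell up to the
# size of the largest cell, so every group ends up with the same outcome
# distribution in the training data. Column names are purely illustrative.
import pandas as pd

def rebalance(df, group_col="gender", outcome_col="hired", random_state=0):
    cells = df.groupby([group_col, outcome_col])
    target = cells.size().max()  # size of the largest (group, outcome) cell
    resampled = [
        cell.sample(n=target, replace=True, random_state=random_state)
        for _, cell in cells
    ]
    # Concatenate and shuffle so the model does not see the data in blocks.
    return pd.concat(resampled).sample(frac=1, random_state=random_state)

# Toy example with hypothetical data:
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M"],
    "hired": [0, 1, 1, 1, 0, 1],
    "years_experience": [3, 5, 4, 6, 2, 7],
})
balanced = rebalance(df)
print(balanced.groupby(["gender", "hired"]).size())
```

Oversampling with replacement duplicates rows from underrepresented cells, while undersampling drops rows from overrepresented ones; both are simple to apply, but each comes with trade-offs in variance and lost information, so the rebalanced data should still be checked against the population you actually care about.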

Model-based Solutions

In line with the goals of FATE AI, interpretability can be built into models, making their decision-making processes more transparent. Interpretability allows data scientists to see why models make the decisions they do, providing opportunities to spot and mitigate potential instances of historical bias in their models. In the real world, this also means that victims of machine-based discrimination can challenge decisions made by previously inscrutable models, and perhaps have them reconsidered, which should in turn increase trust in our models.
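
As a small illustration of the kind of check interpretability tools enable, here is a sketch using scikit-learn’s permutation importance on synthetic data. In practice you would apply it to your own trained model and look out for features which act as proxies for protected attributes (for example, a postcode standing in for race); the data and feature names below are stand-ins.

```python
# A sketch of one interpretability check: permutation importance reveals which
# features the model actually relies on when making predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice, use your real features and labels.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print features from most to least influential.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```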

More technically, algorithms and models to address biases in ML models are also being developed. Adversarial debiasing is one interesting solution. Such models essentially consist of two parts: a predictor, which aims to predict an outcome, like hireability, and an adversary, which tries to predict protected attributes based on the predicted outcomes. Like boxers in a ring, these two components go back and forth, each trying to outperform the other, and when the adversary can no longer recover protected attributes from the predicted outcomes, the model is considered to have been debiased. Debiased models of this kind have performed comparably to models which have not been debiased, showing that we need not compromise on performance while prioritising fairness. Other algorithms which reduce bias in ML models while retaining good performance are also being developed.
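
To illustrate the idea (and only the idea), here is a minimal PyTorch sketch of the adversarial setup in the spirit of Zhang et al. (2018). The architectures, the loss weighting and the training schedule are simplified assumptions rather than a faithful reproduction of the published method.

```python
# A minimal adversarial-debiasing sketch: a predictor learns the outcome while
# an adversary tries to recover a binary protected attribute from the
# predictor's output. Shapes: x is (batch, n_features); y and z are float
# tensors of shape (batch, 1).
import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)  # logit for the outcome (e.g. "hireable")

class Adversary(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, y_logit):
        return self.net(y_logit)  # logit for the protected attribute

def train_step(predictor, adversary, opt_p, opt_a, x, y, z, alpha=1.0):
    bce = nn.BCEWithLogitsLoss()

    # 1. Adversary update: learn to recover the protected attribute z from
    #    the (detached) predictions.
    adv_loss = bce(adversary(predictor(x).detach()), z)
    opt_a.zero_grad()
    adv_loss.backward()
    opt_a.step()

    # 2. Predictor update: predict y well while making the adversary's job
    #    harder (the subtracted term rewards confusing the adversary).
    y_logit = predictor(x)
    pred_loss = bce(y_logit, y) - alpha * bce(adversary(y_logit), z)
    opt_p.zero_grad()
    pred_loss.backward()
    opt_p.step()
    return pred_loss.item(), adv_loss.item()

# Usage sketch (hypothetical data loader yielding x, y, z batches):
# predictor, adversary = Predictor(n_features=10), Adversary()
# opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
# opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
# for x, y, z in data_loader:
#     train_step(predictor, adversary, opt_p, opt_a, x, y, z)
```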

Human-based Solutions

Lastly, and perhaps most importantly, it is critical to remember that while our machines are doing the work for us, we are their creators. Data science starts and ends with us: humans who are aware of historical biases, decide to prioritise fairness, and take steps to mitigate the effects of historical biases. We should not cede power to our creations, and should remain in the loop at all stages of data analysis. To this end, I would like to add my voice to the chorus calling for the creation of transnational third-party organisations to audit ML processes and to enforce best practices. While auditing is no silver bullet, it is a good way to check whether our ML models are fair and unbiased, and to concretise our commitment to the cause. On an organisational level, I am also heartened by the calls for increased diversity in data science and ML teams, as I believe that this will help to identify and correct existing blind spots in our data analysis processes. It is also necessary for business leaders to be aware of the limits of AI, and to use it wisely instead of abusing it in the name of productivity or profit.

As data scientists, we should also take responsibility for our models, and remember the power they wield. As much as historical biases arise from the real world, I believe that ML tools also have the potential to help us correct present injustices. For example, while in the past, racist or sexist recruiters might filter out capable applicants because of their prejudices before handing the candidate list to the hiring manager, a fair ML model may be able to efficiently find capable candidates, disregarding their protected attributes, which might lead to valuable opportunities being provided to previously ignored applicants. Of course, this is not an easy task, and is itself fraught with ethical questions. However, if our tools can indeed shape the world we live in, why not make them reflect the world we want to live in, not just the world as it is?

Conclusion

Whether you are a budding data scientist, a machine learning engineer, or just someone who is interested in using ML tools, I hope this blog post has shed some light on the ways historical biases can amplify and automate inequality, with disastrous impacts. Though ML models and other AI tools have made our lives a lot easier, and are becoming inseparable from modern living, we must remember that they are not infallible, and that thorough oversight is needed to make sure that our tools stay helpful, and not harmful.

Interested in Learning More?

Here are some resources I found useful in learning more about biases and ethics in machine learning:

Books

  • Weapons of Math Destruction by Cathy O’Neil (highly recommended!)
  • Invisible Women: Data Bias in a World Designed for Men by Caroline Criado-Perez
  • Atlas of AI by Kate Crawford
  • AI Ethics by Mark Coeckelbergh
  • Data Feminism by Catherine D’Ignazio and Lauren F. Klein

References:

AI Now Institute. (2024, January 10). AI Now 2017 report. https://ainowinstitute.org/publication/ai-now-2017-report-2

Belenguer, L. (2022). AI Bias: Exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and Ethics, 2(4), 771–787. https://doi.org/10.1007/s43681-022-00138-8

Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016, July 21). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. arXiv.org. https://doi.org/10.48550/arXiv.1607.06520

Chakraborty, J., Majumder, S., & Menzies, T. (2021). Bias in machine learning software: Why? how? what to do? Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. https://doi.org/10.1145/3468264.3468537

Gutbezahl, J. (2017, June 13). 5 types of statistical biases to avoid in your analyses. Business Insights Blog. https://online.hbs.edu/blog/post/types-of-statistical-bias

Heaven, W. D. (2023a, June 21). Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/

Heaven, W. D. (2023b, June 21). Predictive policing is still racist-whatever data it uses. MIT Technology Review. https://www.technologyreview.com/2021/02/05/1017560/predictive-policing-racist-algorithmic-bias-data-crime-predpol/#:~:text=It%27s%20no%20secret%20that%20predictive,lessen%20bias%20has%20little%20effect.

Hellström, T., Dignum, V., & Bensch, S. (2020, September 20). Bias in machine learning — what is it good for? arXiv.org. https://arxiv.org/abs/2004.00686

Australian Human Rights Commission. (2020, November 24). Historical bias in AI systems. https://humanrights.gov.au/about/news/media-releases/historical-bias-ai-systems#:~:text=Historical%20bias%20arises%20when%20the,by%20women%20was%20even%20worse.

Memarian, B., & Doleck, T. (2023). Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, 100152. https://doi.org/10.1016/j.caeai.2023.100152

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Penguin Random House.

Roselli, D., Matthews, J., & Talagala, N. (2019). Managing bias in AI. Companion Proceedings of The 2019 World Wide Web Conference. https://doi.org/10.1145/3308560.3317590

Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. Equity and Access in Algorithms, Mechanisms, and Optimization. https://doi.org/10.1145/3465416.3483305

van Giffen, B., Herhausen, D., & Fahse, T. (2022). Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods. Journal of Business Research, 144, 93–106. https://doi.org/10.1016/j.jbusres.2022.01.076

Zhang, B. H., Lemoine, B., & Mitchell, M. (2018). Mitigating unwanted biases with adversarial learning. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3278721.3278779