Fairness in Machine Learning: The Principles of Governance 


By Jonathan Woods
March 21, 2022

This article is part of our Trustworthy AI series, in which we will release one article each week covering:

  • Interpretability
  • Fairness
  • Governance

In this week’s article, Vector’s Industry Innovation team looks at fairness in machine learning, breaking down ML-specific risks and principles to help non-technical stakeholders better understand, communicate about, and participate in ML model risk management.

Governance is key to the responsible use of machine learning (ML) models. One element of that governance is fairness, a complex yet crucial piece of ML model risk management.

As more decisions across industries and sectors are automated, a question arises: how can we ensure that these decisions are made fairly? It’s no simple question to answer. To start, there are many competing definitions of what ‘fair’ means. Second, even with a clear definition, biases can be hidden and subtle. And even when measures are taken to eliminate bias, it can still arise, introducing potential harm to customers, users, and the organization employing the ML system.

This primer on fairness in machine learning breaks down ML-specific risks and principles in plain language to help non-technical stakeholders better understand, communicate about, and participate in ML model risk management. For the purpose of this paper, ‘fair’ means non-discriminatory against protected groups, whether that relates to race, sex, gender, religion, age, or another similarly protected attribute.

Reinforcing or perpetuating historical biases is undesirable for several reasons. The first is simply ethical: modern societal values condemn marginalization based solely on the presence of a protected attribute. Another reason is that such bias likely contravenes most organizations’ own stated values. Yet another reason is legal: an organization may be liable for the unfair decisions its algorithms make. Further, highly publicized unfair practices – even if inadvertent – may introduce large financial and reputational risks. Finally, zooming out to consider AI adoption in general, instances of unfairness can undermine trust in AI, and that trust is key to further adoption and exploration of the value the technology can deliver.

Understand how bias in machine learning can arise

Bias in machine learning can arise in a number of ways. Some are straightforward – for instance, careless data collection – while others are more subtle and insidious. Stakeholders in ML systems and their output should be familiar with the following ways that bias can arise, so that they can recognize potential sources and ask questions to ensure they are accounted for.

Bias in historical data

Bias due to past discriminatory practices may be embedded in historical data that’s used for ML training. For an example of how this has occurred, consider “redlining,” the practice of systematically denying or overcharging for a service like a mortgage or insurance plan based on the community in which an applicant resides. As some communities are associated with members of a protected group, redlining can unfairly correlate creditworthiness with a single attribute of a person – e.g., race – when that attribute shouldn’t factor into the assessment at all. If an ML system is trained on historical data with the results of redlining embedded within, the same group of people may be discriminated against in the future.

Bias in data collection

Sampling and measurement problems in the data collection process can lead to datasets tainted by bias. Sample bias occurs when the method for collecting data results in a sample that is not representative of the population in question. For example, one can collect data by publishing a questionnaire in a magazine and requesting readers to fill it out and send it back. This method can produce a biased sample, because the subset of people willing to spend the time and energy to fill out the questionnaire may not be representative of the entire readership. Measurement bias occurs when errors in the act of sampling impact the dataset. An example of this would be poorly trained surveyors collecting the right kind of information, but including information from outside the time period of interest, and in so doing, distorting the data.
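To make sample bias concrete, the hedged sketch below simulates a hypothetical magazine readership in which dissatisfied readers are more likely to return the questionnaire. All numbers and the response model are invented for illustration; the point is that the respondent average drifts away from the true population average even when every individual answer is recorded correctly.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical population of 100,000 readers with a satisfaction score of 0-10.
population = rng.normal(loc=6.0, scale=2.0, size=100_000).clip(0, 10)

# Self-selection assumption: dissatisfied readers are more motivated to mail
# the questionnaire back, so response probability falls as satisfaction rises.
response_prob = np.clip(0.6 - 0.05 * population, 0.05, 0.95)
responded = rng.random(population.size) < response_prob

print(f"True mean satisfaction:  {population.mean():.2f}")
print(f"Mean among respondents:  {population[responded].mean():.2f}")
# The respondent mean understates true satisfaction: the sample, not the
# measurement, is biased.
```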

Bias in model design

Algorithmic bias occurs when a model produces biased results due to erroneous assumptions made by the modeler or poor implementation of the model by practitioners. For example, the exclusion of relevant financial criteria as features may cause otherwise qualified borrowers to be charged a higher interest rate or denied a loan altogether.

Bias due to feature correlation

Possible correlations between sensitive and non-sensitive features can result in bias. A feature, as defined in an earlier primer, is an individual property or variable used as an input to an ML system. Consider a model that predicts housing prices: features may include a house’s location, size, number of bedrooms, and previous sale price, among other attributes. In the context of fairness, a ‘sensitive’ feature is an attribute that identifies a protected group – e.g., race, sex, gender, and the like. Practitioners may remove sensitive features from a dataset during data preparation with the intention of reducing the risk of bias. However, that may not be enough to ensure fairness, because certain non-sensitive features may be highly correlated with sensitive ones. For instance, income may act as a proxy for gender in a profession where one gender is systematically underpaid. In such a case, gender does not need to be an explicit feature to effectively become one: categorizing people by income in that profession may effectively categorize them by gender, whether that was intended or not.
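A simple first check for proxies is to measure how strongly each remaining feature is associated with the sensitive attribute before it is dropped. The sketch below is a minimal illustration on an invented dataset; the column names (`gender`, `income`, `years_employed`) are hypothetical, and a real audit would use larger data and more robust association measures than a single correlation coefficient.

```python
import pandas as pd

# Hypothetical loan-application data (all values invented for illustration).
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "income": [48_000, 51_000, 67_000, 72_000, 46_000, 69_000, 50_000, 71_000],
    "years_employed": [5, 7, 6, 3, 4, 7, 6, 5],
})

# Encode the sensitive attribute numerically so association can be measured.
gender_code = (df["gender"] == "M").astype(int)

# Correlation of each non-sensitive feature with the sensitive attribute.
for col in ["income", "years_employed"]:
    corr = df[col].corr(gender_code)
    print(f"{col:>15}: correlation with gender = {corr:+.2f}")

# A strong correlation (here, income) flags a potential proxy: dropping the
# gender column alone would not remove gender information from the model.
```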

In a further twist, models may also be able to infer sensitive information by analyzing several seemingly non-correlated features. Because machine learning models are excellent at detecting patterns, sometimes only a few pieces of general information, taken together, can betray sensitive information about an individual. A famous study from 2000 illustrates this possibility. In the 1990s, a U.S.-based insurance company released de-identified data about hospital visits by state employees. The data included birthdate, sex, and ZIP code. A researcher showed that those three pieces of information alone were enough to uniquely identify nearly 90% of the U.S. population. The lesson is that removing sensitive information from datasets may not be sufficient to prevent its rediscovery.
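The same risk can be checked directly in a dataset by counting how many records are uniquely identified by a combination of apparently innocuous columns (often called quasi-identifiers). The sketch below assumes a hypothetical pandas DataFrame with `birthdate`, `sex`, and `zip_code` columns; the column names and values are invented, and only the counting logic matters.

```python
import pandas as pd

def fraction_unique(records: pd.DataFrame, quasi_identifiers: list[str]) -> float:
    """Fraction of rows uniquely identified by the given column combination."""
    counts = records.groupby(quasi_identifiers).size()
    return (counts == 1).sum() / len(records)

# Hypothetical "de-identified" records (all values invented for illustration).
records = pd.DataFrame({
    "birthdate": ["1961-03-02", "1961-03-02", "1974-08-19", "1985-11-30"],
    "sex": ["F", "M", "F", "F"],
    "zip_code": ["02138", "02139", "02139", "02140"],
})

share = fraction_unique(records, ["birthdate", "sex", "zip_code"])
print(f"{share:.0%} of records are unique on these three columns")
# A high share means an attacker who links this data with a public source
# (e.g., a voter roll) could put names back onto "anonymous" rows.
```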

How should organizations approach fairness in ML?

Considering the number of bias-related pitfalls practitioners can encounter, how can organizations cover the bases on fairness when using ML? While there are highly technical answers to this question, at a general level, practitioners and stakeholders should follow the principles listed below.

  1. Consider the fairness requirements for each use case on its own

Fairness considerations will differ depending on the intended use of the model. For instance, models that impact customers directly may require a more stringent approach to fairness than models used for internal, lower-stakes processes, like staffing decisions. Consider the sensitivity of the use case in question to determine its risks, the appropriate definition of fairness to apply, and the level of attention it will require.

  2. Prioritize fairness at every stage

Every part of the ML pipeline should be examined through a fairness lens. Fairness should be an ongoing concern across task definition, dataset construction, model definition, training and testing, and deployment. Fairness, input data, and model performance should all be monitored on a continuous basis; a minimal sketch of one such check appears after this list.

  3. Include diverse stakeholders

Involve diverse stakeholders and multiple perspectives in the design, interpretation, and monitoring of models to help identify potential sources of bias in data, model design, or feature selection. Some sources of bias are subtle and are most likely to be spotted when a group with varied backgrounds and experiences is on the task.

  4. Involve humans when necessary

For models employed in high-stakes use cases, be sure to include humans in the loop. Human experts should be given the ability to overrule model decisions if bias is detected or even suspected in the output.
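What continuous fairness monitoring (principle 2) looks like in practice depends on the definition of fairness chosen for the use case (principle 1). As one hedged, minimal sketch, the function below computes the demographic parity difference, i.e. the gap in positive-prediction rates between groups, over a batch of recent model decisions. The group labels, values, and alert threshold are all invented for illustration; other definitions of fairness (e.g., equalized odds) would require different checks.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap in positive-prediction rates between groups (0.0 means parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Hypothetical batch of recent binary model decisions (1 = approved).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")

# In continuous monitoring, a gap above an agreed threshold would trigger
# review, for example routing affected decisions to a human (principle 4).
ALERT_THRESHOLD = 0.10  # illustrative value; set per use case
if gap > ALERT_THRESHOLD:
    print("Fairness alert: investigate this model's recent predictions")
```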

Summing up

Fairness in machine learning can be a complicated topic. Simply determining what ‘fair’ means in any given case requires consideration of societal and industry norms, consultation with the internal technical teams that build ML pipelines and implement models, and discussion with a diverse set of stakeholders. Regardless of the complexity involved, fairness requires attention. It is key to deploying ML responsibly, and it must be an element of ML model risk governance from the idea-generation stage through to deployment and monitoring. Awareness of the key concepts relating to fairness allows stakeholders with non-technical backgrounds to participate in and contribute to this important element of the governance process.
