Vector researcher develops fairness model that accounts for individual preferences

August 10, 2020

By Ian Gormely

From music discovery to finance, we interact with AI every day, often without even knowing it. Yet as algorithms are considered for use in societal decision-making, questions of fairness become critical.

Definitions of what is fair vary, but concerns often boil down to a question of bias: are the data that algorithms are trained on representative of the population they serve? “Machine learning algorithms are fundamentally data-driven,” says Vector researcher Safwan Hossain, who is supervised by Vector Faculty Affiliate Nisarg Shah. “If there is bias in the data, that bias could very well carry forward to bias in the model.”

Many machine learning models are built on a yes/no binary. In the case of bank loans, for example, a model decides whether or not someone should be granted a loan, and a fair model should not favour one group over another.
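One common way to make that group-level concern concrete is demographic parity, which compares approval rates across groups. The sketch below is illustrative only, not from the paper; the decisions, group labels, and resulting gap are all invented for the example.

```python
import numpy as np

# Hypothetical loan decisions (1 = approved, 0 = denied) and group labels;
# every value here is invented for illustration.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Demographic parity compares approval rates across groups: a large gap
# suggests the model favours one group over the other.
rate_a = decisions[groups == "A"].mean()
rate_b = decisions[groups == "B"].mean()
print(f"Group A approval rate: {rate_a:.2f}")     # 0.75
print(f"Group B approval rate: {rate_b:.2f}")     # 0.25
print(f"Parity gap: {abs(rate_a - rate_b):.2f}")  # 0.50
```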

But fairness can also be in the eye of the beholder. Hossain notes that people tend to value goods differently from one another. Yet most models fail to account for individual preference or more complex, non-binary settings, a gap addressed in the new paper “Designing Fairly Fair Classifiers Via Economic Fairness Notions,” co-authored by Hossain, Andjela Mladenovic, and Shah. Someone might be granted a loan, but getting the wrong loan (say, a five-year variable-rate mortgage when they wanted a 10-year fixed-rate one) can be as unfair as not getting one at all.

In building their model, Hossain, who works at the intersection of economics and computer science, and his co-authors took two well-studied economic notions of fairness, envy-freeness and equitability, which compare how much different people value the outcomes they and others receive, and adapted them to a machine learning setting.
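As a rough illustration of those two notions (a minimal sketch over invented utilities, not the paper’s formal classifier model): envy-freeness asks that no one prefers someone else’s outcome to their own, while equitability asks that everyone derives equal utility from their own outcome.

```python
import numpy as np

# Hypothetical utility matrix: utility[i][j] is how much person i values
# the outcome assigned to person j (e.g., a particular loan product).
# All numbers are made up for the example.
utility = np.array([
    [5.0, 3.0, 2.0],
    [4.0, 4.0, 1.0],
    [2.0, 2.0, 3.0],
])

def is_envy_free(u):
    # Envy-freeness: each person values their own outcome (the diagonal)
    # at least as much as anyone else's outcome (the rest of their row).
    own = np.diag(u)
    return bool(np.all(own[:, None] >= u))

def is_equitable(u, tol=1e-9):
    # Equitability: everyone derives the same utility from their own outcome.
    own = np.diag(u)
    return bool(np.all(np.abs(own - own[0]) <= tol))

print(is_envy_free(utility))  # True: no off-diagonal value exceeds its row's diagonal
print(is_equitable(utility))  # False: own utilities are 5, 4, and 3
```

In the paper’s machine learning setting, the comparison would be over model outputs rather than allocated goods, but the same row-versus-diagonal structure of the check applies.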

In doing so, they were able to build a generalizable fairness model that encompasses a number of existing fairness concepts, which will allow it to be deployed in new settings, such as targeted advertising or end-of-life care, with new data. Hossain is already working on a follow-up paper that applies the work to the health sector, where questions of individual preference become even more important for personalized care. “People know what they want,” he says, “and people tend to believe that something is fair if they are happy with it.”
