About

Project Details

Fairness is defined as treating everyone equally or equitably, based on people's performance or needs (Leventhal 1980, as cited in Lee 2018).

“We have defined individual fairness in this work as a learning algorithm not making any big mistakes on any samples.”

Digital technologies and artificial intelligence permeate every bit of our existence, and their uses regulate several aspects of our lives.

Knowing that the use of such technologies will only increase in the coming years, a team of researchers at Oregon State University developed a project that aims to tackle the issue of the responsible development of artificial intelligence, specifically with regard to fairness.

The premise of the project is that there is always a trade-off between utility (accuracy) and fairness, and that individual fairness, understood here as an accurate prediction for every person in a dataset, should not be the end goal, given the impossibility of achieving perfect classifiers. In other words, the expectation that the learning algorithm will predict all the data correctly has to be abandoned once it is shown to be impossible.
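To make that per-sample reading of individual fairness concrete, here is a minimal sketch in Python (the choice of loss and the `epsilon` threshold for what counts as a "big mistake" are illustrative assumptions, not the project's actual criterion):

```python
import numpy as np

def individually_fair(y_true, proba, epsilon=0.2):
    """Check the quoted notion of individual fairness: the model makes
    no "big mistake" (per-sample loss above epsilon) on any sample.

    y_true: (n,) integer class labels.
    proba:  (n, n_classes) predicted class probabilities.
    """
    # Per-sample loss: one minus the probability assigned to the true class.
    per_sample_loss = 1.0 - proba[np.arange(len(y_true)), y_true]
    return bool(np.all(per_sample_loss <= epsilon))
```

Under this reading, a model counts as individually fair only if no sample's loss exceeds the tolerance, which is exactly why perfect individual fairness cannot be demanded of an imperfect classifier.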

Another starting point of this project is that real-life decision-making processes involving sensitive topics, such as recidivism and loan offers, should not rely solely on an algorithmic decision that predicts on all the data. But if machine learning technologies could alleviate part of the human workload, that would already be a promising step.

The project innovates by having the ML algorithm fairly classify only what it is confident enough to classify, while adding the option "I don't know" for the cases it cannot classify, leaving room for human interpretation of those cases.
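As a rough illustration of that "I don't know" option (a sketch only: the confidence rule, the threshold value, and the ABSTAIN marker are our assumptions, not the project's actual selection mechanism), a classifier with a reject option can look like this:

```python
import numpy as np

ABSTAIN = -1  # stands in for the "I don't know" answer

def predict_or_abstain(proba, threshold=0.9):
    """Commit to the top class only where the model is confident enough;
    everywhere else, abstain and defer the case to a human reviewer.

    proba:     (n_samples, n_classes) predicted class probabilities.
    threshold: minimum top-class probability required to answer.
    """
    top_class = proba.argmax(axis=1)
    confident = proba.max(axis=1) >= threshold
    return np.where(confident, top_class, ABSTAIN)
```

Raising the threshold shrinks the set of cases the model answers but tends to improve its accuracy on those it does answer, which is the coverage trade-off discussed next.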

This project aims to implement algorithms that allow trade-offs between accuracy, fairness, and coverage of the input data. A hybrid, iterative approach will allow human users to tune our algorithm to achieve the fairest result.
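One way to see the accuracy/coverage side of that trade-off is to sweep the confidence threshold from the sketch above and record both quantities. This is again a hedged sketch under the same assumptions; the fairness and training-time dimensions are omitted for brevity:

```python
import numpy as np

def coverage_accuracy_curve(proba, y_true, thresholds):
    """For each confidence threshold, report coverage (the fraction of
    samples the model answers) and accuracy on the answered samples;
    a per-group fairness gap could be swept over the same grid."""
    top_class = proba.argmax(axis=1)
    confidence = proba.max(axis=1)
    curve = []
    for t in thresholds:
        answered = confidence >= t
        coverage = answered.mean()
        accuracy = (top_class[answered] == y_true[answered]).mean() if answered.any() else float("nan")
        curve.append((t, coverage, accuracy))
    return curve
```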

About the Interface

In this visualization interface, you'll find displays of data and classifications for individual examples, intermediate results, and population-level statistics, offering real-time feedback and updated results as users interact with the system.

This interactive visual interface will present trade-off scenarios among the target objectives (prediction accuracy, prediction fairness, case coverage, and algorithm training time), in which users change parameters and thresholds, allowing the ML model to adapt to specific scenarios. In our iterative platform, users will load their dataset and adjust each of these parameters through a sliding bar.
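As one hedged illustration of how such sliding bars might be wired up (ipywidgets is our stand-in toolkit here, and the two parameters shown are examples; the actual interface may expose different controls and be built on a different stack):

```python
# A rough sketch of the sliding-bar idea in a Jupyter notebook using
# ipywidgets; the parameter names and ranges are illustrative
# assumptions, not the interface's actual controls.
from ipywidgets import FloatSlider, interact

def update_view(confidence_threshold=0.9, fairness_weight=0.5):
    # Placeholder: the real interface would re-run the model here and
    # refresh the individual-level and population-level displays.
    print(f"threshold={confidence_threshold:.2f}, "
          f"fairness weight={fairness_weight:.2f}")

interact(update_view,
         confidence_threshold=FloatSlider(min=0.5, max=1.0, step=0.01, value=0.9),
         fairness_weight=FloatSlider(min=0.0, max=1.0, step=0.05, value=0.5))
```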
