At the Intersection of Technology, Law, and Business
November 19, 2018 - Artificial Intelligence, European Union

Algorithms can reduce discrimination, but only with proper data

Since the advent of artificial intelligence technology, there have been countless instances of machine learning algorithms yielding discriminatory outcomes. Crime prediction tools, for instance, frequently assign disproportionately higher risk scores to ethnic minorities. This is not because of an error in the algorithm, but because the historical data used to train it are “biased”: because the police stopped and searched ethnic minorities more often, this group, by extension, also shows more convictions in the data. A common response is to remove group indicators such as race, gender, and religion from the training data, on the theory that if the algorithm cannot “see” these elements, the outcome will not be discriminatory.
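
This failure mode is easy to see on toy data. The sketch below is not taken from the op-ed; the feature names, numbers, and the choice of scikit-learn are illustrative assumptions. A classifier is trained without the group indicator, yet a correlated proxy feature still lets it reproduce the bias in the historical labels, and that bias only becomes visible because the group labels were kept aside for the audit.

```python
# Minimal synthetic sketch of "fairness through unawareness" failing.
# All names and coefficients are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                   # 0 = majority, 1 = minority
neighbourhood = group + rng.normal(0, 0.5, n)   # proxy correlated with group
skill = rng.normal(0, 1, n)                     # legitimate predictor

# Historical labels are biased: at the same skill level, the minority group
# was flagged more often (mirroring over-policing in the training data).
p_flagged = 1 / (1 + np.exp(-(0.8 * skill + 1.5 * group - 1.0)))
flagged = rng.binomial(1, p_flagged)

# Train WITHOUT the group indicator: only skill and the proxy remain.
X = np.column_stack([skill, neighbourhood])
pred = LogisticRegression().fit(X, flagged).predict(X)

# The disparity survives, and it can only be measured because the group
# labels were retained separately for auditing.
for g in (0, 1):
    print(f"group {g}: predicted flag rate = {pred[group == g].mean():.2f}")
```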

In her op-ed for IAPP Privacy Perspectives, Morrison & Foerster Senior Of Counsel Lokke Moerel explains why this approach is ineffective in combating algorithmic bias. She argues that only if we know which data subjects belong to vulnerable groups can biases in the historical data be made transparent and algorithms trained properly. The taboo against collecting such sensitive group indicators must be broken if we ever hope to eliminate future discrimination.
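
To make the training side of that argument concrete, the sketch below shows one standard fairness pre-processing technique, known as reweighing, that can only be applied when group membership is recorded: training samples are weighted so that group and historical label become statistically independent. It is offered as a hedged illustration, not as the method advocated in the op-ed, and the data are invented.

```python
# Hedged sketch of "reweighing", a standard fairness pre-processing step
# that requires the sensitive group labels; it is not the op-ed's method.
import numpy as np

def reweighing_weights(group: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Weight each sample by P(group) * P(label) / P(group, label),
    so that group membership and the historical label become independent."""
    weights = np.empty(len(label), dtype=float)
    for g in (0, 1):
        for y in (0, 1):
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            weights[mask] = expected / mask.mean()  # assumes no empty cell
    return weights

# Illustrative historical data: the minority group (1) was flagged more often.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
label = np.array([0, 0, 0, 1, 0, 1, 1, 1])
weights = reweighing_weights(group, label)
print(weights)  # minority positives down-weighted, majority positives up-weighted
```

The resulting weights could then be passed to any learner that accepts per-sample weights, such as the `sample_weight` argument of scikit-learn's `fit` methods.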

Read Lokke's op-ed.