The EU rights watchdog said on Thursday that applications that use artificial intelligence (AI) to predict crime and moderate online hate-speech should be free of bias to avoid discrimination.
In a report that based its findings on a 2021 study, the Vienna-based EU Agency for Fundamental Rights (FRA) said algorithms based on poor data quality could harm people’s lives.
The study comes against the backdrop of the proposed AI Act by the European Commission, which drew criticism from lawmakers and consumer groups from EU countries for not fully addressing risks from AI systems that could violate fundamental rights.
“We need a system for assessing and mitigating bias before and while using algorithms to protect people from discrimination,” said FRA Director Michael O’Flaherty.
Crime detection and recording vary, FRA said, and this can skew the underlying data: some crimes are easy to detect, so predictions focus too heavily on the crimes that are easier for police to record.
As a result, some demographic groups may be more often connected to “simpler crimes,” which can lead to bias in law enforcement, FRA said.
On offensive speech moderation, the agency found algorithms developed for automated hate speech detection unreliable, as they flagged non-offensive English terms alluding to “Muslim,” “gay” or “Jew” as offensive.
Similarly, some algorithms for German and Italian, languages that use gendered nouns, judged the feminine version of a term more insulting than its masculine counterpart, or vice versa.
FRA said the algorithms’ predictions tend to over-flag such terms because they frequently appear alongside online hatred in the text datasets the algorithms are trained on.