The EU’s agenda to regulate AI does little to rein in facial recognition

Europe is backpedaling on facial recognition.

The term “facial recognition” only appears four times in the 27-page document that outlines Europe’s vision for the future of artificial intelligence. Three of those four instances are in footnotes.

The document, known as the White Paper on Artificial Intelligence, was released this week (Feb. 19) as part of the European Commission’s ambitious agenda to regulate the tech sector across the EU’s 27 member nations.

AI ethics experts warn against the unregulated use of facial recognition, which is currently being deployed by both governments and the private sector. The fact that the controversial technology is barely mentioned in the white paper represents a remarkable shift in the EU’s willingness to draw a hard line on its use. 

Last month, a draft white paper revealed that Europe was weighing a temporary five-year ban on facial recognition, a move that was praised by digital rights advocates but decried by the security community. That ban no longer appears in the final draft. The Financial Times reported this week that the ban was removed from later drafts amid fears that it would stifle innovation and compromise national security.

Instead, the non-binding document lays out a definition for “high-risk” AI applications that can interfere with people’s rights, such as those used in the fields of employment, transportation, healthcare, and law enforcement. Those tools, it proposes, should go through extra testing, certification, and human oversight.

A person’s face often reveals their race and gender, which is why facial recognition is an obvious candidate for both racial and gender-based discrimination. While the white paper imposes no new restrictions on facial recognition, it lays the groundwork for laws that the EU expects to pass later this year.

One solution suggested by the white paper would require that training data used by AI vendors come from the local European population, better reflecting its demographic diversity. Training data that disproportionately contains white males is one of the ways that facial recognition has proven to introduce bias against women and people of color. But Joseph Halpern, a computer science professor at Cornell University, thinks that training data is only a very small part of the problem.

“It is well known there are problems with facial recognition algorithms due to bad training sets. But I’m concerned that, although the EU data set might deal with the known problems, who knows what other biases it might introduce,” wrote Halpern in an email to Quartz. Halpern would prefer a clear statement of what an algorithm is expected to do, along with penalties if those expectations aren’t met.

Citizens should also get a clear warning of when facial recognition may be used, he says. While the proposal suggests a “trustworthy” AI certification that would ask for compliance in low-risk uses, it doesn’t impose the same demands on law enforcement. “The problem that I suspect most people have with the Chinese use of facial recognition on the Uighur population is not that it misidentifies people; rather, it’s that it identifies people all too well,” wrote Halpern.

Automated facial recognition in public spaces, without a person’s consent, has emerged as a point of controversy in Europe. Germany is planning on installing automated facial recognition cameras in train stations and airports, despite opposition from civil liberties groups. European Commission Vice-President for Digital Margrethe Vestager acknowledged this week that while such technology is in violation of GDPR rules, there are exceptions for public security. But critics warn that such a loophole gives governments the freedom to install Orwellian-style surveillance technology in public spaces. 

“We’re glad the EU report acknowledges that facial recognition, when deployed in public spaces, poses a threat to fundamental rights and to the GDPR,” wrote Amba Kak, director of global strategy and programs at the AI Now Institute at NYU, in an email to Quartz. “But it stops short of prescribing any hard limits, instead recommending ‘broad European debate’ on the topic. It’s urgent for the Commission to take leadership on drawing red lines around facial recognition, particularly contending with the issue of concentrated power and the harms this technology presents to civil society.”

Stephanie Hare, an AI ethics researcher who advocated for a temporary ban on facial recognition before the EU Parliament last year, calls its omission from the white paper “disappointing.” Without a blanket ban, individual member nations will be responsible for regulating facial recognition. And European countries have varied in their views about the ethics and legality of the technology.

Sweden’s Data Protection Authority, for example, has allowed for the use of facial recognition to identify criminal suspects but has blocked its use in schools. France is using facial recognition in AliceM, its mandatory national ID program, which is currently being challenged by a privacy group in the nation’s highest court. 

“In sum, the EU is allowing a free-for-all on live facial recognition technology,” wrote Hare in an email to Quartz, “when it could have shown leadership.”