Joseph Halpern, Joseph C. Ford Professor of Engineering and Professor of Computer Science at Cornell, offered commentary in Amrita Khalid's report "The EU's Agenda to Regulate AI Does Little to Rein in Facial Recognition." As Khalid writes:
The term “facial recognition” only appears four times in the 27-page document that outlines Europe’s vision for the future of artificial intelligence. Three of those four instances are in footnotes.
The document, known as the White Paper on Artificial Intelligence, is a part of the European Union Commission’s ambitious agenda to regulate the tech sector of the EU’s 27 member nations, which it released this week (Feb. 19).
AI ethics experts warn against the unregulated use of facial recognition, which is currently being deployed by both governments and the private sector. The fact that the controversial technology is barely mentioned in the white paper represents a remarkable shift in the EU’s willingness to draw a hard line on its use.
Halpern's comments on facial recognition are woven into the article's discussion:
One solution suggested by the white paper would require that training data used by AI vendors come from the local European population, better reflecting its demographic diversity. Training data that disproportionately contains white males is one of the ways that facial recognition has proven to introduce bias against women and people of color. But Joseph Halpern, a computer science professor at Cornell University, thinks that training data is only a very small part of the problem.
“It is well known there are problems with facial recognition algorithms due to bad training sets. But I’m concerned that, although the EU data set might deal with the known problems, who knows what other biases it might introduce,” wrote Halpern in an email to Quartz. Halpern would prefer a clear statement of an algorithm’s expectations, along with penalties if those expectations aren’t met.
Citizens should also get a clear warning of when facial recognition may be used, he says. While the proposal suggests a “trustworthy” AI certification that would ask for compliance in low-risk uses, it doesn’t impose the same demands on law enforcement. “The problem that I suspect most people have with the Chinese use of facial recognition on the Uighur population is not that it misidentifies people; rather, it’s that it identifies people all too well,” wrote Halpern.
For more on this topic, see related CS News items.