UMass College of Information and Computer Sciences (CICS) Professor Erik Learned-Miller recently co-authored a white paper calling for a new federal authority to regulate facial recognition technology. On July 15, Learned-Miller held a Q&A session on the topic of facial recognition and regulation, attended by 160 participants from academia, industry, and government. The session was the first in the CICS Technology and Social Justice webinar series, which will “explore how computing innovation intersects with vitally important issues such as structural bias, civic participation, economic inequality, and citizen privacy,” according to CICS Dean Laura Haas, who hosted the session.
Learned-Miller, co-creator of Labeled Faces in the Wild, one of the most influential face datasets in computer vision, advocates for creating a central federal agency to approve and regulate facial recognition technology, similar to the way the Food and Drug Administration (FDA) approves and regulates medication and medical devices. In his talk, he proposed a thought experiment to attendees: Imagine it’s the year 1900 and you go to the drugstore to get medicine for your sick child. There you see a tonic that claims to cure 12 different conditions, including headaches, neuralgia, colds, hiccups, gout, gonorrhea, mumps, measles, whooping cough, and tuberculosis. “Today, we know that’s absurd,” Learned-Miller said. “And we take it for granted that such medicines have been carefully vetted, that they're safe and effective, and that they're not going to interact with the types of drugs we're already taking, and so forth.” Because FDA approval is required before a product can go to market, drugs that make false claims or cause harm can’t make it into people’s medicine cabinets.
Along with the co-authors of “Facial Recognition Technologies in the Wild: A Call for a Federal Office,” Learned-Miller recommends an FDA-like agency that would require defining key elements of any new product, such as its intended use, conditions for proper use, sufficient training for users, and usage risk level. “Better databases with more diversity and more cases -- that's not going to solve the problem by itself,” says Learned-Miller. “Self-regulation by industry is not enough, and ethical guidance is not enough. We argue for an independent government organization with dedicated expertise and the authority to keep products from emerging [in the marketplace] until they've demonstrated safety and efficacy.” Regulation like this could prevent cases like that of Robert Williams, a Michigan man who was wrongly accused of robbing a jewelry store due to a mistaken facial recognition match.
Throughout the Q&A following Learned-Miller’s brief talk, he emphasized that “the horse has already left the barn” when it comes to computer vision technology. “Anyone who takes my class at UMass can build a passable face recognition system,” he said. However, while it’s not possible to prevent bad actors from getting access to the technology, we can still pass laws that make marketing unvetted systems illegal. “People marketed dangerous drugs before the FDA, and after the FDA was put into place people had to stop marketing those drugs. So nothing says that we can’t outlaw software that hasn’t been properly vetted.”
Learned-Miller believes that setting a high bar for approval -- requiring documentation of testing, quality control, engineering practices, use cases, and more -- will help ensure that the technology is used as intended and appropriately. However, he acknowledges that a high cost of entry could dissuade smaller companies with fewer resources from entering the market. To address this, he proposes defining usage risk levels. A product used to automatically sort personal photos, for example, carries less risk than a product used to identify criminal suspects. Products intended for low-risk usage would face a lower bar for approval, allowing start-ups and smaller companies to develop technologies for the marketplace.
One attendee asked about the ideal makeup of expertise in the proposed new agency. Learned-Miller suggested that it would be important to include experts from law enforcement, who have experience in the application of the technology, as well as advocacy organizations like the American Civil Liberties Union or public defenders, who can bring forward potential issues and unintended consequences.
Another attendee asked how law enforcement justifies using the technology today. Learned-Miller shared that agencies like the FBI successfully use facial recognition technology to identify child abuse and child trafficking victims. One objective of the proposed federal agency would be to force companies to define the appropriate usage of their technology (such as identifying child trafficking victims), and if they then want to market it for a different use (such as identifying criminal suspects), they would need to reapply to the agency for approval for the new use, similar to how a pharmaceutical company needs to reapply to market a drug for a new use.
Joy Buolamwini of MIT's Media Lab, founder of the Algorithmic Justice League and one of the co-authors of the white paper, asked if Learned-Miller would support a federal moratorium on facial recognition technology in the absence of a centralized regulation agency. In response, Learned-Miller referenced a recent Forbes opinion piece by TOC Biometrics CEO Ricardo Navarro, in which Navarro suggests that “all facial recognition use cases must be approved by user consent, and all others should be banned.” Learned-Miller said he would agree with implementing a temporary ban of this nature, but noted that such a ban would eliminate benign uses, such as sorting personal photos on a private computer. Instead, he would prefer a temporary ban on applications deemed high-risk, while allowing low-risk uses to continue.
Another attendee brought up how deepfakes (synthetically created images, videos, and recordings) are rapidly evolving, and asked if there will come a point when we can no longer trust facial recognition matches. Learned-Miller responded that the approval and regulation process must require that the provenance of any image be established in high-risk cases. “If you can’t ensure the provenance of that picture,” he said, “then the regulatory agency should say that you can’t use it to put somebody in jail.”
The final question concerned what individuals can do to ensure that this push for regulation of facial recognition leads to tangible change. Learned-Miller shared that working with Buolamwini and others made him aware of the harmful impacts of this technology, which drove him to want to inform others. “When you understand the negative consequences of these things, I think it’s important to share them with other technology people, because technology people tend to be overconfident and not spend that much time thinking about the flaws of their products,” he said. “I would encourage you to read about use cases and share them with the people that you know, because I think ultimately we all want safe and effective technologies out there.”