Facial recognition technology is becoming an increasingly regular part of everyday life. Systems that can quickly match your photo to your identity are spreading across the world, potentially affecting how you use your smartphone, check in at airports, or even shop.
But as the technology becomes more widespread, civil rights groups and officials at the local, state, and federal levels are raising serious questions about how and when it should be used, and who owns the photos taken of you.
More recently, some Amazon (AMZN) investors called on the company to halt the sale of its own facial recognition tech to law enforcement organizations and governments. Amazon’s shareholders rejected the proposal. And last month, lawmakers on both sides of the aisle in the House Committee on Oversight and Reform expressed concerns about the technology and whether its use violates citizens’ rights.
But facial recognition can seem opaque to many people. And as these systems are deployed in stores, in airports, and by police, activists and lawmakers worry that the technology could lead to wrongful arrests and other civil rights issues.
What is facial recognition?
Facial recognition technology, a form of computer vision, allows a piece of software to scan an image or live video for a person’s face and then match it with a similar, previously taken image or video of that same person.
With facial recognition technology, algorithms are fed thousands of images of individuals to “teach” them what faces normally look like. To find a single person using such a system, an operator uploads a photo of whoever they are trying to identify. The computer then measures the person’s facial landmarks, such as the distance between their eyes, along with other features, and compares them against the other images in its stockpile.
In some instances, when it finds a similar face, the software will provide a percentage indicating how close a match the uploaded image is to an image in its stockpile.
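To make that concrete, here’s a minimal Python sketch of the comparison step. It assumes an upstream model has already reduced each face to a numeric feature vector (an “embedding”); the names, vectors, and 0.8 threshold below are invented for illustration, not taken from any real system.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_best_match(probe, gallery, threshold=0.8):
    """Compare an uploaded face against every face in the stockpile; report the closest."""
    best_name = max(gallery, key=lambda name: similarity(probe, gallery[name]))
    best_score = similarity(probe, gallery[best_name])
    if best_score >= threshold:
        return best_name, best_score  # e.g. ("alice", 0.93) -> reported as a "93% match"
    return None, best_score          # nothing in the stockpile was close enough

# Toy stockpile: in a real system these vectors would come from a face-embedding model.
gallery = {
    "alice": np.array([0.10, 0.90, 0.30]),
    "bob":   np.array([0.80, 0.20, 0.50]),
}
probe = np.array([0.12, 0.88, 0.31])  # the photo the operator uploaded
print(find_best_match(probe, gallery))
```

Real systems search galleries of thousands or millions of faces, but the core idea is the same: reduce faces to numbers, measure distance, and report anything above a chosen threshold as a possible match.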
Where is it used?
Facial recognition technology has a multitude of applications. Businesses can use it to scan employees, as a more secure alternative to keycards, which can be passed from person to person.
Retailers might use it to scan customers against collections of known shoplifters to prevent theft. U.S. airports, meanwhile, currently use facial recognition technology to scan departing travelers so authorities know who’s leaving the country, and some airlines use it to check passengers into their flights. And social media sites use it to suggest tags for people in the photos you upload.
Not all forms of facial recognition technology are the same, though. Smartphone makers are increasingly including the tech as a feature in their devices, but only to identify you, the user.
Apple’s (AAPL) Face ID feature on its latest iPhones is designed specifically to recognize your own face. It registers your identity by capturing a depth map of your face using 30,000 infrared dots, along with a secondary 2D infrared photo. All of that information is then turned into a mathematical representation of your face and saved on your device, protected by the Secure Enclave.
No pictures of your face are ever used when identifying you to unlock your phone, and none of that information is ever sent to Apple. The idea is to provide a highly secure way to unlock your phone, more secure than a fingerprint, which has a greater chance of being spoofed than your face. Apple says Face ID has a 1 in 1,000,000 chance of being tricked, thanks to the depth mapping used in the enrollment process.
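That design is a one-to-one check: the phone compares a fresh capture against the single template enrolled on the device, rather than searching a database of many people. Here’s a rough Python sketch of that pattern; the vectors, similarity math, and threshold are illustrative stand-ins, not Apple’s actual algorithm.

```python
import numpy as np

# Enrollment: the device stores one numeric template for its owner, locally.
enrolled_template = np.array([0.11, 0.87, 0.33])  # illustrative stand-in

def unlock(fresh_capture: np.ndarray, threshold: float = 0.99) -> bool:
    """One-to-one verification: does this capture match the single enrolled face?"""
    a, b = enrolled_template, fresh_capture
    score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # A strict threshold keeps the odds of a stranger unlocking the phone very low.
    return score >= threshold

print(unlock(np.array([0.12, 0.88, 0.31])))  # owner's face -> True
print(unlock(np.array([0.90, 0.10, 0.40])))  # stranger's face -> False
```

The key difference from a police-style search is that nothing leaves the device and there is no gallery of other people to match against.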
Amazon’s own facial recognition technology, called Rekognition, is designed to look at an image of a person and determine whether they are the same person shown in a separate image or video.
The company’s software provides users with a confidence score indicating how strongly the program believes an image or video matches a previous image of a person. Amazon recommends that law enforcement agencies, for instance, act only on matches with a confidence score of at least 99%, and that a human review the results.
Amazon also doesn’t keep the images scanned by Rekognition. Instead, they are held by the customer or organization using the service.
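For a sense of what that confidence score looks like in practice, here’s a minimal sketch of comparing two images with Rekognition’s CompareFaces API through Amazon’s boto3 SDK. The file names and region are placeholders, and running it requires AWS credentials.

```python
import boto3

# Placeholder region; real use requires configured AWS credentials.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("probe.jpg", "rb") as probe, open("known_person.jpg", "rb") as known:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": probe.read()},
        TargetImage={"Bytes": known.read()},
        SimilarityThreshold=99.0,  # the 99% floor Amazon recommends for law enforcement
    )

# Each returned match carries the confidence score described above.
for match in response["FaceMatches"]:
    print(f"Similarity: {match['Similarity']:.1f}%")
```

Because the images are supplied by the caller, this is also why Amazon can say it doesn’t retain what Rekognition scans: the customer holds the photos on its own side of the API.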
Why is it controversial?
The potential problem with public facial recognition technology is that it relies on algorithms that have to be programmed by humans, and bias can creep in when engineers don’t train those algorithms on a diverse enough set of sample faces.
M.I.T. Media Lab and the A.C.L.U. have both conducted studies that expose flaws in how facial recognition identifies individuals. In the M.I.T. Media Lab study, facial recognition technologies from Microsoft (MSFT), IBM (IBM), and Amazon had a harder time identifying women than men, and darker-skinned individuals than lighter-skinned ones.
Amazon’s facial recognition technology performed notably worse than Microsoft’s or IBM’s. According to M.I.T. Media Lab, Amazon’s offering accurately identified light-skinned males 100% of the time, but misclassified women as men 29% of the time and darker-skinned women as men 31% of the time.
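Audits like the Media Lab’s typically work by breaking error rates down by demographic group rather than reporting one overall accuracy number, which is how disparities like those above surface. Here’s a toy sketch of that bookkeeping; the records below are invented for illustration and are not the study’s data.

```python
from collections import defaultdict

# Toy audit records: (group, true_gender, predicted_gender). Invented for illustration.
predictions = [
    ("lighter-skinned male",  "male",   "male"),
    ("lighter-skinned male",  "male",   "male"),
    ("darker-skinned female", "female", "male"),    # a misclassification
    ("darker-skinned female", "female", "female"),
]

errors = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
for group, truth, guess in predictions:
    errors[group][0] += truth != guess
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: {wrong / total:.0%} misclassified")
```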
The A.C.L.U., meanwhile, performed a study in which Amazon’s Rekognition falsely matched images of 28 members of Congress with mugshots of people who had been arrested.
Amazon has said that the tests performed by M.I.T. Media Lab and the A.C.L.U. were flawed because the administrators didn’t use the system properly.
In a statement released after the M.I.T. Media Lab report was made public, Amazon said that the study and a New York Times article reporting on it were “misleading and draw false conclusions.”
In the statement, Matt Wood, general manager of AI for Amazon Web Services, said, “The research paper in question does not use the recommended facial recognition capabilities, does not share the confidence levels used in their research, and we have not been able to reproduce the results of the study.”
Microsoft and IBM responded to the Media Lab study with separate blog posts saying they had worked to improve how accurately their services identify darker-skinned individuals.
If facial recognition systems have trouble distinguishing between different genders and skin colors, they could disproportionately target women and minorities with false positives.
It’s for that reason that, in April, Microsoft President Brad Smith told a tech conference at Stanford University that the company had refused to sell its own technology to a law enforcement agency, as well as to an unnamed foreign government.
Beyond false positives, civil rights groups and lawmakers question whether facial recognition technology amounts to a form of unwarranted surveillance.
During last month’s House Committee on Oversight and Reform hearing, lawmakers from both sides of the political spectrum, including Rep. Alexandria Ocasio-Cortez (D-N.Y.) and Rep. Jim Jordan (R-Ohio), questioned how such images of citizens are collected, when they’re taken, and who has access to them.
Cities and states across the country are also looking into legislation on facial recognition tech. San Francisco’s Board of Supervisors recently voted to ban the use of the technology by police, and nearby Oakland is considering a similar measure. The California Senate is weighing a ban on the software in police body cameras as well, and a bill banning the tech is in committee in the Washington State Legislature.
It’s not just West Coast cities considering bans on the software. Massachusetts state legislators have also brought forward a bill that would limit the use of facial recognition technologies.
Microsoft’s Smith has also called for some form of government regulation of facial recognition technology, while Google (GOOG, GOOGL) has said it isn’t ready to sell its own tech until policy questions surrounding the issue are sorted out.
The debate surrounding the use of facial recognition technology is still in its early stages. And while the technology is already on the market, it too is still a work in progress: as the software is given more opportunities to view faces, its ability to identify individuals will improve.
Until then, though, questions about civil rights, law enforcement use, and bias will persist.