Facial recognition technology stars in three recent Hollywood movies: Isle of Dogs, Ready Player One, and Black Panther. In Wes Anderson’s stop-motion near-future Japan, a corrupt mayor uses the technology to capture the Little Pilot who only wants to save his dog. In Steven Spielberg’s dystopic America, a megalomaniacal billionaire uses drones equipped with face scanners to find one of the movie’s heroes as she drives her van through an impoverished futuristic cityscape. And in Ryan Coogler’s Wakanda, the royal technologist’s team uses her facial recognition tool to identify intruders in the kingdom.
All three films show the ways that facial recognition technology leaves no place to hide — for heroes and villains alike — in a surveillance state of the future. We don’t yet inhabit these imagined worlds, but if we aren’t careful, the reality of ubiquitous tracking via facial recognition isn’t far away.
Popular technologies are already making use of the tool. Facebook has an advanced facial recognition technology that identifies photo subjects (prompting concern in countries that have yet to adopt the technology). The iPhone X employs Face ID as an authentication tool to unlock devices. At least one major retailer has openly admitted to using facial recognition technology to surveil customers and attempt to identify shoplifters. And an Israeli company is creating a privatized watch list that enables users to scan the faces of strangers and identify “allegedly dangerous people.” These are just a few of the many real-world applications of facial recognition technology — including by American police departments.
These technologies threaten to subject us to perpetual, dragnet surveillance in which we are nonconsenting subjects in a never-ending series of investigations. Our face geometries might be captured, retained, and connected to our real-world identities, and combined with information about our income, education, demographics, health, and other data. Our appearance, preferences, and physical locations could be sold to data brokers and advertisers and used to feed automated decision-making tools that control important decisions around our housing, employment, health care, policing, and much more. We could be digitally tracked based on our political activities, religion, or nationality; misidentified as criminals or terrorists; or otherwise blacklisted.
The prospect of individualized tracking and misidentification should concern everyone, but it is particularly potent for people of color, who most often bear the brunt of enhanced surveillance. Black and brown people already are overpoliced and face disparate treatment in every stage of the criminal justice system. Facial recognition likely will be disproportionately trained on these communities, further exacerbating bias under the cover of technology.
But the growing prevalence of facial recognition in film doesn’t mean its use is inevitable or irreversible. We can reject a surveillance infrastructure in which our faces populate government databases. We can also call corporate actors to account. For example, a coalition of civil rights and civil liberties groups — including the ACLU — recently called on Axon, which creates law enforcement technologies, to refrain from outfitting its police body cameras with real-time facial recognition technology.
We must also ensure that law enforcement acquisition of surveillance technologies does not happen in secret, but is rather subject to vigorous public debate. Residents in cities across the country — including in Oakland, California; Seattle, Washington; Nashville, Tennessee; and Somerville, Massachusetts — have all adopted local ordinances that require an extensive public process to make sure the right questions are asked and answered about any new surveillance proposals.
Community control over police surveillance
Movies might be less exciting if they focused on the city council meetings, policy conferences, and legislative developments that help preclude the use of surveillance technologies in the worlds the directors have imagined. But if we don’t want our lives to be subject to pervasive, unaccountable surveillance no matter where we go and what we do, then it is incumbent upon us, as concerned individuals and members of our local communities, to demand control over our personal information and to become our own privacy heroes.
Blog by Jennifer Stisa Granick, Surveillance and Cybersecurity Counsel, ACLU Speech, Privacy, and Technology Project
& Nicola Morrow, Legal Assistant, ACLU Speech, Privacy, and Technology Project
Originally published on ACLU's Speak Freely.
Date: Friday, May 11, 2018 - 3:45pm
Brown University student Amara Majeed awoke on a Thursday in late April to find that she had been falsely accused of perpetrating the Easter terror attacks in Sri Lanka. Her photo was disseminated internationally, and she received death threats, even though she was at school in Providence, Rhode Island, during the attacks, not in Sri Lanka. It was later revealed that Sri Lankan police had used facial recognition software, which mistakenly identified Amara as one of the bombers.
Amara’s story is an extreme example of what can go wrong when government agencies use untested, often unreliable, and racially biased facial recognition algorithms behind closed doors, absent public debate and regulations. Despite the significant risks and the legal and regulatory vacuum, law enforcement agencies, corporations, and others have embraced facial recognition technology, meaning it is fast becoming more prevalent in public life.
Probably due to its extreme creep factor, government agencies and corporations don’t generally advertise when and how they deploy face surveillance. But the companies that manufacture the tools need to sell them, making it easier to find information on the companies building and selling facial recognition technology than on those using it. I did some research to see what’s new in the space, and found that it’s a hot market, with gargantuan tech firms like Amazon competing against startups you’ve probably never heard of.
Some of those startups are working on products that are downright frightening. A company called TVision Insights, for example, is working on a product to track viewers watching television in their homes. Households that “opt in” are monitored by TVision software whenever the TV is on. TVision claims to be able to tell who in the house is watching the TV and whether that person is actually paying attention to the screen. As the company boasts on its website, its “computation software can be easily integrated into the graphic processing unit of any web camera. Once installed, their AI technique tracks how many people are watching, their attention level, even their emotions, all in real time.” What does it mean to “opt in” to this surveillance? It’s unclear, as TVision’s website doesn’t get into details. It’s possible the “opt-in” takes place when someone buys a smart TV, turns it on, and then quickly “agrees” to the Terms of Service in order to watch their football game or nightly news program.
Other companies are working on two-factor authentication using facial recognition technology. In these systems, a face scan replaces entering a code texted to the user, or inserting a card into a reader, as the second step validating a password. Users are at least aware that their faces are being scanned. Nonetheless, the technology could cause problems for people. First, a user’s choice to engage with a system like this depends on where the technology is deployed. If businesses force their workers to use these systems, for example, workers are left with little choice. Workers may understandably not want to hand over their biometric information to their employer or a third party just to get a paycheck. Equally troubling is a product under development by a company called Voatz, which is building technology to “make voting safer.” Its system currently allows voters to identify themselves with biometric data, either a fingerprint or facial recognition.
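To make the mechanics concrete, here is a minimal sketch of a face-based two-factor login, in which a face-match score stands in for a texted one-time code. Every name, value, and threshold below is a hypothetical illustration, not any vendor’s actual API:

```python
# Hypothetical sketch of face-based two-factor authentication.
# Function names, the toy user store, and the 0.9 threshold are
# illustrative assumptions, not a real vendor's implementation.

def verify_password(username: str, password: str) -> bool:
    # First factor: check credentials against a toy user store.
    users = {"alice": "correct-horse-battery-staple"}
    return users.get(username) == password

def face_match_score(enrolled: list, live_scan: list) -> float:
    # Toy similarity between stored and live face templates:
    # 1.0 means the templates are identical, 0.0 means no match.
    diffs = [abs(a - b) for a, b in zip(enrolled, live_scan)]
    return max(0.0, 1.0 - sum(diffs) / len(diffs))

def two_factor_login(username: str, password: str,
                     enrolled: list, live_scan: list,
                     threshold: float = 0.9) -> bool:
    # Second factor: the face scan replaces a texted code.
    # Any threshold below 1.0 is what makes both false accepts
    # and false rejects possible.
    if not verify_password(username, password):
        return False
    return face_match_score(enrolled, live_scan) >= threshold
```

The detail worth noticing is the threshold: because a live face never matches a stored template exactly, the system must accept scores short of certainty, and that tolerance is precisely where misidentification enters.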
Facial recognition technology is also popular with video surveillance companies and companies making police technology. Among those is the Cambridge, Massachusetts-based firm Suspect Technologies, which advertises an “AgentAI” platform to assist police officers. In documents obtained by the ACLU of Massachusetts from the Plymouth Police Department, the CEO of the company admitted his tool might only work about 30 percent of the time, but nonetheless wanted to test it on unsuspecting people minding their own business in public spaces.
Indeed, facial recognition failures by governments are not confined to developing countries like Sri Lanka. The MTA deployed facial recognition technology in New York City earlier this year, and its technology failed to correctly recognize a single face. In the UK, officials acknowledged that their pilot test of facial recognition was a disaster, with a 96 percent failure rate.
Although the facial recognition and security industries tout retail as a promising market for their products, most retailers refuse to discuss whether they use facial recognition technology. In 2018 the ACLU asked the 20 largest retailers in the US whether they use facial recognition technology. Eighteen of the companies refused to answer, and only one said it did not use any facial recognition technology in its stores.
Government agencies in the United States are subject to public records laws, giving organizations like the ACLU a window into their adoption of surveillance technologies like facial recognition. But private companies are not subject to open records laws, and most companies deploying facial recognition technology do not want the public to know whether or not they’re using the tool. There are a few exceptions, however.
Saks Fifth Avenue is known to use facial recognition in its stores. Live video feeds from stores are monitored at company headquarters in New York, and facial recognition is used to run customers’ faces against Saks’ private database of “past shoplifters or known criminals.” It is unclear whether Saks stores in Massachusetts use this technology. Recently, Apple was accused in a lawsuit of misidentifying a teen as a shoplifter, leading to his arrest for a theft that occurred while he was attending his high school prom. The case is ongoing, but it presents another example of the harms of jumping to conclusions based on facial recognition software.
Retailers are interested in tracking their customers as well. Walgreens stores in Chicago are piloting a technology developed by the company Cooler Screens, which uses facial analysis software to determine the demographics of customers looking at the beverage refrigerators in its stores, including software that tracks eye movements. Thanks to a pioneering biometric privacy law in Illinois, the Biometric Information Privacy Act (BIPA), Cooler Screens and Walgreens cannot use the technology to individually identify customers, but that feature could be used in stores in other states. In the absence of laws regulating the use of facial recognition technology, companies are working to bring more of it into their stores. Walmart has filed a patent application for a system to capture customers’ facial information while they wait to check out.
There are many important unanswered questions about the use of facial recognition technology by the government and by corporations. If someone mistakenly ends up in a private database tagged as a criminal, they likely will not even know. This could cause them to be harassed in stores, possibly around the country as retailers form information-sharing networks. Due to the secrecy surrounding even the existence of these systems, an innocent person has no way of appealing their inclusion in such a database. These scenarios are not mere hypotheticals; facial recognition misidentifications are all too common.
As companies and government agencies continue to adopt this technology in more contexts, ordinary people may have little recourse to escape facial capture—little recourse, that is, besides democratic engagement. The most important thing we can do right now to stop oncoming dystopia is pass laws to make sure our rights keep pace with technological advancement and protect ourselves from flawed software that violates our privacy.
As a start, the ACLU of Massachusetts is advocating for a moratorium on facial recognition technology use by the state government. We need to press “pause” until the legislature has passed rigorous safeguards for our rights. Yesterday, San Francisco became the first city in the country to ban the government’s use of face surveillance technology, and Somerville, Massachusetts, appears poised to do the same.
Technology brings many benefits, but it also comes with risks. The only way forward is to get government involved, to make sure our rights in this democracy are paramount—no matter what companies like Amazon have to say about it.
This blog post was written by ACLU of Massachusetts intern Alex Leblang. Originally published on Privacy SOS.
Date: Wednesday, May 15, 2019 - 3:00pm