Blogs

Blog by Kade Crockford, director of the ACLU of Massachusetts' Technology for Liberty Program.

In two reports published this week, the Center on Privacy & Technology at Georgetown University Law Center joins the ACLU of Massachusetts in calling for a moratorium on the government’s use of face surveillance technology, citing alarming new findings about law enforcement’s use of the tool nationwide.

The reports document disturbing and sometimes bizarre law enforcement uses of the unregulated, often inaccurate technology. In many cases, the Georgetown scholars say, police are cutting and pasting parts of faces onto suspect images to generate face recognition “hits.” In others, officials are encouraged to run police sketches of suspects through face surveillance algorithms to try to identify people. Astonishingly, the scholars find, New York police have on more than one occasion substituted images of famous people, including Woody Harrelson and a Knicks player, in place of suspect photos when the originals were too grainy to be useful for face recognition purposes.

In some cases, these practices have led police to identify people who were ultimately arrested and prosecuted. According to NYPD figures, police have made nearly 3,000 arrests based on these identifications. But despite the technology’s growing use in law enforcement, criminal defendants are almost never notified that face surveillance was used to identify them as suspects, raising significant constitutional due process concerns.

The reports also provide the most comprehensive view the public has yet seen of the development of city-wide, real-time face surveillance and tracking operations in Detroit and Chicago. Neither Michigan nor Illinois has a law regulating law enforcement’s use of the technology, which multiple studies have shown is highly inaccurate when evaluating the faces of women and people of color. Neither police department has policy protections in place to adequately guard residents of those cities against dystopian surveillance and tracking of everyday life or political speech, or to prevent racial profiling.

Earlier this month, the ACLU published emails between a face surveillance vendor called Suspect Technologies and the Plymouth, Massachusetts police department. The emails sketch out the contours of a plan, in the works for two years, to use Suspect’s technology to track the movements and behaviors of people in 21 or 22 public buildings around the town of 60,000 people, including in schools. In the emails, Suspect Technologies CEO Jacob Sniff acknowledges that his product may work only 30 percent of the time, and admits that it might produce as many as one false positive “hit” per day. After the ACLU obtained the emails and notified journalists about them, the Chief of Police in Plymouth backed away from the plans.

In response to the Georgetown reports, ACLU of Massachusetts executive director Carol Rose said: “People often think of facial recognition as objective science, but Georgetown’s findings show law enforcement is using the technology in highly subjective ways, by running scans against police sketches of suspects, editing photos by combining facial features from different people, and in some cases even melding two faces into one. These practices—which are occurring largely in the dark, and absent any regulation—should concern every freedom-loving person. These reports underscore the urgent need for the Massachusetts state legislature to pass a moratorium to press pause on the government’s use of face surveillance technology.”

“In a free society, you shouldn’t be tracked by the government as you visit the doctor, seek mental health or substance use treatment, worship your religion, or attend a political meeting. But face surveillance enables persistent, constant tracking of not one person’s First Amendment protected activity, but every person’s—and not on just one day, but on all days,” said Kade Crockford, director of the ACLU of Massachusetts Technology for Liberty Program. “That’s exactly what is happening in Detroit and Chicago, and we need to make sure it doesn’t spread to the Bay State. Right here in Massachusetts, we’ve uncovered documents that show private companies are going to great lengths to use our communities as testing grounds for this very type of authoritarian surveillance. That’s why from San Francisco to Somerville, local governments are moving to ban the use of these technologies by government entities. Until the state legislature in Massachusetts adopts a moratorium, that’s the only way we can make sure the types of abuses laid out in Georgetown’s report aren’t occurring here in the Commonwealth.”


Photo: The NYPD is sometimes photoshopping random facial features onto suspect photos, and then searching those images using face recognition tech. Source: https://www.flawedfacedata.com/

Date: Friday, May 17, 2019 - 11:30am

Featured image: A slide from NYPD FIS describing the “removal of facial expression” technique


Facial recognition technology stars in three recent Hollywood movies: Isle of Dogs, Ready Player One, and Black Panther. In Wes Anderson’s stop-motion near-future Japan, a corrupt mayor uses the technology to capture the Little Pilot who only wants to save his dog. In Steven Spielberg’s dystopic America, a megalomaniacal billionaire uses drones equipped with face scanners to find one of the movie’s heroes as she drives her van through an impoverished futuristic cityscape. And in Ryan Coogler’s Wakanda, the royal technologist’s team uses her facial recognition tool to identify intruders in the kingdom. 

All three films show the ways that facial recognition technology leaves no place to hide — for heroes and villains alike — in a surveillance state of the future. We don’t yet inhabit these imagined worlds, but if we aren’t careful, the reality of ubiquitous tracking via facial recognition isn’t far away. 

Popular technologies are already making use of the tool. Facebook has an advanced facial recognition technology that identifies photo subjects (prompting concern in countries that have yet to adopt the technology). The iPhone X employs Face ID as an authentication tool to unlock devices. At least one major retailer has openly admitted to using facial recognition technology to surveil customers and attempt to identify shoplifters. And an Israeli company is creating a privatized watch list that enables users to scan the faces of strangers and identify “allegedly dangerous people.” These are just a few examples of many more real-world applications of facial recognition technology — including by American police departments. 

These technologies threaten to subject us to perpetual, dragnet surveillance in which we are nonconsenting subjects in a never-ending series of investigations. Our face geometries might be captured, retained, and connected to our real-world identities, and combined with information about our income, education, demographics, health, and other data. Our appearance, preferences, and physical locations could be sold to data brokers and advertisers and used to feed automated decision-making tools that control important decisions around our housing, employment, health care, policing, and much more. We could be digitally tracked based on our political activities, religion, or nationality; misidentified as criminals or terrorists; or otherwise blacklisted. 

The prospect of individualized tracking and misidentification should concern everyone, but it is particularly potent for people of color, who most often bear the brunt of enhanced surveillance. Black and brown people already are overpoliced and face disparate treatment in every stage of the criminal justice system. Facial recognition likely will be disproportionately trained on these communities, further exacerbating bias under the cover of technology. 

But the growing prevalence of facial recognition in film doesn’t mean its use is inevitable or irreversible. We can reject a surveillance infrastructure in which our faces populate government databases. We can also call corporate actors to account. For example, a coalition of civil rights and civil liberties groups — including the ACLU — recently called on Axon, which creates law enforcement technologies, to refrain from outfitting its police body cameras with real-time facial recognition technology. 

We must also ensure that law enforcement acquisition of surveillance technologies does not happen in secret, but is rather subject to vigorous public debate. Residents in cities across the country — including in Oakland, California; Seattle, Washington; Nashville, Tennessee; and Somerville, Massachusetts — have all adopted local ordinances that require an extensive public process to make sure the right questions are asked and answered about any new surveillance proposals.


Movies might be less exciting if they focused on the city council meetings, policy conferences, and legislative developments that help preclude the use of surveillance technologies in the worlds the directors have imagined. But if we don’t want our lives to be subject to pervasive, unaccountable surveillance no matter where we go and what we do, then it is incumbent upon us, as concerned individuals and members of our local communities, to demand control over our personal information and to become our own privacy heroes.


Blog by Jennifer Stisa Granick, Surveillance and Cybersecurity Counsel, and Nicola Morrow, Legal Assistant, ACLU Speech, Privacy, and Technology Project.

Originally published on ACLU's Speak Freely.

Date: Friday, May 11, 2018 - 3:45pm

Featured image: Ready Player One


Brown University student Amara Majeed awoke on a Thursday in late April to find that she had been falsely accused of perpetrating the Easter terror attacks in Sri Lanka. Her photo was disseminated internationally and she received death threats, even though she was at school in Providence, Rhode Island, during the attacks, not in Sri Lanka. It was later revealed that Sri Lankan police had used facial recognition software, which led them to mistakenly identify Amara as one of the bombers.

Amara’s story is an extreme example of what can go wrong when government agencies use untested, often unreliable, and racially biased facial recognition algorithms behind closed doors, absent public debate and regulations. Despite the significant risks and the legal and regulatory vacuum, law enforcement agencies, corporations, and others have embraced facial recognition technology, meaning it is fast becoming more prevalent in public life.

Probably because of the technology’s extreme creep factor, government agencies and corporations don’t generally advertise when and how they deploy face surveillance. But the companies that manufacture the tools need to sell them, making it easier to find information on the companies building and selling facial recognition technology than on those using it. I did some research to see what’s new in the space, and found that it’s a hot market, with gargantuan tech firms like Amazon competing against startups you’ve probably never heard of.

Some of those startups are working on products that are downright frightening. A company called TVision Insights, for example, is working on a product to track viewers watching television in their homes. Households that “opt in” are monitored by TVision software whenever the TV is on. TVision claims to be able to tell who in the house is watching the TV and whether that person is actually paying attention to the screen. The company’s website boasts that its “computation software can be easily integrated into the graphic processing unit of any web camera. Once installed, their AI technique tracks how many people are watching, their attention level, even their emotions, all in real time.” What does it mean to “opt in” to this surveillance? It’s unclear, as TVision’s website doesn’t get into details. It’s possible the opt-in takes place when someone buys a smart TV, turns it on, and then quickly “agrees” to the terms of service in order to watch their football game or nightly news program.

Other companies are working on two-factor authentication using facial recognition technology. In this case, a face scan replaces entering a code texted to the user or inserting a card into a reader to validate a password. In this scenario, users are at least aware that their faces are being scanned. Nonetheless, the technology could cause problems for people. First, a user’s choice to engage with a system like this depends on where the technology is deployed. If businesses decide to force their workers to use these systems, for example, that doesn’t leave workers with much choice. Workers may understandably not want to hand over their biometric information to their employer or a third party just to get a paycheck. Equally troubling is a product under development by a company called Voatz, which is building technology to “make voting safer.” Its system currently allows voters to identify themselves with biometric data, either a fingerprint or facial recognition.

Facial recognition technology is also popular with video surveillance companies and companies making police technology. Among them is the Cambridge, Massachusetts-based firm Suspect Technologies, which advertises an “AgentAI” platform to assist police officers. In documents obtained by the ACLU of Massachusetts from the Plymouth Police Department, the company’s CEO admitted his tool might work only about 30 percent of the time, but nonetheless wanted to test it on unsuspecting people minding their own business in public spaces.

Indeed, facial recognition failures in government do not occur only in developing countries like Sri Lanka. The MTA deployed facial recognition technology in New York City earlier this year, and its technology failed to correctly recognize a single face. In the UK, officials acknowledged their pilot test of facial recognition was a disaster, with a 96 percent failure rate.

Although the facial recognition and security industries tout retail as a promising market for their products, most retailers refuse to discuss whether they use the technology. In 2018, the ACLU asked the 20 largest retailers in the US whether they use facial recognition. Eighteen of the companies refused to answer, and only one said it did not use any facial recognition technology in its stores.

Government agencies in the United States are subject to public records laws, giving organizations like the ACLU a window into their adoption of surveillance technologies like face recognition. But private companies are not subject to open records laws, and most companies deploying facial recognition technology do not want the public to know whether they’re using the tool. There are a few exceptions, however.

Saks Fifth Avenue is known to use facial recognition in its stores. Live video feeds from stores are monitored at company headquarters in New York, and facial recognition is used to run customer faces against a private database Saks maintains of “past shoplifters or known criminals.” It is unclear whether Saks stores in Massachusetts use this technology. Recently, Apple was accused in a lawsuit of misidentifying a teen as a shoplifter, leading to his arrest for a theft that occurred while he was attending his high school prom. The case is ongoing, but it presents another example of the harms of jumping to conclusions based on facial recognition software.

Retailers are interested in tracking their customers as well. Walgreens stores in Chicago are piloting a technology developed by the company Cooler Screens, which uses facial analysis software, including eye tracking, to determine the demographics of customers looking at the beverage refrigerators in its stores. Thanks to a pioneering biometric privacy law in Illinois, the Biometric Information Privacy Act (BIPA), Cooler Screens and Walgreens cannot use the technology to individually identify customers, but that feature could be used in stores in other states. In the absence of laws regulating the use of facial recognition technology, companies are working to bring more of it into their stores: Walmart has filed a patent for a system to capture customers’ facial information while they wait to check out.

There are many important unanswered questions about the use of facial recognition technology by the government and by corporations. If someone mistakenly ends up in a private database tagged as a criminal, they likely will not even know. This could cause them to be harassed in stores, possibly around the country as retailers form information-sharing networks. And because of the secrecy surrounding even the existence of these systems, an innocent person has no way of appealing their inclusion in such a database. These scenarios are not mere hypotheticals; facial recognition misidentifications are all too common.

As companies and government agencies continue to adopt this technology in more contexts, ordinary people may have little recourse to escape facial capture—little recourse, that is, besides democratic engagement. The most important thing we can do right now to stop the oncoming dystopia is to pass laws that make sure our rights keep pace with technological advancement and that protect us from flawed software that violates our privacy.

As a start, the ACLU of Massachusetts is advocating for a moratorium on the state government’s use of facial recognition technology. We need to press “pause” until the legislature has passed rigorous safeguards for our rights. Yesterday, San Francisco became the first city in the country to ban the government’s use of face surveillance technology, and Somerville, Massachusetts, looks likely to soon do the same.

Technology brings many benefits, but it also comes with risks. The only way forward is to get government involved, to make sure our rights in this democracy are paramount—no matter what companies like Amazon have to say about it.

This blog post was written by ACLU of Massachusetts intern Alex Leblang. Originally published on Privacy SOS.

Date: Wednesday, May 15, 2019 - 3:00pm

Featured image: Surveillance Camera Stock by Rama
