Blogs

On Tuesday, July 30, Cambridge Mayor Marc McGovern and Councilors Craig Kelley and Sumbul Siddiqui will introduce a measure to ban municipal government use of face surveillance technology. If the measure passes, Cambridge would become the second city in Massachusetts, after Somerville, to take the step. In June, San Francisco became the first city in the country to ban the technology in government hands, and Oakland is set to do the same.

If you are a Cambridge resident concerned about the spread of this dystopian technology, please email your public comment. Details below.

  • WHAT: Introduction of face surveillance ban (on government use) in Cambridge
  • WHERE: Cambridge City Hall, Council Chambers
  • WHEN: Tuesday, July 30, 5:30pm
  • WHO: Cambridge residents

Details: At Tuesday’s City Council meeting, the Mayor of Cambridge will introduce a measure to ban municipal government entities from using face surveillance technology and information derived from it. You can read the text of the ordinance here.

How you can help: Please submit written testimony to the council in support of the ban! You can submit written comments to the council by emailing council@cambridgema.gov, clerk@cambridgema.gov, and mayor@cambridgema.gov.

Face surveillance technology poses unprecedented threats to privacy, free speech, and racial and gender justice. Studies, including research conducted at MIT in Cambridge, have shown the technology is highly inaccurate when evaluating the faces of Black women, with error rates of up to 35 percent for that demographic. The tech is dangerous when it doesn’t work, and it’s dangerous when it does. People should be able to walk around Cambridge, attend protests, seek medical treatment, and visit friends and family without worrying that government agencies are keeping track of their every movement.

You can’t leave your face at home, meaning this technology poses extreme risks to our privacy and security as individuals and as a community. We look forward to Cambridge passing this ordinance to join Somerville, San Francisco, and Oakland, to protect civil rights and racial justice for all.

Date

Friday, July 26, 2019 - 1:15pm


Emotion recognition is a hot new area, with numerous companies peddling products that claim to be able to read people’s internal emotional states, and AI researchers looking to improve computers’ ability to do so. This is done through voice analysis, body language analysis, gait analysis, eye tracking, and remote measurement of physiological signs like pulse and breathing rates. Most of all, though, it’s done through analysis of facial expressions.

A new study, however, strongly suggests that these products are built on a bed of intellectual quicksand.

The key question is whether human emotions can be reliably determined from facial expressions. “The topic of facial expressions of emotion — whether they’re universal, whether you can look at someone’s face and read emotion in their face — is a topic of great contention that scientists have been debating for at least 100 years,” Lisa Feldman Barrett, Professor of Psychology at Northeastern University and an expert on emotion, told me. Despite that long history, she said, a comprehensive assessment of the emotion research conducted over the past century had never been undertaken. So, several years ago, the Association for Psychological Science brought together five distinguished scientists from various sides of the debate to conduct “a systematic review of the evidence testing the common view” that emotion can be reliably determined by external facial movements.

The five scientists “represented very different theoretical views,” according to Barrett, who was one of them. “We came to the project with very different expectations of what the data would show, and our job was to see if we could find consensus in what the data shows and how to best interpret it. We were not convinced we could, just because it’s such a contentious topic.” The process, expected to take a few months, ended up taking two years.

Nevertheless, in the end, after reviewing over 1,000 scientific papers in the psychological literature, these experts came to a unanimous conclusion: there is no scientific support for the common assumption “that a person’s emotional state can be readily inferred from his or her facial movements.”

The scientists conclude that there are three specific misunderstandings “about how emotions are expressed and perceived in facial movements.” The link between facial expressions and emotions is not reliable (i.e., the same emotions are not always expressed in the same way), specific (the same facial expressions do not reliably indicate the same emotions), or generalizable (the effects of different cultures and contexts have not been sufficiently documented).

As Barrett put it to me, “A scowling face may or may not be an expression of anger. Sometimes people scowl in anger, sometimes you might smile, or cry, or just seethe with a neutral expression. Also, people scowl at other times — when they’re confused, when they’re concentrating, when they have gas.”

The scientists conclude:

These research findings do not imply that people move their faces randomly or that [facial expressions] have no psychological meaning. Instead, they reveal that the facial configurations in question are not “fingerprints” or diagnostic displays that reliably and specifically signal particular emotional states regardless of context, person, and culture. It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts.

This paper is significant because an entire industry of automated purported emotion-reading technologies is quickly emerging. As we wrote in our recent paper on “Robot Surveillance,” the market for emotion recognition software is forecast to reach at least $3.8 billion by 2025. Emotion recognition (aka “affect recognition” or “affective computing”) is already being incorporated into products for purposes such as marketing, robotics, driver safety, and (as we recently wrote about) audio “aggression detectors.”

Emotion recognition is based on the same underlying premise as polygraphs aka “lie detectors”: that physical body movements and conditions can be reliably correlated with a person’s internal mental state. They cannot — and that very much includes facial muscles. What is true of facial muscles, it stands to reason, would also be true of all the other methods of detecting emotion such as body language and gait.

The belief that such mind reading is possible, however, can do real harm. A jury’s cultural misunderstanding about what a foreign defendant’s facial expressions mean can lead them to sentence him to death, for example, rather than prison. Translated into automated systems, that belief could lead to other harms; a “smart” body camera falsely telling a police officer that someone is hostile and full of anger could contribute to an unnecessary shooting.

As Barrett put it to me, “there is no automated emotion recognition. The best algorithms can encounter a face — full frontal, no occlusions, ideal lighting — and those algorithms are very good at detecting facial movements. But they’re not equipped to infer what those facial movements mean.”

Blog by Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project.

Date

Thursday, July 18, 2019 - 10:00am


You’ve seen it on TV or in the movies: police are investigating a crime, and they ask a witness to pick the culprit from a lineup — a handful of suspects, maybe a half-dozen of them. Sometimes they pick the right person. Other times, the witness gets it wrong.

Now consider this: at this very moment, you may be in a lineup yourself — though you wouldn’t know it. Right now, someone, somewhere, could be comparing a picture of your face with a picture of a suspect. Nobody will ever tell you this is happening, and you will never get the chance to opt out or contest your inclusion. No judge signed off on this process, and no elected official authorized it. In this case, instead of a person identifying you, it’s a piece of software — one that’s known to make mistakes. Even if it doesn’t flag you as a match this time, your photo will always be there in the database, ready for the next search. This lineup never ends.

It’s not science fiction; it’s the status quo — but we have a plan.

Last week, news broke that federal agencies like ICE and the FBI are secretly accessing state driver’s license databases to scan millions of photos — using face surveillance technology — in order to identify possible suspects. Documents uncovered by researchers at Georgetown’s Center for Privacy and Technology show that it can be shockingly easy for federal agents to access this highly sensitive data — sometimes all it takes is an email. That means millions of people with driver’s licenses are subject to invasive searches without their knowledge or consent, every day. Federal agents do not need a warrant or even probable cause to scan your photo — the practice is entirely unregulated in Massachusetts and nationwide. Even worse, face surveillance technology is prone to serious errors, misidentifying Black women up to 35 percent of the time in some systems.
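The scale problem here can be made concrete with a rough calculation. The figures below are illustrative assumptions, not numbers from the Georgetown documents or the records requests: even a matcher with a seemingly low false-match rate, run against a database of millions of license photos, can implicate thousands of innocent people in a single search.

```python
# Back-of-the-envelope sketch (all numbers are hypothetical assumptions):
# an imperfect face matcher searched against a large photo database
# flags many innocent people as possible "matches."

database_size = 5_000_000   # assumed number of license photos searched
false_match_rate = 0.001    # assumed chance a non-matching photo is flagged

expected_false_matches = int(database_size * false_match_rate)
print(expected_false_matches)  # 5000 innocent "matches" from one search
```

Under these assumed numbers, one search produces thousands of false leads, which is why the error rate of the software matters as much as who is allowed to run the search.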

Face surveillance technology poses unprecedented threats to our civil rights and civil liberties, and across the country, there’s a growing movement to “press pause” on this dystopian trend. The cities of San Francisco, Somerville and — just this week — Oakland have all passed ordinances banning municipal use of face surveillance.

Massachusetts voters get it: an ACLU poll shows 76 percent of voters do not think the government should be able to monitor and track people with face surveillance technology. We agree – and that’s why the ACLU of Massachusetts has been working on this issue and raising concerns about the dangers of face surveillance technology for some time.

In February 2019, we asked MassDOT to hand over all documents related to the use of their Registry of Motor Vehicles (RMV) database for face surveillance. In April, we asked for information about how the RMV shares its database with law enforcement, and how often they run face scans. MassDOT and the RMV ignored us, so last week we filed suit. From prior public records work, we know the state RMV has been using face surveillance technology since 2006, absent any legislative authorization or independent oversight. But we know virtually no details about how many times the technology has been used to identify people, for what reasons, or in which circumstances. Secrecy surrounding government use of face surveillance is intolerable in a free society.

In response to recent news reports, MassDOT said that “the RMV cooperates with law enforcement on specific case by case queries related to criminal investigations, but does not provide system access to federal authorities and is not negotiating to do so.”

Here's why that answer doesn’t satisfy us, and why we can’t take the government’s word for it that their use of face surveillance is above board:

First, privacy advocates are not alleging that law enforcement agents have “system access” to the driver’s license database. In other words, we don’t have reason to believe that ICE or the FBI can log in to state systems whenever they want. The problem is that when government agencies want to run a face surveillance scan, there are no checks and balances in place to make sure the system isn’t misused or abused. As far as we know, all it takes is a request submitted via email. Just as a police officer shouldn’t be able to rifle through your home computer on a whim, government agencies shouldn’t be able to use your photo in a virtual lineup whenever they want.

Second, it doesn’t particularly matter if the RMV cooperates on a “case by case basis.” What matters most is how often the RMV receives face surveillance requests from law enforcement, how often they accept or refuse, and on what basis. If the RMV accepts all requests they receive, then “case by case” doesn’t mean much, since every case gets handled the same way. The truth is that we have no idea how the RMV shares its driver’s license database. That’s why we’re suing to find out. We have a right to know how the government is using the faces of ordinary people in Massachusetts in dragnet surveillance operations.

Finally, even if the RMV had a perfect answer to our questions, it means nothing if we can’t see for ourselves how the system works. The agency acquired face recognition software in 2006; yet, in the 13 years since, we have learned virtually nothing about how and when it has been used. Face surveillance technology gives the government unprecedented power to track who we are, where we go, what we do, and who we know. But despite these profound dangers, there are no laws establishing safeguards for privacy, free speech, racial justice, or due process. It’s the wild west, and with the stakes so high, vague assurances from the government aren’t enough.

At a time when the government is using flawed, experimental, and unregulated technology to conduct warrantless scans of our personal information, transparency is the first step toward accountability.

Face surveillance is a clear and present danger to our civil liberties, and the ACLU is fighting back. In June 2019, we launched “Press Pause on Face Surveillance,” a campaign to build awareness about the civil liberties concerns posed by face surveillance technology. We’re calling for municipalities to ban government use of face surveillance technology, and for the state legislature to pass a statewide moratorium on government use.

In the hands of authoritarian governments, face surveillance is a powerful tool of oppression. It’s not too late to make sure the technology doesn’t get out ahead of our basic rights. If we want to prevent a dystopian police state right here in Massachusetts, we must take action to ensure our lawmakers act. Join us.


Date

Thursday, July 18, 2019 - 10:30am

