Blogs

Documents obtained by the ACLU of Massachusetts, published today for the first time, show that surveillance oversight ordinances are an effective means of stopping secret agreements between companies like Amazon’s Ring and local police.

For years, the ACLU and other digital rights advocates struggled with a problem: How can we keep up with the rapid pace of surveillance technology development and acquisition by state and local government agencies? Since 9/11, state and local law enforcement have been acquiring — mostly in the dark and often with federal funds — surveillance technologies like license plate readers, surveillance cameras, and even secret cellphone snooping devices called “stingrays.” All of this was happening without any democratic approval, oversight, transparency, or accountability. The plague of secret surveillance, we realized, was spreading beyond anyone’s control. Meanwhile, ordinary people were locked out not only from deliberation about how local governments ought to conduct surveillance and protect privacy, but also from the threshold question of whether certain technologies ought to be used at all.

In response to this nationwide problem, we came up with a novel solution: Instead of trying to scrutinize each new technology as it emerged, whack-a-mole style, we would instead create a democratic, transparent, and accountable process, whereby every local government agency would need to get approval from the local legislative body before acquiring a new technology. This system, which we call Community Control Over Police Surveillance (CCOPS), has now been implemented in cities and counties across the United States.

And emails recently obtained by the ACLU of Massachusetts show it works.

Last month, the Washington Post revealed that over 400 police departments across the United States have entered into secret relationships with Amazon to build out a public-private surveillance partnership using the company’s Ring surveillance doorbell. The latest reporting follows an extensive investigation by Vice’s Caroline Haskins, who in late July published the first report on the secret agreements, based on emails she obtained from a Florida police department. In early August, Haskins reported that at least 200 police departments were secretly working with Amazon to push the surveillance doorbells in their communities, essentially acting as marketing agents for the world’s most powerful technology company. Thanks to the Post’s reporting, we now know the number of participating police departments is substantially higher.

Following Haskins’ reporting, my office submitted 100 public records requests to police departments across Massachusetts. We’re still waiting for documents from some of those departments, but we are publishing emails we received from the Cambridge Police Department (CPD) now, because they tell an important story about the impact of local surveillance oversight laws.

In December 2018, the Cambridge City Council passed a comprehensive surveillance oversight law, after two years of working with the ACLU and other local advocates. That law requires local agencies, including the police, to get permission from the City Council to enter into surveillance data sharing arrangements and to acquire new surveillance technologies. The ordinance, which also applies retroactively to technologies already in use, takes effect today.

In our records request to CPD, we sought information about its communications with Amazon’s Ring, among other documents. The department’s response shows that the City Council’s new ordinance does exactly what it’s designed to do: stop secret surveillance, and ensure democratic oversight and accountability.

For months, the emails show, Ring employees emailed back and forth with members of the CPD, urging the police to enter into an agreement with Ring to create a “law enforcement portal” for Cambridge surveillance videos. Two days after the City Council’s unanimous December 11, 2018 vote to pass the surveillance oversight ordinance, Ring’s Law Enforcement Liaison/Territory Manager Brad Wentlandt wrote to the Cambridge police:

The Cambridge surveillance ordinance has been getting some national media attention and has been a topic of discussion here.

In our discussions at Cambridge we talked about how the Ring Law Enforcement Portal and Neighbors App seems to fit in with the stated desire of the Cambridge Common Council to have the police department implement a voluntary, consent-based process for gathering video.

I’d like to introduce our PR Coordinator Morgan Culbertson who is copied on this email and ask that we set up a conference call in the near future so we at Ring can get a better understanding of the surveillance ordinance and discuss your thoughts on messaging.

In response, an employee of the Cambridge police wrote:

While we appreciate your interest in connecting with us, a call at this point is a little too premature at this time. Once we make progress internally with next steps on moving forward with the ordinance, I will be sure to be in touch.

Fast forward to 2019. In February, not satisfied with that response, Ring wrote again:

It’s been awhile since we’ve connected and I wanted to reach out and see if there’s anything you need to help inform your decision relative to the Ring Law Enforcement Portal.

Cambridge police rep Jeremy Warnick again declined to move forward with the partnership, citing the surveillance oversight ordinance:

Thanks for your continued outreach. We’ll be in touch as we near a decision on whether or not to proceed with the portal. There’s still a few items related to the new ordinance that we are working through before we consider adding new platforms.

Unsatisfied, the Ring employee wrote again in June 2019:

Just wanted to check in and see if there’s any desire to move forward.

We sent our request to CPD on August 2. The emails we received do not include any response to Ring’s latest communication, indicating that as of August 14 — the date Cambridge responded to our request — CPD hadn’t written back.

Kudos to the Cambridge Police Department for following the will of the Cambridge community, and to the Cambridge City Council for passing such a strong surveillance oversight ordinance. The law goes into effect today, meaning the Council and the public will soon have access to loads of information about what kinds of surveillance technologies are currently in use in the community, and an opportunity to weigh in on whether those tools are right for Cambridge. Let the sunshine in.

Learn more about CCOPS


This blog was written by Kade Crockford, director of the ACLU of Massachusetts' Technology for Liberty Program.

September 10, 2019


On Thursday, the U.S. Court of Appeals for the Ninth Circuit became the first appellate court in the nation to directly address the privacy harms posed by face recognition technology. The decision is a significant advance in the fight against the threats of face surveillance, sounding the alarm on the potential for this technology to seriously violate people’s privacy.

In Patel v. Facebook, a group of Facebook users from Illinois allege that Facebook violated the Illinois Biometric Information Privacy Act (BIPA) by using face recognition technology on the users’ photographs without their knowledge and consent. BIPA is the oldest and strongest biometric privacy law in the country, requiring companies to obtain informed consent before collecting a person’s biometric identifiers, including face recognition scans. Importantly, the law provides individuals in Illinois with a right to sue for damages if a company has violated their rights.

Facebook’s primary argument in the case was that in order to establish “standing” to sue, the plaintiffs should have to demonstrate some concrete injury beyond a violation of BIPA's requirement of notice and consent. As we argued in an amicus brief last year, surreptitious use of face recognition technology does cause harm, by subjecting people to unwanted tracking and by leaving them vulnerable to data breaches and invasive surveillance. Given the rapid proliferation of face surveillance technology in recent years, it is critical that Illinoisans are able to enforce BIPA’s protections against unwanted collection of their biometric information. A requirement that a person must demonstrate monetary loss or similar injury in order to sue would seriously undermine BIPA’s intent to safeguard against abusive collection of biometric data in the first place.

In Thursday’s ruling, the Ninth Circuit agreed, holding that “the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests.”

To reach that conclusion, the court looked not only to the long-recognized entitlement of people to sue private parties over violations of common-law privacy rights, but also to evolving Fourth Amendment protections against law enforcement surveillance. This includes the landmark decision in Carpenter v. United States, an ACLU case about police access to cell phone location data decided last year. As the Ninth Circuit explained, drawing from language in Carpenter, “[i]n its recent Fourth Amendment jurisprudence, the Supreme Court has recognized that advances in technology can increase the potential for unreasonable intrusions into personal privacy… As in the Fourth Amendment context, the facial-recognition technology at issue here can obtain information that is ‘detailed, encyclopedic, and effortlessly compiled,’ which would be almost impossible without such technology.”

The Ninth Circuit’s ruling is important not only because it explains why surreptitious use of face recognition by corporations harms people’s privacy interests, but also because it puts law enforcement on notice that recent Supreme Court cases regulating other forms of electronic surveillance have something to say about face surveillance technology.

Indeed, the potential for this technology to enable the government to pervasively identify and track anyone (and everyone) as they go about their daily lives is one of the reasons the ACLU is urging lawmakers across the country to halt law enforcement use of face surveillance systems. This decision puts both corporations and law enforcement agencies on notice that face surveillance technology poses unique risks to people’s privacy and safety.

The Ninth Circuit’s ruling also demonstrates the importance of privacy laws including strong private rights of action, affirming people’s right to turn to the federal courts for redress when their rights have been violated. Without a right to sue, privacy guarantees will often prove ephemeral. As state legislatures and Congress move forward on consumer privacy legislation, they should follow Illinois’ lead by including private rights of action in these statutes.

Blog by Nathan Freed Wessler, staff attorney at the ACLU Speech, Privacy, and Technology Project.

August 9, 2019


A new study has found major flaws in live facial recognition trials conducted by police in the United Kingdom. Researchers said the trials violated human rights, muddled public consent, and produced false matches in 81 percent of cases.

The Met police conducted ten trials between 2016 and 2019, testing the technology at public events and spaces like sports matches, festivals, and transit hubs. The department endeavored to educate the public before the trials through an informative website, signage, and fliers handed out near the trial areas. But despite these efforts, an independent study by the University of Essex found that the trials of live facial recognition on members of the British public violated the European Convention on Human Rights (ECHR) and would likely be held unlawful in court.

The European Convention on Human Rights requires that any interference with people’s human rights must be “necessary in a democratic society.” This is known as the necessity test. The University of Essex study found that none of the risk assessments done prior to the trials sufficed to show that live face surveillance technology met this human rights test, largely due to the police department’s use of an overly broad watchlist that included not only people who had been convicted of a crime, but also anyone who had ever been detained by police. According to the researchers, the Met police failed to demonstrate that a surveillance program of this magnitude passed the necessity test.

The system was also found to be wildly inaccurate. The department’s facial recognition software, contracted from NEC in 2018, generated 42 eligible “matches” to the watchlist over the six trials that researchers observed. Of these 42, only eight turned out to be the person in question, meaning that the system falsely flagged 4 in 5 people as suspects. These false matches resulted in real intimidation of innocent people. In one troubling case, the system misidentified a 14-year-old boy, and before the control room could verify whether he was a match, five officers led him by the wrists to a side street for questioning. Researchers described the boy as “visibly distressed and clearly intimidated,” and said the incident was an example of how the technology encourages physical interventions.
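
The 81 percent error rate cited above follows directly from the counts the researchers reported; as a quick check of the arithmetic:

\[
\text{false match rate} = \frac{42 - 8}{42} = \frac{34}{42} \approx 0.81
\]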

These flaws in facial surveillance are especially telling coming from the United Kingdom, one of the most heavily surveilled nations on earth. The UK has several regulatory agencies tasked with governing surveillance, including the Office of the Biometrics Commissioner, the Information Commissioner’s Office, and the Surveillance Camera Commissioner. It has numerous laws controlling the use of surveillance data, including the Protection of Freedoms Act and the more recent Data Protection Act of 2018.

But existing regulatory frameworks in the United Kingdom are insufficient to govern a new and relatively understudied technology like face surveillance, according to the University of Essex study. Facial recognition in the UK is treated as a form of “overt surveillance,” since it runs on cameras that are noticeable and visible to the public. This kind of surveillance requires informed public consent, as set forth in the Surveillance Camera Code of Practice. The researchers found that the Met police failed to meet these informed consent standards, concluding that people were “required to make an informed decision in a very short time-frame, a factor exacerbated by the limits on prior knowledge amongst the public.”


Making matters worse, the researchers could identify no specific laws authorizing the use of facial recognition, leaving police with no explicit legal basis for their trials. Face surveillance poses unprecedented threats to personal privacy and civil liberties, and the concerns surrounding the live face surveillance trials in the UK highlight the need for laws specific to face surveillance. Folding the technology into existing surveillance law is simply not enough.

Though the Met police’s use of face recognition was only a trial, it has launched an intense national debate about the public use of this technology. While the Home Office supports the police trials, Members of Parliament on the Science and Technology Committee have recently stepped in and called for a moratorium on live facial recognition until proper regulations are in place, emphasizing that “a legislative framework on the use of these technologies is urgently needed.”

In the United States, government agencies have taken a more dangerous approach than open trials, largely rolling out the technology in secret and without any of the privacy-specific regulatory infrastructure present in the UK. But we at the ACLU of Massachusetts are fighting back, working to ban the use of the technology in municipalities and to pass a statewide moratorium on government use.

We are at a critical moment, before the technology has come into widespread use by government agencies in Massachusetts. Every time face recognition is subjected to scrutiny, as it was in this case in the UK, we find more and more reasons to be concerned. We must press pause now, before it’s too late.

This blog post was written by Technology for Liberty Program intern Sarah Powazek.

July 29, 2019

