District attorneys are the most powerful people in the criminal legal system – and they answer to you.

Early voting has already started in Massachusetts, and the general election is right around the corner on November 6. Do you know what the candidates for Plymouth County District Attorney stand for?

Join the ACLU of Massachusetts' What a Difference a DA Makes campaign on Friday, October 26, for a conversation about the issues that impact Plymouth County – like racial justice in the criminal legal system, sentencing reform, and over-incarceration in your community.

We will be joined by DA candidate John Bradley (D) – incumbent candidate Timothy Cruz (R) declined to join because of a previous commitment.

Get informed before you go out to the polls!

Check out the What a Difference a DA Makes Plymouth County Voter Guide.

Event Date

Friday, October 26, 2018, 7:00pm to 8:45pm

Venue

Bridgewater State University

Address

100 Burrill Avenue
John Joseph Moakley Center
Bridgewater, MA 02324
United States


A showdown between data privacy advocates and major tech firms is heating up in Congress, with one side arguing for robust regulations outlawing data discrimination and other exploitative practices, and the other pushing for a federal law to wipe out state-level privacy protections. On October 10, the Senate Committee on Commerce, Science, and Transportation held its second hearing on consumer privacy. In September, tech companies pleaded their case, asking Congress to pass a federal law that would preempt state-level data privacy laws. At last week’s hearing, the committee heard from consumer advocates. Unlike the big tech representatives who testified at the first hearing, these advocates focused on consumer protection.

The message from these experts was clear: The United States desperately needs to catch up with Europe and pass a strong federal consumer privacy law, which must act as a floor—not a ceiling—for data regulation nationwide.

The issue facing Congress doesn’t get enough attention given its gravity. Companies like Google, Facebook, and Amazon collect and process unprecedented quantities of extremely sensitive information about hundreds of millions of people in the United States, but remain almost entirely unregulated. The axiom that knowledge is power has never been truer than it is today. It’s not an exaggeration to say that in the Information Age—the age of big data, automation, and artificial intelligence—she who controls information controls the world.

Unregulated, big data and “black box” automated decision systems have been shown to compound existing and historic inequalities, routinely producing outcomes that benefit the powerful and harm the poor, people of color, and women. Barely a week passes without news of another catastrophic data breach, or of a tech company with God-like powers getting caught in a scandal implicating the lives of millions of people currently at the mercy of Silicon Valley’s whims.

Advocates at the hearing last week sounded the alarm, recommending that Congress draw some firm lines in the sand around what companies should be permitted to do with the vast troves of records they collect about us, and highlighting the European Union’s General Data Protection Regulation as an example worth emulating. Laura Moy of the Georgetown Law Center on Privacy & Technology called on Congress to ban companies from digital redlining, or denying consumers information about educational or employment opportunities on the basis of their race, gender, or sexual orientation. Moy questioned the value of consent-based data regulation, highlighting the disproportionate power the tech industry has in its asymmetric relationship with consumers. “Where a service is essential or unavoidable for consumers,” she warned, as so many big tech digital services are, “consent may not be freely given.”

Other advocates, like Massachusetts Senator Ed Markey, stressed the importance of protecting children’s privacy. He and Alastair Mactaggart, Board Chair of Californians for Consumer Privacy, said any federal consumer data law should include special protections for people under the age of 16, including the right for individuals to demand companies delete data amassed on them when they were juveniles. These protections are critical for all people, but especially for black and brown children already disproportionately targeted for surveillance, policing, and incarceration.

According to figures provided by the Center for Democracy and Technology’s Nuala O’Connor, people in the United States are overwhelmingly concerned that they’ve lost control over their personal information:

  • 91% of adults in the United States agree or strongly agree that consumers have lost control of how personal information is collected and used by companies.

  • 80% are concerned or very concerned about their online privacy.

  • 68% believe that current laws are not good enough at protecting privacy online.

The witnesses made clear that Americans want a comprehensive, strong consumer data privacy law, but that no such law exists today.

The ACLU and others have at times asked big tech firms to self-regulate in the interest of ensuring their tools are only used in ways that support, rather than impede, civil rights and civil liberties. But the tech companies’ performances at the September 26 Senate hearing demonstrated that when it comes to comprehensive consumer data privacy regulation, big tech won’t do the right thing on its own. Lawmakers must step in.

Company claims that regulation will harm innovation or impose untenable cost burdens are overblown. Amazon, one among many astonishingly profitable big data players, is one of the most highly valued publicly traded companies in the country, boasting more wealth than many nations. If we want to live in a free and open society, and exercise democratic control of our own lives, we must be able to control information about ourselves. Seizing this control back from major corporations requires lawmakers to act—at the federal level, and where needed, at the state level, too—in the public’s interest.

Data should never be used to discriminate, compromise individual liberty, or disrupt democracy. No matter what compromise emerges on these issues at the federal level, Congress must reject tech industry calls to prohibit states from enacting their own protections. The choices our lawmakers make now will have vast repercussions for the future. Let’s choose wisely.

This blog post was co-authored by Kade Crockford and Siri Nelson. Originally posted on Privacy SOS.

Date

Monday, October 22, 2018 - 3:00pm


The use of face surveillance technologies is fast becoming the norm in government and the private sector, but a new survey by the Brookings Institution shows that the agencies and companies rushing to deploy this technology without any restraints are out of step with the desires of most Americans. An overwhelming majority—nearly 70 percent—of people surveyed said they support the regulation of face surveillance technology, with half calling for regulation of law enforcement’s use of the tool.

As privacy scholars Woodrow Hartzog and Evan Selinger recently argued, face surveillance is the “perfect tool for oppression.” The technology uses a type of biometric identification that measures the distances between points on a face to create a face template. These templates are then fed into algorithms that compare a face captured in a digital image or video frame against a database of stored images or templates, looking for a match.
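To make that matching step concrete, here is a minimal sketch in Python. The template vectors, the find_match helper, and the 0.25 distance threshold are all invented for illustration; real systems extract high-dimensional embeddings from images and search far larger databases, but the basic comparison works the same way: score a probe template against every stored template and declare a match when the best score clears a threshold.

```python
import math

# Hypothetical face "templates": short fixed-length vectors standing in for
# the measurements a real system derives from facial landmarks. Real systems
# use learned embeddings with hundreds of dimensions and databases holding
# millions of images; these values are illustrative only.
database = {
    "person_a": [0.42, 0.91, 0.33, 0.58],
    "person_b": [0.10, 0.77, 0.64, 0.21],
}

def distance(t1, t2):
    # Euclidean distance between two templates; smaller means more alike.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)))

def find_match(probe, threshold=0.25):
    # Compare the probe against every stored template and return the closest
    # identity, but only if it falls under the match threshold.
    best_name, best_dist = None, float("inf")
    for name, template in database.items():
        d = distance(probe, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# A template extracted from a new camera frame (assumed input).
probe = [0.40, 0.93, 0.35, 0.55]
print(find_match(probe))  # prints "person_a" if the distance clears the threshold
```

Where that threshold sits governs the trade-off between false matches and missed matches, a trade-off central to concerns about misidentification.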

Right now, small start-ups and big tech institutions like Amazon are marketing face surveillance tech to schools, shopping malls, app developers, and airports, while consumer-facing companies like JetBlue, Live Nation, Nike, Apple, and Virgin are pushing the technology on us in public and private space, whether we want it or not. A web search for the words “face recognition technology in marketing” returns over 49 million results.

As Hartzog and Selinger write in their essay calling for a face surveillance ban, “The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled.” The privacy scholars sketch out a short list of the harms that will inevitably befall us in a face-tracked world:

Clearly, the most threatening and dystopian deployments of face surveillance we will encounter arise in the government context. Face recognition technology in a CCTV-blanketed world enables perfect government surveillance, meaning it eradicates anonymity and privacy in public (and potentially even some private) space. This technology gives already unaccountable, discriminatory, and opaque government agencies like the police and federal intelligence agencies unprecedented power to invisibly create comprehensive and persistent records of people’s movements, interests, and associations. It is the stuff of dystopian horror come to life.

Americans are rightfully concerned about this near-future world. The Brookings survey asked people about their views of face recognition in four contexts: retail stores, airports, stadiums, and schools. In no case did a majority of people surveyed feel favorably about the deployment of this technology. Half reported that they oppose the use of face surveillance technology in the retail context, while only 27 percent supported it. In the airport context, 44 percent expressed concern about using the technology for identification and security purposes, while only 31 percent supported it.

Despite these concerns, the federal government is plowing ahead with plans to deploy face surveillance systems in airports throughout the country. In September, the Transportation Security Administration (TSA) released its roadmap for expanding biometrics in airports. Among other things, the report reveals that the agency plans to work with Customs and Border Protection (CBP) to link up existing image and tracking systems, with the aim of using face surveillance to track people from the moment they step inside the airport until the moment they board their plane. The goal appears to be to replace all existing forms of identification with one: your face.

Fast technology, slow law

Exponential advances in processing power make these systems inexpensive and relatively easy to deploy, enabling persistent mass surveillance and tracking in real time and retroactively. The falling cost of computing power, storage, and web-connected surveillance cameras removes a traditional limit on the government’s ability to deploy extensive networks built on this technology. But while the technology is developing rapidly, enabling the widespread deployment of face surveillance in all areas of our lives, the law has failed to keep pace. There is currently no federal law explicitly addressing the technology, and the vast majority of states have no statutes on the books protecting us from exploitative face tracking or repressive surveillance.

As Hartzog and Selinger argue in their call to arms against face surveillance technology, the deployment of these systems infringes on privacy and chills the exercise of protected speech, association, and religious activity, posing threats to individual rights and liberties and to the safety of communities of color, immigrants, and activists. For these reasons, the ACLU has joined with dozens of national and local civil rights and racial justice partners to call on major tech companies like Amazon to stop selling face surveillance systems to government agencies.

According to the Brookings survey, these civil society groups are acting in accordance with the views of most people in the country. If Brookings is right, half of Americans believe that there should be limits on the use of facial recognition software by law enforcement, and half of those surveyed said they oppose the creation of face surveillance databases. Nearly 70 percent want some form of government regulation.

Meanwhile, tech workers at companies building these dystopian technologies are raising the alarm, calling on their executives to stop selling them to law enforcement agencies. At a Wired conference this week, Jeff Bezos defended working with the government, arguing that if companies like Amazon refuse government contracts, “this country is going to be in trouble.” Bezos said controversial tools like Amazon’s face-tracking Rekognition product, like the printing press, are “two sided.” “The book was invented,” he said, “and people could write really evil books and lead bad revolutions with them and create fascist empires with books.” Bezos argued that society’s “immune response” will “eventually” mitigate any harms arising from the use of persistent surveillance tools like Rekognition.

The day after Bezos made these astonishing comments, an anonymous Amazon employee published a public letter calling for his employer to stop selling face surveillance technology to the police. The concerns aren’t hypothetical, the employee wrote: “Amazon is designing, marketing, and selling a system for dangerous mass surveillance right now.” An immune system responds to attacks, but some attacks can be deadly. Face surveillance that undermines Americans’ ability to exercise our First Amendment rights could be the kind of attack that overwhelms our democratic immune system. For that reason, Amazon and other companies should stop selling it to government agencies until Congress and state legislatures can take meaningful action to protect the public from its inevitable harms.

The Brookings survey on Americans’ views of face recognition technology shows that, despite the current lack of regulation, people are increasingly aware of the dangers this technology poses. Now it’s up to us to translate that awareness into action.

This blog post was co-authored by Kade Crockford and Emiliano Falcon. Originally posted on Privacy SOS.

Date

Friday, October 19, 2018 - 2:15pm

