Weak "Guardrails" on Police Face Recognition Use Make Things Worse

3 weeks 2 days ago

Police use of face recognition technology (FRT) poses a massive risk to our civil liberties, particularly for Black men and women and other members of marginalized communities. That's why EFF supports a ban on government use of FRT. Half-measures aren't up to the task.

However, even as half-measures go, California's legislature is currently considering a particularly weak proposal in the form of A.B. 1814, authored by Asm. Phil Ting (San Francisco). It would introduce paltry limits that would do nothing to address the many problems that police use of face recognition raises. In fact, the bill's language could make things worse in California.

For example, in light of public pressure, major police departments in California have pledged not to use Clearview AI, a company that has been sued repeatedly for building its face recognition database from scraped social media posts. But A.B. 1814 expressly gives police departments the right to access "third-party databases," including Clearview AI's. This could give law enforcement agencies cover to use databases they have previously distanced themselves from, eroding the progress civil liberties advocates have already made. The bill also gives police access to any state database, even if the images were not collected for law enforcement purposes.

California should not give law enforcement a green light to mine databases, particularly those built for completely different purposes. People do not expect, when they give their information to one database, to learn later that it has been feeding police face surveillance.

Finally, A.B. 1814 fails to even meet the bar of restrictions other police departments have agreed to adopt. As we have previously written, the Detroit Police Department agreed to limits on its use of face recognition technology after it falsely arrested Robert Williams as a result of an incorrect face recognition "match." As part of these limits, the Detroit police have agreed to "bar arrests based solely on face recognition results, or the results of the ensuing photo lineup." Their agreement also affirms that prosecutors and defense attorneys will have access to information about any uses of FRT in cases where law enforcement files charges.

The California bill does not include even these safeguards. It says that police cannot use a database match as the sole basis for an arrest, but it would allow a photo lineup based on a match to count as a second technique. That puts someone who merely looks like the suspect in front of witnesses, who may well pick that person even if they are an entirely different person. It lets law enforcement agencies easily clear the low bar the bill sets.

A.B. 1814 is sitting in the Senate Appropriations Committee. EFF has joined with dozens of civil liberties organizations to urge the committee not to advance the bill. If it does move forward, we'll be asking you to help us fight it on the Senate floor.

Proponents of the bill have argued that, essentially, it is better to do something than have no guardrails in place. But this something? It's worse than nothing—by a long shot.

Hayley Tsukayama

EFF and 12 Organizations Tell Bumble: Don’t Sell User Data Without Opt-In Consent

3 weeks 2 days ago

Bumble markets itself as a safe dating app, but it may be selling your deeply personal data unless you opt out—risking your privacy for its profit. Despite repeated requests, Bumble hasn't confirmed whether it sells or shares user data, and its policy is also unclear about whether all users can delete their data, regardless of where they live. The company has also previously struggled with security vulnerabilities.

So EFF has joined the Mozilla Foundation and 11 other organizations in urging Bumble to do a better job of protecting user privacy.

Bumble needs to respect the privacy of its users and ensure that the company does not disclose a user's data unless that user opts in to such disclosure. This privacy threat should not be something users have to opt out of. Protecting personal data should be effortless, especially from a company that markets itself as a safe and ethical alternative.

Dating apps collect vast amounts of intimate detail about their customers, who are often just searching for compatibility and love—everything from sexual preferences to precise location. This data falling into the wrong hands can carry unacceptable consequences, especially for those seeking reproductive health care, survivors of intimate partner violence, and members of the LGBTQ+ community. For this reason, the bar for a company collecting, selling, and transferring such personal data—and for providing transparency about its privacy practices—is high.

The letter urges Bumble to:

  1. Clarify in unambiguous terms whether or not Bumble sells customer data. 
  2. If the answer is yes, identify what data or personal information Bumble sells and to which partners, noting in particular whether any of those companies would be considered data brokers.
  3. Strengthen customers' consent mechanism to opt in to the sharing or sale of data, rather than opt out.

Read the full letter here.

Paige Collings

EFF Tells Yet Another Court to Ensure Everyone Has Access to the Law and Reject Private Gatekeepers

3 weeks 3 days ago

Our laws belong to all of us, and we should be able to find, read, and comment on them free of registration requirements, fees, and other roadblocks. That means private organizations shouldn’t be able to control who can read and share the law, or where and how we can do those things. But that’s exactly what some industry groups are trying to do.

EFF has been fighting for years to stop them. The most recent instance is ASTM v. Upcodes. ASTM, an organization that develops technical standards, claims it retains copyright in those standards even when they’ve become binding law through “incorporation by reference.” When a standard is incorporated “by reference,” that means its text is not actually reprinted in the body of the government’s published regulations. Instead, the regulations include a citation to the standard, which means you have to track down a copy somewhere else if you want to know what the law requires.

Incorporation by reference is common for a wide variety of laws governing the safety of buildings, pipelines, consumer products and so on. Often, these are laws that affect us directly in our everyday lives—but they can also be the most inaccessible. ASTM makes some of those laws available for free, but not all of them, and only via "reading rooms" that are hard to navigate and full of restrictions. Services like UpCodes have emerged to try to bridge the gap by making mandatory standards more easily available online. Among other things, UpCodes has created a searchable online library of some of the thousands of ASTM standards that have been incorporated by reference around the country. According to ASTM, that's copyright infringement.

EFF litigated a pair of cases on this issue for our client Public.Resource.Org (or "Public Resource"). We argued there that incorporated standards are the law, and no one can own copyright in the law. And in any event, it's a fair use to republish incorporated standards in a centralized repository that makes them easier to access and use. In December 2023, the D.C. Circuit Court of Appeals ruled in Public Resource's favor on fair use grounds.

Based on our experience, we filed an amicus brief supporting UpCodes, joined by Public Knowledge and iFixit, Inc. and with essential support from local counsel Sam Silver and Abigail Burton at Welsh & Recker. Unlike our cases for Public Resource, in UpCodes the standards at issue haven't been directly incorporated into any laws. Instead, they're incorporated by reference into other standards, which in turn have been incorporated into law. As we explain in our brief, this extra degree of separation shouldn't make a difference in the legal analysis. If the government tells you, "Do what Document A says," and Document A says, "Do what Document B says," you're going to need to read Document B to know what the government is telling you to do.

TAKE ACTION

Tell Congress: Access To Laws Should Be Fully Open

At the same time that we’re fighting this battle in the courts, we’re fighting a similar one in Congress. The Pro Codes Act would effectively endorse the claim that organizations like ASTM can “retain” copyright in codes, even after they are made law, as long as they make the codes available through a “publicly accessible” website—which means read-only, and subject to licensing limits. The Pro Codes Act recently fell short of the necessary votes to pass through the House, but it’s still being pushed by some lawmakers.

Whether it’s in courts or in Congress, we’ll keep fighting for your right to read and share the laws that we all must live by. A nation governed by the rule of law should not tolerate private control of that law. We hope the court in UpCodes comes to the same conclusion.

Cara Gagliano

Support Justice for Digital Creators and Tech Users

3 weeks 4 days ago

People work at EFF because they believe in wringing justice from a world that’s often unfair. For us, setting things right means legal work, activism, convincing policymakers, and creating tech tools to tip the balance of power back toward you. Will you move that mission forward by supporting EFF?

Join EFF

Support Digital Creators and Tech Users

This week, many EFFers will head to the Las Vegas hacker conferences—BSidesLV, Black Hat USA, and DEF CON—to rally behind researchers and tinkerers. EFF gives legal advice to folks like them all year because computer security has always relied on skilled hackers, and your privacy and free expression rely on strong web security. Check our conference Deeplinks post to get a full rundown of EFF's presentations and activities in Las Vegas.

For Justice

EFF's member t-shirt design for this year's DEF CON is inspired by the 11th card of the tarot: Justice. Justice can be challenging, and it can be slow. But that comes with the territory when your goal is truth and integrity. Your choices have meaningful consequences, so I hope you will support a better future for your privacy and free expression today.

Support EFF’s work at the Gold membership level and (for a short time!) you can choose EFF’s DEF CON 32 t-shirt design with some dazzling glow-in-the-dark details. The path to Justice will lead you to this year’s puzzle challenge, too! Donate today or even set up an easy automatic monthly donation. Help EFF spread the word about this “Virtual Vegas” membership week! Here’s some language you can use to share with friends:

Support privacy, free speech, and digital creators! EFF members have fought for rights online for decades, and it's more important now than ever before. https://eff.org/VV

Facebook | LinkedIn | Twitter/X

EFF takes tough stances and tackles complicated problems for tech creators and users like you because it’s the right thing to do. Please help fight for everyone’s freedom online by joining EFF.

Join EFF

Defend Digital Freedom

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Aaron Jue

EFF at the Las Vegas Hacker Conferences

3 weeks 4 days ago

Las Vegas is blazing hot, and that means it's time for EFF to return to the hacker summer camp conferences—BSidesLV, Black Hat USA, and DEF CON—to rally behind computer security researchers and tinkerers. EFF is glad to support members of this community all year long. Computer security has always relied on skilled hackers, and your privacy and free expression rely on strong web security. Below you will find all of EFF's scheduled talks and activities at the conferences.

As in past years, EFF staff attorneys will be present to help support speakers and attendees. If you have legal concerns regarding an upcoming talk or sensitive infosec research that you are conducting at any time, please email info@eff.org. Outline the basic issues and we will do our best to connect you with the resources you need. Read more about EFF's work defending, offering legal counsel, and publicly advocating for technologists on our Coders' Rights Project page.

EFF staff members will be on hand in the expo areas of all three conferences. You may encounter us in the wild elsewhere, but we hope you stop by the EFF tables to talk to us about the latest in online rights, get on our action alert list, or become an EFF member. We'll also have our limited-edition DEF CON 32 member t-shirts on hand starting Friday, or you can snag yours online today! This year's DEF CON member t-shirt is inspired by the 11th card of the tarot. The path to Justice will lead you to this year's puzzle challenge—give it a try!

EFF Staff Presentations

Ask the EFF Panel at BSidesLV
At this interactive session, our panelists will share updates on critical digital rights issues and EFF's ongoing efforts to safeguard privacy, combat surveillance, and advocate for freedom of expression.
WHEN: Wednesday, August 7, 18:00
WHERE: Skytalks at the Tuscany Suites Hotel & Casino

Bricked & Abandoned: How To Keep The IoT From Becoming An Internet of Trash
After years of warnings from the cybersecurity community, alarms are finally sounding in the halls of power. But more is needed: a clarion call to reset, to redefine ownership and security in an age of smart, connected devices before it's too late. In this panel you'll be enlisted to join the fight. You'll hear from experts working at the forefront of a fight to challenge the status quo and seek solutions to safeguard our digital futures. Are you ready to stand up for your right to a secure, connected world? The battle for control, for transparency, and for a sustainable and resilient digital future begins now!
WHEN: Friday, August 9, 17:00-17:45
WHERE: LVCC - L1 - HW1-11-01 (Track 1)

Ask the EFF at DEF CON 32
Our expert panelists will offer brief updates on EFF's work defending your digital rights, before opening the floor for attendees to ask their questions. This dynamic conversation centers on the challenges DEF CON attendees actually face, and it is an opportunity to connect on common causes.
WHEN: Friday, August 9, 18:00-19:30
WHERE: DEF CON Room 307-308

DEF CON Keynote: Disenshittify or die! How hackers can seize the means of computation and build a new, good internet that is hardened against our asshole bosses' insatiable horniness for enshittification.
Join this DEF CON keynote address with author and EFF Special Advisor Cory Doctorow. The enshittification of the internet wasn't inevitable. The old, good internet gave way to the enshitternet because we let our bosses enshittify it. We took away the constraints of competition, regulation, interop and tech worker power, and so when our bosses yanked on the big enshittification lever in the c-suite, it started to budge further and further, toward total enshittification. A new, good internet is possible - and necessary - and it needs you.
WHEN: Saturday, August 10, 12:00-12:45
WHERE: DEF CON L1 - HW1-11-01 (Track 1)

EFF Benefit Poker Tournament at DEF CON 32

We’re going all in on internet freedom. Join special guest hosts Tarah Wheeler and Cory Doctorow to face off with your competition at the tables—and benefit EFF! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Every participant will receive a custom EFF deck of cards celebrating the tournament! Join us in the Horseshoe Poker Room as a player or spectator. Play for glory. Play for money. Play for the future of the web.
WHEN: Pre-tournament clinic on Friday, August 9, 11:00-12:00, Live tournament on Friday, August 9, 12:00-15:00
WHERE: Horseshoe Poker Room | 3645 Las Vegas Blvd Overpass, Las Vegas, NV 89109

Tech Trivia Contest at DEF CON 32

Join us for some tech trivia on Saturday, August 10 at 6:30 PM! EFF's team of technology experts have crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Trophy and EFF swag pack. The second and third place teams will also win great EFF gear.
WHEN: Saturday, August 10, 18:30-21:30
WHERE: DEF CON Room 307-308

Meet the EFA at DEF CON 32

Rory & Chris from the organizing team will be hosting space for Electronic Frontier Alliance members to network in person at DEF CON. It's also open to anyone interested in joining the EFA!
WHEN: Friday, August 9, 19:30 - 20:30
WHERE: DEF CON Room 307-308

Beard and Moustache Contest at DEF CON 32

Yes, it's exactly what it sounds like. Join EFF at the intersection of facial hair and hacker culture. Spectate, heckle, or compete in any of four categories: Full Beard, Partial Beard, Moustache Only, or Freestyle (anything goes, so create your own facial apparatus!). Prizes! Donations to EFF! Beard oil! Get the latest updates.
WHEN: Saturday, August 10, 11:00-13:00
WHERE: DEF CON Contests Room (Look for the Moustache Flag)

Join the Cause!

Come find our table at BSidesLV (Middle Ground), Black Hat USA (back of the Business Hall), and DEF CON (Vendor Hall West) to learn more about the latest in online rights, get on our action alert list, or donate to become an EFF member. We'll also have our limited-edition DEF CON 32 shirts available starting Friday at DEF CON! These shirts have a puzzle incorporated into the design. You don't need to be a hacker to give it a try!

Join EFF

Support Security & Digital Innovation

Aaron Jue

To Fight Surveillance Pricing, We Need Privacy First

3 weeks 5 days ago

Digital surveillance is ubiquitous. Corporate snoops collect information about everything we do, everywhere we go, and everyone we communicate with. Then they compile it, store it, and use it against us.  

Increasingly, companies exploit this information to set individualized prices based on personal characteristics and behavior. This “surveillance pricing” allows retailers to charge two people different prices for the exact same product, based on information that the law should protect, such as your internet browsing history, physical location, and credit history. Fortunately, the Federal Trade Commission (FTC) is stepping up with a new investigation of this dangerous practice.  

What Is Surveillance Pricing? 

Surveillance pricing analyzes massive troves of your personal information to predict the price you would be willing to pay for an item—and charge you accordingly. A retailer can charge a higher price when it thinks you can afford to spend more (on payday, for example), or when you need something the most, such as in an emergency.

For example, in 2019, investigative journalists revealed that prices on the Target app increased depending on a user’s location. The app collected the user’s geolocation information. The company charged significantly higher prices when a user was in a Target parking lot than it did when a user was somewhere else. These price increases were reportedly based on the assumption that a user who has already traveled to the store is committed to buying the product, and is therefore willing to pay more, whereas other shoppers may need a greater incentive to travel to the store and purchase the product. 

Similarly, Staples used customers' location information to charge higher online prices to those with fewer options nearby. The website did this by offering lower prices to customers located within approximately 20 miles of a brick-and-mortar OfficeMax or Office Depot store.
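
To make these mechanics concrete, here is a minimal sketch of how location-based surveillance pricing could work. It is purely illustrative: the function name, thresholds, and prices below are invented for this post, not drawn from Target's or Staples' actual systems.

```python
# Illustrative sketch of location-based surveillance pricing. The function,
# thresholds, and prices are invented for this post, not taken from any
# retailer's real system.

def quote_price(base_price: float,
                miles_to_nearest_competitor: float,
                in_store_parking_lot: bool) -> float:
    """Return an individualized price computed from location signals alone."""
    price = base_price
    if in_store_parking_lot:
        # A shopper who already drove to the store is assumed to be
        # committed to the purchase, so the model charges more.
        price *= 1.15
    if miles_to_nearest_competitor <= 20:
        # A shopper near a rival store gets a discount as an incentive.
        price *= 0.90
    return round(price, 2)

# Two people, same product, different prices:
print(quote_price(29.99, miles_to_nearest_competitor=2, in_store_parking_lot=False))  # 26.99
print(quote_price(29.99, miles_to_nearest_competitor=50, in_store_parking_lot=True))  # 34.49
```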

Surveillance Pricing Hurts Us All 

The American privacy deficit makes surveillance pricing possible. Unlike many other countries, the U.S. lacks a comprehensive privacy law. As a result, companies can often surveil us with impunity. Unregulated data brokerages buy and sell the enormous amounts of information generated every time you swipe a credit card, browse the internet, visit the doctor, drive your car, or simply move through the world while in possession of a mobile phone. And it is difficult to shield yourself from prying eyes.  

Corporate surveillance yields comprehensive—but often inaccurate and unappealable—personal profiles. Surveillance pricing uses these profiles to set prices for everything from homes to groceries.  

This is fundamentally unfair. You have a human right to privacy (even though U.S. lawmakers haven’t updated privacy laws in decades). You shouldn’t be spied on, period. And constant surveillance pricing compromises your ability to freely use the internet without fear of adverse economic consequences.  

Worse, surveillance pricing will often have disparate impacts on people of color and those living in poverty, who have historically suffered greater price increases when companies adopted AI-powered pricing tools. For example, an algorithmic pricing model used by the Princeton Review—a test prep company—allegedly charged higher prices to Asian American customers than to customers of other racial backgrounds. Likewise, ridesharing apps—such as Uber and Lyft—have charged higher fares to residents of neighborhoods with more residents of color and residents living below the poverty line. 

Further, surveillance pricing tools are notoriously opaque. Lack of transparency into pricing decisions makes it difficult for customers and regulators to assess harms and seek redress for these problems.  

Surveillance pricing—a form of “price gouging,” according to U.S. Sen. Sherrod Brown—may also suppress market competition.  It incentivizes the continuous, fine-grained extraction of your data, because it offers big companies a competitive advantage—and the ability to charge higher prices—by collecting more personal information than their competitors. This fosters a race to the bottom that rewards companies that win by violating our privacy rights. And it puts smaller competitors at a disadvantage when they don’t have reams of intimate data about their potential customers. 

Consumers know that surveillance pricing is unfair, but our legal rights to resist it are exceedingly limited. Some websites simply ignore browsers’ requests not to be tracked. Others have even charged higher prices to consumers who use digital privacy tools to prevent tracking. For example, they increase regular prices, and then offer discounts only to customers who allow the companies to collect their data. This kind of pay-for-privacy scheme undermines your personal choices, and disproportionately harms people who can’t afford to pay for their basic rights. 

Putting a Stop to Surveillance Pricing 

This is a critical time to resist surveillance pricing: most vendors have not adopted it yet. Correcting course is still possible, and it’s vital for our right to privacy.  

Good news: the FTC recently announced that it is investigating surveillance pricing practices. Specifically, the FTC ordered eight companies to provide information about surveillance pricing tools they make available to others. The FTC sent these orders to Mastercard, Revionics, Bloomreach, JPMorgan Chase, Task Software, PROS, Accenture, and McKinsey & Co. 

These eight firms play a key role in the collection, analysis, and weaponization of your private information: they are the “middlemen” that provide surveillance pricing tools to other companies. In particular, the FTC instructed the companies to submit reports detailing technical specifics of tools, the types and sources of consumer information they use, which companies are currently using them, and how they impact consumer prices. 

As FTC Chair Lina Khan explained: 

Firms that harvest Americans’ personal data can put people’s privacy at risk. Now firms could be exploiting this vast trove of personal information to charge people higher prices... Americans deserve to know whether businesses are using detailed consumer data to deploy surveillance pricing. 

These FTC investigations are an important step towards public understanding of opaque business pricing practices that may be harming consumers. Increased transparency into new pricing models will facilitate efforts to curb this unfair pricing practice and could be the prelude to a rulemaking or enforcement action to halt the practice altogether. 

We can mitigate surveillance pricing’s myriad harms by preventing surveillance. How? By doing privacy first.  

Comprehensive privacy legislation would prevent companies from accumulating massive amounts of our information in the first place. Companies cannot charge prices based on our personal information if they don’t have it.  

Economic research shows that opt-in privacy regulations—such as the GDPR—mitigate the negative effects of surveillance pricing and make us all better off. When all businesses, big and small, must respect customers’ privacy, surveillance will no longer create a competitive advantage for the biggest online platforms.  

That’s in addition to the myriad other benefits of strong privacy protections, which would help combat financial fraud, support local and national news outlets, protect reproductive rights, mitigate foreign government surveillance on apps like TikTok, and improve competition in the tech sector. 

Most importantly, strong legal protections for your privacy would guard against the emergence of new, increasingly harmful ways of weaponizing your data against you. Without a strong, comprehensive federal privacy law, “surveillance pricing” may give way to a never-ending parade of ways to use the most intimate facts about your life against you.

Tori Noble

EFF to Ninth Circuit: Don’t Shield Foreign Spyware Company from Human Rights Accountability in U.S. Court

4 weeks 1 day ago

Legal intern Danya Hajjaji was the lead author of this post.

EFF filed an amicus brief in the U.S. Court of Appeals for the Ninth Circuit supporting a group of journalists in their lawsuit against Israeli spyware company NSO Group. In our amicus brief backing the plaintiffs’ appeal, we argued that victims of human rights abuses enabled by powerful surveillance technologies must be able to seek redress through U.S. courts against both foreign and domestic corporations. 

NSO Group notoriously manufactures “Pegasus” spyware, which enables full remote control of a target’s smartphone. Pegasus attacks are stealthy and sophisticated: the spyware embeds itself into phones without the owner having to click anything (such as a link in an email or text message). A Pegasus-infected phone allows government operatives to intercept personal data on the device as well as cloud-based data connected to it.

Our brief highlights multiple examples of Pegasus spyware having been used by governmental bodies around the world to spy on targets such as journalists, human rights defenders, dissidents, and their families. For example, the Saudi Arabian government was found to have deployed Pegasus against Washington Post columnist Jamal Khashoggi, who was murdered at the Saudi consulate in Istanbul, Turkey.

In the present case, Dada v. NSO Group, the plaintiffs are affiliated with El Faro, a prominent independent news outlet based in El Salvador, and were targeted with Pegasus through their iPhones. The attacks on El Faro journalists coincided with their investigative reporting into the Salvadorian government.

The plaintiffs sued NSO Group in California because NSO Group, in deploying Pegasus against iPhones, abused the services of Apple, a California-based company. However, the district court dismissed the case on a forum non conveniens theory, holding that California is an inconvenient forum for NSO Group. The court thus concluded that exercising jurisdiction over the foreign corporation was inappropriate and that the case would be better considered by a court in Israel or elsewhere.

However, as we argued in our brief, NSO Group is already defending two other lawsuits in California brought by both Apple and WhatsApp. And the company is unlikely to face legal accountability in its home country—the Israeli Ministry of Defense provides an export license to NSO Group, and its technology has been used against citizens within Israel.

That's why this case is critical—victims of powerful, increasingly-common surveillance technologies like Pegasus spyware must not be barred from U.S. courts.

As we explained in our brief, the private spyware industry is lucrative, worth an estimated $12 billion and largely bankrolled by repressive governments. These companies widely fail to comport with the United Nations’ Guiding Principles on Business and Human Rights, which caution against creating a situation where victims of human rights abuses “face a denial of justice in a host State and cannot access home State courts regardless of the merits of the claim.”

The U.S. government has endorsed the Guiding Principles as applied to U.S. companies selling surveillance technologies to foreign governments, and it has also sought to address the problem of spyware facilitating state-sponsored human rights violations. In 2021, for example, the Biden Administration recognized NSO Group as engaging in such practices by placing it on a list of entities prohibited from receiving U.S. exports of hardware or software.

Unfortunately, the Guiding Principles expressly avoid creating any “new international law obligations,” thus leaving accountability to either domestic law or voluntary mechanisms.

Yet voluntary enforcement mechanisms are wholly inadequate for human rights accountability. The weakness of voluntary enforcement is best illustrated by NSO Group supposedly implementing its own human rights policies, all the while acting as a facilitator of human rights abuses.

Restraining the use of the forum non conveniens doctrine and opening courthouse doors to victims of human rights violations wrought by surveillance technologies would hold companies like NSO Group accountable through judicial liability.

But this would not mean that U.S. courts have unfettered discretion over foreign corporations. The reach of courts is limited by rules of personal jurisdiction and plaintiffs must still prove the specific required elements of their legal claims.

The Ninth Circuit must give the El Faro plaintiffs the chance to vindicate their rights in federal court. Shielding spyware companies like NSO Group from legal accountability does not only diminish digital civil liberties like privacy and freedom of speech—it paves the way for the worst of the worst human rights abuses, including physical apprehensions, unlawful detentions, torture, and even summary executions by the governments that use the spyware.

Sophia Cope

Federal Appeals Court Rules That Fair Use May Be Narrowed to Serve Hollywood Profits

4 weeks 1 day ago

Section 1201 of the Digital Millennium Copyright Act is a ban on reading any copyrighted work that is encumbered by access restrictions. It makes it illegal for you to read and understand the code that determines how your phone or car works and whether those devices are safe. It makes it illegal to create fair use videos for expressive purposes, reporting, or teaching. It makes it illegal for people with disabilities to convert ebooks they own into a format they can perceive. EFF and co-counsel at WSGR challenged Section 1201 in court on behalf of computer science professor Matthew Green and engineer Andrew “bunnie” Huang, and we asked the court to invalidate the law on First Amendment grounds.

Despite this law's many burdens on expression and research, the Court of Appeals for the D.C. Circuit concluded that these restrictions are necessary to incentivize copyright owners to publish works online, and rejected our court challenge. It reached this conclusion despite the evidence that many works are published without digital access restrictions (such as mp3 files sold without DRM) and the fact that people willingly pay for copyrighted works even though they're readily available through piracy. Once again, copyright law has been used to squash expression in order to serve a particular business model favored by rightsholders, and we are all the poorer for it.

Integral to the Court’s decision was the conclusion that Section 1201’s ban on circumvention of access restrictions is a regulation of “conduct” rather than “speech.” This is akin to saying that the government could regulate the reading of microfiche as “conduct” rather than “speech,” because technology is necessary to do so. Of course you want to be able to read the microfiche you purchased, but you can only do so using the licensed microfiche reader the copyright owner sells you. And if that reader doesn’t meet your needs because you’re blind or you want to excerpt the microfiche to make your own fair use materials, the government can make it illegal for you to use a reader that does.

It’s a back door into speech regulation that favors large, commercial entertainment products over everyday people using those works for their own, fair-use expression or for documentary films or media literacy.

Even worse, the law governs access to copyrighted software. In the microfiche analogy, this would be microfiche that’s locked inside your car or phone or other digital device that you’re never allowed to read. It’s illegal to learn how technology works under this regime, which is very dangerous for our digital future.

The Court asserts that the existing defenses to the anti-circumvention law are good enough – even though the Library of Congress has repeatedly admitted that they weren’t when it decided to issue exemptions to expand them.

All in all, the opinion represents a victory for rightsholder business models that allow them to profit by eroding the traditional rights of fair users, and a victory for device manufacturers that would like to run software in your devices that you’re not allowed to understand or change.

Courts must reject the mistaken notion that draconian copyright regimes are helpful to “expression” as a general matter rather than just the largest copyright owners. EFF will continue to fight for your rights to express yourself and to understand the technology in your life.

Related Cases: Green v. U.S. Department of Justice
Kit Walsh

Here Are EFF's Sacramento Priorities Right Now

4 weeks 1 day ago

California has one of the nation’s few full-time state legislatures. That means advocates have to track and speak up on hundreds of bills that move through the legislative process on a strict schedule between January and August every year. The legislature has been adjourned for a month and won't be back until August, so it's a good time to take stock and share what we've been up to in Sacramento.

EFF has been tracking nearly 100 bills this session in California alone. They cover a wide array of privacy, free speech, and innovation issues, including what standards artificial intelligence (AI) systems should meet before state agencies use them, how AI and copyright interact, police use of surveillance, and a host of privacy questions. While the session isn't over yet, we have already logged a significant victory by helping stop S.B. 1076, by Senator Scott Wilk (Lancaster). This bill would have weakened the California Delete Act (S.B. 362), which we fought hard to pass last year.

The Delete Act (S.B. 362) made it easier for anyone to exert greater control over their privacy under the California Consumer Privacy Act (CCPA). The law created a one-click “delete” button in the state's data broker registry, allowing Californians to request the removal of their personal information held by data brokers registered in California. It built on the state's existing data broker registry law to expand the information data brokers are required to disclose about the data they collect on consumers. It also added strong enforcement mechanisms to ensure that data brokers comply with these reporting requirements.

S.B. 1076 would have undermined the Delete Act’s aim of providing consumers with an easy “one-click” button. It also would have opened loopholes in the law for data brokers to duck compliance. This would have hurt consumer rights and undone oversight of an opaque ecosystem of entities that collect and then sell the personal information they’ve amassed on individuals. S.B. 1076's proponents, which included data brokers and advertisers, argued that the Delete Act is too burdensome and makes it impossible for consumers to exercise their privacy rights under California's privacy laws. In truth, S.B. 1076 would have made it easier for fraudsters and credit abusers to misuse your personal information. The guardrails and protections in the Delete Act are among the strongest at empowering vulnerable Californians to exercise their privacy rights under the CCPA, and we're proud to have protected them.

Of course, there are still a lot of bills. Let’s dive into six bills we're paying close attention to right now, to give you a taste of what's cooking in Sacramento:

A.B. 3080 EFF opposes this bill by State Assemblymember Juan Alanis (Modesto). It would create powerful incentives for so-called “pornographic internet websites” to use age-verification mechanisms. The bill is not clear on what, exactly, counts as “sexually explicit content.” Without clear guidelines, this bill will further harm the ability of all youth—particularly LGBTQ+ youth—to access legitimate content online. Different versions of bills requiring age verification have appeared in more than a dozen states. An Indiana law similar to A.B. 3080 was preliminarily enjoined—temporarily halted—after a judge ruled it was likely unconstitutional. California should not enact this bill into law.

S.B. 892 EFF supports this bill by State Senator Steve Padilla (Chula Vista), which would require the Department of Technology to establish safety, privacy, and nondiscrimination standards relating to AI services procured by the State and prohibit the state from entering into any contract for AI services unless the service provider meets the standards established. This bill is a critical first step towards ensuring that any future investment in AI technology by the State of California to support the delivery of services is grounded in consumer protection.

A.B. 3138 EFF opposes this bill by State Assemblymember Lori Wilson (Suisun City), which would turn state-issued digital license plates into surveillance trackers that record everywhere a car goes. When a similar bill came up in 2022, several domestic violence, LGBTQIA+, reproductive justice, youth, and privacy organizations negotiated a prohibition on GPS in passenger cars' digital license plates. A.B. 3138 would abandon the agreement codified in A.B. 984 (2022) and reverse that negotiation.

A.B. 1814 EFF opposes this bill from State Assemblymember Phil Ting (San Francisco). It is an attempt to sanction and expand the use of facial recognition software by police to “match” images from surveillance databases to possible suspects. Those matches can then be used to support arrest warrants and searches. The bill says merely that these matches can't be the sole reason for a warrant to be issued by a judge—a standard that has already failed to stop false arrests in other states. By codifying such a weak standard in the hope that “something is better than nothing,” and by expanding police access to state databases, the bill is worse than no regulation at all.

S.B. 981 EFF opposes this bill from State Senator Aisha Wahab (Fremont), which would require online platforms to create a reporting mechanism for certain intimate materials, and ensure that those materials cannot be viewed on the platform. This reporting mechanism and the requirement to block and remove reported content will lead to over-censorship of protected speech. If passed as written it would violate the First Amendment and run afoul of federal preemption.

A.B. 1836 EFF opposes this bill by State Assemblymember Rebecca Bauer-Kahan (San Ramon). It would create a broad new “digital replica” right of publicity for deceased personalities, covering the unauthorized production, distribution, or availability of their digital replicas in audiovisual works or sound recordings. If passed, a deceased personality’s estate could use it to extract statutory damages of $10,000 for the use of the dead person’s image or voice “in any manner related to the work performed by the deceased personality while living” – an incredibly unclear standard that will invite years of litigation.

Of course, this isn't every bill that EFF is engaged on, or even every bill we care about. Over the coming months, you'll hear more from us about ways that Californians can help us tell lawmakers to be on the right side of digital rights issues.

Hayley Tsukayama

Google Breaks Promise to Block Third-Party Cookies

4 weeks 1 day ago

Last week, Google backtracked on its long-standing promise to block third-party cookies in Chrome. This is bad for your privacy and good for Google's business. Third-party cookies are a pervasive tracking technology that allow companies to snoop on your online activity for surveillance and ad-targeting purposes. The consumer harm caused by these cookies has been well-documented for years, prompting Safari and Firefox to block them since 2020. Google knows this—that’s why they pledged to phase out third-party cookies in 2020. By abandoning this plan, Google leaves billions of Chrome users vulnerable to online surveillance.

How do third-party cookies facilitate online surveillance?

Cookies are small packets of information stored in your browser by websites you visit. They were built to enable useful functionality, like letting a website remember your language preferences or the contents of your shopping cart. But for years, companies have abused this functionality to track user behavior across the web, fueling a vast network of online surveillance. 

While first-party cookies enable useful functionality, third-party cookies are primarily used for online tracking. Third-party cookies are set by websites other than the one you’re currently viewing. Websites often include code from third-party companies to load resources like ads, analytics, and social media buttons. When you visit a website, this third-party code can create a cookie with a unique identifier for you. When you visit another website that loads resources from the same third-party company, that company receives your unique identifier from the cookie they previously set. By recognizing your unique identifier across multiple sites, third-party companies build a detailed profile of your browsing habits. 
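
The toy model below illustrates the bookkeeping involved. It is our own simplified sketch (real tracking code is far more elaborate, and the class and cookie names here are invented): a tracker assigns a browser a unique identifier the first time it sees it, then recognizes that identifier on every other site that embeds the tracker's code.

```python
# Simplified model of cross-site tracking via a third-party cookie.
# This is an illustration, not any real tracker's implementation.
import uuid
from collections import defaultdict

class Tracker:
    """A third-party company whose code is embedded on many websites."""

    def __init__(self):
        self.profiles = defaultdict(list)  # unique ID -> list of sites visited

    def handle_request(self, browser_cookies: dict, site: str) -> None:
        # First visit: set a cookie containing a fresh unique identifier.
        if "tracker_id" not in browser_cookies:
            browser_cookies["tracker_id"] = str(uuid.uuid4())
        # Every later visit to any site embedding this tracker sends the
        # same identifier back, linking the visits into one profile.
        self.profiles[browser_cookies["tracker_id"]].append(site)

tracker = Tracker()
my_cookies = {}  # one browser's cookie jar for the tracker's domain

for site in ["webmd.com", "news.example", "shop.example"]:
    tracker.handle_request(my_cookies, site)

# One browsing profile: ['webmd.com', 'news.example', 'shop.example']
print(tracker.profiles[my_cookies["tracker_id"]])
```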

For example, if you visit WebMD's “HIV & AIDS Resource Center,” you might expect WebMD to get information about your visit to their page. What you probably don't expect, and what third-party cookies enable, is that your visit to WebMD is tracked by dozens of companies you've never heard of. At the time of writing, visiting WebMD’s “HIV & AIDS Resource Center” sets 257 third-party cookies on your browser. The businesses that set those cookies include big tech companies (Google, Amazon, X, Microsoft) and data brokers (Lotame, LiveRamp, Experian). By setting a cookie on WebMD, these companies can link your visit to WebMD to your activity on other websites.

How does this online surveillance harm consumers?

Third-party cookies allow companies to build detailed profiles of your online activities, which can be used for targeted advertising or sold to the highest bidder. The consequences are far-reaching and deeply concerning. Your browsing history can reveal sensitive information, including your financial status, sexual orientation, and medical conditions. Data brokers collect and sell this information without your knowledge or consent. Once your data is for sale, anyone can buy it. Purchasers include insurance companies, hedge funds, scammers, anti-abortion groups, stalkers, and government agencies such as the military, FBI, and ICE.

Online surveillance tools built for advertisers are exploited by others. For example, the NSA used third-party cookies set by Google to identify targets for hacking and people attempting to remain anonymous online. Likewise, a conservative Catholic nonprofit paid data brokers millions to identify priests using gay dating apps, and the brokers obtained this information from online advertising systems. 

Targeted ads also hurt us. They enable predatory advertisers to target vulnerable groups, like payday lenders targeting people in financial trouble. They also facilitate discriminatory advertising, like landlords targeting housing ads by race.

Yet again, Google puts profits over privacy

Google's decision to continue allowing third-party cookies, despite overwhelming evidence of their surveillance harms, is a direct consequence of its advertising-driven business model. Google makes most of its money from tracker-driven, behaviorally-targeted ads.

If Google wanted, Chrome could do much more to protect your privacy. Other major browsers, like Safari and Firefox, provide significantly more protection against online tracking by default. Notably, Google is the internet’s biggest tracker, and most of the websites you visit include Google trackers (including but not limited to third-party cookies). As Chrome leaves users vulnerable to tracking, Google continues to receive nearly 80% of its revenue from online advertising.

Google’s change in plans follows concerns from advertisers and regulators that the loss of third-party cookies in Chrome would harm competition in digital advertising. Google’s anti-competitive practices in the ad-tech industry must be addressed, but maintaining online surveillance systems is not the answer. Instead, we should focus on addressing the root of these competition concerns. The bipartisan AMERICA Act, which proposed breaking up vertically integrated ad-tech giants like Google, offers a more effective approach. We don’t need to sacrifice user privacy to foster a competitive digital marketplace.

What now?

First, we call on Google to reverse this harmful decision. Continuing to allow one of the most pervasive forms of online tracking, especially when other major browsers have blocked it for years, is a clear betrayal of user trust. Google must prioritize people’s privacy over their advertising revenue and find real solutions to competition concerns. 

In the meantime, users can take steps to protect themselves from online tracking. Installing Privacy Badger can help block third-party cookies and other forms of online tracking.

We also need robust privacy legislation to ensure that privacy standards aren’t set by advertising companies. Companies use various tracking methods, like fingerprinting and link redirection, to monitor users across the web without third-party cookies. As long as it remains legal and profitable, companies will continue building and selling profiles of your online activities. Already, Google has developed alternative tracking tools that may be less invasive than third-party cookies but still enable harmful surveillance. Blocking third-party cookies is important but insufficient to address pervasive online tracking. Strong privacy legislation in the United States is possible, necessary, and long overdue. A comprehensive data privacy law should protect our browsing history by default and ban behavioral ads, which drive excessive data collection.

Google's decision to continue allowing third-party cookies in Chrome is a major disappointment. Browsing the internet shouldn't require submitting to extensive surveillance. As Google prioritizes profits over privacy, we need legislation that gives you control over your data.

Lena Cohen

Victory! D.C. Circuit Rules in Favor of Animal Rights Activists Censored on Government Social Media Pages

4 weeks 1 day ago

In a big win for free speech online, the U.S. Court of Appeals for the D.C. Circuit ruled that a federal agency violated the First Amendment when it blocked animal rights activists from commenting on the agency’s social media pages. We filed an amicus brief in the case, joined by the Foundation for Individual Rights and Expression (FIRE).

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH) in 2021, arguing that the agency unconstitutionally blocked their comments opposing animal testing in scientific research on the agency’s Facebook and Instagram pages. (NIH provides funding for research that involves testing on animals.)

NIH argued it was simply implementing reasonable content guidelines that included a prohibition against public comments that are “off topic” to the agency’s social media posts. Yet the agency implemented the “off topic” rule by employing keyword filters that included words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop to block PETA activists from posting comments that included these words.
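
A minimal sketch makes the bluntness of this approach easy to see. The blocked-word list below comes from the case; the matching logic is our own guess at how a context-insensitive keyword filter behaves, not NIH's actual code.

```python
# Illustrative context-insensitive keyword filter (our sketch, not NIH's code).
BLOCKED_WORDS = {"cruelty", "revolting", "tormenting", "torture",
                 "hurt", "kill", "stop"}

def is_blocked(comment: str) -> bool:
    """Flag a comment if it contains any blocked word, regardless of context."""
    words = {word.strip(".,!?\"'").lower() for word in comment.split()}
    return bool(words & BLOCKED_WORDS)

# Critical advocacy gets blocked...
print(is_blocked("Stop the cruelty of animal testing!"))  # True
# ...but so does an on-topic question about an NIH animal study,
# because the filter cannot weigh context:
print(is_blocked("Does this study hurt the mice, or kill them humanely?"))  # True
```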

NIH’s Social Media Pages Are Limited Public Forums

The D.C. Circuit first had to determine whether the comment sections of NIH’s social media pages are designated public forums or limited public forums. As the court explained, “comment threads of government social media pages are designated public forums when the pages are open for comment without restrictions and limited public forums when the government prospectively sets restrictions.”

The court concluded that the comment sections of NIH’s Facebook and Instagram pages are limited public forums: “because NIH attempted to remove a range of speech violating its policies … we find sufficient evidence that the government intended to limit the forum to only speech that meets its public guidelines.”

The nature of the government forum determines what First Amendment standard courts apply in evaluating the constitutionality of a speech restriction. Speech restrictions that define limited public forums must only be reasonable in light of the purposes of the forum, while speech restrictions in designated public forums must satisfy more demanding standards. In both forums, however, viewpoint discrimination is prohibited.

NIH’s Social Media Censorship Violated Animal Rights Activists’ First Amendment Rights

After holding that the comment sections of NIH’s Facebook and Instagram pages are limited public forums subject to a lower standard of reasonableness, the D.C. Circuit then nevertheless held that NIH’s “off topic” rule as implemented by keyword filters is unreasonable and thus violates the First Amendment.

The court explained that because the purpose of the forums (the comment sections of NIH’s social media pages) is directly related to speech, “reasonableness in this context is thus necessarily a more demanding test than in forums that have a primary purpose that is less compatible with expressive activity, like the football stadium.”

In rightly holding that NIH’s censorship was unreasonable, the court adopted several of the arguments we made in our amicus brief, in which we assumed that NIH’s social media pages are limited public forums but argued that the agency’s implementation of its “off topic” rule was unreasonable and thus unconstitutional.

Keyword Filters Can’t Discern Context

We argued, for example, that keyword filters are an “unreasonable form of automated content moderation because they are imprecise and preclude the necessary consideration of context and nuance.”

Similarly, the D.C. Circuit stated, “NIH’s off-topic policy, as implemented by the keywords, is further unreasonable because it is inflexible and unresponsive to context … The permanent and context-insensitive nature of NIH’s speech restriction reinforces its unreasonableness.”

Keyword Filters Are Overinclusive

We also argued, related to context, that keyword filters are unreasonable “because they are blunt tools that are overinclusive, censoring more speech than the ‘off topic’ rule was intended to block … NIH’s keyword filters assume that words related to animal testing will never be used in an on-topic comment to a particular NIH post. But this is false. Animal testing is certainly relevant to NIH’s work.”

The court acknowledged this, stating, “To say that comments related to animal testing are categorically off-topic when a significant portion of NIH’s posts are about research conducted on animals defies common sense.”

NIH’s Keyword Filters Reflect Viewpoint Discrimination

We also argued that NIH’s implementation of its “off topic” rule through keyword filters was unreasonable because those filters reflected a clear intent to censor speech critical of the government, that is, speech reflecting a viewpoint that the government did not like.

The court recognized this, stating, “NIH’s off-topic restriction is further compromised by the fact that NIH chose to moderate its comment threads in a way that skews sharply against the appellants’ viewpoint that the agency should stop funding animal testing by filtering terms such as ‘torture’ and ‘cruel,’ not to mention terms previously included such as ‘PETA’ and ‘#stopanimaltesting.’”

On this point, we further argued that “courts should consider the actual vocabulary or terminology used … Certain terminology may be used by those on only one side of the debate … Those in favor of animal testing in scientific research, for example, do not typically use words like cruelty, revolting, tormenting, torture, hurt, kill, and stop.”

Additionally, we argued that “a highly regulated social media comments section that censors Plaintiffs’ comments against animal testing gives the false impression that no member of the public disagrees with the agency on this issue.”

The court acknowledged both points, stating, “The right to ‘praise or criticize governmental agents’ lies at the heart of the First Amendment’s protections … and censoring speech that contains words more likely to be used by animal rights advocates has the potential to distort public discourse over NIH’s work.”

We are pleased that the D.C. Circuit took many of our arguments to heart in upholding the First Amendment rights of social media users in this important internet free speech case.

Sophia Cope

CrowdStrike, Antitrust, and the Digital Monoculture

4 weeks 2 days ago

Last month’s unprecedented global IT failure should be a wakeup call. Decades of antitrust inaction have made many industries dangerously reliant on the same tools, making such crises inevitable. We must demand regulators break up the digital monocultures that are creating a less competitive, less safe, and less free digital world.

The Federal Trade Commission (FTC) solicited public comments last year on the state of the cloud computing market. EFF made it clear that the consolidation of service providers has created new dangers for everyone and urged the commission to encourage interoperability so customers could more easily switch and mix cloud services. Microsoft cautioned against intervention, touting the benefits of centralized cloud services for IT security.

A year later, a key cloud-based cybersecurity firm shipped an update with a bug that affected only Microsoft systems. Vital IT systems were disrupted for millions worldwide. 

This fragility goes beyond issues at a specific firm; it results from power being overly concentrated around a few major companies.

What Happened

The widespread and disruptive tech outage last month happened thanks to an overreliance on one particular tool: CrowdStrike's Falcon sensor software. While not a monopoly, it is the most popular of the end-point protection platforms.

This niche service, typically bought by companies, is best understood as an antivirus tool for devices, controlled from a cloud platform. “End-point” computers run the agent with very deep system permissions to scan for security issues, and CrowdStrike regularly pushes remote software updates to it. This setup means many devices rely on a single source for their security, leveraging shared insights learned across devices. It also means that many devices share a single point of failure.
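
The toy model below (our illustration, not CrowdStrike's actual architecture) captures the structural risk: when one cloud service pushes the same update to every endpoint, a single malformed update becomes a correlated failure across the entire fleet.

```python
# Toy model of the single point of failure described above. This is our
# illustration, not CrowdStrike's actual architecture.

class Endpoint:
    """One computer running a cloud-managed security agent."""

    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def apply_update(self, update_is_valid: bool) -> None:
        # The agent runs with deep system permissions, so a malformed
        # update does not just break the agent; it can crash the machine.
        if not update_is_valid:
            self.healthy = False

fleet = [Endpoint(f"host-{i}") for i in range(10_000)]

# The vendor pushes one malformed update to every subscribed endpoint:
for endpoint in fleet:
    endpoint.apply_update(update_is_valid=False)

print(sum(e.healthy for e in fleet), "of", len(fleet), "hosts still up")
# Prints: 0 of 10000 hosts still up. A correlated failure with no diversity.
```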

An early sign of this problem came last April, when a CrowdStrike update disrupted devices running Debian and Rocky Linux operating systems. Linux “end-point” devices are uncommon, let alone those running these specific distributions with CrowdStrike software. What should have been a red flag in April was instead barely a blip.

Last month, CrowdStrike disrupted two other operating systems with a bad update: Windows 10 and 11. This time it spurred a Y2K-like collapse of crucial computer systems around the globe. Airlines, hospitals, financial institutions, schools, broadcasters, and more were brought to a standstill as an erroneous update on CrowdStrike’s platform caused system crashes. Instead of an inconvenience for a few companies, it more closely resembled a government shutdown or a natural disaster.

Both cases had similar impacts on the affected devices, but the latter was an absolute disaster for infrastructure because of a digital landscape dominated by a few key players. Having so many sectors rely on a handful of services for the same operating systems makes them all susceptible to the same bugs; even systems running absurdly old versions of Windows gained an advantage simply by providing some diversity.

Whatever went wrong at CrowdStrike was just a spark. Last month it ignited the powder keg of digital monocultures.

Digital Monoculture

All computers are broken. Every piece of software and hardware is just waiting to fail in unexpected ways, and while your friendly neighborhood hackers and researchers can often hold off some of the worst problems by finding and reporting them, we need to mitigate inevitable failures. A resilient and secure digital future can’t be built on hope alone.

Yet, that’s exactly what we’re doing. The US has not just tolerated but encouraged a monopolistic tech industry with too little competition in key markets. Decades of antitrust policy have been based on the wrongheaded idea that sheer size will make tech companies efficient and better able to serve customers. Instead, we have airports, hospitals, schools, financial systems, and more all reliant on the same software, vulnerable to the same bugs and hacks. We created a tech industry that is too big to fail.

The lack of diversity makes the whole ecosystem more fragile

We live in the age of the digital monoculture, where a single vulnerability can tear through systems globally: sabotaging hospitals and city governments with ransomware, taking down electrical systems through state-sponsored attacks, and breaching staggering amounts of private data. Name a class of device or software, and more often than not the majority of the market is controlled by a few companies, often the same ones: Android and iPhone; Windows and Mac; Gmail and Outlook; Chrome and Safari. When it comes to endpoint security products, three companies control half of the market, the largest being Microsoft and CrowdStrike.

Much like monocultures in agriculture, this lack of diversity makes the whole ecosystem more fragile. A new pest or disease can cause widespread collapse when there is no backup plan. The solution, then, is to increase diversity in the tech market through tougher antitrust enforcement, and for organizations to make IT system diversity a priority.

Allowing an over-reliance on a shrinking number of companies like Microsoft will only ensure more frequent and more devastating harms in the future.

How We Got Here: Broken Antitrust

As EFF has pointed out, and argued to the FTC, antitrust has failed to address the realities of a 21st-century internet.

Viewing consumers not just as walking wallets, but as individuals who deserve to live unburdened by monopoly interests.

Since the 1980s, US antitrust has been dominated by “consumer welfare” theory, which suggests corporate monopolies are fine, and maybe even preferable, so long as they are not raising prices. The subtler economic harms of monopoly, along with harms to democracy, labor rights, and the environment, are largely ignored.

For the past several years, the FTC has pressed for a return to the original intent of antitrust law: viewing consumers not just as walking wallets, but as individuals who deserve to live unburdened by monopoly interests.

But we have a long way to go. We are still saddled with fewer and less adequate choices, built on a tech industry that subsidizes consumer prices by compromising privacy and diminishing ownership through subscriptions and restrictive DRM. Today’s empires of industry exert more and more influence on our day-to-day lives, deepening the lock-in to their monoculture. When they fail, the scale and impact rival those of a government shutdown.

We deserve a more stable and secure digital future, one where a single error code doesn’t put lives at risk. Vital infrastructure cannot be built on a digital monoculture.

To do this, antitrust enforcers, including the FTC, the Department of Justice (DOJ), and state attorneys general must increase scrutiny in every corner of the tech industry to prevent dangerous levels of centralization. An important first step would be to go after lock-in practices by IT vendors.

Procurement and Vendor Lock-In

Most organizations depend on their IT teams, even if that team is just the one friend who is “good with computers.” It’s quite common for these teams to be significantly under-resourced, forced to meet the organization’s increasingly complex needs with a stagnant or shrinking budget.

Lock-in doubles down on a monopoly’s power and entrenches it across different markets.

This squeeze creates a need for off-the-shelf solutions that centralize that expertise among vendors and consultants. Renting these IT solutions from major companies like Microsoft or Google may be cost-effective, but it entrusts a good deal of control to those companies.

All too often, however, software vendors take advantage of this dynamic. They will bundle many services for a low initial price, making an organization wholly reliant on them, then hinder its ability to adopt alternative tools while raising prices later. This is the longstanding manipulative playbook of vendor lock-in.

Once locked in, a company will discover switching to alternatives is costly both in terms of money and effort. Say you want to switch email providers. Rather than an easy way to port over data and settings, your company will need to resort to manual efforts or expensive consultant groups. This is also often paired with selective interoperability, like having an email client work smoothly with a bundled calendar system, while a competitor’s service faces unstable or deliberately broken support.

Lock-in doubles down on a monopoly’s power and entrenches it across different markets. That is why EFF calls for interoperability to end vendor lock-in, and let IT teams choose the tools that reflect the values and priorities of their organization.

Buying or building more highly-tailored systems makes sense in a competitive market. It’s unlikely a single cloud provider will be the best at every service, and with interoperability, in-house alternatives become more viable to develop and host. Fostering more of that internal expertise can only bolster the resilience of bigger institutions.

Fallout from The Cloud

Allowing the economy and the well-being of countless people to rely on a few cloud services is reprehensible. The CrowdStrike Falcon incident is just the latest and largest in a growing list of hacks, breaches, and collapses coming to define the era. And each time, everyday people endure real harms.

Each time, we see the poorest and most marginalized people face costly or even deadly consequences. A grounded flight might mean having to spend money on a hotel, or it might mean losing a job. Strained hospital capacity means fewer people receive lifesaving care. These impacts further exacerbate existing inequalities, and they are happening with increasing frequency.

We must reject this as the status quo. CrowdStrike’s outage is a billion-dollar wake-up call to make antitrust an immediate priority. It's not just about preventing the next crash—it's about building a future where our digital world is as diverse and resilient as the people who depend on it.

Rory Mir

Atlanta Police Must Stop High-Tech Spying on Political Movements

4 weeks 2 days ago

The Atlanta Police Department has been snooping on social media to closely monitor the meetings, protests, canvassing, and even book clubs and pizza parties of the political movement to stop “Cop City,” a police training center that would destroy part of an urban forest. Activists already believed they were likely under surveillance by the Atlanta Police Department due to evidence in criminal cases brought against them, but the extent of the monitoring has only just been revealed. The Brennan Center for Justice has obtained and released over 2,000 pages of emails from inside the Atlanta Police Department chronicling how closely they were watching the social media of the movement.

You can read all of the emails here.

Atlanta is one of the most heavily surveilled cities in the United States.

The emails reveal monitoring that went far beyond cases where the department felt laws might have been broken. Instead, police tracked every event even tangentially related to the movement: not just protests, but pizza nights, canvassing for petition signatures, and reading groups. This threatens people’s ability to exercise their First Amendment-protected right to protest and to affiliate with various groups and political movements. The police overreach in Atlanta will deter people from practicing their politics in a way that is supposed to be protected in the United States.

To understand the many lines crossed by the Atlanta Police Department’s high-tech spying, it’s helpful to look back at the efforts to end political spying in New York City. In 1985, the pivotal legal case Handschu v. Special Services Division yielded important limits, which have been strengthened in several subsequent court decisions. The case demonstrated the illegality of police spying on people because of their religious or political beliefs. Indeed, people nationwide should have similar protections of their rights to protest, organize, and speak publicly without fear of invasive surveillance and harassment. The Atlanta Police Department’s use of social media to spy on protesters today echoes NYPD’s use of film to spy on protesters going back decades. In 2019, the New York City municipal archives digitized 140 hours of NYPD surveillance footage of protests and political activity from the 1950s through the 1970s. This footage shows the type of organizing and protesting the APD is so eager to monitor now in Atlanta.

Atlanta is one of the most heavily surveilled cities in the United States. According to EFF’s Atlas of Surveillance, law enforcement in Atlanta, supported financially by the Atlanta Police Foundation, have contracts to use nearly every type of surveillance technology we track. This is a dangerous combination. Worse, Atlanta lacks laws like CCOPS or a Face Recognition Ban to rein in police tech. Thanks to the Brennan Center, we also have strong proof of widespread social media monitoring of political activity. This is exactly why the city is so ripe for legislation to impose democratic limits on whether police can use its ever-mounting pile of invasive technology, and to place privacy limits on such use.

Until that time comes, make sure you’re up to speed on EFF’s Surveillance Self-Defense guide for attending a protest. And, if you’re on the go, bring this printable pocket version with you.

Matthew Guariglia

Broad Scope Will Authorize Cross-Border Spying for Acts of Expression: Why You Should Oppose Draft UN Cybercrime Treaty

4 weeks 2 days ago

The draft UN Cybercrime Convention was supposed to help tackle serious online threats like ransomware attacks, which cost billions of dollars in damages every year.

But, after two and a half years of negotiations among UN Member States, the draft treaty’s broad rules for collecting evidence across borders may turn it into a tool for spying on people. In other words, an extensive surveillance pact.

It permits countries to collect evidence on individuals for actions classified as serious crimes, defined as offenses punishable by four years of imprisonment or more. This could include protected speech activities, like criticizing a government or posting a rainbow flag, if these actions are considered serious crimes under local laws.

Here’s an example illustrating why this is a problem:

If you’re an activist in Country A tweeting about human rights atrocities in Country B, and criticizing government officials or the king is considered a serious crime in both countries under vague cybercrime laws, the UN Cybercrime Treaty could allow Country A to spy on you for Country B. This means Country A could access your email or track your location without prior judicial authorization and keep this information secret, even when it no longer impacts the investigation.

Criticizing the government is a far cry from launching a phishing attack or causing a data breach. But since it involves using a computer and is a serious crime as defined by national law, it falls within the scope of the treaty’s cross-border spying powers, as currently written.

This isn’t hyperbole. In countries like Russia and China, serious “cybercrime” has become a catchall term for any activity the government disapproves of if it involves a computer. This broad and vague definition of serious crimes allows these governments to target political dissidents and suppress free speech under the guise of cybercrime enforcement.

Posting a rainbow flag on social media could be considered a serious cybercrime in countries outlawing LGBTQ+ rights. Journalists publishing articles based on leaked data about human rights atrocities and digital activists organizing protests through social media could be accused of committing cybercrimes under the draft convention.

The text’s broad scope could allow governments to misuse the convention’s cross-border spying powers to gather “evidence” on political dissidents and suppress free speech and privacy under the pretext of enforcing cybercrime laws.

Canada said it best at a negotiating session earlier this year: “Criticizing a leader, innocently dancing on social media, being born a certain way, or simply saying a single word, all far exceed the definition of serious crime in some States. These acts will all come under the scope of this UN treaty in the current draft.”

The UN Cybercrime Treaty’s broad scope must be limited to core cybercrimes. Otherwise, it risks authorizing cross-border spying and extensive surveillance, enabling Russia, China, and other countries to collaborate in targeting and spying on activists, journalists, and marginalized communities for protected speech.

It is crucial to exclude such overreach from the scope of the treaty in order to genuinely protect human rights, and to ensure comprehensive, mandatory safeguards to prevent abuse. Additionally, the definition of serious crimes must be revised to cover only offenses involving death, injury, or other grave harms, further limiting the treaty’s scope.

For a more in-depth discussion about the flawed treaty, read here, here, and here.

Karen Gullo

Texas Wins $1.4 Billion Biometric Settlement Against Meta. It Would Have Happened Sooner With Consumer Enforcement

1 month ago

In Texas’ first public enforcement of its biometric privacy law, Meta agreed to pay $1.4 billion to settle claims that its now-defunct face recognition system violated state law. The law was first passed in 2001.

As part of the Texas settlement, Meta (formerly Facebook) can seek pre-approval from the state for any future biometric projects. The settlement does not require Meta to destroy any models or algorithms trained on Texas biometric data, as the state urged in its original lawsuit. And the settlement appears designed to avoid that remedy in the future.

Facebook Previously Discontinued Its Face Recognition System

Meta announced in November 2021 that it would shut down its tool that scanned the face of every person in photos posted on the platform. The tool identified and tagged users who had purportedly opted in to the feature. At the time, Meta also announced that it would delete more than a billion face templates.

The company shut down this tool months after agreeing to a $650 million class action settlement brought by Illinois consumers under the state's strong biometric privacy law.

Texas’ Law Has Steep Penalties But Little Enforcement

The Texas settlement nearly three years later is welcome, but it also highlights the need to give consumers their own private right of action to enforce consumer data privacy laws.

The Texas Attorney General has sole authority to enforce the Texas Capture or Use of Biometric Identifier (CUBI) law, which prohibits companies from capturing biometric identifiers for a commercial purpose unless notice is first given and consent obtained. The law has existed for decades, but this is the first time it has been enforced.

State regulators do not always have the resources or the will to aggressively enforce consumer privacy laws. And without strong enforcement, companies will not make compliance a priority. Most of the comprehensive consumer data privacy laws passed at the state level in the past few years lack a private right of action, and there have been few public enforcement actions.

That is why consumers should be empowered to bring lawsuits on their own behalf, and that should be a priority in any new law passed. Hopefully, states like Texas will continue to enforce their privacy laws. Still, there is no substitute for consumer enforcement.

Mario Trujillo

Our Last Chance to Stop KOSA | EFFector 36.10

1 month ago

EFF is chugging along, continuing to push for your rights online! We're sending out a last call for supporters to tell Congress to vote NO on the Kids Online Safety Act, exposing the flaws of the UN Cybercrime Treaty, and continuing to update Privacy Badger to protect your privacy online.

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.10 - Our Last Chance to Stop KOSA

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Security Researchers and Journalists at Risk: Why You Should Hate the Proposed UN Cybercrime Treaty

1 month ago

The proposed UN Cybercrime Treaty puts security researchers and journalists at risk of being criminally prosecuted for their work identifying and reporting computer system vulnerabilities, work that keeps the digital ecosystem safer for everyone.

The proposed text fails to exempt security research from the expansive scope of its cybercrime prohibitions, and does not provide mandatory safeguards to protect their rights.

Instead, the draft text includes weak wording that criminalizes accessing a computer “without right.” This could allow authorities to prosecute security researchers and investigative journalists who, for example, independently find and publish information about holes in computer networks.

Such vulnerabilities, left unreported, could be exploited to spread malware, cause data breaches, and gain access to the sensitive information of millions of people. Deterring the people who find and disclose them would undermine the very purpose of the draft treaty: to protect individuals and our institutions from cybercrime.

What's more, the draft treaty's overbroad scope, extensive secret surveillance provisions, and weak safeguards risk making the convention a tool for state abuse. Journalists reporting on government corruption, protests, public dissent, and other issues states don't like can and do become targets for surveillance, location tracking, and private data collection.

Without clear protections, the convention, if adopted, will deter critical activities that enhance cybersecurity and press freedom. For instance, the text does not make it mandatory to distinguish between unauthorized access and the bypassing of effective security measures, a distinction that would protect researchers and journalists.

By not requiring malicious or dishonest intent when accessing computers “without right,” the draft convention threatens to penalize researchers and journalists for actions that are fundamental to safeguarding the digital ecosystem or to reporting on issues of public interest, such as government transparency, corporate misconduct, and cybersecurity flaws.

For an in-depth analysis, please read further.

Karen Gullo

Calls Mount—from Principal UN Human Rights Official, Business, and Tech Groups—To Address Dangerous Flaws in Draft UN Surveillance Treaty

1 month ago

As UN delegates sat down in New York this week to restart negotiations, calls are mounting from all corners—from the United Nations High Commissioner for Human Rights (OHCHR) to Big Tech—to add critical human rights protections to, and fix other major flaws in, the proposed UN surveillance treaty, which as written will jeopardize fundamental rights for people across the globe.

Six influential organizations representing the UN itself, cybersecurity companies, civil society, and internet service providers have in recent days weighed in on the flawed treaty ahead of the two-week negotiating session that began today.

The message is clear and unambiguous: the proposed UN treaty is highly flawed and dangerous and must be fixed.

The groups have raised many points EFF has raised over the last two and a half years, including whether the treaty is necessary at all, the risks it poses to journalists and security researchers, and an overbroad scope that criminalizes offenses beyond core cybercrimes—crimes against computer systems, data, and networks. We have summarized our concerns here.

Some delegates meeting in New York are showing enthusiasm to approve the draft treaty, despite its numerous flaws. We question whether UN Member States, including the U.S., will take the lead over the next two weeks to push for significant changes in the text. So, we applaud the six organizations cited here for speaking out at this crucial time.

“The concluding session is a pivotal moment for human rights in the digital age,” the OHCHR said in comments on the new draft. Many of its provisions fail to meet international human rights standards, the commissioner said.

“These shortcomings are particularly problematic against the backdrop of an already expansive use of existing cybercrime laws in some jurisdictions to unduly restrict freedom of expression, target dissenting voices and arbitrarily interfere with the privacy and anonymity of communications.”

The OHCHR recommends several changes to the draft: including an explicit reference to specific human rights instruments, in particular the International Covenant on Civil and Political Rights; narrowing the treaty’s scope; and explicitly requiring that crimes covered by the treaty be committed with “criminal intent.”

The proposed treaty should comprehensively integrate human rights throughout the text, OHCHR said. Without that, the convention “could jeopardize the protection of human rights of people world-wide, undermine the functionality of the internet infrastructure, create new security risks and undercut business opportunities and economic well-being.”

EFF has called on delegates to oppose the treaty if it’s not significantly improved, and we are not alone in this stance.

The Global Network Initiative (GNI), a multistakeholder organization that sets standards for responsible business conduct based on human rights, raised concerns about the draft’s approach to the liability of online platforms for offenses committed by their users, warning that online intermediaries could be held liable even when they are unaware of such user-generated content.

“This could lead to excessively broad content moderation and removal of legitimate, protected speech by platforms, thereby negatively impacting freedom of expression,” GNI said.

“Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards. Without these they should refuse to agree to the draft Convention.”

Human Rights Watch (HRW), a close EFF ally on the convention, called out the draft’s article on offenses related to online child sexual abuse or child sexual exploitation material (CSAM), which could lead to criminal liability for service providers acting as mere conduits. Moreover, it risks criminalizing content and conduct that has evidentiary, scientific, or artistic value, and it doesn’t sufficiently decriminalize the consensual conduct of older children in relationships.

This is particularly dangerous for rights organizations that investigate child abuse and collect material depicting children subjected to torture or other abuses, including material that is sexual in nature. The draft text isn’t clear on whether legitimate use of this material is excluded from criminalization, jeopardizing survivors’ ability to safely report CSAM to law enforcement or platforms.

HRW recommends adding language that excludes manifestly artistic material, among other uses, as well as conduct carried out for legitimate purposes related to documenting human rights abuses or to the administration of justice.

The Cybersecurity Tech Accord, which represents over 150 companies, raised concerns in a statement today that aspects of the draft treaty allow cooperation between states to be kept confidential or secret, without mandating any procedural legal protections.

The convention will result in more private user information being shared with more governments around the world, with no transparency or accountability. The statement provides specific examples of national security risks that could result from abuse of the convention’s powers.

The International Chamber of Commerce, a proponent of international trade for businesses in 170 countries, said the current draft would make it difficult for service providers to challenge overbroad or extraterritorial requests for data from law enforcement, potentially jeopardizing the safety and freedom of tech company employees in places where they could face arrest “as accessories to the crime for which that data is being sought.”

Further, unchecked data collection, especially from traveling employees, government officials, or government contractors, could lead to sensitive information being exposed or misused, increasing risks of security breaches or unauthorized access to critical data, the group said.

The Global Initiative Against Transnational Organized Crime, a network of law enforcement, governance, and development officials, raised concerns in a recent analysis about the draft treaty’s new title, which says the convention is against both cybercrime and, more broadly, crimes committed through the use of an information or communications technology (ICT) system.

“Through this formulation, it not only privileges Russia’s preferred terminology but also effectively redefines cybercrime,” the analysis said. With this title, the UN effectively “redefines computer systems (and the crimes committed using them) as ICT—a broader term with a wider remit.”


Karen Gullo

Certbot Is Now on 4 Million Servers, Maintaining Over 31 Million Websites

1 month ago

EFF’s Certbot is now installed on over 4 million web servers, where it’s used to maintain HTTPS certificates for more than 31 million websites. The recent achievement of these milestones helps show the success of the project and the important role it plays in the infrastructure of a secure and encrypted internet.

When EFF helped launch the Let’s Encrypt certificate authority and released the software that would become Certbot in 2015, the web was a very different place. Less than 40% of websites were loaded using HTTPS, while the rest used unencrypted HTTP. This unencrypted traffic made it easy for malicious actors to eavesdrop, inject content, and take over online accounts by stealing cookies. Today, the percentage of web traffic using HTTPS is over 80% worldwide and over 93% in the United States.
[Chart: percentage of pages loaded over HTTPS in Firefox, 2024]
Since Certbot’s first release, it has never stopped growing. The recent achievement of Certbot exceeding 4 million installations actively maintaining certificates with Let’s Encrypt is just our latest metric showcasing this growth. Additionally, since many servers host more than one website, these installations are responsible for more than 22 million certificates covering more than 31 million domain names. That’s more than 31 million websites that Certbot is helping to offer HTTPS. These benefits extend to every person who visits those sites.

But even these numbers are probably low, because they reflect only Certbot use with Let’s Encrypt. The ACME protocol is an open standard which allows others to create their own projects that are compatible with these tools. Since Certbot and Let’s Encrypt launched, lots of other software has been created, including other ACME certificate authorities, and the number of these is likely to increase.
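
For readers curious about what ACME actually automates, below is a toy sketch of the issuance flow the protocol standardizes (RFC 8555). Nothing in it contacts a real certificate authority, and the real protocol involves account keys and JWS-signed requests that this sketch deliberately omits; clients like Certbot handle all of that for you.

# Toy, self-contained outline of the ACME issuance flow (RFC 8555).
# Every step just records what a real client such as Certbot would do;
# the cryptographic signing and real network calls are intentionally
# left out, so this is an illustration of the flow, not a working client.

import secrets


def issue_certificate(domain: str) -> str:
    """Walk through the ACME steps for one domain, returning a fake cert."""
    # 1. Create an order: tell the CA which identifier (domain) we want
    #    a certificate for.
    order = {"domain": domain, "status": "pending"}

    # 2. The CA responds with a challenge: prove control of the domain,
    #    e.g. by serving a token at /.well-known/acme-challenge/<token>.
    token = secrets.token_urlsafe(16)
    print(f"serve token {token} at http://{domain}/.well-known/acme-challenge/")

    # 3. Tell the CA the token is in place; it fetches the URL to verify,
    #    and the order becomes valid.
    order["status"] = "valid"

    # 4. Finalize: submit a certificate signing request and download the
    #    signed certificate.
    return f"(fake certificate for {order['domain']})"


if __name__ == "__main__":
    print(issue_certificate("example.com"))

In day-to-day use none of these steps are visible: a Certbot user typically runs a single command on their web server, and renewal happens automatically.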

Earlier this year, Google made changes to the Chrome root program that require all new certificate authorities to offer automated certificate issuance, and specifically encouraged certificate authorities to support ACME. These changes are good for the security of the internet and are likely to further encourage the adoption of ACME software like Certbot.

If you’d like to support us in our work in continuing to develop and support Certbot, especially for the millions of people who find it useful and have come to rely on it, please consider donating to EFF.

Brad Warren

The KOSA Internet Censorship Bill Just Passed The Senate—It's Our Last Chance To Stop It

1 month ago

The Senate just passed a bill that will let the federal and state governments investigate and sue websites that they claim cause kids mental distress. It’s a terrible idea to let politicians and bureaucrats decide what people should read and view online, but the Senate passed KOSA on a 91-3 vote.   

TAKE ACTION

Don't let Congress censor the internet

Bill proponents have focused on some truly tragic stories of loss, and then tied these tragedies to the internet. But anxiety, eating disorders, drug abuse, gambling, tobacco and alcohol use by minors, and the host of other ills that KOSA purports to address all existed well before the internet.

The Senate vote means that the House could take up and vote on this bill at any time. The House could also choose to debate its own, similarly flawed, version of KOSA. Several members of the House have expressed concerns about the bill. 

The members of Congress who vote for this bill should remember—they do not, and will not, control who will be in charge of punishing bad internet speech. The Federal Trade Commission, majority-controlled by the President’s party, will be able to decide what kind of content “harms” minors, then investigate or file lawsuits against websites that host that content.

Politicians in both parties have sought to control various types of internet content. One bill sponsor has said that widely used educational materials that teach about the history of racism in the U.S. cause depression in kids. Kids speaking out about mental health challenges or trying to help friends with addiction are likely to be treated the same as those promoting addictive or self-harming behaviors, and will be kicked offline. Minors engaging in activism or even discussing the news could be shut down, since the grounds for suing websites extend to conditions like “anxiety.”

KOSA will also lead to the persecution and shutdown of people who make online content about sex education and LGBTQ+ identity and health. Views on how, or if, these subjects should be broached vary widely across U.S. communities. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech his or her constituents don’t like.

All of these speech burdens will affect adults, too. Adults simply won’t find the content that was mass-deleted in the name of avoiding KOSA-inspired lawsuits, and we’ll all be burdened by websites and apps that install ID checks, age gates, and invasive (and poorly functioning) software content filters.

The vast majority of speech that KOSA affects is constitutionally protected in the U.S., which is why there is a long list of reasons that KOSA is unconstitutional. Unfortunately, the lawmakers voting for this bill have hand-waved away those concerns. They’ve also blown off the voices of millions of young people who will have their free expression constricted by this bill, including the thousands who spoke to EFF directly about their concerns and fears around KOSA. 

We can’t rely solely on lawsuits and courts to protect us from the growing wave of anti-speech internet legislation, with KOSA at its forefront. We need to let the people making the laws know that the public is becoming aware of their censorship plans—and won’t stand for them.

TAKE ACTION

Our Freedom Of Speech Doesn't End Online

Joe Mullin