Online Tracking is Out of Control—Privacy Badger Can Help You Fight Back

3 hours 52 minutes ago

Every time you browse the web, you're being tracked. Most websites contain invisible tracking code that allows companies to collect and monetize data about your online activity. Many of those companies are data brokers, who sell your sensitive information to anyone willing to pay. That’s why EFF created Privacy Badger, a free, open-source browser extension used by millions to fight corporate surveillance and take back control of their data. 

Since we first released Privacy Badger in 2014, online tracking has only gotten more invasive and Privacy Badger has evolved to keep up. Whether this is your first time using it or you’ve had it installed since day one, here’s a primer on how Privacy Badger protects you.

Online Tracking Isn't Just Creepy—It’s Dangerous 

The rampant data collection, sharing, and selling fueled by online tracking has serious consequences. Fraudsters purchase data to identify elderly people susceptible to scams. Government agencies and law enforcement purchase people’s location data and web browsing records without a warrant. Data brokers help predatory companies target people in financial distress. And surveillance companies repackage data into government spy tools.

Once your data enters the data broker ecosystem, it’s nearly impossible to know who buys it and what they’re doing with it. Privacy Badger blocks online tracking to prevent your browsing data from being used against you. 

Privacy Badger Disrupts Surveillance Business Models

Online tracking is pervasive because it’s profitable. Tech companies earn enormous profits by targeting ads based on your online activity—a practice called “online behavioral advertising.” In fact, Big Tech giants like Google, Meta, and Amazon are among the top companies tracking you across the web. By automatically blocking their trackers, Privacy Badger makes it harder for Big Tech companies to profit from your personal information.

Online behavioral advertising has made surveillance the business model of the internet. Companies are incentivized to collect as much of our data as possible, then share it widely through ad networks with no oversight. This not only exposes our sensitive information to bad actors, but also fuels government surveillance. Ending surveillance-based advertising is essential for building a safer, more private web. 

While strong federal privacy legislation is the ideal solution—and one that we continue to advocate for—Privacy Badger gives you a way to take action today. 

Privacy Badger fights for a better web by incentivizing companies to respect your privacy. Privacy Badger sends the Global Privacy Control and Do Not Track signals to tell companies not to track you or share your data. If they ignore these signals, Privacy Badger will block them, whether they are advertisers or trackers of other kinds. By withholding your browsing data from advertisers, data brokers, and Big Tech companies, you can help make online surveillance less profitable. 

How Privacy Badger Protects You From Online Tracking

Whether you're looking to protect your sensitive information from data brokers or simply don’t want Big Tech monetizing your data, Privacy Badger is here to help.

Over the past decade, Privacy Badger has evolved to fight many different methods of online tracking. Here are some of the ways that Privacy Badger protects your data:

  • Blocks Third-Party Trackers and Cookies: Privacy Badger stops tracking code from loading on sites that you visit. That prevents companies from collecting data about your online activity on sites that they don’t own. 
  • Sends the GPC Signal to Opt Out of Data Sharing: Privacy Badger sends the Global Privacy Control (GPC) signal to opt out of websites selling or sharing your personal information. This signal is legally binding in some states, including California, Colorado, and Connecticut. 
  • Stops Social Media Companies From Tracking You Through Embedded Content: Privacy Badger replaces page elements that track you but are potentially useful (like embedded tweets) with click-to-activate placeholders. Social media buttons, comments sections, and video players can send your data to other companies, even if you don’t click on them.
  • Blocks Link Tracking on Google and Facebook: Privacy Badger blocks Google and Facebook’s attempts to follow you whenever you click a link on their websites. Google not only tracks the links you visit from Google Search, but also the links you click on platforms that feel more private, like Google Docs and Gmail.
  • Blocks Invasive “Fingerprinting” Trackers: Privacy Badger blocks trackers that try to identify you based on your browser's unique characteristics, a particularly problematic form of tracking called “fingerprinting.” 
  • Automatically Learns to Block New Trackers: Our Badger Swarm research project continuously discovers new trackers for Privacy Badger to block. Trackers are identified based on their behavior, not just human-curated blocklists.
  • Disables Harmful Chrome Settings: Privacy Badger automatically disables Google Chrome settings that are bad for your privacy.
  • Easy to Disable on Individual Sites While Maintaining Protections Everywhere Else: If blocking harmful trackers ends up breaking something on a website, you can disable Privacy Badger for that specific site while maintaining privacy protections everywhere else.
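
Under the hood, the opt-out signals in the list above are just HTTP request headers: the Global Privacy Control signal arrives as `Sec-GPC: 1`, and the older Do Not Track signal as `DNT: 1`. As a rough sketch of what a privacy-respecting site could do with them, here is a minimal Python helper (the function name and the plain-dict representation of request headers are illustrative, not from any particular framework; the header names come from the GPC proposal and the original DNT specification):

```python
def honors_opt_out(headers: dict) -> bool:
    """Return True if a request carries a GPC or DNT opt-out signal.

    Per the GPC proposal, a participating browser sends `Sec-GPC: 1`;
    the legacy Do Not Track signal is `DNT: 1`. HTTP header names are
    case-insensitive, so we normalize before checking.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1" or normalized.get("dnt") == "1"

# A browser with Privacy Badger installed sends something like this:
request_headers = {"Host": "example.com", "Sec-GPC": "1", "DNT": "1"}
print(honors_opt_out(request_headers))  # prints True
```

A site that sees the signal should treat the visitor as having opted out of the sale or sharing of their data; in states like California and Colorado, ignoring it can carry legal consequences.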

All of these privacy protections work automatically when you install Privacy Badger—there’s no setup required! And it turns out that when Privacy Badger blocks tracking, you’ll also see fewer ads and your pages will load faster. 

You can always check to see what Privacy Badger has done on the site you’re visiting by clicking on Privacy Badger’s icon in your browser toolbar.

Fight Corporate Surveillance by Spreading the Word About Privacy Badger

Privacy is a team sport. The more people who withhold their data from data brokers and Big Tech companies, the less profitable online surveillance becomes. If you haven’t already, visit privacybadger.org to install Privacy Badger on your web browser. And if you like Privacy Badger, tell your friends about how they can join us in fighting for a better web!

Install Privacy Badger

Lena Cohen

A New Tool to Detect Cellular Spying | EFFector 37.3

1 day 5 hours ago

Take some time during your Spring Break to catch up on the latest digital rights news by subscribing to EFF's EFFector newsletter!

This edition of the newsletter covers our new open source tool to detect cellular spying, Rayhunter; The Foilies 2025, our tongue-in-cheek awards to the worst responses to public records requests; and our recommendations to the NSF for the new AI Action Plan to put people first.

You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you can view it by clicking the button below:

LISTEN ON YouTube

EFFECTOR 37.3 - A NEW TOOL TO DETECT CELLULAR SPYING

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

How to Delete Your 23andMe Data

1 day 6 hours ago

This week, the genetic testing company 23andMe filed for bankruptcy, which means the genetic data the company collected on millions of users is now up for sale. If you do not want your data included in any potential sale, it’s a good time to ask the company to delete it.

When the company first announced it was considering a sale, we highlighted many of the potential issues, including selling that data to companies with poor security practices or direct links to law enforcement. With this bankruptcy, the concerns we expressed last year remain the same. It is unclear what will happen with your genetic data if 23andMe finds a buyer, and that uncertainty is a clear indication that you should consider deleting your data. California attorney general Rob Bonta agrees.

First: Download Your Data

Before you delete your account, you may want to download the data for your own uses. If you do so, be sure to store it securely. To download your data:

  1. Log into your 23andMe account and click your username, then click "Settings." 
  2. Scroll down to the bottom where it says "23andMe Data" and click "View."
  3. Here, you'll find the option to download various parts of your 23andMe data. The most important ones to consider are:
    1. The "Reports Summary" includes details like the "Wellness Reports," "Ancestry Reports," and "Traits Reports."
    2. The "Ancestry Composition Raw Data" is the company's interpretation of your raw genetic data.
    3. If you were using the DNA Relatives feature, the "Family Tree Data" includes all the information about your relatives. Based on the descriptions of the data we've seen, this sounds like the data that bad actors scraped in the company's 2023 breach.
    4. You can also download the "Raw data," which is the uninterpreted version of your DNA. 

There are other types of data you can download on this page, though much of it will not be of use to you without special software. But there's no harm in downloading it all. 

How to Delete Your Data

Finally, you can delete your data and revoke consent for research. While the deletion page doesn’t make this clear, this also authorizes the company to destroy your DNA sample, if you hadn’t already asked it to do so. You can also make this request explicitly in the Account Preferences section.

If you're still on the page to download your data from the steps above, you can skip to step three. Otherwise:

  1. Click your username, then click "Settings." 
  2. Scroll down to the bottom where it says "23andMe Data" and click "View."
  3. Scroll down to the bottom of this page, and click "Permanently Delete Data."
  4. You should get a message stating that 23andMe received the request but you need to confirm by clicking a link sent to your email. 
  5. Head to the email account associated with your 23andMe account and find the email titled "23andMe Delete Account Request." Click the "Permanently Delete All Records" button at the bottom of the email, and you will be taken to a page that says "Your data is being deleted" (you may need to log in again if you logged out).

23andMe should give every user a real choice to say “no” to a data transfer in this bankruptcy and ensure that any buyer makes real privacy commitments. Other consumer genetic genealogy companies should proactively take these steps as well. Our DNA contains our entire genetic makeup. It can reveal where our ancestors came from, who we are related to, our physical characteristics, and whether we are likely to get genetically determined diseases. Even if you don’t add your own DNA to a private database, a relative could make that choice for you by adding their own.

This incident is an example of why this matters, and how certain features that may seem useful in the moment can be weaponized in novel ways. A bankruptcy should not result in our data getting shuffled off to the highest bidder without our input or a guarantee of real protections.

Thorin Klosowski

Saving the Internet in Europe: Fostering Choice, Competition and the Right to Innovate

2 days 11 hours ago

This is the fourth installment in a four-part blog series documenting EFF's work in Europe. You can read additional posts here.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.   

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and discuss how what happens in Europe can affect digital rights across the globe.  

EFF’s Approach to Competition  

Market concentration and monopoly power among internet companies and internet access providers affect many of EFF’s issues, particularly innovation, consumer privacy, net neutrality, and platform censorship. And we have said it many times: Antitrust law and rules on market fairness are powerful tools with the potential either to further cement the hold of established giants over a market or to challenge incumbents and spur innovation and choice that benefit users. Antitrust enforcement must hit monopolists where it hurts: ensuring that anti-competitive behaviors like abuse of dominance by multi-billion-dollar tech giants come at a price high enough to force real change.

The EU has recently shown that it is serious about cracking down on Big Tech companies with its full arsenal of antitrust rules. For example, in a high-stakes appeal in 2022, EU judges hit Google with a record fine of more than €4.1 billion for abusing its dominant position by locking Android users into its search engine (a ruling now pending before the Court of Justice).

We believe that with the right dials and knobs, clever competition rules can complement antitrust enforcement and ensure that firms that grow top-heavy and sluggish are displaced by nimbler new competitors. Good competition rules should enable better alternatives that protect users’ privacy and enhance users’ technological self-determination. In the EU, this requires not only proper enforcement of existing rules but also new regulation that tackles gatekeepers’ dominance before harm is done.

The Digital Markets Act  

The DMA will probably turn out to be one of the most impactful pieces of EU tech legislation in history. It’s complex, but the overall approach is to place new requirements and restrictions on online “gatekeepers”: the largest tech platforms, which control access to digital markets for other businesses. These requirements are designed to break down the barriers businesses face in competing with the tech giants.

Let’s break down some of the DMA’s rules. If enforced robustly, the DMA will make it easier for users to switch services, install third-party apps and app stores, and have more power over default settings on their mobile computing devices. Users will no longer be steered into sticking with the defaults embedded in their devices and can choose, for example, their own default browser on Apple’s iOS. The DMA also tackles data collection practices: gatekeepers can no longer cross-combine user data or sign users into new services without their explicit consent, and must provide them with a specific choice. A “pay or consent” advertising model as proposed by Meta will probably not cut it.

There are also new data access and sharing requirements that could benefit users, such as the right of end users to request effective portability of data and get access to effective tools to this end. One section of the DMA even requires gatekeepers to make their person-to-person messaging systems (like WhatsApp) interoperable with competitors’ systems on request—making it a globally unique ex ante obligation in competition regulation. At EFF, we believe that interoperable platforms can be a driver for technological self-determination and a more open internet. But even though data portability and interoperability are anti-monopoly medicine, they come with challenges: Ported data can contain sensitive information about you and interoperability poses difficult questions about security and governance, especially when it’s mandated for encrypted messaging services. Ideally, the DMA should be implemented to offer better protections for users’ privacy and security, new features, new ways of communication and better terms of service.  

There are many more do's and don'ts in the EU's new fairness rulebook, such as the prohibition on platforms favoring their own products and services over those of rivals in ranking, crawling, and indexing (ensuring users a real choice!), along with many other measures. All these and other requirements are meant to create more fairness and contestability in digital markets—a laudable objective. If done right, the DMA presents an option for a real change for technology users—and a real threat to current abusive or unfair industry practices by Big Tech. But if implemented poorly, it could create more legal uncertainty, restrict free expression, or even legitimize the status quo. It is now up to the European Commission to bring the DMA’s promises to life.

Public Interest 

As the EU’s 2024–2029 mandate is now in full swing, it will be important not to lose sight of the big picture. Fairness rules can only be truly fair if they follow a public-interest approach by empowering users, businesses, and society more broadly, and by making it easier for users to control the technology they rely on. And we cannot stop here: the EU must strive to foster a public interest internet and support open-source and decentralized alternatives. Competition and innovation are interconnected forces, and the recent rise of the Fediverse makes this clear. Platforms like Mastodon and Bluesky thrive by filling gaps (and addressing frustrations) left by corporate giants, offering users more control over their experience and ultimately strengthening the resilience of the open internet. The EU should generally support user-controlled alternatives to Big Tech and use smart legislation to foster interoperability for services like social networks. In an ideal world, users are no longer locked into dominant platforms and the ad-tech industry—responsible for pervasive surveillance and other harms—is brought under control.

What we don’t want is a European Union that conflates fairness with protectionist industrial policies or reacts to geopolitical tensions with measures that could backfire on digital openness and fair markets. The enforcement of the DMA and new EU competition and digital rights policies must remain focused on prioritizing user rights and ensuring compliance from Big Tech—not tolerating malicious (non)compliance tactics—and upholding the rule of law rather than politicized interventions. The EU should avoid policies that could lead to a fragmented internet and must remain committed to net neutrality. It should also not hesitate to counter the concentration of power in the emerging AI stack market, where control over infrastructure and technology is increasingly in the hands of a few dominant players. 

EFF will be watching. And we will continue to fight to save the internet in Europe, ensuring that fairness in digital markets remains rooted in choice, competition, and the right to innovate. 

Christoph Schmon

230 Protects Users, Not Big Tech

3 days 5 hours ago

Once again, several Senators appear poised to gut one of the most important laws protecting internet users: Section 230 (47 U.S.C. § 230).

Don’t be fooled: many of Section 230’s detractors claim that this critical law only protects Big Tech. The reality is that Section 230 provides limited protection for all platforms, though the biggest beneficiaries are small platforms and users. Why else would some of the biggest platforms be willing to endorse a bill that guts the law? In fact, repealing Section 230 would only cement the status of Big Tech monopolies.

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online. 

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.  

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others. 

The law is not a shield for Big Tech. Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.  

If Big Tech is at the table in any future discussion for what rules should govern internet speech, EFF has no confidence that the result will protect and benefit internet users, as Section 230 does currently. If Congress is serious about rewriting the internet’s speech rules, it must spend time listening to the small services and everyday users who would be harmed should they repeal Section 230.  

Section 230 Protects Everyday Internet Users 

There’s another glaring omission in the arguments to end Section 230: how central the law is to ensuring that every person can speak online, and that Congress or the Administration does not get to define what speech is “good” and “bad”.   

Let’s start with the text of Section 230. Importantly, the law protects both online services and users. It says that “no provider or user shall be treated as the publisher” of content created by another. That's in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of others.   

Section 230 protects individual bloggers, anyone who forwards an email, and social media users who have ever reshared or retweeted another person’s content online. Section 230 also protects individual moderators who might delete or otherwise curate others’ online content, along with anyone who provides web hosting services.

As EFF has explained, online speech is frequently targeted with meritless lawsuits. Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot. Engine has estimated that without Section 230, many startups and small services would be inundated with costly litigation that could drive them offline. Even entirely meritless lawsuits cost thousands of dollars to fight, and often tens or hundreds of thousands of dollars.

Deleting Section 230 Will Create A Field Day For The Internet’s Worst Users  

Section 230’s detractors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,” and imagine that removing Section 230 will somehow force these services to better moderate user-generated content on their sites.  

These arguments fundamentally misunderstand Section 230. The law lets platforms decide, largely for themselves, what kind of speech they want to host, and to remove speech that doesn’t fit their own standards without penalty. 

If lawmakers are legitimately motivated to help online services root out unlawful activity and terrible content appearing online, the last thing they should do is eliminate Section 230. The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content, and, in cases of illegal behavior, to work with law enforcement to hold those users responsible.

If Congress deletes Section 230, the pre-digital legal rules around distributing content would kick in. That law strongly discourages services from moderating or even knowing about user-generated content. This is because the more a service moderates user content, the more likely it is to be held liable for that content. Under that legal regime, online services will have a huge incentive to just not moderate and not look for bad behavior. This would result in the exact opposite of their goal of protecting children and adults from harmful content online.

India McKinney

Podcast Episode Rerelease: Dr. Seuss Warned Us

4 days 8 hours ago

This episode was first released on May 2, 2023.

We’re excited to announce that we’re working on a new season of How to Fix the Internet, coming in the next few months! But today we want to lift up an earlier episode that has particular significance right now. In 2023, we spoke with our friend Alvaro Bedoya, who was appointed as a Commissioner for the Federal Trade Commission in 2022. In our conversation, we talked about his work there, about why we need to be wary of workplace surveillance, and why it’s so important for everyone that we strengthen our privacy laws. We even talked about Dr. Seuss!

Last week the Trump administration attempted to terminate Alvaro, along with another FTC commissioner, even though Alvaro's appointment doesn't expire until 2029. The law is clear: The president does not have the power to fire FTC commissioners at will. The FTC’s focus on protecting privacy has been particularly important over the last five years; with Alvaro's firing, the Trump Administration has stepped far away from that needed focus to protect all of us as users of digital technologies.

We hope you’ll take some time to listen to this May 2023 conversation with Alvaro about the better digital world he’s been trying to build through his work at the FTC and his previous work as the founding director of the Center on Privacy & Technology at Georgetown University Law Center.

Dr. Seuss wrote a story about a Hawtch-Hawtcher Bee-Watcher whose job it is to watch his town’s one lazy bee, because “a bee that is watched will work harder, you see.” But that doesn’t seem to work, so another Hawtch-Hawtcher is assigned to watch the first, and then another to watch the second... until the whole town is watching each other watch a bee.

[Embedded audio player] Privacy info. This embed will serve content from simplecast.com

You can also find this episode on the Internet Archive and on YouTube.

To Federal Trade Commissioner Alvaro Bedoya, the story—which long predates the internet—is a great metaphor for why we must be wary of workplace surveillance, and why we need to strengthen our privacy laws. Bedoya has made a career of studying privacy, trust, and competition, and wishes for a world in which we can do, see, and read what we want, living our lives without being held back by our identity, income, faith, or any other attribute. In that world, all our interactions with technology—from social media to job or mortgage applications—are on a level playing field.

Bedoya speaks with EFF’s Cindy Cohn and Jason Kelley about how fixing the internet should allow all people to live their lives with dignity, pride, and purpose.

In this episode, you’ll learn about: 

  • The nuances of work that “bossware,” employee surveillance technology, can’t catch. 
  • Why the Health Insurance Portability and Accountability Act (HIPAA) isn’t the privacy panacea you might think it is. 
  • Making sure that one-size-fits-all privacy rules don’t backfire against new entrants and small competitors. 
  • How antitrust fundamentally is about small competitors and working people, like laborers and farmers, deserving fairness in our economy. 

Alvaro Bedoya was nominated by President Joe Biden, confirmed by the U.S. Senate, and sworn in May 16, 2022 as a Commissioner of the Federal Trade Commission; his term expires in 2029. Bedoya was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. He has been influential in research and policy at the intersection of privacy and civil rights, and co-authored a 2016 report on the use of facial recognition by law enforcement and the risks that it poses. He previously served as the first Chief Counsel to the Senate Judiciary Subcommittee on Privacy, Technology and the Law after its founding in 2011, and as Chief Counsel to former U.S. Sen. Al Franken (D-MN); earlier, he was an associate at the law firm WilmerHale. A naturalized immigrant born in Peru and raised in upstate New York, Bedoya previously co-founded the Esperanza Education Fund, a college scholarship for immigrant students in the District of Columbia, Maryland, and Virginia. He also served on the Board of Directors of the Hispanic Bar Association of the District of Columbia. He graduated summa cum laude from Harvard College and holds a J.D. from Yale Law School, where he served on the Yale Law Journal and received the Paul & Daisy Soros Fellowship for New Americans.

Transcript

ALVARO BEDOYA
One of my favorite Dr. Seuss stories is about this town called Hawtch Hawtch. So, in the town of Hawtch Hawtch, there's a town bee and, you know, they presumably make honey, but the Hawtch Hawtchers one day realize that the bee that is watched will work harder, you see? And so they hire a Hawtch Hawtcher to be on bee-watching watch, but then, you know, the bee isn't really doing much more than it normally is doing. And so they think, oh, well, the Hawtch Hawtcher is not watching hard enough. And so they hire another Hawtch Hawtcher to be on bee-watcher-watcher watch, I think is what Dr. Seuss calls it. And so there's this wonderful drawing of 12 Hawtch Hawtchers, you know, each one either on watching-watching watch or, actually, you know, the first one's watching the bee, and the whole thing is just completely absurd.

CINDY COHN
That’s FTC Commissioner Alvaro Bedoya describing his favorite Dr. Seuss story – which he says works perfectly as a metaphor for why we need to be wary of workplace surveillance, and strengthen our privacy laws.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley. EFF’s Associate Director of Digital Strategy. This is our podcast, How to Fix the Internet.

Our guest today is Alvaro Bedoya. He’s served as a commissioner for the Federal Trade Commission since May of 2022, and before that he was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. So he thinks a lot about many of the issues we’re also passionate about at EFF – trust, privacy, competition, for example – and about how these issues are all deeply intertwined.

CINDY COHN
We decided to start with our favorite question: What does the world look like if we get this stuff right?

ALVARO BEDOYA
For me, I think it is a world where you wake up in the morning, live your life and your ability to do what you want to do. See what you wanna see. Read what you wanna read and live the life that you want to live is unconnected to who you are in a good way.

In other words, what you look like, what side of the tracks you're from, how much money you have. Your gender, your gender identity, your sexuality, your religious beliefs, that those things don't hold you down in any way, and that you can love those things and have those things be a part of your life. But that they only empower you and help you. I think it's also a world… we see the great parts of technology. You know, one of the annoying things of having worked in privacy for so long is that you're often in this position where you have to talk about how technology hurts people. Technology can be amazing, right?

Mysterious, wonderful, uh, empowering. And so I think this is a world where those interactions are defined by those positive aspects of technology. And so for me, when I think about where those things go wrong, sorry, falling into old tropes here, but thinking about it positively, increasingly, people are applying for jobs online. They're applying for mortgages online. They are doing all these capital letter decisions that are now mediated by technology.

And so this world is also a world where, again, you are treated fairly in those decisions and you don't have to think twice about, hold on a second, I just applied for a loan, I just applied for a job, you know, I just applied for a mortgage. Is my zip code going to be used against me? Is my social media profile, you know, that reveals my interests gonna be used against me? Is my race gonna be used against me? In this world, none of that happens, and you can focus on preparing for that job interview and finding the right house for you and your family, finding the right rental for you and your family.

Now, I think it's also a world where you can start a small business without fear that the simple fact that you're not connected to a bigger platform or a bigger brand will be used against you, where you have a level playing field to win people over.

CINDY COHN
I think that's great. You know, leveling the playing field is one of the original things that we were hoping, you know, that digital technologies could do. It also makes me think of that old New Yorker thing, you know, on the internet, no one knows you're a dog.

ALVARO BEDOYA
(Laughs) Right.

CINDY COHN
In some ways I think the vision is: on the internet – you know, again, I don't think that people should leave the other parts of their lives behind when they go on the internet, your identity matters – but the fact that you're a dog doesn't mean you can't play. I'm probably butchering that poor cartoon too much.

ALVARO BEDOYA
No, I don't. I don't think you are, but I don't know why, but it reminded me of one other thing, which is, in this world, you go to work – whether it's at home in your basement like I am now, you know, or in your car, or at an office, uh, uh, at a business – and you have a shot at working with pride and dignity, where every minute of your work isn't measured and quantified. Where you have the ability to focus on the work rather than the surveillance of that work, and the judgments that other people might make around that minute surveillance, and, and you can focus on the work itself. I think too often people don't recognize the strangeness of the fact that when you watch TV, when you watch a streaming site, when you watch cable, when you go shopping, all of that stuff is protected by privacy law. And yet most of us spend a good part of our waking hours working, and there are really no federal, uh, uh, worker privacy protections. That, for me, is one of the biggest gaps in our sectoral privacy system that we've yet to confront.

But the world that you wanted me to talk about definitely is a world where you can go to work and do that work with dignity and pride, uh, without minute surveillance of everything you do.

CINDY COHN
Yeah. And I think inherent in that is this, you know, this, this observation that, you know, being watched all the time doesn't work as a matter of humanity, right? It's a human rights issue to be watched all the time. I mean, that's why when they build prisons, right, it's the panopticon, right? That's where that idea comes from, is this idea that people who have lost their liberty get watched all the time.

So that has to be a part of building this better future, a space where, you know, we’re not being watched all the time. And I think you're exactly right that we kind of have this gigantic hole in people's lives, which is their work lives where it's not only that people don't have enough freedom right now, it's actually headed in the other direction. I know this is something that we think about a lot, especially Jason does at EFF.

JASON KELLEY
Yeah, I mean we, we write quite a bit about boss ware. We've done a variety of research into boss ware technology. I wonder if you could talk a little bit about maybe, like, some concrete examples that you've seen where that technology is sort of coming to fruition, if you will. Like, it's being used more and more, and, and why we need to, to tackle it, because I think a lot of people listening to this aren't, aren't as familiar with it as they could be.

And at the top of this episode we heard you describe your favorite Dr. Seuss tale – about the bees and the watchers, and the watchers watching the watchers, and so on to absurdity. Now can you tell us why you think that’s such an important image?

ALVARO BEDOYA
I think it's a valuable metaphor for the fact that a lot of this surveillance software may not offer as complete a picture as employers might think it does. It may not have the effect that employers think it does, and it may not ultimately do what people want it to do. And so I think that anyone who is thinking about using the software should ask hard questions about ‘Is this actually gonna capture what I'm being told it will capture? Does it account for the 20% of tasks in my workers' jobs?’ So, you know, there's always an 80/20 rule, and so, you know, as with work, most of what you do is one thing, but there's usually 20% that's another thing. And I think there's a lot of examples where that 20% – like, you know, occasionally using the bathroom, right – isn't accounted for by the software. And so it looks like the employee’s slacking, but actually they're just being a human being. And so I would encourage people to ask hard questions about the sophistication of the software and how it maps onto the realities of work.

JASON KELLEY
Yeah. That's a really accurate way for people to start to think about it because I think a lot of people really feel that. Um, if they can measure it, then it must be useful.

ALVARO BEDOYA
Yes!

JASON KELLEY
In my own experience, before I worked at EFF, I worked somewhere where, eventually, a sort of boss ware type tool was installed and it had no connection to the job I was doing.

ALVARO BEDOYA
That’s interesting.

JASON KELLEY
It was literally disconnected.

ALVARO BEDOYA
Can you share the general industry?

JASON KELLEY
It was software. I worked as a, I was in marketing for a software company, and, um, I was remote, and it was remote way before the pandemic. So, you know, there's sort of, I think boss ware has increased probably during the pandemic. I think we've seen that because people are worried that if you're not in the office, you're not working.

ALVARO BEDOYA
Right.

JASON KELLEY
There's no evidence. Boss ware can't give evidence that that's true. It can just give evidence in, you know, whether you're at your computer –

ALVARO BEDOYA
Right. Whether you're typing.

JASON KELLEY
Whether you're typing. Yeah. And what happened in my scenario, without going into too much detail, was that it mattered what window I was in. And it didn't always. At first it was just like, are you at your computer for eight hours? And then it was, are you at your computer in these specific windows for eight hours? And then it was, are you typing in those specific windows for eight hours? The screws kept getting twisted, right, until I was actually at my computer for 12 hours to get eight hours of ‘productive’ work in, as it was called.

And so, yeah, I left that job. Obviously, I work at EFF now for a reason. And it was one of the things that I remember when I started at EFF – part of what I like about what we do is that we think about people's humanity in what they're doing and how that interacts with technology.

And I think boss ware is one of those areas where it doesn't, um, because it, it is so common for an employer to sort of disengage from the employee and sort of think of them as, like, a tool. It's, it's an area where it's easy to install something, or try to install something, where that happens. So I'm glad you're working on it. It's definitely an issue.

ALVARO BEDOYA
Well, I'm thinking about it, you know, and it's certainly something I, I care about. And I think, I think my hope is, my hope is that, um, you know, the pandemic was horrific. Is horrific. My hope is that one of the realizations coming out of it, from so many people going remote, is the realization that, particularly for some jobs, you know, uh, um, a lot of us are lucky to have these jobs where a lot of our time turns on being able to think clearly and carefully about, about something, and that's a luxury.

Um, but particularly for those jobs – my, my suspicion is for an even broader range of jobs – this idea of a workday where you sit down, work eight hours, and get up, you know, and, and that that is the ideal workday: I don't think that's a maximally productive day. And I think there's some really interesting trials around the four-day work week, and my hope is that, you know, when my kids are older, there will be a recognition that working harder, staying up later, getting up earlier, is not the best way to get the best work from people. And people need time to think. They need time to relax. They need time to process things. And so that is my hope, that that is one of the realizations around it. But you're exactly right, Jason, that one of my concerns around this software is that there's this idea that if it can be measured, it must be important. And I think you use a great example, speaking in general here, of software that may presume that if you aren't typing, you're not working, or if you're not in a window, you're not working, when actually you might be doing the most important work. You know, jotting down notes, organizing your thoughts, that lets you do the best stuff, as it were.

Music transition

JASON KELLEY
I want to jump in for a little mid-show break to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. So a tip of the hat to them for their assistance.

Now back to our conversation with Alvaro Bedoya.

CINDY COHN
Privacy issues are of course near and dear to our hearts at EFF and I know that's really the world you come out of as well. Although your perch is a little, a little different right now. We came to the conclusion that we can't address privacy if we don't address competition and antitrust issues. And I think you've come someplace similar perhaps, and I'd love for you to talk about how you think privacy and questions around competition and antitrust intertwine.

ALVARO BEDOYA
So I will confess, I don't know if I have figured it out, but I can offer a few thoughts. First of all, I think that a lot of the antitrust claims are not what they seem to be. When companies talk about how important it is to have gatekeeping around app stores because of privacy – and this is one of the reasons I support the bill, I think it's the Blumenthal-Blackburn bill, to, um, change the way app stores are run and, and, and kick the tires on that gatekeeping model – because I am skeptical about a lot of those pro-privacy, anti-antitrust claims. That is one thing. On the other hand, I do think we need to think carefully about the rules that are put in place backfiring against new entrants and small competitors. And I think a lot of legislators and policy makers in the US and Europe appreciate this and are getting this right, and institute a certain set of rules for bigger companies and different ones for smaller ones. I think one of the ways this can go wrong is when it's just about the size of the company rather than the size of the user base.

I think that if you are, you know, suddenly at a hundred million users, you're not a small company, even if you have, you know, a small number of employees. But I, I do think that those concerns are real, and that policy makers and people in my role need to think about the costs of privacy compliance in a way that does not inadvertently create an unlevel playing field for, for small competitors.

I will confess that sometimes things that appear to be, uh, um, antitrust problems are privacy problems, in that they reflect legal gaps around the sectoral privacy framework that unfortunately has yet to be updated. So I think I can give one example, where there was the recent merger of, uh, Amazon and One Medical. And, well, I can't go into the antitrust analysis that may or may not have occurred at the commission, but I wrote a statement on the completion of the merger which highlighted a gap that we have around the anonymization rule in our health privacy law. For example, people think that HIPAA is actually the Health Information Privacy Act. It's not, it's actually the Health Insurance Portability and Accountability Act. And I think that little piece of common wisdom speaks to a broader gap in our understanding of health privacy. So I think a lot of people think HIPAA will protect their data and that it won't be used in other ways by their doctor, by whoever it is that has their HIPAA-protected data. Well, it turns out that in 2000, when HHS promulgated the privacy rule in good faith, it had a provision that said, hey, look, we want to encourage the improvement in health services, we want to encourage health research, and we want to encourage public health. And so we're gonna say that if you remove these, you know, 18 identifiers from health data, it can be used for other purposes. And if you look at the rule that was issued, the justification for it is that they want to promote public health.

Unfortunately, they did not put a use restriction on that. And so now, if any doctor's practice, anyone covered by HIPAA – and I'm not gonna go into the rabbit hole of who is and who isn't, but if you're covered by HIPAA – all they need to do is remove those identifiers from the data.

And HHS is unfortunately very clear that you can essentially do a whole lot of things that have nothing to do with healthcare as long as you do that. And what I wrote in my statement is that would surprise most consumers. Frankly, it surprised me when I connected the dots.

CINDY COHN
What I'm hearing here, which I think is really important is, first of all, we start off by thinking that some of our privacy problems are really due to antitrust concerns, but what we learn pretty quickly when we're looking at this is, first of all, privacy is used frankly, as a blocker for common sense reforms that we might need, that these giants come in and they say, well, we're gonna protect people's privacy by limiting what apps are in the app store. And, and we need to look closely at that because it doesn't seem to be necessarily true.

So first of all, you have to watch out for the kind of fake privacy argument, or the argument that the tech giants need to be protected because they're protecting our privacy, and we need to really interrogate that. And at the bottom of it, it often comes down to the fact that we haven't really protected people's privacy as a legal matter, right? We, we, we ground ourselves in Larry Lessig's, uh, four pillars of change, right? Code, norms, laws, and markets. And, you know, what they're saying is, well, we have to protect what is essentially a non-market – that the tech giants, that markets, will protect privacy, and so therefore we can't introduce more competition. And I think at the bottom of this, what we find a lot is that, you know, the law should be setting the baseline, and then markets can build on top of that. But we've got things a little backwards. And I think that's especially true in health. It's, it's, it's very front and center for those of us who care about reproductive justice, who are looking at the way health insurance companies are now part and parcel of other data analysis companies. And the Amazon/One Medical one is, is another one of those, where unless we get the privacy law right, it's gonna be hard to get at some of these other problems.

ALVARO BEDOYA
Yeah. And those are the three things that I think a lot about. First, that those pro-privacy arguments that seem to cut against, uh, competition concerns are often not what they seem.

Second, that we do need to take into account how one-size-fits-all privacy rules could backfire in a way that hurts, uh, small companies, small competitors, uh, who are the lifeblood of, uh, innovation and employment, frankly. And, and lastly, sometimes what we're actually seeing are gaps in our sectoral privacy system.

CINDY COHN
One of the things that I know you've, you've talked about a little bit is, um, you're calling it a return to fairness, and that's specifically talking about a piece of the FTC’s authority. And I wonder if you could talk about that a little more and how you see that fitting into a, a better world.

ALVARO BEDOYA
Sure. One of the best parts of this job, um, was having this need and opportunity to immerse myself in antitrust. So as a Senate staffer, I did a little bit of work on the Comcast, uh, NBC merger against, against that merger, uh, for my old boss, Senator Franken. But I didn't spend a whole lot of time on competition concerns. And so when I was nominated, I, you know, quite literally, you know, ordered antitrust treatises and read them cover to cover.

CINDY COHN
Wonderful!

ALVARO BEDOYA
Well, sometimes it's wonderful and sometimes it's not. But in this case it was. And what you see is this complete two-sided story, where on the one hand you have this really anodyne, efficiency-based description of antitrust, where it is about enforcing abstract laws and maximizing efficiency, and the saying, you know, antitrust protects competition, not competitors. And you so quickly lose sight of why we have antitrust laws and how we got them.

And so I didn't just read treatises on the law. I also read histories. And one of the things that you read and realize when you read those histories is that antitrust isn't about efficiency. Antitrust is about people. And yes, it's about protecting competition, but the reason we have it is because of what happened to certain people. And so, you know, the Sherman Act – you listen to those floor debates, it is fascinating, because first of all, everyone agrees as to what we want to do, what Congress wanted to do. Congress wanted to rein in the trusts. They wanted to rein in John Rockefeller, JP Morgan, the beef trust, the sugar trust, the steel trust. Not to mention, you know, Rockefeller's oil trust. The most common concern on the floor of the Senate was what was happening to cattlemen because of concentration in meat packing plants, and the prices they were getting when they brought their cattle to processors and to market. And then you look at, uh, 1914, the Clayton Act. Again, there was outrage, true outrage, about how those antitrust laws were used: 10 out of the first 12 antitrust injunctions in our, in our country post-Sherman were targeted at workers. And not just any workers. They were targeted at rail car manufacturers in Pullman, where it was an integrated workforce and they were working extremely long hours for a pittance in wages, and they decided to strike.

And some of the first injunctions we saw in this country were used to break their strike. Or how it was used against, uh, uh, I think they're called drayage men or dray men, in New Orleans – port workers and dock workers in New Orleans – who again were working these 12-hour days for, for nothing in wages. And this beautiful thing happened in New Orleans, where the entire city went on strike.

It was, I think it was 30 unions. It was like the typographical workers' unions – and if you think that that refers to people typing on keyboards, it does. From the people typing on mechanical typewriters to the people, you know, loading and unloading ships in the port of New Orleans, everyone went on strike. And they had this, this organization called the Amalgamated Working Men's Council. And, um, they wanted a 10-hour, uh, uh, workday, they wanted overtime pay, and they wanted, uh, uh, union shops. They got two out of those three things. But, um, I think it was the trade board that was so unhappy with it that they, uh, persuaded federal prosecutors to sue under Sherman.

And it went before Judge Billings. And Judge Billings said, absolutely, this is a violation of the antitrust laws. And the curious thing about Judge Billings' decision – one of the first Sherman decisions in a federal court – is that he didn't cite to restraint-of-trade law for the proposition that the strike was a restraint on trade. He cited to much older decisions about criminal conspiracies and unions to justify his decision.

And so what I'm trying to say is over and over and over again, whenever, you know, you look at the actual history of antitrust laws, you know, it isn't about efficiency, it's about fairness. It is about how small competitors and working people, farmers, laborers, deserve a level playing field. And in 1890, 1914, 1936, 1950, this was what was front and center for Congress.

CINDY COHN
It's great to end with a deep dive into the original intent of Congress to protect ordinary people and fairness with antitrust laws, especially in this time when history and original intent are so powerful for so many judges. You know, it’s solid grounding for going forward. But I also appreciate how you mapped the history to see how that Congressional intent was perverted by the judicial branch almost from the very start.

This shows us where we need to go to set things right but also that it’s a difficult road. Thanks so much Alvaro.

JASON KELLEY
Well, it's a rare privilege to get to complain about a former employer directly to a sitting FTC commissioner. So that was a very enjoyable conversation for me. It's also rare to learn something new about Dr. Seuss and a Dr. Seuss story, which we got to do. But as far as actual concrete takeaways go from that conversation, Cindy, what did you pull away from that really wide ranging discussion?

CINDY COHN
It’s always fun to talk to Alvaro. I loved his vision of a life lived with dignity and pride as the goal of our fixed internet. I mean, those are good solid north stars, and from them we can begin to see how that means that we use technology in a way that, for example, allows workers to just focus on their work. And honestly, while that gives us dignity, it also stops the kinds of mistakes we’re seeing, like tracking keystrokes or eye contact as secondary trackers that are feeding all kinds of discrimination.

So I really appreciate him really articulating, you know, what are the kinds of lives we wanna have. I also appreciate his thinking about the privacy gaps that get revealed as technology changes, and, and the, the story of healthcare, and how HIPAA doesn't protect us in the way that we'd hoped it would, in part because I think HIPAA didn't start off at a very good place. But as things have shifted – as, say, you know, One Medical is being bought by Amazon – suddenly we see that the presumption of who your insurance provider was, and what they might use that information for, has shifted a lot, and that the privacy law hasn't, hasn't kept up.

So I appreciate thinking about it from, you know, both of those perspectives, both, you know, what the law gets wrong and how technology can reveal gaps in the law.

JASON KELLEY
Yeah. That really stood out for me as well, especially the parts where Alvaro was talking about looking into the law in a way that he hadn't had to before. Like you say, because that is kind of what we do at EFF, at least part of what we do. And it's nice to hear that we are sort of on the same page, and that there are people in government doing that. There are people at EFF doing that. There are people all over, in different areas, doing that. And that's what we have to do, because technology does change so quickly and so much.

CINDY COHN
Yeah, and I really appreciate the deep dive he's done into antitrust law, revealing really that, that fairness is a deep, deep part of it. And this idea that it's only about efficiency, and especially efficiency for consumers only – it's ahistorical. And that's a good thing for us all to remember, since we, especially these days, have a Supreme Court that really, you know, likes history a lot, and grounds and limits what it does in history. The history's on our side in terms of, you know, bringing competition law, frankly, to the digital age.

JASON KELLEY
Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member or donate, or look at hoodies, t-shirts, hats or other merch.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

MUSIC CREDITS

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:

Lost track by airtone
Common ground by airtone
Probably shouldn’t by J Lang

Josh Richman

A Win for Encryption: France Rejects Backdoor Mandate

6 days 5 hours ago

In a moment of clarity after initially moving forward a deeply flawed piece of legislation, the French National Assembly has done the right thing: it rejected a dangerous proposal that would have gutted end-to-end encryption in the name of fighting drug trafficking. Despite heavy pressure from the Interior Ministry, lawmakers voted Thursday night (article in French) to strike down a provision that would have forced messaging platforms like Signal and WhatsApp to allow hidden access to private conversations.

The vote is a victory for digital rights, for privacy and security, and for common sense.

The proposed law was a surveillance wishlist disguised as anti-drug legislation. Tucked into its text was a resurrection of the widely discredited “ghost” participant model—a backdoor that pretends not to be one. Under this scheme, law enforcement could silently join encrypted chats, undermining the very idea of private communication. Security experts have condemned the approach, warning it would introduce systemic vulnerabilities, damage trust in secure communication platforms, and create tools ripe for abuse.

The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties. They understood that encryption protects everyone, not just activists and dissidents, but also journalists, medical professionals, abuse survivors, and ordinary citizens trying to live private lives in an increasingly surveilled world.

A Global Signal

France’s rejection of the backdoor provision should send a message to legislatures around the world: you don’t have to sacrifice fundamental rights in the name of public safety. Encryption is not the enemy of justice; it’s a tool that supports our fundamental human rights, including the right to have a private conversation. It is a pillar of modern democracy and cybersecurity.

As governments in the U.S., U.K., Australia, and elsewhere continue to flirt with anti-encryption laws, this decision should serve as a model—and a warning. Undermining encryption doesn’t make society safer. It makes everyone more vulnerable.

This victory was not inevitable. It came after sustained public pressure, expert input, and tireless advocacy from civil society. It shows that pushing back works. But for the foreseeable future, misguided lobbyists for police and national security agencies will continue to push similar proposals—perhaps repackaged, or rushed through quieter legislative moments.

Supporters of privacy should celebrate this win today. Tomorrow, we will continue to keep watch.

Joe Mullin

New USPTO Memo Makes Fighting Patent Trolls Even Harder

6 days 6 hours ago

The U.S. Patent and Trademark Office (USPTO) just made a move that will protect bad patents at the expense of everyone else. In a memo released February 28, the USPTO further restricted access to inter partes review, or IPR—the process Congress created to let the public challenge invalid patents without having to wage million-dollar court battles.

If left unchecked, this decision will shield bad patents from scrutiny, embolden patent trolls, and make it even easier for hedge funds and large corporations to weaponize weak patents against small businesses and developers.

IPR Exists Because the Patent Office Makes Mistakes

The USPTO grants over 300,000 patents a year, but many of them should not have been issued in the first place. Patent examiners spend, on average, around 20 hours per patent, often missing key prior art or granting patents that are overly broad or vague. That’s how bogus patents on basic ideas—like podcasting, online shopping carts, or watching ads online—have ended up in court.

Congress created IPR in 2012 to fix this problem. IPR allows anyone to challenge a patent’s validity based on prior art, and it’s done before specialized judges at the USPTO, where experts can re-evaluate whether a patent was properly granted. It’s faster, cheaper, and often fairer than fighting it out in federal court.

The USPTO is Blocking Patent Challenges—Again

Instead of defending IPR, the USPTO is working to sabotage it. The February 28 memo reinstates a rule that allows for widespread use of “discretionary denials.” That’s when the Patent Trial and Appeal Board (PTAB) refuses to hear an IPR case for procedural reasons—even if the patent is likely invalid. 

Specifically, the memo reinstates widespread use of the Apple v. Fintiv rule, under which the USPTO often rejected IPR petitions whenever there was an ongoing district court case about the same patent. This is backwards. If anything, an active lawsuit is proof that a patent’s validity needs to be reviewed—not an excuse to dodge the issue.

In 2022, former USPTO Director Kathi Vidal issued a memo making clear that the PTAB should hear patent challenges when “a petition presents compelling evidence of unpatentability,” even if there is parallel court litigation. 

That 2022 guidance essentially saved the IPR system. Once PTAB judges were told to consider all petitions that showed “compelling evidence,” the procedural denials dropped to almost nothing. This February 28 memo signals that the USPTO will once again use discretionary denials to sharply limit access to IPR—effectively making patent challenges harder across the board.  

Discretionary Denials Let Patent Trolls Rig the System

The top beneficiary of this decision will be patent trolls, shell companies formed expressly for the purpose of filing patent lawsuits. Often patent trolls seek to extract a quick settlement before a patent can be challenged. With IPR becoming increasingly unavailable, that will be easier than ever. 

Patent owners know that discretionary denials will block IPRs if they file a lawsuit first. That’s why trolls flock to specific courts, like the Western District of Texas, where judges move cases quickly and rarely rule against patent owners.

By filing lawsuits in these troll-friendly courts, patent owners can game the system—forcing companies to pay up rather than risk millions in litigation costs.

The recent USPTO memo makes this problem even worse. Instead of stopping the abuse of discretionary denials, the USPTO is doubling down—undermining one of the most effective ways businesses, developers, and consumers can fight back against bad patents.

Congress Created IPR to Protect the Public—Not Just Patent Owners

The USPTO doesn’t get to rewrite the law. Congress passed IPR to ensure that weak patents don’t become weapons for extortionary lawsuits. By reinforcing discretionary denials with minimal restrictions, and, as a result, blocking access to IPRs, the USPTO is directly undermining what Congress intended.

Leaders at the USPTO should immediately revoke the February 28 memo. If they refuse, then, as we pointed out the last time IPR denials spiraled out of control, it’s time for Congress to step in and fix this. Congress must ensure that IPR remains a fast, affordable way to challenge bad patents, not just a tool for the largest corporations. Patent quality matters, because when bad patents stand, we all pay the price.

Joe Mullin

How Do You Solve a Problem Like Google Search? Courts Must Enable Competition While Protecting Privacy.

1 week ago

Can we get from a world where Google is synonymous with search to a world where other search engines have a real chance to compete? The U.S. and state governments’ bipartisan antitrust suit, challenging the many ways that Google has maintained its search monopoly, offers an opportunity.

Antitrust enforcers have proposed a set of complementary remedies, from giving users a choice of search engine, to forcing Google to spin off Chrome and possibly Android into separate companies. Overall, this is the right approach. Google’s dominance in search is too entrenched to yield to a single fix. But there are real risks to users in the mix as well: Forced sharing of people’s sensitive search queries with competitors could seriously undermine user privacy, as could a breakup without adequate safeguards.

Let’s break it down.

The Antitrust Challenge to Google Search

The Google Search antitrust suit began in 2020 under the first Trump administration, brought by the Department of Justice and 11 states. (Another 38 states filed a companion suit.) The heart of the suit was Google’s agreements with mobile phone makers, browser makers, and wireless carriers, requiring that Google Search be the default search engine, in return for revenue share payments including up to $20 billion per year that Google paid to Apple. A separate case, filed in 2023, challenged Google’s dominance in online advertising. Following a bench trial in summer 2023, Judge Amit Mehta of the D.C. federal court found Google’s search placement agreements to be illegal under the Sherman Antitrust Act, because they foreclosed competition in the markets for “general search” and “general search text advertising.”

The antitrust enforcers proposed a set of remedies in fall 2024, and filed a revised version this month, signaling that the new administration remains committed to the case. A hearing on remedies is scheduled for April.

The Obvious Fix: Ban Search Engine Exclusivity and Other Anticompetitive Agreements

The first part of the government’s remedy proposal bans Google from making the kinds of agreements that led to this lawsuit: agreements to make Google the default search engine on a variety of platforms, agreements to pre-install Google Search products on a platform, and other agreements that would give platforms an incentive not to develop a general search engine of their own. This would mean the end of Google’s pay-for-placement agreements with Apple, Samsung, other hardware makers, and browser vendors like Mozilla.

In practice, a ban on search engine default agreements means presenting users with a screen that prompts them to choose a default search engine from among various competitors. Choice screens aren’t a perfect solution, because people tend to stick with what they know. Still, research shows that choice screens can have a positive impact on competition if they are implemented thoughtfully. The court, and the technical committee appointed to oversee Google’s compliance, should apply the lessons of this research.

It makes sense that the first step of a remedy for illegal conduct should be stopping that illegal conduct. But that’s not enough on its own. Many users choose Google Search, and will continue to choose it, because it works well enough and is familiar. Also, as the evidence in this case demonstrated, the walls that Google has built around its search monopoly have kept potential rivals from gaining enough scale to deliver the best results for uncommon search queries. So we’ll need more tools to fix the competition problem.

Safe Sharing: Syndication and Search Index

The enforcers’ proposal also includes some measures that are meant to enable competitors to overcome the scale advantages that Google illegally obtained. One is requiring Google to let competitors use “syndicated” Google search results for 10 years, with no conditions or use restrictions other than “that Google may take reasonable steps to protect its brand, its reputation, and security.” Google would also have to share the results of “synthetic queries”—search terms generated by competitors to test Google’s results—and the “ranking signals” that underlie those queries. Many search engines, including DuckDuckGo, use syndicated search results from Microsoft’s Bing, and a few, like Startpage, receive syndicated results from Google. But Google currently limits re-ranking and mixing of those results—techniques that could allow competitors to offer real alternatives. Syndication is a powerful mechanism for giving rivals the benefits of Google’s scale, and a chance to eventually build that scale on their own.
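To make re-ranking concrete, here is a toy Python sketch of how a competitor might blend an upstream engine's ordering with a signal of its own. Everything here (the `SyndicatedResult` type, the preference table, the `weight` parameter) is a hypothetical for illustration, not any real syndication API.

```python
from dataclasses import dataclass

@dataclass
class SyndicatedResult:
    url: str
    rank: int          # position assigned by the upstream engine (1 = best)
    snippet: str

def rerank(results, own_score, weight=0.5):
    """Blend the upstream ranking with a competitor's own signal.

    own_score: maps a URL to a score in [0, 1] (e.g. freshness or
    privacy-friendliness). weight controls how far the final order
    may diverge from the syndicated order.
    """
    def blended(r):
        upstream = 1.0 / r.rank          # higher is better
        return (1 - weight) * upstream + weight * own_score(r.url)
    return sorted(results, key=blended, reverse=True)

results = [
    SyndicatedResult("https://a.example", 1, "..."),
    SyndicatedResult("https://b.example", 2, "..."),
    SyndicatedResult("https://c.example", 3, "..."),
]
# A toy signal that strongly prefers c.example:
prefs = {"https://a.example": 0.1, "https://b.example": 0.2, "https://c.example": 1.0}
reordered = rerank(results, lambda u: prefs[u], weight=0.8)
```

With a high enough weight, the competitor's own signal can lift a result the upstream engine ranked last to the top, which is exactly the kind of differentiation that restrictions on re-ranking and mixing currently prevent.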

Importantly, syndication doesn’t reveal Google users’ queries or other personal information, so it is a privacy-conscious tool.

Similarly, the proposal orders Google to make its index—the snapshot of the web that forms the basis for its search results—available to competitors. This too is reasonably privacy-conscious, because it presumably includes only data from web pages that were already visible to the public.

Scary Sharing: Users’ “Click and Query” Data

Another data-sharing proposal is more complicated from a privacy perspective: requiring Google to provide qualified competitors with “user-side data,” including users’ search queries and data sets used to train Google's ranking algorithms. Those queries and data sets can include intensely personal details, including medical issues, political opinions and activities, and personal conflicts. Google is supposed to apply “security and privacy safeguards,” but it's not clear how this will be accomplished. An order that requires Google to share even part of this data with competitors raises the risk of data breaches, improper law enforcement access, commercial data mining and aggregation, and other serious privacy harms.

Some in the search industry, including privacy-conscious companies like DuckDuckGo, argue that filtering this “click and query” data to remove personally identifying information can adequately protect users’ privacy while still helping Google’s competitors generate more useful search results. For example, Google could share only queries that were used by some number of unique users. This is the approach Google already takes to sharing user data under the European Union’s Digital Markets Act, though Google sets a high threshold that eliminates about 97% of the data. Other possible rules include excluding strings of numbers that could be Social Security or other identification numbers, along with other patterns that may reveal sensitive information.
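To illustrate the kind of filtering described above, here is a minimal Python sketch: keep only queries seen from enough distinct users, and drop anything matching a sensitive-looking pattern. The threshold, the patterns, and the function names are illustrative assumptions, not Google's or DuckDuckGo's actual rules.

```python
import re
from collections import defaultdict

# Patterns that may indicate sensitive identifiers (illustrative, not
# an exhaustive or official list): SSN-like strings, long digit runs.
SENSITIVE = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), re.compile(r"\b\d{9,}\b")]

def filter_click_and_query(log, min_unique_users=30):
    """Keep only queries issued by at least `min_unique_users` distinct
    users, and drop any query matching a sensitive pattern.

    log: iterable of (user_id, query) pairs.
    Returns the set of queries considered safe to share.
    """
    users_per_query = defaultdict(set)
    for user_id, query in log:
        users_per_query[query].add(user_id)
    return {
        q for q, users in users_per_query.items()
        if len(users) >= min_unique_users
        and not any(p.search(q) for p in SENSITIVE)
    }

log = [(u, "weather tomorrow") for u in range(50)]         # common query
log += [(1, "my ssn 123-45-6789")]                         # sensitive
log += [(u, "rare medical condition x") for u in range(5)] # too few users
shared = filter_click_and_query(log, min_unique_users=30)  # only the common query survives
```

The sketch also shows the tension in the text: raise the threshold and competitors lose most of the data's value; lower it and rare, revealing queries start slipping through.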

But click and query data sharing still sets up a direct conflict between competition and privacy. Google, naturally, wants to share as little data as possible, while competitors will want more. It’s not clear to us that there’s an optimal point that both protects users’ privacy well and also meaningfully promotes competition. More research might reveal a better answer, but until then, this is a dangerous path, where pursuing the benefits of competition for users might become a race to the bottom for users’ privacy.

The Sledgehammer: Splitting off Chrome and Maybe Android

The most dramatic part of the enforcers’ proposal calls for an order to split off the Chrome browser as a separate company, and potentially also the Android operating system. This could be a powerful way to open up search competition. An independent Chrome and Android could provide many opportunities for users to choose alternative search engines, and potentially to integrate with AI-based information location tools and other new search competitors. A breakup would complement the ban on agreements for search engine exclusivity by applying the same ban to Chrome and Android as to iOS and other platforms.

The complication here is that a newly independent Chrome or Android might have an incentive to exploit users’ privacy in other ways. Given a period of exclusivity in which Google could not offer a competing browser or mobile operating system, Chrome and Android could adopt a business model of monetizing users’ personal data to an even greater extent than Google. To prevent this, a divestiture (breakup) order would also have to include privacy safeguards, to keep the millions of Chrome and Android users from facing an even worse privacy landscape than they do now.

The DOJ and states are pursuing a strong, comprehensive remedy for Google’s monopoly abuses in search, and we hope they will see that effort through to a remedies hearing and the inevitable appeals. We’re also happy to see that the antitrust enforcers are seeking to preserve users’ privacy. To achieve that goal, and keep internet users’ consumer welfare squarely in sight, they should proceed with caution on any user data sharing, and on breakups.

Mitch Stoltz

State AGs Must Act: EFF Expands Call to Investigate Crisis Pregnancy Centers

1 week ago

Back in January, EFF called on attorneys general in Florida, Texas, Arkansas, and Missouri to investigate potential privacy violations and hold accountable crisis pregnancy centers (CPCs) that engage in deceptive practices. Since then, some of these centers have begun to change their websites, quietly removing misleading language and privacy claims; the Hawaii legislature is considering a bill calling on the attorney general to investigate CPCs in the state; and legislators in Georgia have introduced a slate of bills to tackle deceptive CPC practices.

But there is much more to do. Today, we’re expanding our call to attorneys general in Tennessee, Oklahoma, Nebraska, and North Carolina, urging them to investigate the centers in their states.

Many CPCs have been operating under a veil of misleading promises for years, suggesting that clients’ personal health data is protected under HIPAA. Numerous reports suggest otherwise: privacy policies are not followed consistently, and clients’ personal data may be shared across networks without appropriate consent. For example, in a case in Louisiana, we saw firsthand how a CPC inadvertently exposed personal data from multiple clients in a software training video. This kind of error not only violates individuals’ privacy but could also lead to emotional and psychological harm for individuals who trusted these centers with their sensitive information.

We list multiple examples from CPCs in each of the states that claim to comply with HIPAA in our letters to Attorneys General Hilgers, Jackson, Drummond, and Skrmetti. Those include:

  • Gateway Women’s Care in North Carolina claims that “we hold your right to confidentiality with the utmost care and respect and comply with HIPAA privacy standards, which protect your personal and health information” in a blog post titled “Is My Visit Confidential?” Gateway Women’s Care received $56,514 in government grants in 2023. 
  • Assure Women’s Center in Nebraska stresses that it is “HIPAA compliant!” in a blog post that expressly urges people to visit them “before your doctor.”

As we’ve noted before, there are far too few protections for user privacy—including medical privacy—and individuals have little control over how their personal data is collected, stored, and used. Until Congress passes a comprehensive privacy law that includes a private right of action, state attorneys general must take proactive steps to protect their constituents from unfair or deceptive privacy practices.

It’s time for state and federal leaders to reassess how public funds are allocated to these centers. Our elected officials are responsible for ensuring that personal information, especially our sensitive medical data, is protected. After all, no one should have to choose between their healthcare and their privacy.

Corynne McSherry

EFF’s Reflections from RightsCon 2025 

1 week 1 day ago

EFF was delighted to once again attend RightsCon—this year hosted in Taipei, Taiwan, from 24-27 February. As with previous years, RightsCon provided an invaluable opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions.

For some attending from EFF, this was their first RightsCon. For others, their 10th or 11th. But for all, one message came through loud and clear: the need to collectivize digital rights in the face of authoritarian governments and leaders occupying positions of power around the globe, and of Big Tech’s creation and provision of consumer technologies for use in rights-abusing ways.

EFF hosted a multitude of sessions, and appeared on many more panels—from a global perspective on platform accountability frameworks, to the perverse gears supporting transnational repression, and exploring tech tools for queer liberation online. Here we share some of our highlights.

Major Concerns Around Funding Cuts to Civil Society 

Two major shifts affecting the digital rights space underlined the renewed need for solidarity and collective responses. First, the Trump administration’s summary (and largely illegal) funding cuts for the global digital rights movement from USAID, the State Department, the National Endowment for Democracy and other programs, are impacting many digital rights organizations across the globe and deeply harming the field. By some estimates, U.S. government cuts, along with major changes in the Netherlands and elsewhere, will result in a 30% reduction in the size of the global digital rights community, especially in global majority countries. 

Second, the Trump administration’s announcement to respond to the regulation of U.S. tech companies with tariffs has thrown another wrench into the work of many of us working towards improved tech accountability. 

We know that attacks on civil society, especially on funding, are a go-to strategy for authoritarian rulers, so this is deeply troubling. Even in more democratic settings, this reinforces the shrinking of civic space hindering our collective ability to organize and fight for better futures. Given the size of the cuts, it’s clear that other funders will struggle to counterbalance the dwindling U.S. public funding, but they must try. We urge other countries and regions, as well as individuals and a broader range of philanthropy, to step up to ensure that the crucial work defending human rights online will be able to continue. 

Community Solidarity with Alaa Abd El-Fattah and Laila Soueif

The call to free Alaa Abd El-Fattah from illegal detention in Egypt was a prominent message heard throughout RightsCon. During the opening ceremony, Access Now’s new Executive Director, Alejandro Mayoral, talked about Alaa’s keynote speech at the very first RightsCon and stated: “We stand in solidarity with him and all civil society actors, activists, and journalists whose governments are silencing them.” The opening ceremony also included a video address from Alaa’s mother, Laila Soueif, in which she urged viewers to “not let our defeat be permanent.” Sadly, immediately after that address Ms. Soueif was admitted to the hospital as a result of her longstanding hunger strike in support of her son. 

The calls to #FreeAlaa and save Laila were again reaffirmed during the closing ceremony in a keynote by Sara Alsherif, Migrant Digital Justice Programme Manager at UK-based digital rights group Open Rights Group and close friend of Alaa. Referencing Alaa’s early work as a digital activist, Alsherif said: “He understood that the fight for digital rights is at the core of the struggle for human rights and democracy.” She closed by reminding the hundreds-strong audience that “Alaa could be any one of us … Please do for him what you would want us to do for you if you were in his position.”

EFF and Open Rights Group also hosted a session talking about Alaa, his work as a blogger, coder, and activist for more than two decades. The session included a reading from Alaa’s book and a discussion with participants on strategies.

Platform Accountability in Crisis

Online platforms like Facebook and services like Google are crucial spaces for civic discourse and access to information. Many sessions at RightsCon were dedicated to the growing concern that these platforms have also become powerful tools for political manipulation, censorship, and control. With the return of the Trump administration, Facebook’s shift in hate speech policies, and the growing geo-politicization of digital governance, many now consider platform accountability to be in crisis.

A dedicated “Day 0” event, co-organized by Access Now and EFF, set the stage for these discussions with a high-level panel reflecting on alarming developments in platform content policies and enforcement. Reflecting on Access Now’s “rule of law checklist,” speakers stressed how a small group of powerful individuals increasingly dictate how platforms operate, raising concerns about democratic resilience and accountability. They also highlighted the need for deeper collaboration with global majority countries on digital governance, taking into account diverse regional challenges. Beyond regulation, the conversation discussed the potential of user-empowered alternatives, such as decentralized services, to counter platform dominance and offer more sustainable governance models.

A key point of attention was the EU’s Digital Services Act (DSA), a rulebook with the potential to shape global responses to platform accountability but one that also leaves many crucial questions open. The conversation naturally transitioned to the workshop organized by the DSA Human Rights Alliance, which focused more specifically on the global implications of DSA enforcement and how principles for a “Human Rights-Centered Application of the DSA” could foster public interest and collaboration.

Fighting Internet Shutdowns and Anti-Censorship Tools

Many sessions discussed how internet shutdowns and other forms of internet blocking impact the daily lives of people under extremely oppressive regimes. The overwhelming conclusion was that we need encryption to remain strong in countries with better conditions of democracy in order to continue to bridge access to services in places where democracy is weak. Breaking encryption or blocking important tools for “national security,” elections, exams, protests, or for law enforcement only endangers freedom of information for those with less political power. In turn, these actions empower governments to take possibly inhumane actions while the “lights are out” and people can’t tell the rest of the world what is happening to them.

Another pertinent point coming out of RightsCon was that anti-censorship tools work best when everyone is using them. Diversity of users not only helps to create bridges for others who can’t access the internet through normal means, but it also helps to create traffic that looks innocuous enough to bypass censorship blockers. Discussions highlighted how the more tools we have to connect people without unique traffic, the fewer chances there are for government censorship technology to keep that traffic from going through. We know some governments are not above completely shutting down internet access. But in cases where they still allow the internet, user diversity is key. It also helps to move away from narratives that imply “only criminals” use encryption. Encryption is for everyone, and everyone should use it, because tomorrow’s internet could be tested by future threats.

Palestine: Human Rights in Times of Conflict

At this year’s RightsCon, Palestinian non-profit organization 7amleh, in collaboration with the Palestinian Digital Rights Coalition and supported by dozens of international organizations including EFF, launched #ReconnectGaza, a global campaign to rebuild Gaza’s telecommunications network and safeguard the right to communication as a fundamental human right. The campaign comes on the back of more than 17 months of internet blackouts and destruction to Gaza’s telecommunications infrastructure by the Israeli authorities. Estimates indicate that 75% of Gaza’s telecommunications infrastructure has been damaged, with 50% completely destroyed. This loss of connectivity has crippled essential services—preventing healthcare coordination, disrupting education, and isolating Palestinians from the digital economy.

On another panel, EFF raised concerns to Microsoft representatives about an AP report that emerged just prior to RightsCon about the company providing services to the Israeli Defense Forces that are being used as part of the repression of Palestinians in Gaza as well as in the bombings in Lebanon. We noted that Microsoft’s pledges to support human rights seemed to be in conflict with this, something EFF has already raised about Google and Amazon and their work on Project Nimbus. Microsoft promised to look into that allegation, as well as one about its provision of services to Saudi Arabia.

In the RightsCon opening ceremony, Alejandro Mayoral noted that: “Today, the world’s eyes are on Gaza, where genocide has taken place, AI is being weaponized, and people’s voices are silenced as the first phase of the fragile Palestinian-Israeli ceasefire is realized.” He followed up by saying, “We are surrounded by conflict. Palestine, Sudan, Myanmar, Ukraine, and beyond…where the internet and technology are being used and abused at the cost of human lives.” Following this keynote, Access Now’s MENA Policy and Advocacy Director, Marwa Fatafta, hosted a roundtable to discuss technology in times of conflict, where takeaways included the reminder that “there is no greater microcosm of the world’s digital rights violations happening in our world today than in Gaza. It’s a laboratory where the most invasive and deadly technologies are being tested and deployed on a besieged population.”

Countering Cross-Border Arbitrary Surveillance and Transnational Repression

Concerns about ongoing legal instruments that can be misused to expand transnational repression were also front-and-center at RightsCon. During a Citizen Lab-hosted session we participated in, participants examined how cross-border policing can become a tool to criminalize marginalized groups, the economic incentives driving these criminalization trends, and the urgent need for robust, concrete, and enforceable international human rights safeguards. They also noted that the newly approved UN Cybercrime Convention, with only minimal protections, adds yet another mechanism for broadening cross-border surveillance powers, thereby compounding the proliferation of legal frameworks that lack adequate guardrails against misuse.

Age-Gating the Internet

EFF co-hosted a roundtable session to workshop a human rights statement addressing government mandates to restrict young people’s access to online services and specific legal online speech. Participants in the roundtable represented five continents and included representatives from civil society and academia, some of whom focused on digital rights and some on children’s rights. Many of the participants will continue to refine the statement in the coming months.

Hard Conversations

EFF participated in a cybersecurity conversation with representatives of the UK government, where we raised serious concerns about the government’s hostility to strong encryption, and about the insecurity it has created for both UK citizens and the people who communicate with them by pressuring Apple to guarantee UK law enforcement access to all communications.

Equity and Inclusion in Platform Discussions, Policies, and Trust & Safety

The platform economy is an evergreen RightsCon topic, and this year was no different, with conversations ranging from the impact of content moderation on free expression to transparency in monetization policies, and much in between. Given the recent developments at Meta, X, and elsewhere, many participants were rightfully eager to engage.

EFF co-organized an informal meetup of global content moderation experts with whom we regularly convene, and participated in a number of sessions, such as on the decline of user agency on platforms in the face of growing centralized services, as well as ways to expand choice through decentralized services and platforms. One notable session on this topic was hosted by the Center for Democracy and Technology on addressing global inequities in content moderation, in which speakers presented findings from their research on the moderation by various platforms of content in Maghrebi Arabic and Kiswahili, as well as a forthcoming paper on Quechua.

Reflections and Next Steps

RightsCon is a conference that reminds us of the size and scope of the digital rights movement around the world. Holding it in Taiwan, in the wake of huge funding cuts for so many, created an urgency that was palpable across the spectrum of sessions and events. We know that we’ve built a robust community that can weather the storms. In the face of overwhelming pressure from government and corporate actors, it’s essential that we resist the temptation to isolate, and instead continue to push forward with collectivization and collaboration, speaking truth to power from the U.S. to Germany and across the globe.

Paige Collings

California’s A.B. 412: A Bill That Could Crush Startups and Cement A Big Tech AI Monopoly

1 week 3 days ago

California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush small AI startups and developers while giving big tech firms even more power.

A Burden That Small Developers Can’t Bear

The AI landscape is in danger of being dominated by large companies with deep pockets. These big names are in the news almost daily. But they’re far from the only ones—there are dozens of AI companies with fewer than 10 employees trying to build something new in a particular niche.

This bill demands that creators of any AI model—even a two-person company or a hobbyist tinkering with a small software build—identify copyrighted materials used in training. That requirement will be incredibly onerous, even if limited just to works registered with the U.S. Copyright Office. The registration system is a cumbersome beast at best—neither machine-readable nor accessible, it’s more like a card catalog than a database—and it doesn’t offer information sufficient to identify all authors of a work, much less help developers to reliably match works in a training set to works in the system.
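To see why matching training data against registration records is unreliable, consider this toy Python sketch of a naive title lookup against a hypothetical registry index. The normalization rules, the registry structure, and the registration number are all invented for illustration; real registration records are not exposed as a queryable database at all, which is the underlying problem.

```python
import re

def normalize(title):
    """Naive normalization: lowercase, strip punctuation and articles."""
    t = re.sub(r"[^a-z0-9 ]", "", title.lower())
    return " ".join(w for w in t.split() if w not in {"a", "an", "the"})

# A toy registry index keyed by normalized title. (The registration
# number is a made-up placeholder.)
registry = {normalize("The Old Man and the Sea"): "TX0000000001"}

def lookup(work_title):
    return registry.get(normalize(work_title))

# An exact-title match works...
exact = lookup("the old man and the sea")
# ...but any variation in the scraped metadata misses entirely, and
# since titles are not unique, even a hit may point to the wrong work.
variant = lookup("Old Man & the Sea")
```

Here `exact` finds the record while `variant` returns nothing, even though both refer to the same work; scraped web content rarely carries clean, canonical titles, so false misses and false hits of this kind would be routine.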

Even for major tech companies, meeting these new obligations would be a daunting task. For a small startup, such an impossible requirement could be a death sentence. If A.B. 412 becomes law, these smaller players will be forced to devote scarce resources to an unworkable compliance regime instead of focusing on development and innovation. The risk of lawsuits—potentially from copyright trolls—would discourage new startups from even attempting to enter the field.

A.I. Training Is Like Reading And It’s Very Likely Fair Use 

A.B. 412 starts from a premise that’s both untrue and harmful to the public interest: that reading, scraping or searching of open web content shouldn’t be allowed without payment. In reality, courts should, and we believe will, find that the great majority of this activity is fair use. 

It’s now a bedrock principle of internet law that some forms of copying content online are transformative, and thus legal fair use. That includes reproducing thumbnail images for image search, or snippets of text to search books.

The U.S. copyright system is meant to balance innovation with creator rights, and courts are still working through how copyright applies to AI training. In most of the AI cases, courts have yet to consider—let alone decide—how fair use applies. A.B. 412 jumps the gun, preempting this process and imposing a vague, overly broad standard that will do more harm than good.

Importantly, those key court cases are all federal. The U.S. Constitution makes it clear that copyright is governed by federal law, and A.B. 412 improperly attempts to impose state-level copyright regulations on an issue still in flux. 

A.B. 412 Is A Gift to Big Tech

The irony of A.B. 412 is that it won’t stop AI development—it will simply consolidate it in the hands of the largest corporations. Big tech firms already have the resources to navigate complex legal and regulatory environments, and they can afford to comply (or at least appear to comply) with A.B. 412’s burdensome requirements. Small developers, on the other hand, will either be forced out of the market or driven into partnerships where they lose their independence. The result will be less competition, fewer innovations, and a tech landscape even more dominated by a handful of massive companies.

If lawmakers are able to iron out some of the practical problems with A.B. 412 and pass some version of it, they may be able to force programmers to research—and, effectively, pay off—copyright owners before they even write a line of code. If that’s the outcome in California, Big Tech will not despair. They’ll celebrate. Only a few companies own large content libraries or can afford to license enough material to build a deep learning model. The possibilities for startups and small programmers will be so meager, and competition will be so limited, that profits for big incumbent companies will be locked in for a generation.

If you are a California resident and want to speak out about A.B. 412, you can find and contact your legislators through this website.

Joe Mullin

EFF Joins 7amleh Campaign to #ReconnectGaza

1 week 3 days ago

In times of conflict, the internet becomes more than just a tool—it is a lifeline, connecting those caught in chaos with the outside world. It carries voices that might otherwise be silenced, bearing witness to suffering and survival. Without internet access, communities become isolated, and the flow of critical information is disrupted, making an already dire situation even worse.

At this year’s RightsCon conference, hosted in Taiwan, Palestinian non-profit organization 7amleh, in collaboration with the Palestinian Digital Rights Coalition and supported by dozens of international organizations including EFF, launched #ReconnectGaza, a global campaign to rebuild Gaza’s telecommunications network and safeguard the right to communication as a fundamental human right.

The campaign comes on the back of more than 17 months of internet blackouts and destruction of Gaza's telecommunications infrastructure by the Israeli authorities. Estimates indicate that 75% of Gaza's telecommunications infrastructure has been damaged, with 50% completely destroyed. This loss of connectivity has crippled essential services—preventing healthcare coordination, disrupting education, and isolating Palestinians from the digital economy. In response, there is an urgent and immediate need to deploy emergency solutions, such as eSIM cards, satellite internet access, and mobile communications hubs.

At the same time, there is an opportunity to rebuild towards a just and permanent solution with modern technologies that would enable reliable, high-speed connectivity that supports education, healthcare, and economic growth. The campaign calls for this as a paramount component to reconnecting Gaza, whilst also ensuring the safety and protection of telecommunications workers on the ground, who risk their lives to repair and maintain critical infrastructure. 

Further, beyond responding to these immediate needs, 7amleh and the #ReconnectGaza campaign demand the establishment of an independent Palestinian ICT sector, free from external control, as a cornerstone of Gaza's reconstruction and Palestine's digital sovereignty. Palestinians have been subject to Israeli internet controls since the Oslo Accords, which settled that Palestine should have its own telephone, radio, and TV networks, but handed over the details to a joint technical committee. Ending the deliberate isolation of the Palestinian people is critical to protecting fundamental human rights.

This is not the first time internet shutdowns have been weaponized as a tool for oppression. In 2012, Palestinians in Gaza were subject to frequent power outages and were forced to rely on generators and insecure dial-up connections for connectivity. More recently since October 7, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications has the ability to cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world.

Access to internet infrastructure is essential—it enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. It is imperative that people’s access to the internet remains protected.

The restoration of telecommunications in Gaza is an urgent humanitarian need. Global stakeholders, including UN agencies, governments, and telecommunications companies, must act swiftly to ensure the restoration and modernization of Gaza's telecommunications.

Jillian C. York

The Foilies 2025

1 week 4 days ago
Recognize the Worst in Government Transparency 

Co-written by MuckRock's Michael Morisy, Dillon Bergin, and Kelly Kauffman

The public's right to access government information is constantly under siege across the United States, from both sides of the political aisle. In Maryland, where Democrats hold majorities, the attorney general and state legislature are pushing a bill to allow agencies to reject public records requests that they consider "harassing." At the same time, President Donald Trump's administration has moved its most aggressive government reform effort, the Department of Government Efficiency (DOGE), outside the reach of the Freedom of Information Act (FOIA), while also beginning the mass removal of public data sets.

One of the most powerful tools to fight back against bad governance is public ridicule. That's where we come in: Every year during Sunshine Week (March 16-22), the Electronic Frontier Foundation, MuckRock, and AAN Publishers team up to publish The Foilies. This annual report—now a decade old—names and shames the most repugnant, absurd, and incompetent responses to public records requests under FOIA and state transparency laws.

Sometimes the good guys win. For example, last year we highlighted the Los Angeles Police Department for using the courts to retaliate against advocates and a journalist who had rightfully received and published official photographs of police officers. The happy ending (at least for transparency): LAPD has since lost the case, and the city paid the advocates $300,000 to cover their legal bills.

Here are this year's "winners." While they may not all pay up, at least we can make sure they get the negative publicity they're owed. 

The Exorbitant FOIA Fee of the Year: Rapides Parish School District

After a church distributed a religious tract at Lessie Moore Elementary School in Pineville, La., young students quickly dubbed its frank discussion of mature themes "the sex book." Hirsh M. Joshi, a lawyer with the Freedom From Religion Foundation representing a parent, filed a request with the Rapides Parish School District to try to get some basic information: How much did the school coordinate with the church distributing the material? Did other parents complain? What was the internal reaction? Joshi was stunned when the school district responded with an initial estimate of $2 million to cover the cost of processing the request. After local media picked up the story and a bit of negotiating, the school ultimately waived the charges and responded with a mere nine pages of responsive material.

While Rapides Parish’s sky-high estimate ultimately took home the gold this year, there was fierce competition. The Massachusetts State Police wanted $176,431 just to review—and potentially not even release—materials about recruits who leave the state’s training program early. Back in Louisiana, the Jefferson Parish District Attorney’s office insisted on charging a grieving father more than $5,000 for records on the suspicious death of his own son.

The Now You See It, Now You Don’t Award: University of Wisconsin-Madison

Sports reporter Daniel Libit’s public records request is at the heart of a lawsuit that looks a lot like the Spider-Man pointing meme. In 2023, Libit filed the request for a contract between the University of Wisconsin and Altius Sports Partners, a firm that consults college athletic programs on payment strategies for college athletes ("Name, Image, Likeness" or NIL deals), after reading a university press release about the partnership. The university denied the request, claiming that Altius was actually contracted by the University of Wisconsin Foundation, a separate 501(c)(3). So, Libit asked the foundation for the contract. The foundation then denied the request, claiming it was exempt from Wisconsin’s open records laws. After the denial, Libit filed a lawsuit for the records, which was then dismissed, because the university and foundation argued that Libit had incorrectly asked for a contract between the university and Altius, as opposed to the foundation and Altius.

The foundation did produce a copy of the contract in the lawsuit, but the game of hiding the ball makes one thing clear, as Libit wrote after: “If it requires this kind of effort to get a relatively prosaic NIL consultant contract, imagine the lengths schools are willing to go to keep the really interesting stuff hidden.”

The Fudged Up Beyond All Recognition Award: Central Intelligence Agency 

A CIA official's grandma's fudge recipe was too secret for public consumption.

There are state secrets, and there are family secrets, and sometimes they mix … like a creamy, gooey confectionary.

After Mike Pompeo finished his first year as Trump's CIA director in 2017, investigative reporter Jason Leopold sent a FOIA request asking for all of the memos Pompeo sent to staff. Seven years later, the agency finally produced the records, including a "Merry Christmas and Happy New Year" message recounting the annual holiday reception and gingerbread competition, which was won by a Game of Thrones-themed entry. ("And good use of ice cream cones!" Pompeo wrote.) At the party, Pompeo handed out cards with his mom's "secret" recipe for fudge, and for those who couldn't make it, he also sent it out as an email attachment.

But the CIA redacted the whole thing, vaguely claiming it was protected from disclosure under federal law. This isn't the first time the federal government has protected Pompeo's culinary secrets: In 2021, the State Department redacted Pompeo's pizza toppings and favorite sandwich from emails.

The You Can't Handle the Truth Award: Virginia Gov. Glenn Youngkin

In Virginia, state officials have come under fire in the past few years for shielding records from the public under the broad use of a “working papers and correspondence” FOIA exemption. When a public records request came in for internal communications on the state’s Military Survivors and Dependents Education Program, which provides tuition-free college to spouses and children of military veterans killed or disabled as a result of their service, Gov. Glenn Youngkin’s office used this “working papers” exemption to reject the FOIA request.

The twist is that the request was made by Kayla Owen, a military spouse and a member of the governor’s own task force studying the program. Despite Owen’s attempts to correct the parameters of the request, Youngkin’s office made the final decision in July to withhold more than two folders' worth of communications with officials who have been involved with policy discussions about the program.

The Courts Cloaked in Secrecy Award (Tie): Solano County Superior Court, Calif., and Washoe County District Court, Nev.

Courts are usually the last place the public can go to vindicate their rights to government records when agencies flout them. When agencies lock down records, courts usually provide the key to open them up.

Except in Vallejo, Calif., where a state trial court judge decided to lock his own courtroom during a public records lawsuit—a move that even Franz Kafka would have dismissed as too surreal and ironic. The suit filed by the American Civil Liberties Union sought a report detailing a disturbing ritual in which officers bent their badges to celebrate their on-duty killings of local residents.

When public access advocates filed an emergency motion to protest the court closure, the court denied it without even letting them in to argue their case. This was not just a bad look; it violated the California and U.S. constitutions, which guarantee public access to court proceedings and a public hearing prior to barring the courtroom doors.

Not to be outdone, a Nevada trial court judge has twice barred a local group from filming hearings concerning a public records lawsuit. The request sought records of an alleged domestic violence incident at the Reno city manager’s house. Despite the Nevada Supreme Court rebuking the judge for prohibiting cameras in her courtroom, she later denied the same group from filming another hearing. The transparency group continues to fight for camera access, but its persistence should not be necessary: The court should have let them record from the get-go.      

The No Tech Support Award: National Security Agency

The NSA claimed it didn't have the obsolete tech to access a lecture by military computing pioneer Grace Hopper.

In 1982, Rear Adm. Grace Hopper (then a captain) presented a lecture to the National Security Agency entitled “Future Possibilities: Data, Hardware, Software, and People.” One can only imagine Hopper's disappointment if she had lived long enough to learn that in the future, the NSA would claim it was impossible for its people to access the recording of the talk.

Hopper is undoubtedly a major figure in the history of computing, and her records and lectures are of undeniable historical value. Michael Ravnitzky, a frequent FOIA requester and founder of Government Attic, requested this particular lecture back in 2021. Three years later, the NSA responded to tell him that it had no responsive documents.

Befuddled, Ravnitzky pointed out the lecture had been listed in the NSA’s own Television Center Catalogue. At that point, the agency copped to the actual issue. Yes, it had the record, but it was captured on AMPEX 1-inch open reel tapes, as was more common in the 1980s. Despite being a major intelligence agency with high-tech surveillance and communication capabilities, it claimed it could not find any way to access the recording.

Let’s unpack the multi-layered egregiousness of the NSA’s actions here. It took the agency three years to respond to this FOIA request. When it did, the NSA claimed that it had nothing responsive, which was a lie. But the most colossal failure was the agency's claim that, because of technical difficulties, it couldn’t find a way to make important moments from our history accessible to the public. 

But leave it to librarians to put spies to shame: The National Archives stepped in to help, and now you can watch the lecture in two parts.


Can't get enough of The Foilies? Check out our decade in review and our archives!

Dave Maass

“Guardrails” Won’t Protect Nashville Residents From AI-Enabled Camera Networks

1 week 5 days ago

Nashville’s Metropolitan Council is one vote away from passing an ordinance that’s being branded as “guardrails” against the privacy problems that come with giving the police a connected camera system like Axon’s Fusus. But Nashville locals are right to be skeptical of just how much protection from mass surveillance products they can expect.  

"I am against these guardrails," council member Ginny Welsch told the Tennessean recently. "I think they're kind of a farce. I don't think there can be any guardrail when we are giving up our privacy and putting in a surveillance system." 

Likewise, Electronic Frontier Alliance member Lucy Parsons Labs has inveighed against Fusus and the supposed guardrails as a fix to legislators’ and residents’ concerns in a letter to the Metropolitan Council. 

While the ordinance doesn’t name the company specifically, it was introduced in response to privacy concerns over the city’s possible contract for Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. In particular, local opponents are concerned about data-sharing—a critical part of Fusus—that could impede the city’s ability to uphold its values against the criminalization of some residents, like undocumented immigrants and people seeking reproductive or gender-affirming care.

This technology product, which was acquired by the police surveillance giant Axon in 2024, facilitates two major functions for police:

  • With the click of a button—or the tap of an icon on a map—officers can get access to live camera footage from public and private cameras, including the police’s Axon body-worn cameras, that have been integrated into the Fusus network.
  • Data feeds from a variety of surveillance tools—like body-worn cameras, drones, gunshot detection, and the connected camera network—can be aggregated into a system that makes those streams quickly accessible and susceptible to further analysis by features marketed as “artificial intelligence.”

From 2022 through 2023, Metropolitan Nashville Police Department (MNPD) had, unbeknownst to the public, already been using Fusus. When the contract came back under consideration, a public outcry and unanswered questions about the system led to its suspension, and the issue was deferred multiple times before the contract renewal was voted down late last year. Nashville council members determined that the Fusus system posed too great a threat to vulnerable groups that the council has sought to protect with city policies and resolutions, including pregnant residents, immigrants, and residents seeking gender-affirming care, among others. The state has criminalized some of the populations that the city of Nashville has passed ordinances to protect. 

Unfortunately, the fight against the sprawling surveillance of Fusus continues. The city council is now making its final consideration of the aforementioned ordinance, which some of its members say will protect city residents in the event that the mayor and other Fusus fans are able to get a contract signed after all.

These so-called guardrails include:

  • restricting the MNPD from accessing private cameras or installing public safety cameras in locations “where there is a reasonable expectation of privacy”; 
  • prohibiting using face recognition to identify individuals in the connected camera system’s footage; 
  • policies addressing authorized access to and use of the connected camera system, including how officers will be trained, and how they will be disciplined for any violations of the policy;
  • quarterly audits of access to the connected camera system; 
  • mandatory inclusion of a clause in procurement contracts allowing for immediate termination should violations of the ordinance be identified; 
  • mandatory reporting to the mayor and the council about any violations of the ordinance, the policies, or other abuse of access to the camera network within seven days of the discovery. 

Here’s the thing: even if these limited “guardrails” are in place, the only true protection from the improper use of the AI-enabled Fusus system is to not use it at all. 

We’ve seen that when law enforcement has access to cameras, they will use them, even if there are clear regulations prohibiting those uses: 

  • Black residents of a subsidized housing development became the primary surveillance targets for police officers with Fusus access in Toledo, Ohio. 

Firms such as Fusus and its parent company Axon are pushing AI-driven features and databases with interjurisdictional access. Surveillance technology is bending toward a future where all of our data are captured—our movements by street cameras (like those that would be added to Fusus), our driving patterns by ALPRs, our living habits by apps, and our actions online by web trackers—and then combined, sold, and shared.

When Nashville first started its relationship with Fusus in 2022, the company featured only a few products, primarily focused on standardizing video feeds from different camera providers. 

Now, Fusus is aggressively leaning into artificial intelligence, claiming that its “AI on the Edge” feature is built into the initial capture phase and processes video as soon as it is taken. Even if the city bans the use of face recognition in the connected camera system, Fusus boasts that its system can detect humans and objects and combine other characteristics to identify individuals, detect movements, and set notifications based on certain characteristics and behaviors. Marketing material claims that the system comes “pre-loaded with dozens of search and analysis variables and profiles that are ready for action,” including a "robust & growing AI library.” It’s unclear how these AI recognition options are generated, how they are vetted (if at all), or whether they can even be removed, as the ordinance would require.

A page from Fusus marketing materials, released through a public records request, featuring information on the artificial intelligence capabilities of its system

The proposed “guardrails” in Nashville are insufficient to address the danger posed by mass surveillance systems, and the city of Nashville shouldn’t assume that passing them protects its residents, tourists, and other visitors. Nashville residents and other advocacy groups have already raised concerns.

The only true way to protect Nashville’s residents against dragnet surveillance and overcriminalization is to block access to these invasive technologies altogether. Though this ordinance has passed its second reading, Nashville should not adopt Fusus or any other connected camera system, regardless of whether the ordinance is ultimately adopted. If Councilors care about protecting their constituents, they should hold the line against Fusus. 

Beryl Lipton

EFF to NSF: AI Action Plan Must Put People First

2 weeks ago

This past January, the new administration issued an executive order on Artificial Intelligence (AI), replacing the now-rescinded Biden-era order and calling for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and remove “engineered social agendas” from the industry. The plan is now being developed, and the National Science Foundation (NSF) has opened it to public comment.

EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no secret and untested algorithms should decide who keeps their job or who is denied safe haven in the United States. Second, generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye toward protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.

Government Automated Decision Making

US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies, but risks infringing the civil liberties of all people subject to these automated decisions.

These harms aren’t theoretical: we have already seen a move to adopt experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and use the results to make decisions about their continued employment.

Automating important decisions about people is reckless and dangerous. At best, these new AI tools are ineffective nonsense machines that require more labor to correct their inaccuracies; at worst, they produce irrational and discriminatory outcomes obscured by the black-box nature of the technology.

Instead, the adoption of such tools must be done with a robust public notice-and-comment practice as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil, and identifies when the use of such AI tools are inappropriate or harmful.

Additionally, the AI action plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models, and ensure they uphold a more fair and scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.

Don’t Enable Powerful Gatekeepers

Spurred by the general anxiety about Generative AI, lawmakers have drafted sweeping regulations based on speculation, and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we warned against back in 2023.

For example, bills like NO FAKES and NO AI Fraud expand copyright laws to favor corporate giants over everyone else’s expression. NO FAKES even includes a scheme for a DMCA-like notice takedown process, long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements like watermarking that are riddled with practical points of failure.

Among these dubious solutions is the growing prominence of AI licensing schemes which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to give their bully.

This is the wrong approach. Looking for easy solutions like expanding copyright hurts everyone, particularly smaller artists, researchers, and businesses who cannot compete with the big gatekeepers of industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies.

People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.

You can read our comments here.

Rory Mir

EFF Thanks Fastly for Donated Tools to Help Keep Our Website Secure

2 weeks ago

EFF’s most important platform for welcoming everyone to join us in our fight for a better digital future is our website, eff.org. We thank Fastly for their generous in-kind contribution of services helping keep EFF’s website online.

Eff.org was first registered in 1990, just three months after the organization was founded, and long before the web was an essential part of daily life. Our website and the fight for digital rights grew rapidly alongside each other. However, along with rising threats to our freedoms online, threats to our site have also grown.

It takes a village to keep eff.org online in 2025. Every day our staff work tirelessly to protect the site from everything from DDoS attacks to automated hacking attempts, and everything in between. As AI has taken off, so have crawlers and bots that scrape content to train LLMs, sometimes without respecting rate limits we’ve asked them to observe. Newly donated security add-ons from Fastly help us automate DDoS prevention and rate limiting, preventing our servers from getting overloaded when misbehaving visitors abuse our sites. Fastly also caches the content from our site around the globe, meaning that visitors from all over the world can access eff.org and our other sites quickly and easily.
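To illustrate the rate-limiting idea mentioned above, here is a minimal sketch of a token-bucket limiter, the classic algorithm behind many request throttles. This is purely illustrative and is not Fastly's actual implementation; the class name, rates, and capacities are all assumptions chosen for the example.

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter (hypothetical example, not Fastly's code).

    Allows roughly `rate` requests per second, with bursts up to `capacity`.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 15 back-to-back requests against a bucket of capacity 10:
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
print(sum(results))  # roughly the first 10 are allowed; the rest are throttled
```

A real CDN applies the same principle per client IP or per path at the edge, so abusive traffic is rejected before it ever reaches the origin server.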

EFF is member-supported by people who share our vision for a better digital future. We thank Fastly for showing their support for our mission to ensure that technology supports freedom, justice, and innovation for all people of the world with an in-kind gift of their full suite of services.

Allison Morris

EFFecting Change: Is There Hope for Social Media?

2 weeks ago

Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech. 

EFFecting Change Livestream Series:
Is There Hope for Social Media?
Thursday, March 20th
12:00 PM - 1:00 PM Pacific - Check Local Time
This event is LIVE and FREE!

Users are frustrated with legacy social media companies. Is it possible to effectively build the kinds of communities we want online while avoiding the pitfalls that have driven people away?

Join our panel featuring EFF Civil Liberties Director David Greene, EFF Director for International Freedom of Expression Jillian York, Mastodon's Felix Hlatky, Bluesky's Emily Liu, and Spill's Kenya Parham as they explore the future of free expression online and why social media might still be worth saving.

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Melissa Srago

EFF Joins AllOut’s Campaign Calling for Meta to Stop Hate Speech Against LGBTQ+ Community

2 weeks ago

In January, Meta made targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups. More specifically, Meta’s hateful conduct policy now contains the following text:

People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech. 

The revision of this policy, timed to Trump’s second election, demonstrates that the company is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging LGBTQ+ rights. For example, the revised policy removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics, such as sexual identity.

In response, LGBTQ+ rights organization AllOut gathered social justice groups and civil society organizations, including EFF, to demand that Meta immediately reverse the policy changes. By normalizing such speech, Meta risks increasing hate and discrimination against LGBTQ+ people on Facebook, Instagram and Threads. 

The campaign is supported by the following partners: All Out, Global Project Against Hate and Extremism (GPAHE), Electronic Frontier Foundation (EFF), EDRi - European Digital Rights, Bits of Freedom, SUPERRR Lab, Danes je nov dan, Corporación Caribe Afirmativo, Fundación Polari, Asociación Red Nacional de Consejeros, Consejeras y Consejeres de Paz LGBTIQ+, La Junta Marica, Asociación por las Infancias Transgénero, Coletivo LGBTQIAPN+ Somar, Coletivo Viveração, and ADT - Associação da Diversidade Tabuleirense, Casa Marielle Franco Brasil, Articulação Brasileira de Gays - ARTGAY, Centro de Defesa dos Direitos da Criança e do Adolescente Padre, Marcos Passerini-CDMP, Agência Ambiental Pick-upau, Núcleo Ypykuéra, Kurytiba Metropole, ITTC - Instituto Terra, Trabalho e Cidadania. 

Sign the AllOut petition (external link) and tell Meta: Stop hate speech against LGBT+ people!

If Meta truly values freedom of expression, we urge it to redirect its focus to empowering some of its most marginalized speakers, rather than empowering only their detractors and oppressive voices.

Paige Collings

In Memoriam: Mark Klein, AT&T Whistleblower Who Revealed NSA Mass Spying

2 weeks 1 day ago

EFF is deeply saddened to learn of the passing of Mark Klein, a bona fide hero who risked civil liability and criminal prosecution to help expose a massive spying program that violated the rights of millions of Americans.

Mark didn’t set out to change the world. For 22 years, he was a telecommunications technician for AT&T, most of that in San Francisco. But he always had a strong sense of right and wrong and a commitment to privacy.


When the New York Times reported in late 2005 that the NSA was engaging in spying inside the U.S., Mark realized that he had witnessed how it was happening. He also realized that the President was not telling Americans the truth about the program. And, though newly retired, he knew that he had to do something. He showed up at EFF’s front door in early 2006 with a simple question: “Do you folks care about privacy?” 

We did. And what Mark told us changed everything. Through his work, Mark had learned that the National Security Agency (NSA) had installed a secret, secure room at AT&T’s central office in San Francisco, called Room 641A. Mark was assigned to connect circuits carrying Internet data to optical “splitters” that sat just outside of the secret NSA room but were hardwired into it. Those splitters—as well as similar ones in cities around the U.S.—made a copy of all data going through those circuits and delivered it into the secret room.

A photo of the NSA-controlled 'secret room' in the AT&T facility in San Francisco (Credit: Mark Klein)

Mark not only saw how it works, he had the documents to prove it. He brought us over a hundred pages of authenticated AT&T schematic diagrams and tables. Mark also shared this information with major media outlets, numerous Congressional staffers, and at least two senators personally. One, Senator Chris Dodd, took the floor of the Senate to acknowledge Mark as the great American hero he was.

We used Mark’s evidence to bring two lawsuits against the NSA spying that he uncovered. The first was Hepting v. AT&T and the second was Jewel v. NSA. Mark also came with us to Washington D.C. to push for an end to the spying and demand accountability for it happening in secret for so many years.  He wrote an account of his experience called Wiring Up the Big Brother Machine . . . And Fighting It.

Archival EFF graphic promoting Mark Klein's DC tour

Mark stood up and told the truth at great personal risk to himself and his family. AT&T threatened to sue him, although it wisely decided not to do so. While we were able to use his evidence to make some change, both EFF and Mark were ultimately let down by Congress and the Courts, which have refused to take the steps necessary to end the mass spying even after Edward Snowden provided even more evidence of it in 2013. 

But Mark certainly inspired all of us at EFF, and he helped inspire and inform hundreds of thousands of ordinary Americans to demand an end to illegal mass surveillance. While we have not yet seen the end to the spying that we have all hoped for, his bravery has helped usher in numerous reforms.

And the fight is not over. Section 702, the law that now authorizes the surveillance Mark first revealed, expires in early 2026. EFF and others will continue to push for further reforms and, ultimately, for an end to the illegal spying entirely.

Mark’s legacy lives on in our continuing fights to reform surveillance and honor the Fourth Amendment’s promise of protecting personal privacy. We are forever grateful to him for having the courage to stand up and will do our best to honor that legacy by continuing the fight. 

Cindy Cohn
EFF's Deeplinks Blog: Noteworthy news from around the internet