Privacy Harm Is Harm

1 week 4 days ago

Every day, corporations track our movements through license plate scanners, building detailed profiles of where we go, when we go there, and who we visit. When they do this to us in violation of data privacy laws, we’ve suffered a real harm—period. We shouldn’t need to prove we’ve suffered additional damage, such as physical injury or monetary loss, to have our day in court.

That’s why EFF is proud to join an amicus brief in Mata v. Digital Recognition Network, a lawsuit by drivers against a corporation that allegedly violated a California statute regulating Automatic License Plate Readers (ALPRs). The state trial court erroneously dismissed the case by misinterpreting this data privacy law to require proof of extra harm beyond privacy harm. The brief was written by the ACLU of Northern California, Stanford’s Juelsgaard Clinic, and UC Law SF’s Center for Constitutional Democracy.

The amicus brief explains:

This case implicates critical questions about whether a California privacy law, enacted to protect people from harmful surveillance, is not just words on paper, but can be an effective tool for people to protect their rights and safety.

California’s Constitution and laws empower people to challenge harmful surveillance at its inception without waiting for its repercussions to manifest through additional harms. A foundation for these protections is article I, section 1, which grants Californians an inalienable right to privacy.

People in the state have long used this constitutional right to challenge the privacy-invading collection of information by private and governmental parties, not only harms that are financial, mental, or physical. Indeed, widely understood notions of privacy harm, as well as references to harm in the California Code, also demonstrate that term’s expansive meaning.

What’s At Stake

The defendant, Digital Recognition Network, also known as DRN Data, is a subsidiary of Motorola Solutions that provides access to a massive searchable database of ALPR data collected by private contractors. Its customers include law enforcement agencies and private companies, such as insurers, lenders, and repossession firms. DRN is the sister company to the infamous surveillance vendor Vigilant Solutions (now Motorola Solutions), and together they have provided data to ICE through a contract with Thomson Reuters.

The consequences of weak privacy protections are already playing out across the country. This year alone, authorities in multiple states have used license plate readers to hunt for people seeking reproductive healthcare. Police officers have used these systems to stalk romantic partners and monitor political activists. ICE has tapped into these networks to track down immigrants and their families for deportation.

Strong Privacy Laws

This case could determine whether privacy laws have real teeth or are just words on paper. If corporations can collect your personal information with impunity—knowing that unless you can prove bodily injury or economic loss, you can’t fight back—then privacy laws lose their value.

We need strong data privacy laws. We need a private right of action so when a company violates our data privacy rights, we can sue them. We need a broad definition of “harm,” so we can sue over our lost privacy rights, without having to prove collateral injury. EFF wages this battle when writing privacy laws, when interpreting those laws, and when asserting “standing” in federal and state courts.

The fight for privacy isn’t just about legal technicalities. It’s about preserving your right to move through the world without being constantly tracked, catalogued, and profiled by corporations looking to profit from your personal information.

You can read the amicus brief here.

Adam Schwartz

The UK Is Still Trying to Backdoor Encryption for Apple Users

1 week 5 days ago

The Financial Times reports that the U.K. is once again demanding that Apple create a backdoor into its encrypted backup services. The only change since the last demand is that the order is now allegedly limited to British users. That doesn’t make it any better.

The demand uses a power called a “Technical Capability Notice” (TCN) under the U.K.’s Investigatory Powers Act. At the time of its signing, we noted that this law would likely be used to demand that Apple spy on its users.

After the U.K. government first issued the TCN in January, Apple was forced to either create a backdoor or block its Advanced Data Protection feature—which turns on end-to-end encryption for iCloud—for all U.K. users. The company decided to remove the feature in the U.K. instead of creating the backdoor.

The initial order from January targeted the data of all Apple users. In August, the U.S. claimed the U.K. had withdrawn the demand, but Apple did not re-enable Advanced Data Protection. The new order provides insight into why: the U.K. was simply rewriting it to apply only to British users.

This is still an unsettling overreach that makes U.K. users less safe and less free. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud. It sets a dangerous precedent for demanding similar data from other companies, and it provides a runway for other authoritarian governments to issue comparable orders. The news of continued server-side access to users’ data comes just days after the U.K. government announced an intrusive mandatory digital ID scheme, framed as a measure against illegal migration.
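To make that risk concrete, here is a minimal sketch in Python, using the PyNaCl library, of the difference between a true end-to-end design and one with an escrowed “backdoor” key. This is an illustration only, not Apple’s actual Advanced Data Protection implementation; the key names and data are hypothetical.

    # Illustration only: end-to-end encryption vs. an escrowed "backdoor" key.
    # Requires the PyNaCl library (pip install pynacl).
    from nacl.public import PrivateKey, SealedBox

    # Each user holds their own key pair; only the private half can decrypt.
    user_key = PrivateKey.generate()

    # The device encrypts the backup to the user's public key before upload,
    # so the server stores only ciphertext it cannot read.
    backup = b"photos, notes, messages"
    ciphertext = SealedBox(user_key.public_key).encrypt(backup)
    assert SealedBox(user_key).decrypt(ciphertext) == backup

    # A backdoor means also encrypting a copy to a government-held escrow key.
    escrow_key = PrivateKey.generate()  # hypothetical escrow key
    escrow_copy = SealedBox(escrow_key.public_key).encrypt(backup)

    # Anyone who steals or compels that one key can now read every user's
    # data -- a single point of failure that end-to-end encryption avoids.
    assert SealedBox(escrow_key).decrypt(escrow_copy) == backup

The point of the sketch: once an escrow key exists, the confidentiality of every user’s backup rests on a single key that users never chose and cannot revoke.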

A tribunal hearing was initially set to take place in January 2026, though it’s currently unclear whether that will proceed or whether the new order changes the legal process. Apple must continue to refuse these types of backdoors. Breaking end-to-end encryption for one country breaks it for everyone. These repeated attempts to weaken encryption violate fundamental human rights and destroy our right to private spaces.

Thorin Klosowski

❌ How Meta Is Censoring Abortion | EFFector 37.13

1 week 5 days ago

It's spooky season—but while jump scares may get your heart racing, catching up on digital rights news shouldn't! Our EFFector newsletter has got you covered with easy, bite-sized updates to keep you up-to-date.

In this issue, we spotlight new ALPR-enhanced police drones and how local communities can push back; unpack the ongoing TikTok “ban,” which we’ve consistently said violates the First Amendment; and celebrate a privacy win—abandoning a phone doesn't mean you've also abandoned your privacy rights.

Prefer to listen in? Check out our audio companion, where we interview EFF Staff Attorney Lisa Femia, who explains the findings from our investigation into abortion censorship on social media. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.13 - ❌ HOW META IS CENSORING ABORTION

Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

EFF Is Standing Up for Federal Employees—Here’s How You Can Stand With Us

1 week 5 days ago

Federal employees play a key role in safeguarding the civil liberties of millions of Americans. Our rights to privacy and free expression can only survive when we stand together to push back against overreach and ensure that technology serves all people—not just the powerful. 

That’s why EFF jumped to action earlier this year, when the U.S. Office of Personnel Management (OPM) handed over sensitive employee data—Social Security numbers, benefits data, work histories, and more—to Elon Musk’s Department of Government Efficiency (DOGE). This was a blatant violation of the Privacy Act of 1974, and it put federal workers directly at risk. 

We didn’t let it stand. Alongside federal employee unions, EFF sued OPM and DOGE in February. In June, we secured a victory when a judge ruled we were entitled to a preliminary injunction and ordered OPM to provide an accounting of DOGE’s access to employee records. Your support makes this possible.

Now the fight continues—and your support matters more than ever. OPM is laying the groundwork to undermine and potentially eliminate the Combined Federal Campaign (CFC), the main program that federal employees and retirees have long used to support charities—including EFF. For now, you can still give to EFF through the CFC this year (use our ID: 10437), and we’d appreciate your support! But with the program’s future uncertain, direct support is the best way to keep our work going strong for years to come.

DONATE TODAY

SUPPORT EFF'S WORK DIRECTLY, BECOME A MEMBER!

When you donate directly, you join a movement of lawyers, activists, and technologists who defend privacy, call out censorship, and push back against abuses of power—everywhere from the courts to Congress to the streets. As a member, you’ll also receive insider updates, invitations to exclusive events, and conversation-starting EFF gear.

Plus, you can sustain our mission long-term with a monthly or annual donation! 

Stand with EFF. Protect privacy. Defend free expression. Support our work today. 

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Christian Romero

Platforms Have Failed Us on Abortion Content. Here's How They Can Fix It.

1 week 5 days ago

This is the eighth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

In our Stop Censoring Abortion series, we’ve documented the many ways that reproductive rights advocates have faced arbitrary censorship on Meta platforms. Since social media is the primary—and sometimes the only—way that providers, advocates, and communities can safely and effectively share timely and accurate information about abortion, it’s vitally important that platforms take steps to proactively protect this speech.

Yet, even though Meta says its moderation policies allow abortion-related speech, its enforcement of those policies tells a different story. Posts are being wrongfully flagged, accounts are disappearing without warning, and important information is being removed without clear justification.

So what explains the gap between Meta’s public commitments and its actions? And how can we push platforms to be better—to, dare we say, #StopCensoringAbortion?

After reviewing nearly one hundred submissions and speaking with Meta to clarify its moderation practices, here’s what we’ve learned.

Platforms’ Editorial Freedom to Moderate User Content

First, given the current landscape—with some states trying to criminalize speech about abortion—you may be wondering how much leeway platforms like Facebook and Instagram have to choose their own content moderation policies. In other words, can social media companies proactively commit to stop censoring abortion?

The answer is yes. Social media companies, including Meta, TikTok, and X, have the constitutionally protected First Amendment right to moderate user content however they see fit. They can take down posts, suspend accounts, or suppress content for virtually any reason.

The Supreme Court explicitly affirmed this right in 2024 in Moody v. NetChoice, holding that social media platforms, like newspapers, bookstores, and art galleries before them, have the First Amendment right to edit the user speech that they host and deliver to other users on their platforms. The Court also established that the government has a very limited role in dictating what social media platforms must (or must not) publish. This editorial discretion, whether granted to individuals, the traditional press, or online platforms, is meant to protect these institutions from government interference and to safeguard the diversity of the public sphere—so that important conversations and movements like this one have the space to flourish.

Meta’s Broken Promises

Unfortunately, Meta is failing to meet even these basic standards. Again and again, its policies say one thing while its actual enforcement says another.

Meta has stated its intent to allow conversations about abortion to take place on its platforms. In fact, as we’ve written previously in this series, Meta has publicly insisted that posts with educational content about abortion access should not be censored, even admitting in several public statements to moderation mistakes and over-enforcement. One spokesperson told the New York Times: “We want our platforms to be a place where people can access reliable information about health services, advertisers can promote health services and everyone can discuss and debate public policies in this space. . . . That’s why we allow posts and ads about, discussing and debating abortion.”

Meta’s platform policies largely reflect this intent. But as our campaign reveals, Meta’s enforcement of those policies is wildly inconsistent. Time and again, users—including advocacy organizations, healthcare providers, and individuals sharing personal stories—have had their content taken down even though it did not actually violate any of Meta’s stated guidelines. Worse, they are often left in the dark about what happened and how to fix it.

Arbitrary enforcement like this harms abortion activists and providers by cutting them off from their audiences, wasting the effort they spend creating resources and building community on these platforms, and silencing their vital reproductive rights advocacy. And it goes without saying that it hurts users, who need access to timely, accurate, and sometimes life-saving information. At a time when abortion rights are under attack, platforms with enormous resources—like Meta—have no excuse for silencing this important speech.  

Our Call to Platforms

Our case studies have highlighted that when users can’t rely on platforms to apply their own rules fairly, the result is a widespread chilling effect on online speech. That’s why we are calling on Meta to adopt the following urgent changes.

1. Publish clear and understandable policies.

Too often, platforms’ vague rules force users to guess what content might be flagged in order to avoid shadowbanning or worse, leading to needless self-censorship. To prevent this chilling effect, platforms should strive to offer users the greatest possible transparency and clarity on their policies. The policies should be clear enough that users know exactly what is allowed and what isn’t so that, for example, no one is left wondering how exactly a clip of women sharing their abortion experiences could be mislabeled as violent extremism.

2. Enforce rules consistently and fairly.

If content doesn’t violate a platform’s stated policies, it should not be removed. And, per Meta’s own policies, an account should not be suspended for abortion-related content violations if it has not received any prior warnings or “strikes.” Yet as we’ve seen throughout this campaign, abortion advocates repeatedly face takedowns of posts that fall entirely within Meta’s Community Standards—and sometimes even account suspensions. On such a massive scale, this selective enforcement erodes trust and chills entire communities from participating in critical conversations.

3. Provide meaningful transparency in enforcement actions.

When content is removed, Meta tends to give vague, boilerplate explanations—or none at all. Instead, users facing takedowns or suspensions deserve detailed and accurate explanations that state the policy violated, reflect the reasoning behind the actual enforcement decision, and explain how to appeal. Clear explanations are key to preventing wrongful censorship and ensuring that platforms remain accountable to their commitments and to their users.

4. Guarantee functional appeals.

Every user deserves a real chance to challenge improper enforcement decisions and have them reversed. But based on our survey responses, it seems Meta’s appeals process is broken. Many users reported that they do not receive responses to appeals, even when the content did not violate Meta’s policies, and thus have no meaningful way to challenge takedowns. Alarmingly, we found that a user’s best (and sometimes only) chance at success is to rely on a personal connection at Meta to right wrongs and restore content. This is unacceptable. Users should have a reliable and efficient appeal process that does not depend on insider access.   

5. Expand human review.

Finally, automated systems cannot always handle the nuance of sensitive issues like reproductive health and advocacy. They misinterpret words, miss important cultural or political context, and wrongly flag legitimate advocacy as “dangerous.” Therefore, we call upon platforms to expand the role that human moderators play in reviewing auto-flagged content violations—especially when posts involve sensitive healthcare information or political expression.

Users Deserve Better

Meta has already made the choice to allow speech about abortion on its platforms, and it has not hesitated to highlight that commitment whenever it has faced scrutiny. Now it’s time for Meta to put its money where its mouth is.

Users deserve better than a system where rules are applied at random, appeals go nowhere, and vital reproductive health information is needlessly (or negligently) silenced. If Meta truly values free speech, it must commit to moderating with fairness, transparency, and accountability.

This is the eighth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion   

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 

Molly Buckley

[Recommended Book] Kenta Yamada, “Like a Rolling Stone: Wavering Journalism and Creaking Freedom of Expression”—Sharp fixed-point commentary from the field, unswayed by surface trends. Review by Ken Fujimori (JCJ representative committee member)

1 week 5 days ago
Eighty years after the war: how has the state of public discourse in Japan evolved? “Roughly every twenty years, it can be summed up as a period of construction, a period of dynamism, a period of pincer attack (the media assailed by both the powerful and the public), and a period of self-censorship,” the author writes, almost casually. The birth of television, Vietnam War reporting, human rights abuses by the press, the unchallenged dominance of the Abe government... On reflection, one finds oneself nodding along. This book collects the author’s commentary columns that appeared in the Ryukyu Shimpo and the Tokyo Shimbun from 2020 onward; the earlier installments have already been published as two books. The commentary spun from this long, unflagging “fixed-point observation” of the field is…
JCJ

[Recommended Book] Mihoko Kobayashi and Kenichi Komatsuda, “The Kiryu City Incident: In a Town Where Public Assistance Was Distorted”—A record of the fight to defend the “fortress of life.” Review by Yasuhiko Shirai (freelance journalist)

1 week 5 days ago
Gather the exposed “darkness of the powerful” into a book and turn it into a historical record: this painstaking work achieves exactly that. The public assistance system is, without question, a “fortress of life.” Yet the misconception that “many people are lazily living off welfare” has spread through society. That misconception was reinforced in 2012 by the “welfare-bashing” coverage rolled out by commercial television and weekly magazines, which further swelled the ranks of municipal officials who believe that “people seeking public assistance should be treated harshly and discouraged from using it as much as possible.” Where that way of thinking “ultimately led”…
JCJ