Weak Human Rights Protections: Why You Should Hate the Proposed UN Cybercrime Treaty

1 month ago

The proposed UN Cybercrime Convention dangerously undermines human rights, opening the door to unchecked cross-border surveillance and government overreach. Despite two and a half years of negotiations, the draft treaty authorizes extensive surveillance powers without robust safeguards, omitting essential data protection principles.

This risks turning international efforts to fight cybercrime into tools for human rights abuses and transnational repression.

Safeguards like prior judicial authorization call for a judge's approval of surveillance before it happens, ensuring the measure is legitimate, necessary and proportionate. Notifying individuals when their data is being accessed gives them an opportunity to challenge requests that they believe are disproportionate or unjustified.

Additionally, requiring states to publish statistical transparency reports can provide a clear overview of surveillance activities. These safeguards are not just legal formalities; they are vital for upholding the integrity and legitimacy of law enforcement activities in a democratic society.

Unfortunately, the draft treaty is severely lacking in these protections. An article in the current draft about conditions and safeguards is vaguely written, permitting countries to apply safeguards only "where appropriate" and making them dependent on states' domestic laws, some of which have weak human rights protections. This means that the level of protection against abusive surveillance and data collection can vary widely at each country's discretion.

Extensive surveillance powers must be reined in and strong human rights protections added. Without those changes, the proposed treaty unacceptably endangers human rights around the world and should not be approved.

Check out our two detailed analyses about the lack of human rights safeguards in the draft treaty. 

Karen Gullo

Senators Expose Car Companies’ Terrible Data Privacy Practices

1 month ago

In a letter to the Federal Trade Commission (FTC) last week, Senators Ron Wyden and Edward Markey urged the FTC to investigate several car companies caught selling and sharing customer information without clear consent. Alongside details previously gathered from reporting by The New York Times, the letter also showcases exactly how much this data is worth to the car companies selling this information.

Car companies collect a lot of data about driving behavior, ranging from how often you brake to how rapidly you accelerate. This data can then be sold off to a data broker or directly to an insurance company, where it’s used to calculate a driver’s riskiness and adjust insurance rates accordingly. This surveillance is often defended by its promoters as a way to get discounts on insurance, but that pitch rarely addresses the fact that your insurance rates may actually go up.

If your car is connected to the internet or has an app, you may have “agreed” to this type of data sharing when setting it up, without realizing it. The Senators’ letter asserts that Hyundai shares drivers’ data without seeking their informed consent, and that GM and Honda used deceptive practices during signup.

When it comes to the price that companies can get for selling your driving data, the numbers vary wildly, but the data isn’t as valuable as you might imagine. The letter states that Honda sold the data on about 97,000 cars to an analytics company, Verisk—which turned around and sold the data to insurance companies—for $25,920, or 26 cents per car. Hyundai got a better deal, but still not astronomical numbers: Verisk paid Hyundai $1,043,315.69, or 61 cents per car. GM declined to share details about its sales.
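The per-car figures follow from simple division of the totals stated in the letter; a quick sketch reproduces them (the implied Hyundai fleet size is a back-calculation, not a number from the letter):

```python
# Totals reported in the Senators' letter.
honda_total = 25_920.00   # sale price for data on ~97,000 cars
honda_cars = 97_000

honda_per_car = honda_total / honda_cars
print(f"Honda: ${honda_per_car:.2f} per car")  # about 26-27 cents

# Hyundai's per-car price is stated directly; the number of cars is not,
# but it can be estimated by dividing the total by the per-car price.
hyundai_total = 1_043_315.69
hyundai_per_car = 0.61
implied_cars = hyundai_total / hyundai_per_car
print(f"Hyundai: roughly {implied_cars:,.0f} cars")  # roughly 1.7 million
```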

The letter also reveals that while GM stopped sharing driving data after The New York Times’ investigation, it did not stop sharing location data, which it’s been sharing for years. GM collects and shares location data on every car that’s connected to the internet, and doesn’t offer a way to opt out beyond disabling internet connectivity altogether. According to the letter, GM refused to name the company it’s currently sharing the location data with. While GM claims the location data is de-identified, there is no way to de-identify location data. With just one data point, where the car is parked most often, it becomes obvious where a person lives.
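The re-identification point can be shown in a few lines. This minimal sketch, using entirely made-up coordinates, treats the most frequent overnight parking location of one “anonymous” car as the likely home address—no name or VIN required:

```python
from collections import Counter

# Hypothetical week of overnight parking pings for a single car,
# coordinates rounded to roughly 100 m. Nothing identifies the owner,
# yet the most common location almost certainly marks their home.
pings = [
    (37.7793, -122.4192),  # most nights: the owner's street
    (37.7793, -122.4192),
    (37.7793, -122.4192),
    (37.7793, -122.4192),
    (37.7793, -122.4192),
    (37.7510, -122.4470),  # one night elsewhere
    (37.8044, -122.2712),  # one night elsewhere
]

likely_home, nights = Counter(pings).most_common(1)[0]
print(likely_home, nights)  # (37.7793, -122.4192) 5
```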

Car makers should not sell our driving and location history to data brokers or insurance companies, and they shouldn’t make it as hard as they do to figure out what data gets shared and with whom. This level of tracking is a nightmare on its own, and is made worse for certain kinds of vulnerable populations, such as survivors of domestic abuse.

The three automakers listed in the letter are certainly not the only ones sharing data without real consent, and it’s likely there are other data brokers who handle this type of data. The FTC should investigate this industry further, just as it has recently investigated many other industries that threaten data privacy. Moreover, Congress and the states must pass comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent.

Thorin Klosowski

EFF’s Concerns About the UN Draft Cybercrime Convention

1 month ago

The proposed UN Cybercrime Convention is an extensive surveillance pact that imposes intrusive domestic surveillance measures and mandates states’ cooperation in surveillance and data sharing. It requires states to aid each other in cybercrime investigations and prosecutions, allowing the collection, preservation, and sharing of electronic evidence for any crime deemed serious by a country’s domestic law, with minimal human rights safeguards. This cooperation extends even to countries with poor human rights records.  Negotiations for this treaty began in 2022, initiated by a controversial proposal from the Russian Federation. If adopted, it will rewrite surveillance laws worldwide. Millions of people, including human rights defenders, journalists, security researchers, and those speaking truth to power, will be affected. Without clear, enforceable safeguards, the treaty risks becoming a tool for state abuse and transnational repression rather than protecting human rights. Below are our main concerns. For a comprehensive list, please refer to our redlines and appeal to EU Delegates. 

EFF’s Key Concerns  

The Title of the Draft Convention is Misleading and Problematic: Cybercrime is a real issue, but equating it with any crime involving ICTs is conceptually and practically harmful. Recent efforts at the domestic level to broaden its definition have led to the criminalization of legitimate activities, such as online criticism, religious expression, or LGBTQ+ support. The proposed treaty encourages expansive interpretations that could lead to human rights abuses and transnational repression. Recommendation: Restrict the definition to "core cybercrimes" like technical attacks on computers, devices, data, and communications systems. Exclude human rights-protected activities from the scope of the treaty to prevent misuse and ensure these rights are not unjustly targeted by equating cybercrime with any crime using ICT. 
 
Expansive Scope and Over-Criminalization Risks: The draft Convention's criminalization chapter dangerously broadens its scope by including crimes like “grooming” and CSAM, not just cybercrimes. Its CSAM definition risks criminalizing consensual conduct between minors. Even worse, a proposed Protocol could add two more Ad Hoc sessions to discuss even more crimes, further expanding its broad scope. Recommendation: Criminalization must be limited to Articles 7 to 11. Narrow the scope of the CSAM article to target only intentional, malicious actions, exclude from criminalization consensual activity between minors, make exemptions for self-generated content by minors mandatory, ensure financing provisions target only those knowingly involved in illegal activities, and exclude the public interest use of such materials, such as evidence in crime investigations, and scientific or artistic materials.  

Overbroad Scope of Evidence Gathering Powers Will Enable Domestic and Cross-Border Spying on Acts of Expression: The open-ended scope of Chapters IV & V risks undermining law enforcement cooperation on actual cybercrime offenses by diluting resources. It lets governments spy on people to gather potential evidence for any crime committed using ICT. It also allows one state to help another conduct surveillance for any so-called serious crime. These expansions turn the treaty into an extensive surveillance pact. Article 23(2)(c) greenlights invasive measures for minor offenses and for protected expression abusively criminalized in some countries. Article 35(1)(c) mandates cooperation for serious crimes, defined as offenses punishable by four years or more, which can include acts of expression treated as serious offenses under national law. This broad scope risks massive abuse of power. Recommendation: Limit Articles 23(2)(c) and 35(1)(c) to Articles 7 to 11 and delete Article 23(2)(b). Support OHCHR’s recommendation to revise the definition of serious crimes to mean only “those involving death, injury, or other grave harms,” as merely suggesting respect for human rights within such a broad scope is important but insufficient because it lacks enforceable protections against misuse and abuse. Ensure cooperation is limited to situations where there is a reasonable suspicion that legal assistance will produce evidence of a criminal offense.  

Insufficient Human Rights Safeguards: Article 24, which addresses conditions and safeguards and includes the principle of proportionality, fails to explicitly include other crucial principles such as legality, necessity, and non-discrimination. Effective human rights protections require judicial approval before surveillance is conducted, transparency about actions taken, and notification of users when their data is accessed, unless doing so would jeopardize the investigation. The new draft omits these safeguards; worse, it defers the few existing safeguards to national laws, which vary greatly and may not always provide the necessary protections. It also lacks safeguards for legally privileged information, fails to prevent compelled self-incrimination, and omits protections for criminal defense attorneys. These gaps raise concerns about the erosion of human rights: the treaty doesn’t raise the bar against invasive surveillance but rather entrenches even the lowest protections, potentially undermining existing robust standards.  
 
Highly Intrusive Secret Spying Powers Without Robust Safeguards: The draft allows extensive secret surveillance with weak safeguards, posing significant risks both domestically and internationally. Domestically, it permits real-time interception of traffic data for any crime, while content interception is limited to serious crimes—offenses punishable by four years or more under domestic law. Service providers are compelled to assist in these surveillance activities, often under perpetual gag orders that prevent notification even when investigations are no longer jeopardized. Internationally, the draft allows one state to assist another in carrying out such surveillance for serious crimes, forcing companies to comply with foreign surveillance requests, again in perpetual secrecy. This lack of transparency and accountability is a recipe for unchecked abuses of power and undermines trust in digital services. Recommendation: Delete Articles 29, 30, 45, and 46. 

Compelled Technical Assistance: The draft requires countries to have laws enabling authorities to compel anyone with knowledge of a particular computer system to provide information necessary to facilitate access. This could mean asking a tech expert or engineer to help unlock a device or explain its security features, which may compromise security or reveal confidential information (e.g., an engineer might be arbitrarily required to disclose an unfixed security flaw or provide signed encryption keys that protect data). Recommendation: Delete Article 28(4).  

Lawless Law Enforcement Cooperation Risks Human Rights Erosion: The current wording of Article 47 risks supporting open-ended law enforcement cooperation without detailing the necessary limitations and safeguards required under international human rights law. States should not use this Convention to authorize or require personal data sharing beyond the scope of existing mutual legal assistance treaties, the safeguards established under the MLA, and the MLA vetting mechanism. Removing these safeguards without providing comparable protections and limitations invites misuse of the mutual legal assistance framework for abuse and/or repression. Recommendation: Limit Article 47(1) to Articles 7-11, delete Articles 47(1)(b), (c), and (f), and reference Articles 24 and 36 in Article 47(2). 

Insufficient Protection for Security Researchers and Other Public Interest Work: The draft Convention fails to exempt security research, journalism, and whistleblowing from criminalization, posing significant risks to cybersecurity and press freedom globally. Even those involved in authorized testing or protection of ICT systems are at risk: the draft’s provisions on illegal access, interception, and interference lack mandatory requirements for criminal intent and harm, threatening to penalize good-faith security research. A full list of recommendations is available here.

Risks to LGBTQ and Gender Rights: The broad scope of the convention continues to pose significant risks to LGBTQ+ and gender rights. The domestic and international cooperation chapter could be exploited to target individuals based on their gender or sexual orientation, especially if domestic laws criminalize these expressions as serious crimes. This is particularly concerning given the history of cybercrime laws being misused to persecute marginalized groups. Recommendation: Restrict the scope of evidence gathering to core cybercrimes. Revise the definition of serious crime as per OHCHR’s recommendation. 
 
Want more information? Please contact EFF Policy Director for Global Privacy Katitza Rodriguez at katitza@eff.org. 
 
Download our PDF here.

 

Katitza Rodriguez

Why You Should Hate the Proposed UN Cybercrime Treaty

1 month ago

International UN treaties aren’t usually on users’ radar. They are debated, often over the course of many years, by diplomats and government functionaries in Vienna or New York, and their significance is often overlooked or lost in the flood of information and news we process every day, even when they expand police powers and threaten the fundamental rights of people all over the world.

Such is the case with the proposed UN Cybercrime Treaty. For more than two years, EFF and its international civil society partners have been deeply involved in spreading the word about, and fighting to fix, seriously dangerous flaws in the draft convention. In the coming days we will publish a series of short posts that cut through the draft’s dense, highly technical text explaining the real-world effects of the convention.

The proposed treaty, pushed by Russia and shepherded by the UN Office on Drugs and Crime, is an agreement between nations purportedly aimed at strengthening cross-border investigations and prosecutions of cybercriminals who spread malware, steal data for ransom, and cause data breaches, among other offenses.

The problem is, as currently written, the treaty gives governments massive surveillance and data collection powers to go after not just cybercrime, but any offense they define as serious that involves the use of a computer or communications system. In some countries, that includes criticizing the government in a social media post, expressing support online for LGBTQ+ rights, or publishing news about protests or massacres.

Tech companies and their overseas staff, under certain treaty provisions, would be compelled to help governments pursue people’s data, locations, and communications, subject to domestic laws, many of which impose draconian fines.

We have called the draft convention a blank check for surveillance abuse that can be used as a tool for human rights violations and transnational repression. It’s an international treaty that everyone should know and care about because it threatens the rights and freedoms of people across the globe. Keep an eye out for our posts explaining how.

For our key concerns, read our three-pager:

Karen Gullo

Digital Apartheid in Gaza: Unjust Content Moderation at the Request of Israel’s Cyber Unit

1 month ago

This is part one of an ongoing series. Part two on the role of big tech in human rights abuses is here.

Government involvement in content moderation raises serious human rights concerns in every context. Since October 7, social media platforms have been challenged for unjustified takedowns of pro-Palestinian content—sometimes at the request of the Israeli government—and a simultaneous failure to remove hate speech towards Palestinians. More specifically, social media platforms have worked with the Israeli Cyber Unit—a government office set up to issue takedown requests to platforms—to remove content considered incitement to violence and terrorism, as well as any promotion of groups widely designated as terrorists. 

Many of these relationships predate the current conflict, but have proliferated in the period since. Between October 7 and November 14, Israeli authorities sent a total of 9,500 takedown requests to social media platforms, of which 60 percent went to Meta, with a reported 94 percent compliance rate. 

This is not new. The Cyber Unit has long boasted that its takedown requests result in high compliance rates of up to 90 percent across all social media platforms. They have unfairly targeted Palestinian rights activists, news organizations, and civil society; one such incident prompted Meta’s Oversight Board to recommend that the company “Formalize a transparent process on how it receives and responds to all government requests for content removal, and ensure that they are included in transparency reporting.”

When a platform edits its content at the behest of government agencies, it can leave the platform inherently biased in favor of that government’s favored positions. That cooperation gives government agencies outsized influence over content moderation systems for their own political goals—to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such systems are established, it is easy for the government to use the systems to coerce and pressure platforms to moderate speech they may not otherwise have chosen to moderate.

Alongside government takedown requests, free expression in Gaza has been further restricted by platforms unjustly removing pro-Palestinian content and accounts—interfering with the dissemination of news and silencing voices expressing concern for Palestinians. At the same time, X has been criticized for failing to remove hate speech and has disabled features that allow users to report certain types of misinformation. TikTok has implemented lackluster strategies to monitor the nature of content on their services. Meta has admitted to suppressing certain comments containing the Palestinian flag in certain “offensive contexts” that violate its rules.

To combat these consequential harms to free expression in Gaza, EFF urges platforms to follow the Santa Clara Principles on Transparency and Accountability in Content Moderation and undertake the following actions:

  1. Bring local and regional stakeholders into the policymaking process to provide greater cultural competence—knowledge and understanding of local language, culture and contexts—throughout the content moderation system.
  2. Urgently recognize the particular risks to users’ rights that result from state involvement in content moderation processes.
  3. Ensure that state actors do not exploit or manipulate companies’ content moderation systems to censor dissenters, political opponents, social movements, or any person.
  4. Notify users when, how, and why their content has been actioned, and give them the opportunity to appeal.
Everyone Must Have a Seat at the Table

Given the significant evidence of ongoing human rights violations against Palestinians, both before and since October 7, U.S. tech companies have significant ethical obligations to verify to themselves, their employees, the American public, and Palestinians themselves that they are not directly contributing to these abuses. Palestinians must have a seat at the table, just as Israelis do, when it comes to moderating speech in the region, most importantly their own. Anything less than this risks contributing to a form of digital apartheid.

An Ongoing Issue

This isn’t the first time EFF has raised concerns about censorship in Palestine, including in multiple international forums. Most recently, we wrote to the UN Special Rapporteur on Freedom of Expression expressing concern about the disproportionate impact of platform restrictions on expression by governments and companies. In May, we submitted comments to the Oversight Board urging that moderation decisions of the rallying cry “From the river to the sea” must be made on an individualized basis rather than through a blanket ban. Along with international and regional allies, EFF also asked Meta to overhaul its content moderation practices and policies that restrict content about Palestine, and have issued a set of recommendations for the company to implement. 

And back in April 2023, EFF and ECNL submitted comments to the Oversight Board addressing the over-moderation of the word ‘shaheed’ and other Arabic-language content by Meta, particularly through the use of automated content moderation tools. In its response, the Oversight Board found that Meta’s approach disproportionately restricts free expression and is unnecessary, and said the company should end its blanket ban on content using the word “shaheed.”

Paige Collings

Electronic Frontier Foundation to Present Annual EFF Awards to Carolina Botero, Connecting Humanity, and 404 Media

1 month ago
2024 Awards Will Be Presented in a Live Ceremony Thursday, Sept. 12 in San Francisco

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Carolina Botero, Connecting Humanity, and 404 Media will receive the 2024 EFF Awards for their vital work in ensuring that technology supports freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law. 

The EFF Awards ceremony will start at 6:30 pm PT on Thursday, Sept. 12, 2024 at the Golden Gate Club, 135 Fisher Loop in San Francisco’s Presidio. Guests can register at https://www.eff.org/event/eff-awards-2024. The ceremony will be livestreamed and recorded. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“Maintaining internet access in a conflict zone, conducting fearless investigative reporting on how tech impacts our lives, and bringing the fight for digital rights and social justice to significant portions of Latin America are all ways of ensuring technology advances us all,” EFF Executive Director Cindy Cohn said. “This year’s EFF Award winners embody the internet’s highest ideals, building a better-connected and better-informed world that brings freedom, justice, and innovation for everyone. We hope that by recognizing them in this small way, we can shine a spotlight that helps them continue and even expand their important work.” 

Carolina Botero: Fostering Digital Human Rights in Latin America 

Carolina Botero is a researcher, lecturer, writer, and consultant who is among the foremost leaders in the fight for digital rights in Latin America. In more than a decade as executive director of the Colombia-based Karisma Foundation — founded in 2003 to ensure that digital technologies protect and advance fundamental human rights and promote social justice — she transformed the organization into an outspoken voice fostering freedom of expression, privacy, access to knowledge, justice, and self-determination in our digital world, with regional and international impact. She left that position this year, opening the door for a new generation while leaving a strong and inspiring legacy for those in Latin America and beyond who advocate for a digital world that enhances rights and empowers the powerless. Botero holds a master’s degree in international law and cooperation from Belgium’s Vrije Universiteit Brussel and a master’s degree in commercial and contracting law from Spain’s Universitat Autònoma de Barcelona. She frequently authors op-eds for Colombia’s El Espectador and La Silla Vacía, and serves on the advisory board of The Regional Center for Studies for the Development of the Information Society (Cetic.br), monitoring the adoption of information and communication technologies in Brazil. She previously served on the board of Creative Commons and as a member of the UNESCO Advisory Committee on Open Science.  

Connecting Humanity: Championing Internet Access in Gaza 

Connecting Humanity is a Cairo-based nonprofit organization that helps Palestinians in Gaza regain access to the internet – a crucial avenue for free speech and the free press. Founded in late 2023 by Egyptian journalist, writer, podcaster, and activist Mirna El Helbawi, Connecting Humanity collects and distributes embedded SIMs (eSIMs), a software version of the physical chip used to connect a phone to cellular networks and the internet. Connecting Humanity has collected hundreds of thousands of eSIMs from around the world and distributed them to people in Gaza, providing a lifeline for many caught up in Israel’s war on Hamas. People in crisis zones rely upon the free flow of information to survive, and restoring internet access in places where other communications infrastructure has been destroyed helps with dissemination of life-saving information and distribution of humanitarian aid, ensures that everyone’s stories can be heard, and enables continued educational and cultural contact. El Helbawi previously worked as an editor at 7 Ayam Magazine and as a radio host at Egypt’s NRJ Group; she was shortlisted for the Arab Journalism Award in 2016, and she created the podcast Helbing.

404 Media: Fearless Journalism 

As the media landscape in general and tech media in particular keeps shrinking, 404 Media — launched in August 2023 — has tirelessly forged ahead with incisive investigative reports, deep-dive features, blogs, and scoops about topics such as hacking, cybersecurity, cybercrime, sex, artificial intelligence, consumer rights, government and law enforcement surveillance, privacy, and the democratization of the internet. Co-founders Jason Koebler, Sam Cole, Joseph Cox, and Emanuel Maiberg all worked together at Vice Media’s Motherboard, but after that site's parent company filed for bankruptcy in May 2023, the four journalists resolved to go out on their own and build what Maiberg has called "very much a website by humans, for humans about technology. It’s not about the business of technology — it’s about how it impacts real people in the real world.” Among many examples, 404 Media has uncovered a privacy issue in the New York subway system that let stalkers track people’s movements, causing the MTA to shut down the feature; investigated a platform being used to generate non-consensual pornography with AI, causing the platform to make changes limiting abuse; and reported on dangerously inaccurate AI-generated books that Amazon then removed from sale.

 To register for this event: https://www.eff.org/event/eff-awards-2024 

For past honorees: https://www.eff.org/awards/past-winners 

 

Josh Richman

Briefing: Negotiating States Must Address Human Rights Risks in the Proposed UN Surveillance Treaty

1 month ago

At a virtual briefing today, experts from the Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media outlined the human rights risks posed by the proposed UN Cybercrime Treaty. They explained that the draft convention, instead of addressing core cybercrimes, is an extensive surveillance treaty that imposes intrusive domestic spying measures with little to no safeguards protecting basic rights. UN Member States are scheduled to hold a final round of negotiations about the treaty's text starting July 29.

If left as is, the treaty risks becoming a powerful tool for countries with poor human rights records, one that can be used against journalists, dissenters, and everyday people. Watch the briefing here:

 

https://www.youtube.com/embed/SBkjj2tkcAY

Karen Gullo

Journalists Sue Massachusetts TV Corporation Over Bogus YouTube Takedown Demands

1 month ago
Posting Video Clips of Government Meetings Is Fair Use That Doesn’t Violate the DMCA, EFF’s Clients Argue

BOSTON—A citizen journalists’ group represented by the Electronic Frontier Foundation (EFF) filed a federal lawsuit today against a Massachusetts community-access television company for falsely convincing YouTube to take down video clips of city government meetings.

The lawsuit was filed in the U.S. District Court for Massachusetts by Channel 781, an association of citizen journalists founded in 2021 to report on Waltham, MA, municipal affairs via its YouTube channel. The Waltham Community Access Corp.’s misrepresentation of copyright claims under the Digital Millennium Copyright Act (DMCA) led YouTube to temporarily deactivate Channel 781, making its work disappear from the internet last September just five days before an important municipal election, the suit says. 

“WCAC knew it had no right to stop people from using video recordings of public meetings, but asked YouTube to shut us down anyway,” Channel 781 cofounder Josh Kastorf said. “Democracy relies on an informed public, and there must be consequences for anyone who abuses the DMCA to silence journalists and cut off people’s access to government.” 

Channel 781 is a nonprofit, volunteer-run effort, and all of its content is available for free. Its posts include videos of its members reporting on news affecting the city, editorial statements, discussions in a talk-show format, and interviews. It also posts short video excerpts of meetings of the Waltham city council and other local government bodies. 

Waltham Community Access Corp. (WCAC) operates two cable television channels:  WCAC-TV is a Community Access station that provides programming geared towards the interests of local residents, businesses, and organizations, and MAC-TV is a Government Access station that provides coverage of municipal meetings, events, and special government-related programming. 

Some city meeting video clips that Channel 781 posted to YouTube were short excerpts from videos recorded by WCAC and first posted to WCAC’s website. Channel 781 posted them on YouTube to highlight newsworthy statements by city officials, to provoke discussion and debate, and to make the information more accessible to the public, including to people with disabilities. 

The DMCA notice and takedown process lets copyright holders ask websites to take down user-uploaded material that infringes their copyrights. Although Kastorf had explained to WCAC’s executive director that Channel 781’s use of the government meeting clips was a fair use under copyright law, WCAC sent three copyright infringement notices to YouTube referencing 15 specific Channel 781 videos, leading YouTube to deactivate the account and render all of its content inaccessible. YouTube didn’t restore access to the videos until two months later, after a lengthy intervention by EFF. 

The lawsuit—which seeks damages and injunctive relief—says WCAC knew, should have known, or failed to consider that the government meeting clips were a fair use of copyrighted material, and so it acted in bad faith when it sent the infringement notices to YouTube. 

“Nobody can use copyright to limit access to videos of public meetings, and those who make bogus claims in order to stifle critical reporting must be held accountable,” said EFF Intellectual Property Litigation Director Mitch Stoltz. “Phony copyright claims must never subvert the public’s right to know, and to report on, what government is doing.” 

For the complaint: https://www.eff.org/document/07-24-2024-channel-781-news-v-waltham-community-access-corporation-complaint

For more on the DMCA: https://www.eff.org/issues/dmca  

For EFF’s Takedown Hall of Shame: https://www.eff.org/takedowns

Contact: Mitch Stoltz, IP Litigation Director, mitch@eff.org
Josh Richman

Supreme Court Dodges Key Question in Murthy v. Missouri and Dismisses Case for Failing to Connect The Government’s Communication to Specific Platform Moderation

1 month 1 week ago

We don’t know a lot more about when government jawboning social media companies—that is, attempting to pressure them to censor users’ speech—violates the First Amendment; but we do know that lawsuits based on such actions will be hard to win. In Murthy v. Missouri, the U.S. Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases because none of the plaintiffs could show that any of the statements by the government they complained of were likely the cause of any specific actions taken by the social media platforms against them or that they would happen again.   

As we have written before, the First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication to an intermediary about users’ speech is unconstitutional; indeed, some are beneficial—for example, platforms often reach out to government actors they perceive as authoritative sources of information. And the distinction between proper and improper speech is often obscure. 

While the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

So, when do the government’s efforts to persuade one to censor another become coercion? This was a hard question prior to Murthy. And unfortunately, it remains so, though a different jawboning case also recently decided provides some clarity. 

Rather than provide guidance to courts about the line between permissible and impermissible government communications with platforms about publishing users’ speech, the Supreme Court dismissed Murthy, holding that every plaintiff lacked “standing” to bring the lawsuit. That is, none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. So, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

The through line between this case and Moody v. NetChoice, decided by the Supreme Court a few weeks later, is that social media platforms have a First Amendment right to moderate the speech any user sees, and, because they exercise that right routinely, a plaintiff who believes they have been jawboned must prove that it was because of the government’s dictate, not the platform’s own decision. 

Plaintiffs Lack Standing to Bring Jawboning Claims 

Article III of the U.S. Constitution limits federal courts to only considering “cases and controversies.” This limitation requires that any plaintiff have suffered an injury that was traceable to the defendants and which the court has the power to fix. The standing doctrine can be a significant barrier to litigants without full knowledge of the facts and circumstances surrounding their injuries, and EFF has often complained that courts require plaintiffs to prove their cases on the merits at very early stages of litigation before the discovery process. Indeed, EFF’s landmark mass surveillance litigation, Jewel v. NSA, was ultimately dismissed because the plaintiffs lacked standing to sue.

The main fault in the Murthy plaintiffs’ case was weak evidence

The standing question here differs from cases such as Jewel where courts have denied plaintiffs discovery because they couldn’t demonstrate their standing without an opportunity to gather evidence of the suspected wrongdoing. The Murthy plaintiffs had an opportunity to gather extensive evidence of suspected wrongdoing—indeed, the Supreme Court noted that the case’s factual record exceeds 26,000 pages. And the Supreme Court considered this record in its standing analysis.   

While the Supreme Court did not provide guidance on what constitutes impermissible government coercion of social media platforms in Murthy, its ruling does tell us what type of cause-and-effect a plaintiff must prove to win a jawboning case. 

A plaintiff will have to prove that the negative treatment of their speech was attributable to the government, not the independent action of the platform. This accounts for basic truths of content moderation, which we emphasized in our amicus brief: that platforms moderate all the time, often based on their community guidelines, but also often ad hoc, and informed by input from users and a variety of outside experts. 

When, as in this case, plaintiffs ask a court to stop the government from ongoing or future coercion of a platform to remove, deamplify, or otherwise obscure the plaintiffs’ speech—rather than, for example, compensate for harm caused by past coercion—those plaintiffs must show a real and immediate threat that they will be harmed again. Past incidents of government jawboning are relevant only to predict a repeat of that behavior. Further, plaintiffs seeking to stop ongoing or future government coercion must show that the platform will change its policies and practices back to their pre-coerced state should the government be ordered to stop. 

Fortunately, plaintiffs will only have to prove that a particular government actor “pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.” Plaintiffs do not need to show that the government targeted their posts specifically, just the general topic of their posts, and that their posts were negatively moderated as a result.  

The main fault in the Murthy plaintiffs’ case was weak evidence that the government actually caused a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ social media posts or any particular social media post at all. Indeed, the evidence that the content moderation decisions were the platforms’ independent decisions was stronger: the platforms had all moderated similar content for years and strengthened their content moderation standards before the government got involved; they spoke not just with the government but with other outside experts; and they had independent, non-governmental incentives to moderate user speech as they did. 

The Murthy plaintiffs also failed to show that the government jawboning they complained of, much of it focusing on COVID and vaccine posts, was continuing. As the Court noted, the government appears to have ceased those efforts. It was not enough that the plaintiffs continue to suffer ill effects from that past behavior. 

And lastly, the plaintiffs could not show that the order they sought from the courts preventing the government from further jawboning would actually cure their injuries, since the platforms may still exercise independent judgment to negatively moderate the plaintiffs’ posts even without governmental involvement. 

The Court Narrows the Right to Listen 

The right to listen and receive information is an important First Amendment right that has typically allowed those who are denied access to censored speech to sue to regain access. EFF has fervently supported this right. 

But the Supreme Court’s opinion in Murthy v. Missouri narrows this right. The Court explains that only those with a “concrete, specific connection to the speaker” have standing to sue to challenge such censorship. At a minimum, it appears, one who wants to sue must point to specific instances of censorship that have caused them harm; it is not enough to claim an interest in a person’s speech generally or to claim harm from being denied “unfettered access to social media.” While this holding rightfully applies to the States, which had sought to vindicate the audience interests of their entire populaces, it is more problematic when applied to individual plaintiffs. Going forward, EFF will advocate for a narrow reading of this holding. 

As we pointed out in our amicus briefs and blog posts, this case was always a difficult one for litigating the important question of defining illegal jawboning because it was based more on a sprawling, multi-agency conspiracy theory than on specific takedown demands resulting in actual takedowns. The Supreme Court seems to have seen it the same way. 

But the Supreme Court’s Other Jawboning Case Does Help Clarify Coercion  

Fortunately, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo.  

In NRA v. Vullo, the Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan 

NRA v. Vullo is a lawsuit by the National Rifle Association alleging that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. Unlike Murthy, the case came to the Supreme Court on a motion to dismiss before any discovery had been conducted and when courts are required to accept all of the plaintiffs’ factual allegations as true. 

The Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan, a 1963 case in which the Supreme Court established that governments violate the First Amendment by coercing one person to censor another person’s speech over which they exercise control, what the Supreme Court called “indirect censorship.”   

In Vullo, the Supreme Court endorsed a multi-factored test that many of the lower courts had adopted, as a “useful, though nonexhaustive, guide” to answering the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: (1) word choice and tone, (2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), (3) whether the speech was perceived as a threat, and (4) whether the speech refers to adverse consequences. The Supreme Court explained that the second and third factors are related—the more authority an official wields over someone, the more likely they are to perceive their speech as a threat, and the less likely they are to disregard a directive from that official. And the Supreme Court made clear that coercion may arise from either threats or inducements.  

In our amicus brief in Murthy, we had urged the Court to make clear that an official’s intent to coerce was also highly relevant. The Supreme Court did not directly state this, unfortunately. But the Court did refer several times to the NRA as having properly alleged that the “coercive threats were aimed at punishing or suppressing disfavored speech.”  

At EFF, we will continue to look for cases that present good opportunities to bring jawboning claims before the courts and to bring additional clarity to this important doctrine. 

 

David Greene

Why Privacy Badger Opts You Out of Google’s “Privacy Sandbox”

1 month 1 week ago

Update July 22, 2024: Shortly after we published this post, Google announced it's no longer deprecating third-party cookies in Chrome. We've updated this blog to note the news.

The latest update of Privacy Badger opts users out of ad tracking through Google’s “Privacy Sandbox.” 

Privacy Sandbox is Google’s way of letting advertisers keep targeting ads based on your online behavior without using third-party cookies. Third-party cookies were once the most common form of online tracking technology, but major browsers, like Safari and Firefox, started blocking them several years ago. After pledging to eventually do the same for Chrome in 2020, and after several delays, today Google backtracked on its privacy promise, announcing that third-party cookies are here to stay. Notably, Google Chrome continues to lag behind other browsers in terms of default protections against online tracking.

Privacy Sandbox might be less invasive than third-party cookies, but that doesn’t mean it’s good for your privacy. Instead of eliminating online tracking, Privacy Sandbox simply shifts control of online tracking from third-party trackers to Google. With Privacy Sandbox, tracking will be done by your Chrome browser itself, which shares insights gleaned from your browsing habits with different websites and advertisers. Despite sounding like a feature that protects your privacy, Privacy Sandbox ultimately protects Google's advertising business.

How did Google get users to go along with this? In 2023, Chrome users received a pop-up about “Enhanced ad privacy in Chrome.” In the U.S., if you clicked the “Got it” button to make the pop-up go away, Privacy Sandbox remained enabled for you by default. Users could opt out by changing three settings in Chrome. But first, they had to realize that "Enhanced ad privacy" actually enabled a new form of ad tracking.

You shouldn't have to read between the lines of Google’s privacy-washing language to protect your privacy. Privacy Badger will do this for you!

Three Privacy Sandbox Features That Privacy Badger Disables For You

If you use Google Chrome, Privacy Badger will update three different settings that constitute Privacy Sandbox:

  • Ad topics: This setting allows Google to generate a list of topics you’re interested in based on the websites you visit. Any site you visit can ask Chrome what topics you’re supposedly into, then display an ad accordingly. Some of the potential topics – like “Student Loans & College Financing”, “Credit Reporting & Monitoring”, and “Unwanted Body & Facial Hair Removal” – could serve as proxies for sensitive financial or health information, potentially enabling predatory ad targeting. In an attempt to prevent advertisers from identifying you, your topics roll over each week and Chrome includes a random topic 5% of the time. However, researchers found that Privacy Sandbox topics could be used to re-identify users across websites. Using 1,207 people’s real browsing histories, researchers showed that as few as three observations of a person’s “ad topics” were enough to identify 60% of users across different websites.

  • Site-suggested ads: This setting enables "remarketing" or "retargeting," which is the reason you’re constantly seeing ads for things you just shopped for online. It works by allowing any site you visit to give information (like “this person loves sofas”) to your Chrome browser. Then when you visit a site that runs ads, Chrome uses that information to help the site display a sofa ad without the site learning that you love sofas. However, researchers demonstrated this feature of Privacy Sandbox could be exploited to re-identify and track users across websites, partially infer a user’s browsing history, and manipulate the ads that other sites show a user.

  • Ad measurement: This setting allows advertisers to track ad performance by storing data in your browser that's then shared with the advertised sites. For example, after you see an ad for shoes, whenever you visit that shoe site it’ll get information about the time of day the ad was shown and where the ad was displayed. Unfortunately, Google allows advertisers to include a unique ID with this data. So if you interact with multiple ads from the same advertiser around the web, this ID can help an advertiser build a profile of your browsing habits.
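The re-identification risk researchers found in the “ad topics” feature can be illustrated with a toy simulation. This is not the actual Topics API, and the user profiles, topic IDs, and helper names below are invented for illustration: the point is that a tracker which sees a few weekly topic observations from an unknown visitor can intersect them against interest profiles it built elsewhere and narrow down who that visitor is.

```python
import random

TAXONOMY_SIZE = 469  # roughly the size of the initial Topics taxonomy (illustrative)

def observe_topic(interests, rng, noise=0.05):
    """Toy model of one weekly 'ad topics' observation: usually one of
    the user's real interest topics, but a uniformly random topic from
    the taxonomy 5% of the time (mimicking Chrome's noise mechanism)."""
    if rng.random() < noise:
        return rng.randrange(TAXONOMY_SIZE)
    return rng.choice(interests)

def reidentify(profiles, observations):
    """Keep only candidate users whose interest profile contains every
    observed topic. Real attacks are statistical and tolerate the noise
    topics; this exact-match version is the simplest possible sketch."""
    matches = set(profiles)
    for topic in observations:
        matches = {user for user in matches if topic in profiles[user]}
    return matches

# Hypothetical interest profiles a tracker has already built on site A.
profiles = {
    "alice": {12, 88, 301},
    "bob": {12, 45, 200},
    "carol": {88, 45, 301},
}

# On site B, the same tracker records three weekly observations from an
# unknown visitor (noise disabled here to keep the demo deterministic).
rng = random.Random(7)
observations = [observe_topic(sorted(profiles["alice"]), rng, noise=0.0)
                for _ in range(3)]
candidates = reidentify(profiles, observations)
assert "alice" in candidates  # the true user always survives the filtering
```

Even this crude intersection shrinks the candidate pool with every observation, which is why a handful of weekly topic readings sufficed to single out most users in the study cited above.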

Why Privacy Badger Opts Users Out of Privacy Sandbox

Privacy Badger is committed to protecting you from online tracking. Despite being billed as a privacy feature, Privacy Sandbox protects Google’s bottom line at the expense of your privacy. Nearly 80% of Google’s revenue comes from online advertising. By building ad tracking into your Chrome browser, Privacy Sandbox gives Google even more control of the advertising ecosystem than it already has. Yet again, Google is rewriting the rules for the internet in a way that benefits itself first.

Researchers and regulators have already found that Privacy Sandbox “fails to meet its own privacy goals.” In a draft report leaked to the Wall Street Journal, the UK’s privacy regulator noted that Privacy Sandbox could be exploited to identify anonymous users and that companies will likely use it to continue tracking users across sites. Likewise, after researchers told Google about 12 attacks they conducted on a key feature of Privacy Sandbox prior to its public release, Google forged ahead and released the feature after mitigating only one of those attacks.

Privacy Sandbox offers some privacy improvements over third-party cookies. But it reinforces Google’s commitment to behavioral advertising, something we’ve been advocating against for years. Behavioral advertising incentivizes online actors to collect as much of our information as possible. This can lead to a range of harms, like bad actors buying your sensitive information and predatory ads targeting vulnerable populations.

Your browser shouldn’t put advertisers' interests above yours. As Google turns your browser into an advertising agent, Privacy Badger will put your privacy first.

What You Can Do Now

If you don’t already have Privacy Badger, install it now to automatically opt out of Privacy Sandbox and the broader ecosystem of online tracking. Already have Privacy Badger? You’re all set! And of course, don’t hesitate to spread the word to friends and family you want to protect from invasive online tracking. With your help, Privacy Badger will keep fighting to end online tracking and build a safer internet for all.

Lena Cohen

Media Briefing: EFF, Partners Warn UN Member States Are Poised to Approve Dangerous International Surveillance Treaty

1 month 1 week ago
Countries That Believe in Rule of Law Must Push Back on Draft That Expands Spying Powers, Benefiting Authoritarian Regimes

SAN FRANCISCO—On Wednesday, July 24, at 11:00 am Eastern Time (8:00 am Pacific Time, 5:00 pm CET), experts from Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media will brief reporters about the imminent adoption of a global surveillance treaty that threatens human rights around the world, potentially paving the way for a new era of transnational repression.

The virtual briefing will update members of the media ahead of the United Nations’ concluding session of treaty negotiations, scheduled for July 29-August 9 in New York, to possibly finalize and adopt what started out as a treaty to combat cybercrime.

Despite repeated warnings and recommendations by human rights organizations, journalism and industry groups, cybersecurity experts, and digital rights defenders to add human rights safeguards and rein in the treaty’s broad scope and expansive surveillance powers, UN Member States are expected to adopt the Russian-backed, deeply flawed draft.

The experts will discuss the draft treaty in terms of shifts in geopolitical power, abuse of cybercrime laws, and challenges posed by the rising influence of Russia and China. A question-and-answer session will follow speaker presentations.  

WHAT:
Virtual media briefing on UN surveillance treaty

HOW:
To join the news conference remotely, please register from the following link to receive the webinar ID and password:
https://eff.zoom.us/meeting/register/tZwkd-GsrzoiH9Jt3gsl2CJ55Xv0hBDguxW5

SPEAKERS:
Tirana Hassan, Executive Director, Human Rights Watch
Paloma Lara-Castro, Public Policy Coordinator, Derechos Digitales
Khadija Patel, Journalist in Residence, International Fund for Public Interest Media
Katitza Rodriguez, Policy Director for Global Policy, EFF
Moderator: Raman Jit Singh Chima, Global Cybersecurity Lead and Senior International Counsel, Access Now

WHEN:
Wednesday, July 24, at 11:00 am Eastern Time, 8:00 am Pacific Time, 5:00 pm CET

For EFF’s submissions and Coalition Letters to UN Ad Hoc Committee overseeing treaty negotiations:
https://www.eff.org/pages/submissions#main-content

Contact: Karen Gullo, Senior Writer for Free Speech and Privacy, karen@eff.org; Deborah Brown, Senior Researcher and Advocate on Technology and Rights, Human Rights Watch, brownd@hrw.org; Catalina Balla, catalina.balla@derechosdigitales.org
Karen Gullo

EFF Tells Minnesota Supreme Court to Strike Down Geofence Warrant As Fourth Circuit Court of Appeals Takes the Wrong Turn

1 month 1 week ago

We haven’t seen the end of invasive geofence warrants just yet, despite Google’s big announcement late last year that it was fundamentally changing how it collects location data. Today, EFF is filing an amicus brief in the Minnesota Supreme Court in State v. Contreras-Sanchez, a case involving a geofence warrant that directed Google to turn over an entire month of location data. Our brief argues that the warrant violates the Fourth Amendment and Minnesota’s state constitution.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. This creates a high risk of turning suspicion on innocent people for crimes they didn’t commit and can reveal sensitive and private information about where individuals have traveled in the past. We’ve seen a recent flurry of court cases involving geofence warrants, and these courts’ rulings will set important Fourth Amendment precedent not just in geofence cases, but other investigations involving similar “reverse warrants” such as users’ keyword searches on search engines.

In Contreras-Sanchez, police discovered a dead body on the side of a rural roadway. They did not know when the body was disposed of and had few leads, so they sought a warrant directing Google to turn over location data for the area around the site for the previous month. Notably, Google responded that turning over the entire monthlong dataset would be too “cumbersome,” even though it covered only a relatively sparsely populated area. Instead, following the now-familiar “three-step” process for geofence warrants, Google provided police with location data corresponding to twelve devices that had entered the area over a single week. Police focused on one device, then sought identifying information on that device, leading them to the defendant.

EFF’s brief, filed along with the National Association of Criminal Defense Lawyers and the Minnesota Association of Criminal Defense Lawyers, argues that the geofence warrant acted as a “general warrant” akin to the practices of the British agents in Colonial America who were authorized to go house by house, searching for smuggled goods and evidence of seditious publications. As we write in the brief:

This general warrant allowed law enforcement to go Google account by Google account, searching each user’s private location data for evidence of an alleged crime. The same concerns that animated staunch objection to general warrants in the past are equally relevant to geofence warrants today; these warrants lack individualized suspicion, allow for unbridled officer discretion, and impact the privacy rights of countless innocent individuals. And, like the eighteenth-century writs of assistance that inspired the Fourth Amendment’s drafters, geofence warrants are especially pernicious because they also have the potential to affect fundamental rights including freedom of speech, association, and bodily autonomy. Neither the Fourth Amendment, nor Article 1, Section 10 of the Minnesota Constitution tolerate a warrant of this breadth.

Federal appeals court makes a serious misstep on geofence warrants

Meanwhile, in the leading federal geofence case, United States v. Chatrie, the federal Court of Appeals for the Fourth Circuit issued a seriously misguided opinion earlier this month, holding that a geofence warrant covering a busy area around a bank robbery for two hours wasn’t even a Fourth Amendment search at all—meaning that the police wouldn’t necessarily need a warrant to get access to all of this sensitive location data. The two-judge majority opinion effectively ignores the impact of the U.S. Supreme Court’s landmark Fourth Amendment location data case, Carpenter v. United States, and similarly tries to distinguish the Fourth Circuit’s own important precedent in Leaders of a Beautiful Struggle v. Baltimore Police Department. In the majority’s view, in order to be a search protected by the Fourth Amendment, the government must collect a significant amount of location data over a long period of time, and the two-hour period at issue in Chatrie simply wasn’t long enough to interfere with individuals’ reasonable expectation of privacy in the “whole of their physical movements” the way longer surveillance was in Carpenter and Leaders.

But in a scathing, 70-plus page dissenting opinion, Judge Wynn dismantled these arguments, showing that Carpenter requires courts to look beyond formulaic applications of precedent and examine the actual character of the surveillance at issue. On nearly every metric, geofence warrants have the capacity to reveal associations just as private and intimate as, if not more than, the tracking at issue in Carpenter. What’s more, Judge Wynn’s dissent demonstrated what we’ve argued in geofence cases across the country: These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital communications. The only “evidence” supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation—and could turn each of those people into a suspect. 

Chatrie’s lawyers are petitioning the entire Fourth Circuit to review the case, and we’re hopeful that the Chatrie panel opinion will be overturned by the full court en banc. We’ll be filing another amicus brief supporting Chatrie’s petition. Stay tuned for that and for the ruling from the Minnesota Supreme Court in Contreras-Sanchez. 

Related Cases: Carpenter v. United States
Andrew Crocker

EFF, International Partners Appeal to EU Delegates to Help Fix Flaws in Draft UN Cybercrime Treaty That Can Undermine EU's Data Protection Framework

1 month 1 week ago

With the final negotiating session to approve the UN Cybercrime Treaty just days away, EFF and 21 international civil society organizations today urgently called on delegates from EU states and the European Commission to push back on the draft convention's many flaws, which include an excessively broad scope that will grant intrusive surveillance powers without robust human rights and data protection safeguards.

The time is now to demand changes in the text to narrow the treaty's scope, limit surveillance powers, and spell out data protection principles. Without these fixes, the draft treaty stands to give governments' abusive practices the veneer of international legitimacy and should be rejected.

Letter below:

Urgent Appeal to Address Critical Flaws in the Latest Draft of the UN Cybercrime Convention


Ahead of the reconvened concluding session of the United Nations (UN) Ad Hoc Committee on Cybercrime (AHC) in New York later this month, we, the undersigned organizations, wish to urgently draw your attention to the persistent critical flaws in the latest draft of the UN cybercrime convention (hereinafter Cybercrime Convention or the Convention).

Despite the recent modifications, we continue to share profound concerns regarding the persistent shortcomings of the present draft and we urge member states to not sign the Convention in its current form.

Key concerns and proposals for remedy:
  1. Overly Broad Scope and Legal Uncertainty:
  • The draft Convention’s scope remains excessively broad, including cyber-enabled offenses and other content-related crimes. The proposed title of the Convention and the introduction of the new Article 4 – with its open-ended reference to “offenses established in accordance with other United Nations conventions and protocols” – create significant legal uncertainty and expand the scope to an indefinite list of possible crimes to be determined only in the future. This ambiguity risks criminalizing legitimate online expression and would have a chilling effect detrimental to the rule of law. We continue to recommend narrowing the Convention’s scope to clearly defined, already existing cyber-dependent crimes only, to facilitate its coherent application, ensure legal certainty and foreseeability, and minimize potential abuse.
  • The draft Convention in Article 18 lacks clarity concerning the liability of online platforms for offenses committed by their users. The current draft of the Article lacks the requirement of intentional participation in offenses established in accordance with the Convention, thereby also contradicting Article 19, which does require intent. This poses the risk that online intermediaries could be held liable for information disseminated by their users, even without actual knowledge or awareness of the illegal nature of the content (as set out in the EU Digital Services Act), which will incentivise overly broad content moderation efforts by platforms to the detriment of freedom of expression. Furthermore, the wording is much broader (“for participation”) than the Budapest Convention (“committed for the corporation’s benefit”) and would merit clarification along the lines of paragraph 125 of the Council of Europe Explanatory Report to the Budapest Convention.
  • The proposal in the revised draft resolution to elaborate a draft protocol supplementary to the Convention represents a further push to expand the scope of offenses, risking the creation of a limitlessly expanding, increasingly punitive framework.
  2. Insufficient Protection for Good-Faith Actors:
  • The draft Convention fails to incorporate language sufficient to protect good-faith actors, such as security researchers (irrespective of whether it concerns the authorized testing or protection of an information and communications technology system), whistleblowers, activists, and journalists, from excessive criminalization. It is crucial that the mens rea element in the provisions relating to cyber-dependent crimes includes references to criminal intent and harm caused.
  3. Lack of Specific Human Rights Safeguards:
  • Article 6 fails to include specific human rights safeguards – as proposed by civil society organizations and the UN High Commissioner for Human Rights – to ensure a common understanding among Member States and to facilitate the application of the treaty without unlawful limitation of human rights or fundamental freedoms. These safeguards should: 
    • be applicable to the entire treaty to ensure that cybercrime efforts provide adequate protection for human rights;
    • be in accordance with the principles of legality, necessity, proportionality, non-discrimination, and legitimate purpose;
    • incorporate the right to privacy among the human rights specified;
    • address the lack of effective gender mainstreaming to ensure the Convention does not undermine human rights on the basis of gender.
  4. Procedural Measures and Law Enforcement:
  • The Convention should limit the scope of procedural measures to the investigation of the criminal offenses set out in the Convention, in line with point 1 above.
  • In order to facilitate their application and – in light of their intrusiveness – to minimize the potential for abuse, this chapter of the Convention should incorporate the following minimal conditions and safeguards as established under international human rights law. Specifically, the following should be included in Article 24:
    • the principles of legality, necessity, proportionality, non-discrimination and legitimate purpose;
    • prior independent (judicial) authorization of surveillance measures and monitoring throughout their application;
    • adequate notification of the individuals concerned once it no longer jeopardizes investigations;
    • and regular reports, including statistical data on the use of such measures.
  • Articles 28/4, 29, and 30 should be deleted, as they include excessive surveillance measures that open the door for interference with privacy without sufficient safeguards as well as potentially undermining cybersecurity and encryption.
  5. International Cooperation:
  • The Convention should limit the scope of international cooperation solely to the crimes set out in the Convention itself to avoid misuse (as per point 1 above). Information sharing for law enforcement cooperation should be limited to specific criminal investigations with explicit data protection and human rights safeguards.
  • Article 40 requires “the widest measure of mutual legal assistance” for offenses established in accordance with the Convention as well as any serious offense under the domestic law of the requesting State. Where no treaty on mutual legal assistance applies between State Parties, paragraphs 8 to 31 establish extensive obligations of mutual legal assistance toward any State Party, with generally insufficient human rights safeguards and grounds for refusal. For example, paragraph 22 sets a high bar of “substantial grounds for believing” before the requested State may refuse assistance.
  • When State Parties cannot transfer personal data in compliance with their applicable laws, such as the EU data protection framework, the conflicting obligation in Article 40 to afford the requesting State “the widest measure of mutual legal assistance” may unduly incentivize the transfer of the personal data subject to appropriate conditions under Article 36(1)(b), e.g. through derogations for specific situations in Article 38 of the EU Law Enforcement Directive. Article 36(1)(c) of the Convention also encourages State Parties to establish bilateral and multilateral agreements to facilitate the transfer of personal data, which creates a further risk of undermining the level of data protection guaranteed by EU law.
  • When personal data is transferred in full compliance with the data protection framework of the requested State, Article 36(2) should be strengthened to include clear, precise, unambiguous and effective standards to protect personal data in the requesting State, and to avoid personal data being further processed and transferred to other States in ways that may violate the fundamental right to privacy and data protection.
Conclusion and Call to Action:

Throughout the negotiation process, we have repeatedly pointed out the risks the treaty in its current form poses to human rights and to global cybersecurity. Despite the latest modifications, the revised draft fails to address our concerns and continues to risk making individuals and institutions less safe and more vulnerable to cybercrime, thereby undermining its very purpose.

Failing to narrow the scope of the whole treaty to cyber-dependent crimes, to protect the work of security researchers, human rights defenders and other legitimate actors, to strengthen the human rights safeguards, to limit surveillance powers, and to spell out the data protection principles will give governments’ abusive practices a veneer of international legitimacy. It will also make digital communications more vulnerable to those cybercrimes that the Convention is meant to address. Ultimately, if the draft Convention cannot be fixed, it should be rejected. 

With the UN AHC’s concluding session about to resume, we call on the delegations of the Member States of the European Union and the European Commission’s delegation to redouble their efforts to address the highlighted gaps and ensure that the proposed Cybercrime Convention is narrowly focused in its material scope and not used to undermine human rights nor cybersecurity. Absent meaningful changes to address the existing shortcomings, we urge the delegations of EU Member States and the EU Commission to reject the draft Convention and not advance it to the UN General Assembly for adoption.

This statement is supported by the following organizations:

Access Now
Alternatif Bilisim
ARTICLE 19: Global Campaign for Free Expression
Centre for Democracy & Technology Europe
Committee to Protect Journalists
Digitalcourage
Digital Rights Ireland
Digitale Gesellschaft
Electronic Frontier Foundation (EFF)
epicenter.works
European Center for Not-for-Profit Law (ECNL) 
European Digital Rights (EDRi)
Global Partners Digital
International Freedom of Expression Exchange (IFEX)
International Press Institute 
IT-Pol Denmark
KICTANet
Media Policy Institute (Kyrgyzstan)
Privacy International
SHARE Foundation
Vrijschrift.org
World Association of News Publishers (WAN-IFRA)
Zavod Državljan D (Citizen D)





Katitza Rodriguez

Beyond Pride Month: Protecting Digital Identities For LGBTQ+ People

1 month 2 weeks ago

The internet provides people space to build communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And for LGBTQ+ individuals, digital spaces enable people who are not yet out to explore their gender identity and sexual orientation.

In the age of so much passive surveillance, it can feel daunting, if not impossible, to achieve any kind of privacy online. We can’t blame you for feeling this way, but there’s plenty you can do to keep your information private and secure online. What’s most important is that you think through the specific risks you face and take the right steps to protect against them. 

The first step is to create a security plan. Following that, consider some of the recommended advice below and see which steps fit best for your specific needs:  

  • Use multiple browsers for different use cases. Compartmentalization of sensitive data is key. Since many websites are finicky about the type of browser you’re using, it’s normal to have multiple browsers installed on one device. Designate one for more sensitive activities and configure the settings to have higher privacy.
  • Use a VPN to bypass local censorship, defeat local surveillance, and connect your devices securely to the network of an organization on the other side of the internet. This is extra helpful for accessing pro-LGBTQ+ content from locations that ban access to this material.
  • If your cell phone allows it, hide sensitive apps away from the home screen. Although these apps will still be available on your phone, this moves them into a special folder so that prying eyes are less likely to find them.
  • Separate your digital identities to mitigate the risk of doxxing, as the personal information exposed about you is often found in public places like “people search” sites and social media.
  • Create a security plan for incidents of harassment and threats of violence. Especially if you are a community organizer, activist, or prominent online advocate, you face an increased risk of targeted harassment. Developing a plan of action in these cases is best done well before the threats become credible. It doesn’t have to be perfect; the point is to refer to something you were able to think up clear-headed when not facing a crisis. 
  • Create a plan for backing up images and videos to avoid losing this content in places where governments slow down, disrupt, or shut down the internet, especially during LGBTQ+ events when network disruptions inhibit quick information sharing.
  • Use two-factor authentication where available to make your online accounts more secure by adding a requirement for additional proof (“factors”) alongside a strong password.
  • Obscure people’s faces when posting pictures of protests online (using tools such as Signal’s in-app camera blur feature) to protect their right to privacy and anonymity, particularly during LGBTQ+ events where this might mean staying alive.
  • Harden security settings in Zoom for large video calls and events, such as restricting who can join and creating a process to remove opportunistic or homophobic people disrupting the call. 
  • Explore protections on your social media accounts, such as switching to private mode, limiting comments, or using tools like blocking users and reporting posts. 
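The two-factor authentication advice above works because the second factor is a short-lived code derived from a secret only you and the service share. As a rough illustration (a sketch, not a substitute for a real authenticator app), here is the time-based one-time password (TOTP) algorithm from RFC 6238 that most authenticator apps implement; the secret shown is the RFC’s published test value, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of `step`-second intervals since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the digest's last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # → 94287082
```

A service and your authenticator app exchange the secret once (usually via a QR code); after that, a stolen password alone is not enough to log in, because the code changes every 30 seconds and never travels over the network in reusable form.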

For more information on these topics, visit the following:

Paige Collings

UN Cybercrime Draft Convention Dangerously Expands State Surveillance Powers Without Robust Privacy, Data Protection Safeguards

1 month 2 weeks ago

This is the third post in a series highlighting flaws in the proposed UN Cybercrime Convention. Check out Part I, our detailed analysis on the criminalization of security research activities, and Part II, an analysis of the human rights safeguards.

As we near the final negotiating session for the proposed UN Cybercrime Treaty, countries are running out of time to make much-needed improvements to the text. From July 29 to August 9, delegates in New York aim to finalize a convention that could drastically reshape global surveillance laws. The current draft favors extensive surveillance, establishes weak privacy safeguards, and defers most protections against surveillance to national laws—creating a dangerous avenue that could be exploited by countries with varying levels of human rights protections.

The risk is clear: without robust privacy and human rights safeguards in the actual treaty text, we will see increased government overreach, unchecked surveillance, and unauthorized access to sensitive data—leaving individuals vulnerable to violations, abuses, and transnational repression. And not just in one country. Weaker safeguards in some nations can lead to widespread abuses and privacy erosion because countries are obligated to share the “fruits” of surveillance with each other. This will worsen disparities in human rights protections and create a race to the bottom, turning global cooperation into a tool for authoritarian regimes to investigate acts that aren’t even crimes in the first place.

Countries that believe in the rule of law must stand up and either defeat the convention or dramatically limit its scope, adhering to non-negotiable red lines as outlined by over 100 NGOs. In an uncommon alliance, civil society and industry agreed earlier this year in a joint letter urging governments to withhold support for the treaty in its current form due to its critical flaws.

Background and Current Status of the UN Cybercrime Convention Negotiations

The UN Ad Hoc Committee overseeing the talks and preparation of a final text is expected to consider a revised but still-flawed text in its entirety, along with the interpretative notes, during the first week of the session, with a focus on all provisions not yet agreed ad referendum.[1] However, in keeping with the principle in multilateral negotiations that “nothing is agreed until everything is agreed,” any provisions of the draft that have already been agreed could potentially be reopened. 

The current text reveals significant disagreements among countries on crucial issues like the convention's scope and human rights protection. Of course the text could also get worse. Just when we thought Member States had removed many concerning crimes, they could reappear. The Ad Hoc Committee Chair’s General Assembly resolution includes two additional sessions to negotiate not more protections, but the inclusion of more crimes. The resolution calls for “a draft protocol supplementary to the Convention, addressing, inter alia, additional criminal offenses.” Nevertheless, some countries still expect the latest draft to be adopted.

In this third post, we highlight the dangers of the proposed UN Cybercrime Convention's broad definition of "electronic data" and its inadequate privacy and data protection safeguards. Together, these create the conditions for severe human rights abuses, transnational repression, and inconsistent human rights protections across countries.

A Closer Look at the Definition of Electronic Data

The proposed UN Cybercrime Convention significantly expands state surveillance powers under the guise of combating cybercrime. Chapter IV grants governments extensive authority to monitor and access digital systems and data, dividing communications data into subscriber data, traffic data, and content data. But it also adds a catch-all category called "electronic data." Article 2(b) defines electronic data as "any representation of facts, information, or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function."

"Electronic data" is subject to three surveillance powers: preservation orders (Article 25), production orders (Article 27), and search and seizure (Article 28). Unlike the traditional categories of traffic data, subscriber data, and content data, "electronic data" refers to any data stored, processed, or transmitted electronically, regardless of whether it has been communicated to anyone. This includes documents saved on personal computers or notes stored on digital devices. In essence, private, unshared thoughts and information are no longer safe. Authorities can compel the preservation, production, or seizure of any electronic data, potentially turning personal devices into spy vectors regardless of whether the information has ever been communicated.

This is delicate territory, and it deserves careful thought and real protection—many of us now use our devices to keep our most intimate thoughts and ideas, and many of us use health and fitness apps in ways we do not intend to share. This includes data stored on devices, such as face scans and smart home device data, if it remains within the device and is not transmitted. Another example could be photos that someone takes on a device but doesn't share with anyone. This category threatens to turn our most private thoughts and actions over to spying governments, both our own and others. 

And the problem is worse when we consider emerging technologies. The sensors in smart devices, AI systems, and augmented reality glasses can collect a wide array of highly sensitive data. These sensors can record involuntary physiological reactions to stimuli, including eye movements, facial expressions, and heart rate variations. For example, eye-tracking technology can reveal what captures a user's attention and for how long, which can be used to infer interests, intentions, and even emotional states. Similarly, voice analysis can provide insights into a person's mood based on tone and pitch, while body-worn sensors might detect subtle physical responses that users themselves are unaware of, such as changes in heart rate or perspiration levels.

These types of data are not typically communicated through traditional communication channels like emails or phone calls (which would be categorized as content or traffic data). Instead, they are collected, stored, and processed locally on the device or within the system, fitting the broad definition of "electronic data" as outlined in the draft convention.

Such data has likely been harder to obtain in the past because it may never have been communicated to, or possessed by, any communications intermediary. It is an example of how the broad term "electronic data" increases the kinds (and sensitivity) of information about us that law enforcement can target through production orders or search-and-seizure powers. These emerging technology uses form their own category, but they most resemble the "content" of communications, which usually receives the highest protection. "Electronic data" must receive the same protection as the "content" of communications and be subject to ironclad data protection safeguards, which the proposed treaty fails to provide, as we explain below.

The Specific Safeguard Problems

Like other powers in the draft convention, the broad powers related to "electronic data" don't come with specific limits to protect fair trial rights. 

Missing Safeguards

For example, many countries protect various kinds of information with a legal “privilege” against surveillance: attorney-client privilege, spousal privilege, priest-penitent privilege, doctor-patient privilege, and many kinds of protections for confidential business information and trade secrets. Many countries also give additional protections to journalists and their sources. These categories, and more, impose varying degrees of extra requirements before law enforcement may access such information using production orders or search-and-seizure powers, as well as various after-the-fact protections, such as preventing its use in prosecutions or civil actions. 

Similarly, the convention lacks clear safeguards to prevent authorities from compelling individuals to provide evidence against themselves. These omissions raise significant red flags about the potential for abuse and the erosion of fundamental rights when a treaty text involves so many countries with a high disparity of human rights protections.

The lack of specific protections for criminal defense is especially troubling. In many legal systems, defense teams have certain protections to ensure they can effectively represent their clients, including access to exculpatory evidence and the protection of defense strategies from surveillance. However, the draft convention does not explicitly protect these rights, which both misses the chance to require all countries to provide these minimal protections and potentially further undermines the fairness of criminal proceedings and the ability of suspects to mount an effective defense in countries that either don’t provide those protections or where they are not solid and clear.

Even the State “Safeguards” in Article 24 are Grossly Insufficient

Even where the convention’s text discusses “safeguards,” the convention doesn’t actually protect people. The “safeguard” section, Article 24, fails in several obvious ways: 

Dependence on Domestic Law: Article 24(1) makes safeguards contingent on domestic law, which can vary significantly between countries. This can result in inadequate protections in states where domestic laws do not meet high human rights standards. By deferring safeguards to national law, Article 24 weakens these protections, as national laws may not always provide the necessary safeguards. It also means that the treaty doesn’t raise the bar against invasive surveillance, but rather confirms even the lowest protections.

A safeguard that bends to domestic law isn't a safeguard at all if it leaves the door open for abuses and inconsistencies, undermining the protection it's supposed to offer.

Discretionary Safeguards: Article 24(2) uses vague terms like “as appropriate,” allowing states to interpret and apply safeguards selectively. This means that while the surveillance powers in the convention are mandatory, the safeguards are left to each state’s discretion. Countries decide what is “appropriate” for each surveillance power, leading to inconsistent protections and potential weakening of overall safeguards.

Lack of Mandatory Requirements: Essential protections such as prior judicial authorization, transparency, user notification, and the principles of legality, necessity, proportionality, and non-discrimination are not explicitly mandated. Without these mandatory requirements, there is a higher risk of misuse and abuse of surveillance powers.

No Specific Data Protection Principles: As we noted above, the proposed treaty does not include specific safeguards for highly sensitive data, such as biometric or privileged data. This oversight leaves such information vulnerable to misuse.

Inconsistent Application: The discretionary nature of the safeguards can lead to their inconsistent application, exposing vulnerable populations to potential rights violations. Countries might decide that certain safeguards are unnecessary for specific surveillance methods, which the treaty allows, increasing the risk of abuse.

Finally, Article 23(4) of Chapter IV authorizes the application of Article 24 safeguards to specific powers within the international cooperation chapter (Chapter V). However, significant powers in Chapter V, such as those related to law enforcement cooperation (Article 47) and the 24/7 network (Article 41), do not specifically cite the corresponding Chapter IV powers and so may not be covered by Article 24 safeguards.

Search and Seizure of Stored Electronic Data

The proposed UN Cybercrime Convention significantly expands government surveillance powers, particularly through Article 28, which deals with the search and seizure of electronic data. This provision grants authorities sweeping abilities to search and seize data stored on any computer system, including personal devices, without clear, mandatory privacy and data protection safeguards. This poses a serious threat to privacy and data protection.

Article 28(1) allows authorities to search and seize any “electronic data” in an information and communications technology (ICT) system or data storage medium. It lacks specific restrictions, leaving much to the discretion of national laws. This could lead to significant privacy violations as authorities might access all files and data on a suspect’s personal computer, mobile device, or cloud storage account—all without clear limits on what may be targeted or under what conditions.

Article 28(2) permits authorities to search additional systems if they believe the sought data is accessible from the initially searched system. While judicial authorization should be a requirement to assess the necessity and proportionality of such searches, Article 24 only mandates “appropriate conditions and safeguards” without explicit judicial authorization. In contrast, U.S. law under the Fourth Amendment requires search warrants to specify the place to be searched and the items to be seized—preventing unreasonable searches and seizures.

Article 28(3) empowers authorities to seize or secure electronic data, including making and retaining copies, maintaining its integrity, and rendering it inaccessible or removing it from the system. For publicly accessible data, this takedown process could infringe on free expression rights and should be explicitly subject to free expression standards to prevent abuse.

Article 28(4) requires countries to have laws that allow authorities to compel anyone who knows how a particular computer or device works to provide the information necessary to access it. This could include asking a tech expert or an engineer to help unlock a device or explain its security features. This is concerning because it might force people to assist law enforcement in ways that compromise security or reveal confidential information. As written, it could be interpreted to permit disproportionate orders that force an engineer to disclose an unfixed security vulnerability to the government, or to hand over encryption keys, such as signing keys, on the basis that these are “the necessary information to enable” some form of surveillance.

Privacy International and EFF strongly recommend that Article 28(4) be removed in its entirety. Instead, it has been agreed ad referendum. At a minimum, the drafters must include material in the explanatory memorandum that accompanies the draft Convention to clarify limits that avoid forcing technologists to reveal confidential information or do work on behalf of law enforcement against their will. It would also be appropriate to set clear legal standards for how law enforcement may be authorized to seize and search people’s private devices.

In general, production and search and seizure orders might be used to target tech companies' secrets, and require uncompensated labor by technologists and tech companies, not because they are evidence of crime but because they can be used to enhance law enforcement's technical capabilities.

Domestic Expedited Preservation Orders of Electronic Data

Article 25 on preservation orders, already agreed ad referendum, is especially problematic. It’s very broad, and will result in individuals’ data being preserved and available for use in prosecutions far more than needed. It also fails to include necessary safeguards to avoid abuse of power. By allowing law enforcement to demand preservation with no factual justification, it risks spreading familiar deficiencies in U.S. law worldwide.

Article 25 requires each country to create laws or other measures that let authorities quickly preserve specific electronic data, particularly when there are grounds to believe that such data is at risk of being lost or altered.

Article 25(2) ensures that when preservation orders are issued, the person or entity in possession of the data must keep it for up to 90 days, giving authorities enough time to obtain the data through legal channels, while allowing this period to be renewed. There is no specified limit on the number of times the order can be renewed, so it can potentially be reimposed indefinitely.

Preservation orders should be issued only when they are absolutely necessary, but Article 24 does not mention the principle of necessity, and the draft lacks requirements for individual notice, explicit factual grounds, and statistical transparency.

The article must limit the number of times preservation orders may be renewed to prevent indefinite data preservation requirements. Each preservation order renewal must require a demonstration of continued necessity and factual grounds justifying continued preservation.

Article 25(3) also compels states to adopt laws that enable gag orders to accompany preservation orders, prohibiting service providers or individuals from informing users that their data was subject to such an order. The duration of such a gag order is left up to domestic legislation.

As with all other gag orders, the confidentiality obligation should be subject to time limits and only be available to the extent that disclosure would demonstrably threaten an investigation or other vital interest. Further, individuals whose data was preserved should be notified when it is safe to do so without jeopardizing an investigation. Independent oversight bodies must oversee the application of preservation orders.

Indeed, academics such as prominent law professor and former U.S. Department of Justice lawyer Orin S. Kerr have criticized similar U.S. data preservation practices under 18 U.S.C. § 2703(f) for allowing law enforcement agencies to compel internet service providers to retain all contents of an individual's online account without their knowledge, any preliminary suspicion, or judicial oversight. This approach, intended as a temporary measure to secure data until further legal authorization is obtained, lacks the foundational legal scrutiny typically required for searches and seizures under the Fourth Amendment, such as probable cause or reasonable suspicion.

The lack of explicit mandatory safeguards raises similar concerns about Article 25 of the proposed UN convention. Kerr argues that these U.S. practices constitute a "seizure" under the Fourth Amendment, indicating that such actions should be justified by probable cause or, at the very least, reasonable suspicion—criteria conspicuously absent in the current draft of the UN convention.

By drawing on Kerr's analysis, we see a clear warning: without robust safeguards—including an explicit grounds requirement, prior judicial authorization, explicit notification to users, and transparency—preservation orders of electronic data under the draft UN Cybercrime Convention risk replicating the problematic practices of the U.S. on a global scale.

Production Orders of Electronic Data

Article 27(a)’s treatment of “electronic data” in production orders, in light of the draft convention’s broad definition of the term, is especially problematic.

This article, which has already been agreed ad referendum, allows production orders to be issued to custodians of electronic data, requiring them to turn over copies of that data. While demanding customer records from a company is a traditional governmental power, this power is dramatically increased in the draft convention.

As we explain above, the extremely broad definition of electronic data, which often includes sensitive information, raises new and significant privacy and data protection concerns, as it permits authorities to access potentially sensitive information without immediate oversight or prior judicial authorization. The convention should instead require prior judicial authorization before such information can be demanded from the companies that hold it. 

This ensures that an impartial authority assesses the necessity and proportionality of the data request before it is executed. Without mandatory data protection safeguards for the processing of personal data, law enforcement agencies might collect and use personal data without adequate restrictions, thereby risking the exposure and misuse of personal information.

The text of the convention fails to include these essential data protection safeguards. To protect human rights, data should be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Data should be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. 

Data collected should be adequate, relevant, and limited to what is necessary to the purposes for which they are processed. Authorities should request only the data that is essential for the investigation. Production orders should clearly state the purpose for which the data is being requested. Data should be kept in a format that permits identification of data subjects for no longer than is necessary for the purposes for which the data is processed. None of these principles are present in Article 27(a) and they must be. 

International Cooperation and Electronic Data

The draft UN Cybercrime Convention includes significant provisions for international cooperation, extending the reach of domestic surveillance powers across borders, by one state on behalf of another state. Such powers, if not properly safeguarded, pose substantial risks to privacy and data protection. 

  • Article 42 (1) (“International cooperation for the purpose of expedited preservation of stored electronic data”) allows one state to ask another to obtain preservation of “electronic data” under the domestic power outlined in Article 25. 
  • Article 44 (1) (“Mutual legal assistance in accessing stored electronic data”) allows one state to ask another “to search or similarly access, seize or similarly secure, and disclose electronic data,” presumably using powers similar to those under Article 28, although that article is not referenced in Article 44. This specific provision, which has not yet been agreed ad referendum, enables comprehensive international cooperation in accessing stored electronic data. For instance, if Country A needs to access emails stored in Country B for an ongoing investigation, it can request Country B to search and provide the necessary data.
Countries Must Protect Human Rights or Reject the Draft Treaty

The current draft of the UN Cybercrime Convention is fundamentally flawed. It dangerously expands surveillance powers without robust checks and balances, undermines human rights, and poses significant risks to marginalized communities. The broad and vague definitions of "electronic data," coupled with weak privacy and data protection safeguards, exacerbate these concerns.

Traditional domestic surveillance powers are particularly concerning because they underpin international surveillance cooperation. This means that one country can readily comply with the requests of another, which, if not adequately safeguarded, can lead to widespread government overreach and human rights abuses. 

Without stringent data protection principles and robust privacy safeguards, these powers can be misused, threatening human rights defenders, immigrants, refugees, and journalists. We urgently call on all countries committed to the rule of law, social justice, and human rights to unite against this dangerous draft. Whether large or small, developed or developing, every nation has a stake in ensuring that privacy and data protection are not sacrificed. 

Significant amendments must be made to ensure these surveillance powers are exercised responsibly and protect privacy and data protection rights. If these essential changes are not made, countries must reject the proposed convention to prevent it from becoming a tool for human rights violations or transnational repression.

[1] In the context of treaty negotiations, "ad referendum" means that an agreement has been reached by the negotiators, but it is subject to the final approval or ratification by their respective authorities or governments. It signifies that the negotiators have agreed on the text, but the agreement is not yet legally binding until it has been formally accepted by all parties involved.

Katitza Rodriguez

Courts Should Have Jurisdiction over Foreign Companies Collecting Data on Local Residents, EFF Tells Appeals Court

1 month 2 weeks ago

This post was written by EFF legal intern Danya Hajjaji. 

Corporations should not be able to collect data from a state’s residents while evading the jurisdiction of that state’s courts, EFF and the UC Berkeley Center for Consumer Law and Economic Justice explained in a friend-of-the-court brief to the Ninth Circuit Court of Appeals. 

The case, Briskin v. Shopify, stems from a California resident’s privacy claims against Shopify, Inc. and its subsidiaries, out-of-state companies that process payments for third-party ecommerce companies (collectively “Shopify”). The plaintiff alleged that Shopify secretly collected data on the plaintiff and other California consumers as they purchased apparel from an online California-based retailer. Shopify also allegedly tracked the users’ browsing activities across all ecommerce sites that used Shopify’s services. Shopify allegedly compiled that information into comprehensive user profiles, complete with financial “risk scores” that companies could use to block users’ future purchases.  

The Ninth Circuit initially dismissed the lawsuit for lack of personal jurisdiction and ruled that Shopify, an out-of-state defendant, did not have enough contacts with California to be fairly sued in California. 

Personal jurisdiction is designed to protect defendants' due process rights by ensuring that they cannot be haled into court in jurisdictions that they have little connection to. In the internet context, the Ninth Circuit has previously held that operating a website, plus evidence that the defendant did “something more” to target a jurisdiction, is sufficient for personal jurisdiction.  

The Ninth Circuit originally dismissed Briskin on the grounds that the plaintiff failed to show the defendant did “something more.” It held that violating all users’ privacy was not enough; Shopify would have needed to do something to target Californians in particular.  

The Ninth Circuit granted rehearing en banc, and requested additional briefing on the personal jurisdiction rule that should govern online conduct. 

EFF and the Center for Consumer Law and Economic Justice argued that courts in California can fairly hold out-of-state corporations accountable for privacy violations that involve collecting vast amounts of personal data directly from consumers inside California and using that data to build profiles based in part on their location. To obtain personal data from California consumers, corporations must usually form additional contacts with California as well—including signing contracts within the state and creating California-specific data policies. In our view, Shopify is subject to personal jurisdiction in California because Shopify’s allegedly extensive data collection operations targeted Californians. That it also allegedly collected information from users in other states should not prevent California plaintiffs from having their day in court in their home state.   

In helping the Ninth Circuit develop a sensible test for personal jurisdiction in data privacy cases, EFF hopes to empower plaintiffs to preserve their online privacy rights in their forum of choice without sacrificing existing jurisdictional protections for internet publishers.  

EFF has long worked to ensure that consumer data privacy laws balance rights to privacy and free expression. We hope the Ninth Circuit will adopt our guidelines in structuring a privacy-specific personal jurisdiction rule that is commonsense and constitutionally sound. 

Tori Noble

Victory! EFF Supporters Beat USPTO Proposal To Wreck Patent Reviews

1 month 2 weeks ago

The U.S. patent system is broken, particularly when it comes to software patents. At EFF, we’ve been fighting hard for changes that make the system more sensible. Last month, we got a big victory when we defeated a set of rules that would have mangled one of the U.S. Patent and Trademark Office (USPTO)’s most effective systems for kicking out bad patents. 

In 2012, recognizing the entrenched problem of a patent office that spewed out tens of thousands of ridiculous patents every year, Congress created a new system to review patents called “inter partes reviews,” or IPRs. While far from perfect, IPRs have resulted in cancellation of thousands of patent claims that never should have been issued in the first place. 

At EFF, we used the IPR process to crowd-fund a challenge to the Personal Audio “podcasting patent” that tried to extract patent royalty payments from U.S. podcasters. We won that proceeding and our victory was confirmed on appeal.

It’s no surprise that big patent owners and patent trolls have been trying to wreck the IPR system for years. They’ve tried, and failed, to get federal courts to dismantle IPRs. They’ve tried, and failed, to push legislation that would break the IPR system. And last year, they found a new way to attack IPRs—by convincing the USPTO to propose a set of rules that would have sharply limited the public’s right to challenge bad patents. 

That’s when EFF and our supporters knew we had to fight back. Nearly one thousand EFF supporters filed comments with the USPTO using our suggested language, and hundreds more of you wrote your own comments. 

Today, we say thank you to everyone who took the time to speak out. Your voice does matter. In fact, the USPTO withdrew all three of the terrible proposals that we focused on. 

Our Victory to Keep Public Access To Patent Challenges 

The original rules would have greatly expanded what are called “discretionary denials,” enabling judges at the USPTO to throw out an IPR petition without adequately considering the merits of the petition. While we would like to see even fewer discretionary denials, defeating the proposed limitations on patent challenges is a significant win.

First, the original rules would have stopped “certain for-profit entities” from using the IPR system altogether. While EFF is a non-profit, for-profit companies can and should be allowed to play a role in getting wrongly granted patents out of the system. Membership-based patent defense organizations like RPX or Unified Patents can allow small companies to band together and limit their costs while defending themselves against invalid patents. And non-profits like the Linux Foundation, who joined us in fighting against these wrongheaded proposed rules, can work together with professional patent defense groups to file more IPRs. 

EFF and our supporters wrote in opposition to this rule change—and it’s out. 

Second, the original rules would have exempted “micro and small entities” from patent reviews altogether. This exemption would have applied to many of the types of companies we call “patent trolls”—that is, companies whose business is simply demanding license fees for patents, rather than offering actual products or services. Those companies, specially designed to threaten litigation, would have easily qualified as “small entities” and avoided having their patents challenged. Patent trolls, which bully real small companies and software developers into paying unwarranted settlement fees, aren’t the kind of “small business” that should be getting special exemptions from patent review. 

EFF and our supporters opposed this exemption, and it’s out of the final rulemaking. 

Third, last year’s proposal would have allowed for IPR petitions to be kicked out if they had a “parallel proceeding”—in other words, a similar patent dispute—in district court. This was a wholly improper reason to not consider IPRs, especially since district court evidence rules are different from those in place for an IPR. 

EFF and our supporters opposed these new limitations, and they’re out. 

While the new rules aren’t perfect, they’re greatly improved. We would still prefer more IPRs rather than fewer, and don’t want to see IPRs that otherwise meet the rules get kicked out of the review process. But even there, the revised rules have big improvements. For instance, they allow for separate briefing of discretionary denials, so that people and companies seeking IPR review can keep their focus on the merits of their petition. 

Additional reading: 

Joe Mullin

Modern Cars Can Be Tracking Nightmares. Abuse Survivors Need Real Solutions.

1 month 2 weeks ago

The amount of data modern cars collect is a serious privacy concern for all of us. But in an abusive situation, tracking can be a nightmare.

As a New York Times article outlined, modern cars are often connected to apps that show a user a wide range of information about a vehicle, including real-time location data, footage from cameras showing the inside and outside of the car, and sometimes the ability to control the vehicle remotely from their mobile device. These features can be useful, but abusers often turn these conveniences into tools to harass and control their victims—or even to locate or spy on them once they've fled their abusers.

California is currently considering three bills intended to help domestic abuse survivors endangered by vehicle tracking. Unfortunately, despite the concerns of advocates who work directly on tech-enabled abuse, these proposals are moving in the wrong direction. These bills intended to protect survivors are instead being amended in ways that open them to additional risks. We call on the legislature to return to previous language that truly helps people disable location-tracking in their vehicles without giving abusers new tools.

We know abusers are happy to lie and exploit whatever they can to further their abuse, including laws and services meant to help survivors.

Each of the bills seeks to address tech-enabled abuse in different ways. The first, S.B. 1394 by CA State Sen. David Min (Irvine), earned EFF's support when it was introduced. This bill was drafted with considerable input from experts in tech-enabled abuse at The University of California, Irvine. We feel its language best serves the needs of survivors in a wide range of scenarios without creating new avenues of stalking and harassment for the abuser to exploit. As introduced, it would require car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected services within two business days. To make a request, a survivor must prove the vehicle is theirs to use, even if their name is not necessarily on the loan or title. They could do this through documentation such as a court order, police report, or marriage separation agreement. S.B. 1000 by CA State Sen. Angelique Ashby (Sacramento) would have applied a similar framework to allow survivors to make requests to cut remote access to vehicles and other smart devices.

In contrast, A.B. 3139, introduced by Asm. Dr. Akilah Weber (La Mesa), takes a different approach. Rather than have people submit requests first and cut access later, this bill would require car manufacturers to terminate access immediately, requiring only some follow-up documentation up to seven days after the request. Unfortunately, both S.B. 1394 and S.B. 1000 have now been amended to adopt this "act first, ask questions later" framework.

The changes to these bills are intended to make it easier for people in desperate situations to get away quickly. Yet, for most people, we believe the risks of A.B. 3139's approach outweigh the benefits. EFF's experience working with victims of tech-enabled abuse instead suggests that these changes are bad for survivors—something we've already said in official comments to the Federal Communications Commission.

Why This Doesn't Work for Survivors

EFF has two main concerns with the approach from A.B. 3139. First, the bill sets a low bar for verifying an abusive situation, including simply allowing a statement from the person filing the request. Second, the bill requires a way to turn tracking off immediately without any verification. Why are these problems?

Imagine you have recently left an abusive relationship. You own your car, but your former partner decides to seek revenge for your leaving and calls the car manufacturer to file a false report that removes your access to your car. In cases where both the survivor and abuser have access to the car's account—a common scenario—the abuser could even kick the survivor off a car app account, and then use the app to harass and stalk the survivor remotely. Under A.B. 3139's language, it would be easy for an abuser to make a false statement under penalty of perjury to "verify" that the survivor is the perpetrator of abuse. Depending on a car app’s capabilities, that false claim could mean that, for up to a week, a survivor may be unable to start or access their own vehicle. We know abusers are happy to lie and exploit whatever they can to further their abuse, including laws and services meant to help survivors. It will be trivial for an abuser—who is already committing a crime and unlikely to fear a perjury charge—to file a false request to cut someone off from their car.

It's true that other domestic abuse laws EFF has worked on allow for this kind of self-attestation. This includes the Safe Connections Act, which allows survivors to more easily separate their phone line from a family plan. However, this is the wrong approach for vehicles. Access to a phone plan is significantly different from access to a car, particularly when remote services allow you to control a vehicle. While inconvenient and expensive, it is much easier to replace a phone or a phone plan than a car if your abuser locks you out. The same solution doesn't fit both problems. You need proof to make the decision to cut access to something as crucial to someone's life as their vehicle.

Second, the language added to these bills requires it be possible for anyone in a car to immediately disconnect it from connected services. Specifically, A.B. 3139 says that the method to disable tracking must be "prominently located and easy to use and shall not require access to a remote, online application." That means it must essentially be at the push of a button. That raises serious potential for misuse. Any person in the car may intentionally or accidentally disable tracking, whether they're a kid pushing buttons for fun, a rideshare passenger, or a car thief. Even more troubling, an abuser could cut access to the app’s ability to track a car and kidnap a survivor or their children. If past is prologue, in many cases, abusers will twist this "protection" to their own ends.

The combination of immediate action and self-attestation is helpful for survivors in one particular scenario—a survivor who has no documentation of their abuse, who needs to get away immediately in a car owned by their abuser. But it opens up many new avenues of stalking, harassment, and other forms of abuse for survivors. EFF has loudly called for bills that empower abuse survivors to take control away from their abusers, particularly by being able to disable tracking—but this is not the right way to do it. We urge the legislature to pass bills with the processes originally outlined in S.B. 1394 and S.B. 1000 and provide survivors with real solutions to address unwanted tracking.

Hayley Tsukayama

Detroit Takes Important Step in Curbing the Harms of Face Recognition Technology

1 month 2 weeks ago

In a first-of-its-kind agreement, the Detroit Police Department recently agreed to adopt strict limits on its officers’ use of face recognition technology as part of a settlement in a lawsuit brought by a victim of this faulty technology.  

Robert Williams, a Black resident of a Detroit suburb, filed suit against the Detroit Police Department after officers arrested him at his home in front of his wife, daughters, and neighbors for a crime he did not commit. After a shoplifting incident at a watch store, police used a blurry still taken from surveillance footage and ran it through face recognition technology—which incorrectly identified Williams as the perpetrator. 

Under the terms of the agreement, the Detroit Police can no longer substitute face recognition technology (FRT) for reliable policework. Simply put: Face recognition matches can no longer be the only evidence police use to justify an arrest. 

FRT creates an “imprint” from an image of a face, then compares that imprint to other images—often a law enforcement database made up of mugshots, driver’s license images, or even images scraped from the internet. The technology itself is fraught with issues, including that it is highly inaccurate for certain demographics, particularly Black men and women. The Detroit Police Department makes face recognition queries using DataWorks Plus software to the Statewide Network of Agency Photos (SNAP), a database operated by the Michigan State Police. According to data obtained by EFF through a public records request, roughly 580 local, state, and federal agencies and their sub-divisions have desktop access to SNAP.  
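The matching step described above can be sketched in miniature. The embedding vectors, names, and threshold below are illustrative stand-ins, not details of DataWorks Plus, SNAP, or any real system; the point is only that a similarity search returns look-alikes that clear a tunable threshold, not a confirmed identity:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two face 'imprints' (embedding vectors); 1.0 means identical."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_database(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Return every enrolled identity whose embedding clears the threshold.

    A noisy probe (like a blurry surveillance still) yields a noisy
    embedding, so several *different* people can clear the threshold:
    the system returns look-alikes, ranked by similarity score.
    """
    matches = [
        (name, cosine_similarity(probe, emb))
        for name, emb in database.items()
        if cosine_similarity(probe, emb) >= threshold
    ]
    return sorted(matches, key=lambda m: -m[1])

# Synthetic data: two enrolled people whose embeddings are close to the
# probe (look-alikes) and one person who is clearly different.
rng = np.random.default_rng(0)
suspect = rng.normal(size=128)
database = {
    "person_a": suspect + rng.normal(scale=0.2, size=128),
    "person_b": suspect + rng.normal(scale=0.3, size=128),
    "person_c": rng.normal(size=128),
}

for name, score in search_database(suspect, database):
    print(f"{name}: {score:.2f}")
```

Because both person_a and person_b clear the threshold, a human decision about whom to pursue is unavoidable, which is why a match can only ever be an investigative lead, never proof of identity.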

Among other achievements, the settlement agreement’s new rules bar arrests based solely on face recognition results, or on the results of the ensuing photo lineup—a common police procedure in which a witness is asked to identify the perpetrator from a “lineup” of images—conducted immediately after FRT identifies a suspect. This dangerous shortcut has meant that, based on partial matches combined with other unreliable evidence, such as eyewitness identifications, police have ended up arresting people who clearly could not have committed the crime. Such was the case with Robert Williams, who had been out of the state on the day the crime occurred. Because face recognition finds people who look similar to the suspect, putting that person directly into a police lineup will likely result in the witness picking the person who looks most like the suspect they saw—all but ensuring the person falsely accused by technology will receive the bulk of the suspicion.  

Under Detroit’s new rules, if police use face recognition technology at all during any investigation, they must record detailed information about their use of the technology, such as photo quality and the number of photos of the same suspect not identified by FRT. If charges are ever filed as a result of the investigation, prosecutors and defense attorneys will have access to the information about any uses of FRT in the case.  

The Detroit Police Department’s new face recognition rules are among the strictest restrictions adopted anywhere in the country—short of the full bans on the technology passed by San Francisco, Boston, and at least 15 other municipalities. Detroit’s new regulations are an important step in the right direction, but only a full ban on government use of face recognition can fully protect against this technology’s many dangers. FRT jeopardizes every person’s right to protest government misconduct free from retribution and reprisals for exercising their right to free speech. Giving police the ability to fly a drone over a protest and identify every protester undermines every person’s right to freely associate with dissenting groups or criticize government officials without fear of retaliation from those in power. 

Moreover, FRT undermines racial justice and threatens civil rights. Study after study after study has found that these tools cannot reliably identify people of color.  According to Detroit’s own data, roughly 97 percent of queries in 2023 involved Black suspects; when asked during a public meeting in 2020, then-police Chief James Craig estimated the technology would misidentify people 96 percent of the time. 

Williams was one of the first victims of this technology—but he was by no means the last. In Detroit alone, police wrongfully arrested at least two other people based on erroneous face recognition matches: Porcha Woodruff, a pregnant Black woman, and Michael Oliver, a Black man who lost his job due to his arrest.  

Many other innocent people have been arrested elsewhere, and in some cases, have served jail time as a result. The consequences can be life-altering; one man was sexually assaulted while incarcerated due to an FRT misidentification. Police and the government have proven time and time again they cannot be trusted to use this technology responsibly. Although many departments already acknowledge that FRT results alone cannot justify an arrest, that is cold comfort to people like Williams, who are still being harmed despite the reassurances police give the public.  

It is time to take FRT out of law enforcement’s hands altogether. 

Tori Noble

EFF to FCC: SS7 is Vulnerable, and Telecoms Must Acknowledge That

1 month 2 weeks ago

It’s unlikely you’ve heard of Signaling System 7 (SS7), but every phone network in the world is connected to it, and if you have ever roamed networks internationally or sent an SMS message overseas you have used it. SS7 is a set of telecommunication protocols that cellular network operators use to exchange information and route phone calls, text messages, and other communications between each other on 2G and 3G networks (4G and 5G networks instead use the Diameter signaling system). When a person travels outside their home network's coverage area (roaming), and uses their phone on a 2G or 3G network, SS7 plays a crucial role in registering the phone to the network and routing their communications to the right destination. On May 28, 2024, EFF submitted comments to the Federal Communications Commission demanding an investigation into SS7 and Diameter security, and transparency about how the telecoms handle the security of these networks.

What Is SS7, and Why Does It Matter?

When you roam onto different 2G or 3G networks, or send an SMS message internationally the SS7 system works behind the scenes to seamlessly route your calls and SMS messages. SS7 identifies the country code, locates the specific cell tower that your phone is using, and facilitates the connection. This intricate process involves multiple networks and enables you to communicate across borders, making international roaming and text messages possible. But even if you don’t roam internationally, send SMS messages, or use legacy 2G/3G networks, you may still be vulnerable to SS7 attacks because most telecommunications providers are still connected to it to support international roaming, even if they have turned off their own 2G and 3G networks. SS7 was not built with any security protocols, such as authentication or encryption, and has been exploited by governments, cyber mercenaries, and criminals to intercept and read SMS messages. As a result, many network operators have placed firewalls in order to protect users. However, there are no mandates or security requirements placed on the operators, so there is no mechanism to ensure that the public is safe.

Many companies treat your ownership of your phone number as a primary security authentication mechanism, or as a secondary one through SMS two-factor authentication. An attacker could use SS7 attacks to intercept text messages and then gain access to your bank account, medical records, and other important accounts. Nefarious actors can also use SS7 attacks to track a target’s precise location anywhere in the world.
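To make that risk concrete, here is a minimal, hypothetical sketch of SMS-based two-factor authentication (not any real provider's API). The server's only check is knowledge of the texted code, so an eavesdropper who can read SMS in transit, for example via an SS7 interception attack, authenticates exactly like the legitimate user:

```python
import secrets

class SmsTwoFactor:
    """Toy model of SMS 2FA: the trust anchor is the phone number."""

    def __init__(self):
        self._pending = {}  # phone number -> expected one-time code

    def start_login(self, phone_number: str) -> str:
        code = f"{secrets.randbelow(1_000_000):06d}"
        self._pending[phone_number] = code
        # In reality this code is delivered by the carrier network; on
        # 2G/3G that routing happens over SS7, which has no built-in
        # encryption or sender authentication. Returning it here stands
        # in for "sent over SMS".
        return code

    def finish_login(self, phone_number: str, code: str) -> bool:
        # The server cannot distinguish the real user from anyone who
        # read the SMS in transit: knowing the code IS the proof.
        return self._pending.pop(phone_number, None) == code

server = SmsTwoFactor()
sms_in_transit = server.start_login("+15555550100")
assert server.finish_login("+15555550100", sms_in_transit)  # attacker or user, same result
```

This is why security guidance increasingly favors app-based or hardware-key second factors, whose secrets never transit the carrier network at all.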

These vulnerabilities make SS7 a public safety issue. EFF strongly believes that it is in the best interest of the public for telecommunications companies to secure their SS7 networks and publicly audit them, while also moving to more secure technologies as soon as possible.

Why SS7 Isn’t Secure

SS7 was standardized in the late 1970s and early 1980s, at a time when communication relied primarily on landline phones. During that era, the telecommunications industry was predominantly controlled by corporate monopolies. Because the large telecoms all trusted each other, there was no incentive to focus on the security of the network. SS7 was developed when modern encryption and authentication methods were not in widespread use. 

In the 1990s and 2000s, new protocols were introduced by the European Telecommunications Standards Institute (ETSI) and the telecom standards bodies to support the services mobile phones need, such as roaming, SMS, and data. However, security was still not a concern at the time. As a result, SS7 presents significant cybersecurity vulnerabilities that demand our attention. 

SS7 can be accessed through telecommunications companies and roaming hubs. To access SS7, companies (or nefarious actors) must have a “Global Title,” which is a phone number that uniquely identifies a piece of equipment on the SS7 network. Each phone company that runs its own network has multiple global titles. Some telecommunications companies lease their global titles, which is how malicious actors gain access to the SS7 network. 

Concerns about potential SS7 exploits are primarily discussed within the mobile security industry and are not given much attention in broader discussions about communication security. Currently, there is no way for end users to detect SS7 exploitation. The best way to safeguard against SS7 exploitation is for telecoms to use firewalls and other security measures. 

With the rapid expansion of the mobile industry, there is no transparency around any efforts to secure our communications. The fact that any government can potentially access data through SS7 without encountering significant security obstacles poses a significant risk to dissenting voices, particularly under authoritarian regimes.

Some people in the telecommunications industry argue that SS7 exploits are mainly a concern for 2G and 3G networks. It’s true that 4G and 5G don’t use SS7—they use the Diameter protocol—but Diameter has many of the same security concerns as SS7, such as location tracking. What’s more, as soon as you roam onto a 3G or 2G network, or if you are communicating with someone on an older network, your communications once again go over SS7. 

FCC Requests Comments on SS7 Security 

Recently, the FCC issued a request for comments on the security of SS7 and Diameter networks within the U.S. The FCC asked whether the security efforts of telecoms were working, and whether auditing or intervention was needed. The three large US telecoms (Verizon, T-Mobile, and AT&T) and their industry lobbying group (CTIA) all responded with comments stating that their SS7 and Diameter firewalls were working perfectly, and that there was no need to audit the phone companies’ security measures or force them to report specific success rates to the government. However, one dissenting comment came from Cybersecurity and Infrastructure Security Agency (CISA) employee Kevin Briggs. 

We found the comments by Briggs, CISA’s top expert on telecom network vulnerabilities, to be concerning and compelling. Briggs believes that there have been successful, unauthorized attempts to access network user location data from U.S. providers using SS7 and Diameter exploits. He provides two examples of reports involving specific persons that he had seen: the tracking of a person in the United States using Provide Subscriber Information (PSI) exploitation (March 2022); and the tracking of three subscribers in the United States using Send Routing Information (SRI) packets (April 2022).  

This is consistent with reporting by Gary Miller and Citizen Lab in 2023, where they state: “we also observed numerous requests sent from networks in Saudi Arabia to geolocate the phones of Saudi users as they were traveling in the United States. Millions of these requests targeting the international mobile subscriber identity (IMSI), a number that identifies a unique user on a mobile network, were sent over several months, and several times per hour on a daily basis to each individual user.”

Briggs added that he had seen information describing how in May 2022, several thousand suspicious SS7 messages were detected, which could have masked a range of attacks—and that he had additional information on the above exploits as well as others that go beyond location tracking, such as the monitoring of message content, the delivery of spyware to targeted devices, and text-message-based election interference.

As a senior CISA official focused on telecom cybersecurity, Briggs has access to information that the general public is not aware of. His comments should therefore be taken seriously, particularly in light of the concerns expressed by Senator Wyden in his letter to the President, which referenced a non-public, independent expert report commissioned by CISA and alleged that CISA was “actively hiding information about [SS7 threats] from the American people.” The FCC should investigate these claims, and keep Congress and the public informed about exploitable weaknesses in the telecommunication networks we all use.

These warnings should be taken seriously and their claims should be investigated. The telecoms should submit the results of their audits to the FCC and CISA so that the public can have some reassurance that their security measures are working as they say they are. If the telecoms’ security measures aren’t enough, as Briggs and Miller suggest, then the FCC must step in and secure our national telecommunications network. 

Cooper Quintin