AI and Copyright: Expanding Copyright Hurts Everyone—Here’s What to Do Instead
You shouldn't need a permission slip to read a webpage, whether you do it with your own eyes or use software to help. AI is a category of general-purpose tools with myriad beneficial uses. Requiring developers to license the materials needed to create this technology threatens the development of more innovative and inclusive AI models, as well as important uses of AI as a tool for expression and scientific research.
Threats to Socially Valuable Research and Innovation
Requiring researchers to license fair uses of AI training data could make socially valuable research based on machine learning (ML) and even text and data mining (TDM) prohibitively complicated and expensive, if not impossible. Researchers have relied on fair use to conduct TDM research for a decade, leading to important advancements in myriad fields. However, licensing the vast quantity of works that high-quality TDM research requires is frequently cost-prohibitive and practically infeasible.
Fair use protects ML and TDM research for good reason. Without fair use, copyright would hinder important scientific advancements that benefit all of us. Empirical studies back this up: research using TDM methodologies is more common in countries that protect TDM research from copyright control; in countries that don't, copyright restrictions stymie beneficial research. It's easy to see why: it would be impossible to identify and negotiate with millions of different copyright owners to analyze, say, text from the internet.
The stakes are high, because ML is critical to helping us interpret the world around us. It's being used by researchers to understand everything from space nebulae to the proteins in our bodies. When the task requires crunching a huge amount of data, such as the data generated by the world’s telescopes, ML helps rapidly sift through the information to identify features of potential interest to researchers. For example, scientists are using AlphaFold, a deep learning tool, to understand biological processes and develop drugs that target disease-causing malfunctions in those processes. The developers released an open-source version of AlphaFold, making it available to researchers around the world. Other developers have already iterated upon AlphaFold to build transformative new tools.
Threats to Competition
Requiring AI developers to get authorization from rightsholders before training models on copyrighted works would limit competition to companies that have their own trove of training data, or the means to strike a deal with such a company. This would result in all the usual harms of limited competition—higher costs, worse service, and heightened security risks—as well as reducing the variety of expression used to train such tools and the expression allowed to users seeking to express themselves with the aid of AI. As the Federal Trade Commission recently explained, if a handful of companies control AI training data, “they may be able to leverage their control to dampen or distort competition in generative AI markets” and “wield outsized influence over a significant swath of economic activity.”
Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, widely considered to be the first lawsuit over AI training rights ever filed. Ross Intelligence sought to disrupt the legal research duopoly of Westlaw and LexisNexis by offering a new AI-based system. The startup attempted to license the right to train its model on Westlaw’s summaries of public domain judicial opinions and its method for organizing cases. Westlaw refused to grant the license and sued its tiny rival for copyright infringement. Ultimately, the lawsuit forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.
Similarly, shortly after Getty Images—a billion-dollar stock images company that owns hundreds of millions of images—filed a copyright lawsuit asking the court to order the “destruction” of Stable Diffusion over purported copyright violations in the training process, Getty introduced its own AI image generator trained on its own library of images.
Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. To develop a “foundation model” that can be used to build generative AI systems like ChatGPT and Stable Diffusion, developers need to “train” the model on billions or even trillions of works, often copied from the open internet without permission from copyright holders. There’s no feasible way to identify all of those rightsholders—let alone execute deals with each of them. Even if these deals were possible, licensing that much content at the prices developers are currently paying would be prohibitively expensive for most would-be competitors.
We should not assume that the same companies who built this world can fix the problems they helped create; if we want AI models that don’t replicate existing social and political biases, we need to make it possible for new players to build them.
Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take.
Entertainment companies’ historical practices bear out this concern. For example, from the late 2000s to the mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.
Threats to Free Expression
Generative AI tools like text and image generators are powerful engines of expression. Creating content—particularly images and videos—is time intensive. It frequently requires tools and skills that many internet users lack. Generative AI significantly expedites content creation and reduces the need for artistic ability and expensive photographic or video technology. This facilitates the creation of art that simply would not have existed and allows people to express themselves in ways they couldn’t without AI.
Some art forms historically practiced within the African American community—such as hip hop and collage—have a rich tradition of remixing to create new artworks that can be more than the sum of their parts. As professor and digital artist Nettrice Gaskins has explained, generative AI is a valuable tool for creating these kinds of art. Limiting the works that may be used to train AI would limit its utility as an artistic tool, and compound the harm that copyright law has already inflicted on historically Black art forms.
Generative AI has the power to democratize speech and content creation, much like the internet has. Before the internet, a small number of large publishers controlled the channels of speech distribution, controlling which material reached audiences’ ears. The internet changed that by allowing anyone with a laptop and Wi-Fi connection to reach billions of people around the world. Generative AI magnifies those benefits by enabling ordinary internet users to tell stories and express opinions by allowing them to generate text in a matter of seconds and easily create graphics, images, animation, and videos that, just a few years ago, only the most sophisticated studios had the capability to produce. Legacy gatekeepers want to expand copyright so they can reverse this progress. Don’t let them: everyone deserves the right to use technology to express themselves, and AI is no exception.
Threats to Fair Use
In all of these situations, fair use—the ability to use copyrighted material without permission or payment in certain circumstances—often provides the best counter to restrictions imposed by rightsholders. But, as we explained in the first post in this series, fair use is under attack by the copyright creep. Publishers’ recent attempts to impose a new licensing regime for AI training rights—despite lacking any recognized legal right to control AI training—threaten to undermine the public’s fair use rights.
By undermining fair use, the AI copyright creep makes all these other dangers more acute. Fair use is often what researchers and educators rely on to make their academic assessments and to gather data. Fair use allows competitors to build on existing work to offer better alternatives. And fair use lets anyone comment on, or criticize, copyrighted material.
When gatekeepers make the argument against fair use and in favor of expansive copyright—in court, to lawmakers, and to the public—they are looking to cement their own power, and undermine ours.
A Better Way Forward
AI also threatens real harms that demand real solutions.
Many creators and white-collar professionals increasingly believe that generative AI threatens their jobs. Many people also worry that it enables serious forms of abuse, such as AI-generated nonconsensual intimate imagery, including of children. Privacy concerns abound, as does consternation over misinformation and disinformation. And it’s already harming the environment.
Expanding copyright will not mitigate these harms, and we shouldn’t forfeit free speech and innovation to chase snake oil “solutions” that won’t work.
We need solutions that address the roots of these problems, like inadequate protections for labor rights and personal privacy. Targeted, issue-specific policies are far more likely to succeed in resolving the problems society faces. Take competition, for example. Proponents of copyright expansion argue that treating AI development like the fair use that it is would only enrich a handful of tech behemoths. But imposing onerous new copyright licensing requirements to train models would lock in the market advantages enjoyed by Big Tech and Big Media—the only companies that own large content libraries or can afford to license enough material to build a deep learning model—profiting entrenched incumbents at the public’s expense. What neither Big Tech nor Big Media will say is that stronger antitrust rules and enforcement would be a much better solution.
What’s more, looking beyond copyright future-proofs the protections. Stronger environmental protections, comprehensive privacy laws, worker protections, and media literacy will create an ecosystem where we will have defenses against any new technology that might cause harm in those areas, not just generative AI.
Expanding copyright, on the other hand, threatens socially beneficial uses of AI—for example, to conduct scientific research and generate new creative expression—without meaningfully addressing the harms.
This post is part of our AI and Copyright series. For more information about the state of play in this evolving area, see our first post.
Copyright and AI: the Cases and the Consequences
The launch of ChatGPT and other deep learning tools quickly led to a flurry of lawsuits against model developers. Legal theories vary, but most are rooted in copyright: plaintiffs argue that use of their works to train the models was infringement; developers counter that their training is fair use. Meanwhile, developers are making as many licensing deals as possible to stave off future litigation, and it’s a sound bet that the existing litigation is an elaborate scramble for leverage in settlement negotiations.
These cases can end one of three ways: rightsholders win, everybody settles, or developers win. As we’ve noted before, we think the developers have the better argument. But that’s not the only reason they should win these cases: while creators have a legitimate gripe, expanding copyright won’t protect jobs from automation. A win for rightsholders or even a settlement could also lead to significant harm, especially if it undermines fair use protections for research uses or artistic protections for creators. In this post and a follow-up, we’ll explain why.
State of Play
First, we need some context, so here’s the state of play:
DMCA Claims
Multiple courts have dismissed claims under Section 1202(b) of the Digital Millennium Copyright Act, stemming from allegations that developers removed or altered attribution information during the training process. In Raw Story Media v. OpenAI, Inc., the Southern District of New York dismissed these claims because the plaintiff had not “plausibly alleged” that training ChatGPT on their works had actually harmed them, and there was no “substantial risk” that ChatGPT would output their news articles. Because ChatGPT was trained on a “massive amount of information from unnumerable sources on almost any given subject…the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.” Courts granted motions to dismiss similar DMCA claims in Andersen v. Stability AI, Ltd., The Intercept Media, Inc. v. OpenAI, Inc., Kadrey v. Meta Platforms, Inc., and Tremblay v. OpenAI.
Another such case, Doe v. GitHub, Inc., will soon be argued in the Ninth Circuit.
Copyright Infringement Claims
Rightsholders also assert ordinary copyright infringement, and the initial holdings are a mixed bag. In Kadrey v. Meta Platforms, Inc., for example, the court dismissed “nonsensical” claims that Meta’s LLaMA models are themselves infringing derivative works. In Andersen v. Stability AI Ltd., however, the court held that copyright claims based on the assumption that the plaintiff’s works were included in a training data set could go forward, where prompts using the plaintiffs’ names generated images that were “similar to plaintiffs’ artistic works.” The court also held that the plaintiffs plausibly alleged that the model was designed to “promote infringement” for similar reasons.
It's early in the case—the court was merely deciding if the plaintiffs had alleged enough to justify further proceedings—but it’s a dangerous precedent. Crucially, copyright protection extends only to the actual expression of the author—the underlying facts and ideas in a creative work are not themselves protected. That means that, while a model cannot output an identical or near-identical copy of a training image without running afoul of copyright, it is free to generate stylistically “similar” images. Training alone is insufficient to give rise to a claim of infringement, and the court impermissibly conflated permissible “similar” outputs with the copying of protectable expression.
Fair Use
In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies. In one unusual case, however, the judge has flip-flopped, first finding that the defendant’s use was fair and then changing his mind. This case, Thomson Reuters Enterprise Centre GMBH v. Ross Intelligence, Inc., concerns legal research technology. Thomson Reuters provides search tools to locate relevant legal opinions and prepares annotations describing the opinions’ holdings. Ross Intelligence hired lawyers to look at those annotations and rewrite them in their own words. Their output was used to train Ross’s search tool, ultimately providing users with relevant legal opinions based on their queries. Originally, the court got it right, holding that if the AI developer used copyrighted works only “as a step in the process of trying to develop a ‘wholly new,’ albeit competing, product,” that’s “transformative intermediate copying,” i.e. fair use.
After reconsidering, however, the judge changed his mind in several respects, essentially disagreeing with prior case law regarding search engines. We think it’s unlikely that an appeals court would uphold this divergence from precedent. But if it did, it would present legal problems for AI developers—and anyone creating search tools.
Copyright law favors the creation of new technology to learn and locate information, even when developing the tool required copying books and web pages in order to index them. Here, the search tool is providing links to legal opinions, not presenting users with any Thomson Reuters original material. The tool is concerned with noncopyrightable legal holdings and principles, not with supplanting any creative expression embodied in the annotations prepared by Thomson Reuters.
Thomson Reuters has often pushed the limits of copyright in an attempt to profit off of the public’s need to access and refer to the law, for instance by claiming a proprietary interest in its page numbering of legal opinions. Unfortunately, the judge in this case enabled them to do so in a new way. We hope the appeals court reverses the decision.
The Side Deals
While all of this is going on, developers that can afford it—OpenAI, Google, and other tech behemoths—have inked multimillion-dollar licensing deals with Reddit, the Wall Street Journal, and myriad other corporate copyright owners. There’s suddenly a $2.5 billion licensing market for training data—even though the use of that data is almost certainly fair use.
What’s Missing
This litigation is getting plenty of attention, and it should, because the stakes are high. Unfortunately, the real stakes are getting lost. These cases are not just about who will get the most financial benefits from generative AI. The outcomes will decide whether a small group of corporations that can afford big licensing fees will determine the future of AI for all of us. More on that tomorrow.
This post is part of our AI and Copyright series. Check out our other post in this series.
EFF and Repro Uncensored Launch #StopCensoringAbortion Campaign
SAN FRANCISCO—The Electronic Frontier Foundation (EFF) and the Repro Uncensored coalition on Wednesday launched the #StopCensoringAbortion campaign to ensure that people who need reproductive health and abortion information can find and share it.
Censorship of this information by social media companies appears to be increasing, so the campaign will collect information to track such incidents.
“This censorship is alarming, and we’re seeing it take place across popular social media platforms like Facebook, Instagram, and TikTok, where abortion-related content is often flagged or removed under vague ‘community guideline’ violations, despite the content being legal and factual,” said EFF Legislative Activist Rindala Alajaji. “This lack of transparency leaves organizations, influencers, and individuals in the dark, fueling a wider culture of online censorship that jeopardizes public access to vital healthcare information.”
Initially, the campaign is collecting stories from people and organizations who have faced censorship on these platforms. This will help the public and the companies understand how often this is happening, who is affected, and with what consequences. EFF will use that information to demand that censorship stop and that the companies create greater transparency in their practices, which are often obscure and difficult to track. Tech companies must not silence critical conversations about reproductive rights.
"We are not simply raising awareness—we are taking action to hold tech companies accountable for their role in censoring free speech around reproductive health. The stories we collect will be instrumental in presenting to the platforms the breadth of this problem, drawing a picture of its impact, and demanding more transparent policies,” Alajaji said. “If you or someone you know has had abortion-related content taken down or shadowbanned by a social media platform, your voice is crucial in this fight. By sharing your experience, you’ll be contributing to a larger movement to end censorship and demand that social media platforms stop restricting access to critical reproductive health information.”
In addition to a portal for reporting incidents of online abortion censorship, the campaign’s landing page provides links to reporting and research on this censorship. Additionally, the page includes digital privacy and security guides for abortion activists, medical personnel, and patients.
With reproductive rights under fire across the U.S. and around the world, access to accurate abortion information has never been more critical. Reproductive health and rights organizations have turned to online platforms to share essential, sometimes life-saving guidance and resources. Whether they provide the latest updates on abortion laws, where to find clinics, or education about abortion medication, online spaces have become a lifeline, particularly for those in regions where reproductive freedoms are under siege.
But a troubling trend is making it harder for people to access vital abortion information: Social media platforms are censoring or removing abortion-related content, often without a clear justification or policy basis. A recent example surfaced last month when Instagram posts by Aid Access, an online abortion services provider, were either blurred out or prevented from loading entirely. This sparked concerns in the press about how recent content moderation policy changes by Meta, the parent company of Instagram and Facebook, would affect availability of reproductive health information.
For the campaign landing page: https://www.eff.org/pages/stop-censoring-abortion
Contact: Rindala Alajaji, Legislative Activist, rin@eff.org
Stop Censoring Abortion: The Fight for Reproductive Rights in the Digital Age
With reproductive rights under fire across the U.S. and globally, access to accurate abortion information has never been more critical—especially online.
That’s why reproductive health and rights organizations have turned to online platforms to share essential, sometimes life-saving, guidance and resources. Whether it's how to access information about abortion medication, where to find clinics, or the latest updates on abortion laws, these online spaces have become a lifeline, particularly for those in regions where reproductive freedoms are under siege. But there's a troubling trend making it harder for people to access vital abortion information: social media platforms are increasingly censoring or removing abortion-related content—often without clear justification or policy basis.
A recent example surfaced last month when a number of Instagram posts by Aid Access, an online abortion services provider, were either blurred out or unable to load entirely. This sparked concerns in the press about how recent content moderation policy changes by Meta, the parent company of Instagram and Facebook, would affect the availability of reproductive health information. The result? Crucial healthcare information gets erased, free expression is stifled, and people are left in the dark about their rights and healthcare options.
This censorship is alarming, and we’re seeing it take place across popular social media platforms like Facebook, Instagram, and TikTok, where abortion-related content is often flagged or removed under vague "community guideline" violations, despite the content being perfectly legal and factual. This lack of transparency leaves organizations, influencers, and individuals in the dark, fueling a wider culture of online censorship that jeopardizes public access to vital healthcare information.
#StopCensoringAbortion: An EFF and Repro Uncensored Collaboration
In response to this growing issue, EFF has partnered with the Repro Uncensored coalition to call attention to instances of reproductive health and abortion content being removed or suppressed by social media platforms.
We are collecting stories from individuals and organizations who have faced censorship on these platforms to expose the true scale of the issue. Our goal is to demand greater transparency in tech companies' moderation practices and ensure that their actions do not silence critical conversations about reproductive rights.
We are not simply raising awareness—we are taking action to hold tech companies accountable for their role in censoring free speech around reproductive health.
If you or someone you know has had abortion-related content taken down or shadowbanned by a social media platform, your voice is crucial in this fight. By sharing your experience, you’ll be contributing to a larger movement to end censorship and demand that social media platforms stop restricting access to critical reproductive health information. These stories will be instrumental in presenting to the platforms the breadth of this problem, drawing a picture of its impact, and demanding more transparent policies.
If you’re able to spend five minutes reporting your experience, EFF and the rest of the Repro Uncensored coalition will do our best to help: https://www.reprouncensored.org/report-incident
Even If You Haven’t Been Censored, You Can Still Help!
Not everyone has experienced censorship, but that doesn’t mean you can’t contribute to the cause. You can still help by spreading the word.
Share the #StopCensoringAbortion campaign on your social media platforms and visit our landing page for more resources and actions.
Follow Repro Uncensored and EFF on Instagram, and sign up for email updates about this campaign. The more people who are involved, the stronger our collective voice will be.
Together, we can amplify the message that information about reproductive health and rights should never be silenced—whether in the real world or online.
Saving the Internet in Europe: Defending Privacy and Fighting Surveillance
This is the third installment in a four-part blog series documenting EFF's work in Europe. You can read additional posts here:
- Saving the Internet in Europe: How EFF Works in Europe
- Saving the Internet in Europe: Defending Free Expression
- Saving the Internet in Europe: Fostering Choice, Competition and the Right to Innovate
EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.
In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and discuss how what happens in Europe can affect digital rights across the globe.
Implementing a Privacy First Approach to Fighting Online Harms
Infringements on privacy are commonplace across the world, and Europe is no exception. Governments and regulators across the region are increasingly focused on a range of risks associated with the design and use of online platforms, such as addictive design, the effects of social media consumption on children’s and teenagers’ mental health, and dark patterns limiting consumer choices. Many of these issues share a common root: the excessive collection and processing of our most private and sensitive information by corporations for their own financial gain.
One necessary approach to solving this pervasive problem is to reduce the amount of data that these entities can collect, analyze, and sell. The European General Data Protection Regulation (GDPR) is central to protecting users’ data protection rights in Europe, but the impact of the GDPR ultimately depends on how well it is enforced. Strengthening the enforcement of the GDPR in areas where data can be used to target, discriminate, and undermine fundamental rights is therefore a cornerstone in our work.
Beyond the GDPR, we also bring our privacy first approach to fighting online harms to discussions on online safety and digital fairness. The Digital Services Act (DSA) takes some important steps to limit the use of some data categories to target users with ads, and bans targeted ads for minors completely. This is the right approach, which we will build on as we contribute to the debate around the upcoming Digital Fairness Act.
Age Verification Tools Are No Silver Bullet
As in many other jurisdictions around the world, age verification has become a hotly debated topic in the EU, with governments across Europe seeking to introduce age verification mandates. In the United Kingdom, legislation like the Online Safety Act (OSA) was introduced to make the UK “the safest place” in the world to be online. The OSA requires platforms to prevent individuals from encountering certain illegal content, which will likely mandate the use of intrusive scanning systems. Even worse, it empowers the British government, in certain situations, to demand that online platforms use government-approved software to scan for illegal content. And the UK is not alone. Last year, France banned social media access for children under 15 without parental consent, and Norway has pledged to pursue a similar ban.
Children’s safety is important, but there is little evidence that online age verification tools can help achieve this goal. EFF has long fought against mandatory age verification laws, from the U.S. to Australia, and we’ll continue to stand up against these types of laws in Europe. Not just for the sake of free expression, but to protect the free flow of information that is essential to a free society.
Challenging Creeping Surveillance Powers
For years, we’ve observed a worrying tendency of technologies designed to protect people's privacy and data being re-framed as security concerns. And recent developments in Europe, like Germany’s rush to introduce biometric surveillance, signal a dangerous move towards expanding surveillance powers, justified by narratives framing complex digital policy issues as primarily security concerns. These approaches invite tradeoffs that risk undermining the privacy and free expression of individuals in the EU and beyond.
Even though their access to data has never been broader, law enforcement authorities across Europe continue to peddle the tale of the world “going dark.” With EDRi, we criticized the EU high-level group on “going dark” and sent a joint letter warning against granting law enforcement unfettered capacities that may lead to mass surveillance and violate fundamental rights. We have also been involved in Pegasus spyware investigations, with EFF’s Executive Director Cindy Cohn participating in an expert hearing on the matter. The issue of spyware is pervasive and intersects with many components of EU law, such as the anti-spyware provisions contained within the EU Media Freedom Act. Intrusive surveillance has a global dimension, and our work has combined advocacy at the UN with the EU, for example, by urging the EU Parliament to reject the UN Cybercrime Treaty.
Rather than increasing surveillance, countries across Europe must also make use of their prerogatives to ban biometric surveillance, ensuring that the use of this technology is not permitted in sensitive contexts such as Europe’s borders. Face recognition, for example, presents an inherent threat to individual privacy, free expression, information security, and social justice. In the UK, we’ve been working with national groups to ban government use of face recognition technology, which is currently administered by local police forces. Given the proliferation of state surveillance across Europe, government use of this technology must be banned.
Protecting the Right to Secure and Private Communications
EFF works closely on issues like encryption to defend the right to private communications in Europe. For years, EFF fought hard against an EU proposal that, if it became law, would have pressured online services to abandon end-to-end encryption. We joined together with EU allies and urged people to sign the “Don’t Scan Me” petition. We lobbied EU lawmakers and urged them to protect their constituents’ human right to have a private conversation—backed up by strong encryption. Our message broke through, and a key EU committee adopted a position that bars the mass scanning of messages and protects end-to-end encryption. It also bars mandatory age verification, whereby users would have to show ID to get online. As Member States are still debating their position on the proposal, this fight is not over yet. But we are encouraged by the recent European Court of Human Rights ruling which confirmed that undermining encryption violates fundamental rights to privacy. EFF will continue to advocate for this to governments, and to the corporations providing our messaging services.
As we’ve said many times, both in Europe and the U.S., there is no middle ground on content scanning and no “safe backdoor” if the internet is to remain free and private. Either all content is scanned and all actors—including authoritarian governments and rogue criminals—have access, or no one does. EFF will continue to advocate for the right to a private conversation, and to hold the EU accountable to the international and European human rights protections to which it is bound.
Looking Forward
EU legislation and international treaties should contain concrete human rights safeguards, robust data privacy standards, and sharp limits on intrusive surveillance powers, including in the context of global cooperation.
Much work remains to be done. And we are ready for it. Late last year, we put forward comprehensive policy recommendations to European lawmakers and we will continue fighting for an internet where everyone can make their voice heard. In the next—and final—post in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice and respect fundamental rights.
Crimson Memo: Analyzing the Privacy Impact of Xiaohongshu AKA Red Note
Early in January 2025, it seemed like TikTok was on the verge of being banned by the U.S. government. In reaction to this imminent ban, several million people in the United States signed up for a different China-based social network known in the U.S. as RedNote, and in China as Xiaohongshu (小红书/小紅書, which translates to “Little Red Book”).
RedNote is an application and social network created in 2013 that currently has over 300 million users. Feature-wise, it is most comparable to Instagram and is primarily used for sharing pictures, videos, and shopping. The vast majority of its users live in China, were born after 1990, and are women. Even before the influx of new users in January, RedNote historically had many users outside of China, primarily people from the Chinese diaspora who have friends and relatives on the network. RedNote is largely funded by two major Chinese tech corporations: Tencent and Alibaba.
When millions of U.S.-based users started flocking to the application, the traditional rounds of pearl clutching and concern trolling began. Many people raised the alarm about U.S. users entrusting their data to a Chinese company and, it is implied, the Chinese Communist Party. The reaction from U.S. users was an understandable, if unfortunate, bit of privacy nihilism. People responded that they “didn’t care if someone in China was getting their data since US companies such as Meta and Google had already stolen their data anyway.” “What is the difference,” people argued, “between Meta having my data and someone in China? How does this affect me in any way?”
Last week, the Citizen Lab at the Munk School of Global Affairs, University of Toronto, released a report authored by Mona Wang, Jeffrey Knockel, and Irene Poetranto which highlights three serious security issues in the RedNote app. The most concerning finding from Citizen Lab is the revelation that RedNote retrieves uploaded user content over plaintext HTTP. This means that anyone else on your network, your internet service provider, or organizations like the NSA can see everything you look at and upload to RedNote. Moreover, someone could intercept that request and replace it with their own media, or even with an exploit to install malware on your device.
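To make the risk concrete, here is a minimal sketch of how trivially a network observer can watch, or tamper with, plaintext HTTP traffic. It is a hypothetical mitmproxy addon written for illustration, not a tool from our analysis; the image content-type filter stands in for whatever media endpoints the app actually uses.

```python
# sniff_plaintext.py -- illustrative mitmproxy addon, NOT from the actual analysis.
# Run with: mitmproxy -s sniff_plaintext.py
from mitmproxy import http


def response(flow: http.HTTPFlow) -> None:
    # Only plaintext HTTP flows are this trivially visible and alterable;
    # properly validated HTTPS would not be.
    if flow.request.scheme != "http":
        return
    ctype = flow.response.headers.get("content-type", "")
    print(f"[plaintext] {flow.request.pretty_url} ({ctype})")
    if ctype.startswith("image/"):
        # An interceptor could swap the media a user sees, or worse,
        # substitute an exploit payload, before it reaches the app:
        # flow.response.content = attacker_controlled_bytes
        pass
```

Anyone positioned between the device and the server, which includes your ISP and any state actor with network access, sits at exactly this vantage point.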
In light of this report, the EFF Threat Lab decided to confirm the Citizen Lab findings and do some additional privacy investigation of RedNote. We used static analysis techniques for our investigation, including manual static analysis of decompiled source code and automated scanners such as MobSF and Exodus Privacy. We only analyzed version 8.59.5 of RedNote for Android, downloaded from the website APK Pure.
EFF has independently confirmed the finding that RedNote retrieves posted content over plaintext HTTP. Due to this lack of even basic transport layer encryption, we don’t think this application is safe for anyone to use. Even if you don’t care about giving China your data, it is not safe to use any application that doesn’t use encryption by default.
Citizen Lab researchers also found that users’ file contents are readable by network attackers. We were able to confirm that RedNote encrypts several sensitive files with static keys that are present in the app and identical across all installations, meaning anyone who retrieves those keys from a decompiled version of the app can decrypt these sensitive files for any user of the application. The Citizen Lab report also found a vulnerability where an attacker could identify the contents of any file readable by the application. This was out of scope for us to test, but we find no reason to doubt this claim.
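To illustrate why static keys defeat the purpose of encryption, here is a short, hypothetical sketch: the key, IV, and AES-CBC mode are invented stand-ins for whatever scheme the app actually uses, chosen only to show the class of flaw.

```python
# static_key_demo.py -- illustrative only; the key, IV, and cipher mode are
# invented stand-ins, NOT the actual values used by RedNote.
# Requires: pip install cryptography
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# A "secret" shipped inside the app binary is the same for every install,
# so anyone who decompiles one copy of the APK has it for all users.
STATIC_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # hypothetical
STATIC_IV = bytes(16)  # hypothetical


def decrypt_sensitive_file(ciphertext: bytes) -> bytes:
    """Decrypt a file the way any attacker with the decompiled app could."""
    cipher = Cipher(algorithms.AES(STATIC_KEY), modes.CBC(STATIC_IV))
    d = cipher.decryptor()
    return d.update(ciphertext) + d.finalize()


if __name__ == "__main__":
    # Simulate a file "protected" with the hardcoded key, then recover it.
    enc = Cipher(algorithms.AES(STATIC_KEY), modes.CBC(STATIC_IV)).encryptor()
    blob = enc.update(b"sixteen byte msg") + enc.finalize()
    print(decrypt_sensitive_file(blob))
```

Because the key never varies, "encrypted" here is effectively just obfuscated: confidentiality holds only against someone who has never looked inside the app.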
The third major finding by Citizen Lab was that RedNote transmits device metadata in a way that can be eavesdropped on by network attackers, sometimes without encryption at all, and sometimes in a way vulnerable to a machine-in-the-middle attack. We can confirm that RedNote does not validate HTTPS certificates properly. Testing this vulnerability further was out of scope for EFF, but we find no reason to doubt this claim.
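RedNote is an Android app, but this class of bug is easiest to show in Python terms. The sketch below contrasts a properly validated HTTPS request with the validation-disabling shortcut that produces exactly the machine-in-the-middle exposure described above; the URL is a placeholder, not an actual RedNote endpoint.

```python
# cert_validation_demo.py -- illustrative contrast, not RedNote's actual code.
import requests

URL = "https://example.com/"  # placeholder endpoint

# Correct: requests verifies the server certificate chain and hostname by
# default, so an interceptor presenting a forged certificate is rejected.
resp = requests.get(URL, timeout=10)

# Anti-pattern (the class of bug the report describes): disabling verification
# means ANY certificate is accepted, so a machine-in-the-middle can read and
# rewrite the "encrypted" traffic at will.
resp_insecure = requests.get(URL, timeout=10, verify=False)
```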
Permissions and Trackers
EFF performed further analysis of the permissions and trackers requested by RedNote. Our findings indicate two other potential privacy issues with the application.
RedNote requests some very sensitive permissions, including location information, even when the app is not running in the foreground. This permission is not requested by other similar apps such as TikTok, Facebook, or Instagram.
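Readers who want to reproduce this kind of permission inventory can do so with a few lines of the open source androguard library. This is a sketch under our own assumptions: the APK file name is hypothetical, and the "sensitive" set is an illustrative subset of the permissions listed in Figure 1 below.

```python
# list_permissions.py -- sketch of reproducing a permission inventory with
# androguard (pip install androguard); the APK file name is hypothetical.
from androguard.core.apk import APK  # androguard 4.x import path

# An illustrative subset of permissions worth flagging for manual review.
SENSITIVE = {
    "android.permission.ACCESS_BACKGROUND_LOCATION",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.READ_CALENDAR",
    "android.permission.RECORD_AUDIO",
}

apk = APK("xiaohongshu-8.59.5.apk")  # hypothetical local file name
for perm in sorted(apk.get_permissions()):
    flag = "  <-- sensitive" if perm in SENSITIVE else ""
    print(f"{perm}{flag}")
```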
We also found, using an online scanner for tracking software called Exodus Privacy, that RedNote is not a platform that will protect its users from U.S.-based surveillance capitalism. In addition to sharing user data with the Chinese companies Tencent and ByteDance, it also shares user data with Facebook and Google.
Other Issues
RedNote contains functionality to update its own code after it’s downloaded from the Google Play store, using an open source library called APK Patch. This could be used to inject malicious code into the application after download, without that code being revealed by the automated scans that official stores like Google Play use to screen for malicious applications.
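A crude way to flag this kind of capability during static analysis is to search decompiled sources for Android's dynamic code-loading entry points. The heuristic below is our own illustration, not the method that identified APK Patch, and a match only means a closer manual look is warranted.

```python
# find_dynamic_loading.py -- crude heuristic sketch (our own illustration, not
# the tooling used in the analysis). Point it at a directory of decompiled
# sources, e.g. the output of jadx.
import pathlib
import re
import sys

# Android APIs and artifacts commonly involved in loading code at runtime.
INDICATORS = re.compile(
    r"DexClassLoader|InMemoryDexClassLoader|PathClassLoader|loadDex|\.dex\b"
)

root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else "decompiled")
for path in root.rglob("*.java"):
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for n, line in enumerate(text.splitlines(), 1):
        if INDICATORS.search(line):
            print(f"{path}:{n}: {line.strip()}")
```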
Recommendations
Due to the lack of encryption, we do not consider it safe for anyone to run this app. If you are going to use RedNote, we recommend doing so with the absolute minimum set of permissions necessary for the app to function (see our guides for iPhone and Android). At least part of this blame falls on Google: Android needs to stop allowing apps to make unencrypted requests at all.
RedNote should immediately take steps to encrypt all traffic from their application and remove the permission for background location information.
Users should also keep in mind that RedNote is not a platform which values free speech. It’s a heavily censored application where topics such as political speech, drugs and addiction, and sexuality are more tightly controlled than on similar social networks.
Since it shares data with Facebook and Google ad networks, RedNote users should also keep in mind that it’s not a platform that protects you from U.S.-based surveillance capitalism.
The willingness of users to so quickly move to RedNote also highlights the fact that people are hungry for platforms that aren't controlled by the same few American tech oligarchs. People will happily jump to another platform even if it presents new, unknown risks, or is controlled by foreign tech oligarchs such as Tencent and Alibaba.
However, federal bans of such applications are not the correct answer. When bans are targeted at specific platforms such as TikTok, DeepSeek, and RedNote rather than privacy-invasive practices such as sharing sensitive details with surveillance advertising platforms, users who cannot participate on the banned platform may still have their privacy violated when they flock to other platforms. The real solution to the potential privacy harms of apps like RedNote is to ensure (through technology, regulation, and law) that people’s sensitive information isn’t entered into the surveillance capitalist data stream in the first place.
We need a federal, comprehensive, consumer-focused privacy law. Our government is failing to address the fundamental harms of privacy-invading social media. Implementing xenophobic, free-speech infringing policy is having the unintended consequence of driving folks to platforms with even more aggressive censorship. This outcome was foreseeable. Rather than a knee-jerk reaction banning the latest perceived threat, these issues could have been avoided by addressing privacy harms at the source and enacting strong consumer-protection laws.
Figure 1. Permissions requested by RedNote

android.permission.ACCESS_BACKGROUND_LOCATION: This app can access location at any time, even while the app is not in use.
android.permission.ACCESS_COARSE_LOCATION: This app can get your approximate location from location services while the app is in use. Location services for your device must be turned on for the app to get location.
android.permission.ACCESS_FINE_LOCATION: This app can get your precise location from location services while the app is in use. Location services for your device must be turned on for the app to get location. This may increase battery usage.
android.permission.ACCESS_MEDIA_LOCATION: Allows the app to read locations from your media collection.
android.permission.ACCESS_NETWORK_STATE: Allows the app to view information about network connections such as which networks exist and are connected.
android.permission.ACCESS_WIFI_STATE: Allows the app to view information about Wi-Fi networking, such as whether Wi-Fi is enabled and name of connected Wi-Fi devices.
android.permission.AUTHENTICATE_ACCOUNTS: Allows the app to use the account authenticator capabilities of the AccountManager, including creating accounts and getting and setting their passwords.
android.permission.BLUETOOTH: Allows the app to view the configuration of the Bluetooth on the phone, and to make and accept connections with paired devices.
android.permission.BLUETOOTH_ADMIN: Allows the app to configure the local Bluetooth phone, and to discover and pair with remote devices.
android.permission.BLUETOOTH_CONNECT: Allows the app to connect to paired Bluetooth devices.
android.permission.CAMERA: This app can take pictures and record videos using the camera while the app is in use.
android.permission.CHANGE_NETWORK_STATE: Allows the app to change the state of network connectivity.
android.permission.CHANGE_WIFI_STATE: Allows the app to connect to and disconnect from Wi-Fi access points and to make changes to device configuration for Wi-Fi networks.
android.permission.EXPAND_STATUS_BAR: Allows the app to expand or collapse the status bar.
android.permission.FLASHLIGHT: Allows the app to control the flashlight.
android.permission.FOREGROUND_SERVICE: Allows the app to make use of foreground services.
android.permission.FOREGROUND_SERVICE_DATA_SYNC: Allows the app to make use of foreground services with the type dataSync.
android.permission.FOREGROUND_SERVICE_LOCATION: Allows the app to make use of foreground services with the type location.
android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK: Allows the app to make use of foreground services with the type mediaPlayback.
android.permission.FOREGROUND_SERVICE_MEDIA_PROJECTION: Allows the app to make use of foreground services with the type mediaProjection.
android.permission.FOREGROUND_SERVICE_MICROPHONE: Allows the app to make use of foreground services with the type microphone.
android.permission.GET_ACCOUNTS: Allows the app to get the list of accounts known by the phone. This may include any accounts created by applications you have installed.
android.permission.INTERNET: Allows the app to create network sockets and use custom network protocols. The browser and other applications provide means to send data to the internet, so this permission is not required to send data to the internet.
android.permission.MANAGE_ACCOUNTS: Allows the app to perform operations like adding and removing accounts, and deleting their password.
android.permission.MANAGE_MEDIA_PROJECTION: Allows an application to manage media projection sessions. These sessions can provide applications the ability to capture display and audio contents. Should never be needed by normal apps.
android.permission.MODIFY_AUDIO_SETTINGS: Allows the app to modify global audio settings such as volume and which speaker is used for output.
android.permission.POST_NOTIFICATIONS: Allows the app to show notifications.
android.permission.READ_CALENDAR: This app can read all calendar events stored on your phone and share or save your calendar data.
android.permission.READ_CONTACTS: Allows the app to read data about your contacts stored on your phone. Apps will also have access to the accounts on your phone that have created contacts. This may include accounts created by apps you have installed. This permission allows apps to save your contact data, and malicious apps may share contact data without your knowledge.
android.permission.READ_EXTERNAL_STORAGE: Allows the app to read the contents of your shared storage.
android.permission.READ_MEDIA_AUDIO: Allows the app to read audio files from your shared storage.
android.permission.READ_MEDIA_IMAGES: Allows the app to read image files from your shared storage.
android.permission.READ_MEDIA_VIDEO: Allows the app to read video files from your shared storage.
android.permission.READ_PHONE_STATE: Allows the app to access the phone features of the device. This permission allows the app to determine the phone number and device IDs, whether a call is active, and the remote number connected by a call.
android.permission.READ_SYNC_SETTINGS: Allows the app to read the sync settings for an account. For example, this can determine whether the People app is synced with an account.
android.permission.RECEIVE_BOOT_COMPLETED: Allows the app to have itself started as soon as the system has finished booting. This can make it take longer to start the phone and allow the app to slow down the overall phone by always running.
android.permission.RECEIVE_USER_PRESENT: Unknown permission from Android reference.
android.permission.RECORD_AUDIO: This app can record audio using the microphone while the app is in use.
android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS: Allows an app to ask for permission to ignore battery optimizations for that app.
android.permission.REQUEST_INSTALL_PACKAGES: Allows an application to request installation of packages.
android.permission.SCHEDULE_EXACT_ALARM: This app can schedule work to happen at a desired time in the future. This also means that the app can run when you're not actively using the device.
android.permission.SYSTEM_ALERT_WINDOW: This app can appear on top of other apps or other parts of the screen. This may interfere with normal app usage and change the way that other apps appear.
android.permission.USE_CREDENTIALS: Allows the app to request authentication tokens.
android.permission.VIBRATE: Allows the app to control the vibrator.
android.permission.WAKE_LOCK: Allows the app to prevent the phone from going to sleep.
android.permission.WRITE_CALENDAR: This app can add, remove, or change calendar events on your phone. This app can send messages that may appear to come from calendar owners, or change events without notifying their owners.
android.permission.WRITE_CLIPBOARD_SERVICE: Unknown permission from Android reference.
android.permission.WRITE_EXTERNAL_STORAGE: Allows the app to write the contents of your shared storage.
android.permission.WRITE_SETTINGS: Allows the app to modify the system's settings data. Malicious apps may corrupt your system's configuration.
android.permission.WRITE_SYNC_SETTINGS: Allows an app to modify the sync settings for an account. For example, this can be used to enable sync of the People app with an account.
cn.org.ifaa.permission.USE_IFAA_MANAGER: Unknown permission from Android reference.
com.android.launcher.permission.INSTALL_SHORTCUT: Allows an application to add Homescreen shortcuts without user intervention.
com.android.launcher.permission.READ_SETTINGS: Unknown permission from Android reference.
com.asus.msa.SupplementaryDID.ACCESS: Unknown permission from Android reference.
com.coloros.mcs.permission.RECIEVE_MCS_MESSAGE: Unknown permission from Android reference.
com.google.android.gms.permission.AD_ID: Unknown permission from Android reference.
com.hihonor.push.permission.READ_PUSH_NOTIFICATION_INFO: Unknown permission from Android reference.
com.hihonor.security.permission.ACCESS_THREAT_DETECTION: Unknown permission from Android reference.
com.huawei.android.launcher.permission.CHANGE_BADGE: Unknown permission from Android reference.
com.huawei.android.launcher.permission.READ_SETTINGS: Unknown permission from Android reference.
com.huawei.android.launcher.permission.WRITE_SETTINGS: Unknown permission from Android reference.
com.huawei.appmarket.service.commondata.permission.GET_COMMON_DATA: Unknown permission from Android reference.
com.huawei.meetime.CAAS_SHARE_SERVICE: Unknown permission from Android reference.
com.meizu.c2dm.permission.RECEIVE: Unknown permission from Android reference.
com.meizu.flyme.push.permission.RECEIVE: Unknown permission from Android reference.
com.miui.home.launcher.permission.INSTALL_WIDGET: Unknown permission from Android reference.
com.open.gallery.smart.Provider: Unknown permission from Android reference.
com.oplus.metis.factdata.permission.DATABASE: Unknown permission from Android reference.
com.oplus.permission.safe.AI_APP: Unknown permission from Android reference.
com.vivo.identifier.permission.OAID_STATE_DIALOG: Unknown permission from Android reference.
com.vivo.notification.permission.BADGE_ICON: Unknown permission from Android reference.
com.xiaomi.dist.permission.ACCESS_APP_HANDOFF: Unknown permission from Android reference.
com.xiaomi.dist.permission.ACCESS_APP_META: Unknown permission from Android reference.
com.xiaomi.security.permission.ACCESS_XSOF: Unknown permission from Android reference.
com.xingin.xhs.permission.C2D_MESSAGE: Unknown permission from Android reference.
com.xingin.xhs.permission.JOPERATE_MESSAGE: Unknown permission from Android reference.
com.xingin.xhs.permission.JPUSH_MESSAGE: Unknown permission from Android reference.
com.xingin.xhs.permission.MIPUSH_RECEIVE: Unknown permission from Android reference.
com.xingin.xhs.permission.PROCESS_PUSH_MSG: Unknown permission from Android reference.
com.xingin.xhs.permission.PUSH_PROVIDER: Unknown permission from Android reference.
com.xingin.xhs.push.permission.MESSAGE: Unknown permission from Android reference.
freemme.permission.msa: Unknown permission from Android reference.
freemme.permission.msa.SECURITY_ACCESS: Unknown permission from Android reference.
getui.permission.GetuiService.com.xingin.xhs: Unknown permission from Android reference.
ohos.permission.ACCESS_SEARCH_SERVICE: Unknown permission from Android reference.
oplus.permission.settings.LAUNCH_FOR_EXPORT: Unknown permission from Android reference.