Major Setback for Intermediary Liability in Brazil: How Did We Get Here?


This is the second post of a series about intermediary liability in Brazil. Our first post gives an overview of Brazil's current intermediary liability regime, the context of its approval in 2014, and the beginning of the Supreme Court's analysis of such regime in November 2024. Our third post provides an outlook on justices' votes up until June 23, underscoring risks, mitigation measures, and blind spots of their potential decision.

Update: Brazil’s Supreme Court concluded its analysis of the country’s intermediary liability regime on June 26, 2025. With three dissenting votes in favor of Article 19’s constitutionality, the majority of the court deemed it partially unconstitutional. Part of the perils we pointed out have unfortunately come true, although the ruling does include some (though not all) of the minimum safeguards we had recommended. The full content of the final thesis approved is available here. Some highlights:

- Importantly, the court preserved Article 19’s regime for crimes against honor (such as defamation).
- However, the notice-and-takedown regime set in Article 21 became the general rule for platforms’ liability for user content.
- Platforms are liable, regardless of notification, for paid content they distribute (including ads) and for posts shared through “artificial distribution networks” (i.e., networks of bots).
- The court established a duty of care regarding the distribution of specific serious unlawful content. This monitoring-duty approach is quite controversial, especially considering the varied nature and size of platforms. Fortunately, the ruling emphasizes that platforms can only be held liable for this specific content without a previous notice if they fail to tackle it in a systemic way (rather than for individual posts).
- Article 19’s liability regime remains the general rule for applications like email services, virtual conference platforms, and messaging apps (regarding interpersonal messages).

The entirety of the ruling, including the full content of the Justices’ votes, will be published in the coming weeks.

The Brazilian Supreme Court has formed a majority to overturn the country’s current online intermediary liability regime. With eight out of eleven justices having presented their opinions, the court has gathered enough votes to largely eliminate the requirement of a prior judicial order demanding content takedown before digital platforms can be held liable for user posts, which is currently the general rule.

The judgment relates to Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet,” Law n. 12.965/2014), under which internet applications can only be held liable for third-party content if they fail to comply with a judicial decision ordering its removal. Article 19 aligns with the Manila Principles and reflects the important understanding that holding platforms liable for user content without judicial analysis creates strong incentives for enforcement overreach and the over-censorship of protected speech.

Nonetheless, while Justice André Mendonça voted to preserve Article 19’s application, four other justices stated it should prevail only in specific cases, mainly for crimes against honor (such as defamation). The remaining three justices considered that Article 19 offers insufficient protection to constitutional guarantees, such as the integral protection of children and teenagers.  

The judgment will resume on June 25th, when the three remaining justices will complete the analysis by the plenary of the court. While the majority of the court appears set to declare Article 19 partially unconstitutional (or to interpret it “in accordance with” the Constitution), the details of each vote vary, indicating important agreements still to be reached and critical tweaks still to be made.

As we previously noted, the outcome of this ruling could seriously undermine free expression and privacy safeguards if it leads to general content monitoring obligations or broadly expands notice-and-takedown mandates. This trend could negatively shape developments globally in other courts and parliaments, or with respect to executive powers. Sadly, the votes so far have aggravated these concerns.

But before we get to them, let's look at some circumstances underlying the Supreme Court's analysis. 

2014 vs. 2025: The Brazilian Techlash After Marco Civil's Approval 

How did Article 19 end up (mostly) overturned a decade after Marco Civil’s much-celebrated approval in Brazil back in 2014?   

In addition to the broader techlash that followed the increasing concentration of power in the digital realm, developments in Brazil have fueled a harsher approach toward internet intermediaries. Marco Civil, and especially Article 19, became a scapegoat within regulatory debates that largely downplayed the free expression concerns that informed its approval. Rather than viewing the provision as a milestone to be complemented with new legislation, this context reinforced the view that Article 19 should be left behind.

 The tougher approach to internet intermediaries gained steam after former President Jair Bolsonaro’s election in 2018 and throughout the legislative debates around draft bill 2630, also known as the “Fake News bill.”  

Concerns around the spread of disinformation, online-fueled discrimination, and political violence, as well as threats to election integrity, form an important, though not exhaustive, part of this scenario. This includes the use of social media by the far right in the escalation of acts seeking to undermine the integrity of elections and ultimately overthrow the legitimately elected President Luiz Inácio Lula da Silva in January 2023. Investigations later revealed that related plans included killing the new president, the vice-president, and Justice Alexandre de Moraes.

Concerns over children’s and adolescents’ rights and safety are another part of the underlying context. Among other incidents, a wave of violent threats and actual attacks in schools in early 2023 was fueled by online content, and social media challenges have led to injuries and deaths of young people.

Finally, the political reactions to Big Tech’s alignment with far-right politicians and its feuds with Brazilian authorities complete this puzzle. These include reactions to Meta’s policy changes in January 2025 and the Trump administration’s decision to restrict visas for foreign officials on the grounds that they limit free speech online. That decision is viewed as an offensive against Brazil’s Supreme Court by U.S. authorities in alliance with Bolsonaro’s supporters, including his son, who now lives in the U.S.

Changes in the tech landscape, including concerns about the attention-driven information flow, alongside geopolitical tensions, converged in the Brazilian Supreme Court’s examination of Article 19. Hurdles in the legislative debate over draft bill 2630 turned attention to the internet intermediary liability cases pending before the Supreme Court as the main vehicles for providing “some” response. Yet the scope of those cases (explained here) shaped the most likely outcome. Because they focus on assessing platform liability for user content and whether it entails a duty to monitor, these issues became the main vectors for analysis and potential change. Alternative approaches, such as improving transparency, ensuring due process, and fostering platform accountability through measures like risk assessments, were largely sidelined.

Read our third post in this series to learn more about the analysis of the Supreme Court so far and its risks and blind spots. 

Veridiana Alimonti

JVN: Use of Inefficient Regular Expressions in GROWI

GROWI, provided by GROWI, Inc., processes input using inefficient regular expressions in some places, and crafted input can trigger a denial-of-service (DoS) condition.
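The advisory does not disclose the affected pattern, but this class of bug, regular expression denial of service (ReDoS), typically stems from ambiguous patterns that force a backtracking engine to try an exponential number of match attempts. The sketch below is a hypothetical illustration, not GROWI’s actual code: it contrasts a nested-quantifier pattern that backtracks catastrophically with an equivalent pattern that fails in linear time.

```typescript
// Hypothetical ReDoS illustration (not the pattern used in GROWI).
// Run with Node.js, e.g.: npx ts-node redos-demo.ts

// Vulnerable: the nested quantifier (a+)+ lets the engine split a run of
// 'a' characters into groups in exponentially many ways, and the trailing
// '!' in the input makes every split fail, so all of them are tried.
const vulnerable = /^(a+)+$/;

// Safer equivalent: a single unambiguous quantifier, only one way to match.
const safe = /^a+$/;

// Crafted input: a long run of 'a' followed by a character that guarantees
// the overall match fails, which is what triggers the backtracking.
const payload = "a".repeat(28) + "!";

console.time("vulnerable pattern");
vulnerable.test(payload); // explores on the order of 2^27 splits; time grows exponentially with input length
console.timeEnd("vulnerable pattern");

console.time("safe pattern");
safe.test(payload); // fails after a single linear scan
console.timeEnd("safe pattern");
```

Typical mitigations include rewriting ambiguous patterns, bounding the length of user-supplied input before matching, or using a linear-time engine such as RE2.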

[B] Ciao! Dispatch from Italy: Elementary School Graduation

Italian graduation celebrations are lively and free-spirited! In Italy, the school year ends in June, and this June our twins finished their five years of elementary school. There is no formal graduation ceremony here, but at the elementary school my children attend, the teachers put together a small, handmade celebration of their own. Here is a report on how it went. (Sato Noriko, based in Italy)
Nikkan Berita

Copyright Cases Should Not Threaten Chatbot Users’ Privacy


Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. states, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy.

This isn’t a new concept. Putting users in control of their data is a fundamental piece of privacy protection. Nineteen states, the European Union, and numerous other countries already protect the right to delete under their privacy laws. These rules exist for good reasons: retained data can be sold or given away, breached by hackers, disclosed to law enforcement, or even used to manipulate a user’s choices through online behavioral advertising.

While appropriately tailored orders to preserve evidence are common in litigation, that’s not what happened here. The court disregarded the privacy rights of millions of ChatGPT users without any reasonable basis to believe it would yield evidence. The court granted the order based on unsupported assertions that users who delete their data are probably copyright infringers looking to “cover their tracks.” This is simply false, and it sets a dangerous precedent for cases against generative AI developers and other companies that have vast stores of user information. Unless courts limit orders to information that is actually relevant and useful, they will needlessly violate the privacy rights of millions of users.

OpenAI is challenging this order. EFF urges the court to lift the order and correct its mistakes.  

Tori Noble