EFF to Third Circuit: TikTok Has Section 230 Immunity for Video Recommendations
UPDATE: On October 23, 2024, the Third Circuit denied TikTok's petition for rehearing en banc.
EFF legal intern Nick Delehanty was the principal author of this post.
EFF filed an amicus brief in the U.S. Court of Appeals for the Third Circuit in support of TikTok’s request that the full court reconsider the case Anderson v. TikTok after a three-judge panel ruled that Section 230 immunity doesn’t apply to TikTok’s recommendations of users’ videos. We argued that the panel was incorrect on the law, and that this case has wide-ranging implications for the internet as we know it today. EFF was joined on the brief by the Center for Democracy & Technology (CDT), Foundation for Individual Rights and Expression (FIRE), Public Knowledge, Reason Foundation, and Wikimedia Foundation.
At issue is the panel’s misapplication of First Amendment precedent. The First Amendment protects the editorial decisions of publishers about whether and how to display content, such as the videos TikTok displays to users through its recommendation algorithm.
Additionally, because common law allows publishers to be liable for other people’s content that they publish (for example, letters to the editor that are defamatory in print newspapers) due to limited First Amendment protection, Congress passed Section 230 to protect online platforms from liability for harmful user-generated content.
Section 230 has been pivotal for the growth and diversity of the internet—without it, internet intermediaries would potentially be liable for every piece of content posted by users, making them less likely to offer open platforms for third-party speech.
In this case, the Third Circuit panel erroneously held that since TikTok enjoys protection for editorial choices under the First Amendment, TikTok’s recommendations of user videos amount to TikTok’s first-party speech, making it ineligible for Section 230 immunity. In our brief, we argued that First Amendment protection for editorial choices and Section 230 protection are not mutually exclusive.
We also argued that the panel’s ruling does not align with what every other circuit has found: that Section 230 also immunizes the editorial decisions of internet intermediaries. We made four main points in support of this argument:
- First, the panel ignored the text of Section 230 in that editorial choices are included in the commonly understood definition of “publisher” in the statute.
- Second, the panel created a loophole in Section 230 by allowing plaintiffs who were harmed by user-generated content to bypass Section 230 by focusing on an online platform’s editorial decisions about how that content was displayed.
- Third, it’s crucial that Section 230 protects editorial decisions notwithstanding additional First Amendment protection because Section 230 immunity is not only a defense against liability, it’s also a way to end a lawsuit early. Online platforms might ultimately win lawsuits on First Amendment grounds, but the time and expense of protracted litigation would make them less interested in hosting user-generated content. Section 230’s immunity from suit (as well as immunity from liability) advances Congress’ goal of encouraging speech at scale on the internet.
- Fourth, TikTok’s recommendations specifically are part of a publisher’s “traditional editorial functions” because recommendations reflect choices around the display of third-party content and so are protected by Section 230.
We also argued that allowing the panel’s decision to stand would harm not only internet intermediaries, but all internet users. If internet intermediaries were liable for recommending or otherwise deciding how to display third-party content posted to their platforms, they would end useful content curation and engage in heavy-handed censorship to remove anything that might be legally problematic from their platforms. These responses to a weakened Section 230 would greatly limit users’ speech on the internet.
The full Third Circuit should recognize the error of the panel’s decision and reverse to preserve free expression online.
A Flourishing Internet Depends on Competition
Antitrust law has long recognized that monopolies stifle innovation and gouge consumers on price. When it comes to Big Tech, harm to innovation—in the form of “kill zones,” where major corporations buy up new entrants to a market before they can compete with them—has been easy to find. Consumer harms have been harder to quantify, since a lot of the services Big Tech companies offer are “free.” This is why we must move beyond price as the major determinant of consumer harm. And once that’s done, it’s easier to see the even greater benefits competition brings to the broader internet ecosystem.
In the decades since the internet entered our lives, it has changed from a wholly new and untested environment to one where a few major players dominate everyone's experience. Policymakers have been slow to adapt and have equated what's good for the whole internet with what is good for those companies. Instead of a balanced ecosystem, we have a monoculture. We need to eliminate the buildup of power around the giants and instead have fertile soil for new growth.
Content Moderation

In content moderation, for example, it’s practically rote for experts to say that content moderation is impossible at scale. Facebook reports over three billion active users and is available in over 100 languages. However, Facebook is an American company that primarily does its business in English. Communication, in every culture, is heavily dependent on context. Even if Facebook were hiring experts in every language it operates in, which it manifestly is not, the company itself runs on American values. Being able to choose a social media service rooted in your own culture and language is important. It’s not that people have to choose that service, but it’s important that they have the option.
This sometimes happens in smaller fora. For example, the knitting website Ravelry, a central hub for patterns and discussions about yarn, banned all discussions about then-President Donald Trump in 2019, as it was getting toxic. A number of disgruntled users banded together to make their disallowed content available in other places.
In a competitive landscape, instead of demanding that Facebook, Twitter, or YouTube adopt exactly the content rules you want, you could pick a service that already has them. If you want everything protected by the First Amendment, you could find it. If you want an environment with clear rules, consistently enforced, you could find that, too, especially since smaller platforms, unlike the current behemoths, could actually enforce their rules.
Product Quality

The same thing applies to product quality and the “enshittification” of platforms. Even if all of Facebook’s users spoke the same language, that’s no guarantee that they share the same values, needs, or wants. As it is, Facebook’s feeds are designed to maximize user engagement and time on the service. Some people may like the recommendation algorithm, but others may want the traditional chronological feed. Facebook has no incentive to offer the choice because it is not worried about losing users to a competitor that does; it’s concerned with being able to serve as many ads to as many people as possible. In general, Facebook lacks user controls that would allow people to customize their experience on the site: the ability to make your feed chronological, to eliminate posts from anyone you don’t know, and so on. There may be people who like the current, ad-focused algorithm, but no one else can get a product they would like.
Another obvious example is how much the experience of googling something has deteriorated. It’s almost hackneyed to complain about it now, but when it started, Google was revolutionary in its ability to a) find exactly what you were searching for and b) allow natural-language searching (that is, not requiring you to use boolean queries to get the desired result). Google’s secret sauce was, for a long time, the ability to find the right result for a totally unique search query. If you could remember some specific string of words in the thing you were looking for, Google could find it. However, in the endless hunt for “growth,” Google moved away from quality search results and toward quantity. It also clogged the first page of results with ads and sponsored links.
Morals, Privacy, and Security

There are many individuals and small businesses that would like to avoid using Big Tech services, either because the services are bad or because of ethical and moral concerns. But the bigger these companies are, the harder they are to avoid. For example, even if someone decides not to buy products from Amazon.com because they disagree with how it treats its workers, they may not be able to avoid patronizing Amazon Web Services (AWS), which funds the commerce side of the business. Netflix, The Guardian, Twitter, and Nordstrom all pay for Amazon’s services. The Mississippi Department of Employment Security moved its data management to Amazon in 2021. Trying to avoid Amazon entirely is functionally impossible. This means there is no way for people to “vote with their feet” and withhold their business from companies they disagree with.
Security and privacy are also at risk without competition. For one thing, it’s easier for a malicious actor or oppressive state to get what they want when it’s all in the hands of a single company—a single point of failure. When a single company controls the tools everyone relies on, an outage cripples the globe. This digital monoculture was on display during this year's CrowdStrike outage, where one badly-thought-out update crashed networks across the world and across industries. The personal danger of digital monoculture shows itself when Facebook messages are used in a criminal investigation against a mother and daughter discussing abortion, and in “geofence warrants” that demand Google turn over information about every device within a certain distance of a crime. For another thing, when everyone can share expression in only a few places, it becomes easier for regimes to target certain speech and for gatekeepers to maintain control over creativity.
Another example of the relationship between privacy and competition is Google’s so-called “Privacy Sandbox.” Google messaged it as removing the “third-party cookies” that track you across the internet. However, the change actually just moved that data into Google’s sole control, helping cement its ad monopoly. Instead of eliminating tracking, the Privacy Sandbox does the tracking directly within the browser, allowing Google to charge advertisers and websites for access to the insights gleaned from your browsing history, rather than letting those companies collect it themselves. It’s not more privacy; it’s just concentrated control of data.
You see the same thing at play with Apple’s App Store in the saga of Beeper Mini, an app that allowed secure communication through iMessage between Apple and non-Apple phones. In doing so, it eliminated the dreaded “green bubbles” that indicated that messages were not encrypted (i.e., not between two iPhones). While Apple’s design choice was, in theory, meant to flag that your conversation wasn’t secure, in practice it motivated people to buy iPhones just to avoid the stigma. Beeper Mini made messages more secure and removed the need to buy a whole new phone to get rid of the green bubble. So Apple moved to break Beeper Mini, effectively choosing monopoly over security. If Apple had moved to secure non-iPhone messages on its own, that would be one thing. But it didn’t; it just prevented users from securing those messages on their own.
Obviously, competition isn’t a panacea. But, like privacy, prioritizing it means less emergency firefighting and more fire prevention. Think of it as a controlled burn: clearing out the dross that smothers new growth and lets fires rage larger than ever before.