Federal Appeals Court Rules That Fair Use May Be Narrowed to Serve Hollywood Profits

1 month 3 weeks ago

Section 1201 of the Digital Millennium Copyright Act is a ban on reading any copyrighted work that is encumbered by access restrictions. It makes it illegal for you to read and understand the code that determines how your phone or car works and whether those devices are safe. It makes it illegal to create fair use videos for expressive purposes, reporting, or teaching. It makes it illegal for people with disabilities to convert ebooks they own into a format they can perceive. EFF and co-counsel at WSGR challenged Section 1201 in court on behalf of computer science professor Matthew Green and engineer Andrew “bunnie” Huang, and we asked the court to invalidate the law on First Amendment grounds.

Despite this law's many burdens on expression and research, the Court of Appeals for the D.C. Circuit concluded that these restrictions are necessary to incentivize copyright owners to publish works online, and rejected our court challenge. It reached this conclusion despite the evidence that many works are published without digital access restrictions (such as mp3 files sold without DRM) and the fact that people willingly pay for copyrighted works even though they're readily available through piracy. Once again, copyright law has been used to squash expression in order to serve a particular business model favored by rightsholders, and we are all the poorer for it.

Integral to the Court’s decision was the conclusion that Section 1201’s ban on circumvention of access restrictions is a regulation of “conduct” rather than “speech.” This is akin to saying that the government could regulate the reading of microfiche as “conduct” rather than “speech,” because technology is necessary to do so. Of course you want to be able to read the microfiche you purchased, but you can only do so using the licensed microfiche reader the copyright owner sells you. And if that reader doesn’t meet your needs because you’re blind or you want to excerpt the microfiche to make your own fair use materials, the government can make it illegal for you to use a reader that does.

It’s a back door into speech regulation that favors large, commercial entertainment products over everyday people using those works for their own, fair-use expression or for documentary films or media literacy.

Even worse, the law governs access to copyrighted software. In the microfiche analogy, this would be microfiche that’s locked inside your car or phone or other digital device that you’re never allowed to read. It’s illegal to learn how technology works under this regime, which is very dangerous for our digital future.

The Court asserts that the existing defenses to the anti-circumvention law are good enough – even though the Library of Congress has repeatedly concluded that they were not, each time it decided to issue exemptions to expand them.

All in all, the opinion represents a victory for rightsholder business models that allow them to profit by eroding the traditional rights of fair users, and a victory for device manufacturers that would like to run software in your devices that you’re not allowed to understand or change.

Courts must reject the mistaken notion that draconian copyright regimes are helpful to “expression” as a general matter rather than just the largest copyright owners. EFF will continue to fight for your rights to express yourself and to understand the technology in your life.

Related Cases: Green v. U.S. Department of Justice
Kit Walsh

Here Are EFF's Sacramento Priorities Right Now


California is one of the nation’s few full-time state legislatures. That means advocates have to track and speak up on hundreds of bills that move through the legislative process on a strict schedule between January and August every year. The legislature has been adjourned for a month, and won't be back until August. So it's a good time to take stock and share what we've been up to in Sacramento.

EFF has been tracking nearly 100 bills this session in California alone. They cover a wide array of privacy, free speech, and innovation issues, including bills on what standards artificial intelligence (AI) systems should meet before being used by state agencies, how AI and copyright interact, police use of surveillance, and a host of privacy questions. While the session isn't over yet, we have already logged a significant victory by helping stop S.B. 1076, by Senator Scott Wilk (Lancaster). This bill would have weakened the California Delete Act (S.B. 362), which we fought hard to pass last year.

The Delete Act (S.B. 362) made it easier for anyone to exert greater control over their privacy under the California Consumer Privacy Act (CCPA). The law created a one-click “delete” button in the state's data broker registry, allowing Californians to request the removal of their personal information held by data brokers registered in California. It built on the state's existing data broker registry law to expand the information data brokers are required to disclose about the data they collect on consumers. It also added strong enforcement mechanisms to ensure that data brokers comply with these reporting requirements.

S.B. 1076 would have undermined the Delete Act’s aim to provide consumers with an easy “one-click” button. It also would have opened loopholes in the law for data brokers to duck compliance. This would have hurt consumer rights and undone oversight of an opaque ecosystem of entities that collect and then sell the personal information they’ve amassed on individuals. S.B. 1076's proponents, which included data brokers and advertisers, argued that the Delete Act is too burdensome and makes it impossible for consumers to exercise their privacy rights under California's privacy laws. In truth, S.B. 1076 would have made it easier for fraudsters and credit abusers to misuse your personal information. The guardrails and protections in the Delete Act are some of the strongest at empowering vulnerable Californians to exercise their privacy rights under the CCPA, and we're proud to have defended the law.

Of course, there are still a lot of bills. Let’s dive into six bills we're paying close attention to right now, to give you a taste of what's cooking in Sacramento:

A.B. 3080 EFF opposes this bill by State Assemblymember Juan Alanis (Modesto). It would create powerful incentives for so-called “pornographic internet websites” to use age-verification mechanisms. The bill is not clear on what, exactly, counts as “sexually explicit content.” Without clear guidelines, this bill would further harm the ability of all youth—particularly LGBTQ+ youth—to access legitimate content online. Different versions of bills requiring age verification have appeared in more than a dozen states. An Indiana law similar to A.B. 3080 was preliminarily enjoined—temporarily halted—after a judge ruled it was likely unconstitutional. California should not enact this bill into law.

S.B. 892 EFF supports this bill by State Senator Steve Padilla (Chula Vista), which would require the Department of Technology to establish safety, privacy, and nondiscrimination standards for AI services procured by the state, and would prohibit the state from entering into any contract for AI services unless the provider meets those standards. This bill is a critical first step toward ensuring that any future investment in AI technology by the State of California to support the delivery of services is grounded in consumer protection.

A.B. 3138 EFF opposes this bill by State Assemblymember Lori Wilson (Suisun City), which would turn state-issued digital license plates into surveillance trackers that record everywhere a car goes. When a similar bill came up in 2022, several domestic violence, LGBTQIA+, reproductive justice, youth, and privacy organizations negotiated a prohibition on the use of GPS in passenger cars' digital license plates. A.B. 3138 would abandon the agreement codified in A.B. 984 (2022) and reverse that negotiation.

A.B. 1814 EFF opposes this bill from State Assemblymember Phil Ting (San Francisco). It is an attempt to sanction and expand the use of facial recognition software by police to “match” images from surveillance databases to possible suspects. Those matches can then be used to issue arrest warrants or establish probable cause for searches. The bill says merely that these matches can't be the sole reason for a judge to issue a warrant—a standard that has already failed to stop false arrests in other states. By codifying such a weak standard in the hope that “something is better than nothing,” while expanding police access to state databases, this bill is worse than no regulation at all.

S.B. 981 EFF opposes this bill from State Senator Aisha Wahab (Fremont), which would require online platforms to create a reporting mechanism for certain intimate materials, and to ensure that those materials cannot be viewed on the platform. This reporting mechanism and the requirement to block and remove reported content will lead to over-censorship of protected speech. If passed as written, it would violate the First Amendment and run afoul of federal preemption.

A.B. 1836 EFF opposes this bill by State Assemblymember Rebecca Bauer-Kahan (San Ramon). It would create a broad new “digital replica” right of publicity for deceased personalities, covering the unauthorized production, distribution, or availability of their digital replica in an audiovisual work or sound recording. If passed, a deceased personality’s estate could use it to extract statutory damages of $10,000 for the use of the dead person’s image or voice “in any manner related to the work performed by the deceased personality while living” – an incredibly unclear standard that will invite years of litigation.

Of course, this isn't every bill that EFF is engaged on, or even every bill we care about. Over the coming months, you'll hear more from us about ways that Californians can help us tell lawmakers to be on the right side of digital rights issues.

Hayley Tsukayama

[Focus] CCS (Carbon Capture and Storage) Projects: The Decarbonization “Trump Card” Is a Lie. Unfinished Technology, High Costs; the 2050 Targets Are Out of Reach (Masahiro Hashizume)

The Carbon Dioxide Capture and Storage (CCS) Business Act passed in the recently concluded ordinary session of the Diet. CCS technology, which captures the carbon dioxide (CO2) emitted by thermal power plants, oil refineries, cement factories, and the like and stores it beneath the seabed or underground, is touted as the trump card of decarbonization. The new law is meant to encourage companies to enter the business of test drilling and storage. With four trillion yen in combined public and private funds to be invested in CCS projects over the next ten years, the media hails them as “promising,” but is that really so? A CCS demonstration experiment was conducted in Tomakomai. The CO2 was 8..
JCJ

Google Breaks Promise to Block Third-Party Cookies


Last week, Google backtracked on its long-standing promise to block third-party cookies in Chrome. This is bad for your privacy and good for Google's business. Third-party cookies are a pervasive tracking technology that allow companies to snoop on your online activity for surveillance and ad-targeting purposes. The consumer harm caused by these cookies has been well-documented for years, prompting Safari and Firefox to block them since 2020. Google knows this—that’s why they pledged to phase out third-party cookies in 2020. By abandoning this plan, Google leaves billions of Chrome users vulnerable to online surveillance.

How do third-party cookies facilitate online surveillance?

Cookies are small packets of information stored in your browser by websites you visit. They were built to enable useful functionality, like letting a website remember your language preferences or the contents of your shopping cart. But for years, companies have abused this functionality to track user behavior across the web, fueling a vast network of online surveillance. 

While first-party cookies enable useful functionality, third-party cookies are primarily used for online tracking. Third-party cookies are set by websites other than the one you’re currently viewing. Websites often include code from third-party companies to load resources like ads, analytics, and social media buttons. When you visit a website, this third-party code can create a cookie with a unique identifier for you. When you visit another website that loads resources from the same third-party company, that company receives your unique identifier from the cookie they previously set. By recognizing your unique identifier across multiple sites, third-party companies build a detailed profile of your browsing habits. 
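The cross-site linking described above can be sketched in a few lines of code. This is a hypothetical, simplified model (the `Tracker` class, site names, and cookie key are all invented for illustration); real trackers operate over HTTP `Set-Cookie` headers, but the logic is the same:

```python
import uuid

class Tracker:
    """Hypothetical third-party ad server: it sets a cookie carrying a
    unique ID, then logs every site where its code is embedded."""
    def __init__(self):
        self.profiles = {}  # tracker_id -> list of sites visited

    def serve_resource(self, cookie_jar, site):
        # First visit from this browser? Set a cookie with a fresh ID.
        if "tracker_id" not in cookie_jar:
            cookie_jar["tracker_id"] = str(uuid.uuid4())
        tid = cookie_jar["tracker_id"]
        # The same ID comes back from every site that embeds this
        # tracker, letting it link visits across unrelated sites.
        self.profiles.setdefault(tid, []).append(site)
        return tid

tracker = Tracker()
browser_cookies = {}  # the browser's cookie jar for the tracker's domain

# Two unrelated "first-party" sites both embed the tracker's resources.
tracker.serve_resource(browser_cookies, "health-site.example")
tid = tracker.serve_resource(browser_cookies, "shopping-site.example")

# The tracker now holds one profile tying both visits to one browser.
print(tracker.profiles[tid])
```

Blocking third-party cookies breaks exactly this step: if the browser refuses to send the tracker's cookie back from the second site, the two visits can no longer be joined under one identifier.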

For example, if you visit WebMD's “HIV & AIDS Resource Center,” you might expect WebMD to get information about your visit to their page. What you probably don't expect, and what third-party cookies enable, is that your visit to WebMD is tracked by dozens of companies you've never heard of. At the time of writing, visiting WebMD’s “HIV & AIDS Resource Center” sets 257 third-party cookies on your browser. The businesses that set those cookies include big tech companies (Google, Amazon, X, Microsoft) and data brokers (Lotame, LiveRamp, Experian). By setting a cookie on WebMD, these companies can link your visit to WebMD to your activity on other websites.

How does this online surveillance harm consumers?

Third-party cookies allow companies to build detailed profiles of your online activities, which can be used for targeted advertising or sold to the highest bidder. The consequences are far-reaching and deeply concerning. Your browsing history can reveal sensitive information, including your financial status, sexual orientation, and medical conditions. Data brokers collect and sell this information without your knowledge or consent. Once your data is for sale, anyone can buy it. Purchasers include insurance companies, hedge funds, scammers, anti-abortion groups, stalkers, and government agencies such as the military, the FBI, and ICE.

Online surveillance tools built for advertisers are exploited by others. For example, the NSA used third-party cookies set by Google to identify targets for hacking and people attempting to remain anonymous online. Likewise, a conservative Catholic nonprofit paid data brokers millions to identify priests using gay dating apps, and the brokers obtained this information from online advertising systems. 

Targeted ads also hurt us. They enable predatory advertisers to target vulnerable groups, like payday lenders targeting people in financial trouble. They also facilitate discriminatory advertising, like landlords targeting housing ads by race.

Yet again, Google puts profits over privacy

Google's decision to continue allowing third-party cookies, despite overwhelming evidence of their surveillance harms, is a direct consequence of its advertising-driven business model. Google makes most of its money from tracker-driven, behaviorally targeted ads.

If Google wanted, Chrome could do much more to protect your privacy. Other major browsers, like Safari and Firefox, provide significantly more protection against online tracking by default. Notably, Google is the internet’s biggest tracker, and most of the websites you visit include Google trackers (including but not limited to third-party cookies). As Chrome leaves users vulnerable to tracking, Google continues to receive nearly 80% of its revenue from online advertising.

Google’s change in plans follows concerns from advertisers and regulators that the loss of third-party cookies in Chrome would harm competition in digital advertising. Google’s anti-competitive practices in the ad-tech industry must be addressed, but maintaining online surveillance systems is not the answer. Instead, we should focus on addressing the root of these competition concerns. The bipartisan AMERICA Act, which proposed breaking up vertically integrated ad-tech giants like Google, offers a more effective approach. We don’t need to sacrifice user privacy to foster a competitive digital marketplace.

What now?

First, we call on Google to reverse this harmful decision. Continuing to allow one of the most pervasive forms of online tracking, especially when other major browsers have blocked it for years, is a clear betrayal of user trust. Google must prioritize people’s privacy over their advertising revenue and find real solutions to competition concerns. 

In the meantime, users can take steps to protect themselves from online tracking. Installing Privacy Badger can help block third-party cookies and other forms of online tracking.

We also need robust privacy legislation to ensure that privacy standards aren’t set by advertising companies. Companies use various tracking methods, like fingerprinting and link redirection, to monitor users across the web without third-party cookies. As long as it remains legal and profitable, companies will continue building and selling profiles of your online activities. Already, Google has developed alternative tracking tools that may be less invasive than third-party cookies but still enable harmful surveillance. Blocking third-party cookies is important but insufficient to address pervasive online tracking. Strong privacy legislation in the United States is possible, necessary, and long overdue. A comprehensive data privacy law should protect our browsing history by default and ban behavioral ads, which drive excessive data collection.

Google's decision to continue allowing third-party cookies in Chrome is a major disappointment. Browsing the internet shouldn't require submitting to extensive surveillance. As Google prioritizes profits over privacy, we need legislation that gives you control over your data.

Lena Cohen

[B] “The Good-for-Nothing Paris Olympics” [Latest News from Western Sahara]

“If anyone was offended, of course we are truly sorry,” the Paris Olympics organizing committee said on July 28, apologizing for its controversial, macabre opening ceremony. One gruesome scene had Queen Marie Antoinette, cradling her own severed head, appearing in the windows of the Conciergerie, the prison where she was held after the French Revolution of 1789. On the fifth day of the Paris Olympics, Ismail Haniyeh, a negotiator in the Gaza ceasefire talks and the top leader of Hamas, which rules Gaza, was assassinated by Israel. At the closing ceremony of the Paris Olympics, please hold a memorial march, carrying the heads of the 40,000 slain citizens of Gaza, to the chanson “Mauvais Garçon” (“Good-for-Nothing”).
日刊ベリタ

Taking On the Monster Called the State in Constitutional Lawsuits, Bare-Handed (Part 2): Yuko Tomizawa's Fight to Eradicate Discrimination Against Children Born Out of Wedlock

Yuko Tomizawa, 73, is fighting two lawsuits seeking damages from the state on constitutional grounds, representing herself without a lawyer. The deep-rooted discrimination against “illegitimate” children she experienced when filing the birth registration for her child born in a common-law marriage, and “my suffering” at being unable to use her birth surname in an inheritance case: her own marriage, her child's birth [...]
admin

[B] Art Exhibition by Children on Provisional Release Opens: “I Want Freedom Too”

An exhibition of paintings and essays by children on “provisional release” from immigration detention is being held at the Nerima Gender Equality Center in Tokyo's Nerima Ward. It is organized by the “Lawyers' Network to Change the Immigration Bureau.” The theme of this year's exhibition, the fourth to date, is “My Hometown.” At the awards ceremony held on August 2, the judges, Naoki Prize-winning author Kyoko Nakajima and philosopher Jun Nagano, announced the grand prize and judges' awards for the paintings and the essays. (Hiroyuki Iwamoto)
日刊ベリタ

Victory! D.C. Circuit Rules in Favor of Animal Rights Activists Censored on Government Social Media Pages


In a big win for free speech online, the U.S. Court of Appeals for the D.C. Circuit ruled that a federal agency violated the First Amendment when it blocked animal rights activists from commenting on the agency’s social media pages. We filed an amicus brief in the case, joined by the Foundation for Individual Rights and Expression (FIRE).

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH) in 2021, arguing that the agency unconstitutionally blocked their comments opposing animal testing in scientific research on the agency’s Facebook and Instagram pages. (NIH provides funding for research that involves testing on animals.)

NIH argued it was simply implementing reasonable content guidelines that included a prohibition against public comments that are “off topic” to the agency’s social media posts. Yet the agency implemented the “off topic” rule by employing keyword filters that included words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop to block PETA activists from posting comments that included these words.
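The mechanics of such a filter are trivially simple, which is precisely why it cannot weigh context. The following minimal sketch is illustrative only: the word list comes from the filings, but the code is hypothetical and is not NIH's actual system. It blocks any comment containing a listed word, regardless of what the comment is about:

```python
# Words the filter blocks, per the case record (illustrative sketch).
BLOCKED = {"cruelty", "revolting", "tormenting", "torture",
           "hurt", "kill", "stop"}

def is_blocked(comment: str) -> bool:
    """Return True if any word in the comment, stripped of surrounding
    punctuation and lowercased, matches the block list. Note there is
    no notion of topic or context anywhere in this check."""
    words = {w.strip(".,!?'\"").lower() for w in comment.split()}
    return bool(words & BLOCKED)

# An on-topic question about the agency's own research is blocked
# exactly as readily as anything else containing a listed word:
print(is_blocked("Does this study hurt the mice involved?"))  # blocked
print(is_blocked("Great news about the vaccine trial!"))      # allowed
```

Because the check is a bare set intersection, the filter cannot distinguish an on-topic question about animal research from the off-topic speech the rule supposedly targets; that context-blindness is at the core of the court's unreasonableness holding discussed below.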

NIH’s Social Media Pages Are Limited Public Forums

The D.C. Circuit first had to determine whether the comment sections of NIH’s social media pages are designated public forums or limited public forums. As the court explained, “comment threads of government social media pages are designated public forums when the pages are open for comment without restrictions and limited public forums when the government prospectively sets restrictions.”

The court concluded that the comment sections of NIH’s Facebook and Instagram pages are limited public forums: “because NIH attempted to remove a range of speech violating its policies … we find sufficient evidence that the government intended to limit the forum to only speech that meets its public guidelines.”

The nature of the government forum determines what First Amendment standard courts apply in evaluating the constitutionality of a speech restriction. Speech restrictions that define limited public forums must only be reasonable in light of the purposes of the forum, while speech restrictions in designated public forums must satisfy more demanding standards. In both forums, however, viewpoint discrimination is prohibited.

NIH’s Social Media Censorship Violated Animal Rights Activists’ First Amendment Rights

After holding that the comment sections of NIH’s Facebook and Instagram pages are limited public forums subject to a lower standard of reasonableness, the D.C. Circuit then nevertheless held that NIH’s “off topic” rule as implemented by keyword filters is unreasonable and thus violates the First Amendment.

The court explained that because the purpose of the forums (the comment sections of NIH’s social media pages) is directly related to speech, “reasonableness in this context is thus necessarily a more demanding test than in forums that have a primary purpose that is less compatible with expressive activity, like the football stadium.”

In rightly holding that NIH’s censorship was unreasonable, the court adopted several of the arguments we made in our amicus brief, in which we assumed that NIH’s social media pages are limited public forums but argued that the agency’s implementation of its “off topic” rule was unreasonable and thus unconstitutional.

Keyword Filters Can’t Discern Context

We argued, for example, that keyword filters are an “unreasonable form of automated content moderation because they are imprecise and preclude the necessary consideration of context and nuance.”

Similarly, the D.C. Circuit stated, “NIH’s off-topic policy, as implemented by the keywords, is further unreasonable because it is inflexible and unresponsive to context … The permanent and context-insensitive nature of NIH’s speech restriction reinforces its unreasonableness.”

Keyword Filters Are Overinclusive

We also argued, related to context, that keyword filters are unreasonable “because they are blunt tools that are overinclusive, censoring more speech than the ‘off topic’ rule was intended to block … NIH’s keyword filters assume that words related to animal testing will never be used in an on-topic comment to a particular NIH post. But this is false. Animal testing is certainly relevant to NIH’s work.”

The court acknowledged this, stating, “To say that comments related to animal testing are categorically off-topic when a significant portion of NIH’s posts are about research conducted on animals defies common sense.”

NIH’s Keyword Filters Reflect Viewpoint Discrimination

We also argued that NIH’s implementation of its “off topic” rule through keyword filters was unreasonable because those filters reflected a clear intent to censor speech critical of the government, that is, speech reflecting a viewpoint that the government did not like.

The court recognized this, stating, “NIH’s off-topic restriction is further compromised by the fact that NIH chose to moderate its comment threads in a way that skews sharply against the appellants’ viewpoint that the agency should stop funding animal testing by filtering terms such as ‘torture’ and ‘cruel,’ not to mention terms previously included such as ‘PETA’ and ‘#stopanimaltesting.’”

On this point, we further argued that “courts should consider the actual vocabulary or terminology used … Certain terminology may be used by those on only one side of the debate … Those in favor of animal testing in scientific research, for example, do not typically use words like cruelty, revolting, tormenting, torture, hurt, kill, and stop.”

Additionally, we argued that “a highly regulated social media comments section that censors Plaintiffs’ comments against animal testing gives the false impression that no member of the public disagrees with the agency on this issue.”

The court acknowledged both points, stating, “The right to ‘praise or criticize governmental agents’ lies at the heart of the First Amendment’s protections … and censoring speech that contains words more likely to be used by animal rights advocates has the potential to distort public discourse over NIH’s work.”

We are pleased that the D.C. Circuit took many of our arguments to heart in upholding the First Amendment rights of social media users in this important internet free speech case.

Sophia Cope