Precariat Union (12/30): Settlement Reached with Sato Senpaku Co. at the Tokyo Metropolitan Labor Relations Commission! And More
[B] Strolling the Port Town of Livorno: Ciao! Italy Newsletter (Noriko Sato)
2024 Reading Retrospective (My Top Picks): “A Peaceful Daily Life Snatched Away in an Instant” = Hidenori Goto (2024 JCJ Award Winner)
[Online Petition] Don’t Let TEPCO, the Company Responsible for the Accident, Restart Its Nuclear Reactors!
EFF’s 2023 Annual Report Highlights a Year of Victories: 2024 in Review
Every fall, EFF releases its annual report, and 2023 was the year of Privacy First. Our annual report dives into our groundbreaking whitepaper along with victories in freeing the law, right to repair, and more. It’s a great, easy-to-read summary of the year’s work, and it contains interesting tidbits about the impact we’ve made—for instance, did you know that, as of 2023, 394,000 people had downloaded an episode of EFF’s podcast, “How to Fix the Internet”? Or that EFF had donors in 88 countries?
As you can see in the report, EFF’s role as the oldest, largest, and most trusted digital rights organization became even more important when tech law and policy commanded the public’s attention in 2023. Major headlines pondered the future of internet freedom. Arguments around free speech, digital privacy, AI, and social media dominated Congress, state legislatures, the U.S. Supreme Court, and the European Union.
EFF intervened with logic and leadership to keep bad ideas from getting traction, and we articulated solutions to legitimate concerns with care and nuance in our whitepaper, Privacy First: A Better Way to Protect Against Online Harms. It demonstrated how seemingly disparate concerns are in fact linked to the dominance of tech giants and the surveillance business models used by most of them. We noted how these business models also feed law enforcement’s increasing hunger for our data. We pushed for a comprehensive approach to privacy instead and showed how this would protect us all more effectively than harmful censorship strategies.
The longest-running fight we won in 2023 was to free the law: in our legal representation of PublicResource.org, we successfully ensured that copyright law does not block you from finding, reading, and sharing laws, regulations, and building codes online. We also won a major victory in helping to pass a law in California to increase tech users’ ability to control their information. In states across the nation, we helped boost the right to repair. Due to the efforts of the many technologists and advocates involved with Let’s Encrypt, HTTPS Everywhere, and Certbot over the last 10 years, as much as 95% of the web is now encrypted. And that’s just barely scratching the surface.
Obviously, we couldn’t do any of this without the support of our members, large and small. Thank you. Take a look at the report for more information about the work we’ve been able to do this year thanks to your help.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Aerial and Drone Surveillance: 2024 in Review
We've been fighting against aerial surveillance for decades because we recognize the immense threat from Big Brother in the sky. Even within the confines of your own backyard, you are exposed to eyes from above.
Aerial surveillance was first conducted with manned aircraft, which the Supreme Court held was permissible without a warrant in a couple of cases in the 1980s. But, as we’ve argued to courts, drones have changed the equation. Drones were a technology developed by the military before being adopted by domestic law enforcement. And in the past decade, commercial drone makers began marketing to civilians, making drones ubiquitous in our lives and exposing us to being watched from above by both the government and our neighbors. But we believe that when we're in the constitutionally protected areas of our backyards or homes, we have the right to privacy, no matter how technology has advanced.
This year, we focused on fighting back against aerial surveillance facilitated by advances in these technologies. Unfortunately, many of the legal challenges to aerial and drone surveillance are hindered by those Supreme Court cases. But we argued that cases decided around the time people were playing Space Invaders on the Atari 2600 and watching The Goonies on VHS should not control the legality of conduct in the age of Animal Crossing and 4K streaming services. As nostalgic as those memories may be, laws from those times are just as outdated as 16K RAM packs and magnetic videotapes. And we have applauded courts for recognizing that.
Unfortunately, the Supreme Court has failed to update its understanding of aerial surveillance, even though other courts have found certain types of aerial surveillance to violate the federal and state constitutions.
Because of this ambiguity, law enforcement agencies across the nation have been quick to adopt various drone systems, especially those marketed as a “drone as first responder” program, which ostensibly allows police to assess a situation—whether it’s dangerous or requires a police response at all—before officers arrive at the scene. Data from the Chula Vista Police Department in Southern California, which pioneered the model, shows that drones frequently respond to domestic violence, unspecified disturbances, and requests for psychological evaluations. Likewise, flight logs indicate the drones are often used to investigate crimes related to homelessness. The Brookhaven Police Department in Georgia has also adopted this model. While these programs sound promising in theory, municipalities have been reluctant to share the data, despite courts ruling that the information is not categorically closed to the public.
Additionally, while law enforcement agencies are quick to assure the public that their policies respect privacy concerns, those can be hollow assurances. The NYPD promised that it would not use drones to surveil constitutionally protected backyards, but Eric Adams decided to use them to spy on backyard parties over Labor Day in 2023 anyway. Without strict regulations in place, our privacy interests are at the whims of whoever holds power over these agencies.
Alarmingly, there are increasing calls by police departments and drone manufacturers to arm remote-controlled drones. After widespread backlash, including resignations from its ethics board, drone manufacturer Axon said in 2022 that it would pause a program to develop a drone armed with a taser to be deployed in school shooting scenarios. We’re likely to see more proposals like this, including drones armed with pepper spray and other crowd control weapons.
As drones carry ever more technological payload and become cheaper, aerial surveillance has become a favorite tool of law enforcement and other government agencies. We must ensure that these technological developments do not encroach on our constitutional rights to privacy.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Restrictions on Free Expression and Access to Information in Times of Change: 2024 in Review
This was a historic year: a year in which elections took place in countries home to almost half the world’s population, a year of war, and a year of collapse of, or chaos within, several governments. It was also a year of new technological, policy, and legislative developments. Amidst these sweeping changes, freedom of expression has never been more important, and around the world, 2024 saw numerous challenges to it. From new legal restrictions on speech to wholesale internet shutdowns, here are just a few of the threats to freedom of expression online that we witnessed in 2024.
Internet shutdowns
It is sadly not surprising that, in a year in which national elections took place in at least 64 countries, internet shutdowns would be commonplace. Access Now, which tracks shutdowns and runs the KeepItOn Coalition (of which EFF is a member), found that seven countries—Comoros, Azerbaijan, Pakistan, India, Mauritania, Venezuela, and Mozambique—restricted access to the internet at least partially during election periods. These restrictions inhibit people from being able to share news of what’s happening on the ground, but they also impede access to basic services, commerce, and communications.
But elections aren’t the only justification governments use for restricting internet access. In times of conflict or protest, access to internet infrastructure is key for enabling essential communication and reporting. Governments know this, and over the past decades, have weaponized access as a means of controlling the free flow of information. This year, we saw Sudan enact a total communications blackout amidst conflict and displacement. The Iranian government has over the past two years repeatedly restricted access to the internet and social media during protests. And Palestinians in Gaza have been subject to repeated internet blackouts inflicted by Israeli authorities.
Social media platforms have also played a role in restricting speech this year, particularly when it comes to Palestine. We documented unjust content moderation by companies at the request of Israel’s Cyber Unit, submitted comment to Meta’s Oversight Board on the use of the slogan “from the river to the sea” (which the Oversight Board notably agreed with), and submitted comment to the UN Special Rapporteur on Freedom of Expression and Opinion expressing concern about the disproportionate impact of platform restrictions on expression by governments and companies.
In our efforts to ensure free expression is protected online, we collaborated with numerous groups and coalitions in 2024, including our own global content moderation coalition, the Middle East Alliance for Digital Rights, the DSA Human Rights Alliance, EDRI, and many others.
Restrictions on content, age, and identity
Another alarming 2024 trend was the growing push from several countries to restrict access to the internet by age, often by means of requiring ID to get online, thus inhibiting people’s ability to identify as they wish. In Canada, an overbroad age verification bill, S-210, seeks to prevent young people from encountering sexually explicit material online, but would require all users to submit identification before going online. The UK’s Online Safety Act, which EFF has opposed since its first introduction, would also require mandatory age verification, and would place penalties on websites and apps that host otherwise-legal content deemed “harmful” by regulators to minors. And similarly in the United States, the Kids Online Safety Act (still under revision) would require companies to moderate “lawful but awful” content and subject users to privacy-invasive age verification. And in recent weeks, Australia has also enacted a vague law that aims to block teens and children from accessing social media, marking a step back for free expression and privacy.
While these governments ostensibly aim to protect children from harm, as we have repeatedly demonstrated, such laws can also harm young people by preventing them from accessing information that is not taught in schools or otherwise accessible in their communities.
One group that is particularly impacted by these and other regulations enacted by governments around the world is the LGBTQ+ community. In June, we noted that censorship of online LGBTQ+ speech is on the rise in a number of countries. We continue to keep a close watch on governments that seek to restrict access to vital information and communications.
Cybercrime
We’ve been pushing back against cybercrime laws for a long time. In 2024, much of that work focused on the UN Cybercrime Convention, a treaty that would allow states to collect evidence across borders in cybercrime cases. While that might sound acceptable to many readers, the problem is that numerous countries utilize “cybercrime” as a means of punishing speech. One such country is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.
EFF has fought back against Jordan’s cybercrime law, as well as bad cybercrime laws in China, Russia, the Philippines, and elsewhere, and we will continue to do so.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Diary from the Anti-Nuclear Tent in Front of METI (12/26): Kunihiko Shimazaki: “Once an Earthquake Strikes, It Will Be Too Late!”
Watanabe Newsletter (12/29): “The Collapse of Japan’s Public Education”
Cars (and Drivers): 2024 in Review
If you’ve purchased a car made in the last decade or so, it’s likely jam-packed with enough technology to make your brand new phone jealous. Modern cars have sensors, cameras, GPS for location tracking, and more, all collecting data—and it turns out in many cases, sharing it.
Cars Sure Are Sharing a Lot of Information
While we’ve been keeping an eye on the evolving state of car privacy for years, everything really took off after a New York Times report this past March found that the car maker G.M. was sharing information about drivers’ habits with insurance companies without consent.
It turned out a number of other car companies were doing the same, using deceptive design so people didn’t always realize they were opting into the program. We walked through how to see for yourself what data your car collects and shares. That said, cars, infotainment systems, and car makers’ apps are so unstandardized that it’s often very difficult for drivers to research data sharing, let alone opt out of it.
Which is why we were happy to see Senators Ron Wyden and Edward Markey send a letter to the Federal Trade Commission urging it to investigate these practices. The fact is: car makers should not sell our driving and location history to data brokers or insurance companies, and they shouldn’t make it as hard as they do to figure out what data gets shared and with whom.
Advocating for Better Bills to Protect Abuse Survivors
The amount of data modern cars collect is a serious privacy concern for all of us. But for people in an abusive relationship, tracking can be a nightmare.
This year, California considered three bills intended to help domestic abuse survivors endangered by vehicle tracking. Of those, we initially liked the approach behind two of them, S.B. 1394 and S.B. 1000. When introduced, both would have served the needs of survivors in a wide range of scenarios without inadvertently creating new avenues of stalking and harassment for the abuser to exploit. They both required car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected services within two business days. To make a request, a survivor had to prove the vehicle was theirs to use, even if their name was not on the loan or title.
But the third bill, A.B. 3139, took a different approach. Rather than have people submit requests first and cut access later, this bill required car manufacturers to terminate access immediately, with follow-up documentation required only up to seven days later. Likewise, S.B. 1394 and S.B. 1000 were amended to adopt this "act first, ask questions later" framework. This approach is helpful for survivors in one scenario—a survivor who has no documentation of their abuse, and who needs to get away immediately in a car owned by their abuser. Unfortunately, it also opens up many new avenues of stalking, harassment, and abuse for survivors. These bills ended up being combined into S.B. 1394, which retained some provisions we remain concerned about.
It’s Not Just the Car Itself
Because of everything else that comes with car ownership, a car is just one piece of the mobile privacy puzzle.
This year we fought against A.B. 3138 in California, which proposed adding GPS technology to digital license plates to make them easier to track. The bill passed, unfortunately, but location data privacy continues to be an important issue that we’ll fight for.
We wrote about a bulletin released by the U.S. Cybersecurity and Infrastructure Security Agency about infosec risks in one brand of automated license plate readers (ALPRs). Specifically, the bulletin outlined seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials. The sheer scale of this vulnerability is alarming: EFF found that just 80 agencies in California, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates.
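The "pattern of life" risk described above can be sketched in a few lines of Python. This is purely illustrative: the plates, camera locations, and timestamps below are invented, but the technique (grouping scans by plate, time of day, and location) is how aggregated ALPR data reveals a driver's routine.

```python
# Illustrative sketch (synthetic data): how aggregated license plate scans
# can reveal a driver's "pattern of life". All records here are fabricated.
from collections import Counter
from datetime import datetime

# Each record: (plate, timestamp, camera location)
scans = [
    ("7ABC123", "2022-03-01 08:05", "Elm St & 3rd Ave"),
    ("7ABC123", "2022-03-01 18:40", "Oak Rd & Main St"),
    ("7ABC123", "2022-03-02 08:02", "Elm St & 3rd Ave"),
    ("7ABC123", "2022-03-02 18:45", "Oak Rd & Main St"),
    ("7ABC123", "2022-03-03 08:07", "Elm St & 3rd Ave"),
]

def pattern_of_life(records, plate):
    """Count how often a plate appears at each location, by time of day."""
    habits = Counter()
    for p, ts, loc in records:
        if p != plate:
            continue
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
        period = "morning" if hour < 12 else "evening"
        habits[(period, loc)] += 1
    return habits

habits = pattern_of_life(scans, "7ABC123")
# The most frequent (period, location) pairs suggest a commute route:
for (period, loc), n in habits.most_common():
    print(f"{period}: {loc} ({n} scans)")
```

With just five scans, the counts already expose a likely home-to-work commute; at the scale of 1.6 billion scans, the same grouping identifies routines, associates, and deviations from them.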
Finally, in order to drive a car, you need a license, and increasingly states are offering digital IDs. We dug deep into California’s mobile ID app, wrote about the various issues with mobile IDs—which range from equity to privacy problems—and put together an FAQ to help you decide if you’d even benefit from setting up a mobile ID if your state offers one. Digital IDs are a major concern for us in the coming years, both due to the unanswered questions about their privacy and security, and their potential use for government-mandated age verification on the internet.
The privacy problems of cars are of increasing importance, which is why Congress and the states must pass comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. While we tend to think of data privacy laws as dealing with computers, phones, or IoT devices, they’re just as applicable, and increasingly necessary, for cars, too.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Behind the Diner—Digital Rights Bytes: 2024 in Review
Although it feels a bit weird to be writing a year in review post for a site that hasn’t even been live for three months, I thought it would be fun to give a behind-the-scenes look at the work we did this year to build EFF’s newest site, Digital Rights Bytes.
Since each topic Digital Rights Bytes aims to tackle is in the form of a question, why not do this Q&A style?
Q: WHAT IS DIGITAL RIGHTS BYTES?
Great question! At its core, Digital Rights Bytes is a place where you can get honest answers to the questions that have been bugging you about technology.
The site was originally pitched as ‘EFF University’ (or EFFU, pun intended) to help folks who aren’t part of our core tech-savvy base get up-to-speed on technology issues that may be affecting their everyday lives. We really wanted Digital Rights Bytes to be a place where newbies could feel safe learning about internet freedom issues, get familiar with EFF’s work, and find out how to get involved, without feeling too intimidated.
Q: WHY DOES THE SITE LOOK SO DIFFERENT FROM OTHER EFF WORK?
With our main goal of attracting new readers, it was crucial to brand Digital Rights Bytes differently from other EFF projects. We wanted Digital Rights Bytes to feel like a place where you and your friend might casually chat over milkshakes—while being served pancakes by a friendly robot. We took that concept and ran with it, going forward with a full diner theme for the site. I mean, imagine the counter banter you could have at the Digital Rights Bytes Diner!
Take a look at the Digital Rights Bytes counter!
As part of this concept, we thought it made sense for each topic to be framed as a question. Of course, at EFF, we get a ton of questions from supporters and other folks online about internet freedom issues, including from our own family and friends. We took some of the questions we see fairly often, then decided which would be the most important—and most interesting—to answer.
The diner concept is why the site has a bright neon logo, pink and cyan colors, and a neat vintage-looking background on desktop. Even the GIF that plays on the home screen of Digital Rights Bytes shows our animal characters chatting ’round the diner (more on them soon!).
Q: WHY DID YOU MAKE DIGITAL RIGHTS BYTES?
Here’s the thing: technology continues to expand, evolve, and change—and it’s tough to keep up! We’ve all been the tech noob, trying to figure out why our devices behave the way they do, and it can be pretty overwhelming.
So, we thought that we could help out with that! And what better way to help educate newcomers than explaining these tech issues in short byte-sized videos:
A clip from the device repair video.
It took some time to nail down the style for the videos on Digital Rights Bytes. But, after some trial and error, we landed on using animals as our lead characters: (a) because they’re adorable, and (b) because it helped further emphasize the shadowy figures often trying to steal their data or make their tech worse for them. It’s often unclear who is trying to steal our data or rig tech to be worse for the user, so we thought this was fitting.
In addition to the videos, EFF issue experts wrote concise, easy-to-read pages further detailing each topic, with an emphasis on linking to other experts and including information on how you can get involved.
Q: HAS DIGITAL RIGHTS BYTES BEEN SUCCESSFUL?
You tell us! If you’re reading these Year In Review blog posts, you’re probably the designated “ask them every tech question in the world” person of your family. Why not send your family and friends over to Digital Rights Bytes and let us know if the site has been helpful to them!
We’re also looking to expand the site and answer more common questions you and I might hear. If you have suggestions, you should let us know here or on social media! Just use the hashtag #DigitalRightsBytes and we’ll be sure to consider it.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
NSA Surveillance and Section 702 of FISA: 2024 in Review
Mass surveillance authority Section 702 of FISA, which allows the government to collect international communications—many of which happen to have one side in the United States—has been renewed several times since its creation in the 2008 FISA Amendments Act. This law has been an incessant threat to privacy for over a decade because the FBI operates on a “finders keepers” rule of surveillance: it thinks that because the NSA has “incidentally” collected the U.S. side of conversations, it is now free to sift through them without a warrant.
But 2024 became the year this mass surveillance authority was not only reauthorized by a lion’s share of both Democrats and Republicans—it was also the year the law got worse.
After a tense fight, some temporary reauthorizations, and a looming expiration, Congress finally passed the Reforming Intelligence and Securing America Act (RISAA) in April 2024. RISAA not only reauthorized the mass surveillance capabilities of Section 702 without any of the necessary reforms that had been floated in previous bills, it also enhanced its powers by expanding what it can be used for and who has to adhere to the government’s requests for data.
While Section 702 was enacted under the guise of targeting people not on U.S. soil to assist with national security investigations, there are no such narrow limits on the use of communications acquired under the mass surveillance law. Following the passage of RISAA, this private information can now be used to vet immigration and asylum seekers and conduct intelligence for broadly construed “counter narcotics” purposes.
The bill also included an expanded definition of “Electronic Communications Service Provider,” or ECSP. Under Section 702, anyone who oversees the storage or transmission of electronic communications—be it emails, text messages, or other online data—must cooperate with the federal government’s requests to hand over data. Under the expanded definition of ECSP, there are intense and well-founded fears that anyone who hosts servers or websites or provides internet to customers—or even just people who work in the same building as these providers—might be forced to become a tool of the surveillance state. As of December 2024, the fight is still on in Congress to clarify, narrow, and reform the definition of ECSP.
The one merciful change to come out of the 2024 fight over Section 702’s renewal was that the reauthorization lasts only two years. That means that in spring 2026 we have to be ready to fight again to bring meaningful change, transparency, and restriction to Big Brother’s favorite law.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
[Okinawa Report] Soil Hauled Away, a Mountain Disappears, and a Typhoon Strikes = Etsuko Urashima
Carry the Nobel Peace Prize Forward! International Citizens’ Forum to Abolish Nuclear Weapons to Be Held Next February
Labor Festa: Venue Questionnaire
[Weekly Book Discovery] “The Unending PFOA Contamination: In a Country with a System That Preserves Pollution”
Connecting Through Culture Is a Source of Strength for Living: Labor Festa 2024 a Great Success
Global Age Verification Measures: 2024 in Review
EFF has spent this year urging governments around the world, from Canada to Australia, to abandon their reckless plans to introduce age verification for a variety of online content under the guise of protecting children online. Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy, and introduce more harm than they seek to combat.
Kids Experiencing Harm Is Not Just an Online Phenomenon
In November, Australia’s Prime Minister, Anthony Albanese, claimed that legislation was needed to protect young people in the country from the supposed harmful effects of social media. Australia’s Parliament later passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which bans children under the age of 16 from using social media and forces platforms to take undefined “reasonable steps” to verify users’ ages or face over $30 million in fines. This is similar to last year’s ban on social media access for children under 15 without parental consent in France, and Norway has also pledged to follow with a similar ban.
No study shows such a harmful impact, and kids don’t need to fall into a wormhole of internet content to experience harm—there is a whole world outside the barriers of the internet that contributes to people’s experiences, and all evidence suggests that many young people experience positive outcomes from social media. Truthful news about what’s going on in the world, such as wars and climate change, is available both online and by seeing a newspaper on the breakfast table or a billboard on the street. Young people may also be subject to harmful behaviors like bullying in the offline world, as well as online.
The internet is a valuable resource for both young people and adults who rely on the internet to find community and themselves. As we said about age verification measures in the U.S. this year, online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will all have to beg minors to leave and institute age verification tools to ensure that it happens.
Limiting Access for Kids Limits Access for Everyone
Through this wave of age verification bills, governments around the world are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and groups that serve them the most. History shows that over-censorship is inevitable.
This year, Canada also introduced an age verification measure, bill S-210, which seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. This was introduced to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.” But requiring people of all ages to show ID to get online won’t help women or young people. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. This creates a legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who just transmit it–knowingly or not.
Without Comprehensive Privacy Protections, These Bills Exacerbate Data Surveillance
Under mandatory age verification requirements, users will have no way to be certain that the data they’re handing over is not going to be retained and used in unexpected ways, or even shared with unknown third parties. Millions of adult internet users would also be entirely blocked from accessing protected speech online because they are not in possession of the required form of ID.
Online age verification is not like flashing an ID card in person to buy particular physical items. In places that lack comprehensive data privacy legislation, the risk of surveillance is extensive. First, a person who submits identifying information online can never be sure if websites will keep that information, or how that information might be used or disclosed. Without requiring all parties who may have access to the data to delete that data, such as third-party intermediaries, data brokers, or advertisers, users are left highly vulnerable to data breaches and other security harms at companies responsible for storing or processing sensitive documents like drivers’ licenses.
Second, and unlike in-person age-gates, the most common way for websites to comply with a potential verification system would be to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information. In a brief to a U.S. court, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns. People shouldn't have to disclose to the government what websites they're looking at—which could reveal sexual preferences or other extremely private information—in order to get information from that website.
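The data-linkage risk described above can be made concrete with a small sketch. Everything below is synthetic and hypothetical: the names, tokens, and site addresses are invented. The point is only to show how trivially a retained verification log joins to an access log, tying a legal identity to sensitive reading habits.

```python
# Illustrative sketch (all data fabricated): if an age-verification provider
# retains which ID was used for which session, a simple join links real
# identities to the sites a person visited.
verification_log = [
    # (session_token, name on submitted ID)
    ("t1", "Alice Example"),
    ("t2", "Bob Example"),
    ("t3", "Alice Example"),
]
site_access_log = [
    # (session_token, site visited)
    ("t1", "mental-health-forum.example"),
    ("t2", "news.example"),
    ("t3", "lgbtq-support.example"),
]

# Join the two logs on the session token.
tokens_to_names = dict(verification_log)
profile = {}
for token, site in site_access_log:
    name = tokens_to_names.get(token, "unknown")
    profile.setdefault(name, []).append(site)

# One dictionary now maps a legal name to sensitive browsing history.
print(profile["Alice Example"])
```

Nothing clever is required: as long as both logs exist, the join is one loop, which is why deletion requirements (or not collecting the ID at all) matter far more than promises about how the data will be used.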
These proposals are coming to the U.S. as well. We analyzed various age verification methods in comments to the New York Attorney General. None of them are both accurate and privacy-protective.
The Scramble to Find an Effective Age Verification Method Shows There Isn’t One
The European Commission is also currently working on guidelines for the implementation of the child safety article of the Digital Services Act (Article 28) and may come up with criteria for effective age verification. In parallel, the Commission has asked for proposals for a 'mini EU ID wallet' to implement device-level age verification ahead of the expected rollout of digital identities across the EU in 2026. At the same time, smaller social media companies and dating platforms have for years been arguing that age verification should take place at the device or app-store level, and will likely support the Commission's plans. As we move into 2025, EFF will continue to follow these developments as the Commission’s apparent expectation that porn platforms adopt age verification to comply with their risk mitigation obligations under the DSA becomes clearer.
Mandatory age verification is the wrong approach to protecting young people online. In 2025, EFF will continue urging politicians around the globe to acknowledge these shortcomings, and to explore less invasive approaches to protecting all people from online harms.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review
The phrase “move fast and break things” carries pretty negative connotations in these days of (Big) techlash. So it’s surprising that state and federal policymakers are doing just that with the latest big issue in tech and the public consciousness: generative AI, or more specifically its uses to generate deepfakes.
Creators of all kinds are expressing a lot of anxiety around the use of generative artificial intelligence, some of it justified. The anxiety, combined with some people’s understandable sense of frustration that their works were used to develop a technology that they fear could displace them, has led to multiple lawsuits.
But while the courts sort it out, legislators are responding to heavy pressure to do something. And it seems their highest priority is to give new or expanded rights to protect celebrity personas–living or dead–and the many people and corporations that profit from them.
The broadest “fix” would be a federal law, and we’ve seen several proposals this year. The two most prominent are NO AI FRAUD (in the House of Representatives) and NO FAKES (in the Senate). The first, introduced in January 2024, purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad range of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. It also characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs, because Section 230 immunity does not apply to federal IP claims. NO FAKES, introduced in April, is not significantly different.
There’s a host of problems with these bills, and you can read more about them here and here.
A core problem is that these bills are modeled on the broadest state laws recognizing a right of publicity. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products—but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny,”) or even a cartoonish robot dressed like a celebrity. It’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games.
And states are taking swift action to further expand publicity rights. Take this year’s digital replica law in Tennessee, called the ELVIS Act because of course it is. Tennessee already gave celebrities (and their heirs) a property right in their name, photograph, or likeness. The new law extends that right to voices, expands the risk of liability to include anyone who distributes a likeness without permission, and limits some speech-protective exceptions.
Across the country, California couldn’t let Tennessee win the race for most restrictive/protective rules for famous people (and their heirs). So it passed AB 1836, creating liability for anyone who uses a deceased personality’s name, voice, signature, photograph, or likeness, in any manner, without consent. There are a number of exceptions, which is better than nothing, but those exceptions are pretty confusing for people who don’t have lawyers to help sort them out.
These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction.
We get it–everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.