European Commission Gets Dinged for Unlawful Data Transfer, Sending a Big Message About Accountability

10 hours 55 minutes ago

The European Commission was caught failing to comply with its own data protection regulations and, in a first, ordered to pay damages to a user for the violation. The €400 ($415) award may be tiny compared to fines levied against Big Tech by European authorities, but it’s still a win for users and considerably more than just a blip for the “talk about embarrassing” file at the commission.

The case, Bindl vs. EC, underscores the principle that when people’s data is lost, stolen, or shared without promised safeguards—which can lead to identity theft, cause uncertainty about who has access to the data and for what purpose, or place our names and personal preferences in the hands of data brokers—they’ve been harmed and have the right to hold those responsible accountable and seek damages.

Some corporations, courts, and lawmakers in the U.S. need to learn a thing or two about this principle. Victims of data breaches are subject to anxiety and panic that their social security numbers and other personal information, even their passport numbers, are being bought and sold on the dark web to criminals who will use the information to drain their bank accounts or demand a ransom not to.

But when victims try to go to court, the companies that failed to protect their data in the first place sometimes say tough luck—unless you actually lose money, they say you’re not really harmed and can’t sue. And courts in many cases go along with this.

The EC debacle arose when a German citizen using the commission’s website to register for a conference was offered the option to sign in using Facebook, which he did—a common practice that, surprise, surprise, can and does give U.S.-based Facebook access to signees’ personal information.

Here’s the problem: In the EU, the General Data Protection Regulation (GDPR), a comprehensive and far-reaching data privacy law that came into effect in 2018, and a related law that applies to EU institutions, Regulation (EU) 2018/1725, require entities that handle personal data to abide by certain rules for collecting and transferring it. They must, for instance, ensure that transfers of someone’s personal information, such as their IP address, to countries outside the EU are adequately protected.

The GDPR also gives users significant control over their data, such as requiring data processors to obtain users’ clear consent to handle their personal data and allowing users to seek compensation if their privacy rights are infringed—although the regulation is silent on how damages should be assessed.

In what it called a “sufficiently serious breach,” a condition for awarding damages, the European General Court, which hears actions against EU institutions, found that the EC violated EU privacy protections when, in 2022, it facilitated the transfer of German citizen Thomas Bindl’s IP address and other personal data to Meta, owner of Facebook. The transfer was unlawful because there were no agreements at the time that adequately protected EU users’ data from U.S. government surveillance and weak U.S. data privacy laws.

“…personal data may be transferred to a third country or to an international organisation only if the controller or processor has provided appropriate safeguards, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available,” the court said. “In the present case, the Commission has neither demonstrated nor claimed that there was an appropriate safeguard, in particular a standard data protection clause or contractual clause…”

(The EC in 2023 adopted the EU-US Data Privacy Framework to facilitate personal data transfers between the U.S. and EU member states, Great Britain, and Switzerland, with protections that are supposed to be consistent with EU, UK, and Swiss law and to limit U.S. intelligence services’ access to personal data transferred to America.)

Bindl sought compensation for non-material—that is, not involving direct financial loss—damages because the transfer caused him to lose control of his data and deprived him of his rights and freedoms.

Applying standards it had set in a data mishandling case from Austria involving non-material damage claims, the court said he was entitled to such damages because the commission had violated the GDPR-like regulation 2018/1725 and the damages he suffered were caused by the infringement.

Importantly, the court specified that the right to compensation doesn’t hinge on an assessment of whether the harms are serious enough to take to court, a condition that some EU member state courts have used to dismiss non-material damage claims.

Rather, it was enough that the data transfer put Bindl “in a position of some uncertainty as regards the processing of his personal data, in particular of his IP address,” the court said. This is a criterion that could benefit other plaintiffs seeking non-material damages for the mishandling of their data, said Tilman Herbrich, Bindl’s attorney.

Noting the ease with which IP addresses can be used to connect a person to an existing online profile and exploit their data, Bindl, in conversation with The International Association of Privacy Professionals (IAPP), said “it’s totally clear that this was more than just this tiny little piece of IP address, where people even tend to argue whether it’s PII (personally identifiable information) or not.” Bindl is the founder of EuGD European Society for Data Protection, a Munich-based litigation funder that supports complainants in data protection lawsuits.

The court’s decision recognizes that losing control of your data causes real non-material harm, and shines a light on why people are entitled to seek compensation for emotional damage, probably without the need to demonstrate a minimum threshold of damage.

EFF has stood up for this principle in U.S. courts against corporate giants who—after data thieves penetrate their inadequate security systems, exposing millions of people’s private information—claim in court that victims haven’t really been injured unless they can prove a specific economic harm on top of the obvious privacy harm.

In fact, negligent data breaches inflict grievous privacy harms in and of themselves, and so the victims have “standing” to sue in federal court—without the need to prove more.

Once data has been disclosed, it is often pooled with other information, some gathered consensually and legally and some gathered from other data breaches or through other illicit means. That pooled information is then used to create inferences about the affected individuals for purposes of targeted advertising, various kinds of risk evaluation, identity theft, and more.

In the EU, the Bindl case could bring more legal certainty to individuals and companies about damages for data protection violations and perhaps open the door to collective-action lawsuits. To the extent that the case was brought to determine whether the EC follows its own rules, the outcome was decisive.

The commission “should set the standard in terms of implementation of how they are doing it,” Bindl said. “If anyone is looking at somebody who is doing it perfectly right, it should be the commission, right?”

 

Karen Gullo

Key Issues Shaping State-Level Tech Policy

1 day 9 hours ago

We’re taking a moment to reflect on the 2024 state legislative session and what it means for the future of digital rights at the state level. Informed by insights from the State of State Technology Policy 2024 report by NYU’s Center on Technology Policy and EFF’s own advocacy work in state legislatures, this blog breaks down the key issues (Privacy, Children’s Online Safety, Artificial Intelligence, Competition, Broadband and Net Neutrality, and Right to Repair), taking a look back at last year’s developments while also offering a preview of the challenges and trends we can expect in state-level tech policy in the years ahead. 

To jump ahead to a specific issue, you can click on the hyperlinks below: 

Privacy

Children’s Online Safety and Age Verification

Artificial Intelligence

Competition

Broadband and Net Neutrality

Right to Repair

Privacy

State privacy legislation saw notable developments in 2024, with Maryland adopting a stronger privacy law that includes enhanced protections, such as prohibiting targeted advertising to teens, requiring opt-in consent to process health data, and broadening the definition of sensitive data to include location data. This places Maryland’s law ahead of similar measures in other states. In total, seven states—Kentucky, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Rhode Island—joined the ranks of states with comprehensive privacy laws last year, regulating the practices of private companies that collect, store, and process personal data. This expands on the 12 states that had already passed similar legislation in previous years (for a total of 19). Additionally, several of the laws passed in previous years went into effect in 2024.

In 2025, states are expected to continue enacting privacy laws based on the flawed Washington Privacy Act model, though states like Maryland have set a new standard. We still believe these bills must be stronger. States will likely also take the lead in pursuing issue-specific privacy laws covering genetic, biometric, location, and health data, filling gaps where federal action is unlikely (or likely to be weakened by business pressure).

Private Right of Action

A key issue in privacy regulation remains the debate over a private right of action (PRA), which would allow individuals to sue companies for privacy violations and is one of EFF’s main recommendations for comprehensive consumer privacy legislation. Strong enforcement sits at the top of EFF’s recommendations for privacy bills for good reason. A report from EPIC and the U.S. PIRG Education Fund highlighted that many state privacy laws provide minimal consumer protections largely due to the absence of private rights of action. Without a PRA, companies are often not held accountable for violations unless state or federal regulators take action, which is both slow and inconsistent. This leaves consumers vulnerable and powerless, unable to directly seek recourse for harm done to their privacy. Unless companies face serious consequences for violating our privacy, they’re unlikely to put our privacy ahead of their profits.

While the California Consumer Privacy Act (CCPA) includes a limited PRA in cases of a “personal information security breach” only, it is alarming that no new comprehensive laws passed in 2023 or 2024 included a PRA. This reluctance to support a PRA reveals how businesses resist the kind of accountability that would force them to be more transparent and responsible with consumer data. Vermont’s 2024 comprehensive consumer privacy bill proposed a PRA in its bill language. Unfortunately, that bill was vetoed by Gov. Phil Scott, demonstrating how powerful corporate interests can undermine consumer rights for the sake of their own convenience.

Consumer Privacy and Government Records

Comprehensive consumer privacy legislation outlined above primarily focuses on regulating the practices of private companies that collect, store, and process personal data. However, these laws do not target the handling of personal information by government entities at the state and local levels. Strong legislation is essential for protecting data held by these public agencies, as government records can contain sensitive and comprehensive personal information. For example, local governments may store data on residents’ health records, criminal history, or education. This sensitive data, if mishandled or exposed, can lead to significant privacy breaches. A case in point is when local police departments share facial recognition or ALPR data, raising privacy concerns about unauthorized surveillance and misuse. As tensions rise between federal, state, and local governments, there will be greater focus on data sharing between these entities, increasing the likelihood of the introduction of new laws to protect that data.

A notable example of the need for such legislation is California’s Information Practices Act (IPA) of 1977, which sets privacy guidelines for state agencies. The IPA limits the collection, maintenance, and dissemination of personal information by California state agencies, including sensitive data such as medical records. However, the IPA excludes local governments from these privacy protections, meaning counties and municipalities—which also collect vast amounts of personal data—are not held to the same standards. This gap leaves many individuals without privacy safeguards at the local government level, highlighting the need for stronger and more inclusive privacy legislation that addresses the data practices of both state and local entities—even beyond California.

Right to Delete and DELETE Act

Data brokers are a major issue when it comes to the irresponsible handling of our personal information. These companies gather vast amounts of personal data and sell it with minimal oversight, often including highly sensitive details like purchasing habits, financial records, social media activity, and precise location tracking. The unregulated trade of this information opens the door to scams, identity theft, and financial exploitation, as individuals become vulnerable to misuse of their private data. This is why EFF supported the California “DELETE Act” in 2023, which allows people to easily and efficiently make one request to delete their personal information held by all data brokers. The law went into effect in 2024, and the deletion mechanism is expected by January 2026—marking a significant step in consumer privacy rights. 

Consumers in 19 states have a right to request that companies delete information collected about them, and these states represent the growing trend to expand consumer rights regarding personal data. However, because a “right to delete” that exists in comprehensive privacy laws requires people to file requests with each individual data broker that may have their information, it can be an incredibly time-consuming and tedious process. Because of this, the California DELETE Act’s “one-stop shop” is particularly notable in setting a precedent for other states. In fact, Nebraska has already introduced LB602 for the 2025 legislative session, modeled after California's law, further demonstrating the momentum for such legislation. We hope to see more states adopt similar laws, making it easier for consumers to protect their data and enforce their privacy rights.

Issue-specific Privacy Legislation

In 2024, several states passed issue-specific privacy laws addressing concerns around biometric data, genetic privacy, and health information. 

Regarding biometric privacy, Maryland, New York, Utah, and Virginia imposed restrictions on the use of biometric identifying technologies by law enforcement, with Maryland specifically limiting facial recognition technology in criminal proceedings to certain high-crime investigations and Utah requiring a court order for any police use of biometrics, unless a public safety threat is present. 

Conversely, states like Oklahoma and Florida expanded law enforcement use of biometric data, with Oklahoma mandating biometric data collection from undocumented immigrants, and Florida allocating nearly $12 million to enhance its biometric identification technology for police. 

In the realm of genetic information privacy, Alabama and Nebraska joined 11 other states by passing laws that require direct-to-consumer genetic testing companies to disclose their data policies and implement robust security measures. These companies must also obtain consumer consent if they intend to use genetic data for research or sell it to third parties.

Lastly, in response to concerns about the sharing of reproductive health data due to state abortion bans, several states introduced and passed location data privacy and health data privacy legislation, with more anticipated in 2025 due to heightened scrutiny over location data trackers and the evolving federal landscape surrounding reproductive rights and gender-affirming care. Among those, nineteen states have enacted shield laws to prohibit sensitive data from being disclosed for out-of-state legal proceedings involving reproductive health activities.

State shield laws vary, but most prevent state officials, including law enforcement and courts, from assisting out-of-state investigations or prosecutions of protected healthcare activities. For example, a state judge may be prohibited from enforcing an out-of-state subpoena for abortion clinic location data, or local police could be barred from aiding the extradition of a doctor facing criminal charges for performing an abortion. In 2023, EFF supported A.B. 352, which extended the protections of California's health care data privacy law to apps such as period trackers. Washington also passed the "My Health, My Data Act" (H.B. 1155) that year, which, among other protections, prohibits the collection of health data without consent.

Children’s Online Safety and Age Verification

Children’s online safety emerged as a key priority for state legislatures in the last few years, with significant variations in approach between states. In 2024, some states adopted age verification laws for both social media platforms and “adult content” sites, while others concentrated on imposing design restrictions on platforms and data privacy protections. For example, California and New York both enacted laws restricting “addictive feeds,” while Florida, Mississippi, and Tennessee enacted new age verification laws to regulate young people’s access to social media and to “sexual” content online.

None of the three states has implemented its social media age verification law, however. Courts blocked Mississippi and Tennessee from enforcing their laws, while Florida Attorney General Ashley Moody, known for aggressive enforcement of controversial laws, has chosen not to enforce the social media age verification part of the bill. She’s also asked the court to pause the lawsuit against the Florida law until the U.S. Supreme Court rules on Texas's age verification law—a case that covers only the “sexual content” provisions and does not address social media age checks.

In 2025, we hope to see a continued trend to strengthen privacy protections for young people (and adults alike). Unfortunately, we also expect state legislatures to continue refining and expanding age verification and “addictive platform” regulation for social media, as well as rules on so-called “materials harmful to minors,” with ongoing legal challenges shaping the landscape.

Targeted Advertising and Children 

In response to the growing concerns over data privacy and advertising, Louisiana banned the practice of targeting ads to minors. Seven other states also enacted comprehensive privacy laws requiring platforms to obtain explicit consent from minors before collecting or processing their data. Colorado, Maryland, New York, and Virginia went further, extending existing privacy protections with stricter rules on data minimization and requiring impact assessments for heightened risks to children's data.

Artificial Intelligence

2024 marked a major milestone in AI regulation, with Colorado becoming the first state to pass what many regard as comprehensive AI legislation. The law requires both developers and deployers of high-risk AI systems to implement impact assessments and risk management frameworks to protect consumers from algorithmic discrimination. Other states, such as Texas, Connecticut, and Virginia, have already begun to follow suit in the 2025 legislative session, and lawmakers in many states are discussing similar bills.

However, not all AI-related legislation has been met with consensus. One of the most controversial has been California’s S.B. 1047, which aimed to regulate AI models that might have "catastrophic" effects. While EFF supported some aspects of the bill—like the creation of a public cloud-computing cluster (CalCompute)—we were concerned that it focused too heavily on speculative, long-term catastrophic outcomes, such as machines going rogue, instead of addressing the immediate, real-world harms posed by AI systems. We believe lawmakers should focus on creating regulations that address actual, present-day risks posed by AI, rather than speculative fears of future catastrophe. After a national debate over the bill, Gov. Newsom vetoed it. Sen. Wiener has already refiled the bill.

States also continued to pass narrower AI laws targeting non-consensual intimate imagery (NCII), child sexual abuse material (CSAM), and political deepfakes during the 2024 legislative session. Given that it was an election year, the debate over the use of AI to manipulate political campaigns also escalated. Fifteen states now require political advertisers to disclose the use of generative AI in ads, with some, like California and Mississippi, going further by banning deceptive uses of AI in political ads. Legal challenges, including one in California, will likely continue to shape the future of AI regulations in political discourse.

More states are expected to introduce and debate comprehensive AI legislation based on Colorado’s model this year, as well as narrower AI bills, especially on issues like NCII deepfakes and AI-generated CSAM. The legal and regulatory landscape for AI in political ads will continue to evolve, with further lawsuits and potential new legislation expected in 2025.

Lastly, it’s also important to recognize that states and local governments themselves are major technology users. Their procurement and use of emerging technologies, such as AI and facial recognition, is itself a form of tech policy. As such, we can expect states to introduce legislation around the adoption of these technologies by government agencies, likely focusing on setting clear standards and ensuring transparency in how these technologies are deployed. 

Competition

On the competition front, several states, including New York and California, made efforts to strengthen antitrust laws and tackle monopolistic practices in Big Tech. While progress was slow, New York's Twenty-First Century Antitrust Act aimed to create a stricter antitrust framework, and the California Law Revision Commission’s ongoing review of the Cartwright Act could lead to modernized recommendations in 2025. Delaware also passed SB 296, which amends the state’s antitrust law to allow a private right of action. 

Despite the shifts in federal enforcement, bipartisan concerns about the influence of tech companies will likely ensure that state-level antitrust efforts continue to play a critical role in regulating corporate power.

Broadband and Net Neutrality

As federal efforts to regulate broadband and net neutrality have stalled, many states have taken matters into their own hands. California, Washington, Oregon, and Vermont have already passed state-level net neutrality laws aimed at preventing internet service providers (ISPs) from blocking, throttling, or prioritizing certain content or services for financial gain. With the growing frustration over the federal government’s inaction on net neutrality, more states are likely to carry the baton in 2025. 

States will continue to play an increasingly critical role in protecting consumers' online freedoms and ensuring that broadband access remains affordable and equitable. This is especially true as more communities push for expanded broadband access and better infrastructure.

Right to Repair

Another key tech issue gaining traction in state legislatures is the Right to Repair. In 2024, California and Minnesota’s Right-to-Repair legislation went into effect, granting consumers the right to repair their electronics and devices independently or through third-party repair services. These laws require manufacturers of devices like smartphones, laptops, and other electronics to provide repair parts, tools, and manuals to consumers and repair shops. Oregon and Colorado also passed similar legislation in 2024.

States will likely continue to pass right-to-repair legislation in 2025, with advocates expecting between 25 and 30 bills to be introduced across the country. These bills will likely expand on existing laws to include more products, from wheelchairs to home appliances and agricultural equipment. As public awareness of the benefits of the Right to Repair grows, legislators will be under increasing pressure to support consumer rights, promote environmental sustainability, and combat planned obsolescence.

Looking Ahead to the Future of State-Level Digital Rights

As we reflect on the 2024 state legislative session and look forward to the challenges and opportunities of 2025, it’s clear that state lawmakers will continue to play a pivotal role in shaping the future of digital rights. From privacy protections to AI regulation, broadband access, and the right to repair, state-level policies are crucial to safeguarding consumer rights, promoting fairness, and fostering innovation.

As we enter the 2025 legislative session, it’s vital that we continue to push for stronger policies that empower consumers and protect their digital rights. The future of digital rights depends on the actions we take today. Whether it’s expanding privacy protections, ensuring fair competition, or passing comprehensive right-to-repair laws, now is the time to push for change.

Join us in holding your state lawmakers accountable and pushing for policies that ensure digital rights for all.

Rindala Alajaji

How State Tech Policies in 2024 Set the Stage for 2025

1 day 9 hours ago

EFF has been at the forefront of defending civil liberties in the digital age, with our activism team working across state, federal, and local levels to safeguard everyone's rights in the rapidly evolving tech landscape. As federal action on technology policy often lags, many are looking to state governments to lead the way in addressing tech-related issues. 

Drawing insights from the State of State Technology Policy 2024 report by NYU’s Center on Technology Policy and EFF's own experiences advocating in state legislatures, this blog offers a breakdown on why you should care about state policy, the number of bills passed around the country, and a look forward to the coming challenges and trends in state-level tech policy.

Why Should You Care?

State governments are increasingly becoming key players in tech policy, moving much faster than the federal government. This has become especially apparent in 2024, when states enacted significantly more legislation regulating technology than in previous years.

“Why?” you may ask. In 2024, state legislatures were the most partisan they’ve been in decades, and we saw a notable increase in the presence of "trifecta" governments—states where one political party controls both chambers of the legislature and the governorship. With this unified control, states can pass laws more easily and quickly.

Forty states operated under such single-party rule in 2024, the most in at least three decades. Among the 40 trifecta states, 29 also had veto-proof supermajorities, meaning legislation can pass regardless of gubernatorial opposition. This overwhelming single-party control helped push through new tech regulations, with the Center on Technology Policy reporting that 89 percent of all tech-related bills passed in trifecta states. Even with shifts in the 2024 elections, where at least two states—Michigan and Minnesota—lost their trifectas, the trend of state governments driving technology policy is unlikely to slow down anytime soon.

2024 in Numbers: A Historic Year for State Tech Policy

According to the State of State Technology Policy 2024 report by NYU’s Center on Technology Policy:

  • 238 technology-related bills passed across 46 states, marking a 163% increase from the previous year.
  • 20 states passed 28 privacy-related bills, including 7 states enacting laws similar to the industry-supported Washington Privacy Act.
  • 18 states passed laws regulating biometric data, with 2 states introducing genetic privacy protections.
  • 23 states passed 48 laws focused on “online child safety,” primarily targeting age verification for adult content and regulating social media.
  • 41 states passed 107 bills regulating AI.
  • 22 states passed laws addressing Non-Consensual Intimate Images (NCII) and child sexual abuse material (CSAM) generated or altered by AI or digital means.
  • 17 states enacted 22 laws regulating the use of generative AI in political campaigns.
  • 6 states created 19 new commissions, task forces, and legislative committees to assess the impact of AI and explore its regulation or beneficial use. For example, California created a working group to guide the safe use of AI in education.
  • 15 states passed 18 bills related to funding AI research or initiatives. For example, Nebraska allocated funds to explore how AI can assist individuals with dyslexia.
  • 3 states made incremental changes to antitrust laws, while 6 states joined federal regulators in pursuing 6 significant cases against tech companies for anticompetitive practices.
  • California passed the most tech-related legislation in 2024, with 26 bills, followed by Utah, which passed 13 bills.
Looking Ahead: What to Expect in 2025

2025 will be a critical year for state tech policy, and we expect several trends to persist: state governments will continue to prioritize technology policy, leveraging their political compositions to enact new laws faster than the federal government. We expect state legislatures to continue ongoing efforts to regulate AI, online child safety, and other pressing issues, with states taking a proactive role in shaping the future of tech regulation. We should also recognize that states and local governments are technology users, and that their procurement and use of technology is itself a form of tech policy. States are also likely to introduce legislation around the procurement and use of emerging technologies like AI and facial recognition by government agencies, aiming to set clear standards and ensure transparency in their adoption—an issue EFF plans to monitor and address in more detail in future blog posts and resources. Legislative priorities will be influenced by federal inaction or shifts in policy, as states step in to fill gaps and drive national discussions on digital rights.

Much depends on the direction of federal leadership. Some states may push forward with their own tech regulations. Others may hold off, waiting for federal action. We might also see some states act as a counterbalance to federal efforts, particularly in areas like platform content moderation and data privacy, where the federal government could potentially impose restrictive policies. 

For a deep dive on how the major tech issues fared in 2024 and our expectations for 2025, check out our blog post: Key Issues Shaping State-Level Tech Policy.

EFF will continue to be at the forefront, working alongside lawmakers and advocacy partners to ensure that digital rights remain a priority in state legislatures. As state lawmakers take on critical issues like privacy protections and facial recognition technology, we’ll be there to help guide these conversations and promote policies that address real-world harms. 

We encourage our supporters to join us in these efforts—your voice and activism are crucial in shaping a future where tech serves the public good, not just corporate interests. To stay informed about ongoing state-level tech policy and to learn how you can get involved, follow EFF’s updates and continue championing digital rights with us. 

Rindala Alajaji

Open Licensing Promotes Culture and Learning. That's Why EFF Is Upgrading its Creative Commons Licenses.

1 day 13 hours ago

At EFF, we’re big fans of the Creative Commons project, which makes copyright work in empowering ways for people who want to share their work widely. EFF uses Creative Commons licenses on nearly all of our public communications. To highlight the importance of open licensing as a tool for building a shared culture, we are upgrading the license on our website to the latest version, Creative Commons Attribution 4.0 International.

Open licenses like Creative Commons are an important tool for sharing culture and learning. They allow artists and creators a simple way to encourage widespread, free distribution of their work while keeping just the rights they want for themselves—such as the right to be credited as the work’s author, the right to modify the work, or the right to control commercial uses.

Without tools like Creative Commons, copyright is frequently a roadblock to sharing and preserving culture. Copyright is ubiquitous, applying automatically to most kinds of creative work from the moment they are “fixed in a tangible medium.” Copyright carries draconian penalties unknown in most areas of U.S. law, like “statutory damages” with no proof of harm and the possibility of having to pay the rightsholder’s attorney fees. And it can be hard to learn who owns a copyright in any given work, given that copyrights can last a century or more. All of these make it risky and expensive to share and re-use creative works, or sometimes even to preserve them and make them accessible to future generations.

Open licensing helps culture and learning flourish. With many millions of works now available under Creative Commons licenses, creators and knowledge-seekers have reassurance that these works of culture and learning can be freely shared and built upon without risk.

The current suite of Creative Commons licenses has thoughtful, powerful features. It’s written to work effectively in many countries, using language that can be understood in the context of different copyright laws around the world. It addresses legal regimes other than copyright that can interfere with free re-use of creative materials, like database rights, anti-circumvention laws, and rights of publicity or personality.

And importantly, the 4.0 licenses also make clear that giving credit to the author (something all of the Creative Commons licenses require) can be done in various ways, and that technical failures don't expose users to lawsuits by copyright trolls.

At EFF, we want our work to be seen and shared widely. That’s why we’ve made our content available under Creative Commons licenses for many years. Today, in that spirit, we are updating the license for most materials on our website, www.eff.org, to Creative Commons Attribution 4.0 International.

Mitch Stoltz

Copyright is a Civil Liberties Nightmare

4 days 14 hours ago

If you’ve got lawyers and a copyright, the law gives you tremendous power to silence speech you don’t like. Copyright’s statutory damages can be as high as $150,000 per work infringed, even if no actual harm is done. This makes it far too dangerous to rely on copyright’s limitations and exceptions, like fair use, as you may face a financial death sentence if a court decides you got it wrong. Most would-be speakers back down in the face of such risks, no matter how legitimate their use. The Digital Millennium Copyright Act provides an incentive for platforms to remove content on your say-so, without a judge ever reviewing your papers. The special procedures and damages available to copyright owners make it one of the most appealing mechanisms for removing unwanted speech from the internet.

Copyright owners have intimidated researchers away from disclosing that their software spies on users or is full of bugs that make it unsafe. When a blockbuster entertainment product inspires people to tell their own stories by depicting themselves in the same world or costumes, a letter from the studio’s lawyers will usually convince them to stay silent. And those who sell software write their own law into End User License Agreements and can threaten any user who disobeys them with copyright damages.

Culture has always been a conversation, not a product that is packaged up for consumption.

These are only a few of the ways that copyright is a civil liberties nightmare in the modern age, and only a few of the abuses of copyright that we fight against in court. Copyright started out as a way for European rulers to ensure that publishers remained friendly to the government, and we still see this dynamic in the cozy relationship between Hollywood and the US military and police forces. But more and more it’s been a way for private entities that are already powerful to prevent both market competition and contrary ideas from challenging their dominance.

The imbalance of power between authors and the owners of mass media is the main reason that authors only get a small share of the value they create. Copyright is at its best when it protects a creator from being beaten to market by those who own mass media channels, giving them some leverage to negotiate. With that small bit of leverage, they can get paid something rather than nothing, though the publishing deals in highly concentrated industries are famously one-sided.

But, too often, we see copyright at its worst instead, and there is no good reason for copyright law to be as broad and draconian as it is now. It lasts essentially forever: you will probably be dead before the works you cherished as a child enter the public domain. It is uniquely favored by the courts as a means for controlling speech, with ordinary First Amendment considerations taking a back seat to the interests of content owners. The would-be speaker has to prove their right to speak: for example, by persuading a court that they were making a fair use. And the penalties for a court deciding your use was infringing are devastating. It’s even used as a supposed justification for spying on and filtering the internet. Anyone familiar with automated copyright controls like ContentID on YouTube knows how restrictive they tend to be.

Bizarrely, copyright has grown so broad that it doesn’t just bar others from reproducing a work or adapting it into another medium such as film, it even prevents making original stories with a character or setting “owned” by the copyright owner. For the vast majority of our history, humans have built on and retold one another’s stories. Culture has always been a conversation, not a product that is packaged up for consumption.

The same is true for innovation, with a boom in software technology coming before copyright was applied to software. And, thanks to free software licenses that remove the default, restrictive behavior of copyright, we have communities of scrappy innovators building tools that we all rely upon for a functioning internet. When the people who depend upon a technology have a say in creating it and have the option to build their own to suit their needs, we’re much more likely to get technology that serves our interests and respects our privacy and autonomy. That's far superior to technology that comes into our homes as an agent of its creators, seeking to exploit us for advertising data, or limit our choices of apps and hardware to serve another’s profit motive.

EFF has been at the vanguard for decades, fighting back against copyright overreach in the digital world. More than ever, people need to be able to tell their stories, criticize the powerful and the status quo, and to communicate with technologies that aren’t censored by overzealous copyright bots.

Kit Walsh

Executive Order to the State Department Sideswipes Freedom Tools, Threatens Censorship Resistance, Privacy, and Anonymity of Millions

5 days 12 hours ago

In the first weeks of the Trump Administration, we have witnessed a spate of sweeping, confusing, and likely unconstitutional executive orders, including some that have already had devastating human consequences. EFF is tracking many of them, as well as other developments that impact digital rights. 

Right now, we want to draw attention to one of the executive orders that directly impacts the freedom tools that people around the world rely on to safeguard their security, privacy, and anonymity. EFF understands how critical these tools are – protecting the ability to make and share anti-censorship, privacy-protecting, and anonymity-protecting technologies has been central to our work since the Crypto Wars of the 1990s.

This executive order, “Reevaluating and Realigning United States Foreign Aid,” has led the State Department to immediately suspend its contracts with hundreds of organizations in the U.S. and around the world that have received support through programs administered by the State Department, including through its Bureau of Democracy, Human Rights, and Labor. This includes many freedom technologies that use cryptography, fight censorship, and protect freedom of speech, privacy, and anonymity for millions of people around the world. While the State Department has issued some limited waivers, so far those waivers do not seem to cover the open source internet freedom technologies. As a result, many of these projects have had to stop or severely curtail their work, lay off talented workers, and stop or slow further development.

There are many examples of freedom technologies, but here are a few that should be readily understandable to EFF’s audience: First, the Tor Project, which helps ensure that people can navigate the internet securely and privately and without fear of being tracked, both protecting themselves and avoiding censorship. Second, the Guardian Project, which creates privacy tools, open-source software libraries, and customized software solutions that can be used by individuals and groups around the world to protect personal data from unjust intrusion, interception and monitoring. Third, the Open Observatory of Network Interference, or OONI, which has been carefully measuring government internet censorship in countries around the world since 2012. Fourth, the Save App from OpenArchive, a mobile app designed to help people securely archive, verify, and encrypt their mobile media and preserve it on the Internet Archive and decentralized web storage.

We hope that cutting off support for these and similar tools and technologies of freedom is only a temporary oversight, and that more clear thinking about these and many similar projects will result in full reinstatement. After all, these tools support people working for freedom consistent with this administration’s foreign policy objectives—including in places like Iran, Venezuela, Cuba, North Korea, and China, just to name a few. By helping people avoid censorship, protect their speech, document human rights abuses, and retain privacy and anonymity, this work literally saves lives.

U.S. government funding helps these organizations do the less glamorous work of developing and maintaining deeply technical tools and getting them into the hands of people who need them. That is, and should remain, in the U.S. government’s interest. And sadly, it’s not work that is easily fundable otherwise. But technical people understand that these tools require ongoing support by dedicated, talented people to keep them running and available.

It’s hard to imagine that this work does not align with U.S. government priorities under any administration, and certainly not one that has stressed its commitment to fighting censorship and supporting digital technologies like cryptocurrencies that use some of the same privacy and anonymity-protecting techniques. These organizations exist to use technology to protect freedom around the world.

We urge the new administration to restore support for these critical internet freedom tools.

Corynne McSherry

The Internet Never Forgets: Fighting the Memory Hole

5 days 12 hours ago

If there is one axiom that we should want to be true about the internet, it should be: the internet never forgets. One of the advantages of our advancing technology is that information can be stored and shared more easily than ever before. And, even more crucially, it can be stored in multiple places.  

Those who back things up and index information are critical to preserving a shared understanding of facts and history, because the powerful will always seek to influence the public’s perception of them. It can be as subtle as organizing a campaign to downrank articles about their misdeeds, or as unsubtle as removing previously available information about themselves. 

This is often called “memory-holing,” after the incinerator chutes in George Orwell’s 1984 that burned any reference to the past that the government had changed. One prominent pre-internet example is Disney’s ongoing battle to remove Song of the South from public consciousness. (One can wonder if they might have succeeded if not for the internet). Instead of acknowledging mistakes, memory-holing allows powerful people, companies, and governments to pretend they never made the mistake in the first place.  

It also allows those same actors to pretend that they haven’t made a change, and that a policy rule or definition has always been the same. This creates an impression of permanency where, historically, there was fluidity. 

One of the fastest and easiest routes to the memory hole is a copyright claim. One particularly egregious practice is when a piece of media that is critical of someone, or just embarrassing to them, is copied and backdated. Then, that person or their agent claims their copy is the “original” and that the real article is “infringement.” Once the real article is removed, the copy is also disappeared and legitimate speech vanishes.  

Another frequent tactic is to claim copyright infringement when someone’s own words, images, or websites are used against them, despite it being fair use. A recent example is reporter Marisa Kabas receiving a takedown notice for sharing a screenshot of a politician’s campaign website that showed him with his cousin, alleged UHC shooter Luigi Mangione. The screenshot was removed out of an abundance of caution, but proof of something newsworthy should not be so easy to disappear. And it wasn’t. The politician's website was changed to remove the picture, but a copy of the website before the change is preserved via the Internet Archive’s Wayback Machine.  

In fact, the Wayback Machine is one of the best tools people have to fight memory-holing. Changing your own website is the first step to making embarrassing facts disappear, but the Wayback Machine preserves earlier versions. Some seek to use copyright to have entire websites blocked or taken down, and once again the Wayback Machine preserves what once was.  
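For readers who want to check programmatically whether an earlier version of a page has been preserved, the Internet Archive offers a public Wayback Machine “availability” API. The short Python sketch below is only an illustration of how such a lookup might work, not an EFF tool; the example URL and date are placeholders.

    # Minimal sketch: ask the Wayback Machine availability API for the archived
    # snapshot of a page closest to a given date. Example URL/date are placeholders.
    import json
    import urllib.parse
    import urllib.request

    def find_snapshot(url, timestamp=None):
        """Return the closest archived snapshot URL for `url`, or None if none exists."""
        query = {"url": url}
        if timestamp:
            query["timestamp"] = timestamp  # format: YYYYMMDD
        api = "https://archive.org/wayback/available?" + urllib.parse.urlencode(query)
        with urllib.request.urlopen(api) as response:
            data = json.load(response)
        closest = data.get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None

    print(find_snapshot("https://example.com", "20240101") or "No archived snapshot found.")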

This isn’t to say that everyone should be judged by their worst day, immortalized on the internet forever. It is to say that tools to remove those things will, ultimately, be of more use to the powerful than the everyday person. Copyright does not let you disappear bad news about yourself. Because the internet never forgets.  

Katharine Trendacosta

Protect Your Privacy on Bumble

5 days 21 hours ago

Late last year, Bumble finally rolled out its updated privacy policy after a coalition of twelve digital rights, LGBTQ+, human rights, and gender justice civil society organizations launched a campaign demanding stronger data protections.

Unfortunately, the company, like other dating apps, has not moved far enough, and continues to burden users with the responsibility of navigating misleading privacy settings on the app, as well as absorbing the consequences of infosec gaps, however severe. 

This should not be your responsibility—dating apps like Bumble should be prioritizing your privacy by default. This data falling into the wrong hands can come with unacceptable consequences, especially for those seeking reproductive health care, survivors of intimate partner violence, and members of the LGBTQ+ community. Laws should require companies to put our privacy over their profit, and we’re fighting hard for the introduction of comprehensive data privacy legislation in the U.S. to achieve this. 

But in the meantime, here’s a step-by-step guide on how to protect yourself and your most intimate information whilst using the dating service. 

Review Your Login Information

When you create a Bumble account, you have the option to use your phone number as a login, or use your Facebook, Google (on Android), or Apple (on iOS) account. If you use your phone number, you’ll get verification texts when you log in from a new device and you won’t need any sort of password.

Using your Apple, Google, or Facebook account might share some data with those services, but can also be a useful backup plan if you lose access to your phone number for whatever reason. Deciding if that trade-off is worth it is up to you. If you do choose to use those services, be sure to use a strong, unique password for your accounts and two-factor authentication. You can always review these login methods and add or remove one if you don’t want to use it anymore. 

  • Tap the Profile option, then the gear in the upper-right corner. Scroll down to Security and Privacy > Ways you can log in and review your settings.

You can also optionally link your Spotify account to your Bumble profile. While this should only display your top artists, depending on how you use Spotify there’s always a chance a bug or change might reveal more than you intend. You can disable this integration if you want:

  • Tap the Profile option, then “Complete Profile,” and scroll down to the Spotify section at the bottom of that page. If the “Connect my Spotify” box is checked, tap it to uncheck the box. You can also follow Spotify’s directions to revoke app access there.
Disable Bumble’s Behavioral Ads

You don’t have many privacy options on Bumble, but there is one important setting we recommend changing: disable behavioral ads. By default, Bumble can take information from your profile and use that to display targeted ads, which track and target you based on your supposed interests. It’s best to turn this feature off:

  • Tap the profile option, then the gear in the upper-right corner. 
    • If you’re based in the U.S., scroll down to Security and Privacy > Privacy settings, and enable the option for “Do not use my profile information to show me relevant ads.” 
    • If you’re based in Europe, scroll down to Security and Privacy > Privacy settings, and click “Reject all.”

You should also disable the advertising ID on your phone, helping limit what Bumble—and any other app—can access about you for behavioral ads.

  • iPhone: Open Settings > Privacy & Security > Tracking, and set the toggle for “All Apps to Request to Track” to off.
  • Android: Open Settings > Security & privacy > Privacy controls > Ads, and tap “Delete advertising ID.”
Review the Bumble Permissions on Your Phone

Bumble asks for a handful of permissions from your device, like access to your location and camera roll (and camera). It’s worth reviewing these permissions, and possibly changing them. 

Location

Bumble won’t work without some level of location access, but you can limit what it gets by only allowing the app to access your location when you have the app open. You can deny access to your “precise location,” which is your exact spot, and instead only provide a general location. This is sort of like providing the app access to your zip code instead of your exact address.

  • iPhone: Open Settings > Privacy & Security > Location Services > Bumble. Select the option for “While Using the App,” and disable the toggle for “Precise Location.” 
  • Android: Open Settings > Security & Privacy > Privacy Controls > Permission Manager > Location > Bumble. Select the option to “Allow only while using the app,” and disable the toggle for “Use precise location.”
Photos

In order to upload profile pictures, you’ve likely already given Bumble access to your photo roll. Giving Bumble access to your whole photo roll doesn’t upload every photo you’ve ever taken, but it’s still good practice to limit what the app can even access so there’s less room for mistakes. 

  • iPhone: Open Settings > Privacy & Security > Photos > Bumble. Select the option for “Limited Access.”
  • Android: Open Settings > Security & Privacy > Privacy Controls > Permission Manager > Photos and videos > Bumble. Select the option to “Allow limited access.”
Practice Communication Guidelines for Safer Use

As with any social app, it’s important to be mindful of what you share with others when you first chat, to not disclose any financial details, and to trust your gut if something feels off. It’s also useful to review your profile information now and again to make sure you’re still comfortable sharing what you’ve listed there. Bumble has some more instructions on how to protect your personal information.

If you decide you’re done with Bumble for good, then you should delete your account before deleting the app off your phone. In the Bumble app, tap the Profile option, then tap the gear icon. Scroll down to the bottom of that page, tap “Delete Account” and follow the on-screen directions. Once complete, go ahead and delete the app.

Whilst the privacy options at our disposal may seem inadequate to meet the difficult moments ahead of us, especially for vulnerable communities in the United States and across the globe, taking these small steps can prove essential to protecting you and your information. At the same time, we’re continuing our work with organizations like Mozilla and UltraViolet to ensure that all corporations—including dating apps like Bumble—protect our most important private information. Finding love should not involve such a privacy-impinging tradeoff.

Paige Collings

EFF to State AGs: Time to Investigate Crisis Pregnancy Centers

1 week ago

Discovering that you’re pregnant can trigger a mix of emotions—excitement, uncertainty, or even distress—depending on your circumstances. Whatever your feelings are, your next steps will likely involve disclosing that news, along with other deeply personal information, to a medical provider or counselor as you explore your options.

Many people will choose to disclose that information to their trusted obstetricians, or visit their local Planned Parenthood clinic. Others, however, may instead turn to a crisis pregnancy center (CPC). Trouble is, some of these centers may not be doing a great job of prioritizing or protecting their clients’ privacy.

CPCs (also known as “fake clinics”) are facilities that are often connected to religious organizations and have a strong anti-abortion stance. While many offer pregnancy tests, counseling, and information, as well as limited medical services in some cases, they do not provide reproductive healthcare such as abortion or, in many cases, contraception. Some are licensed medical clinics; most are not. Either way, these services are a growing enterprise: in 2022, CPCs reportedly received $1.4 billion in revenue, including substantial federal and state funds.     

Last year, researchers at the Campaign for Accountability filed multiple complaints urging attorneys general in five states—Idaho, Minnesota, Washington, Pennsylvania, and New Jersey—to investigate crisis pregnancy centers that allegedly had misrepresented, through their client intake process and/or websites, that information provided to them was protected by the Health Insurance Portability and Accountability Act (“HIPAA”).

Additionally, an incident in Louisiana raised concerns that CPCs may be sharing client information with other centers in their affiliated networks, without appropriate privacy or anonymity protections. In that case, a software training video inadvertently disclosed the names and personal information of roughly a dozen clients.

Unfortunately, these privacy practices aren’t confined to those states. For example, the Pregnancy Help Center, located in Missouri, states on its website that:

Pursuant to the Health Insurance Portability and Accountability Act (HIPAA), Pregnancy Help Center has developed a notice for patients, which provides a clear explanation of privacy rights and practices as it relates to private health information.

And its Notice of Privacy Practices suggests oversight by the U.S. Department of Health and Human Services, instructing clients who feel their rights were violated to:

file a complaint with the U.S. Department of Health and Human Services Office for Civil Rights by sending a letter to 200 Independence Avenue, S.W., Washington, D.C. 20201, calling 1-877-696-6775, or visiting www.hhs.gov/ocr/privacy/hipaa/complaints/.

 Websites for centers in other states, such as Florida, Texas, and Arkansas, contain similar language.

As we’ve noted before, there are far too few protections for user privacy, including medical privacy, and individuals have little control over how their personal data is collected, stored, and used. Until Congress passes a comprehensive privacy law that includes a private right of action, state attorneys general must take proactive steps to protect their constituents from unfair or deceptive privacy practices. Accordingly, EFF has called on attorneys general in Florida, Texas, Arkansas, and Missouri to investigate potential privacy violations and hold accountable CPCs that engage in deceptive practices.

Regardless of your views on reproductive healthcare, we should all agree that privacy is a basic human right, and that consumers deserve transparency. Our elected officials have a responsibility to ensure that personal information, especially our sensitive medical data, is protected.

Corynne McSherry

What Proponents of Digital Replica Laws Can Learn from the Digital Millennium Copyright Act

1 week ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Performers—and ordinary people—are understandably concerned that they may be replaced or defamed by AI-generated imitations. We’ve seen a host of state and federal bills designed to address that concern, but every one just generates new problems.  

One of the most pernicious proposals is the NO FAKES Act, and Copyright Week is a good time to remember why. We’ve detailed the many problems of the bill before, but, ironically enough, one of the worst aspects is the bone it throws to critics who worry the legislation’s broad provisions and dramatic penalties will lead platforms to over-censor online expression: a safe harbor scheme modeled on the DMCA notice and takedown process.  

In essence, platforms can avoid liability if they remove all instances of allegedly illegal content once they are notified that the content is unauthorized. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation, incurring a $5,000 penalty – which adds up fast. The bill does offer one carveout of limited use: if a platform can prove in court that it had an objectively reasonable belief that the content was lawful, the penalties for getting it wrong are capped at $1 million.
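
To see how quickly per-copy penalties compound, here is a minimal back-of-the-envelope sketch. The $5,000-per-violation figure is the penalty described above; the copy counts are hypothetical, chosen only to illustrate the scale relative to the $1 million cap.

    # Back-of-the-envelope illustration: per-copy statutory penalties compound fast.
    # The $5,000 per-violation figure is described above; the copy counts below are
    # hypothetical examples, not figures from the bill.
    PENALTY_PER_VIOLATION = 5_000  # dollars

    for copies in (100, 10_000, 1_000_000):
        exposure = copies * PENALTY_PER_VIOLATION
        print(f"{copies:>9,} copies -> ${exposure:,} in potential penalties")
    # Even 100 copies means $500,000 in exposure; a million copies means $5 billion.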

The safe harbors offer cold comfort to platforms and the millions of people who rely on them to create, share, and access content. The DMCA notice and takedown process has offered important protections for the development of new venues for speech, helping creators find audiences and vice versa. Without those protections, Hollywood would have had a veto right over all kinds of important speech tools and platforms, from basic internet service to social media and news sites to any other service that might be used to host or convey copyrighted content, thanks to copyright’s ruinous statutory penalties. The risks of accidentally facilitating infringement would have been just too high.

But the DMCA notice and takedown process has also been regularly abused to target lawful speech. Congress knew this was a risk, so it built in some safeguards: a counter-notice process to help users get improperly targeted content restored, and a process for deterring that abuse in the first place by allowing users to hold notice senders accountable when they misuse the process. Unfortunately, some courts have mistakenly interpreted the latter provisions to require showing that the sender subjectively knew it was lying when it claimed the content was unlawful. That standard is very hard to meet in most cases. 

Proponents of a new digital replica right could have learned from that experience and created a notice process with strong provisions against abuse. Those provisions are even more necessary here, where it would be even harder for providers to know whether a notice is false. Instead, NO FAKES offers fewer safeguards than the DMCA. For example, while the DMCA puts the burden on the rightsholder to put up or shut up (i.e., file a lawsuit) if a speaker pushes back and explains why the content is lawful, NO FAKES instead puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.   

And the NO FAKES provisions to allow improperly targeted speakers to hold the notice abuser accountable will offer as little deterrent as the roughly parallel provisions in the DMCA. As with the DMCA, a speaker must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief.  

If proponents want to protect online expression for everyone, at a minimum they should redraft the counter-notice process to more closely model the DMCA, and clarify that abusers, like platforms, will be held to an objective knowledge standard. If they don’t, the advent of digital replicas will, ironically enough, turn out to be an excuse to strangle all kinds of new and old creativity. 

Corynne McSherry

California Law Enforcement Misused State Databases More Than 7,000 Times in 2023

1 week ago

The Los Angeles County Sheriff’s Department (LACSD) committed wholesale abuse of sensitive criminal justice databases in 2023, violating a specific rule against searching the data to run background checks for concealed carry firearm permits.

The sheriff’s department’s 6,789 abuses made up a majority of the record 7,275 violations across California that were reported to the state Department of Justice (CADOJ) in 2023 regarding the California Law Enforcement Telecommunications System (CLETS). 

Records obtained by EFF also included numerous cases of other forms of database abuse in 2023, such as police allegedly using data for personal vendettas. While many violations resulted only in officers or other staff being retrained in appropriate use of the database, departments across the state reported that violations in 2023 led to 24 officers being suspended, six officers resigning, and nine being fired.

CLETS contains a lot of sensitive information and is meant to provide officers in California with access to a variety of databases, including records from the Department of Motor Vehicles, the National Law Enforcement Telecommunications System, Criminal Justice Information Services, and the National Crime Information Center. Law enforcement agencies with access to CLETS are required to inform the state Justice Department of any investigations and discipline related to misuse of the system. This mandatory reporting helps to provide oversight and transparency around how local agencies are using and abusing their access to the array of databases. 

A slide from a Long Beach Police Department training for new recruits.

Misuse can take many forms, ranging from sharing passwords to using the system to look up romantic partners or celebrities. In 2019, CADOJ declared that using CLETS data for "immigration enforcement" is considered misuse under the California Values Act.  

EFF periodically files California Public Records Act requests for the data and records generated by these CLETS misuse disclosures. To help improve access to this data, EFF's investigations team has compiled and compressed that information from the years 2019-2023 for public download. Researchers and journalists can look up the data for individual agencies year by year.

Download the 2019-2023 data here. Data from previous years is available here: 2010-2014, 2015, 2016, 2017, 2018.  
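
For anyone who wants to work with the compiled download programmatically, a minimal sketch along the following lines may help. The file name and column names are assumptions about how the release might be laid out, not a documented schema, so adjust them to match the actual spreadsheet.

    # Sketch only: the file name and column names below are assumptions about
    # how the compiled 2019-2023 CLETS misuse data might be structured, not a
    # documented schema. Adjust them to match the actual download.
    import pandas as pd

    df = pd.read_csv("clets_misuse_2019_2023.csv")  # hypothetical filename

    # Tally reported violations per agency, per year, largest first.
    per_agency_year = (
        df.groupby(["Agency", "Year"])["Violations"]
          .sum()
          .sort_values(ascending=False)
    )
    print(per_agency_year.head(10))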

California agencies are required to report misuse of CLETS to CADOJ by February 1 of the following year, which means numbers for 2024 are due to the state agency at the end of this month. However, it often takes the state several more months to follow up with agencies that do not respond and to enter information from the individual forms into a database. 

Across California between 2019 and 2023, there have been:

  • 761 investigations of CLETS misuse, resulting in findings of at least 7,635 individual violations of the system’s rules
  • 55 officer suspensions, 50 resignations, and 42 firings related to CLETS misuse
  • six misdemeanor convictions and one felony conviction related to CLETS misuse

As we reviewed the data made public since 2019, a few standout situations emerged that are worth additional reporting. For example, in 2023 LACSD conducted a single investigation into CLETS misuse that substantiated thousands of violations. The Riverside County Sheriff's Office and the Pomona Police Department also found hundreds of CLETS access violations that same year.

Some of the highest profile cases include: 

  • LACSD’s use of criminal justice data for concealed carry permit research, which is specifically forbidden by CLETS rules. According to meeting notes of the CLETS oversight body, LACSD retrained all staff and implemented new processes. However, state Justice Department officials acknowledged that this problem was not unique, and they had documented other agencies abusing the data in the same way.
  • A Redding Police Department officer in 2021 was charged with six misdemeanors after being accused of accessing CLETS to set up a traffic stop for his fiancée's ex-husband, resulting in the man's car being towed and impounded, the local outlet A News Cafe reported. Court records show the officer was fired, but he was ultimately acquitted by a jury in the criminal case. He now works for a different police department 30 miles away.
  • The Folsom Police Department in 2021 fired an officer who was accused of sending racist texts and engaging in sexual misconduct, as well as abusing CLETS. However, the Sacramento County District Attorney told a local TV station it declined to file charges, citing insufficient evidence.
  • A Madera Police Officer in 2021 resigned and pleaded guilty to accessing CLETS and providing that information to an unauthorized person. He received a one-year suspended sentence and 100 hours of community service, according to court records. In a statement, the police department said the individual's "behavior was absolutely inappropriate" and "his actions tarnish the nobility of our profession."
  • A California Highway Patrol officer was charged with improperly accessing CLETS to investigate vehicles his friend was interested in purchasing as part of his automotive business. 

The San Francisco Police Department, which failed to report its CLETS misuse numbers for 2023, may have at least one violation to report from the past year: a May 2024 report of sustained complaints lists one substantiated violation involving “Computer/CAD/CLETS Misuse.”

CLETS is only one of many massive databases available to law enforcement, but it is one of the very few with a mandatory reporting requirement for abuse; violations of other systems likely never get reported to a state oversight body, or at all. The sheer amount of misuse should serve as a warning that other systems police use, such as automated license plate reader and face recognition databases, are likely being abused at a similar rate, or an even higher one, since they are not subject to the same scrutiny as CLETS.

Related Cases: California Law Enforcement Telecommunications System
Beryl Lipton

Don't Make Copyright Law in Smoke-Filled Rooms

1 week 1 day ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright law affects everything we do on the internet. So why do some lawmakers and powerful companies still think they can write new copyright law behind closed doors, with ordinary internet users on the outside?

Major movie and TV studios are again pushing Congress to create a vast new censorship regime on the U.S. internet, one that could even reach abroad and conscript infrastructure companies to help make whole websites disappear. The justification is, as always, creating ever more draconian means of going after copyright infringement, and never mind all of the powerful tools that already exist.

The movie studios and other major media companies last tried this in 2012, seeking to push a pair of internet censorship bills called SOPA and PIPA through Congress, without hearings. Lawmakers were preparing to ignore the concerns of internet users not named Disney, Warner, Paramount, or Fox. At first, they ignored the long, sad history of copyright enforcement tools being used for censorship. They ignored the technologists, including some of the creators of the internet, who explained how website-blocking creates security threats and inevitably blocks lawful speech. And they ignored the pleas of ordinary users who were concerned about the websites they relied on going dark because of hasty site-blocking orders.

Writing new copyright laws in the proverbial smoke-filled backroom was somewhat less surprising in 2012. Before the internet, copyright mainly governed the relationships between authors and publishers, movie producers and distributors, photographers and clients, and so on. The easiest way to make changes was to get representatives of these industries together to hash out the details, then have Congress pass those changes into law. It worked well enough for most people.

In the internet age, that approach is unworkable. Every use of the internet, whether sending a photo, reading a social media post, or working on a shared document, creates a copy of some creative work. And nearly every creative work that’s recorded on a computing device is automatically governed by copyright law, with no registration or copyright notices required. That makes copyright a fundamental governing law of the internet. It shapes the design and functions of the devices we use, what software we can run, and when and how we can participate in culture. Its massive penalties and confusing exceptions can ensnare everyone from landlords to librarians, from students to salespeople.

Users fought back. In a historic protest, thousands of websites went dark for a day, with messages encouraging users to oppose the SOPA/PIPA bills. EFF alone helped users send more than 1,000,000 emails to Congress, and countless more came from other organizations. The surge in web traffic briefly brought down some Senate websites. Some 162 million people visited Wikipedia, and 8 million looked up their representatives’ phone numbers. Google received more than 7 million signatures on its petition. Everyone who wrote, called, and visited their lawmakers sent a message that laws affecting the internet can't be made in a backroom by insiders bearing campaign cash. Congress quickly scrapped the bills.

After that, although Congress avoided changing copyright law for years, the denizens of the smoke-filled room never gave up. The then-leaders of the Motion Picture Association and the Recording Industry Association of America both vented angrily about ordinary people getting a say over copyright. Big Media went on a world tour, pushing for site-blocking laws that led, in many countries, to the same problems of censorship and over-blocking that U.S. users had mostly avoided.

Now, they’re trying again. Major media companies are pushing Congress to pass new site-blocking laws that would conscript internet service providers, domain name services, and potentially others to build a new censorship machine. The problems of overblocking and misuse haven’t gone away; if anything, they’ve gotten worse as ever more of our lives are lived online. The biggest tech companies, who in 2012 were prodded into action by a mass movement of internet users, are now preoccupied by antitrust lawsuits and seeking favor from the new administration in Washington. And as with other extraordinary tools that Congress has given to the largest copyright holders, site-blocking won’t stay confined to copyright: other powerful industries and governments will clamor to use the system for censorship, and it will get ever harder to resist those calls.

It seems like lawmakers have learned nothing, because copyright law is again being written in secret by a handful of industry representatives. That was unacceptable in 2012, and it’s even more unacceptable in 2025. Before considering site blocking, or any major changes to copyright, Congress needs to consult with every kind of internet user, including small content creators, small businesses, educators, librarians, and technologists not beholden to the largest tech and media companies.

We can’t go backwards. Copyright law affects everyone, and everyone needs a say in its evolution. Before taking up site-blocking or any other major changes to copyright law, Congress needs to air those proposals publicly, seek input from far and wide—and listen to it.

Mitch Stoltz

It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy

1 week 1 day ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

One of the unintended consequences of the internet is that more of us than ever are aware of how much of our lives is affected by copyright. People see their favorite YouTuber’s video get removed or re-edited due to copyright. People know they can’t tinker with or fix their devices. And people have realized, and are angry about, the fact that they don’t own much of the media they have paid for.  

All of this is to say that copyright is no longer—if it ever was—a niche concern of certain industries. As corporations have pushed to expand copyright, they have made it everyone’s problem. And that means they don’t get to make the law in secret anymore. 

Twelve years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright infringing content. These were bills that would have made censorship very easy, all in the name of copyright protection. 

As people raise more and more concerns about the major technology companies that control our online lives, it’s important not to fall into the trap of thinking that copyright will save us. As SOPA/PIPA reminds us: expanding copyright serves the gatekeepers, not the users.  

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are: 

  • Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s experience online, therefore laws and policy should be made in the open and with users’ concerns represented and taken into account. 
  • Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.  
  • Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.  
  • Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information. 
  • Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.  

Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek. 

Katharine Trendacosta

EFF to Michigan Supreme Court: Cell Phone Search Warrants Must Strictly Follow The Fourth Amendment’s Particularity and Probable Cause Requirements

1 week 4 days ago

Last week, EFF, along with the Criminal Defense Attorneys of Michigan, ACLU, and ACLU of Michigan, filed an amicus brief in People v. Carson in the Supreme Court of Michigan, challenging the constitutionality of the search warrant for Mr. Carson's smart phone.

In this case, Mr. Carson was arrested for stealing money from his neighbor's safe with a co-conspirator. A few months later, law enforcement applied for a search warrant for Mr. Carson's cell phone. The search warrant enumerated the claims that formed the basis for Mr. Carson's arrest, but the only mention of a cell phone was a law enforcement officer's general assertion that phones are communication devices often used in the commission of crimes. A warrant was issued which allowed the search of the entirety of Mr. Carson's smart phone, with no temporal or category limits on the data to be searched. Evidence found on the phone was then used to convict Mr. Carson.

On appeal, the Court of Appeals made a number of rulings in favor of Mr. Carson, including that evidence from the phone should not have been admitted because the search warrant lacked particularity and was unconstitutional. The government's appeal to the Michigan Supreme Court was accepted and we filed an amicus brief.

In our brief, we argued that the warrant was constitutionally deficient and overbroad: there was no probable cause to search the cell phone, and the warrant was insufficiently particular because it failed to limit the search to a specific time frame or to certain categories of information.

As the U.S. Supreme Court recognized in Riley v. California, electronic devices such as smart phones “differ in both a quantitative and a qualitative sense” from other objects. The devices contain immense storage capacities and are filled with sensitive and revealing data, including apps for everything from banking to therapy to religious practices to personal health. As the refrain goes, whatever the need, “there's an app for that.” This special nature of digital devices requires courts to review warrants to search digital devices with heightened attention to the Fourth Amendment’s probable cause and particularity requirements.

In this case, the warrant fell far short. In order for there to be probable cause to search an item, the warrant application must establish a “nexus” between the incident being investigated and the place to be searched. But the application in this case gave no reason why evidence of the theft would be found on Mr. Carson's phone. Instead, it only stated the allegations leading to Mr. Carson's arrest and boilerplate language about cell phone use among criminals. While those facts may establish probable cause to arrest Mr. Carson, they did not establish probable cause to search Mr. Carson's phone. If it were otherwise, the government would always be able to search the cell phone of someone they had probable cause to arrest, thereby eradicating the independent determination of whether probable cause exists to search something. Without a nexus between the crime and Mr. Carson’s phone, there was no probable cause.

Moreover, the warrant allowed for the search of “any and all data” contained on the cell phone, with no limits whatsoever. Such “all content” warrants are exactly the type of general warrant that the Fourth Amendment and its state corollaries were meant to protect against. Cell phone search warrants that have been upheld have contained temporal constraints and limits on the categories of data to be searched. Neither limitation, nor any other, appeared in the warrant issued here. The police should have used date limitations in applying for the search warrant, as they did in their warrant applications for other searches in the same investigation. Additionally, the warrant allowed the search of all the information on the phone, the vast majority of which did not, and could not, contain evidence related to the investigation.

As smart phones become more capacious and take on more functions, it is imperative that courts construe warrants for the search of electronic devices narrowly, in keeping with the Fourth Amendment’s basic purpose: safeguarding the privacy and security of individuals against arbitrary invasions by government officials.

Hannah Zhao

Face Scans to Estimate Our Age: Harmful and Creepy AF

1 week 5 days ago

Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.

Error and discrimination

Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites. By the way, it will also sometimes mistake adolescents for adults.

Age estimation also is discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women. Which means that as a tool of age verification, these face scans will have an unfair disparate impact.

Estimating our identity and demographics

Age estimation is a tech sibling of face identification and the estimation of other demographics. To users, all face scans look the same and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.

Some companies are in both the age estimation business and the face identification business.

Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.

Estimating our emotions and honesty

Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.

Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.

When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, ALPR errors repeatedly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.

Privacy and infosec

The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.

Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced with other readily available information, this data can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating a risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).

Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
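
As a rough illustration of the lower-risk end of that design spectrum, a purely client-side check might look something like the sketch below. Everything here is hypothetical: estimate_age_on_device stands in for whatever local model a vendor might ship, and none of this removes the underlying problems with age verification described in this post.

    # Illustrative sketch of a lower-risk (not zero-risk) design: the face image
    # stays on the device and is discarded as soon as a pass/fail answer exists.
    # `estimate_age_on_device` is a stand-in for a vendor's local model, not a
    # real library call.

    def estimate_age_on_device(frame: bytes) -> int:
        # Placeholder: a real system would run an on-device ML model on `frame`.
        return 25

    def check_age_locally(frame: bytes, threshold: int = 18) -> bool:
        estimated_age = estimate_age_on_device(frame)
        passed = estimated_age >= threshold
        # Drop references to the image and the estimate right away; only the
        # boolean ever leaves the device. A real system would also have to keep
        # copies out of caches, logs, and crash reports.
        del frame, estimated_age
        return passed

    print(check_age_locally(b"fake-camera-frame"))  # True with the stub above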

Also, face scanning algorithms are often trained on data that was collected using questionable privacy practices, whether from users who gave only murky consent or from people who never used the service at all. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.

Most significantly, when most people arrive at an age verification checkpoint, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will simply turn away, forgoing the content and conversations available on restricted websites.

Next steps

Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.

At a minimum, government must stop coercing websites into using face scans, as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.

Adam Schwartz

Second Circuit Rejects Record Labels’ Attempt to Rewrite the DMCA

1 week 5 days ago

In a major win for creator communities, the U.S. Court of Appeals for the Second Circuit has once again handed video streaming site Vimeo a solid win in its long-running legal battle with Capitol Records and a host of other record labels.

The labels claimed that Vimeo was liable for copyright infringement on its site, and specifically that it couldn’t rely on the Digital Millennium Copyright Act’s safe harbor because Vimeo employees “interacted” with user-uploaded videos that included infringing recordings of musical performances owned by the labels. Those interactions included commenting on, liking, promoting, demoting, or posting them elsewhere on the site. The record labels contended that these videos contained popular songs, and it would’ve been obvious to Vimeo employees that this music was unlicensed.

But as EFF explained in an amicus brief filed in support of Vimeo, even rightsholders themselves mistakenly demand takedowns. Labels often request takedowns of music they don’t own or control, and even request takedowns of their own content. They also regularly target fair uses. When rightsholders themselves cannot accurately identify infringement, courts cannot presume that a service provider can do so, much less a blanket presumption as to hundreds of videos.

In an earlier ruling, the court held that the labels had to show that it would be apparent to a person without specialized knowledge of copyright law that the particular use of the music was unlawful, or prove that the Vimeo workers had expertise in copyright law. The labels argued that Vimeo’s own efforts to educate its employees and users about copyright, among other circumstantial evidence, were enough to meet that burden. The Second Circuit disagreed, finding that:

Vimeo’s exercise of prudence in instructing employees not to use copyrighted music and advising users that use of copyrighted music “generally (but not always) constitutes copyright infringement” did not educate its employees about how to distinguish between infringing uses and fair use.

The Second Circuit also rejected another equally dangerous argument: that Vimeo lost safe harbor protection by receiving a “financial benefit” from infringing activity, such as user-uploaded videos, that the platform had a “right and ability to control.” The labels contended that any website that exercises editorial judgment—for example, by removing, curating, or organizing content—would necessarily have the “right and ability to control” that content. If they were correct, ordinary content moderation would put a platform at risk of crushing copyright liability.

As the Second Circuit put it, the labels’ argument:

would substantially undermine what has generally been understood to be one of Congress’s major objectives in passing the DMCA: encouraging entrepreneurs to establish websites that can offer the public rapid, efficient, and inexpensive means of communication by shielding service providers from liability for infringements placed on the sites by users.

Fortunately, the Second Circuit’s decisions in this case help preserve the safe harbors and the expression and innovation that they make possible. But it should not have taken well over a decade of litigation, and likely several million dollars in legal fees, to get there.

Related Cases: Capitol v. Vimeo
Tori Noble

Speaking Freely: Lina Attalah

1 week 5 days ago

This interview has been edited for length and clarity.

Jillian York: Welcome, let’s start here. What does free speech or free expression mean to you personally?

Lina Attalah: Being able to think without too many calculations and without fear.

York: What are the qualities that make you passionate about the work that you do, and also about telling stories and utilizing your free expression in that way? 

Well, it ties in with your first question. Free speech is basically being able to express oneself without fear and without too many calculations. These are things that are not granted, especially in the context I work in. I know that it does not exist in any absolute way anywhere, and increasingly so now, but even more so in our context, and historically it hasn't existed in our context. So this has also drawn me to try to unearth what is not being said, what is not being known, what is not being shared. I guess the passion came from that lack more than anything else. Perhaps, if I lived in a democracy, maybe I wouldn't have wanted to be a journalist. 

York: I’d like to ask you about Syria, since you just traveled there. I know that you're familiar with the context there in terms of censorship and the Internet in particular. What do you see in terms of people's hopes for more expression in Syria in the future?

I think even though we share an environment where freedom of expression has been historically stifled, there is an exception to Syria when it comes to the kind of controls there have been on people's ability to express, let alone to organize and mobilize. I think there's also a state of exception when it comes to the price that had to be paid in Syrian prisons for acts of free expression and free speech. This is extremely exceptional to the fabric of Syrian society. So going there and seeing that this condition was gone, after so much struggle, after so much loss, is a situation that is extremely palpable. From the few days I spent there, what was clear to me is that everybody is pretty much uncertain about the future, but there is an undoubted relief that this condition is gone for now, this fear. It literally felt like it's a lower sky, sort of repressing people's chests somehow, and it's just gone. This burden was just gone. It's not all flowery, it's not all rosy. Everybody is uncertain. But the very fact that this fear is gone is very palpable and cannot be taken away from the experience we're living through now in Syria.

York: I love that. Thank you. Okay, let’s go to Egypt a little bit. What can you tell us about the situation for free speech in the context of Egypt? We're coming up on fourteen years since the uprising in 2011 and eleven years since Sisi came to power. And I mean, I guess, contextualize that for our readers who don't know what's happened in Egypt in the past decade or so.

For a quick summary, the genealogy goes as follows. There was a very tight margin through which we managed to operate as journalists, as activists, as people trying to sort of enlarge the space through which we can express ourselves on matters of public concern in the last years of Mubarak's rule. And this is the time that coincided with the opening up of the internet—back in the time when the internet was also more of a public space, before the overt privatization that we experience in that virtual space as well. Then the Egyptian revolution happened in 2011 and that space further exploded in expression and diversity of voices and people speaking to different issues that had previously been reserved to the hideouts of activist circles.

Then you had a complete reversal of all of this with the takeover of a military-appointed government. Subsequently, with the election of President Sisi in 2014, it became clear that it was a government that believed that the media's role—this is just one example focusing on the media—is to basically support the government in a very sort of 1960s Nasserite understanding that there is a national project, that he's leading it, and we are his soldiers. We should basically endorse, support, not criticize, not weaken, basically not say anything differently from him. And you know this, of course, transcends the media. Everybody should be a soldier in a way and also the price of doing otherwise has been hefty, in the sense that a lot of people ended up under prosecution, serving prolonged jail sentences, or even spending prolonged times in pre-trial detention without even getting prosecuted.

So you have this total reversal from an unfolding moment of free speech that sort of exploded for a couple of years starting in 2011, and then everything closing up, closing up, closing up to the point where that margin that I started off talking about at the beginning is almost no longer even there. And, on a personal note, I always ask myself if the margin has really tightened or if one just becomes more scared as they grow older? But the margin has indeed tightened quite extensively. Personally, I'm aging and getting more scared. But another objective indicator is that almost all of my friends and comrades who have been with me on this path are no longer around because they are either in prison or in exile or have just opted out from the whole political apparatus. So that says that there isn't the kind of margin through which we managed to maneuver before the revolution.

 York: Earlier you touched on the privatization of online spaces. Having watched the way tech companies have behaved over the past decade, what do you think that these companies fail to understand about the Egyptian and the regional context?

It goes back to how we understand this ecosystem, politically, from the onset. I am someone who thinks of governments and markets, or governments and corporations, as the main actors in a market, as dialectically interchangeable. Let's say they are here to control, they are here to make gains, and we are here to contest them even though we need them. We need the state, we need the companies. But there is no reason on earth to believe that either of them want our best. I'm putting governments and companies in the same bucket, because I think it's important not to fall for the liberals’ way of thinking that the state has certain politics, but the companies are freer or are just after gains. I do think of them as formidable political edifices that are self-serving. For us, the political game is always how to preserve the space that we've created for ourselves, using some of the leverage from these edifices without being pushed over and over. 

For me, this is a very broad political thing, and I think about them as a duality, because, operating as a media organization in a country like Egypt, I have to deal with the dual repression of those two edifices. To give you a very concrete example, in 2017 the Egyptian government blocked my website, Mada Masr, alongside a few other media websites, shortly before going on and blocking hundreds of websites. All independent media websites, without exception, have been blocked in Egypt alongside sites through which you can download VPN services in order to be able to also access these blocked websites. And that's done by the government, right? So one of the things we started doing when this happened in 2017 is we started saying, “Okay, we should invest in Meta. Or back then it was still Facebook, so we should invest in Facebook more. Because the government monitors you.” And this goes back to the relation, the interchangeability of states and companies. The government would block Mada Masr, but would never block Facebook, because it's bad for business. They care about keeping Facebook up and running. 

It's not Syria back in the time of Assad. It's not Tunisia back in the time of Ben Ali. They still want some degree of openness, so they would keep social media open. So we let go of our poetic triumphalism when we said, we will try to invest in more personalized, communitarian dissemination mechanisms when building our audiences, and we'll just go on Facebook. Because what option do we have? But then what happens is that is another track of censorship in a different way that still blocks my content from being able to reach its audiences through all the algorithmic developments that happened and basically the fact that—and this is not specific to Egypt—they just want to think of themselves as the publishers. They started off by treating us as the publishers and themselves as the platforms, but at this point, they want to be everything. And what would we expect from a big company, a profitable company, besides them wanting to be everything? 

York: I don't disagree at this point. I think that there was a point in time where I would have disagreed. When you work closely with companies, it’s easy to fall into the trap of believing that change is possible because you know good people who work there, people who really are trying their best. But those people are rarely capable of shifting the direction of the company, and are often the ones to leave first.

Let’s shift to talking about our friend, Egyptian political prisoner Alaa Abd El-Fattah. You mentioned the impact that the past 11 years, really the past 14 years, have had on people in Egypt. And, of course, there are many political prisoners, but one of the prisoners that EFF readers will be familiar with is Alaa. You recently accepted the English PEN Award on his behalf. Can you tell us more about what he has meant to you?

One way to start talking about Alaa is that I really hope that 2025 is the year when he will get released. It's just ridiculous to keep making that one single demand over and over without seeing any change there. So Alaa has been imprisoned on account of his free speech, his attempt to speak freely. And he attempted to speak, you know, extremely freely in the sense that a lot of his expression is his witty sort of engagement with surrounding political events that came through his personal accounts on social media, in addition to the writing that he's been doing for different media platforms, including ours and yours and so on. And in that sense, he's so unmediated, he’s just free. A truly free spot. He has become the icon of the Egyptian revolution, the symbol of revolutionary spirit who you know is fighting for people's right to free speech and, more broadly, their dignity. I guess I'm trying to make a comment, a very basic comment, on abolition and, basically, the lack of utility of prisons, and specifically political prisons. Because the idea is to mute that voice. But what has happened throughout all these years of Alaa’s incarceration is that his voice has only gotten amplified by this very lack, by this very absence, right? I always lament about the fact that I do not know if I would have otherwise become very close to Alaa. Perhaps if he was free and up and running, we wouldn't have gotten this close. I have no idea. Maybe he would have just gone working on his tech projects and me on my journalism projects. Maybe we would have tried to intersect, and we had tried to intersect, but maybe we would have gone on without interacting much. But then his imprisonment created this tethering where I learned so much through his absence.

Somehow I've become much more who I am in terms of the journalism, in terms of the thinking, in terms of the politics, through his absence, through that lack. So there is something that gets created with this aggressive muting of a voice that should be taken note of. That being said, I don't mean to romanticize absence, because he needs to be free. You know it's, it's becoming ridiculous at this point. His incarceration is becoming ridiculous at this point. 

York: I guess I also have to ask, what would your message be to the UK Government at this point?

Again, it's a test case for what so-called democratic governments can still do to their citizens. There needs to be something more forceful when it comes to demanding Alaa’s release, especially in view of the condition of his mother, who has been on a hunger strike for over 105 days as of the day of this interview. So I can't accept that this cannot be a forceful demand, or this has to go through other considerations pertaining to more abstract bilateral relations and whatnot. You know, just free the man. He's your citizen. You know, this is what's left of what it means to be a democratic government.

York: Who is your free speech hero? 

It’s Alaa. He always warns us of over-symbolizing him or the others. Because he always says, when we over symbolize heroes, they become abstract. And we stop being able to concretize the fights and the resistance. We stop being able to see that this is a universal battle where there are so many others fighting it, albeit a lot more invisible, but at the same time. Alaa, in his person and in what he represents, reminds me of so much courage. A lot of times I am ashamed of my fear. I'm ashamed of not wanting to pay the price, and I still don't want to pay the price. I don't want to be in prison. But at the same time, I look up at someone like Alaa, fearlessly saying what he wants to say, and I’m just always in awe of him. 

Jillian C. York

The Impact of Age Verification Measures Goes Beyond Porn Sites

1 week 5 days ago

As age verification bills pass across the world under the guise of “keeping children safe online,” governments are increasingly giving themselves the authority to decide what topics are deemed “safe” for young people to access, and forcing online services to remove and block anything that may be deemed “unsafe.” This growing legislative trend has sparked significant concerns and numerous First Amendment challenges, including a case currently pending before the Supreme Court–Free Speech Coalition v. Paxton. The Court is now considering how government-mandated age verification impacts adults’ free speech rights online.

These challenges keep arising because this isn’t just about safety—it’s censorship. Age verification laws target a slew of broadly-defined topics. Some block access to websites that contain some "sexual material harmful to minors," but define the term so loosely that “sexual material” could encompass anything from sex education to R-rated movies; others simply list a variety of vaguely-defined harms. In either instance, lawmakers and regulators could use the laws to target LGBTQ+ content online.

This risk is especially clear given what we already know about platform content policies. These policies, which claim to "protect children" or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies (and/or laws) are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies and bills.

In either case, it is critical to recognize that age verification bills could block far more than just pornography.

Take Oklahoma’s bill, SB 1959, for example. This state age verification law aims to prevent young people from accessing content that is “harmful to minors” and went into effect last November 1st. It incorporates definitions from another Oklahoma statute, Statute 21-1040, which defines material “harmful to minors” as any description or exhibition, in whatever form, of nudity and “sexual conduct.” That same statute then defines “sexual conduct” as including acts of “homosexuality.” Explicitly, then, SB 1959 requires a site to verify someone’s age before showing them content about homosexuality—a vague enough term that it could potentially apply to content from organizations like GLAAD and Planned Parenthood.

This vague definition will undoubtedly cause platforms to over-censor content relating to LGBTQ+ life, health, or rights out of fear of liability. Separately, bills such as SB 1959 might also cause users to self-police their own speech for the same reasons, fearing de-platforming. The law gives platforms no way to precisely identify and exclude only the content that fits the bill's definition, pushing them toward over-censorship that could easily sweep in content like this very blog post.

Beyond Individual States: Kids Online Safety Act (KOSA)

Laws like the proposed federal Kids Online Safety Act (KOSA) make government officials the arbiters of what young people can see online and will lead platforms to implement invasive age verification measures to avoid the threat of liability. If KOSA passes, it will lead to people who make online content about sex education, and LGBTQ+ identity and health, being persecuted and shut down as well. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech they don’t like. These speech burdens will also affect regular users as platforms mass-delete content in the name of avoiding lawsuits and investigations under KOSA. 

Senator Marsha Blackburn, co-sponsor of KOSA, has expressed a priority in “protecting minor children from the transgender [sic] in this culture and that influence.” KOSA, to Senator Blackburn, would address this problem by limiting content in the places “where children are being indoctrinated.” Yet these efforts all fail to protect children from the actual harms of the online world, and instead deny vulnerable young people a crucial avenue of communication and access to information. 

LGBTQ+ Platform Censorship by Design

While the censorship of LGBTQ+ content through age verification laws can be represented as an “unintended consequence” in certain instances, barring access to LGBTQ+ content is part of the platforms' design. One of the more pervasive examples is Meta suppressing LGBTQ+ content across its platforms under the guise of protecting younger users from "sexually suggestive content.” According to a recent report, Meta has been hiding posts that reference LGBTQ+ hashtags like #lesbian, #bisexual, #gay, #trans, and #queer for users that turned the sensitive content filter on, as well as showing users a blank page when they attempt to search for LGBTQ+ terms. This leaves teenage users with no choice in what content they see, since the sensitive content filter is turned on for them by default. 

This policy change came on the back of a protracted effort by Meta to allegedly protect teens online. In January last year, the corporation announced a new set of “sensitive content” restrictions across its platforms (Instagram, Facebook, and Threads), including hiding content which the platform no longer considered age-appropriate. This was followed later by the introduction of Instagram For Teens to further limit the content users under the age of 18 could see. This feature sets minors’ accounts to the most restrictive levels by default, and teens under 16 can only reverse those settings through a parent or guardian. 

Meta has apparently now reversed the restrictions on LGBTQ+ content after calling the issue a “mistake.” This is not good enough. In allowing pro-LGBTQ+ content to be integrated into the sensitive content filter, Meta has aligned itself with those that are actively facilitating a violent and harmful removal of rights for LGBTQ+ people—all under the guise of keeping children and teens safe. Not only is this a deeply flawed strategy, it harms everyone who wishes to express themselves on the internet. These policies are written and enforced discriminatorily and at the expense of transgender, gender-fluid, and nonbinary speakers. Such policies, and the age verification laws that encourage them, also often convince or require platforms to implement tools that, given the laws’ vague and subjective definitions, end up blocking access to LGBTQ+ and reproductive health content.

The censorship of this content prevents individuals from being able to engage with such material online to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With corporations like Meta intervening to decide how people create, speak, and connect, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power are regularly shut down. 

And at a time when LGBTQ+ individuals are already under vast pressure from violent homophobic threats offline, these online restrictions have an amplified impact. 

LGBTQ+ youth are at a higher risk of experiencing bullying and rejection, often turning to online spaces as outlets for self-expression. For those without family support or who face the threat of physical or emotional abuse at home because of their sexual orientation or gender identity, the internet becomes an essential resource. A report from the Gay, Lesbian & Straight Education Network (GLSEN) highlights that LGBTQ+ youth engage with the internet at higher rates than their peers, often showing greater levels of civic engagement online compared to offline. Access to digital communities and resources is critical for LGBTQ+ youth, and restricting access to them poses unique dangers.

Call to Action: Digital Rights Are LGBTQ+ Rights

These laws have the potential to harm us all—including the children they are designed to protect. 

As more U.S. states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. This patchwork of laws poses significant challenges for users trying to maintain anonymity online and access critical content—whether it’s LGBTQ+ resources, reproductive health information, or otherwise. These policies threaten the very freedoms they purport to protect, stifling conversations about identity, health, and social justice, and creating an environment of fear and repression.

The fight against these laws is not just about defending online spaces; it’s about safeguarding the fundamental rights of all individuals to express themselves and access life-saving information.

We need to stand up against these age verification laws—not only to protect users’ free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. Reach out to your state and federal legislators, raise awareness about the consequences of these policies, and support organizations like LGBT Tech, the ACLU, the Woodhull Freedom Foundation, and others that are fighting for the digital rights of young people alongside EFF.

The fight for the safety and rights of LGBTQ+ youth is not just a fight for visibility—it’s a fight for their very survival. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.

Paige Collings

Texas Is Enforcing Its State Data Privacy Law. So Should Other States.

1 week 6 days ago

States need to have and use data privacy laws to bring privacy violations to light and hold companies accountable for them. So, we were glad to see that the Texas Attorney General’s Office has filed its first lawsuit under the Texas Data Privacy and Security Act (TDPSA), taking the Allstate Corporation to task for sharing driver location and other driving data without telling customers.

In its complaint, the attorney general’s office alleges that Allstate and a number of its subsidiaries (some of which go by the name “Arity”) “conspired to secretly collect and sell ‘trillions of miles’ of consumers’ ‘driving behavior’ data from mobile devices, in-car devices, and vehicles.” (The defendant companies are also accused of violating Texas’ data broker law and its insurance law prohibiting unfair and deceptive practices.)

On the privacy front, the complaint says the defendant companies created a software development kit (SDK), which is basically a set of tools that developers can use to integrate functions into an app. In this case, the Texas Attorney General says that Allstate and Arity specifically designed this toolkit to scrape location data. They then allegedly paid third parties, such as the app Life360, to embed it in their apps. The complaint also alleges that Allstate and Arity chose to promote their SDK to third-party apps that already required the use of location data, specifically so that people wouldn’t be alerted to the additional collection.
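
To make the mechanics concrete, here is a minimal, purely illustrative Kotlin (Android) sketch of how an embedded SDK can piggyback on a host app’s existing location permission and send coordinates to the SDK vendor’s own servers. Every name in it—the `TelemetrySdk` class, the endpoint, the update intervals—is invented for illustration; it is not Allstate’s or Arity’s actual code.

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.location.Location
import android.location.LocationListener
import android.location.LocationManager
import java.net.URL

// Hypothetical sketch only: an "analytics" SDK that the host app initializes
// for a legitimate-sounding feature, but which also streams device location
// to the SDK vendor's servers. Class name and endpoint are invented.
class TelemetrySdk(private val endpoint: String) {

    @SuppressLint("MissingPermission") // reuses the location permission the host app already holds
    fun initialize(context: Context) {
        val lm = context.getSystemService(Context.LOCATION_SERVICE) as LocationManager

        // No new permission prompt appears: the user granted location access
        // to the host app for its own feature, and the SDK rides along.
        lm.requestLocationUpdates(
            LocationManager.GPS_PROVIDER,
            60_000L, // at most one fix per minute
            50f,     // or every 50 metres travelled
            object : LocationListener {
                override fun onLocationChanged(location: Location) {
                    upload(location.latitude, location.longitude)
                }
            }
        )
    }

    private fun upload(lat: Double, lon: Double) {
        // Sends coordinates to the SDK vendor, not to the host app's backend.
        // A real SDK would batch and encrypt this off the main thread; sketch only.
        Thread { URL("$endpoint?lat=$lat&lon=$lon").readText() }.start()
    }
}
```

The point of the sketch is the design choice the complaint describes: because the host app already asked for location access for its own purposes, the embedded collection adds no visible prompt or notice of its own, which is exactly why promoting the SDK to apps that already use location data keeps users in the dark.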

That’s a dirty trick. Data pulled from cars is often highly sensitive, as we have repeatedly pointed out. Everyone should know when that information is being collected and where it’s going.


The Texas Attorney General’s office estimates that 45 million Americans, including Texans, unwittingly downloaded this software, which collected their information, including location information, without notice or consent. This violates Texas’ privacy law, which went into effect in July 2024 and requires companies to provide a reasonably accessible privacy notice, to give conspicuous notice that they’re selling or processing sensitive data for targeted advertising, and to obtain consumer consent before processing sensitive data.

This is a low bar, and the companies named in this complaint still allegedly failed to clear it. As law firm Husch Blackwell pointed out in its write-up of the case, all Arity had to do to fulfill one of the notice obligations under the TDPSA, for example, was to put up a line on its website saying, “NOTICE: We may sell your sensitive personal data.”

In fact, Texas’s privacy law does not meet the minimum of what we’d consider a strong privacy law. For example, the Texas Attorney General is the only party who can file a lawsuit under the state’s privacy law. We advocate instead for provisions that let everyone, not only state attorneys general, file suit to make sure that all companies respect our privacy.

Texas’ privacy law also has a “right to cure”—essentially a 30-day period in which a company can “fix” a privacy violation and duck a Texas enforcement action. EFF opposes rights to cure because they essentially give companies a “get-out-of-jail-free” card when caught violating privacy law. In this case, Arity was notified and given the chance to show it had cured the violation. It just didn’t.

According to the complaint, Arity apparently failed to take even basic steps that would have spared it from this enforcement action. Other companies violating our privacy may be more adept at getting out of trouble, but they should be found and taken to task too. That’s why we advocate for strong privacy laws that do even more to protect consumers.

Nineteen states now have some version of a data privacy law, but enforcement has been slower. California has brought a few enforcement actions since its privacy law went into effect in 2020, and Texas and New Hampshire have created dedicated data privacy units in their Attorney General offices, signaling they’re staffing up to enforce their laws. More state regulators should follow suit and use the privacy laws on their books. And more state legislators should enact and strengthen their laws to make sure companies are truly respecting our privacy.

Hayley Tsukayama

The FTC’s Ban on GM and OnStar Selling Driver Data Is a Good First Step

1 week 6 days ago

The Federal Trade Commission announced a proposed settlement under which General Motors and its subsidiary, OnStar, will be banned from selling geolocation and driver behavior data to credit agencies for five years. That’s good news for G.M. owners, but every car owner and driver deserves to be protected.

Last year, a New York Times investigation highlighted how G.M. was sharing information with insurance companies without drivers’ clear knowledge. This resulted in people’s insurance premiums increasing, sometimes without them realizing why. This data sharing problem was common among many carmakers, not just G.M., but figuring out what your car was sharing was often a Sisyphean task, somehow managing to be more complicated than trying to learn similar details about apps or websites.

The FTC complaint zeroed in on how G.M. enrolled people in its OnStar connected vehicle service through a misleading process. OnStar was initially designed to help drivers in an emergency, but over time the service collected and shared more data that had nothing to do with emergency services. The result was people signing up for the service without realizing they were agreeing to share their location and driver behavior data with third parties, including insurance companies and consumer reporting agencies. The FTC also alleged that G.M. didn’t disclose who the data was shared with (insurance companies) and for what purposes (to deny or set rates). Asking car owners to choose between safety and privacy is a nasty tactic, and one that deserves to be stopped.

For the next five years, the settlement bans G.M. and OnStar from these sorts of privacy-invasive practices, barring them from sharing driver behavior or geolocation data with consumer reporting agencies, which gather and sell consumers’ credit and other information. The companies must also obtain opt-in consent to collect data, allow consumers to obtain and delete their data, and give car owners an option to disable the collection of location data and driving information.

These are all important, solid steps, and these sorts of rules should apply to all carmakers. With privacy-related options buried away in websites, apps, and infotainment systems, it is currently far too difficult to see what sort of data your car collects, and it is not always possible to opt out of data collection or sharing. In reality, no consumer knowingly agrees to let their carmaker sell their driving data to other companies.

All carmakers should be forced to protect their customers’ privacy, and they should have to do so for longer than just five years. The best way to ensure that would be comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. With a strong privacy law, all carmakers—not just G.M.—would only have authority to collect, maintain, use, and disclose our data to provide a service that we asked for.

Thorin Klosowski