AI and Policing: 2024 in Review


There's no part of your life now where you can avoid the onslaught of "artificial intelligence." Whether you're trying to search for a recipe and sifting through AI-made summaries or listening to your cousin talk about how they've fired their doctor and replaced them with a chatbot, it seems, now more than ever, that AI is being sold as the solution to every problem. Meanwhile, some people are getting hideously rich by convincing people with money and influence that they must integrate AI into their business or operations.

Enter law enforcement.

When many tech vendors see police, they see dollar signs. Law enforcement’s got deep pockets. They are under political pressure to address crime. They are eager to find that one magic bullet that finally might do away with crime for good. All of this combines to make them a perfect customer for whatever way technology companies can package machine-learning algorithms that sift through historical data in order to do recognition, analytics, or predictions.

AI in policing takes many forms that can be traced back decades, including face recognition, predictive policing, data analytics, and automated gunshot recognition. But this year saw the rise of a new and troubling development at the intersection of policing and artificial intelligence: AI-generated police reports.

Egged on by companies like Truleo and Axon, a market is growing rapidly for vendors that use a large language model to write police reports for officers. In Axon's case, this is done by using the audio from police body-worn cameras to create narrative reports with minimal officer input beyond a prompt to add a few details here and there.
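
To make the mechanics concrete, here is a minimal sketch of how such a report generator might be wired together, assuming a generic speech-to-text step and a generic LLM call. Every function name here is a hypothetical stand-in; no vendor, Axon included, has published its actual pipeline.

```python
# Illustrative sketch only. transcribe() and llm_complete() are
# hypothetical stand-ins for the proprietary speech-to-text and
# large-language-model components a vendor would actually use.

def transcribe(audio_path: str) -> str:
    """Stand-in for a speech-to-text service run on bodycam audio."""
    return "Dispatch, subject is refusing to comply..."  # canned demo text

def llm_complete(prompt: str) -> str:
    """Stand-in for an LLM API call; any hallucination lands in the report."""
    return "On the above date and time, I contacted the subject, who..."

def draft_police_report(audio_path: str, officer_details: str) -> str:
    transcript = transcribe(audio_path)
    prompt = (
        "Write a first-person police incident report based on this "
        f"body-camera transcript:\n{transcript}\n"
        f"Officer-supplied details: {officer_details}\n"
    )
    # The narrative is machine-written; the officer's role shrinks to a
    # short prompt and a sign-off checkbox.
    return llm_complete(prompt)

print(draft_police_report("incident_0142.wav", "Subject dropped a bag."))
```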

We wrote about what can go wrong when towns start letting their police write reports using AI. First and foremost, no matter how many boxes police check to say they are responsible for the content of a report, when cross-examination reveals lies in it, officers will now have a veneer of plausible deniability by saying, "the AI wrote that part." After all, we've all heard of AI hallucinations at this point, right? And don't we all just click through terms of service without reading them carefully?

And we have so many more questions. Translation is an art, not a science. How will this AI understand and depict things like physical conflict, or rhetorical tools of policing like the phrases "stop resisting" and "drop the weapon," which officers sometimes utter even when a person is unarmed or not resisting? How well does it understand sarcasm? Slang? Regional dialect? Languages other than English? And even if the tool was not explicitly built to handle these situations, officers left to their own devices will use it for any and all reports.

Prosecutors in Washington have even asked police not to use AI to write police reports (for now) out of fear that errors might jeopardize trials.

Countless movies and TV shows have depicted police hating paperwork, and if these pop-culture representations are any indicator, we should expect this technology to spread rapidly in 2025. That's why EFF is monitoring its spread closely and will provide more information as we learn more about how it's being used.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Matthew Guariglia

Fighting Online ID Mandates: 2024 In Review


This year, nearly half of U.S. states passed laws imposing age verification requirements on online platforms. EFF has opposed these efforts because they censor the internet and burden access to online speech. Though age verification mandates are often touted as “online safety” measures for kids, the laws actually do more harm than good. They undermine the fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security.

Age verification bills generally require online services to verify all users' ages—often through invasive tools like ID checks, biometric scans, and other dubious "age estimation" methods—before granting them access to certain online content or services. Some state bills mandate age verification explicitly, including Texas's H.B. 1181, Florida's H.B. 3, and Indiana's S.B. 17. Other state bills claim not to require age verification, but still threaten platforms with liability for showing certain content or features to minor users. These bills—including Mississippi's H.B. 1126, Ohio's Parental Notification by Social Media Operators Act, and the federal Kids Online Safety Act—raise the question: how are platforms to know which users are minors without imposing age verification?

EFF's answer: they can't. We call these bills "implicit age verification mandates" because, though they might expressly deny requiring age verification, they still force platforms either to impose age verification measures or, worse, to censor whatever content or features are deemed "harmful to minors" for all users—not just young people—in order to avoid liability.

Age verification requirements are the wrong approach to protecting young people online. No one should have to hand over their most sensitive personal information or submit to invasive biometric surveillance just to access lawful online speech.

EFF’s Work Opposing State Age Verification Bills

Last year, we saw a slew of dangerous social media regulations for young people introduced across the country. This year, the flood of ill-advised bills grew larger. As of December 2024, nearly every U.S. state legislature has introduced at least one age verification bill, and nearly half the states have passed at least one of these proposals into law.

Courts agree with our position on age verification mandates. Across the country, courts have repeatedly and consistently held these so-called “child safety” bills unconstitutional, confirming that it is nearly impossible to impose online age-verification requirements without violating internet users’ First Amendment rights. In 2024, federal district courts in Ohio, Indiana, Utah, and Mississippi enjoined those states’ age verification mandates. The decisions underscore how these laws, in addition to being unconstitutional, are also bad policy. Instead of seeking to censor the internet or block young people from it, lawmakers seeking to help young people should focus on advancing legislation that solves the most pressing privacy and competition problems for all users—without restricting their speech.

Here’s a quick review of EFF’s work this year to fend off state age verification mandates and protect digital rights in the face of this legislative onslaught.

California

In January, we submitted public comments opposing an especially vague and poorly written proposal: California Ballot Initiative 23-0035, which would allow plaintiffs to sue online information providers for damages of up to $1 million if they violate their “responsibility of ordinary care and skill to a child.” We pointed out that this initiative’s vague standard, combined with extraordinarily large statutory damages, will severely limit access to important online discussions for both minors and adults, and cause platforms to censor user content and impose mandatory age verification in order to avoid this legal risk. Thankfully, this measure did not make it onto the 2024 ballot.

In February, we filed a friend-of-the-court brief arguing that California's Age Appropriate Design Code (AADC) violated the First Amendment. Our brief asked the Ninth Circuit Court of Appeals to rule narrowly that the AADC's age estimation scheme and vague description of "harmful content" render the entire law unconstitutional, even though the bill also contained several privacy provisions that, stripped of the unconstitutional censorship provisions, could otherwise survive. In its decision in August, the Ninth Circuit confirmed that parts of the AADC likely violate the First Amendment and provided legislatures a helpful roadmap for writing privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to specifically strike down the AADC's age-verification provision.

Later in the year, we also filed a letter to California lawmakers opposing A.B. 3080, a proposed state bill that would have required internet users to show their ID in order to look at sexually explicit content. Our letter explained that bills that allow politicians to define what "sexually explicit" content is and enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. We declared victory in September when the bill failed to pass the legislature.

New York

Similarly, after New York passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act earlier this year, we filed comments urging the state attorney general (who is responsible for writing the rules to implement the bill) to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. We also noted that none of the many age verification methods listed in the attorney general's call for comments is both privacy-protective and entirely accurate, as various experts have reported.

Texas

We also took the fight to Texas, which passed a law requiring all Texas internet users, including adults, to submit to invasive age verification measures on every website deemed by the state to be at least one-third composed of sexual material. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect—creating a split among federal circuit courts on the constitutionality of age verification mandates. In May, we filed an amicus brief urging the U.S. Supreme Court to grant review of the Fifth Circuit’s decision and to ultimately overturn the Texas law on First Amendment grounds.

In September, after the Supreme Court accepted the Texas case, we filed another amicus brief on the merits. We pointed out that the Fifth Circuit’s flawed ruling diverged from decades of legal precedent recognizing, correctly, that online ID mandates impose greater burdens on our First Amendment rights than in-person age checks. We explained that there is nothing about this Texas law or advances in technology that would lessen the harms that online age verification mandates impose on adults wishing to exercise their constitutional rights. The Supreme Court has set this case, Free Speech Coalition v. Paxton, for oral argument in February 2025.

Mississippi

Finally, we supported the First Amendment challenge to Mississippi’s age verification mandate, H.B. 1126, by filing amicus briefs both in the federal district court and on appeal to the Fifth Circuit. Mississippi’s extraordinarily broad law requires social media services to verify the ages of all users, to obtain parental consent for any minor users, and to block minor users from exposure to materials deemed “harmful” by state officials.

In our June brief for the district court, we once again explained that online age verification laws are fundamentally different and more burdensome than laws requiring adults to show their IDs in physical spaces, and impose significant barriers on adults’ ability to access lawful speech online. The district court agreed with us, issuing a decision that enjoined the Mississippi law and heavily cited our amicus brief.

Upon Mississippi’s appeal to the Fifth Circuit, we filed another amicus brief—this time highlighting H.B. 1126’s dangerous impact on young people’s free expression. After all, minors enjoy the same First Amendment right as adults to access and engage in protected speech online, and online spaces are diverse and important spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics—and seek critical resources and support for the very same harms these bills claim to address. In our brief, we urged the court to recognize that age-verification regimes like Mississippi’s place unnecessary and unconstitutional barriers between young people and these online spaces that they rely on for vibrant self-expression and crucial support.

Looking Ahead

As 2024 comes to a close, the fight against online age verification is far from over. As the state laws continue to proliferate, so too do the legal challenges—several of which are already on file.

EFF’s work continues, too. As we move forward in state legislatures and courts, at the federal level here in the United States, and all over the world, we will continue to advocate for policies that protect the free speech, privacy, and security of all users—adults and young people alike. And, with your help, we will continue to fight for the future of the open internet, ensuring that all users—especially the youth—can access the digital world without fear of surveillance or unnecessary restrictions.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Molly Buckley

Federal Regulators Limit Location Brokers from Selling Your Whereabouts: 2024 in Review


The opening and closing months of 2024 saw federal enforcement against a number of location data brokers that track and sell users' whereabouts through apps installed on their smartphones. In January, the Federal Trade Commission brought successful enforcement actions against X-Mode Social and InMarket, banning the companies from selling precise location data—the first prohibition of its kind for the FTC. And in December, the FTC widened its net to two additional companies—Gravy Analytics (Venntel) and Mobilewalla—barring them from selling or disclosing location data on users visiting sensitive areas such as reproductive health clinics or places of worship. In previous years, the FTC has sued location brokers such as Kochava, but the invasive practices of these companies have only gotten worse. Seeing the federal government ramp up enforcement is a welcome development for 2024.

As regulators have clearly stated, location information is sensitive personal information. Companies can glean location information from your smartphone in a number of ways. Some data brokers offer software development kits (SDKs) that, once embedded in an app, instruct it to send back troves of sensitive information for analytical insights or debugging purposes; the brokers may offer market insights or financial incentives to app developers who include their SDKs. Other companies don't ask apps to include their SDKs at all, but instead participate in real-time bidding (RTB) auctions, placing bids for ad space on devices in locations they specify. Even if they lose the auction, they can glean valuable device location information just by participating. Often, apps ask for permissions such as location access for legitimate reasons aligned with the app's purpose: a price comparison app, for example, might use your whereabouts to show you the cheapest local vendor of a product you're interested in. What you aren't told is that your location is also shared with companies tracking you.
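
To make the RTB leak concrete, here is a toy sketch of a data broker acting as a bidder. This is our illustration, not any exchange's actual API; the field names loosely echo the OpenRTB convention of sending a device object, with an advertising ID and geolocation, in every bid request.

```python
# Toy model of the real-time bidding leak. Field names loosely echo
# OpenRTB; this is an illustration, not any exchange's actual API.
bid_request = {
    "id": "auction-123",
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "geo": {"lat": 37.7793, "lon": -122.4193},      # device location
    },
}

class HarvestingBidder:
    """A 'bidder' that records device locations whether or not it wins."""

    def __init__(self):
        self.location_log = []

    def handle_bid_request(self, request: dict) -> float:
        device = request["device"]
        # The leak: the request itself reveals where the device is.
        self.location_log.append(
            (device["ifa"], device["geo"]["lat"], device["geo"]["lon"])
        )
        return 0.0001  # bid a pittance; losing the auction costs nothing

broker = HarvestingBidder()
broker.handle_bid_request(bid_request)
print(broker.location_log)  # location captured without ever winning an ad
```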

A number of revelations this year gave us better insight into how the location data broker industry works, revealing the inner workings of powerful tools such as Locate X, which allows even those merely claiming they might work with law enforcement at some point in the future to access troves of mobile location data from across the planet. The mobile location tracking company FOG Data Science, which EFF revealed in 2022 was selling troves of location information to local police, was found this year to also be soliciting law enforcement for information on suspects' doctors in order to track the suspects via their doctor visits.

A number of revelations this year gave us better insight into how the location data broker industry works

EFF detailed how these tools can be stymied via technical means, such as changing a few key settings on your mobile device to disallow data brokers from linking your location across space and time. We further outlined legislative avenues to ensure structural safeguards are put in place to protect us all from an out-of-control predatory data industry.
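
As a toy illustration of the linking those settings are meant to break (our sketch, not any broker's actual code): brokers typically join location pings over time on the device's stable advertising ID, so disabling or resetting that ID leaves them with isolated points rather than a movement trail.

```python
from collections import defaultdict

# Toy location pings: (advertising_id, time, lat, lon). A zeroed-out ID
# is what a device reports after the user opts out of ad tracking.
pings = [
    ("abc-123", "08:02", 37.770, -122.420),        # home
    ("abc-123", "12:15", 37.780, -122.390),        # clinic
    ("abc-123", "18:40", 37.770, -122.420),        # home again
    ("00000000-0000", "09:00", 37.779, -122.419),  # opted-out device
]

trails = defaultdict(list)
for ad_id, ts, lat, lon in pings:
    if ad_id.startswith("00000000"):
        continue  # no stable identifier, so nothing to join on
    trails[ad_id].append((ts, lat, lon))

# A stable ID lets a broker reconstruct a full daily movement pattern;
# the opted-out device contributes no trail at all.
for ad_id, trail in trails.items():
    print(ad_id, "->", trail)
```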

In addition to FTC action, the Consumer Financial Protection Bureau proposed a new rule meant to crack down on the data broker industry. As the CFPB mentioned, data brokers compile highly sensitive information—like information about a consumer's finances, the apps they use, and their location throughout the day. The rule would include stronger consent requirements and protections for personal data that has been purportedly de-identified. Given the abuses the announcement cites, including the distribution and sale of “detailed personal information about military service members, veterans, government employees, and other Americans,” we hope to see adoption and enforcement of this proposed rule in 2025.

This year has seen a strong regulatory appetite to protect consumers from harms which in bygone years would have seemed unimaginable: detailed records on the movements of nearly everyone, packaged and made available for pennies. We hope 2025 continues this appetite to address the dangers of location data brokers.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Bill Budington

【JCJ Online Lecture】The Terror of "PFOA": A Pollution Crisis of the Reiwa Era? Lecturer: Nanami Nakagawa (journalist). Monday, January 13, 2025 (national holiday), 2:00-4:00 p.m.

■Purpose: Organic fluorine compounds known as "PFAS" are called "forever chemicals" because they do not break down in the natural environment. Of the roughly 10,000 varieties, the most toxic are "PFOS" and "PFOA," which are used in firefighting foam, frying pans, waterproofing sprays, food packaging paper, and more. They have been detected around U.S. military bases in Japan and around factories, and the contamination has spread to groundwater. As residents worried about their drinking water live in anxiety, the WHO (World Health Organization) in November 2023 classified PFOA as carcinogenic and PFOS as possibly carcinogenic..
JCJ

Exposing Surveillance at the U.S.-Mexico Border: 2024 Year in Review in Pictures


Some of the most picturesque landscapes in the United States can be found along the border with Mexico. Yet, from San Diego’s beaches to the Sonoran Desert, from Big Bend National Park to the Boca Chica wetlands, we see vistas marred by the sinister spread of surveillance technology, courtesy of the federal government.  

EFF refuses to let this blight grow without documenting it, exposing it, and finding ways to fight back alongside the communities that live in the shadow of this technological threat to human rights.  

Here's a gallery of images representing our work and the new developments we've discovered in border surveillance in 2024.

1. Mapping Border Surveillance  

EFF’s stand-up display of surveillance at the US-Mexico border. Source: EFF

EFF published the first iteration of our map of surveillance towers at the U.S.-Mexico border in Spring 2023, having pinpointed the precise locations of 290 towers, a fraction of what we knew might be out there. A year and a half later, with the help of local residents, researchers, and search-and-rescue groups, our map now includes more than 500 towers.

In many cases, the towers are brand new, with some going up as recently as this fall. We’ve also added the location of surveillance aerostats, checkpoint license plate readers, and face recognition at land ports of entry. 

In addition to our online map, we also created a 10' x 7' display that we debuted at "Regardless of Frontiers: The First Amendment and the Exchange of Ideas Across Borders," a symposium held by the Knight First Amendment Institute at Columbia University in October. If your institution would be interested in hosting it, please email us at aos@eff.org.

2. Infrastructures of Control

The Infrastructures of Control exhibit at University of Arizona. Source: EFF

Two University of Arizona geographers—Colter Thomas and Dugan Meyer—used our map to explore the border, driving on dirt roads and hiking in the desert, to document the infrastructure that comprises the so-called "virtual wall." The result: "Infrastructures of Control," a photography exhibit in April at the University of Arizona that also included a near-actual-size replica of an "autonomous surveillance tower."

You can read our interview with Thomas and Meyer here.

3. An Old Tower, a New Lease in Calexico 

A remote video surveillance system in Calexico, Calif. Source: EFF

Way back in 2000, the Immigration and Naturalization Service—which oversaw border security prior to the creation of Customs and Border Protection (CBP) within the Department of Homeland Security (DHS)—leased a small square of land in a public park in Calexico, Calif., where it then installed one of the earliest border surveillance towers. The lease lapsed in 2020, and with plans for a massive surveillance upgrade looming, CBP rushed to renew it this year.

This was especially concerning because of CBP’s new strategy of combining artificial intelligence with border camera feeds.  So EFF teamed up with the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition to try to convince the Calexico City Council to either reject the lease or demand that CBP enact better privacy protections for residents in the neighboring community and children playing in Nosotros Park. Unfortunately, local politics were not in our favor. However, resisting border surveillance is a long game, and EFF considers it a victory that this tower even got a public debate at all. 

4. Aerostats Up in the Air 

The Tactical Aerostat System at Santa Teresa Station. Source: Battalion Search and Rescue (CC BY)

CBP seems incapable of developing a coherent strategy when it comes to tactical aerostats—tethered blimps equipped with long-range, high-definition cameras. In 2021, the agency said it wanted to cancel the program, which involved four aerostats in the Rio Grande Valley, before reversing itself. Then in 2022, CBP launched new aerostats in Nogales, Ariz., and Columbus, N.M., and announced plans to launch 17 more within a year.

But by 2023, CBP had left the program out of its proposed budget, saying the aerostats would be decommissioned. 

And yet, in fall 2024, CBP launched a new aerostat at the Santa Teresa Border Patrol Station in New Mexico. Our friends at Battalion Search & Rescue gathered photo evidence for us. Soon after, CBP issued a new solicitation for the aerostat program, and a member of Congress told Border Report that the aerostats may be upgraded and that as many as 12 new ones may be acquired by CBP via the Department of Defense.

Meanwhile, one of CBP's larger Tethered Aerostat Radar Systems, in Eagle Pass, Texas, was down for most of the year after deflating in high winds. CBP has reportedly not been interested in paying hundreds of thousands of dollars to get it up again.

5. New Surveillance in Southern Arizona

A Buckeye Camera on a pole along the border fence near Sasabe, Ariz. Source: EFF

Buckeye Cameras are motion-triggered cameras that were originally designed for hunters and ranchers to spot wildlife, but border enforcement authorities—both federal and state/local—realized years ago that they could be used to photograph people crossing the border. These cameras are often camouflaged (e.g. hidden in trees, disguised as garbage, or coated in sand).  

Now, CBP is expanding its use of Buckeye Cameras. During a trip to Sasabe, Ariz., we discovered that CBP is now placing Buckeye Cameras at checkpoints, welding them to the border fence, and installing metal poles, wrapped in concertina wire, with Buckeye Cameras at the top.

A surveillance tower along the highway west of Tucson. Source: EFF

On that same trip to Southern Arizona, EFF (along with the Infrastructures of Control geographers) passed through a checkpoint west of Tucson, where previously we had identified a relocatable surveillance tower. But this time it was gone. Why, we wondered? Our question was answered just a minute or two later, when we spotted a new surveillance tower on a nearby hilltop, a model we had not previously seen deployed in the wild.

6. Artificial Intelligence  

A graphic from a January 2024 "Industry Day" event. Source: Customs & Border Protection

CBP and other agencies regularly hold "Industry Days" to brief contractors on the new technology and capabilities the agency may want to buy in the near future. In January, EFF attended one such "Industry Day," designed to bring tech vendors up to speed on the government's horrific vision of a border secured by artificial intelligence (see the graphic above for an example of that vision).

A graphic from a January 2024 "Industry Day" event. Source: Customs & Border Protection

At this event, CBP released the convoluted flow chart above as part of a slide show. Since it's so difficult to parse, here's the best sense we can make of it: when someone crosses the border, they trigger an unattended ground sensor (UGS); a camera then autonomously detects, identifies, classifies, and tracks the person, handing them off from camera to camera, until the AI system eventually alerts Border Patrol to dispatch someone to intercept and detain them.
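
Sketched as a toy simulation, our reading of that chart looks something like the following; all of the names here are ours, not CBP's.

```python
# A toy rendering of our reading of CBP's flow chart. All class and
# function names are ours, not CBP's.

class SurveillanceCamera:
    def __init__(self, name: str):
        self.name = name

    def detect_identify_classify_track(self, target: str) -> None:
        print(f"{self.name}: autonomously tracking {target}")

def border_crossing(target: str, towers: list) -> None:
    print(f"Unattended ground sensor (UGS) tripped by {target}")
    for tower in towers:
        # Hand the target off from camera to camera as they move.
        tower.detect_identify_classify_track(target)
    print(f"AI system alerts Border Patrol to intercept {target}")

border_crossing(
    "person-1",
    [SurveillanceCamera("tower-A"), SurveillanceCamera("tower-B")],
)
```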

7. Congress in Virtual Reality

Rep. Scott Peters on our VR tour of the border. Source: Peters’ Instagram

We search for surveillance on the ground. We search for it in public records. We search for it in satellite imagery. But we’ve also learned we can use virtual reality in combination with Google Streetview not only to investigate surveillance, but also to introduce policymakers to the realities of the policies they pass. This year, we gave Rep. Scott Peters (D-San Diego) and his team a tour of surveillance at the border in VR, highlighting the impact on communities.  

"[EFF] reminded me of the importance of considering cost-effectiveness and Americans' privacy rights," Peters wrote afterward in a social media post.

We also took members of Rep. Mark Amodei’s (R-Reno) district staff on a similar tour. Other Congressional staffers should contact us at aos@eff.org if you’d like to try it out.  

Learn more about how EFF uses VR to research the border in this interview and this lightning talk.  

8. Indexing Border Tech Companies 

An HDT Global vehicle at the 2024 Border Security Expo. Source: Dugan Meyer (CC0 1.0 Universal)

In partnership with the Heinrich Böll Foundation, EFF and University of Nevada, Reno student journalist Andrew Zuker built a dataset of hundreds of vendors marketing technology to the U.S. Department of Homeland Security. As part of this research, Zuker journeyed to El Paso, Texas for the Border Security Expo, where he systematically gathered information from all the companies promoting their surveillance tools. You can read Zuker’s firsthand report here.

9. Plataforma Centinela Inches Skyward 

An Escorpión unit, part of the state of Chihuahua’s Plataforma Centinela project. Source: EFF

In fall 2023, EFF released its report on the Plataforma Centinela, a massive surveillance network being built by the Mexican state of Chihuahua in Ciudad Juarez that will include 10,000+ cameras, face recognition, artificial intelligence, and tablets that police can use to access all this data from the field. At its center is the Torre Centinela, a 20-story headquarters that was supposed to be completed in 2024.  

The site of the Torre Centinela in downtown Ciudad Juarez. Source: EFF

We visited Ciudad Juarez in May 2024 and saw that indeed, new cameras had been installed along roadways, and the government had begun using “Escorpión” mobile surveillance units, but the tower was far from being completed. A reporter who visited in November confirmed that not much more progress had been made, although officials claim that the system will be fully operational in 2025.

10. EFF’s Border Surveillance Zine 

Do you want to review even more photos of surveillance that can be found at the border, whether they're planted in the ground, installed by the side of the road, or floating in the air? Download EFF's new zine in English or Spanish—or if you live or work in the border region, email us at aos@eff.org and we'll mail you hard copies.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Dave Maass

Fighting Automated Oppression: 2024 in Review


EFF has been sounding the alarm on algorithmic decision-making (ADM) technologies for years. ADMs use data and predefined rules or models to make or support decisions, often with minimal human involvement, and in 2024 the field was more active than ever, with landlords, employers, regulators, and police adopting new tools that have the potential to impact both personal freedom and access to necessities like medicine and housing.

This year, we wrote detailed reports and comments to U.S. and international governments explaining that ADM poses a high risk of harming human rights, especially with regard to fairness and due process. Machine learning algorithms that enable ADM in complex contexts attempt to reproduce the patterns they discern in an existing dataset. If you train a model on a biased dataset, such as records of whom the police have arrested or who historically gets approved for health coverage, then you are creating a technology to automate systemic, historical injustice. And because these technologies don't (and typically can't) explain their reasoning, challenging their outputs is very difficult.

If you train it on a biased dataset, you are creating a technology to automate systemic, historical injustice.
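
Here is a tiny worked example of that failure mode, using fabricated toy numbers purely for illustration: a model fit to skewed arrest records learns the historical policing pattern and reports it back as "risk."

```python
from collections import Counter

# Fabricated toy history of (neighborhood, arrested) records. The skew
# reflects where police patrolled, not where offenses actually occurred.
history = (
    [("A", 1)] * 90 + [("A", 0)] * 10 +  # heavily policed neighborhood
    [("B", 1)] * 10 + [("B", 0)] * 90    # lightly policed neighborhood
)

# "Training": estimate P(arrest | neighborhood) from the biased history.
arrests, totals = Counter(), Counter()
for hood, arrested in history:
    arrests[hood] += arrested
    totals[hood] += 1

def predicted_risk(hood: str) -> float:
    return arrests[hood] / totals[hood]

# The model rates residents of neighborhood A as nine times riskier,
# purely because A was policed more heavily in the training data.
print(predicted_risk("A"))  # 0.9
print(predicted_risk("B"))  # 0.1
```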

It’s important to note that decision makers tend to defer to ADMs or use them as cover to justify their own biases. And even though they are implemented to change how decisions are made by government officials, the adoption of an ADM is often considered a mere ‘procurement’ decision like buying a new printer, without the kind of public involvement that a rule change would ordinarily entail. This, of course, increases the likelihood that vulnerable members of the public will be harmed and that technologies will be adopted without meaningful vetting. While there may be positive use cases for machine learning to analyze government processes and phenomena in the world, making decisions about people is one of the worst applications of this technology, one that entrenches existing injustice and creates new, hard-to-discover errors that can ruin lives.

Vendors of ADM have been riding a wave of AI hype, and police, border authorities, and spy agencies have gleefully thrown taxpayer money at products that make it harder to hold them accountable while being unproven at offering any other ‘benefit.’ We’ve written about the use of generative AI to write police reports based on the audio from bodycam footage, flagged how national security use of AI is a threat to transparency, and called for an end to AI Use in Immigration Decisions.

The hype around AI and the allure of ADMs has further incentivized the collection of more and more user data.

The private sector is also deploying ADM to make decisions about people’s access to employment, housing, medicine, and more. People have an intuitive understanding of some of the risks this poses, with most Americans expressing discomfort about the use of AI in these contexts. Companies can make a quick buck firing people and demanding the remaining workers figure out how to implement snake-oil ADM tools to make these decisions faster, though it’s becoming increasingly clear that this isn’t delivering the promised productivity gains.

ADM can, however, help a company avoid being caught making discriminatory decisions that violate civil rights laws—one reason why we support mechanisms to prevent unlawful private discrimination using ADM. Finally, the hype around AI and the allure of ADMs has further incentivized the collection and monetization of more and more user data and more invasions of privacy online, part of why we continue to push for a privacy-first approach to many of the harmful applications of these technologies.

In EFF’s podcast episode on AI, we discussed some of the challenges posed by AI and some of the positive applications this technology can have when it’s not used at the expense of people’s human rights, well-being, and the environment. Unless something dramatically changes, though, using AI to make decisions about human beings is unfortunately doing a lot more harm than good.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Kit Walsh

State Legislatures Are The Frontline for Tech Policy: 2024 in Review


State lawmakers are increasingly shaping the conversation on technology and innovation policy in the United States. As Congress continues to deliberate key issues such as data privacy, police use of data, and artificial intelligence, lawmakers are rapidly advancing their own ideas into state law. That’s why EFF fights for internet rights not only in Congress, but also in statehouses across the country.

This year, some of that work has been to defend good laws we’ve passed before. In California, EFF worked to oppose and defeat S.B. 1076, by State Senator Scott Wilk, which would have undermined the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button to ask data brokers registered in California to remove their personal information. S.B. 1076 would have opened loopholes for data brokers to duck compliance with this common-sense, consumer-friendly tool. We were glad to stop it before it got very far.

Also in California, EFF worked with dozens of organizations led by ACLU California Action to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting. The bill would have made it easy for police to evade accountability, and we are glad to see the California legislature reject this dangerous bill. For the full rundown of our highlights and lowlights in California, you can check out our recap of this year's session.

EFF also supported efforts from the ACLU of Massachusetts to pass the Location Shield Act, which, as introduced, would have required companies to get consent before collecting or processing location data and largely banned the sale of location data. While the bill did not become law this year, we look forward to continuing the fight to push it across the finish line in 2025.

As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues.

States Continue to Experiment

Several states also introduced bills this year that raise issues similar to those of the federal Kids Online Safety Act, which attempts to address young people's safety online but instead introduces considerable censorship and privacy concerns.

For example, in California, we were able to stop A.B. 3080, authored by Assemblymember Juan Alanis. We opposed this bill for many reasons, including that its definition of "sexually explicit content" was unclear. This vagueness would have set up barriers for youth—particularly LGBTQ+ youth—seeking to access legitimate content online.

We also oppose any bills, including A.B. 3080, that require age verification to access certain sites or social media networks. Lawmakers in more than a dozen states filed bills with this requirement. As we said in comments to the New York Attorney General's office on the state's recently passed "SAFE for Kids Act," none of the age verification methods the state was considering is both privacy-protective and entirely accurate. Age-verification requirements harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.

We also continue to watch lawmakers attempting to regulate the creation and spread of deepfakes. Many of these proposals, while well-intentioned, are written in ways that likely violate First Amendment rights to free expression. In fact, less than a month after California's governor signed a deepfake bill into law, a federal judge put its enforcement on pause (via a preliminary injunction) on First Amendment grounds. We encourage lawmakers to explore ways to focus on the harms that deepfakes pose without endangering speech rights.

On a brighter note, some state lawmakers are learning from gaps in existing privacy law and working to improve standards. In the past year, both Maryland and Vermont advanced bills that significantly improve on the state privacy laws we've seen before. The Maryland Online Data Privacy Act (MODPA), authored by State Senator Dawn Gile and Delegate Sara Love (now State Senator Sara Love), contains strong data minimization requirements. Vermont's privacy bill, authored by State Rep. Monique Priestley, included the crucial right for individuals to sue companies that violate their privacy. Unfortunately, while the bill passed both houses, it was vetoed by Vermont Gov. Phil Scott. As private rights of action are among our top priorities in privacy laws, we look forward to seeing more bills this year that contain this important enforcement measure.

Looking Ahead to 2025

2025 will be a busy year for anyone who works in state legislatures. We already know that state lawmakers are working together on issues such as AI legislation. As we’ve said before, we look forward to being a part of these conversations and encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms. 

As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues. So, we’ll continue to work—along with partners at other advocacy organizations—to advise lawmakers and to speak up. We’re counting on our supporters and individuals like you to help us champion digital rights. Thanks for your support in 2024.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Hayley Tsukayama

[B] Strolling the Port Town of Livorno: Ciao! Italy Newsletter (Noriko Sato)

Our children finished their elementary school classes on December 20 and plunged into a 17-day winter break. In Italy, December 25 is Christmas, December 26 is St. Stephen's Day, January 1 is New Year's Day, and January 6 is the holiday known as Epiphany, so winter break runs all the way through January 6. Parents generally get only the calendar holidays off, so they typically go back to work on January 2.
日刊ベリタ

2024 Reading Retrospective, My Top Picks: A Peaceful Daily Life Snatched Away in an Instant, by Hidenori Goto (2024 JCJ Award winner)

It was a year in which I did little but read books about the Fukushima Daiichi nuclear accident and the courts. First was "あの日あのとき ふるさとアルバム 私たちの浪江町津島" (That Day, That Moment: A Hometown Album of Our Tsushima, Namie), photographed and written by Yasuko Baba (Tokyo Inshokan). Villagers gathering to plant rice, laughing over tea between tasks, a woman strolling with her grandchild, local high school students parading through town in costume, an elderly couple smiling at the camera... the book captures the unguarded faces of people going about their unchanging daily lives. The photographs are wrapped in tenderness, yet I felt an overwhelming dread. What they captured was the Fuku[shima] that existed before the TEPCO Fukushima Daiichi nuclear accident..
JCJ

EFF’s 2023 Annual Report Highlights a Year of Victories: 2024 in Review


Every fall, EFF releases its annual report, and 2023 was the year of Privacy First. Our annual report dives into our groundbreaking whitepaper along with victories in freeing the law, right to repair, and more. It's a great, easy-to-read summary of the year's work, and it contains interesting tidbits about the impact we've made—for instance, did you know that, as of 2023, 394,000 people had downloaded an episode of EFF's podcast, "How to Fix the Internet"? Or that EFF had donors in 88 countries?

As you can see in the report, EFF’s role as the oldest, largest, and most trusted digital rights organization became even more important when tech law and policy commanded the public’s attention in 2023. Major headlines pondered the future of internet freedom. Arguments around free speech, digital privacy, AI, and social media dominated Congress, state legislatures, the U.S. Supreme Court, and the European Union.

EFF intervened with logic and leadership to keep bad ideas from getting traction, and we articulated solutions to legitimate concerns with care and nuance in our whitepaper, Privacy First: A Better Way to Protect Against Online Harms. It demonstrated how seemingly disparate concerns are in fact linked to the dominance of tech giants and the surveillance business models used by most of them. We noted how these business models also feed law enforcement’s increasing hunger for our data. We pushed for a comprehensive approach to privacy instead and showed how this would protect us all more effectively than harmful censorship strategies.  

The longest-running fight we won in 2023 was to free the law: in our legal representation of PublicResource.org, we successfully ensured that copyright law does not block you from finding, reading, and sharing laws, regulations, and building codes online. We also won a major victory in helping to pass a law in California to increase tech users' ability to control their information. In states across the nation, we helped boost the right to repair. Due to the efforts of the many technologists and advocates involved with Let's Encrypt, HTTPS Everywhere, and Certbot over the last 10 years, as much as 95% of the web is now encrypted. And that's just barely scratching the surface.

Read the Report

Obviously, we couldn’t do any of this without the support of our members, large and small. Thank you. Take a look at the report for more information about the work we’ve been able to do this year thanks to your help.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Allison Morris

Aerial and Drone Surveillance: 2024 in Review


We've been fighting against aerial surveillance for decades because we recognize the immense threat from Big Brother in the sky. Even within the confines of your own backyard, you are exposed to eyes from above.

Aerial surveillance was first conducted with manned aircraft, which the Supreme Court held was permissible without a warrant in a pair of cases in the 1980s. But, as we've argued to courts, drones have changed the equation. Drones were developed by the military before being adopted by domestic law enforcement. And in the past decade, commercial drone makers began marketing to civilians, making drones ubiquitous in our lives and exposing us to being watched from above by the government and our neighbors. But we believe that when we're in the constitutionally protected areas of our backyards or homes, we have the right to privacy, no matter how technology has advanced.

This year, we focused on fighting back against aerial surveillance facilitated by advances in these technologies. Unfortunately, many of the legal challenges to aerial and drone surveillance are hindered by those Supreme Court cases. But we argued that cases decided around the time people were playing Space Invaders on the Atari 2600 and watching The Goonies on VHS should not control the legality of conduct in the age of Animal Crossing and 4K streaming services. As nostalgic as those memories may be, laws from those times are just as outdated as 16K RAM packs and magnetic videotapes. And we have applauded courts for recognizing that.

  Unfortunately, the Supreme Court has failed to update its understanding of aerial surveillance, even though other courts have found certain types of aerial surveillance to violate the federal and state constitutions.  

Because of this ambiguity, law enforcement agencies across the nation have been quick to adopt various drone systems, especially those marketed as "drone as first responder" programs, which ostensibly allow police to assess a situation (whether it's dangerous, or whether it requires a police response at all) before officers arrive at the scene. Data from the Chula Vista Police Department in Southern California, which pioneered the model, shows that drones frequently respond to domestic violence calls, unspecified disturbances, and requests for psychological evaluations. Likewise, flight logs indicate the drones are often used to investigate crimes related to homelessness. The Brookhaven Police Department in Georgia has also adopted this model. While these programs sound promising in theory, municipalities have been reluctant to share the data, despite courts ruling that the information is not categorically closed to the public.

Additionally, while law enforcement agencies are quick to assure the public that their policies respect privacy concerns, those can be hollow assurances. The NYPD promised that it would not use drones to surveil constitutionally protected backyards, but Mayor Eric Adams decided to use them to spy on backyard parties over Labor Day in 2023 anyway. Without strict regulations in place, our privacy interests are at the whims of whoever holds power over these agencies.

Alarmingly, there are increasing calls by police departments and drone manufacturers to arm remote-controlled drones. After widespread backlash, including resignations from its ethics board, drone manufacturer Axon said in 2022 that it would pause a program to develop a taser-armed drone intended for school shooting scenarios. We're likely to see more proposals like this, including drones armed with pepper spray and other crowd-control weapons.

As drones carry more technological payloads and become cheaper, aerial surveillance has become a favorite tool of law enforcement and other government agencies. We must ensure that these technological developments do not encroach on our constitutional right to privacy.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Hannah Zhao

Restrictions on Free Expression and Access to Information in Times of Change: 2024 in Review


This was a historic year: a year in which elections took place in countries home to almost half the world's population, a year of war, and a year of collapse or chaos within several governments. It was also a year of new technological developments, policy changes, and legislation. Amidst these sweeping changes, freedom of expression has never been more important, and around the world, 2024 saw numerous challenges to it. From new legal restrictions on speech to wholesale internet shutdowns, here are just a few of the threats to freedom of expression online that we witnessed in 2024.

Internet shutdowns

It is sadly not surprising that, in a year in which national elections took place in at least 64 countries, internet shutdowns would be commonplace. Access Now, which tracks shutdowns and runs the KeepItOn Coalition (of which EFF is a member), found that seven countries—Comoros, Azerbaijan, Pakistan, India, Mauritania, Venezuela, and Mozambique—restricted access to the internet at least partially during election periods. These restrictions inhibit people from being able to share news of what’s happening on the ground, but they also impede access to basic services, commerce, and communications.

Repression of speech in times of conflict

But elections aren’t the only justification governments use for restricting internet access. In times of conflict or protest, access to internet infrastructure is key for enabling essential communication and reporting. Governments know this, and over the past decades, have weaponized access as a means of controlling the free flow of information. This year, we saw Sudan enact a total communications blackout amidst conflict and displacement. The Iranian government has over the past two years repeatedly restricted access to the internet and social media during protests. And Palestinians in Gaza have been subject to repeated internet blackouts inflicted by Israeli authorities.

Social media platforms have also played a role in restricting speech this year, particularly when it comes to Palestine. We documented unjust content moderation by companies at the request of Israel's Cyber Unit, submitted comment to Meta's Oversight Board on the use of the slogan "from the river to the sea" (with which the Oversight Board notably agreed), and submitted comment to the UN Special Rapporteur on Freedom of Expression and Opinion expressing concern about the disproportionate impact of restrictions on expression imposed by governments and companies.

In our efforts to ensure free expression is protected online, we collaborated with numerous groups and coalitions in 2024, including our own global content moderation coalition, the Middle East Alliance for Digital Rights, the DSA Human Rights Alliance, EDRI, and many others.

Restrictions on content, age, and identity

Another alarming 2024 trend was the growing push from several countries to restrict access to the internet by age, often by means of requiring ID to get online, thus inhibiting people's ability to identify as they wish. In Canada, an overbroad age verification bill, S-210, seeks to prevent young people from encountering sexually explicit material online, but would require all users to submit identification before going online. The UK's Online Safety Act, which EFF has opposed since its first introduction, would also require mandatory age verification, and would place penalties on websites and apps that host otherwise-legal content deemed "harmful" to minors by regulators. Similarly, in the United States, the Kids Online Safety Act (still under revision) would require companies to moderate "lawful but awful" content and subject users to privacy-invasive age verification. And in recent weeks, Australia has enacted a vague law that aims to block teens and children from accessing social media, marking a step back for free expression and privacy.

While these governments ostensibly seek to protect children from harm, such laws, as we have repeatedly demonstrated, can also hurt young people by preventing them from accessing information that is not taught in schools or otherwise accessible in their communities.

One group that is particularly impacted by these and other regulations enacted by governments around the world is the LGBTQ+ community. In June, we noted that censorship of online LGBTQ+ speech is on the rise in a number of countries. We continue to keep a close watch on governments that seek to restrict access to vital information and communications.

Cybercrime

We’ve been pushing back against cybercrime laws for a long time. In 2024, much of that work focused on the UN Cybercrime Convention, a treaty that would allow states to collect evidence across borders in cybercrime cases. While that might sound acceptable to many readers, the problem is that numerous countries utilize “cybercrime” as a means of punishing speech. One such country is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

EFF has fought back against Jordan’s cybercrime law, as well as bad cybercrime laws in China, Russia, the Philippines, and elsewhere, and we will continue to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Jillian C. York