Court Orders Google (a Monopolist) To Knock It Off With the Monopoly Stuff

1 month 3 weeks ago

A federal court recently ordered Google to make it easier for Android users to switch to rival app stores, banned Google from using its vast cash reserves to block competitors, and hit Google with a bundle of thou-shalt-nots and assorted prohibitions.

Each of these measures is well crafted, narrowly tailored, and purpose-built to accomplish something vital: improving competition in mobile app stores.

You love to see it.

Some background: the mobile OS market is a duopoly run by two dominant firms, Google (Android) and Apple (iOS). Both companies distribute software through their app stores (Google's is called "Google Play," Apple's is the "App Store"), and both companies use a combination of market power and legal intimidation to ensure that their users get all their apps from the company's store.

This creates a chokepoint: if you make an app and I want to run it, you have to convince Google (or Apple) to put it in their store first. That means that Google and Apple can demand all kinds of concessions from you, in order to reach me. The most important concession is money, and lots of it. Both Google and Apple demand 30 percent of every dime generated with an app - not just the purchase price of the app, but every transaction that takes place within the app after that. The companies have all kinds of onerous rules blocking app makers from asking their users to buy stuff on their website, instead of in the app, or from offering discounts to users who do so.

For the avoidance of doubt: 30 percent is a lot. The "normal" rate for payment processing is more like 2-5 percent, a commission that's gone up 40 percent since covid hit, a price hike that is itself attributable to monopoly power in the sector. That's bad, but Google and Apple demand ten times that (unless you qualify for their small business discount, in which case, they only charge five times more than the Visa/Mastercard cartel).

Epic Games - the company behind the wildly successful multiplayer game Fortnite - has been chasing Google and Apple through the courts over this for years, and last December, they prevailed in their case against Google.

This week's court ruling is the next step in that victory. Having concluded that Google illegally acquired and maintained a monopoly over apps for Android, the court had to decide what to do about it.

It's a great judgment: read it for yourself, or peruse the highlights in this excellent summary from The Verge.

For the next three years, Google must:

  • Allow third-party app stores for Android, and let those app stores distribute all the same apps as are available in Google Play (app developers can opt out of this);
  • Distribute third-party app stores as apps, so users can switch app stores by downloading a new one from Google Play, in just the same way as they'd install any app;
  • Allow apps to use any payment processor, not just Google's 30 percent money-printing machine;
  • Permit app vendors to tell users about other ways to pay for the things they buy in-app;
  • Permit app vendors to set their own prices.

Google is also prohibited from using its cash to fence out rivals, for example, by:

  • Offering incentives to app vendors to launch first on Google Play, or to be exclusive to Google Play;
  • Offering incentives to app vendors to avoid rival app stores;
  • Offering incentives to hardware makers to pre-install Google Play;
  • Offering incentives to hardware makers not to install rival app stores.

These provisions tie in with Google's other recent loss, in Google v. DoJ, where the company was found to have illegally maintained a monopoly over search. That case turned on the fact that Google paid unimaginably vast sums - more than $25 billion per year - to phone makers, browser makers, carriers, and, of course, Apple, to make Google Search the default. That meant that every search box you were likely to encounter would connect to Google, meaning that anyone who came up with a better search engine would have no hope of finding users.

What's so great about these remedies is that they strike at the root of the Google app monopoly. Google locks billions of users into its platform, and that means that software authors are at its mercy. By making it easy for users to switch from one app store to another, and by preventing Google from interfering with that free choice, the court is saying to Google, "You can only remain dominant if you're the best - not because you're holding 3.3 billion Android users hostage."

Interoperability - plugging new features, services and products into existing systems - is digital technology's secret superpower, and it's great to see the courts recognizing how a well-crafted interoperability order can cut through thorny tech problems. 

Google has vowed to appeal. They say they're being singled out because Apple won a similar case earlier this year. It's true that a different court got it wrong with Apple.

But Apple's not off the hook, either: the EU's Digital Markets Act took effect this year, and its provisions broadly mirror the injunction that just landed on Google. Apple responded to the EU by refusing to substantively comply with the law, teeing up another big, hairy battle.

In the meantime, we hope that other courts, lawmakers and regulators continue to explore the possible uses of interoperability to make technology work for its users. This order will have far-reaching implications, and not just for games like Fortnite: the 30 percent app tax is a millstone around the neck of all kinds of institutions, from independent game devs who are dolphins caught in Google's tuna net to the free press itself.

Cory Doctorow

Cop Companies Want All Your Data and Other Takeaways from This Year’s IACP Conference

1 month 3 weeks ago

Artificial intelligence dominated the technology talk on panels, among sponsors, and across the trade floor at this year’s annual conference of the International Association of Chiefs of Police (IACP).

IACP, held Oct. 19 - 22 in Boston, brings together thousands of police employees with the businesses who want to sell them guns, gadgets, and gear. Across the four-day schedule were presentations on issues like election security and conversations with top brass like Secretary of Homeland Security Alejandro Mayorkas. But the central attraction was clearly the trade show floor. 

Hundreds of vendors of police technology spent their days trying to attract new police customers and sell existing ones on their newest projects. Event sponsors included big names in consumer services, like Amazon Web Services (AWS) and Verizon, and police technology giants, like Axon. There was a private ZZ Top concert at TD Garden for the 15,000+ attendees. Giveaways — stuffed animals, espresso, beer, challenge coins, and baked goods — appeared alongside Cybertrucks, massage stations, and tables of police supplies: vehicles, cameras, VR training systems, and screens displaying software for recordkeeping and data crunching.

And vendors were selling more ways than ever for police to surveil the public and collect as much personal data as possible. EFF will continue to follow up on what we’ve seen in our research and at IACP.

A partial view of the vendor booths at IACP 2024


Doughnuts provided by police tech vendor Peregrine

“All in On AI” Demands Accountability

Police are pushing forward full speed ahead on AI. 

EFF’s Atlas of Surveillance tracks use of AI-powered equipment like face recognition, automated license plate readers, drones, predictive policing, and gunshot detection. We’ve seen a trend toward the integration of these various data streams, along with private cameras, AI video analysis, and information bought from data brokers. We’ve been following the adoption of real-time crime centers. Recently, we started tracking the rise of what we call Third Party Investigative Platforms, which are AI-powered systems that claim to sort or provide huge swaths of data, personal and public, for investigative use. 

The IACP conference featured companies selling all of these kinds of surveillance. Each day also included multiple panels on how AI could be integrated into local police work, with speakers like Axon founder Rick Smith, Chula Vista Police Chief Roxana Kennedy, and Fort Collins Police Chief Jeff Swoboda, whose agency was among the first to use Axon’s DraftOne, software that uses genAI to create police reports. Drone as First Responder (DFR) programs were prominently featured by Skydio, Flock Safety, and Brinc. Clearview AI marketed its face recognition software. Axon offered a whole set of different tools, centering its presentation around AxonAI and the computer-driven future.

The booth for police drone provider Brinc

The policing “solution” du jour is AI, but in reality it demands oversight, skepticism, and, in some cases, total elimination. AI in policing carries a dire list of risks, including extreme privacy violations, bias, false accusations, and the sabotage of our civil liberties. Adoption of such tools at minimum requires community control of whether to acquire them, and if adopted, transparency and clear guardrails. 

The Corporate/Law Enforcement Data Surveillance Venn Diagram Is Basically A Circle

AI cannot exist without data: data to train the algorithms, to analyze even more data, to trawl for trends and generate assumptions. Police have been accruing their own data for years through cases, investigations, and surveillance. Corporations have also been gathering information from us: our behavior online, our purchases, how long we look at an image, what we click on. 

As one vendor employee said to us, “Yeah, it’s scary.” 

Corporate harvesting and monetizing of our data is wildly unregulated. Data brokers have been busily vacuuming up whatever information they can. A whole industry provides law enforcement access to as much information about as many people as possible, and packages police data to “provide insights” and visualizations. At IACP, companies like LexisNexis, Peregrine, DataMinr, and others showed off how their platforms can give police access to ever more data from tens of thousands of sources.

Some Cops Care What the Public Thinks

Cops will move ahead with AI, but they would much rather do it without friction from their constituents. Some law enforcement officials remain shaken up by the global 2020 protests following the police murder of George Floyd. Officers at IACP regularly referred to the “public” or the “activists” who might oppose their use of drones and other equipment. One featured presentation, “Managing the Media's 24-Hour News Cycle and Finding a Reporter You Can Trust,” focused on how police can try to set the narrative that the media tells and the public generally believes. In another talk, Chula Vista showed off professionally-produced videos designed to win public favor. 

This underlines something important: Community engagement, questions, and advocacy are well worth the effort. While many police officers think privacy is dead, it isn’t. We should have faith that when we push back and exert enough pressure, we can stop law enforcement’s full-scale invasion of our private lives.

Cop Tech is Coming To Every Department

The companies that sell police spy tech, and many departments that use it, would like other departments to use it, too, expanding the sources of data feeding into these networks. In panels like “Revolutionizing Small and Mid-Sized Agency Practices with Artificial Intelligence,” and “Futureproof: Strategies for Implementing New Technology for Public Safety,” police officials and vendors encouraged agencies of all sizes to use AI in their communities. Representatives from state and federal agencies talked about regional information-sharing initiatives and ways smaller departments could be connecting and sharing information even as they work out funding for more advanced technology.

A Cybertruck at the booth for Skyfire AI

“Interoperability” and “collaboration” and “data sharing” are all the buzz. AI tools and surveillance equipment are available to police departments of all sizes, and that’s how companies, state agencies, and the federal government want it. It doesn’t matter if you think your Little Local Police Department doesn’t need or can’t afford this technology. Almost every company wants them as a customer, so they can start vacuuming their data into the company system and then share that data with everyone else. 

We Need Federal Data Privacy Legislation

There isn’t a comprehensive federal data privacy law, and it shows. Police officials and their vendors know that there are no guardrails from Congress preventing use of these new tools, and they’re typically able to navigate around piecemeal state legislation. 

We need real laws against this mass harvesting and marketing of our sensitive personal information — a real line in the sand that limits these data companies from helping police surveil us lest we cede even more of our rapidly dwindling privacy. We need new laws to protect ourselves from complete strangers trying to buy and search data on our lives, so we can explore and create and grow without fear of indefinite retention of every character we type, every icon we click. 

Having a computer, using the internet, or buying a cell phone shouldn’t mean signing away your life and its activities to any random person or company that wants to make a dollar off of it.

Beryl Lipton

EU to Apple: “Let Users Choose Their Software”; Apple: “Nah”

1 month 3 weeks ago

This year, a far-reaching, complex new piece of legislation comes into effect in the EU: the Digital Markets Act (DMA), which represents some of the most ambitious tech policy in European history. We don’t love everything in the DMA, but some of its provisions are great, because they center the rights of users of technology, and they do that by taking away some of the control platforms exercise over users, and handing that control back to the public who rely on those platforms.

Our favorite parts of the DMA are the interoperability provisions. IP laws in the EU (and the US) have all but killed the longstanding and honorable tradition of adversarial interoperability: that’s when you can alter a service, program or device you use, without permission from the company that made it. Whether that’s getting your car fixed by a third-party mechanic, using third-party ink in your printer, or choosing which apps run on your phone, you should have the final word. If a company wants you to use its official services, it should make the best services, at the best price – not use the law to force you to respect its business model.

It seems the EU agrees with us, at least on this issue. The DMA includes several provisions that force the giant tech companies that control so much of our online lives (AKA “gatekeeper platforms”) to provide official channels for interoperators. This is a great idea, though, frankly, lawmakers should also restore the right of tinkerers and hackers to reverse-engineer your stuff and let you make it work the way you want.

One of these interop provisions is aimed at app stores for mobile devices. Right now, the only (legal) way to install software on your iPhone is through Apple’s App Store. That’s fine, so long as you trust Apple and you think they’re doing a great job, but pobody’s nerfect, and even if you love Apple, they won’t always get it right – like when they tell you you’re not allowed to have an app that records civilian deaths from US drone strikes, or a game that simulates life in a sweatshop, or a dictionary (because it has swear words!). The final word on which apps you use on your device should be yours.

Which is why the EU ordered Apple to open up iOS devices to rival app stores, something Apple categorically refuses to do. Apple’s “plan” for complying with the DMA is, shall we say, sorely lacking (this is part of a grand tradition of American tech giants wiping their butts with EU laws that protect Europeans from predatory activity, like the years Facebook spent ignoring European privacy laws, manufacturing stupid legal theories to defend the indefensible).

Apple’s plan for opening the App Store is effectively impossible for any competitor to use, but this goes double for anyone hoping to offer free and open source software to iOS users. Without free software – operating systems like GNU/Linux, website tools like WordPress, programming languages like Rust and Python, and so on – the internet would grind to a halt.

Our dear friends at Free Software Foundation Europe (FSFE) have filed an important brief with the European Commission, formally objecting to Apple’s ridiculous plan on the grounds that it effectively bars iOS users from choosing free software for their devices.

FSFE’s brief makes a series of legal arguments, rebutting Apple’s self-serving theories about what the DMA really means. FSFE shoots down Apple’s tired argument that copyrights and patents override any interoperability requirements. U.S. courts have been inconsistent on this issue, but we’re hopeful that the Court of Justice of the E.U. will reject the “intellectual property trump card.” Even more importantly, FSFE makes moral and technical arguments about the importance of safeguarding the technological self-determination of users by letting them choose free software, and about why this is as safe – or safer – than giving Apple a veto over its customers’ software choices.

Apple claims that because you might choose bad software, you shouldn’t be able to choose software, period. They say that if competing app stores are allowed to exist, users won’t be safe or private. We disagree – and so do some of the most respected security experts in the world.

It’s true that Apple can use its power wisely to ensure that you only choose good software. But it’s also used that power to attack its users, like in China, where Apple blocked all working privacy tools from iPhones and then neutered a tool used to organize pro-democracy protests.

It’s not just in China, either. Apple has blanketed the world with billboards celebrating its commitment to its users’ privacy, and it made good on that promise, blocking third-party surveillance (to the $10 billion chagrin of Facebook). But right in the middle of all that, Apple also started secretly spying on iOS users to fuel its own surveillance advertising network, and then lied about it.

Pobody’s nerfect. If you trust Apple with your privacy and security, that’s great. But people who don’t trust Apple to have the final word – people who value software freedom, or privacy (from Apple), or democracy (in China) – should have the final say.

We’re so pleased to see the EU making tech policy we can get behind – and we’re grateful to our friends at FSFE for holding Apple’s feet to the fire when they flout that law.

Cory Doctorow

The Real Monsters of Street Level Surveillance

1 month 3 weeks ago

Safe trick-or-treating this Halloween means being aware of the real monsters of street-level surveillance. You might not always see these menaces, but they are watching you. The real-world harms of these terrors wreak havoc on our communities. Here, we highlight just a few of the beasts. To learn more about all of the street-level surveillance creeps in your community, check out our even-spookier resource, sls.eff.org.

If your blood runs too cold, take a break with our favorite digital rights legends—the Encryptids.

The Face Stealer

Careful where you look. Around any corner may loom the Face Stealer, an arachnid mimic that captures your likeness with just a glance. Is that your mother in the woods? Your roommate down the alley? The Stealer thrives on your dread and confusion, luring you into its web. Everywhere you go, strangers and loved ones alike recoil, convinced you’re something monstrous. Survival means adapting to a world where your face is no longer yours—it’s a lure for the horror that claimed it.

The Real Monster

Face recognition technology (FRT) might not jump out at you, but the impacts of this monster are all too real. EFF wants to banish this monster with a full ban on government use, and to prohibit companies from feeding on this data without permission. FRT is a tool for mass surveillance, snooping on protesters, and deepening social inequalities.

Three-eyed Beast

Freeze! In your weakest moment, you may  encounter the Three-Eyed Beast—and you don’t want to make any sudden movements. As it snarls, its third eye cracks open and sends a chill through your soul. This magical gaze illuminates your every move, identifying every flaw and mistake. The rest of the world is shrouded in darkness as its piercing squeals of delight turn you into a spectacle—sometimes calling in foes like the Face Stealer. The real fear sets in when the eye closes once more, leaving you alone in the shadows as you realize its gaze was the last to ever find you. 

The Real Monster

Body-worn cameras are marketed as a fix for police transparency, but instead our communities get another surveillance tool pointed at us. Officers often decide when to record and what happens to the footage, leading to selective use that shields misconduct rather than exposes it. Even worse, these cameras can house other surveillance threats like Face Recognition Technology. Without strict safeguards, and community control of whether to adopt them in the first place, these cameras do more harm than good.

Shrapnel Wraith

If you spot this whirring abomination, it’s likely too late. The Shrapnel Wraith circles, unleashed on our most under-served and over-terrorized communities. This twisted heap of bolts and gears is puppeted by spiteful spirits into the gestalt form of a vulture. It watches your most private moments, but don’t mistake it for a mere voyeur; it also strikes with lethal force. Its junkyard shrapnel explodes through the air, only for two more vultures to rise from the wreckage. Its shadow swallows the streets, its buzzing sinking through your skin. Danger is circling just overhead.

The Real Monster

Drones and robots give law enforcement constant and often unchecked surveillance power. Frequently equipped with tools like high-definition cameras, heat sensors, and license plate readers, these products can extend surveillance into seemingly private spaces like one’s own backyard. Worse, some can be armed with explosives and other weapons, making them a potentially lethal threat. Drone and robot use must come with strong protections for people’s privacy, and we strongly oppose arming them with any weapons.

Doorstep Creep

Candy-seekers, watch which doors you ring this Halloween, as the Doorstep Creep lurks at more and more homes. Slinking by the door, this ghoul fosters fear and mistrust in communities, transforming cozy entryways into fortresses of suspicion. Your visit feels judged, unwanted, cast in a shadow of loathing. As you walk away, slanderous whispers echo in the home and down the street. You are not welcome here. Doors lock, blinds close, and the Creep's dark eyes remind you of how alone you are.

The Real Monster

Community Surveillance Apps come in many forms, encouraging the adoption of more home security devices like doorway cameras, smart doorbells, and more crowd-sourced surveillance apps. People come to these apps out of fear and only find more of the same, with greater public paranoia, racial gatekeeping, and even vigilante violence. EFF believes the makers of these platforms should position them away from crime and suspicion and toward community support and mutual aid. 

Foggy Gremlin

Be careful where you step for this scavenger. The Foggy Gremlin sticks to you like a leech, and envelops you in a psychedelic mist to draw in large predators. You can run, but no longer hide, as the fog spreads and grows denser. Anywhere you go, and anywhere you’ve been, is now a hunting ground. As exhaustion sets in, a world once open and bright has become narrow, dark, and sinister.

The Real Monster

Real-time location tracking is a chilling mechanism that enables law enforcement to monitor individuals through data bought from brokers, often without warrants or oversight. Location data, harvested from mobile apps, can be weaponized to conduct area searches that expose sensitive information about countless individuals, the overwhelming majority of whom are innocent. We oppose this digital dragnet and advocate for legislation like the Fourth Amendment is Not For Sale Act to protect individuals from such tracking.

Street Level Surveillance

Fight the monsters in your community

Rory Mir

Disability Rights Are Technology Rights

1 month 3 weeks ago

At EFF, our work always begins from the same place: technological self-determination. That’s the right to decide which technology you use, and how you use it. Technological self-determination is important for every technology user, and it’s especially important for users with disabilities.

Assistive technologies are a crucial aspect of living a full and fulfilling life, which gives people with disabilities motivation to be some of the most skilled, ardent, and consequential technology users in the world. There’s a whole world of high-tech assistive tools and devices out there, with disabled technologists and users intimately involved in the design process. 

The accessibility movement’s slogan, “Nothing about us without us,” has its origins in the first stirrings of European democratic sentiment in the sixteenth (!) century, and it expresses a critical truth: no one can ever know your needs as well as you do. Unless you get a say in how things work, they’ll never work right.

So it’s great to see people with disabilities involved in the design of assistive tech, but that’s where self-determination should start, not end. Every person is different, and the needs of people with disabilities are especially idiosyncratic and fine-grained. Everyone deserves and needs the ability to modify, improve, and reconfigure the assistive technologies they rely on.

Unfortunately, the same tech companies that devote substantial effort to building in assistive features often devote even more effort to ensuring that their gadgets, code and systems can’t be modified by their users.

Take streaming video. Back in 2017, the W3C finalized “Encrypted Media Extensions” (EME), a standard for adding digital rights management (DRM) to web browsers. The EME spec includes numerous accessibility features, including facilities for including closed captioning and audio descriptive tracks.

But EME is specifically designed so that anyone who reverse-engineers and modifies it will fall afoul of Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), a 1998 law that provides for five-year prison sentences and $500,000 fines for anyone who distributes tools that can modify DRM. The W3C considered – and rejected – a binding covenant that would protect technologists who added more accessibility features to EME.

The upshot of this is that EME’s accessibility features are limited to the suite that a handful of giant technology companies have decided are important enough to develop, and that suite is hardly comprehensive. You can’t (legally) modify an EME-restricted stream to shift the colors to ones that aren’t affected by your color-blindness. You certainly can’t run code that buffers the video and looks ahead to see if there are any seizure-triggering strobe effects, and dampens them if there are. 

It’s nice that companies like Apple, Google and Netflix put a lot of thought into making EME video accessible, but it’s unforgivable that they arrogated to themselves the sole right to do so. No one should have that power.

It’s bad enough when DRM infects your video streams, but when it comes for hardware, things get really ugly. Powered wheelchairs – a sector dominated by a cartel of private-equity backed giants that have gobbled up all their competing firms – have a serious DRM problem.

Powered wheelchair users who need even basic repairs are corralled by DRM into using the manufacturer’s authorized depots, often enduring long waits during which they are unable to leave their homes or even their beds. Even small routine adjustments, like changing the wheel torque after adjusting your tire pressure, can require an official service call.

Colorado passed the country’s first powered wheelchair Right to Repair law in 2022. Comparable legislation is now pending in California, and the Federal Trade Commission has signaled that it will crack down on companies that use DRM to block repairs. But the wheels of justice grind slow – and wheelchair users’ own wheels shouldn’t be throttled to match them.

People with disabilities don’t just rely on devices that their bodies go into; gadgets that go into our bodies are increasingly common, and there, too, we have a DRM problem. DRM is common in implants like continuous glucose monitors and insulin pumps, where it is used to lock people with diabetes into a single vendor’s products, as a prelude to gouging them (and their insurers) for parts, service, software updates and medicine.

Even when a manufacturer walks away from its products, DRM creates insurmountable legal risks for third-party technologists who want to continue to support and maintain them. That’s bad enough when it’s your smart speaker that’s been orphaned, but imagine what it’s like to have an orphaned neural implant that no one can support without risking prison time under DRM laws.

Imagine what it’s like to have the bionic eye that is literally wired into your head go dark after the company that made it folds up shop – survived only by the 95-year legal restrictions that DRM law provides for, restrictions that guarantee that no one will provide you with software that will restore your vision.

Every technology user deserves the final say over how the systems they depend on work. In an ideal world, every assistive technology would be designed with this in mind: free software, open-source hardware, and designed for easy repair.

But we’re living in the Bizarro world of assistive tech, where not only is it normal for tools for people with disabilities to be designed without any consideration for the user’s ability to modify the systems they rely on – companies actually dedicate extra engineering effort to creating legal liability for anyone who dares to adapt their technology to suit their own needs.

Even if you’re able-bodied today, you will likely need assistive technology or will benefit from accessibility adaptations. The curb-cuts that accommodate wheelchairs make life easier for kids on scooters, parents with strollers, and shoppers and travelers with rolling bags. The subtitles that make TV accessible to Deaf users allow hearing people to follow along when they can’t hear the speaker (or when the director deliberately chooses to muddle the dialog). Alt tags in online images make life easier when you’re on a slow data connection.

Fighting for the right of disabled people to adapt their technology is fighting for everyone’s rights.

(EFF extends our thanks to Liz Henry for their help with this article.)

Cory Doctorow

The UK Must Act: Alaa Abd El-Fattah Still Imprisoned 25 Days After Release Date

1 month 4 weeks ago

It’s been 25 days since September 29, the day that should have seen British-Egyptian blogger, coder, and activist Alaa Abd El Fattah walk free. Egyptian authorities refused to release him at the end of his sentence, in contradiction of the country's own Criminal Procedure Code, which requires that time served in pretrial detention count toward a prison sentence. In the days since, Alaa’s family has been able to secure meetings with high-level British officials, including Foreign Secretary David Lammy, but as of yet, the Egyptian government still has not released Alaa.

In early October, Alaa was named the 2024 PEN Writer of Courage by PEN Pinter Prize winner Arundhati Roy, who presented the award in a ceremony where it was received by Egyptian publication Mada Masr editor Lina Attalah on Alaa’s behalf.

Alaa’s mother, Laila Soueif, is now on her third week of hunger strike and says that she won’t stop until Alaa is free or she’s taken to the hospital. In recent weeks, Alaa’s mother and sisters have met with several members of Parliament in the hopes of placing more pressure on officials. As the BBC reports, his family are “deeply disappointed with how the current government, and the previous one, have handled his case” and believe that the UK has more leverage with Egypt that it is not using.

Alaa deserves to finally return to his family, now in the UK, and to be reunited with his son, Khaled, who is now a teenager. We urge EFF supporters in the UK to write to their MP to place pressure on the UK’s Labour government to use their power to push for Alaa’s release.

Jillian C. York

In Appreciation of David Burnham

1 month 4 weeks ago

We at EFF have long recognized the threats posed by the unchecked technological prowess of law enforcement and intelligence agencies. Since our founding in 1990, we have been in the forefront of efforts to impose meaningful legal controls and accountability on the secretive activities of those entities, including the National Security Agency (NSA). While the U.S. Senate’s Church Committee hearings and report in the mid-1970s documented the past abuses of government surveillance powers, it could not anticipate the dangers those interception and collection capabilities would bring to a networked environment. As Sen. Frank Church said in 1975 about an unchecked NSA, “No American would have any privacy left, such is the capability to monitor everything: telephone conversations, telegrams, it doesn't matter. There would be no place to hide.” The communications infrastructure was still in a mid-20th century analog mode.

One of the first observers to recognize the impact of NSA’s capabilities in the emerging digital landscape was David Burnham, a pioneering investigative journalist and author who passed away earlier this month at 91 years of age. While the obituary that ran at his old home, The New York Times, rightly emphasized Burnham’s ground-breaking investigations of police corruption and the shoddy safety standards of the nuclear power industry (depicted, respectively, in the films “Serpico” and “Silkwood”), those in the digital rights world are especially appreciative of his prescience when it came to the issues we care about deeply.

In 1983, Burnham published “The Rise of the Computer State,” one of the earliest examinations of the emerging challenges of the digital age. As Walter Cronkite wrote in his foreword to the book, “The same computer that enables us to explore the outer reaches of space and the mysteries of the atom can also be turned into an instrument of tyranny. We must ensure that the rise of the computer state does not also mean the demise of our civil liberties.” Here is what Burnham wrote in a piece for The New York Times Magazine based on the reporting in his book:

With unknown billions of Federal dollars, the [NSA] purchases the most sophisticated communications and computer equipment in the world. But truly to comprehend the growing reach of this formidable organization, it is necessary to recall once again how the computers that power the NSA are also gradually changing lives of Americans - the way they bank, obtain benefits from the Government and communicate with family and friends. Every day, in almost every area of culture and commerce, systems and procedures are being adopted by private companies and organizations...that make it easier for the NSA to dominate American society...

Remember, that was written in 1983. Ten years before the launch of the Mosaic browser and three decades before mobile devices became ubiquitous. But Burnham understood the trajectory of the emerging technology, for both the government and its citizens.

Recognizing the dangers of unchecked surveillance powers, Burnham was a champion of oversight and transparency, and, consequently, he was a skilled and aggressive user of the Freedom of Information Act. In 1989, he partnered with Professor Susan Long to establish the Transactional Records Access Clearinghouse (TRAC) at Syracuse University. TRAC combines sophisticated use of FOIA with data analytics techniques “to develop as comprehensive and detailed a picture as possible about what federal enforcement and regulatory agencies actually do . . . and to organize all of this information to make it readily accessible to the public.” From its FOIA requests, TRAC adds more than 3 billion new records to its database annually. Its work is widely acclaimed by the many academics, journalists and lawyers who make use of its extensive resources. It is a fitting legacy to Burnham’s unwavering belief in the power of information.

As EFF Executive Director Cindy Cohn has said when describing our work, we stand on the shoulders of giants. With his recognition of technology’s challenges to privacy, his insistence on transparency, and his joy in telling truth to power, David Burnham was one of them.

Full disclosure: David was a longtime colleague, client and friend.

David Sobel

How Many U.S. Persons Does Section 702 Spy On? The ODNI Needs to Come Clean.

1 month 4 weeks ago

EFF has joined with 23 other organizations including the ACLU, Restore the Fourth, the Brennan Center for Justice, Access Now, and the Freedom of the Press Foundation to demand that the Office of the Director of National Intelligence (ODNI) furnish the public with an estimate of exactly how many U.S. persons’ communications have been hoovered up, and are now sitting on a government server for law enforcement to unconstitutionally sift through at their leisure.

This letter was motivated by the fact that representatives of the National Security Agency (NSA) have promised in the past to provide the public with an estimate of how many U.S. persons—that is, people on U.S. soil—have had their communications “incidentally” collected through the surveillance authority Section 702 of the FISA Amendments Act. 

As the letter states, “ODNI and NSA cannot expect public trust to be unconditional. If ODNI and NSA continue to renege on pledges to members of Congress, and to withhold information that lawmakers, civil society, academia, and the press have persistently sought over the course of thirteen years, that public trust will be fatally undermined.”

Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large and small telecommunications service providers that hand over the digital data and communications they oversee. While Section 702 prohibits the NSA from intentionally targeting Americans with this mass surveillance, the agency routinely acquires a huge amount of innocent Americans' communications “incidentally” because, as it turns out, people in the United States communicate with people overseas all the time. This means that the U.S. government ends up with a massive pool consisting of the U.S. side of conversations as well as communications from all over the globe. Domestic law enforcement agencies, including the Federal Bureau of Investigation (FBI), can then conduct backdoor warrantless searches of these “incidentally collected” communications.

For over 10 years, EFF has fought hard every time Section 702 expires in the hope that we can get some much-needed reforms into any bills that seek to reauthorize the authority. Most recently, in spring 2024, Congress renewed Section 702 for another two years with none of the changes necessary to restore privacy rights.

While we wait for the upcoming opportunity to fight Section 702, joining our allies to sign on to this letter in the fight for transparency will give us a better understanding of the scope of the problem.

You can read the whole letter here.

Matthew Guariglia

EFF to Massachusetts’ Highest Court: Pretrial Electronic Monitoring Should Not Eviscerate Privacy Rights

1 month 4 weeks ago

Placing someone on location monitoring for one purpose does not justify law enforcement access to that information for a completely different purpose without a proper warrant.

EFF joined the Committee for Public Counsel Services, ACLU, ACLU of Massachusetts, and the Massachusetts Association of Criminal Defense Lawyers, in filing an amicus brief in the Massachusetts Supreme Judicial Court, in Commonwealth v. Govan, arguing just that. 

In this case, the defendant Anthony Govan was subjected to pretrial electronic monitoring as a condition of release prior to trial. In investigating a completely unrelated crime, the police asked the pretrial electronic monitoring division for the identity and location of “anyone” who was near the location of this latter incident. Mr. Govan’s data was part of the response, and that information was used against him in this unrelated case. 

Our joint amicus brief highlighted the coercive nature of electronic monitoring programs. When the alternative is being locked up, there is no meaningful consent to the collection of information under electronic monitoring. At the same time, as someone on pretrial release, Mr. Govan had a reasonable expectation of privacy in his location information. As courts, including the U.S. Supreme Court, have recognized, location and movement information are incredibly sensitive and revealing. Just because someone is on electronic monitoring, it doesn’t mean they have no expectation of privacy, whether they are going to a political protest, a prayer group, an abortion clinic, a gun show, or their private home. Pretrial electronic monitoring collects this information around the clock—information that otherwise would not have been available to law enforcement through traditional tools.  

The violation of privacy is especially problematic in this case, because Mr. Govan had not been convicted and is still presumed to be innocent. According to current law, those on pretrial release are entitled to far stronger Fourth Amendment protections than those who are on monitored release after a conviction. As argued in the amicus brief, absent a proper warrant, the information gathered by the electronic monitoring program should only be used to make sure Mr. Govan was complying with his pretrial release conditions. 

Lastly, although this case can be decided on the absence of a warrant or a warrant exception, we argued that the court should provide guidance for future warrants. The Fourth Amendment and its state corollaries prohibit “general warrants,” akin to a fishing expedition, and instead require that warrants meet nexus and particularity requirements. Bulk location data requests like the one in this case cannot meet that standard.

While electronic monitoring is marketed as an alternative to detention, the evidence does not bear this out. Courts should not allow the government to use the information gathered from this expansion of state surveillance to be used beyond its purpose without a warrant.

Hannah Zhao

U.S. Border Surveillance Towers Have Always Been Broken

1 month 4 weeks ago

A new bombshell scoop from NBC News revealed an internal U.S. Border Patrol memo claiming that 30 percent of camera towers that compose the agency's "Remote Video Surveillance System" (RVSS) program are broken. According to the report, the memo describes "several technical problems" affecting approximately 150 towers.

Except, this isn't a bombshell. What should actually be shocking is that Congressional leaders are acting shocked, like those who recently sent a letter about the towers to Department of Homeland Security (DHS) Secretary Alejandro Mayorkas. These revelations simply reiterate what people who have been watching border technology have known for decades: Surveillance at the U.S.-Mexico border is a wasteful endeavor that is ill-equipped to respond to an ill-defined problem.

Yet, after years of bipartisan recognition that these programs were straight-up boondoggles, there seems to be a competition among political leaders to throw the most money at programs that continue to fail.

Official oversight reports about the failures, repeated breakages, and general ineffectiveness of these camera towers have been public since at least the mid-2000s. So why haven't border security agencies confronted the problem in the last 25 years? One reason is that these cameras are largely political theater; the technology dazzles publicly, then fizzles quietly. Meanwhile, communities that should be thriving at the border are treated like a laboratory for tech companies looking to cash in on often exaggerated—if not fabricated—homeland security threats.

The Acronym Game

EFF is mapping surveillance at the U.S.-Mexico border.

In fact, the history of camera towers at the border is an ugly cycle. First, Border Patrol introduces a surveillance program with a catchy name and big promises. Then a few years later, oversight bodies, including Congress, conclude it's an abject mess. But rather than abandon the program once and for all, border security officials come up with a new name, slap on a fresh coat of paint, and continue on. A few years later, history repeats.

In the early 2000s, there was the Integrated Surveillance Intelligence System (ISIS), with the installation of RVSS towers in places like Calexico, California, and Nogales, Arizona; it later became the America's Shield Initiative (ASI). After those failures, there was Project 28 (P-28), the first stage of the Secure Border Initiative (SBInet). When that program was canceled, there were various new programs like the Arizona Border Surveillance Technology Plan, which became the Southwest Border Technology Plan. Border Patrol introduced the Integrated Fixed Tower (IFT) program and the RVSS Update program, then the Autonomous Surveillance Tower (AST) program. And now we've got a whole slew of new acronyms, including the Integrated Surveillance Tower (IST) program and the Consolidated Towers and Surveillance Equipment (CTSE) program.

Feeling overwhelmed by acronyms? Welcome to the shell game of border surveillance. Here's what happens whenever oversight bodies take a closer look.

ISIS and ASI

An RVSS from the early 2000s in Calexico, California.

Let's start with the Integrated Surveillance Intelligence System (ISIS), a program comprised of towers, sensors and databases originally launched in 1997 by the Immigration and Naturalization Service. A few years later, INS was reorganized into the U.S. Department of Homeland Security (DHS), and ISIS became part of the newly formed Customs & Border Protection (CBP).

It was only a matter of years before the DHS Inspector General concluded that ISIS was a flop: "ISIS remote surveillance technology yielded few apprehensions as a percentage of detection, resulted in needless investigations of legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts."

During Senate hearings, Sen. Judd Gregg (R-NH), complained about a "total breakdown in the camera structures," and that the U.S. government "bought cameras that didn't work."

Around 2004, ISIS was folded into the new America's Shield Initiative (ASI), which officials claimed would fix those problems. CBP Commissioner Robert Bonner even promoted ASI as a "critical part of CBP’s strategy to build smarter borders." Yet, less than a year later, Bonner stepped down, and the Government Accountability Office (GAO) found the ASI had numerous unresolved issues necessitating a total reevaluation. CBP disputed none of the findings and explained it was dismantling ASI in order to move onto something new that would solve everything: the Secure Border Initiative (SBI).

Reflecting on the ISIS/ASI programs in 2008, Rep. Mike Rogers (R-MI) said, "What we found was a camera and sensor system that was plagued by mismanagement, operational problems, and financial waste. At that time, we put the Department on notice that mistakes of the past should not be repeated in SBInet." 

You can guess what happened next. 

P-28 and SBInet

The subsequent iteration was called Project 28, which then evolved into the Secure Border Initiative's SBInet, starting in the Arizona desert. 

In 2010, the DHS Chief Information Officer summarized its comprehensive review: "'Project 28,' the initial prototype for the SBInet system, did not perform as planned. Project 28 was not scalable to meet the mission requirements for a national comment [sic] and control system, and experienced significant technical difficulties." 

A DHS graphic illustrating the SBInet concept

Meanwhile, bipartisan consensus had emerged about the failure of the program, due to the technical problems as well as contracting irregularities and cost overruns.

As Rep. Christopher Carney (D-PA) said in his prepared statement during Congressional hearings:

P–28 and the larger SBInet program are supposed to be a model of how the Federal Government is leveraging technology to secure our borders, but Project 28, in my mind, has achieved a dubious distinction as a trifecta of bad Government contracting: Poor contract management; poor contractor performance; and a poor final product.

Rep. Rogers' remarks were even more cutting: "You know the history of ISIS and what a disaster that was, and we had hoped to take the lessons from that and do better on this and, apparently, we haven’t done much better."

Perhaps most damning of all was yet another GAO report that found, "SBInet defects have been found, with the number of new defects identified generally increasing faster than the number being fixed—a trend that is not indicative of a system that is maturing." 

In January 2011, DHS Secretary Janet Napolitano canceled the $3-billion program.

IFTs, RVSSs, and ASTs

Following the termination of SBInet, the Christian Science Monitor ran the naive headline, "US cancels 'virtual fence' along Mexican border. What's Plan B?" Three years later, the newspaper answered its own question with another question, "'Virtual' border fence idea revived. Another 'billion dollar boondoggle'?"

Boeing was the main contractor blamed for SBInet's failure, but Border Patrol ultimately awarded one of the biggest new contracts to Elbit Systems, which had been one of Boeing's subcontractors on SBInet. Elbit began installing IFTs (again, that stands for "Integrated Fixed Towers") in many of the exact same places slated for SBInet. In some cases, the equipment was simply swapped on an existing SBInet tower.

Meanwhile, another contractor, General Dynamics Information Technology, began installing new RVSS towers and upgrading old ones as part of the RVSS-U program. Border Patrol also started installing hundreds of "Autonomous Surveillance Towers" (ASTs) by yet another vendor, Anduril Industries, embracing the new buzz of artificial intelligence.

An Autonomous Surveillance Tower and an RVSS tower along the Rio Grande.

In 2017, the GAO complained that the Border Patrol's poor data quality made the agency "limited in its ability to determine the mission benefits of its surveillance technologies." In one case, Border Patrol stations in the Rio Grande Valley claimed IFTs assisted in 500 cases in just six months. The problem with that assertion was that there are no IFTs in Texas or, in fact, anywhere outside Arizona.

A few years later, the DHS Inspector General issued yet another report indicating not much had improved:

CBP faced additional challenges that reduced the effectiveness of its existing technology. Border Patrol officials stated they had inadequate personnel to fully leverage surveillance technology or maintain current information technology systems and infrastructure on site. Further, we identified security vulnerabilities on some CBP servers and workstations not in compliance due to disagreement about the timeline for implementing DHS configuration management requirements.

CBP is not well-equipped to assess its technology effectiveness to respond to these deficiencies. CBP has been aware of this challenge since at least 2017 but lacks a standard process and accurate data to overcome it.

Overall, these deficiencies have limited CBP’s ability to detect and prevent the illegal entry of noncitizens who may pose threats to national security.

Around that same time, the RAND Corporation published a study funded by DHS that found "strong evidence" the IFT program was having no impact on apprehension levels at the border, and only "weak" and "inconclusive" evidence that the RVSS towers were having any effect on apprehensions.

And yet, border authorities and their supporters in Congress are continuing to promote unproven, AI-driven technologies as the latest remedy for years of failures, including the ones voiced in the memo obtained by NBC News. These systems involve cameras controlled by algorithms that automatically identify and track objects or people of interest. But in an age when algorithmic errors and bias are being identified nearly every day in every sector, including law enforcement, it is unclear how this technology has earned the trust of the government.

History Keeps Repeating

That brings us to today, with reportedly 150 or more towers out of service. So why does Washington keep supporting surveillance at the border? Why are they proposing record-level funding for a system that seems irreparable? Why have they abandoned their duty to scrutinize federal programs?

Well, one reason may be that treating problems at the border as humanitarian crises or pursuing foreign policy or immigration reform measures isn't as politically useful as promoting a phantom "invasion" that requires a military-style response. Another reason may be that tech companies and defense contractors wield immense amounts of influence and stand to make millions, if not billions, profiting off border surveillance. The price is paid by taxpayers, but also in the civil liberties of border communities and the human rights of asylum seekers and migrants.

But perhaps the biggest reason this history keeps repeating itself is that no one is ever really held accountable for wasting potentially billions of dollars on high-tech snake oil.

Dave Maass

EFF to Third Circuit: TikTok Has Section 230 Immunity for Video Recommendations

2 months ago

UPDATE: On October 23, 2024, the Third Circuit denied TikTok's petition for rehearing en banc.

EFF legal intern Nick Delehanty was the principal author of this post.

EFF filed an amicus brief in the U.S. Court of Appeals for the Third Circuit in support of TikTok’s request that the full court reconsider the case Anderson v. TikTok after a three-judge panel ruled that Section 230 immunity doesn’t apply to TikTok’s recommendations of users’ videos. We argued that the panel was incorrect on the law, and that this case has wide-ranging implications for the internet as we know it today. EFF was joined on the brief by the Center for Democracy & Technology (CDT), the Foundation for Individual Rights and Expression (FIRE), Public Knowledge, the Reason Foundation, and the Wikimedia Foundation.

At issue is the panel’s misapplication of First Amendment precedent. The First Amendment protects the editorial decisions of publishers about whether and how to display content, such as the videos TikTok displays to users through its recommendation algorithm.

Additionally, because the common law allows publishers to be held liable for other people’s content that they publish (for example, defamatory letters to the editor in print newspapers), with only limited First Amendment protection, Congress passed Section 230 to protect online platforms from liability for harmful user-generated content.

Section 230 has been pivotal for the growth and diversity of the internet—without it, internet intermediaries would potentially be liable for every piece of content posted by users, making them less likely to offer open platforms for third-party speech.

In this case, the Third Circuit panel erroneously held that since TikTok enjoys protection for editorial choices under the First Amendment, TikTok’s recommendations of user videos amount to TikTok’s first-party speech, making it ineligible for Section 230 immunity. In our brief, we argued that First Amendment protection for editorial choices and Section 230 protection are not mutually exclusive.

We also argued that the panel’s ruling does not align with what every other circuit has found: that Section 230 also immunizes the editorial decisions of internet intermediaries. We made four main points in support of this argument:

  • First, the panel ignored the text of Section 230 in that editorial choices are included in the commonly understood definition of “publisher” in the statute.
  • Second, the panel created a loophole in Section 230 by allowing plaintiffs who were harmed by user-generated content to bypass Section 230 by focusing on an online platform’s editorial decisions about how that content was displayed.
  • Third, it’s crucial that Section 230 protects editorial decisions notwithstanding additional First Amendment protection because Section 230 immunity is not only a defense against liability, it’s also a way to end a lawsuit early. Online platforms might ultimately win lawsuits on First Amendment grounds, but the time and expense of protracted litigation would make them less interested in hosting user-generated content. Section 230’s immunity from suit (as well as immunity from liability) advances Congress’ goal of encouraging speech at scale on the internet.
  • Fourth, TikTok’s recommendations specifically are part of a publisher’s “traditional editorial functions” because recommendations reflect choices around the display of third-party content and so are protected by Section 230.

We also argued that allowing the panel’s decision to stand would harm not only internet intermediaries, but all internet users. If internet intermediaries were liable for recommending or otherwise deciding how to display third-party content posted to their platforms, they would end useful content curation and engage in heavy-handed censorship to remove anything that might be legally problematic from their platforms. These responses to a weakened Section 230 would greatly limit users’ speech on the internet.

The full Third Circuit should recognize the error of the panel’s decision and reverse to preserve free expression online.

Sophia Cope

A Flourishing Internet Depends on Competition

2 months ago

Antitrust law has long recognized that monopolies stifle innovation and gouge consumers on price. When it comes to Big Tech, harm to innovation—in the form of “kill zones,” where major corporations buy up new entrants to a market before they can compete with them—has been easy to find. Consumer harms have been harder to quantify, since a lot of services the Big Tech companies offer are “free.” This is why we must move beyond price as the major determinant of consumer harm. And once that’s done, it’s easier to see the even greater benefits competition brings to the broader internet ecosystem.

In the decades since the internet entered our lives, it has changed from a wholly new and untested environment to one where a few major players dominate everyone's experience. Policymakers have been slow to adapt and have equated what's good for the whole internet with what is good for those companies. Instead of a balanced ecosystem, we have a monoculture. We need to eliminate the buildup of power around the giants and instead cultivate fertile soil for new growth.

Content Moderation 

In content moderation, for example, it’s basically rote for experts to say that content moderation is impossible at scale. Facebook reports over three billion active users and is available in over 100 languages. However, Facebook is an American company that primarily does its business in English. Communication, in every culture, is heavily dependent on context. Even if Facebook were hiring experts in every language it operates in, which it manifestly is not, the company itself runs on American values. Being able to choose a social media service rooted in your own culture and language is important. It’s not that people have to choose that service, but it’s important that they have the option.

This sometimes happens in smaller fora. For example, the knitting website Ravelry, a central hub for patterns and discussions about yarn, banned all discussions about then-President Donald Trump in 2019, as they were getting toxic. A number of disgruntled users banded together to make their disallowed content available in other places.

In a competitive landscape, instead of demanding that Facebook, Twitter, or YouTube have the exact content rules you want, you could pick a service with the rules you want. If you want everything protected by the First Amendment, you could find it. If you want an environment with clear rules, consistently enforced, you could find that. Especially since smaller platforms could actually enforce their rules, unlike the current behemoths.

Product Quality 

The same thing applies to product quality and the “enshittification” of platforms. Even if all of Facebook’s users spoke the same language, that’s no guarantee that they share the same values, needs, or wants. But, Facebook is an American company and it conducts its business largely in English and according to American cultural norms. As it is, Facebook’s feeds are designed to maximize user engagement and time on the service. Some people may like the recommendation algorithm, but others may want the traditional chronological feed. There’s no incentive for Facebook to offer the choice because it is not concerned with losing users to a competitor that does. It’s concerned with being able to serve as many ads to as many people as possible. In general, Facebook lacks user controls that would allow people to customize their experience on the site. That includes the ability to reorganize your feed to be chronological, to eliminate posts from anyone you don’t know, etc. There may be people who like the current, ad-focused algorithm, but no one else can get a product they would like.

Another obvious example is how much the experience of googling something has deteriorated. It’s almost hackneyed to complain about it now, but when it started, Google was revolutionary in its ability to a) find exactly what you were searching for and b) allow natural language searching (that is, not requiring you to use boolean searches to get the desired result). Google’s secret sauce was, for a long time, the ability to find the right result to a totally unique search query. If you could remember some specific string of words in the thing you were looking for, Google could find it. However, in the endless hunt for “growth,” Google moved away from quality search results and towards quantity. It also clogged the first page of results with ads and sponsored links.

Morals, Privacy, and Security 

There are many individuals and small businesses that would like to avoid using Big Tech services, either because the services are bad or because of ethical and moral concerns. But, the bigger these companies are, the harder they are to avoid. For example, even if someone decides not to buy products from Amazon.com because they don’t agree with how it treats its workers, they may not be able to avoid patronizing Amazon Web Services (AWS), which funds the commerce side of the business. Netflix, The Guardian, Twitter, and Nordstrom are all companies that pay for Amazon’s services. The Mississippi Department of Employment Security moved its data management to Amazon in 2021. Trying to avoid Amazon entirely is functionally impossible. This means that there is no way for people to “vote with their feet,” withholding their business from companies they disagree with.

Security and privacy are also at risk without competition. For one thing, it’s easier for a malicious actor or oppressive state to get what they want when it’s all in the hands of a single company—a single point of failure. When a single company controls the tools everyone relies on, an outage cripples the globe. This digital monoculture was on display during this year's CrowdStrike outage, where one badly-thought-out update crashed networks across the world and across industries. The personal danger of digital monoculture shows itself when Facebook messages are used in a criminal investigation against a mother and daughter discussing abortion, and in “geofence warrants” that demand Google turn over information about every device within a certain distance of a crime. For another thing, when everyone is only able to share expression in a few places, it becomes easier for regimes to target certain speech and for gatekeepers to maintain control over creativity.

Another example of the relationship between privacy and competition is Google’s so-called “Privacy Sandbox.” Google has messaged it as removing the “third-party cookies” that track you across the internet. However, the change actually just moves that data into the sole control of Google, helping cement its ad monopoly. Instead of eliminating tracking, the Privacy Sandbox does the tracking within the browser directly, allowing Google to charge advertisers and websites for access to the insights gleaned from your browsing history, rather than those companies doing the tracking themselves. It’s not more privacy, it’s just concentrated control of data.
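To make that mechanism concrete, here is a minimal, hypothetical TypeScript sketch of the browser-side model: instead of a third-party cookie following you across sites, the browser itself classifies your history and hands interested sites a set of interest categories through the Topics API. The feature detection and loose typing below are our own assumptions for illustration; this is a sketch of the idea, not Google's code.

```typescript
// Illustrative only: roughly how a site's ad code might read the interest
// categories that Chrome's Topics API derives from your browsing history.
// browsingTopics() is Chromium-only and absent from TypeScript's standard
// DOM typings, so we feature-detect and cast.
async function readAdTopics(): Promise<void> {
  const doc = document as Document & {
    browsingTopics?: () => Promise<Array<Record<string, unknown>>>;
  };

  if (typeof doc.browsingTopics !== "function") {
    console.log("Topics API not available in this browser.");
    return;
  }

  // The browser has already classified recent browsing into coarse interest
  // categories; calling this returns them and records this site as an observer.
  const topics = await doc.browsingTopics();
  console.log("Interest categories inferred from browsing history:", topics);
}

readAdTopics();
```

The tracking does not go away in this model; the classification simply happens inside Chrome, with Google positioned as the gatekeeper to the results.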

You see this same thing at play with Apple’s app store in the saga of Beeper Mini, an app that allowed secure communications through iMessage between Apple and non-Apple phones. In doing so, it eliminated the dreaded “green bubbles” that indicated that messages were not encrypted (i.e., not between two iPhones). While Apple’s design choice was, in theory, meant to flag that your conversation wasn’t secure, it ended up motivating people to get iPhones just to avoid the stigma. Beeper Mini made messages more secure and removed the need to get a whole new phone to get rid of the green bubble. So Apple moved to break Beeper Mini, effectively choosing monopoly over security. If Apple had moved to secure non-iPhone messages on its own, that would be one thing. But it didn’t; it just prevented users from securing them on their own.

Obviously, competition isn’t a panacea. But, like privacy, its prioritization means less emergency firefighting and more fire prevention. Think of it as a controlled burn—removing the dross that smothers new growth and allows fires to rage larger than ever before.  

Katharine Trendacosta

California Attorney General Issues New Guidance on Military Equipment to Law Enforcement

2 months ago

California law enforcement should take note: the state’s Attorney General has issued a new bulletin advising them on how to comply with AB 481—a state law that regulates how law enforcement agencies can use, purchase, and disclose information about military equipment at their disposal. This important guidance comes in the wake of an exposé showing that despite awareness of AB 481, the San Francisco Police Department (SFPD) flagrantly disregarded the law. EFF applauds the Attorney General’s office for reminding police and sheriff’s departments what the law says and what their obligations are, and urges the state’s top law enforcement officer to monitor agencies’ compliance with the law.

The bulletin emphasizes that law enforcement agencies must seek permission from governing bodies like city councils or boards of supervisors before buying any military equipment, or even applying for grants or soliciting donations to procure that equipment. The bulletin also reminds all California law enforcement agencies and state agencies with law enforcement divisions of their transparency obligations: they must post on their website a military equipment use policy that describes, among other details, the capabilities, purposes and authorized uses, and financial impacts of the equipment, as well as oversight and enforcement mechanisms for violations of the policy. Law enforcement agencies must also publish an annual military equipment report that provides information on how the equipment was used the previous year and the associated costs.

Agencies must cease use of any military equipment, including drones, if they have not sought the proper permission to use them. This is particularly important in San Francisco, where the SFPD has been caught, via public records, purchasing drones without seeking the proper authorization first, over the warnings of the department’s own policy officials.

In a climate where few cities and states have laws governing what technology and equipment police departments can use, Californians are fortunate to have regulations like AB 481 requiring transparency, oversight, and democratic control by elected officials of military equipment. But those regulations are far less effective if there is no accountability mechanism to ensure that police and sheriff’s departments follow them.


The SFPD and all other California law enforcement agencies must re-familiarize themselves with the rules. Police and sheriff’s departments must obtain permission and justify purchases before they buy military equipment, have use policies approved by their local governing body, and  provide yearly reports about what they have and how much it costs.

Matthew Guariglia

Prosecutors in Washington State Warn Police: Don’t Use Gen AI to Write Reports

2 months ago

The King County Prosecuting Attorney’s Office, which handles all prosecutions in the Seattle area, has instructed police in no uncertain terms: do not use AI to write police reports...for now. This is a good development. We hope prosecutors across the country will exercise similar caution as companies continue to peddle generative artificial intelligence (genAI) tools for writing police reports – technology that could harm people who come into contact with the criminal justice system.

Chief Deputy Prosecutor Daniel J. Clark said in a memo about AI-based tools to write narrative police reports based on body camera audio that the technology as it exists is “one we are not ready to accept.”

The memo continues, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” We would add that, while EFF embraces advances in technology, we doubt genAI in the near future will be able to help police write reliable reports.

We agree with Chief Deputy Clark: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

This is a well-reasoned and cautious approach. Some police want to cut the time they spend writing reports, and Axon’s new product DraftOne claims to do so by exporting the labor to machines. But the public, and other local agencies, should be skeptical of this tech. After all, these documents are often essential for prosecutors to build their case, for district attorneys to recommend charges, and for defenders to cross-examine arresting officers.
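For a sense of how this class of product works under the hood, here is a generic, hedged sketch of the basic pipeline: transcribe the body-camera audio, then ask a general-purpose language model to turn the transcript into a narrative. This is emphatically not Axon's implementation; it assumes the OpenAI Node SDK purely for illustration, and the failure mode the memo describes lives in the second step, where the model fills gaps plausibly rather than accurately.

```typescript
// A generic sketch of an audio-to-report pipeline, assuming the OpenAI Node SDK
// (npm package "openai") and an OPENAI_API_KEY in the environment. Not any
// vendor's actual product.
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI();

async function draftReport(audioPath: string): Promise<string> {
  // Step 1: speech-to-text on the body-camera audio.
  const transcription = await client.audio.transcriptions.create({
    file: fs.createReadStream(audioPath),
    model: "whisper-1",
  });

  // Step 2: ask a general-purpose LLM to turn the transcript into a narrative.
  // This is where small, plausible-sounding errors can creep in: the model
  // smooths over gaps in the audio rather than flagging them.
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "Draft a first-person police incident narrative using only facts present in the transcript.",
      },
      { role: "user", content: transcription.text },
    ],
  });

  return completion.choices[0].message.content ?? "";
}
```

Even with a prompt telling the model to stick to the transcript, nothing in the pipeline guarantees that it will, which is exactly the review problem the memo flags.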

To read more on generative AI and police reports, click here.

Matthew Guariglia

Preemption Playbook: Big Tech’s Blueprint Comes Straight from Big Tobacco

2 months ago

Big Tech is borrowing a page from Big Tobacco's playbook to wage war on your privacy, according to Jake Snow of the ACLU of Northern California. We agree.  

In the 1990s, the tobacco industry attempted to use federal law to override a broad swath of existing state laws and prevent states from taking future action in those areas. For Big Tobacco, it was the “Accommodation Program,” a national campaign that ultimately aimed to override state indoor smoking laws with a weaker federal law. Big Tech is now attempting the same with federal privacy bills, like the American Privacy Rights Act (APRA), that would preempt many state privacy laws.

In “Big Tech is Trying to Burn Privacy to the Ground–And They’re Using Big Tobacco’s Strategy to Do It,” Snow outlines a three-step process that both industries have used to weaken state laws. Faced with a public relations crisis, the industries:

  1. Muddy the waters by introducing various weak bills in different states.
  2. Complain that those state laws are too confusing to comply with.
  3. Ask for “preemption” of grassroots efforts.

“Preemption” is a legal doctrine that allows a higher level of government to supersede the power of a lower level of government (for example, a federal law can preempt a state law, and a state law can preempt a city or county ordinance).  

EFF has a clear position on this: we oppose federal privacy laws that preempt current and future state privacy protections, especially when the federal standard is lower.

Congress should set a nationwide baseline for privacy, but should not take away states' ability to react in the future to current and unforeseen problems. Earlier this year, EFF joined the ACLU and dozens of digital and human rights organizations in opposing APRA’s preemption sections. The letter points out that “the soundest approach to avoid the harms from preemption is to set the federal standard as a national baseline for privacy protections — and not a ceiling.” EFF led a similar coalition effort in 2018.

Companies that collect and use our data—and have worked to kill strong state privacy bills time and again—want Congress to believe a “patchwork” of state laws is unworkable for data privacy. But many existing federal laws concerning privacy, civil rights, and more operate as regulatory floors and do not prevent states from enacting and enforcing their own stronger statutes. Complaints of this “patchwork” have long been a part of the strategy for both Big Tech and Big Tobacco.

States have long been the “laboratories of democracy” and have led the way in the development of innovative privacy legislation. Because of this, federal laws should establish a floor and not a ceiling, particularly as new challenges rapidly emerge. Preemption would leave consumers with inadequate protections, and make them worse off than they would be in the absence of federal legislation.  

Congress never preempted states' authority to enact anti-smoking laws, despite Big Tobacco’s strenuous efforts. So there is hope that Big Tech won’t be able to preempt state privacy law, either. EFF will continue advocating against preemption to ensure that states can protect their citizens effectively. 

Read Jake Snow’s article here.

Rindala Alajaji

Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

2 months ago

Some people just don’t know how to take a hint. For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country—the codes that protect product, building, and environmental safety—and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York, to Missouri, to the District of Columbia, judges understand that this is an absurd and undemocratic proposition.

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes—like the National Electrical Code—can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme to the ruling is the public interest in accessing law: 

incorporation by reference creates serious notice and accountability problems when the law is only accessible to those who can afford to pay for it. … And there is significant evidence of the practical value of providing unfettered access to technical standards that have been incorporated into law. For example, journalists have explained that this access is essential to inform their news coverage; union members have explained that this access helps them advocate and negotiate for safe working conditions; and the NAACP has explained that this access helps citizens assert their legal rights and advocate for legal reforms.

We’ve seen similar rulings around the country, from California to New York to Missouri. Combined with two appellate rulings, these amount to a clear judicial consensus. And it turns out the sky has not fallen; SDOs continue to profit from their work, thanks in part to the volunteer labor of the experts who actually draft the standards and don’t do it for the royalties.  You would think the SDOs would learn their lesson, and turn their focus back to developing standards, not lawsuits.

Instead, SDOs are asking Congress to rewrite the Constitution and affirm that SDOs retain copyright in their standards no matter what a federal regulator does, as long as they make them available online. We know what that means because the SDOs have already created “reading rooms” for some of their standards, and they show us that the SDOs’ idea of “making available” is “making available as if it was 1999.” The texts are not searchable, cannot be printed, downloaded, highlighted, or bookmarked for later viewing, and cannot be magnified without becoming blurry. Cross-referencing and comparison are virtually impossible. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text. As if that wasn’t bad enough, these reading rooms are inaccessible to print-disabled people altogether.

It’s a bad bargain that would trade our fundamental due process rights in exchange for a pinky promise of highly restricted access to the law. But if Congress takes that step, it’s a comfort to know that we can take the fight back to the courts and trust that judges, if not legislators, understand why laws are facts, not property, and should be free for all to access, read, and share. 

Corynne McSherry

EFF and IFPTE Local 20 Attain Labor Contract

2 months ago
First-Ever, Three-Year Pact Protects Workers’ Pay, Benefits, Working Conditions, and More

SAN FRANCISCO—Employees and management at the Electronic Frontier Foundation have achieved a first-ever labor contract, they jointly announced today.  EFF employees have joined the Engineers and Scientists of California Local 20, IFPTE.  

The EFF bargaining unit includes more than 60 non-management employees in teams across the organization’s program and administrative staff. The contract covers the usual scope of subjects including compensation; health insurance and other benefits; time off; working conditions; nondiscrimination, accommodation, and diversity; hiring; union rights; and more. 

"EFF is its people. From the moment that our staff decided to organize, we were supportive and approached these negotiations with a commitment to enshrining the best of our practices and adopting improvements through open and honest discussions,” EFF Executive Director Cindy Cohn said. “We are delighted that we were able to reach a contract that will ensure our team receives the wages, benefits, and protections they deserve as they continue to advance our mission of ensuring that technology supports freedom, justice and innovation for all people of the world.” 

“We’re pleased to have partnered with EFF management in crafting a contract that helps our colleagues thrive both at work and outside of work,” said Shirin Mori, a member of the EFF Workers Union negotiating team. “This contract is a testament to creative solutions to improve working conditions and benefits across the organization, while also safeguarding the things employees love about working at EFF. We deeply appreciate the positive working relationship with management in establishing a strong contract.” 

The three-year contract was ratified unanimously by EFF’s board of directors Sept. 18, and by 96 percent of the bargaining unit on Sept. 25. It is effective Oct. 1, 2024 through Sept. 30, 2027. 

EFF is the largest and oldest nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.  

The Engineers and Scientists of California Local 20, International Federation of Professional and Technical Engineers, is a democratic labor union representing more than 8,000 engineers, scientists, licensed health professionals, and attorneys at PG&E, Kaiser Permanente, the U.S. Environmental Protection Agency, Legal Aid at Work, numerous clinics and hospitals, and other employers throughout Northern California.  

For the contract: https://ifpte20.org/wp-content/uploads/2024/10/Electronic-Frontier-Foundation-2024-2027.pdf 

For more on IFPTE Local 20: https://ifpte20.org/ 

Josh Richman

Civil Rights Commission Pans Face Recognition Technology

2 months ago

In its recent report, Civil Rights Implications of Face Recognition Technology (FRT), the U.S. Commission on Civil Rights identified serious problems with the federal government’s use of face recognition technology, and in doing so recognized EFF’s expertise on this issue. The Commission focused its investigation on the Department of Justice (DOJ), the Department of Homeland Security (DHS), and the Department of Housing and Urban Development (HUD).

According to the report, the DOJ primarily uses FRT within the Federal Bureau of Investigation and U.S. Marshals Service to generate leads in criminal investigations. DHS uses it in cross-border criminal investigations and to identify travelers. And HUD implements FRT with surveillance cameras in some federally funded public housing. The report explores how federal training on FRT use in these departments is inadequate, identifies threats that FRT poses to civil rights, and proposes ways to mitigate those threats.

EFF supports a ban on government use of FRT and strict regulation of private use. In April of this year, we submitted comments to the Commission to voice these views. The Commission’s report quotes our comments explaining how FRT works, including the steps by which FRT uses a probe photo (the photo of the face that will be identified) to run an algorithmic search that matches the face within the probe photo to those in the comparison data set. Although EFF aims to promote a broader understanding of the technology behind FRT, our main purpose in submitting the comments was to sound the alarm about the many dangers the technology poses.

The government should not use face recognition because it is too inaccurate to determine people’s rights and benefits, its inaccuracies impact people of color and members of the LGBTQ+ community at far higher rates, it threatens privacy, it chills expression, and it introduces information security risks. The report highlights many of the concerns that we've stated about privacy, accuracy (especially in the context of criminal investigations), and use by “inexperienced and inadequately trained operators.” The Commission also included data showing that face recognition is much more likely to reach a false positive (inaccurately matching two photos of different people) than a false negative (inaccurately failing to match two photos of the same person). According to the report, false positives are even more prevalent for Black people, people of East Asian descent, women, and older adults, thereby posing equal protection issues. These disparities in accuracy are due in part to algorithmic bias. Relatedly, photographs are often unable to accurately capture dark-skinned people’s faces, which means that the initial inputs to the algorithm can themselves be unreliable. This poses serious problems in many contexts, but especially in criminal investigations, in which the stakes of an FRT misidentification are people’s lives and liberty.
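To illustrate the matching step described above, and why the match threshold directly trades false positives against false negatives, here is a purely illustrative TypeScript sketch. It assumes face embeddings have already been produced by some model; the data structures and names are ours, not any vendor's.

```typescript
// Hypothetical sketch of an FRT gallery search: a probe photo has been reduced
// to a numeric embedding, which is compared against every embedding in the
// comparison data set. Real systems use proprietary models and much larger
// galleries; this only shows the shape of the search.
type GalleryEntry = { personId: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return every gallery identity whose similarity clears the threshold.
// A looser threshold flags more of the wrong people (false positives);
// a stricter one misses the right person more often (false negatives).
function searchGallery(
  probeEmbedding: number[],
  gallery: GalleryEntry[],
  threshold: number,
): { personId: string; score: number }[] {
  return gallery
    .map((entry) => ({
      personId: entry.personId,
      score: cosineSimilarity(probeEmbedding, entry.embedding),
    }))
    .filter((candidate) => candidate.score >= threshold)
    .sort((a, b) => b.score - a.score);
}
```

If the embeddings themselves are less reliable for certain groups, as the report describes, no choice of threshold fixes the problem; the errors are baked in before the search begins.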

The Commission recommends that Congress and agency chiefs enact better oversight and transparency rules. While EFF agrees with many of the Commission’s critiques, the technology poses grave threats to civil liberties, privacy, and security that require a more aggressive response. We will continue fighting to ban face recognition use by governments and to strictly regulate private use. You can join our About Face project to stop the technology from entering your community and encourage your representatives to ban federal use of FRT.

Emma Leeds Armstrong

New EFF Report Provides Guidance to Ensure Human Rights are Protected Amid Government Use of AI in Latin America

2 months ago


Governments increasingly rely on algorithmic systems to support consequential assessments and determinations about people’s lives, from judging eligibility for social assistance to trying to predict crime and criminals. Latin America is no exception. With the use of artificial intelligence (AI) posing human rights challenges in the region, EFF released today the report Inter-American Standards and State Use of AI for Rights-Affecting Determinations in Latin America: Human Rights Implications and Operational Framework.

This report draws on international human rights law, particularly standards from the Inter-American Human Rights System, to provide guidance on what state institutions must look out for when assessing whether and how to adopt artificial intelligence (AI) and automated decision-making (ADM) systems for determinations that can affect people’s rights.

We have organized the report’s content, along with testimonies on current challenges from civil society experts on the ground, on our project landing page.

AI-based Systems Implicate Human Rights

The report comes amid the deployment of AI/ADM-based systems by Latin American state institutions for services and decision-making that affect human rights. Colombians must undergo classification by Sisbén, which measures their degree of poverty and vulnerability, if they want to access social protection programs. News reports in Brazil have once again flagged the problems and perils of Córtex, an algorithm-powered surveillance system that cross-references various state databases with wide reach and poor controls. Risk-assessment systems seeking to predict school dropout, children’s rights violations, or teenage pregnancy have been integrated into government programs in countries like México, Chile, and Argentina. Different courts in the region have also implemented AI-based tools for a varied range of tasks.

EFF’s report aims to address two primary concerns: opacity and lack of human rights protections in state AI-based decision-making. Algorithmic systems are often deployed by state bodies in ways that obscure how decisions are made, leaving affected individuals with little understanding or recourse.

Additionally, these systems can exacerbate existing inequalities, disproportionately impacting marginalized communities without providing adequate avenues for redress. The lack of public  participation in the development and implementation of these systems further undermines democratic governance, as affected groups are often excluded from meaningful decision-making processes relating to government adoption and use of these technologies.

This is at odds with the human rights protections most Latin American countries are required to uphold. A majority of states have committed to comply with the American Convention on Human Rights and the Protocol of San Salvador. Under these international instruments, they have the duty to respect human rights and prevent violations from occurring. As we underscore in the report, two basic tenets must guide any legitimate use of AI/ADM systems by state institutions for consequential decision-making: states’ responsibilities under international human rights law as guarantors of rights, and the status of people and social groups as rights holders, entitled to claim those rights and to participate.

Inter-American Human Rights Framework

Building on extensive research into the Inter-American Commission on Human Rights’ reports and the Inter-American Court of Human Rights’ decisions and advisory opinions, we identify human rights implications and devise an operational framework for giving them due consideration in government use of algorithmic systems.

We detail what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explain why this adoption must fulfill necessary and proportionate principles, and what this entails. We underscore what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment.

We elaborate on what states must observe to ensure critical rights in line with Inter-American standards. We look particularly at political participation, access to information, equality and non-discrimination, due process, privacy and data protection, freedoms of expression, association and assembly, and the right to a dignified life in connection to social, economic, and cultural rights.

Some of them embody principles that must cut across the different stages of AI-based policies or initiatives—from scoping the problem state bodies seek to address and assessing whether algorithmic systems can reliably and effectively contribute to achieving the intended goals, to continuously monitoring and evaluating their implementation.

These cross-cutting principles integrate the comprehensive operational framework we provide in the report for governments and civil society advocates in the region.

Transparency, Due Process, and Data Privacy Are Vital

Our report’s recommendations reinforce that states must ensure transparency at every stage of AI deployment. Governments must provide clear information about how these systems function, including the categories of data processed, performance metrics, and details of the decision-making flow, including human and machine interaction.

It is also essential to disclose important aspects of how they were designed, such as details on the model’s training and testing datasets. Moreover, decisions based on AI/ADM systems must have a clear, reasoned, and coherent justification. Without such transparency, people cannot effectively understand or challenge the decisions being made about them, and the risk of unchecked rights violations increases.

Leveraging due process guarantees is also covered. The report highlights that decisions made by AI systems often lack the transparency needed for individuals to challenge them. The lack of human oversight in these processes can lead to arbitrary or unjust outcomes. Ensuring that affected individuals have the right to challenge AI-driven decisions through accessible legal mechanisms and meaningful human review is a critical step in aligning AI use with human rights standards.

Transparency and due process relate to ensuring people can fully enjoy the rights that flow from informational self-determination, including the right to know what data about them is contained in state records, where that data came from, and how it is being processed.

The Inter-American Court recently recognized informational self-determination as an autonomous right protected by the American Convention. It grants individuals the power to decide when and to what extent aspects of their private life can be revealed, including their personal information. It is intrinsically connected to the free development of one’s personality, and any limitations must be legally established, and necessary and proportionate to achieve a legitimate goal.

Ensuring Meaningful Public Participation

Social participation is another cornerstone of the report’s recommendations. We emphasize that marginalized groups, who are most likely to be negatively affected by AI and ADM systems, must have a voice in how these systems are developed and used. Participatory mechanisms must not be mere box-checking exercises; they are vital for ensuring that algorithm-based initiatives do not reinforce discrimination or violate rights. Human Rights Impact Assessments and independent auditing are important vectors for meaningful participation and should be used during all stages of planning and deployment.

Robust legal safeguards, appropriate institutional structures, and effective oversight, often neglected, are underlying conditions for any legitimate government use of AI for rights-based determinations. As AI continues to play an increasingly significant role in public life, the findings and recommendations of this report are crucial. Our aim is to make a timely and compelling contribution for a human rights-centric approach to the use of AI/ADM in public decision-making.

We’d like to thank the consultant Rafaela Cavalcanti de Alcântara for her work on this report, and Clarice Tavares, Jamila Venturini, Joan López Solano, Patricia Díaz Charquero, Priscilla Ruiz Guillén, Raquel Rachid, and Tomás Pomar for their insights and feedback to the report.

The full report is here.

Veridiana Alimonti

EFF to New York: Age Verification Threatens Everyone's Speech and Privacy

2 months ago

Young people have a right to speak and access information online. Legislatures should remember that protecting kids' online safety shouldn't require sweeping online surveillance and censorship.

EFF reminded the New York Attorney General of this important fact in comments responding to the state's recently passed Stop Addictive Feeds Exploitation (SAFE) for Kids Act—which requires platforms to verify the ages of people who visit them. Now that New York's legislature has passed the bill, it is up to the state attorney general's office to write rules to implement it.

We urge the attorney general's office to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. As we say in our comments:

[O]nline age-verification mandates like that imposed by the New York SAFE For Kids Act are unconstitutional because they block adults from content they have a First Amendment right to access, burden their First Amendment right to browse the internet anonymously, and chill data security- and privacy-minded individuals who are justifiably leery of disclosing intensely personal information to online services. Further, these mandates carry with them broad, inherent burdens on adults’ rights to access lawful speech online. These burdens will not and cannot be remedied by new developments in age-verification technology.

We also noted that none of the methods of age verification listed in the attorney general's call for comments is both privacy-protective and entirely accurate. They each have their own flaws that threaten everyone's privacy and speech rights. "These methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of 'dangerous in one way' to 'dangerous in a different way'," we wrote in the comments.

Read the full comments here: https://www.eff.org/document/eff-comments-ny-ag-safe-kids-sept-2024

Hayley Tsukayama