When AI and Secure Chat Meet, Users Deserve Strong Controls Over How They Interact

Both Google and Apple are cramming new AI features into their phones and other devices, and neither company has offered clear ways to control which apps those AI systems can access. Recent issues around WhatsApp on both Android and iPhone demonstrate how these interactions can go sideways, risking exposure of chat conversations beyond what you intend. Users deserve better controls and clearer documentation around what these AI features can access.

After diving into how Google Gemini and Apple Intelligence (and in some cases Siri) currently work, we didn’t always find clear answers to questions about how data is stored, who has access, and what it can be used for.

At a high level, when you compose a message with these tools, the companies can usually see the contents of those messages and receive at least a temporary copy of the text on their servers.

When receiving messages, things get trickier. When you use an AI like Gemini or a feature like Apple Intelligence to summarize or read notifications, we believe companies should be doing that content processing on-device. But poor documentation and weak guardrails create issues: they led us deep into documentation rabbit holes and still failed to clarify the privacy practices as clearly as we’d like.

We’ll dig into the specifics below, as well as the potential solutions we’d like to see Apple, Google, and other device-makers implement. But first things first, here’s what you can do right now to control access:

Control AI Access to Secure Chat on Android and iOS

Here are some steps you can take to control access if you want nothing to do with device-level AI integration and don’t want to risk accidentally sharing the text of a message outside of the app you’re using.

How to Check and Limit What Gemini Can Access

If you’re using Gemini on your Android phone, it’s a good time to review your settings to ensure things are set up how you want. Here’s how to check each of the relevant settings:

  • Disable Gemini Apps Activity: Gemini Apps Activity is a history Google stores of all your interactions with Gemini, and it’s enabled by default. To disable it, open the Gemini app (depending on your phone model, you may not have it installed; if you don’t, you don’t need to worry about any of this). Tap your profile picture > Gemini Apps Activity, then change the toggle to “Turn off,” or to “Turn off and delete activity” if you also want to delete previous conversations. If the option reads “Turn on,” then Gemini Apps Activity is already turned off. 
  • Control app and notification access: You can control which apps Gemini can access by tapping your profile picture > Apps, then scrolling down and disabling the toggle next to any apps you do not want Gemini to access. If you do not want Gemini to potentially access the content that appears in notifications, open the Settings app and revoke notification access from the Google app.
  • Delete the Gemini app: Depending on your phone model, you might be able to delete the Gemini app and revert to using Google Assistant instead. You can do so by long-pressing the Gemini app and selecting the option to delete. 
How to Check and Limit What Apple Intelligence and Siri Can Access

Similarly, there are a few things you can do to clamp down on what Apple Intelligence and Siri can do: 

  • Disable the “Use with Siri Requests” option: If you want to continue using Siri, but don’t want to accidentally use it to send messages through secure messaging apps, like WhatsApp, then you can disable that feature by opening Settings > Apps > [app name], and disabling “Use with Siri Requests,” which turns off the ability to compose messages with Siri and send them through that app.
  • Disable Apple Intelligence entirely: Apple Intelligence is an all-or-nothing setting on iPhones, so if you want to avoid any potential issues, your only option is to turn it off completely. To do so, open Settings > Apple Intelligence & Siri, and disable “Apple Intelligence” (you will only see this option if your device supports Apple Intelligence; if it doesn’t, the menu will only be for “Siri”). You can also disable certain features, like Writing Tools, using Screen Time restrictions. Siri can’t be universally turned off in the same way, though you can turn off the options under “Talk to Siri” so that you can’t speak to it. 

For more information about cutting off AI access at different levels in other apps, this Consumer Reports article covers other platforms and services.

Why It Matters: Sending Messages Has Different Privacy Concerns Than Receiving Them

Let’s start with a look at how Google and Apple integrate their AI systems into message composition, using WhatsApp as an example.

Google Gemini and WhatsApp

On Android, you can optionally link WhatsApp and Gemini together so you can then initiate various actions for sending messages from the Gemini app, like “Call Mom on WhatsApp” or “Text Jason on WhatsApp that we need to cancel our secret meeting, but make it a haiku.” This feature raised red flags for users concerned about privacy.

By default, everything you do in Gemini is stored in the “Gemini Apps Activity,” where messages are kept indefinitely, subject to human review, and used to train Google’s products. So, unless you change that setting, any message you compose and send in WhatsApp through Gemini is visible to Google.

If you turn the activity off, interactions are still stored for 72 hours. Google’s documentation claims that even though messages are stored, those conversations aren't reviewed or used to improve Google machine learning technologies, though that appears to be an internal policy choice with no technical limits preventing Google from accessing those messages.

The simplicity of invoking Gemini to compose and send a message may lead to a false sense of privacy. Notably, other secure messaging apps, like Signal, do not offer this Gemini integration.

For comparison’s sake, let’s see how this works with Apple devices.

Siri and WhatsApp

The closest comparison on iOS is Siri, which Apple says will eventually become part of Apple Intelligence. Currently, Apple’s AI message-composition tools are not available to third-party apps like Signal and WhatsApp.

According to Apple’s privacy policy, when you dictate a message through Siri to send via WhatsApp (or anywhere else), the message, including metadata like the recipient’s phone number and other identifiers, is sent to Apple’s servers. Researchers confirmed this includes the text of messages sent to WhatsApp. So when you use Siri to compose a WhatsApp message, the message gets routed to both Apple and WhatsApp. Apple claims it does not store this transcript unless you’ve opted into “Improve Siri and Dictation.” WhatsApp defers to Apple’s support documentation for data-handling concerns. This is similar to how Google handles speech-to-text prompts.

In response to that research, Apple said this was expected behavior with an app that uses SiriKit—the extension that allows third-party apps to integrate with Siri—like WhatsApp does.

Both Siri and Apple Intelligence can sometimes run locally on-device, and other times need to rely on Apple-managed cloud servers to complete requests. Apple Intelligence can use the company’s Private Cloud Compute, but Siri doesn’t have a similar feature.

The ambiguity around where data goes makes it needlessly difficult to decide whether you are comfortable with the privacy trade-offs that features like Siri or Apple Intelligence might entail.

How Receiving Messages Works

Sending encrypted messages is just one half of the privacy puzzle. What happens on the receiving end matters too. 

Google Gemini

By default, the Gemini app doesn’t have access to the text inside secure messaging apps or to notifications. But you can grant access to notifications through the Utilities app, which can read, summarize, and reply to notifications, including those from WhatsApp and Signal (it can also read notifications aloud through headphones).

We could not find anything in Google’s Utilities documentation that clarifies what information is collected, stored, or sent to Google from these notifications. When we reached out to Google, the company responded that it “builds technical data protections that safeguard user data, uses data responsibly, and provides users with tools to control their Gemini experience.” In other words, there appears to be no technical limitation preventing Google from accessing the text of notifications once you’ve enabled the feature in the Utilities app, which could expose any notifications routed through Utilities to the Gemini app, for access internally or by third parties. Google should make its data handling explicit in its public documentation.

If you use encrypted communications apps and have granted access to notifications, it is worth considering disabling that feature or controlling, app by app, what’s visible in your notifications.

Apple Intelligence

Apple is clearer about how it handles this sort of notification access.

Siri can read and reply to messages with the “Announce Notifications” feature. With this enabled, Siri can read notifications out loud on select headphones or via CarPlay. In a press release, Apple states, “When a user talks or types to Siri, their request is processed on device whenever possible. For example, when a user asks Siri to read unread messages… the processing is done on the user’s device. The contents of the messages aren’t transmitted to Apple servers, because that isn’t necessary to fulfill the request.”

Apple Intelligence can summarize notifications from any app for which you’ve enabled notifications. Apple is clear that these summaries are generated on your device: “when Apple Intelligence provides you with preview summaries of your emails, messages, and notifications, these summaries are generated by on-device models.” This means there should be no risk that the text of notifications from apps like WhatsApp or Signal gets sent to Apple’s servers just to summarize them.

New AI Features Must Come With Strong User Controls

The more device-makers cram AI features into their devices, the more necessary it becomes for us to have clear and simple controls over what personal data those features can access. If users do not have control over when a text leaves a device for any sort of AI processing—whether that’s to a “private” cloud or not—it erodes our privacy and potentially threatens the foundations of end-to-end encrypted communications.

Per-App AI Permissions

Google, Apple, and other device makers should add a device-level AI permission to their phones, just like they do for other potentially invasive features, like location sharing. You should be able to tell the operating system’s AI not to access an app, even if that comes at the “cost” of missing out on some features. The setting should be straightforward and easy to understand in ways the Gemini and Apple Intelligence controls currently are not.

Offer On-Device-Only Modes

Device-makers should offer an “on-device only” mode for those who want to use some features without having to puzzle out what happens on the device versus in the cloud. Samsung offers this, and both Google and Apple would benefit from a similar option.

Improve Documentation

Both Google and Apple should improve their documentation about how these features interact with various apps. Apple doesn’t seem to clarify notification processing privacy anywhere outside of a press release, and we couldn’t find anything about Google’s Utilities privacy at all. We appreciate tools like Gemini Apps Activity as a way to audit what the company collects, but vague information like “Prompted a Communications query” is only useful if there’s an explanation somewhere about what that means.

The current user options are not enough. It’s clear that the AI features device-makers add come with significant confusion about their privacy implications, and it’s time to push back and demand better controls. The privacy problems introduced alongside new AI features should be taken seriously, and remedies should be offered to both users and developers who want real, transparent safeguards over how a company accesses their private data and communications.

Thorin Klosowski

Civil Disobedience of Copyright Keeps Science Going

Creating and sharing knowledge are defining traits of humankind, yet copyright law has grown so restrictive that it can require acts of civil disobedience to ensure that students and scholars have the books they need and to preserve swaths of culture from being lost forever.

Reputable research generally follows a familiar pattern: Scientific articles are written by scholars based on their research—often with public funding. Those articles are then peer-reviewed by other scholars in their fields and revisions are made according to those comments. Afterwards, most large publishers expect to be given the copyright on the article as a condition of packaging it up and selling it back to the institutions that employ the academics who did the research and to the public at large. Because research is valuable and because copyright is a monopoly on disseminating the articles in question, these publishers can charge exorbitant fees that place a strain even on wealthy universities and are simply out of reach for the general public or universities with limited budgets, such as those in the global south. The result is a global human rights problem.

This model is broken, yet science goes on thanks to widespread civil disobedience of the copyright regime that locks up the knowledge created by researchers. Some turn to social media to ask that a colleague with access share articles they need (despite copyright’s prohibitions on sharing). Certainly, at least some such sharing is protected fair use, but scholars should not have to seek a legal opinion or risk legal threats from publishers to share the collective knowledge they generate.

Even more useful, though on shakier legal ground, are so-called “shadow archives” and aggregators such as SciHub, Library Genesis (LibGen), Z-Library, or Anna’s Archive. These are the culmination of efforts from volunteers dedicated to defending science.

SciHub alone handles tens of millions of requests for scientific articles each year and remains operational despite adverse court rulings, thanks both to being based in Russia and to a community of academics who see it as an ethical response to the high access barriers publishers impose, and who provide their log-on credentials so it can retrieve requested articles. SciHub and LibGen are continuations of samizdat, the Soviet-era practice of disobeying state censorship in the interests of learning and free speech.

Unless publishing gatekeepers adopt drastically more equitable practices and become partners in disseminating knowledge, they will continue to lose ground to open access alternatives, legal or otherwise.

EFF is proud to celebrate Open Access Week.

Kit Walsh

Opt Out October: Daily Tips to Protect Your Privacy and Security

Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time it makes the process of protecting your privacy much easier. This month we’re going to do just that. For the month of October, we’ll update this post with new tips every weekday that show various ways you can opt yourself out of the ways tech giants surveil you.

Online privacy isn’t dead. But the tech giants make it a pain in the butt to achieve. With these incremental tweaks to the services we use, we can throw sand in the gears of the surveillance machine and opt out of the ways tech companies attempt to optimize us into advertisement and content viewing machines. We’re also pushing companies to make more privacy-protective defaults the norm, but until that happens, the onus is on all of us to dig into the settings.

All month long we’ll share tips, including some with help from our friends at Consumer Reports’ Security Planner tool.

Tip 1: Establish Good Digital Hygiene

Before we can get into the privacy weeds, we need to first establish strong basics. Namely, two security fundamentals: using strong passwords (a password manager helps simplify this) and two-factor authentication for your online accounts. Together, they can significantly improve your online privacy by making it much harder for your data to fall into the hands of a stranger.

Using unique passwords for every web login means that if your account information ends up in a data breach, it won’t give bad actors an easy way to unlock your other accounts. Since it’s impossible for all of us to remember a unique password for every login we have, most people will want to use a password manager, which generates and stores those passwords for you.
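
As an illustration of why the generator matters, here is a minimal sketch of the kind of password a manager creates for you, using Python’s standard secrets module (this is a toy example, not any particular password manager’s algorithm):

    import secrets
    import string

    # Build each password from a large character set so that every
    # login gets its own unguessable credential.
    def generate_password(length: int = 20) -> str:
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # a fresh, unique password every call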

Two-factor authentication is the second lock on those same accounts. To log in to, say, Facebook for the first time on a particular computer, you’ll need to provide a password and a “second factor,” usually an always-changing numeric code generated in an app or sent to you on another device. This makes it much harder for someone else to get into your account because it’s less likely they’ll have both the password and the temporary code.
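
For the curious, most authenticator apps compute those codes with TOTP (RFC 6238): a shared secret plus the current time yields a short-lived six-digit code. Here’s a minimal sketch in Python, using a made-up demo secret (illustrative only; use a real authenticator app in practice):

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Derive the current code from a shared secret and the clock (RFC 6238)."""
        key = base64.b32decode(secret_b32.upper())
        counter = struct.pack(">Q", int(time.time()) // period)  # 30-second steps
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    # "JBSWY3DPEHPK3PXP" is a common demo secret; your app and the server
    # compute the same six digits from it at the same moment.
    print(totp("JBSWY3DPEHPK3PXP"))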

This can be a little overwhelming to get started if you’re new to online privacy! Aside from our guides on Surveillance Self-Defense, we recommend taking a look at Consumer Reports’ Security Planner for ways to help you get started setting up your first password manager and turning on two-factor authentication.

Tip 2: Learn What a Data Broker Knows About You

Hundreds of data brokers you’ve never heard of are harvesting and selling your personal information. This can include your address, online activity, financial transactions, relationships, and even your location history. Once sold, your data can be abused by scammers, advertisers, predatory companies, and even law enforcement agencies.

Data brokers build detailed profiles of our lives but try to keep their own practices hidden. Fortunately, several state privacy laws give you the right to see what information these companies have collected about you. You can exercise this right by submitting a data access request to a data broker. Even if you live in a state without privacy legislation, some data brokers will still respond to your request.

There are hundreds of known data brokers, so start with a few of the major ones.

Data brokers have been caught ignoring privacy laws, so there’s a chance you won’t get a response. If you do, you’ll learn what information the data broker has collected about you and the categories of third parties they’ve sold it to. If the results motivate you to take more privacy action, encourage your friends and family to do the same. Don’t let data brokers keep their spying a secret.

You can also ask data brokers to delete your data, with or without an access request. We’ll get to that later this month and explain how to do this with people-search sites, a category of data brokers.

Tip 3: Disable Ad Tracking on iPhone and Android

Picture this: you’re doomscrolling and spot a t-shirt you love. Later, you mention it to a friend and suddenly see an ad for that exact shirt in another app. The natural question pops into your head: “Is my phone listening to me?” Breathe a sigh of relief because, no, your phone is not listening to you. But advertisers are using shady tactics to profile your interests. Here’s an easy way to fight back: disable the ad identifier on your phone to make it harder for advertisers and data brokers to track you.

Disable Ad Tracking on iOS and iPadOS:

  • Open Settings > Privacy & Security > Tracking, and turn off “Allow Apps to Request to Track.”
  • Open Settings > Privacy & Security > Apple Advertising, and disable “Personalized Ads” to also stop some of Apple’s internal tracking for apps like the App Store. 
  • If you use Safari, go to Settings > Apps > Safari > Advanced and disable “Privacy Preserving Ad Measurement.”

Disable Ad Tracking on Android:

  • Open Settings > Security & privacy > Privacy controls > Ads, and tap “Delete advertising ID.”
  • While you’re at it, run through Google’s “Privacy Checkup” to review what info other Google services—like YouTube or your location—may be sharing with advertisers and data brokers.

These quick settings changes can help keep bad actors from spying on you. For a deeper dive on securing your iPhone or Android device, be sure to check out our full Surveillance Self-Defense guides.

Tip 4: Declutter Your Apps

Decluttering is all the rage for optimizers and organizers alike, but did you know a cleansing sweep through your apps can also help your privacy? Apps collect a lot of data, often in the background when you are not using them. This can be a prime way companies harvest your information, and then repackage and sell it to other companies you've never heard of. Having a lot of apps increases the peepholes that companies can gain into your personal life. 

Do you need three airline apps when you're not even traveling? Or the app for that hotel chain you stayed in once? It's best to delete that app and cut off their access to your information. In an ideal world, app makers would not process any of your data unless strictly necessary to give you what you asked for. Until then, to do an app audit:

  • Look through the apps you have and identify ones you rarely open or barely use. 
  • Long-press on apps that you don't use anymore and delete or uninstall them when a menu pops up. 
  • Even on apps you keep, take a pass through the location, microphone, and camera permissions for each of them. For iOS devices, you can follow these instructions to find that menu. For Android, check out this instructions page.

If you delete an app and later find you need it, you can always redownload it. Try giving some apps the boot today to gain some memory space and some peace of mind.

Tip 5: Disable Behavioral Ads on Amazon

Happy Amazon Prime Day! Let’s celebrate by taking back a piece of our privacy.

Amazon collects an astounding amount of information about your shopping habits. While the only way to truly free yourself from the company’s all-seeing eye is to never shop there, there is something you can do to disrupt some of that data use: tell Amazon to stop using your data to market more things to you (these settings are for US users and may not be available in all countries).

  • Log into your Amazon account, then click “Account & Lists” under your name. 
  • Scroll down to the “Communication and Content” section and click “Advertising preferences” (or just click this link to head directly there).
  • Click the option next to “Do not show me interest-based ads provided by Amazon.”
  • You may want to also delete the data Amazon already collected, so click the “Delete ad data” button.

This setting will turn off the personalized ads based on what Amazon infers about you, though you will likely still see recommendations based on your past purchases at Amazon.

Of course, Amazon sells a lot of other products. If you own an Alexa, now’s a good time to review the few remaining privacy options available to you after the company took away the ability to disable voice recordings. Kindle users might want to turn off some of the data usage tracking. And if you own a Ring camera, consider enabling end-to-end encryption to ensure you’re in control of the recording, not the company. 

Tip 6: Install Privacy Badger to Block Online Trackers

Every time you browse the web, you’re being tracked. Most websites contain invisible tracking code that lets companies collect and profit from your data. That data can end up in the hands of advertisers, data brokers, scammers, and even government agencies. Privacy Badger, EFF’s free browser extension, can help you fight back.

Privacy Badger automatically blocks hidden trackers to stop companies from spying on you online. It also tells websites not to share or sell your data by sending the “Global Privacy Control” signal, which is legally binding under some state privacy laws. Privacy Badger has evolved over the past decade to fight various methods of online tracking. Whether you want to protect your sensitive information from data brokers or just don’t want Big Tech monetizing your data, Privacy Badger has your back.
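
Under the hood, the Global Privacy Control signal is simply a request header, Sec-GPC: 1, that rides along with your browsing (plus a matching JavaScript property sites can check). Privacy Badger sets it for you in the browser; the sketch below just shows what the signal looks like at the HTTP level, using Python’s requests library and a placeholder URL:

    import requests

    # The GPC spec expresses the signal as a "Sec-GPC: 1" request header;
    # extensions like Privacy Badger attach it to your browser's requests
    # automatically, so you never have to do this by hand.
    response = requests.get(
        "https://example.com",        # placeholder URL, not a real endpoint
        headers={"Sec-GPC": "1"},
    )
    print(response.status_code)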

Visit privacybadger.org to install Privacy Badger.

It’s available on Chrome, Firefox, Edge, and Opera for desktop devices and Firefox and Edge for Android devices. Once installed, all of Privacy Badger’s features work automatically. There’s no setup required! If blocking harmful trackers ends up breaking something on a website, you can easily turn off Privacy Badger for that site while maintaining privacy protections everywhere else.

When you install Privacy Badger, you’re not just protecting yourself—you’re joining EFF and millions of other users in the fight against online surveillance.

Tip 7: Review Location Tracking Settings

Data brokers don’t just collect information on your purchases and browsing history. Mobile apps that have the location permission turned on will deliver your coordinates to third parties in exchange for insights or monetary kickbacks. Even when they don’t deliver that data directly to data brokers, if the app serves ad space, your location will be delivered in real-time bid requests not only to those wishing to place an ad, but to all participants in the ad auction—even if they lose the bid. Location data brokers take part in these auctions just to harvest location data en masse, without any intention of buying ad space.
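
To make the bid-request point concrete, here is a heavily simplified, hypothetical payload in the style of OpenRTB (the field names follow that convention, but the app and values are invented for illustration). Note how the ad identifier and your coordinates travel together:

    # A heavily simplified, hypothetical bid request. Every auction
    # participant receives a payload like this, winners and losers alike.
    bid_request = {
        "id": "auction-1234",
        "app": {"bundle": "com.example.weather"},           # hypothetical app
        "device": {
            "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # ad ID (see tip 3)
            "geo": {"lat": 37.7749, "lon": -122.4194},      # your coordinates
        },
    }
    print(bid_request["device"]["geo"])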

Luckily, you can change a few settings to protect yourself against this hoovering of your whereabouts. You can use iOS or Android tools to audit an app’s permissions, providing clarity on who is providing what info to whom. You can then go to the apps that don’t need your location data and disable their access to that data (you can always change your mind later if it turns out location access was useful). You can also disable real-time location tracking by putting your phone into airplane mode, while still being able to navigate using offline maps. And by disabling mobile advertising identifiers (see tip three), you break the chain that links your location from one moment to the next.

Finally, for particularly sensitive situations you may want to bring an entirely separate, single-purpose device which you’ve kept clean of unneeded apps and locked down settings on. Similar in concept to a burner phone, even if this single-purpose device does manage to gather data on you, it can only tell a partial story about you—all the other data linking you to your normal activities will be kept separate.

For details on how you can follow these tips and more on your own devices, check out our more extensive post on the topic.

Tip 8: Limit the Data Your Gaming Console Collects About You

Oh, the beauty of gaming consoles—just plug in and play! Well... after you speed-run through a bunch of terms and conditions, internet setup, and privacy settings. If you rushed through those startup screens, don’t worry! It’s not too late to limit the data your console is collecting about you. Because yes, modern consoles do collect a lot about your gaming habits.

Start with the basics: make sure you have two-factor authentication turned on for your accounts. PlayStation, Xbox, and Nintendo all have guides on their sites. Between payment details and other personal info tied to these accounts, 2FA is an easy first line of defense for your data.

Then, it’s time to check the privacy controls on your console:

  • PlayStation 5: Go to Settings > Users and Accounts > Privacy, where you can adjust what you share with both strangers and friends, and limit the data your PS5 collects about you under Data You Provide and Personalization.
  • Xbox Series X|S: Press the Xbox button > Profile & System > Settings > Account > Privacy & online safety > Xbox Privacy to fine-tune your sharing. To manage data collection, head to Profile & System > Settings > Account > Privacy & online safety > Data collection.
  • Nintendo Switch: The Switch doesn’t share as much data by default, but you still have options. To control who sees your play activity, go to System Settings > Users > [your profile] > Play Activity Settings. To opt out of sharing eShop data, open the eShop, select your profile (top right), then go to Google Analytics Preferences > Do Not Share.

Plug and play, right? Almost. These quick checks can help keep your gaming sessions fun—and more private.

Tip 9: Hide Your Start and End Points on Strava

Sharing your personal fitness achievements, whether extended distances, accurate calorie counts, or GPS paths, sounds like a fun, competitive feature offered by today's digital fitness trackers. If you enjoy tracking those activities, you've probably heard of Strava. While it's excellent for motivation and connecting with fellow athletes, Strava's default settings can reveal sensitive information about where you live, work, or exercise, creating serious security and privacy risks. Fortunately, Strava gives you control over how much of your activity map is visible to others, allowing you to stay active in your community while protecting your personal safety.

We've covered how Strava data exposed classified military bases in 2018 when service members used fitness trackers. If fitness data can compromise national security, what's it revealing about you?

Here's how to hide your start and end points:

  • On the website: Hover over your profile picture > Settings > Privacy Controls > Map Visibility.
  • On mobile: Open Settings > Privacy Controls > Map Visibility.
  • You can then choose from three options: hide portions near a specific address, hide the start/end of all activities, or hide entire maps.

You can also adjust individual activities:

  • Open the activity you want to edit.
  • Select the three-dot menu icon.
  • Choose "Edit Map Visibility."
  • Use sliders to customize what's hidden or enable "Hide the Entire Map."

Great job taking control of your location privacy! Remember that these settings only apply to Strava, so if you share activities to other platforms, you'll need to adjust those privacy settings separately. While you're at it, consider reviewing your overall activity visibility settings to ensure you're only sharing what you want with the people you choose.

Tip 10: Find and Delete An Account You No Longer Use

Millions of online accounts are compromised each year. The more accounts you have, the more at risk you are of having your personal data illegally accessed and published online. Even if you don’t suffer a data breach, there’s also the possibility that someone could find one of your abandoned social media accounts containing information you shared publicly on purpose in the past, but don’t necessarily want floating around anymore. And companies may still be profiting off details of your personal life, even though you’re not getting any benefit from their service.

So, now’s a good time to find an old account to delete. There may be one you can already think of, but if you’re stuck, you can look through your password manager, look through logins saved on your web browser, or search your email inbox for phrases like “new account,” “password,” “welcome to,” or “confirm your email.” Or, enter your email address on the website HaveIBeenPwned to get a list of sites where your personal information has been compromised to see if any of them are accounts you no longer use.

Once you’ve decided on an account, you’ll need to find the steps to delete it. Simply deleting an app off of your phone or computer does not delete your account. Often you can log in and look in the account settings, or find instructions in the help menu, the FAQ page, or the pop-up customer service chat. If that fails, use a search engine to see if anybody else has written up the steps to deleting your specific type of account.

For more information, check out the Delete Unused Accounts tip on Security Planner.
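
A side note for the technically inclined: HaveIBeenPwned also publishes a free Pwned Passwords API that uses k-anonymity, letting you check whether a password appears in known breaches without ever sending the full password or even its full hash. A minimal sketch, assuming the documented range endpoint:

    import hashlib
    import requests

    def breach_count(password: str) -> int:
        """How many times a password appears in known breaches (0 if none)."""
        sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # k-anonymity: only the first five hex characters leave your machine.
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(breach_count("password123"))  # a depressingly popular password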

Tip 11: Search for Yourself

Today's tip may sound a little existential, but we're not suggesting a deep spiritual journey. Just a trip to your nearest search engine. Pop your name into search engines such as Google or DuckDuckGo, or even AI tools such as ChatGPT, to see what you find. This is one of the simplest things you can do to raise your own awareness of your digital reputation. It can be the first thing prospective employers (or future first dates) do when trying to figure out who you are. From a privacy perspective, doing it yourself can also shed light on how your information is presented to the general public. If there's a defunct social media account you'd rather keep hidden, but it's on the first page of your search results, that might be a good signal for you to finally delete that account. If you shared your cellphone number with an organization you volunteer for and it's on their home page, you can ask them to take it down.

Knowledge is power. It's important to know what search results are out there about you, so you understand what people see when they look for you. Once you have this overview, you can make better choices about your online privacy. 

Tip 12: Tell “People Search” Sites to Delete Your Information

When you search online for someone’s name, you’ll likely see results from people-search sites selling their home address, phone number, relatives’ names, and more. People-search sites are a type of data broker with an especially dangerous impact. They can expose people to scams, stalking, and identity theft. Submit opt out requests to these sites to reduce the amount of personal information that is easily available about you online.

Check out this list of opt-out links and instructions for more than 50 people-search sites, organized by priority. Before submitting a request, check that the site actually has your information, and work through the high-priority sites first.

Data brokers continuously collect new information, so your data could reappear after being deleted. You’ll have to re-submit opt-outs periodically to keep your information off of people-search sites. Subscription-based services can automate this process and save you time, but a Consumer Reports study found that manual opt-outs are more effective.

Tip 13: Remove Your Personal Addresses from Search Engines 

Your home address may often be found with just a few clicks online. Whether you're concerned about your digital footprint or looking to safeguard your physical privacy, understanding where your address appears and how to remove or obscure it is a crucial step. Here's what you need to know.

Your personal addresses can be available through public records like property purchases, medical licensing information, or data brokers. Opting out from data brokers will do a lot to remove what's available commercially, but sometimes you can't erase the information entirely from things like property sales records.

You can ask some search engines to remove your personal information from search indexes, which is the most efficient way to make information like your personal addresses, phone number, and email address a lot harder to find. Google has a form that makes this request quite easy, and we’d suggest starting there.

Tip 14: Check Out Signal

Here's the problem: many of your texts aren't actually private. Phone companies, government agencies, and app developers can all too often peek at your conversations.

So on Global Encryption Day, our tip is to check out Signal—a messaging app that actually keeps your conversations private.

Signal uses end-to-end encryption, meaning only you and your recipient can read your messages—not even Signal can see them. Security experts love Signal because it's run by a privacy-focused nonprofit, funded by donations instead of data collection, and its code is publicly auditable. 

Beyond privacy, Signal offers free messaging and calls over Wi-Fi, helping you avoid SMS charges and international calling fees. The only catch? Your contacts need Signal too, so start recruiting your friends and family!

How to get started: Download Signal from your app store, verify your phone number, set a secure PIN, and start messaging your contacts who join you. Consider also setting up a username so people can reach you without sharing your phone number. For more detailed instructions, check out our guide.

Global Encryption Day is the perfect time to protect your communications. Take your time to explore the app, and check out other privacy-protecting features like disappearing messages, safety number verification, and lock screen notification privacy.

Tip 15: Switch to a Privacy-Protective Browser

Your browser stores tons of personal information: browsing history, tracking cookies, and data that companies use to build detailed profiles for targeted advertising. The browser you choose makes a huge difference in how much of this tracking you can prevent.

Most people use Chrome or Safari, which are automatically installed on Google and Apple products, but these browsers have significant privacy drawbacks. For example: Chrome's Incognito mode only hides history on your device—it doesn't stop tracking. Safari has been caught storing deleted browser history and collecting data even in private browsing mode.

Firefox is one alternative that puts privacy first. Unlike Chrome, Firefox automatically blocks trackers and ads in Private Browsing mode and prevents websites from sharing your data between sites. It also warns you when websites try to extract your personal information. But Firefox isn't your only option—other privacy-focused browsers like DuckDuckGo, Brave, and Tor also offer strong protections with different features. The key is switching away from browsers that prioritize data collection over your privacy.

Switching is easy: download your chosen browser from the links above and install it. Most browsers let you import bookmarks and passwords during setup.

You now have a new browser! Take some time to explore its privacy settings to maximize your protection.

Tip 16: Give Yourself Another Online Identity

We all take on different identities at times. Just as it's important to set boundaries in your daily life, the same can be true for your digital identity. For many reasons, people may want to keep aspects of their lives separate—and giving people control over how their information is used is one of the fundamental reasons we fight for privacy. Consider splitting pieces of your life across separate email accounts, phone numbers, or social media accounts. 

This can help you manage your life and keep a full picture of your private information out of the hands of nosy data-mining companies. Maybe you volunteer for an organization in your spare time that you'd rather keep private, want to keep emails from your kids' school separate from a mountain of spam, or simply would rather keep your professional and private social media accounts separate. 

Whatever the reason, consider whether there's a piece of your life that could benefit from its own identity. When you set up these boundaries, you can also protect your privacy.

Come back tomorrow for another tip!

Thorin Klosowski

EFF Backs Constitutional Challenge to Ecuador’s Intelligence Law That Undermines Human Rights

In early September, EFF submitted an amicus brief to Ecuador’s Constitutional Court supporting a constitutional challenge filed by Ecuadorian NGOs, including INREDH and LaLibre. The case challenges the constitutionality of the Ley Orgánica de Inteligencia (LOI) and its implementing regulation, the General Regulation of the LOI.

EFF’s amicus brief argues that the LOI enables disproportionate surveillance and secrecy that undermine constitutional and Inter-American human rights standards. EFF urges the Constitutional Court to declare the LOI and its regulation unconstitutional in their entirety.

More specifically, our submission notes that:

“The LOI presents a structural flaw that undermines compliance with the principles of legality, legitimate purpose, suitability, necessity, and proportionality; it inverts the rule and the exception, with serious harm to rights enshrined constitutionally and under the Convention; and it prioritizes indeterminate state interests, in contravention of the ultimate aim of intelligence activities and state action, namely the protection of individuals, their rights, and freedoms.”

Core Legal Problems Identified

Vague and Overbroad Definitions

The LOI’s key terms, like “national security,” “integral security of the State,” “threats,” and “risks,” are left either undefined or so broadly framed that they could mean almost anything. This vagueness grants intelligence agencies wide, unchecked discretion and falls short of the standard of legal certainty required under the American Convention on Human Rights (CADH).

Secrecy and Lack of Transparency

The LOI makes secrecy the rule rather than the exception, reversing the Inter-American principle of maximum disclosure, which holds that access to information should be the norm and secrecy a narrowly justified exception. The law establishes a classification system—“restricted,” “secret,” and “top secret”—for intelligence and counterintelligence information, but without clear, verifiable parameters to guide its application on a case-by-case basis. As a result, all information produced by the governing body (ente rector) of the National Intelligence System is classified as secret by default. Moreover, intelligence budgets and spending are insulated from meaningful public oversight, concentrated under a single authority, and ultimately destroyed, leaving no mechanism for accountability.

Weak or Nonexistent Oversight Mechanisms

The LOI leaves intelligence agencies to regulate themselves, with almost no external scrutiny. Civilian oversight is minimal, limited to occasional, closed-door briefings before a parliamentary commission that lacks real access to information or decision-making power. This structure offers no guarantee of independent or judicial supervision and instead fosters an environment where intelligence operations can proceed without transparency or accountability.

Intrusive Powers Without Judicial Authorization

The LOI allows access to communications, databases, and personal data without prior judicial order, which enables the mass surveillance of electronic communications, metadata, and databases across public and private entities—including telecommunication operators. This directly contradicts rulings of the Inter-American Court of Human Rights, which establish that any restriction of the right to privacy must be necessary, proportionate, and subject to independent oversight. It also runs counter to CAJAR vs. Colombia, which affirms that intrusive surveillance requires prior judicial authorization.

International Human Rights Standards Applied

Our amicus curiae draws on the CAJAR vs. Colombia judgment, which set strict standards for intelligence activities. Crucially, Ecuador’s LOI falls short of all these tests: it doesn’t constitute an adequate legal basis for limiting rights; contravenes the principles of necessity and proportionality; fails to ensure robust controls and safeguards, like prior judicial authorization and solid civilian oversight; and completely disregards related data protection guarantees and data subjects’ rights.

At its core, the LOI structurally prioritizes vague notions of “state interest” over the protection of human rights and fundamental freedoms. It legalizes secrecy, unchecked surveillance, and the impunity of intelligence agencies. For these reasons, we urge Ecuador’s Constitutional Court to declare the LOI and its regulations unconstitutional, as they violate both the Ecuadorian Constitution and the American Convention on Human Rights (CADH).

Read our full amicus brief here to learn more about how Ecuador’s intelligence framework undermines privacy, transparency, and the human rights protected under Inter-American human rights law.

Paige Collings

It’s Time to Take Back CTRL

Technology is supercharging the attack on democracy by making it easier to spy on people, block free speech, and control what we do. The Electronic Frontier Foundation’s activists, lawyers, and technologists are fighting back. Join the movement to Take Back CTRL.

DONATE TODAY

Join EFF and Fight Back

Take Back CTRL is EFF's new website to give you insight into the ways that technology has become the veins and arteries of rising global authoritarianism. It’s not just because of technology’s massive power to surveil, categorize, censor, and make decisions for governments—but also because the money made by selling your data props up companies and CEOs with clear authoritarian agendas. As the preeminent digital rights organization, EFF has a clear role to play.

If You Use Technology, This Fight Is Yours.

EFF was created for scary moments like the one we’re facing now. For 35 years, EFF has fought to ensure your rights follow you online and wherever you use technology. We’ve sued, we’ve analyzed, we’ve hacked, we’ve argued, and we’ve helped people be heard in halls of power.

But we're still missing something. You.

Because it's your rights we're fighting for:

  • Your right to speak and learn freely online, free of government censorship
  • Your right to move through the world without being surveilled everywhere you go
  • Your right to use your device without it tracking your every click, purchase, and IRL movement
  • Your right to control your data, including data about your body, and to know that data given to one government agency won’t be weaponized against you by another
  • Your right to do what you please with the products and content you pay for

Consider Take Back CTRL our "help wanted" notice, because we need your help to win this fight today.

Join EFF

The future is being decided today. Join the movement to Take Back CTRL.

The Take Back CTRL campaign highlights the work that EFF is doing to fight for our democracy, defend vulnerable members of our community, and stand up against the use of tech in this authoritarian takeover. It also features actions everyone can take to support EFF’s work, use our tools in their everyday lives, and fight back.

Help us spread the word:

Stop tech from dismantling democracy. Join the movement to Take Back CTRL of our rights. https://eff.org/tbc

Allison Morris

No Tricks, Just Treats 🎃 EFF’s Halloween Signal Stickers Are Here!

EFF usually warns of new horrors threatening your rights online, but this Halloween we’ve summoned a few of our own we’d like to share.  Our new Signal Sticker Pack highlights some creatures—both mythical and terrifying—conjured up by our designers for you to share this spooky season.

If you’re new to Signal, it's a free and secure messaging app built by the nonprofit Signal Foundation, which is at the forefront of defending user privacy. While chatting privately, you can add some seasonal flair with Signal Stickers, and rest assured: friends receiving them get the full sticker pack fully encrypted, safe from prying eyes and lurking spirits.

How To Get and Share Signal Stickers

On any mobile device or desktop with the Signal app installed, you can simply click the button below.

Download EFF's Signal Stickers

To Share Frights and Rights

You can also paste the sticker link directly into a Signal chat, then tap it to download the pack directly to the app.

Once the stickers are installed, they are even easier to share—simply open a chat, tap the sticker menu on your keyboard, and send one of EFF’s spooky stickers. Your recipient will then be asked if they’d like the sticker pack too.

All of this works without any third parties knowing what sticker packs you have or whom you shared them with. Our little ghosts and ghouls are just between us.

Meet The Encryptids

These familiar champions of digital rights—The Encryptids—are back! Don’t let their monstrous looks fool you; each one advocates for privacy, security, and a dash of weirdness in their own way. Whether they’re shouting about online anonymity or the importance of interoperability, they’re ready to help you share your love for digital rights. Learn more about their stories here, and you can even grab a bigfoot pin to let everyone know that privacy is a “human” right.

Street-Level Surveillance Monsters

On a cool autumn night, you might be on the lookout for ghosts and ghouls from your favorite horror flicks—but in the real world, there are far scarier monsters lurking in the dark: police surveillance technologies. Often hidden in plain sight, these tools quietly watch from the shadows and are hard to spot. That’s why we’ve given these tools the hideous faces they deserve in our Street-Level Surveillance Monsters series, ready to scare (and inform) your loved ones.

Copyright Creatures

Ask any online creator and they’ll tell you: few things are scarier than a copyright takedown. From unfair DMCA claims and demonetization to frivolous lawsuits designed to intimidate people into a hefty payment, the creeping expansion of copyright can inspire as much dread as any monster on the big screen. That’s why this pack includes a few trolls and creeps straight from a broken copyright system—where profit haunts innovation. 

To that end, all of EFF’s work (including these stickers) is under an open CC-BY license, free for you to use and remix as you see fit.

Happy Haunting Everybody!

These frights may disappear with your message, but the fights persist. That’s why we’re so grateful to EFF supporters for helping us make the digital world a little more weird and a little less scary. You can become a member today and grab some gear to show your support. Happy Halloween!

DONATE TODAY

Rory Mir

No One Should Be Forced to Conform to the Views of the State

Should you have to think twice before posting a protest flyer to your Instagram story? Or feel pressure to delete that bald JD Vance meme that you shared? Now imagine that you could get kicked out of the country—potentially losing your job or education—based on the Trump administration’s dislike of your views on social media. 

That threat to free expression and dissent is happening now, but we won’t let it stand. 

"...they're not just targeting individuals—they're targeting the very idea of freedom itself."

The Electronic Frontier Foundation and co-counsel are representing the United Automobile Workers (UAW), Communications Workers of America (CWA), and American Federation of Teachers (AFT) in a lawsuit against the U.S. State Department and Department of Homeland Security for their viewpoint-based surveillance and suppression of noncitizens’ First Amendment-protected speech online.  The lawsuit asks a federal court to stop the government’s unconstitutional surveillance program, which has silenced citizens and noncitizens alike. It has even hindered unions’ ability to associate with their members. 

"When they spy on, silence, and fire union members for speaking out, they're not just targeting individuals—they're targeting the very idea of freedom itself,” said UAW President Shawn Fain. 

The Trump administration has built this mass surveillance program to monitor the constitutionally protected online speech of noncitizens who are lawfully present in the U.S. The program uses AI and automated technologies to scour social media and other online platforms to identify and punish individuals who express viewpoints the government considers "hostile" to "our culture" and "our civilization." But make no mistake: no one should be forced to conform to the views of the state. 

The Foundation of Democracy 

Your free expression and privacy are fundamental human rights, and democracy crumbles without them. We have an opportunity to fight back, but we need you.  EFF’s team of lawyers, activists, researchers, and technologists have been on a mission to protect your freedom online since 1990, and we’re just getting started.

Donate and become a member of EFF today. Your support helps protect crucial rights, online and off, for everyone.

Give Today

Related Cases: United Auto Workers v. U.S. Department of State
Lisa Femia

Labor Unions, EFF Sue Trump Administration to Stop Ideological Surveillance of Free Speech Online

Viewpoint-based Online Surveillance of Permanent Residents and Visa Holders Violates First Amendment, Lawsuit Argues

NEW YORK—The United Automobile Workers (UAW), Communications Workers of America (CWA), and American Federation of Teachers (AFT) filed a lawsuit today against the Departments of State and Homeland Security for their viewpoint-based surveillance and suppression of protected expression online. The complaint asks a federal court to stop this unconstitutional surveillance program, which has silenced and frightened both citizens and noncitizens, and hampered the ability of the unions to associate with their members and potential members. The case is titled UAW v. State Department.

Since taking power, the Trump administration has created a mass surveillance program to monitor constitutionally protected speech by noncitizens lawfully present in the U.S. Using AI and other automated technologies, the program surveils the social media accounts of visa holders with the goal of identifying and punishing those who express viewpoints the government doesn't like. This has been paired with a public intimidation campaign, silencing not just noncitizens with immigration status, but also the families, coworkers, and friends with whom their lives are integrated.

As detailed in the complaint, when asked in a survey if they had changed their social media activity as a result of the Trump administration's ideological online surveillance program, over 60 percent of responding UAW members and over 30 percent of responding CWA members who were aware of the program said they had. Among noncitizens, these numbers were even higher. Of respondents aware of the program, over 80 percent of UAW members who were not U.S. citizens and over 40 percent of CWA members who were not U.S. citizens said they had changed their activity online.

Individual union members reported refraining from posting, refraining from sharing union content, deleting posts, and deleting entire accounts in response to the ideological online surveillance program. Criticism of the Trump administration or its policies was the most common type of content respondents reported changing their social media activity around. Many members also reported altering their offline union activity in response to the program, including avoiding being publicly identified as part of the unions and reducing their participation in rallies and protests. One member even said they declined to report a wage theft claim due to fears arising from the surveillance program.

Represented by the Electronic Frontier Foundation (EFF), Muslim Advocates (MA), and the Media Freedom & Information Access Clinic (MFIA), the UAW, CWA, and AFT seek to halt the program that affects thousands of their members individually and has harmed the ability of the unions to organize, represent, and recruit members. The lawsuit argues that the viewpoint-based online surveillance program violates the First Amendment and the Administrative Procedure Act.

"The Trump administration's use of surveillance to track and intimidate UAW members is a direct assault on the First Amendment—and an attack on every working person in this country," said UAW President Shawn Fain. "When they spy on, silence, and fire union members for speaking out, they're not just targeting individuals—they're targeting the very idea of freedom itself. The right to protest, to organize, to speak without fear—that's the foundation of American democracy. If they can come for UAW members at our worksites, they can come for any one of us tomorrow. And we will not stand by and let that happen."

"Every worker should be alarmed by the Trump administration’s online surveillance program," said CWA President Claude Cummings Jr. "The labor movement is built on our freedoms under the First Amendment to speak and assemble without fear retaliation by the government. The unconstitutional Challenged Surveillance Program threatens those freedoms and explicitly targets those who are critical of the administration and its policies. This policy interferes with CWA members’ ability to express their points of view online and organize to improve their working conditions."

"Free speech is the foundation of democracy in America," said AFT President Randi Weingarten. "The Trump administration has rejected that core constitutional right and now says only speech it agrees with is permitted—and that it will silence those who disagree. This suit exposes the online surveillance tools and other cyber tactics never envisioned by the founders to enforce compliance with the administration’s views. It details the direct harms on both the target of these attacks and the chilling effect on all those we represent and teach."

"Using a variety of AI and automated tools, the government can now conduct viewpoint-based surveillance and analysis on a scale that was never possible with human review alone," said EFF Staff Attorney Lisa Femia. "The scale of this spying is matched by an equally massive chilling effect on free speech."

"The administration is hunting online for an ever-growing list of disfavored viewpoints," said Golnaz Fakhimi, Legal Director of Muslim Advocates. "Its goal is clear: consolidate authoritarian power by crushing dissent, starting with noncitizens, but certainly not ending there. This urgent lawsuit aims to put a stop to this power grab and defend First Amendment freedoms crucial to a pluralistic and democratic society."

"This case goes to the heart of the First Amendment," said Anthony Cosentino, a student in the Media Freedom & Information Access Clinic. "The government can’t go after people for saying things it doesn’t like. The current administration has ignored that principle, developing a vast surveillance apparatus to find and punish people for their constitutionally protected speech. It is an extraordinary abuse of power, creating a climate of fear not seen in this country since the McCarthy era, especially on college campuses. Our laws and Constitution will not allow it."

For the complaint: https://www.eff.org/document/uaw-v-dos-complaint

For more about the litigation: https://eff.org/cases/united-auto-workers-v-us-department-state

Contacts:
Electronic Frontier Foundation: press@eff.org
Muslim Advocates: golnaz@muslimadvocates.org

Hudson Hongo

🎃 A Full Month of Privacy Tips from EFF | EFFector 37.14

1 week 1 day ago

Instead of catching you off-guard with a jump scare this Halloween season, EFF is here to catch you up on the latest digital rights news with our EFFector newsletter!

In this issue, we’re helping you take control of your online privacy with Opt Out October; explaining the UK’s attack on encryption and why it’s bad for all users; and covering shocking new details about an abortion surveillance case in Texas.

Prefer to listen in? Check out our audio companion, where EFF Security and Privacy Activist Thorin Klosowski explains how small steps to protect your privacy can add up to big changes.  Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.14 - 🎃 A FULL MONTH OF PRIVACY TIPS FROM EFF

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Victory! California Requires Transparency for AI Police Reports

1 week 2 days ago

California Governor Newsom has signed S.B. 524, a bill that begins the long process of regulating and imposing transparency on the growing problem of AI-written police reports. EFF supported this bill and has spent the last year vocally criticizing the companies pushing AI-generated police reports as a service. 

S.B. 524 requires police to disclose, on the report itself, whether AI was used to author the report in full or in part. Further, it bans vendors from selling or sharing the information a police agency provided to the AI.

The bill is also significant because it requires departments to retain the first draft of the report so that judges, defense attorneys, or auditors can readily see which portions of the final report were written by the officer and which were written by the computer. This creates major problems for police who use the most popular product in this space: Axon’s Draft One. By design, Draft One does not retain an edit log of who wrote what. Now, to stay in compliance with the law, police departments will either need Axon to change the product, or officers will have to take it upon themselves to retain evidence of what the draft of their report looked like. Or police can drop Axon’s Draft One altogether.
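In principle, the retention requirement is not hard to satisfy: archive the machine-generated draft, untouched, the moment it is produced, so it can later be diffed against the final report. Here is a minimal sketch of that idea in Python. The helper name, storage location, and record format are all hypothetical, and nothing here reflects how Axon's product actually works.

```python
import hashlib
import json
import time
from pathlib import Path

# Hypothetical storage location; a real system would write to the
# department's records-management system, not a local folder.
ARCHIVE_DIR = Path("report_draft_archive")

def archive_draft(case_number: str, ai_draft: str) -> Path:
    """Preserve the unedited AI-generated draft so that a judge, defense
    attorney, or auditor can later compare it to the final report."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    record = {
        "case_number": case_number,
        "archived_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        # A hash lets reviewers verify the stored draft was never altered.
        "sha256": hashlib.sha256(ai_draft.encode("utf-8")).hexdigest(),
        "draft_text": ai_draft,
    }
    out_path = ARCHIVE_DIR / f"{case_number}_first_draft.json"
    out_path.write_text(json.dumps(record, indent=2))
    return out_path

# Archive the draft the moment it comes back from the vendor,
# before any officer edits it.
saved = archive_draft("25-001234", "On the above date, officers responded to ...")
print(f"First draft preserved at {saved}")
```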

EFF will continue to monitor whether departments are complying with this state law.

After Utah, California has become the second state to pass legislation that begins to address this problem. Because of the lack of transparency surrounding how police departments buy and deploy technology, it’s often hard to know if police departments are using AI to write reports, how the generative AI chooses to translate audio to a narrative, and which portions of reports are written by AI and which parts are written by the officers. EFF has written a guide to help you file public records requests that might shed light on your police department’s use of AI to write police reports. 

It’s still unclear if products like Draft One run afoul of record retention laws, and how AI-written police reports will impact the criminal justice system. We will need to consider more comprehensive regulation and perhaps even prohibition of this use of generative AI. But S.B. 524 is a good first step. We hope that more states will follow California and Utah’s lead and pass even stronger bills.

Matthew Guariglia

EFF and Five Human Rights Organizations Urge Action Around Microsoft’s Role in Israel’s War on Gaza

1 week 2 days ago

In a letter sent to Microsoft at the end of last month, EFF and five other civil society organizations—Access Now, Amnesty International, Human Rights Watch, Fight for the Future, and 7amleh—called on the company to cease any further involvement in providing AI and cloud computing technologies for use in Israel’s ongoing genocide against Palestinians in the Gaza Strip.

EFF also sent updated letters to Google and Amazon renewing our calls for each company to respond to the serious concerns we raised with each of them last year about how they are fulfilling their respective human rights promises to the public. Neither Google nor Amazon has responded substantively. Amazon failed to even acknowledge our request, much less provide any transparency to the public. 

Microsoft Takes a Positive Step Against Surveillance

On September 25, Microsoft’s Vice Chair & President reported that the company had “ceased and disabled a set of services” provided to a unit within the Israel Ministry of Defense. The announcement followed an internal review at the company after The Guardian reported on August 6 that the IDF is using Azure for the storage of data files of phone calls obtained through broad or mass surveillance of civilians in Gaza and the West Bank.

This investigation by The Guardian, +972 Magazine, and Local Call also revealed the extent to which Israel’s military intelligence unit in question, Unit 8200, has used Microsoft’s Azure cloud infrastructure and AI technologies to process intercepted communications and power AI-driven targeting systems against Palestinians in Gaza and the West Bank—potentially facilitating war crimes and acts of genocide.

Microsoft’s actions are a positive step, and we urge its competitors Google and Amazon to, at the very least, do the same, rather than continuing to support and facilitate mass surveillance of Palestinians in Gaza and the West Bank.  

The Next Steps

But this must be the starting point, and not the end. Our joint letter therefore calls on Microsoft to provide clarity around:

  1. What further steps Microsoft will take to suspend its business with the Israeli military and other government bodies where there is evidence indicating that business is contributing to grave human rights abuses and international crimes.
  2. Whether Microsoft will commit to publishing the review findings in full, including the scope of the investigation, the specific entities and services under review, and measures Microsoft will take to address adverse human rights impacts related to its business with the Israeli military and other government bodies.
  3. What steps Microsoft has taken to ensure that its current formal review thoroughly investigates the use of its technologies by the Israeli authorities, in light of the fact that the same law firm carried out the previous review and concluded that there was no evidence of use of Microsoft’s Azure and AI technologies to target or harm people in Gaza.
  4. Whether Microsoft will conduct an additional human rights review, or incorporate a human rights lens to the current review.
  5. Whether Microsoft has applied any limited access restrictions to its AI technologies used by the IDF and Israeli government to commit genocide and other international crimes. 
  6. Whether Microsoft will evaluate the “high-impact and higher-risk uses” of its evolving AI technology deployed in conflict zones.
  7. How Microsoft is planning to provide effective remedy, including reparations, to Palestinians affected by any contributions by the company to violations of human rights by Israel.

Microsoft’s announcement of an internal review and the suspension of some of its services is long overdue and much needed in addressing its potential complicity in human rights abuses. But it must not end here, and Microsoft should not be the only major technology company taking such action.  

EFF, Access Now, Amnesty International, Human Rights Watch, Fight for the Future, and 7amleh provided a deadline of October 10 for Microsoft to respond to the questions outlined in the letter. However, Microsoft is expected to send its written response by the end of the month, and we will publish the response once received.

Read the full letter to Microsoft here.

Electronic Frontier Foundation

Watch Now: Navigating Surveillance with EFF Members

1 week 5 days ago

Online surveillance is everywhere—and understanding how you’re being tracked, and how to fight back, is more important than ever. That’s why EFF partnered with Women In Security and Privacy (WISP) for our annual Global Members’ Speakeasy, where we tackled online behavioral tracking and the massive data broker industry that profits from your personal information. 

Our live panel featured Rory Mir (EFF Associate Director of Community Organizing), Lena Cohen (EFF Staff Technologist), Mitch Stoltz (EFF IP Litigation Director), and Yael Grauer (Program Manager at Consumer Reports). Together, they unpacked how we arrived at a point where a handful of major tech companies dictate so much of our digital rights, how these monopolies erode privacy, and what real-world consequences come from constant data collection—and most importantly, what you can do to fight back. 

Members also joined in for a lively Q&A, exploring practical steps to opt out of some of this data collection, discussing the efficacy of privacy laws like the California Consumer Privacy Act (CCPA), and sharing tools and tactics to reclaim control over their data. 

We're always excited to find new ways to connect with our supporters and spotlight the critical work that their donations make possible. And because we want everyone to learn from these conversations, you can now watch the full conversation on YouTube or the Internet Archive.

WATCH THE FULL DISCUSSION

EFF’s Global Member Speakeasy: You Are the Product 

Events like the annual Global Members’ Speakeasy are just one way we like to thank our members for powering EFF’s mission. When you become a member, you’re not only supporting our legal battles, research, and advocacy for digital freedom—you’re joining a global community of people who care deeply about defending privacy and free expression for everyone. 

Join EFF today, and you’ll receive invitations for future member events, quarterly insider updates on our most important work, and some conversation-starting EFF gear to help you spread the word about online freedom. 

A huge thank you to everyone who joined us and our partners at WISP for helping make this event happen. We’re already planning upcoming in-person and virtual events, and we can’t wait to see you there. 

Christian Romero

EFF Austin: Organizing and Making a Difference in Central Texas

1 week 5 days ago

Austin, Texas is a major tech hub with a population that’s engaged in advocacy and paying attention. Since 1991, EFF-Austin, an independent nonprofit civil liberties organization, has been the proverbial beacon alerting those in central Texas to the possibilities and implications of modern technology. It is also an active member of the Electronic Frontier Alliance (EFA). On a recent visit to Texas, I got the chance to speak with Kevin Welch, President of EFF-Austin, about the organization, its work, and what lies ahead for them:

How did EFF-Austin get started, and can you share how it got its name?

EFF-Austin is concerned with emerging frontiers where technology meets society. We are a group of visionary technologists, legal professionals, academics, political activists, and concerned citizens who work to protect digital rights and educate the public about emerging technologies and their implications. Similar to our namesake, the national Electronic Frontier Foundation (EFF), “the dominion we defend is the vast wealth of digital information, innovation, and technology that resides online.” EFF-Austin was originally formed in 1991 with the intention that it would become the first chapter of the national Electronic Frontier Foundation. However, EFF decided not to become a chapters organization, and EFF-Austin became a separately-incorporated, independent nonprofit organization focusing on cyber liberties, digital rights, and emerging technologies.

What's the mission of EFF-Austin and what do you promote?

EFF-Austin advocates for the establishment and protection of digital rights and the defense of the wealth of digital information, innovation, and technology. We promote the right of all citizens to communicate and share information without unreasonable constraint. We also advocate for the fundamental right to explore, tinker, create, and innovate along the frontier of emerging technologies.

EFF-Austin has been involved in a number of initiatives and causes over the past several years, including legislative advocacy. Can you share a few of them?

We were one of the earliest local organizations that began to call out the Austin City Council over their use of automated license plate readers (ALPRs). After several years of fighting, EFF-Austin was proud to join the No ALPRs coalition as a founding member with over thirty local and state activist groups. Through our efforts, Austin decided not to renew our ALPR pilot project, becoming one of the only cities in America to reject ALPRs. Building on this success, the coalition is broadening its scope to call out other uses of surveillance in Austin, like proposed contracts for park surveillance from Liveview Technologies, as well as data privacy abuses more generally, such as the potential partnership with Valkyrie AI to non-consensually provide citizen data for model training and research purposes without sufficient oversight or guardrails. In support of these initiatives, EFF-Austin also partnered with the Austin Technology Commission to propose much stricter oversight and transparency rules around how the city of Austin engages in contracts with third party technology vendors.

EFF-Austin has also provided expert testimony on a number of major technology bills at the Texas Legislature that have since become law, including the Texas Data Privacy And Security Act (TDPSA) and the Texas Responsible AI Governance Act (TRAIGA).

How can someone local to central Texas get involved?

We conduct monthly meetups with a variety of speakers, usually the second Tuesday of each month at 7:00pm at Capital Factory (701 Brazos St, Austin, TX 78701) in downtown Austin. These meetups can range from technology and legal explainers to digital security trainings, from digital arts profiles to shining a spotlight on surveillance. In addition, we have various one-off events, often in partnership with other local nonprofits and civic institutions, including our fellow EFA member Open Austin. We also have annual holiday parties and SXSW gatherings that are free and open to the public. We don't currently have memberships, so any and all are welcome.

While EFF-Austin events are popular and well-attended, and our impact on local technology policy is quite impressive for such a small nonprofit, we have no significant sustained funding beyond occasional outreach to our community. Any local nonprofits, activist organizations, academic initiatives, or technology companies who find themselves aligned with our cause and would like to fund our efforts are encouraged to reach out. We also always welcome the assistance of those who wish to volunteer their technical, organizational, or legal skills to our cause. In addition to emailing us at info@effaustin.org, follow us on Mastodon, Bluesky, Twitter, Facebook, Instagram, or Meetup, and visit us at our website at https://effaustin.org.

Christopher Vines

PERA Remains a Serious Threat to Efforts Against Bad Patents

1 week 6 days ago

As all things old become new again, a bill that would make bad patents easier to obtain and harder to challenge is being considered in the Senate Judiciary Committee. The Patent Eligibility Restoration Act (PERA) would reverse over a decade of progress in fighting patent trolls and making the patent system more balanced.

PERA would overturn long-standing court decisions that have helped keep some of the most problematic patents in check. This includes the Supreme Court’s Alice v. CLS Bank decision, which bars patents on abstract ideas. While Alice has not completely solved the problems of the patent system or patent trolling, it has led to the rejection of hundreds of low-quality software patents and, as a result, has allowed innovation and small businesses to grow.

Thanks to the Alice decision, courts have invalidated a rogue’s gallery of terrible software patents—such as patents on online photo contests, online bingo, upselling, matchmaking, and scavenger hunts. These patents didn’t describe real inventions—they merely applied old ideas to general-purpose computers. But PERA would wipe out the Alice framework and replace it with vague, hollow exceptions, taking us back to an era where patent trolls and large corporate patent-holders aggressively harassed software developers and small companies.

This bill, combined with recent changes that have restricted access to the Patent Trial and Appeal Board (PTAB), would create a perfect storm—giving patent trolls and major corporations with large patent portfolios free rein to squeeze out independent inventors and small businesses.

EFF is proud to join a letter, along with Engine, the Public Interest Patent Law Institute, Public Knowledge, and R Street, to the Senate Judiciary Committee opposing this poorly timed and concerning bill. We urge the committee to instead focus on restoring the PTAB as the accessible, efficient check on patent quality that Congress intended.

Katharine Trendacosta

EFF and Other Organizations: Keep Key Intelligence Positions Senate Confirmed

2 weeks 1 day ago

In a joint letter to the ranking members of the House and Senate intelligence committees, EFF has joined with 20 other organizations, including the ACLU, Brennan Center, CDT, Asian Americans Advancing Justice, and Demand Progress, to express opposition to a rule change that would seriously weaken accountability in the intelligence community. Specifically, under the proposed Senate Intelligence Authorization Act, S. 2342, the general counsels of the Central Intelligence Agency (CIA) and the Office of the Director of National Intelligence (ODNI) would no longer be subject to Senate confirmation.

You can read the entire letter here.

In theory, having the most important legal thinkers at these secretive agencies—the ones who presumably tell an agency if something is legal or not—approved or rejected by the Senate allows elected officials the chance to vet candidates and their beliefs. If, for instance, a confirmation hearing uncovered that a proposed general counsel for the CIA thinks it is not only legal but morally justifiable for the agency to spy on US persons on US soil because of their political or religious beliefs, then the Senate would have the chance to reject that person. 

As the letter says, “The general counsels of the CIA and ODNI wield extraordinary influence, and they do so entirely in secret, shaping policies on surveillance, detention, interrogation, and other highly consequential national security matters. Moreover, they are the ones primarily responsible for determining the boundaries of what these agencies may lawfully do. The scope of this power and the fact that it occurs outside of public view is why Senate confirmation is so important.” 

It is for this reason that EFF and our ally organizations urge Congress to remove this provision from the Senate Intelligence Authorization Act.

Matthew Guariglia

How to File a Privacy Complaint in California

2 weeks 1 day ago

Privacy laws are only as strong as their enforcement. In California, the state’s privacy agency recently issued its largest-ever fine for violation of the state’s privacy law—and all because of a consumer complaint.

The state’s privacy law, the California Consumer Privacy Act (CCPA), requires many companies to respect California customers' and job applicants' rights to know, delete, and correct information that businesses collect about them, and to opt out of some types of sharing and use. It also requires companies to give notice of these rights, along with other information, to customers, job applicants, and others. (Bonus tip: Have a complaint about something else, such as a data breach? Go to the CA Attorney General.)

If you’re a Californian and think a business isn’t obeying the law, then the best thing to do is tell someone who can do something about it. How? It’s easy. In fewer than a dozen questions, you can share enough information to get the agency started.

Start With the Basics

First, head to the California Privacy Protection Agency’s website at cppa.ca.gov. On the front page, you’ll see an option to “File a Complaint.” Click on that option.

That button takes you to the online complaint form. You can also print out the agency’s paper complaint form here.

The complaint form starts, fittingly, by explaining the agency’s own privacy practices. Then it gets down to business by asking for information about your situation.

The first question offers a list of rights people have under the CCPA, such as a right to delete or a right to correct sensitive personal information. So, for example, if you’ve asked ABC Company to delete your information, but they have refused, you’d select “Right to Delete.” This helps the agency categorize your complaint and tie it directly to the requirements in the law.  The form then asks for the names of businesses, contractors, or people you want to report.

It also asks whether you’re a California resident. If you’re unsure, because you split residency or for other reasons, there is an “Unsure” option.

Adding the Details

From there, the form asks for more detailed information about what’s happened. There is a character limit on this question, so you’ll have to choose your words carefully. If you can, check out the agency’s FAQ on how to write a successful complaint before you submit the form. This will help you be specific and tell the agency what they need to hear to act on your complaint.

In the next question, include information about any proof you have supporting your complaint. So, for example, you could tell the agency you have your email asking ABC Company to delete your information, and also a screenshot of proof that they haven’t erased it. Or, say “I spoke to a person on the phone on this date.” This should just be a list of information you have, rather than a place to paste in emails or attach images.

The form will also ask if you’ve directly contacted the business about your complaint. You can just answer yes or no to this question. If it’s an issue such as a company not posting a privacy notice, or something similar, it may not have made sense to contact them directly. But if you made a deletion request, you probably have contacted them about it.

Anonymous or Not?

Finally, the complaint form will ask you to make either an “unsworn complaint” or a “sworn complaint.” This choice affects how you’ll be involved in the process going forward. You can file an anonymous unsworn complaint. But that will mean the agency can’t contact you about the issue in the future, since they don’t have any of your information.

For a sworn complaint, you have to provide some contact information and confirm that what you’re saying is true and that you’d swear to it in court.

Just because you submit contact information, that doesn’t mean the agency will contact you. Investigations are usually confidential, until there’s something like a settlement to announce. But we’ve seen that consumer complaints can be the spark for an investigation. It’s important for all of us to speak up, because it really does make a difference.

Hayley Tsukayama

California Targets Tractor Supply's Tricky Tracking

2 weeks 1 day ago

The California Privacy Protection Agency (CPPA) issued a record fine earlier this month to Tractor Supply, the country’s self-proclaimed largest “rural lifestyle” retailer, for apparently ducking its responsibilities under the California Consumer Privacy Act. Under that law, companies are required to respect California customers’ and job applicants’ rights to know, delete, and correct information that businesses collect about them, and to opt out of some types of sharing and use. The law also requires companies to give notice of these rights, along with other information, to customers, job applicants, and others. The CPPA said that Tractor Supply failed to meet several of these requirements. This is the first time the agency has enforced this data privacy law to protect job applicants. Perhaps best of all, the company's practices came to light all thanks to a consumer complaint filed with the agency.

Your complaints matter—so keep speaking up. 

Tractor Supply, which has 2,500 stores in 49 states, will pay for its actions to the tune of $1,350,000—the largest fine the agency has issued to date. Specifically, the agency said, Tractor Supply violated the law by:

  • Failing to maintain a privacy policy that notified consumers of their rights;
  • Failing to notify California job applicants of their privacy rights and how to exercise them;
  • Failing to provide consumers with an effective mechanism to opt out of the selling and sharing of their personal information, including through opt-out preference signals such as Global Privacy Control (see the sketch after this list); and
  • Disclosing personal information to other companies without entering into contracts that contain privacy protections.
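For context on that point about preference signals: Global Privacy Control is a simple HTTP mechanism in which a participating browser or extension sends a "Sec-GPC: 1" header with every request, and California regulators treat that signal as a valid opt-out of the sale or sharing of personal information. Below is a minimal sketch, in Python with Flask, of how a site could detect the signal; the route and response text are illustrative only, not a compliance recipe.

```python
from flask import Flask, request

app = Flask(__name__)

def gpc_opt_out(headers) -> bool:
    """True when the request carries a Global Privacy Control signal."""
    # GPC-enabled browsers and extensions send "Sec-GPC: 1" on every request.
    return headers.get("Sec-GPC") == "1"

@app.route("/")
def index():
    if gpc_opt_out(request.headers):
        # Under the CCPA, this signal must be honored as an opt-out of the
        # sale or sharing of personal information, so a compliant site
        # would suppress third-party ad trackers for this visitor.
        return "GPC detected: your data will not be sold or shared."
    return "No GPC signal present."

if __name__ == "__main__":
    app.run()
```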

In addition to the fine, the company also must take an inventory of its digital properties and tracking technologies and will have to certify its compliance with the California privacy law for the next four years.

It may surprise people that the agency’s most aggressive fine to date wasn’t levied on a large technology company, data broker, or advertising company. But this case merely highlights what anyone who uses the internet knows: practically every company is tracking your online behavior. 

The agency may be trying to make exactly this point by zeroing in on Tractor Supply. In its press release on the fine, the agency's top enforcer was clear that they'll be casting a wide net. 

 “We will continue to look broadly across industries to identify violations of California’s privacy law,” said Michael Macko, the Agency’s head of enforcement. “We made it an enforcement priority to investigate whether businesses are properly implementing privacy rights, and this action underscores our ongoing commitment to doing that for consumers and job applicants alike.”

It is encouraging to see the agency stand up for Californians’ rights. For years, we have said privacy laws are only as strong as their enforcement. Ideally we'd like to see privacy laws—including California’s—include a private right of action to let anyone sue for privacy violations, in addition to enforcement actions like this one from regulators. Since individuals can't stand up for the majority of their own privacy rights in California, however, it's even more important that regulators such as the CPPA are active, strategic, and bold. 

This case also highlights why it's important for people like you to submit complaints to regulators. As the agency itself said, “The CPPA opened an investigation into Tractor Supply’s privacy practices after receiving a complaint from a consumer in Placerville, California.” Your complaints matter—so keep speaking up.

Hayley Tsukayama

Flock Safety and Texas Sheriff Claimed License Plate Search Was for a Missing Person. It Was an Abortion Investigation.

2 weeks 2 days ago

New documents and court records obtained by EFF show that Texas deputies queried Flock Safety's surveillance data in an abortion investigation, contradicting the narrative promoted by the company and the Johnson County Sheriff that the woman was “being searched for as a missing person,” and that “it was about her safety.” 

The new information shows that deputies had initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman’s self-managed abortion, and consulted prosecutors about possibly charging her. 

Johnson County Sheriff Adam King repeatedly denied the automated license plate reader (ALPR) search was related to enforcing Texas's abortion ban, and Flock Safety called media accounts "false," "misleading" and "clickbait." However, according to a sworn affidavit by the lead detective, the case was in fact a death investigation in response to a report of an abortion, and deputies collected documentation of the abortion from the "reporting person," her alleged romantic partner. The death investigation remained open for weeks, with detectives interviewing the woman and reviewing her text messages about the abortion. 

The documents show that the Johnson County District Attorney's Office informed deputies that "the State could not statutorily charge [her] for taking the pill to cause the abortion or miscarriage of the non-viable fetus."

An excerpt from the JCSO detective's sworn affidavit.

The records include previously unreported details about the case that shocked public officials and reproductive justice advocates across the country when it was first reported by 404 Media in May. The case serves as a clear warning sign that when data from ALPRs is shared across state lines, it can put people at risk, including abortion seekers. And, in this case, the use may have run afoul of laws in Washington and Illinois.

A False Narrative Emerges

Last May, 404 Media obtained data revealing the Johnson County Sheriff’s Office conducted a nationwide search of more than 83,000 Flock ALPR cameras, giving the reason in the search log: “had an abortion, search for female.” Both the Sheriff's Office and Flock Safety have attempted to downplay the search as akin to a search for a missing person, claiming deputies were only looking for the woman to “check on her welfare” and that officers found a large amount of blood at the scene – a claim now contradicted by the responding investigator’s affidavit. Flock Safety went so far as to assert that journalists and advocates covering the story intentionally misrepresented the facts, describing it as "misreporting" and "clickbait-driven." 

As Flock wrote of EFF's previous commentary on this case (bold in original statement): 

Earlier this month, there was purposefully misleading reporting that a Texas police officer with the Johnson County Sheriff’s Office used LPR “to target people seeking reproductive healthcare.” This organization is actively perpetuating narratives that have been proven false, even after the record has been corrected.

According to the Sheriff in Johnson County himself, this claim is unequivocally false.

… No charges were ever filed against the woman and she was never under criminal investigation by Johnson County. She was being searched for as a missing person, not as a suspect of a crime.

That sheriff has since been arrested and indicted on felony counts in an unrelated sexual harassment and whistleblower retaliation case. He has also been charged with aggravated perjury for allegedly lying to a grand jury. EFF filed public records requests with Johnson County to obtain a more definitive account of events.

The newly released incident report and affidavit unequivocally describe the case as a "death investigation" of a "non-viable fetus." These documents also undermine the claim that the ALPR search was in response to a medical emergency, since, in fact, the abortion had occurred more than two weeks before deputies were called to investigate. 

In recent years, anti-abortion advocates and prosecutors have increasingly attempted to use “fetal homicide” and “wrongful death” statutes – originally intended to protect pregnant people from violence – to criminalize abortion and pregnancy loss. These laws, which exist in dozens of states, establish legal personhood of fetuses and can be weaponized against people who end their own pregnancies or experience a miscarriage. 

In fact, a new report from Pregnancy Justice found that in just the first two years since the Supreme Court’s decision in Dobbs, prosecutors initiated at least 412 cases charging pregnant people with crimes related to pregnancy, pregnancy loss, or birth–most under child neglect, endangerment, or abuse laws that were never intended to target pregnant people. Nine cases included allegations around individuals’ abortions, such as possession of abortion medication or attempts to obtain an abortion–instances just like this one. The report also highlights how, in many instances, prosecutors use tangentially related criminal charges to punish people for abortion, even when abortion itself is not illegal.

By framing their investigation of a self-administered abortion as a “death investigation” of a “non-viable fetus,” Texas law enforcement was signaling their intent to treat the woman’s self-managed abortion as a potential homicide, even though Texas law does not allow criminal charges to be brought against an individual for self-managing their own abortion. 

The Investigator's Sworn Account

Over two days in April, the woman went through the process of taking medication to induce an abortion. Two weeks later, her partner–who would later be charged with domestic violence against her–reported her to the sheriff's office. 

The documents confirm that the woman was not present at the home when the deputies “responded to the death (Non-viable fetus).” As part of the investigation, officers collected evidence that the man had assembled of the self-managed abortion, including photographs, the FedEx envelope the medication arrived in, and the instructions for self-administering the medication. 

Another Johnson County official ran two searches through the ALPR database with the note "had an abortion, search for female," according to Flock Safety search logs obtained by EFF. The first search, which has not been previously reported, probed 1,295 Flock Safety networks–composed of 17,684 different cameras–going back one week. The second search, which was originally exposed by 404 Media, was expanded to a full month of data across 6,809 networks, including 83,345 cameras. Both searches listed the same case number that appears on the death investigation/incident report obtained by EFF. 

After collecting the evidence from the woman’s partner, the investigators say they consulted the district attorney’s office, only to be told they could not press charges against the woman. 

An excerpt from the JCSO detective's sworn affidavit.

Nevertheless, when the subject showed up at the Sheriff’s office a week later, officers were under the impression that she came “to tell her side of the story about the non-viable fetus.” They interviewed her, inspected text messages about the abortion on her phone, and watched her write a timeline of events. 

Only after all that did they learn that she actually wanted to report a violent assault by her partner–the same individual who had called the police to report her abortion. She alleged that less than an hour after the abortion, he choked her, put a gun to her head, and made her beg for her life. The man was ultimately charged in connection with the assault, and the case is ongoing. 

This documented account runs completely counter to what law enforcement and Flock have said publicly about the case. 

Johnson County Sheriff Adam King told 404 Media: "Her family was worried that she was going to bleed to death, and we were trying to find her to get her to a hospital.” He later told the Dallas Morning News: “We were just trying to check on her welfare and get her to the doctor if needed, or to the hospital."

The account by the detective on the scene makes no mention of concerned family members or a medical investigator. To the contrary, the affidavit says that they questioned the man as to why he "waited so long to report the incident," and he responded that he needed to "process the event and call his family attorney." The ALPR search was recorded 2.5 hours after the initial call came in, as documented in the investigation report.

The Desk Sergeant's Report—One Month Later

EFF obtained a separate "case supplemental report" written by the sergeant who says he ran the May 9 ALPR searches. 

The sergeant was not present at the scene, and his account was written belatedly on June 5, almost a month after the incident and nearly a week after 404 Media had already published the sheriff’s alternative account of the Flock Safety search, kicking off a national controversy. The sheriff's office provided this sergeant's report to the Dallas Morning News.

In the report, the sergeant claims that the officers on the ground asked him to start "looking up" the woman due to there being "a large amount of blood" found at the residence—an unsubstantiated claim that is in conflict with the lead investigator’s affidavit. The sergeant repeatedly expresses that the situation was "not making sense." He claims he was worried that the partner had hurt the woman and her children, so "to check their welfare," he used TransUnion's TLO commercial investigative database system to look up her address. Once he identified her vehicle, he ran the plate through the Flock database, returning hits in Dallas.

Two abortion-related searches in the JCSO's Flock Safety ALPR audit log

The sergeant's report, filed after the case attracted media attention, notably omits any mention of the abortion at the center of the investigation, although it does note that the caller claimed to have found a fetus. The report does not explain, or even address, why the sergeant used the phrase "had an abortion, search for female” as the official reason for the ALPR searches in the audit log. 

It's also unclear why the sergeant submitted the supplemental report at all, weeks after the incident. By that time, the lead investigator had already filed a sworn affidavit that contradicted the sergeant's account. For example, the investigator, who was on the scene, does not describe finding any blood or taking blood samples into evidence, only photographs of what the partner believed to be the fetus. 

One area where they concur: both reports are clearly marked as a "death investigation." 

Correcting the Record

Since 404 Media first reported on this case, King has perpetuated the false narrative, telling reporters that the woman was never under investigation, that officers had not considered charges against her, and that "it was all about her safety."

But here are the facts: 

  • The reports that have been released so far describe this as a death investigation.
  • The lead detective described himself as "working a death investigation… of a non-viable fetus" at the time he interviewed the woman (a week after the ALPR searches).
  • The detective wrote that they consulted the district attorney's office about whether they could charge her for "taking the pill to cause the abortion or miscarriage of the non-viable fetus." They were told they could not.
  • Investigators collected a lot of data, including photos and documentation of the abortion, and ran her through multiple databases. They even reviewed her text messages about the abortion. 
  • The death investigation was open for more than a month.

The death investigation was only marked closed in mid-June, weeks after 404 Media's article and a mere days before the Dallas Morning News published its report, in which the sheriff inaccurately claimed the woman "was not under investigation at any point."

Flock has promoted this unsupported narrative on its blog and in multimedia appearances. We did not reach out to Flock for comment on this article, as their communications director previously told us the company will not answer our inquiries until we "correct the record and admit to your audience that you purposefully spread misinformation which you know to be untrue" about this case. 

Consider the record corrected: It turns out the truth is even more damning than initially reported.

The Aftermath

In the aftermath of the original reporting, government officials began to take action. The networks searched by Johnson County included cameras in Illinois and Washington state, both states where abortion access is protected by law. Since then: 

  • The Illinois Secretary of State has announced his intent to “crack down on unlawful use of license plate reader data,” and urged the state’s Attorney General to investigate the matter. 
  • In California, which also has prohibitions on sharing ALPR out of state and for abortion-ban enforcement, the legislature cited the case in support of pending legislation to restrict ALPR use.
  • Ranking Members of the House Oversight Committee and one of its subcommittees launched a formal investigation into Flock’s role in “enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans.” 
  • Senator Ron Wyden secured a commitment from Flock to protect Oregonians' data from out-of-state immigration and abortion-related queries.

In response to mounting pressure, Flock announced a series of new features supposedly designed to prevent future abuses. These include blocking “impermissible” searches, requiring that all searches include a “reason,” and implementing AI-driven audit alerts to flag suspicious activity. But as we've detailed elsewhere, these measures are cosmetic at best—easily circumvented by officers using vague search terms or reusing legitimate case numbers. The fundamental architecture that enabled the abuse remains unchanged. 

Meanwhile, as the news continued to harm the company's sales, Flock CEO Garrett Langley embarked on a press tour to smear reporters and others who had raised alarms about the usage. In an interview with Forbes, he even doubled down and extolled the use of the ALPR in this case. 

So when I look at this, I go “this is everything’s working as it should be.” A family was concerned for a family member. They used Flock to help find her, when she could have been unwell. She was physically okay, which is great. But due to the political climate, this was really good clickbait.

Nothing about this is working as it should, but it is working as Flock designed. 

The Danger of Unchecked Surveillance

Flock Safety ALPR cameras

This case reveals the fundamental danger of allowing companies like Flock Safety to build massive, interconnected surveillance networks that can be searched across state lines with minimal oversight. When a single search query can access more than 83,000 cameras spanning almost the entire country, the potential for abuse is staggering, particularly when weaponized against people seeking reproductive healthcare. 

The searches in this case may have violated laws in states like Washington and Illinois, where restrictions exist specifically to prevent this kind of surveillance overreach. But those protections mean nothing when a Texas deputy can access cameras in those states with a few keystrokes, without external review that the search is legal and legitimate under local law. In this case, external agencies should have seen the word "abortion" and questioned the search, but the next time an officer is investigating such a case, they may use a more vague or misleading term to justify the search. In fact, it's possible it has already happened. 
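The kind of external review we mean could start with something very simple: automatically screening the stated reason of every inbound out-of-state query before any hits are returned. Here is a rough sketch in Python, assuming a hypothetical CSV audit log with "agency," "case_number," and "reason" columns; Flock's real log schema may differ.

```python
import csv

# Terms that should trigger human review under the receiving state's law
# before a cross-jurisdiction ALPR search is honored (illustrative only).
FLAGGED_TERMS = ["abortion", "miscarriage", "immigration", "protest"]

def flag_suspect_searches(audit_log_path: str) -> list[dict]:
    """Return audit-log rows whose stated reason suggests a search that
    may be unlawful in the receiving state."""
    flagged = []
    with open(audit_log_path, newline="") as f:
        for row in csv.DictReader(f):
            reason = row.get("reason", "").lower()
            if any(term in reason for term in FLAGGED_TERMS):
                flagged.append(row)
    return flagged

# Hypothetical usage: review flagged queries before returning any hits.
for row in flag_suspect_searches("alpr_audit_log.csv"):
    print(f"REVIEW: {row['agency']} case {row['case_number']}: {row['reason']}")
```

Of course, as noted above, an officer can defeat a keyword filter with a vague or misleading reason, which is why screening like this can supplement, but never substitute for, hard limits on cross-state sharing.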

ALPRs were marketed to the public as tools to find stolen cars and locate missing persons. Instead, they've become a dragnet that allows law enforcement to track anyone, anywhere, for any reason—including investigating people's healthcare decisions. This case makes clear that neither the companies profiting from this technology nor the agencies deploying it can be trusted to tell the full story about how it's being used.

States must ban law enforcement from using ALPRs to investigate healthcare decisions and prohibit sharing data across state lines. Local governments may try remedies like reducing data retention periods to minutes instead of weeks or months—but, really, ending their ALPR programs altogether is the strongest way to protect their most vulnerable constituents. Without these safeguards, every license plate scan becomes a potential weapon against a person seeking healthcare.

Dave Maass

What Europe’s New Gig Work Law Means for Unions and Technology

2 weeks 5 days ago

At EFF, we believe that tech rights are workers’ rights. Since the pandemic, workers of all kinds have been subjected to increasingly invasive forms of bossware. These are the “algorithmic management” tools that surveil workers on and off the job, often running on devices that (nominally) belong to workers, hijacking our phones and laptops. On the job, digital technology can become both a system of ubiquitous surveillance and a means of total control.

Enter the EU’s Platform Work Directive (PWD). The PWD was finalized in 2024, and every EU member state will have to implement (“transpose”) it by 2026. The PWD contains far-reaching measures to protect workers from abuse, wage theft, and other unfair working conditions.

But the PWD isn’t self-enforcing! Over the decades that EFF has fought for user rights, we’ve proved that having a legal right on paper isn’t the same as having that right in the real world. And workers are rarely positioned to take on their bosses in court or at a regulatory body. To do that, they need advocates.

That’s where unions come in. Unions are well-positioned to defend their members – and all workers (EFF employees are proudly organized under the International Federation of Professional and Technical Engineers).

The European Trade Union Confederation has just published “Negotiating the Algorithm,” a visionary – but detailed and down-to-earth – manual for unions seeking to leverage the PWD to protect and advance workers’ interests in Europe.

The report notes the alarming growth of algorithmic management, with 79% of European firms employing some form of bossware. Report author Ben Wray enumerates many of the harms of algorithmic management, such as “algorithmic wage discrimination,” where each worker is offered a different payscale based on surveillance data that is used to infer how economically desperate they are.

Algorithmic management tools can also be used for wage theft, for example, by systematically undercounting the distances traveled by delivery drivers or riders. These tools can also subject workers to danger by penalizing workers who deviate from prescribed tasks (for example, when riders are downranked for taking an alternate route to avoid a traffic accident).

Gig workers live under the constant threat of being “deactivated” (kicked off the app) and feel pressure to do unpaid work for clients who can threaten their livelihoods with one-star reviews. Workers also face automated deactivation: a whole host of “anti-fraud” tripwires can see workers deactivated without appeal. These risks do not befall all workers equally: Black and brown workers face a disproportionate risk of deactivation when they fail facial recognition checks meant to prevent workers from sharing an account (facial recognition systems make more errors when dealing with darker skin tones).

Algorithmic management is typically accompanied by a raft of cost-cutting measures, and workers under algorithmic management often find that their employer’s human resources department has been replaced with chatbots, web-forms, and seemingly unattended email boxes. When algorithmic management goes wrong, workers struggle to reach a human being who can hear their appeal.

For these reasons and more, the ETUC believes that unions need to invest in technical capacity to protect workers’ interests in the age of algorithmic management.

The report sets out many technological activities that unions can get involved with. At the most basic level, unions can invest in developing analytical capabilities, so that when they request logs from algorithmic management systems as part of a labor dispute, they can independently analyze those files.
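As one concrete example of that analytical capability: given an exported trip log with GPS traces, a union analyst could recompute each trip's distance and flag systematic undercounting of the kind described above. The sketch below is Python; the file name, column names, and trace format are hypothetical stand-ins for whatever a platform's logs actually contain.

```python
import csv
import json
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def trip_distance_km(gps_points):
    """Sum segment distances along the recorded GPS trace."""
    return sum(
        haversine_km(a["lat"], a["lon"], b["lat"], b["lon"])
        for a, b in zip(gps_points, gps_points[1:])
    )

# Hypothetical export: one row per trip, with the platform's paid mileage
# and a JSON-encoded GPS trace.
with open("dispatch_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        actual = trip_distance_km(json.loads(row["gps_trace"]))
        paid = float(row["paid_km"])
        if actual > paid * 1.05:  # flag trips undercounted by more than 5%
            print(f"Trip {row['trip_id']}: paid {paid:.1f} km, traveled {actual:.1f} km")
```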

But that’s just table stakes. Unions should also consider investing in “counter apps” that help workers. Some act as an external check on employers’ automation, like the UberCheats app, which double-checked the mileage that Uber drivers were paid for. There are apps that enable gig workers to collectively refuse lowball offers, raising the prevailing wage for all the workers in a region, such as the Brazilian StopClub app. Indonesian gig riders have a wide range of “tuyul” apps that let them modify the functionality of their dispatch apps. We love this kind of “adversarial interoperability.” Any time the users of technology get to decide how it works, we celebrate. And in the US, this sort of tech-enabled collective action by workers is likely to be shielded from antitrust liability even if the workers involved are classified as independent contractors.

Developing in-house tech teams also gives unions the know-how to develop the tools for organizers and workers to coordinate their efforts to protect workers. The report acknowledges that this is a lot of tech work to ask individual unions to fund, and it moots the possibility of unions forming cooperative ventures to do this work for the unions in the co-op. At EFF, we regularly hear from skilled people who want to become public interest technologists, and we bet there’d be plenty of people who’d jump at the chance to do this work.

The new Platform Work Directive gives workers and their representatives the right to challenge automated decision-making, to peer inside the algorithms used to dispatch and pay workers, to speak to a responsible human about disputes, and to have their privacy and other fundamental rights protected on the job. It represents a big step forward for workers’ rights in the digital age.

But as the European Trade Union Confederation’s report reminds us, these rights are only as good as workers’ ability to claim them. After 35 years of standing up for people’s digital rights, we couldn’t agree more.

Cory Doctorow

Tile’s Lack of Encryption Is a Danger for Users Everywhere

2 weeks 6 days ago

In research shared with Wired this week, security researchers detailed a series of vulnerabilities and design flaws with Life360’s Tile Bluetooth trackers that make it easy for stalkers and the company itself to track the location of Tile devices.

Tile trackers are small Bluetooth trackers, similar to Apple’s AirTags, but they work on their own network, not Apple’s. We’ve been raising concerns about these types of trackers since they were first introduced and provide guidance for finding them if you think someone is using them to track you without your knowledge.

EFF has worked on improving the Detecting Unwanted Location Trackers standard that Apple, Google, and Samsung use, and these companies have at least made incremental improvements. But Tile has done little to mitigate the concerns we’ve raised around stalkers using their devices to track people.

Among the core requirements of that standard are that Bluetooth trackers rotate their MAC address, making them harder for a third party to track, and that they encrypt the information they send. According to the researchers, Tile does neither.

This has a direct impact on the privacy of legitimate users and opens the device up to potentially even more dangerous stalking. Tile devices do have a rotating ID, but since the MAC address is static and unencrypted, anyone in the vicinity could pick up and track that Bluetooth device.

Other Bluetooth trackers don’t broadcast their MAC address, and instead use only a rotating ID, which makes it much harder for someone to record and track the movement of that tag. Apple, Google, and Samsung also all use end-to-end encryption when data about the location is sent to the companies’ servers, meaning the companies themselves cannot access that information.
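A toy example makes the difference concrete. If a tag broadcasts one fixed identifier, any passive observer who logs nearby Bluetooth traffic can link its sightings into a movement profile; if the identifier rotates, the same sightings fall apart into unlinkable fragments. The Python below is a deliberately simplified simulation, not real BLE code:

```python
from collections import defaultdict

def build_profiles(sightings):
    """Group logged sightings by broadcast identifier."""
    profiles = defaultdict(list)
    for ident, place, ts in sightings:
        profiles[ident].append((place, ts))
    return dict(profiles)

# With a static MAC, one key links every sighting into a daily route.
static_mac = [
    ("AA:BB:CC:DD:EE:FF", "coffee shop", "09:00"),
    ("AA:BB:CC:DD:EE:FF", "office", "12:00"),
    ("AA:BB:CC:DD:EE:FF", "home", "19:00"),
]
print(build_profiles(static_mac))   # one identifier -> full movement profile

# With rotating identifiers, the same movements produce unlinkable records.
rotating = [
    ("id-7f3a", "coffee shop", "09:00"),
    ("id-c910", "office", "12:00"),
    ("id-02be", "home", "19:00"),
]
print(build_profiles(rotating))     # three identifiers -> no route to reconstruct
```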

In its privacy policy, Life360 states that, “You are the only one with the ability to see your Tile location and your device location.” But if the information from a tracker is sent to and stored by Tile in cleartext (i.e., unencrypted text) as the researchers believe, then the company itself can see the location of the tags and their owners, turning them from single-item trackers into surveillance tools.
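For contrast, here is a minimal sketch of what client-side encryption of a location report can look like, using the Fernet primitive from the Python "cryptography" library. The point is architectural: if the decryption key lives only on the owner's devices, the server can store and relay reports without ever being able to read them. This is an illustration under those assumptions, not a description of any vendor's actual protocol.

```python
import json
from cryptography.fernet import Fernet

# In a real design this key would be generated on, and never leave,
# the owner's devices; the server would never see it.
owner_key = Fernet.generate_key()

def encrypt_location_report(lat: float, lon: float, key: bytes) -> bytes:
    """Encrypt a location fix so only the key holder can read it."""
    payload = json.dumps({"lat": lat, "lon": lon}).encode("utf-8")
    return Fernet(key).encrypt(payload)

ciphertext = encrypt_location_report(37.7749, -122.4194, owner_key)
# The server stores and relays ciphertext it cannot decrypt...
print(ciphertext[:40])
# ...while the owner's own app can recover the location fix.
print(Fernet(owner_key).decrypt(ciphertext))
```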

There are also issues with the “anti-theft mode” that Tile offers. The anti-theft setting hides the tracker from Tile’s “Scan and Secure” detection feature, so it can’t be easily found using the app. Ostensibly this is a feature meant to make it harder for a thief to just use the app to locate a tracker. In exchange for enabling the anti-theft feature, a user has to submit a photo ID and agree to pay a $1 million fine if they’re convicted of misusing the tracker.

But that’s only helpful if the stalker gets caught, which is a lot less likely when the person being tracked can’t use the anti-stalking protection feature in the app to find the tracker following them. As we’ve said before, it is impossible to make an anti-theft device that secretly notifies only the owner without also making a perfect tool for stalking.

Life360, the company that owns Tile, told Wired it “made a number of improvements” after the researchers reported the flaws, but did not detail what those improvements are.

Many of these issues would be mitigated if Tile did what its competition is already doing: encrypting the broadcasts from its Bluetooth trackers and randomizing MAC addresses. Every company in the location tracker business has a responsibility to safeguard people, not just their lost keys.

Thorin Klosowski