Should I Use My State’s Digital Driver’s License?


A mobile driver’s license (often called an mDL) is a version of your ID that you keep on your phone instead of in your pocket. In theory, it would work anywhere your regular ID works—at a TSA checkpoint, a liquor store, a pharmacy counter, or the door of a bar. This sounds simple enough, and might even be appealing—especially if you’ve ever forgotten or lost your wallet. But there are a few questions you should ask yourself before tossing your wallet into the sea and wandering the earth with just your phone in hand.

In the United States, some proponents of digital IDs promise a future where you can present your phone to a clerk or bouncer and reveal only the information they need—your age—without revealing anything else. They imagine everyone whipping through TSA checkpoints with ease and enjoying simplified applications for government benefits. They also see it as a way to verify identity on the internet, a scheme that would likely lead to censorship of everyone.

There are real privacy and security trade-offs with digital IDs, and it’s not clear if the benefits are big enough—or exist at all—to justify them.

But if you are curious about this technology, there are still a few things you should know and some questions to consider.

Questions to Ask Yourself

Can I even use a digital ID anywhere?

The idea of being able to verify your age by just tapping your phone against an electronic reader—like you may already do to pay for items—may sound appealing. It might make checking out a little faster. Maybe you won’t have to worry about the bouncer at your favorite bar creepily wishing you “happy birthday,” or noting that they live in the same building as you.

Most of these use cases aren’t available yet in the United States. While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used at TSA checkpoints.

For example, in California, only a small handful of convenience stores in Sacramento and Los Angeles currently accept digital IDs for purchasing age-restricted items like alcohol and tobacco. TSA lists airports that support mobile driver’s licenses, but it only works for TSA PreCheck and only for licenses issued in eleven states.

Also, “selective disclosure,” like revealing just your age and nothing else, isn’t always fully baked. When we looked at California’s mobile ID app, this feature wasn’t available in the mobile ID itself; rather, it was part of the TruAge add-on. Even if the promise of this technology is appealing to you, you might not actually be able to use it.
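For the curious, here is roughly what selective disclosure looks like under the hood. The ISO/IEC 18013-5 standard that most state mDL deployments build on lets a reader request individual data elements, such as a boolean age_over_21, rather than the whole license. The Python sketch below is illustrative only: the real exchange is CBOR-encoded, signed by the issuer, and carried over NFC or Bluetooth, and these dictionaries are simplified stand-ins for those structures, not a working client.

```python
# Sketch of an ISO 18013-5-style selective-disclosure exchange.
# The namespace and element names follow the published mDL data model;
# everything else is a simplified illustration.

# The verifier (e.g., a point-of-sale reader) asks only for what it needs.
# The boolean per element is "intent to retain": False signals the reader
# promises not to store the value after checking it.
age_check_request = {
    "docType": "org.iso.18013.5.1.mDL",
    "nameSpaces": {
        "org.iso.18013.5.1": {
            "age_over_21": False,  # requested, not to be retained
        }
    },
}

# The wallet shows the holder what was requested and, on approval, returns
# only that claim, carrying issuer signatures so the reader can verify it
# offline, without phoning home to the DMV.
age_check_response = {
    "org.iso.18013.5.1": {
        "age_over_21": True,  # no name, no address, no birthdate
    }
}
```

When this works as designed, the clerk learns exactly one bit of information. The catch, as noted above, is that many deployments don't yet expose this capability to everyday businesses.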

Does my state have a law controlling how police officers handle digital IDs?

One of our biggest concerns with digital IDs is that people will unlock their phones and hand them over to police officers in order to show an ID. Ordinarily, police need a warrant to search the content of our phones, because they contain what the Supreme Court has properly called “the privacies of life.”

There are some potential technological protections. You can technically have your digital ID read or scanned from the Wallet app on your phone without unlocking the device completely. Police could also carry a special reader, like the ones used at some retail stores.

But it’s all too easy to imagine a situation where police coerce or trick someone into unlocking their phone completely, or where a person does not even know that they just need to tap their phone instead of unlocking it. Even seasoned Wallet users screw up payment now and again, and doing so under pressure amplifies that risk. Handing your phone over to law enforcement, either to show a QR code or to hold it up to a reader, is also risky since a notification may pop up that the officer could interpret as probable cause for a search.

Currently, there are few guardrails for how law enforcement interacts with mobile IDs. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement, but as far as we know it’s the only state to do anything so far.

At the very minimum, law enforcement should be prohibited from leveraging an mDL check to conduct a phone search.

Is it clear what sorts of tracking the state would use this for?

Smartphones have already made it significantly easier for governments and corporations to track everything we do and everywhere we go. Digital IDs are poised to add to that data collection, by increasing the frequency that our phones leave digital breadcrumbs behind us. There are technological safeguards that could reduce these risks, but they’re currently not required by law, and no technology fix is perfect enough to guarantee privacy.

For example, if you use a digital ID to prove your age to buy a six-pack of beer, the card reader’s verifier might make a record of the holder’s age status. Even if personal information isn’t exchanged in the credential itself, you may have provided payment info associated with this transaction. This combination of personal information might then be sold to data brokers, seized by police or immigration officials, stolen by data thieves, or misused by employees.
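To make that risk concrete, here is a hypothetical sketch of the kind of record a verifier could keep after an "age only" transaction. Every field name here is invented for illustration; no real vendor's schema is shown. Notice that no identity attribute appears, yet the row is still linkable to a person through payment and location metadata.

```python
from datetime import datetime, timezone

# Hypothetical log entry a card reader's verifier might retain.
# Field names are invented for this example.
age_check_log = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "terminal_id": "store-0042-register-3",  # ties the check to a place
    "claim_checked": "age_over_21",
    "result": True,
    "payment_token": "tok_9f83c2",  # joins the check to your card swipe
}
```

Joined against payment-processor or loyalty-card records, a table of entries like this can reconstruct who bought age-restricted goods, where, and when—exactly the kind of dataset that brokers buy and subpoenas reach.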

This is just one more reason why we need a federal data privacy law: currently, there aren’t sufficient rules around how your data gets used.

Do I travel between states often?

Not every state offers or accepts digital IDs, so if you travel often, you’ll have to carry a paper ID. If you’re hoping to just leave the house, hop on a plane, and rent a car in another state without needing a wallet, that’s likely still years away.

How do I feel about what this might be used for online?

Mobile driver’s licenses are a clear fit for online age verification schemes. The privacy harms of these sorts of mandates vastly outweigh any potential benefit. Just downloading and using a mobile driver’s license certainly doesn’t mean you agree with that plan, but it’s still good to be mindful of what the future might entail.

Am I being asked to download a special app, or use my phone’s built-in Wallet?

Both Google and Apple allow a few states to use their Wallet apps directly, while other states use a separate app. For Google and Apple’s implementations, we tend to have better documentation and a clearer understanding of how data is processed. For state-built apps, we often know less.

In some cases, states will offer Apple and Google Wallet support, while also providing their own app. Sometimes, this leads to different experiences around where a digital ID is accepted. For example, in Colorado, the Apple and Google Wallet versions will get you through TSA. The Colorado ID app cannot be used at TSA, but can be used at some traffic stops, and to access some services. Conversely, California’s mobile ID comes in an app, but also supports Apple and Google Wallets. Both California’s app and the Apple and Google Wallets are accepted at TSA.

Apps can also come and go. For example, Florida removed its app from the Apple App Store and Google Play Store completely. All these implementations can make for a confusing experience, where you don’t know which app to use, or what features—if any—you might get.

The Right to Paper

For now, the success or failure of digital IDs will at least partially be based on whether people show interest in using them. States will likely continue to implement them, and while it might feel inevitable, it doesn’t have to be. There are countless reasons why a paper ID should continue to be accepted. Not everyone has the resources to own a smartphone, and not everyone who has a smartphone wants to put their ID on it. As states move forward with digital ID plans, privacy and security are paramount, and so is the right to a paper ID.

Note: The Real ID Modernization Act provides one protection for using an mDL that we initially missed in this blog post: if you present your phone to federal law enforcement, it cannot be construed as consent to seize or search the device.

Thorin Klosowski

Podcast Episode Rerelease: So You Think You’re A Critical Thinker


This episode was first released in March 2023.

With this year’s election just weeks away, concerns about disinformation and conspiracy theories are on the rise.

We covered this issue in a really enlightening talk in March 2023 with Alice Marwick, the director of research at Data & Society, and previously the cofounder and principal researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill.

We talked with Alice about why seemingly ludicrous conspiracy theories get so many followers, and when fact-checking does and doesn’t work. And we came away with some ideas for how to identify and leverage people’s commonalities to stem disinformation, while making sure that the most marginalized and vulnerable internet users are still empowered to speak out.

We thought this was a good time to re-publish that episode, in hopes that it might help you make sense of what you might see and hear in the next few months.

If you believe conversations like this are important, we hope you’ll consider voting for How to Fix the Internet in the “General - Technology” category of the Signal Awards’ 3rd Annual Listener's Choice competition. Deadline for voting is Thursday, Oct. 17.

Vote now!

This episode was first published on March 21, 2023.

The promise of the internet was that it would be a tool to melt barriers and aid truth-seekers everywhere. But it feels like polarization has worsened in recent years, and more internet users are being misled into embracing conspiracies and cults.

  

You can also find this episode on the Internet Archive and on YouTube.

From QAnon to anti-vax screeds to talk of an Illuminati bunker beneath Denver International Airport, Alice Marwick has heard it all. She has spent years researching some dark corners of the online experience: the spread of conspiracy theories and disinformation. She says many people see conspiracy theories as participatory ways to be active in political and social systems from which they feel left out, building upon beliefs they already harbor to weave intricate and entirely false narratives.  

Marwick speaks with EFF’s Cindy Cohn and Jason Kelley about finding ways to identify and leverage people’s commonalities to stem this flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out.  

In this episode you’ll learn about:  

  • Why seemingly ludicrous conspiracy theories get so many views and followers  
  • How disinformation is tied to personal identity and feelings of marginalization and disenfranchisement 
  • When fact-checking does and doesn’t work  
  • Thinking about online privacy as a political and structural issue rather than something that can be solved by individual action 

Alice Marwick is director of research at Data & Society; previously, she was an Associate Professor in the Department of Communication and cofounder and Principal Researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill. She researches the social, political, and cultural implications of popular social media technologies. In 2017, she co-authored Media Manipulation and Disinformation Online (Data & Society), a flagship report examining far-right online subcultures’ use of social media to spread disinformation, for which she was named one of Foreign Policy magazine’s 2017 Global Thinkers. She is the author of Status Update: Celebrity, Publicity and Branding in the Social Media Age (Yale 2013), an ethnographic study of the San Francisco tech scene which examines how people seek social status through online visibility, and co-editor of The Sage Handbook of Social Media (Sage 2017). Her forthcoming book, The Private is Political (Yale 2023), examines how the networked nature of online privacy disproportionately impacts marginalized individuals in terms of gender, race, and socio-economic status. She earned a political science and women's studies bachelor's degree from Wellesley College, a Master of Arts in communication from the University of Washington, and a PhD in media, culture and communication from New York University. 

Transcript

ALICE MARWICK
I show people these TikTok videos that are about these kind of outrageous conspiracy theories, like that the Large Hadron Collider at CERN is creating a multiverse. Or that there's, you know, this pyramid of tunnels under the Denver airport where they're trafficking children and people kinda laugh at them.

They're like, this is silly. And then I'm like, this has 3 million views. You know, this has more views than probably most of the major news stories that came out this week. It definitely has more views than any scientific paper or academic journal article I'll ever write, right? Like, this stuff has big reach, so it's important to understand it, even if it seems kind of frivolous or silly, or, you know, self-evident.

It's almost never self-evident. There's always some other reason behind it, because people don't do things arbitrarily. They do things that help them make sense of their lives, that give their lives meaning. These are practices that people engage in because it means something to them. And so I feel like my job as a researcher is to figure out, what does this mean? Why are people doing this?

CINDY COHN
That’s Alice Marwick. The research she’s talking about is something that worries us about the online experience – the spread of conspiracy theories and misinformation. The promise of the internet was that it would be a tool that would melt barriers and aid truth-seekers everywhere. But sometimes it feels like polarization has worsened, and Internet users are misled into conspiracies and cults. Alice is trying to figure out why, how – and more importantly, how to fix it.

I’m Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy.

This is our podcast series: How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to fix the internet. We're trying to make our digital lives better. EFF spends a lot of time warning about all the ways that things could go wrong and jumping into the fight when things do go wrong online, but what we'd like to do with this podcast is to give ourselves a vision of what the world looks like if we start to get it right.

JASON KELLEY
Our guest today is Alice Marwick. She’s a researcher at the Center for Information, Technology and Public Life at the University of North Carolina. She does qualitative research on a topic that affects everyone’s online lives but can be hard to grasp outside of anecdotal data – the spread of conspiracy theories and disinformation online.

This is a topic that many of us have a personal connection to – so we started off our conversation with Alice by asking what drew her into this area of research.

ALICE MARWICK
So like many other people I got interested in mis- and disinformation in the run-up to the 2016 election. I was really interested in how ideas that had formerly been like a little bit subcultural and niche in far-right circles were getting pushed into the mainstream and circulating really wildly and widely.

And in doing that research, it sort of helped me understand disinformation as a frame for understanding the way that information ties into marginalization, I think more broadly. Disinformation is often the mechanism by which the stories that the dominant culture tells about marginalized people get circulated.

JASON KELLEY
I think it's been a primary focus for a lot of people in a lot of ways over the last few years. I know I have spent a lot of time on alternative social media platforms over the last few years because I find the topics kind of interesting to figure out what's happening there. And also because I have a friend who has kind of entered that space and, uh, I like to learn, you know, where the information that he's sharing with me comes from, essentially, right. But one thing that I've been thinking about with him and with other folks is, is there something that happened to him that made him kind of easily radicalized, if you will? And I, I don't think that's a term that, that you recommend using, but I think a lot of people just assume that that's something that happens.

That there are people who, um, you know, grew up watching the X-files or something and ended up more able to fall into these misinformation and disinformation traps. And I'm wondering if that's, if that's actually true. It seems like from your research, it's not.

ALICE MARWICK
It's not, and that's because there's a lot of different things that bring people to disinformation, because disinformation is really deeply tied to identity in a lot of ways. There's lots of studies showing that more or less, every American believes in at least one conspiracy theory, but the conspiracy theory that you believe in is really based on who you are.

So in some cases it is about identity, but I think the biggest misconception about disinformation is that the people who believe it are just completely gullible and that they don't have any critical thinking skills and that they go on YouTube and they watch a video or they listen to a podcast and all of a sudden their entire mindset shifts.

CINDY COHN
So why is radicalization not the right term? How do you think about this term and why you've rejected it?

ALICE MARWICK
The whole idea of radicalization is tied up in this countering violent extremism movement that is multinational, that is tied to this huge surveillance apparatus, to militarization, to, in many ways, like a very Islamophobic idea of the world. People have been researching why individuals commit political violence for 50 years and they haven't found any individual characteristics that make someone more susceptible to doing something violent, like committing a mass shooting or participating in the January 6th insurrection, for example. What we see instead is that there's a lot of different puzzle pieces that can contribute to whether somebody takes on an ideology, and whether they commit acts of violence in service of that ideology.

And I think the thing that's frustrating to researchers is sometimes the same thing can have two completely different effects in people. So there's this great study of women in South America who were involved in guerrilla warfare, and some of those women, when they had kids, they were like, oh, I'm not gonna do this anymore.

It's too dangerous. You know, I wanna focus on my family. But then there was another set of women that when they had kids, they felt they had more to lose and they had to really contribute to this effort because it was really important to the freedom of them and their children.

So when you think about radicalization, there's this real desire to have this very simplistic pathway that everybody kind of just walks along and they end up a terrorist. But that's just not the way the world works. 

The second reason I don't like radicalization is because white supremacy is baked into the United States from its inception. And white supremacist ideas and racist ideas are pretty foundational. And they're in all kinds of day-to-day language and media and thinking. And so why would we think it's radical to be, for example, anti-black or anti-trans when anti-blackness and anti-transness have like these really long histories?

CINDY COHN
Yeah, I think that's right. And there is a way in which radicalization makes it sound as if, um, that's something other than our normal society. In many instances, that's not actually what's going on.

There's pieces of our society, the water we swim in every day, that are playing a big role in some of this stuff that ends up in a very violent place. And so by calling it radicalization, we're kind of creating an other that we're not a part of, and that I think will mean that we might miss some of the pieces of this.

ALICE MARWICK
Yeah, and I think that when we think about disinformation, the difference between a successful and an unsuccessful disinformation campaign is often whether or not the ideas exist in the culture already. One of the reasons QAnon, I think, has been so successful is that it picks up a lot of other pre-circulating conspiracy theories.

It mixes them with anti-Semitism, it mixes them with homophobia and transphobia, and it kind of creates this hideous concoction, this like potion that people drink that reinforces a lot of their preexisting beliefs. It's not something that comes out of nowhere. It's something that's been successful precisely because it reinforces ideas that people already had.

CINDY COHN
I think the other thing that I saw in your research that might have been surprising, or at least was a little surprising to me, is how participatory QAnon is.

You took a look at some of the QAnon conversations, and you could see people pulling in pieces of knowledge from other things, you know, flight patterns and unexplained deaths and other things. It's something that they're co-creating, um, which I found fascinating.

ALICE MARWICK
It's really similar to the dynamics of fandom in a lot of ways. You know, any of us who have ever participated in, like, a Usenet group or a subreddit about a particular TV show, know that people love putting theories together. They love working together to try to figure out what's going on. And obviously we see those same dynamics at play in a lot of different parts of internet culture.

So it's about taking the participatory dynamics of the internet and sort of mixing them with what we're calling conspiratorial literacy, which is sort of the ability to assemble these narratives from all these disparate places, to kind of pull together, you know, photos and Wikipedia entries and definitions and flight paths and, you know, news stories into these sort of narratives that are really hard to make coherent sometimes, ‘cause they get really complicated.

But it's also about a form of political participation. I think there's a lot of people in communities where disinformation is rampant, where they feel like talking to people about QAnon or anti-vaxxing or white supremacy is a way that they can have some kind of political efficacy. It's a way for them to participate, and sometimes I think people feel really disenfranchised in a lot of ways.

JASON KELLEY
I wonder because you mentioned internet culture, if some of this is actually new, right? I mean, we had satanic panics before and something I hear a lot of in various places is that things used to be so much simpler when we had four television channels and a few news anchors and all of them said the same thing, and you couldn't, supposedly, you couldn't find your way out into those other spaces. And I think you call this the myth of the epistemically consistent past. Um, and is that real? Was that a real time that actually existed? 

ALICE MARWICK
I mean, let's think about who that works for, right? If you're thinking about like 1970, let's say, and you're talking about a couple of major TV networks, no internet, you know, your main interpersonal communication is the telephone. Basically, what the mainstream media is putting forth is the narrative that people are getting.

And there's a very long history of critique of the mainstream media, of putting forth a narrative that's very state sponsored, that's very pro-capitalist, that writes out the histories of lots and lots of different types of people. And I think one of the best examples of this is thinking about the White Press and the Black Press.

And the Black Press existed because the White Press didn't cover stories that were of interest to the black community, or they strategically ignored those stories. Like the Tulsa Race massacre, for example, like that was completely erased from history because the white newspapers were not covering it.

So when we think about an epistemically consistent past, we're thinking about the people who that narrative worked for.

CINDY COHN
I really appreciate this point. To me, what was exciting about the internet and, you know, I'm a little older, I was alive during the seventies, um, and watched Walter Cronkite, and, you know, this idea that, you know, old white guys in New York get to decide what the rest of us see, which is, that's who ran the networks, right.

That, that, you know, and maybe we had a little PBS, so we got a little Sesame Street too.

But the promise of the Internet was that we could hear from more and more diverse voices, and reduce the power of those gatekeepers. What is scary is that some people are now pretty much saying that the answers to the problems of today’s Internet is to find four old white guys and let them decide what all the rest of us see again.    

ALICE MARWICK
I think it's really easy to blame the internet for the ills of society, and I, I guess I'm a digital critic, but I'm ultimately, I love the internet, like I love social media. I love the internet. I love online community. I love the possibilities that the internet has opened up for people. And when I look at the main amplifiers of disinformation, it's often politicians and political elites whose platforms are basically independent of the internet.

Like people are gonna cover, you know, leading politicians regardless of what media they're covering them with. And when you look at something like the lies around the Dominion voting machines, like, yes, those lies start in these really fringy internet communities, but they're picked up and amplified incredibly quickly by mainstream politicians.

And then they're covered by mainstream news. So who's at fault there? I think that blaming the internet really ignores the fact that there's a lot of other players here, including the government, you know, politicians, these big mainstream media sources. And it's really convenient to blame all social media or just the entire internet for some of these ills, but I don't think it's accurate.

CINDY COHN
Well, one of the things that I saw in your research and, and our friend, Yochai Benkler, has done in a lot of things is the role of amplifiers, right? That these, these places where people, you know, agree about things that aren't true and, and converse about things that aren't true. They predate the internet, maybe the internet gave a little juice to them, but what really gives juice to them is these amplifiers who, as I think you, you rightly point out, are some of the same people who were the mainstream media controllers in that hazy past of yore. Um, I think that if this stuff never makes it to more popular amplifiers, I don't think it becomes the kind of thing that we worry about nearly so much.

ALICE MARWICK
Yeah, I mean, when I was looking at white supremacist disinformation in 2017,  someone I spoke with pointed out that the mainstream media is the best recruitment tool for white supremacists because historically it's been really hard for white supremacists to recruit. And I'm not talking about like historically, like in the thirties and forties, I'm talking about like in the eighties and nineties when they had sort of lost a lot of their mainstream political power.

It was very difficult to find like-minded people, especially if people were living in places that were a little bit more progressive or were multiracial. Most people, reading a debunking story in the Times or the Post or whatever about white supremacist ideas, are going to disagree with those ideas.

But even if one in a thousand believes them and is like, oh wow, this is a person who's spreading white supremacist ideas, I can go to them and learn more about it. That is a far more powerful platform than anything that these fringe groups had in the past. And one of the things that we've noticed in our research is that often conspiracy theories go mainstream precisely because they're being debunked by the mainstream media.

CINDY COHN
Wow. So there's two kinds of amplifiers. There's the amplifiers who are trying to debunk things and accidentally perhaps amplify. But there are, there are people who are intentional amplifiers as well, and that both of them have the same effect, or at least both of them can spread the misinformation.

ALICE MARWICK
Yeah. I mean, of course, debunking has great intentions, right? We don't want horrific misinformation and disinformation to go and spread unchecked. But one of the things that we noticed when we were looking at news coverage of disinformation was that a lot of the times the debunking aspect was not as strong as we would've expected.

You know, you would expect a news story saying, this is not true, this is false, the presumptions are false. But instead, you'd often get these stories where they kind of repeated the narrative and then at the end there was, you know, this is incorrect. And the false narrative is often much more interesting and exciting than whatever the banal truth is.

So I think a lot of this has to do with the business model of journalism, right? There's a real need to comment on everything that comes across Twitter, just so that you can get some of the clicks for it. And that's been really detrimental, I think, to journalists who have the time and the space to really research things and craft their pieces.

You know, it's an underpaid occupation. They're under a huge amount of economic and time pressure to like get stories out. A lot of them are working for these kind of like clickbaity farms that just churn out news stories on any hot topic of the day. And I think that is just as damaging and dangerous as some of these social media platforms.

JASON KELLEY
So when it comes to debunking, there's a sort of parallel, which is fact checking. And, you know, I have tried to fact check people, myself, um, individually. It doesn't seem to work. Does it work when it's, uh, kind of built into the platform as we've seen in different, um, in different spaces like Facebook or Twitter with community notes they're testing out now?

Or does that also kind of amplify it in some way because it just serves to upset, let's say, the people who have already decided to latch onto the thing that is supposedly being fact checked.

ALICE MARWICK
I think fact checking does work in some instances. If it's about things that people don't already have, like a deep emotional attachment to. I think sometimes also if it's coming from someone they trust, you know, like a relative or a close friend, I think there are instances in which it doesn't get emotional and people are like, oh, I was wrong about that, that's great. And then they move on. 

When it's something like Facebook where, you know, there's literally like a little popup saying, you know, this is untrue. Oftentimes what that does is it just reinforces this narrative that the social platforms are covering things up and that they're biased against certain groups of people because they're like, oh, Facebook only allows for one point of view.

You know, they censor everybody who doesn't believe X, Y, or Z. And the thing is that I think both liberals and conservatives believe that, obviously the narrative that social platforms censor conservatives is much stronger. But if you look at the empirical evidence, conservative stories perform much better on social media, specifically Facebook and Twitter, than do liberal stories.

So it, it's kind of like, it makes nobody happy. I don't think we should be amplifying, especially extremist views or views that are really dangerous. And I think that what you wanna do is get rid of the lowest hanging fruit. Like you don't wanna convert new people to these ideas like you, there might be some people who are already so enmeshed in some of these communities that it's gonna be hard for them to find their way out. But let's try to minimize the number of people who are exposed to it.

JASON KELLEY
That's interesting. It sounds like there are some models of fact checking that can help, but it really more applies to the type of information that's being, uh, fact checked than, than the specific way that the platform kind of sets it up. Is that what I'm hearing? Is that right?

ALICE MARWICK
Yeah, I mean, the problem is with a lot of, a lot of people online, I bet if you ask 99 people if they consider themselves to be critical thinkers, 95 would say, yes, I'm a critical thinker. I'm a free thinker.

JASON KELLEY
A low estimate, I'm pretty sure.

ALICE MARWICK
A low estimate. So let's say you ask a hundred people and 99 say they're critical thinkers. Um, you know, I, I interview a lot of people who have sort of what we might call unusual beliefs, and they all claim that they do fact checking and that they, when they hear something, they want to see if it's true.

And so they go and read other perspectives on it. And obviously, you know, they're gonna tell the researcher what they think I wanna hear. They're not gonna be like, oh, I saw this thing on Facebook and then I, like, spread it to 2000 people. And then it, you know, it turned out it was false. Um, but especially in communities like QAnon or anti-vaxxers, they already think of themselves as, like, researchers.

A lot of people who are into conspiracy theories think of themselves as researchers. That's one of their identities. And they spend quite a bit of time going down rabbit holes on the internet, looking things up and reading about it. And it's almost like a funhouse mirror held up to academic research because it is about the pleasure of learning, I think, and the joy of sort of educating yourself and these sort of like autodidactic processes where people can kind of learn just for the fun of learning. Um, but then they're doing it in a way that's somewhat divorced from what I would call sort of empirical standards of data collection or, you know, data assessment.

CINDY COHN
So, let's flip it around for a second. What does it look like if we are doing this right? What are the things that we would see in our society and in our conversations that would indicate that we're, we're kind of on the right path, or that we're, we're addressing this?

ALICE MARWICK
Well, I mean, the problem is this is a big problem. So it requires a lot of solutions. A lot of different things need to be worked on. You know, the number one thing I think would be toning down, you know, violent political rhetoric in general. 

Now how you do that, I'm not sure. I think it comes from, you know, there's this kind of window of discourse that's open that I think needs to be shut, where maybe we need to get back to slightly more civil levels of discourse. That's a really hard problem to solve. In terms of the internet, I think right now there's been a lot of focus on the biggest social media sites, and I think that what's happening is you have a lot of smaller social sites, and it's much more difficult to play whack-a-mole with a hundred different platforms than it is with three.

CINDY COHN
Given that we think that a pluralistic society is a good thing and we shouldn't all be having exactly the same beliefs all the time. How do we nurture that diversity without, you know, without the kind of violent edges? Or is it inevitable? Is there a way that we can nurture a pluralistic society that doesn't get to this us versus them, what team are you on kind of approach that I think underlies some of the spilling into violence that we've seen?

ALICE MARWICK
This is gonna sound naive, but I do think that there's a lot more commonalities between people than there are differences. So I interviewed a woman who's a conservative evangelical anti-vaxxer last week, and, you know, she and I don't have a lot in common in any way, but we had, like, a very nice conversation, and one of the things that she told me is that she has this one particular interest that's brought her into conversation with a lot of really liberal people.

And so because she's interacted with a lot of them, she knows that they're not like demonic or evil. She knows they're just people and they have really different opinions on a lot of really serious issues, but they're still able to sort of chat about the things that they do care about.

And I think that if we can trace those lines of inclusion and connectivity between people, I think that's much, that's a much more positive, I think, area for growth than it is just constantly focusing on the differences. And that's easy for me to say as a white woman, right? Like it's much harder to deal with these differences if the difference in question is that the person thinks you're, you know, genetically inferior or that you shouldn't exist.

Those are things that are not easy. You can't just kumbaya your way out of those kinds of things. And in that case, I think we need to center the concerns of the most vulnerable and of the most marginalized, and make sure they're the ones whose voices are getting heard and their concerns are being amplified, which is not always the case, unfortunately.

JASON KELLEY
So let's say that we got to that point and um, you know, the internet space that you're on isn't as polarized, but it's pluralistic. Can you describe a little bit about what that feels like in your mind?

ALICE MARWICK
I think one thing to remember is that most people don't really care about politics. You know, a lot of us are kind of Twitter obsessed and we follow the news and we see our news alerts come up on our phone and we're like, Ooh, what just happened? Most people don't really care about that stuff. If you look at a site like Reddit, which gets a bad rap, but I think Reddit is just like a wonderful site for a lot of reasons.

It's mostly focused around interest-based communities, and the vast, vast majority of them are not about politics. They're about all kinds of other things. You know, very mundane stuff. Like you have a dog or a cat, or you like the White Lotus and you wanna talk about the finale. Or you, you know, you live in a community and you want to talk about the fact that they're building a new McDonald's on, like, Route Six or whatever.

Yes, in those spaces you'll see people get into spats and you'll see people get into arguments and in those cases, there's usually some community moderation, but generally I think a lot of those communities are really healthy and positive. The moderators put forth like these are the norms.

And I think it's funny, I think some people would say Reddit is uplifting, but I think you see the same thing in some Facebook groups as well, um, where you have people who really love, like, quilting. Or I'm in dozens and dozens of Facebook groups on all kinds of weird things.

Like, “I found this weird thing at a thrift store,” or “I found this painting, you know, what can you tell me about it?” And I get such a kick out of seeing people from all these walks of life come together and talk about these various interests. And I do think that, you know, that's the utopian ideal of the internet that I think got us all so into it in the eighties and nineties.

This idea that you can come together with people and talk about things that you care about, even if you don't have anyone in your local immediate community who cares about those same things, and we've seen over and over that, that can be really empowering for people. You know, if you're an LGBTQ person in an area where there aren't that many other LGBTQ people, or if you're a black woman and you're the only black woman at your company, you know, you can get resources and support for that.

If you have an illness that isn't very well understood, you know, you can do community education on that. So, you know, these pockets of the internet, they exist and they're pretty big. And when we just constantly focus on this small minority of people who are on Twitter, you know, yelling at each other about stuff, I think it really overlooks the fact that so much of the internet is already this place of, like, enjoyment and, you know, hope.

CINDY COHN
Oh, I, that is so right and so good to be reminded of, um, that it's not that we have to fix the internet, it's that we have to grow the part of the internet that never got broken. Right. The part that is fixed.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.


CINDY COHN
Now back to our conversation with Alice Marwick. In addition to all of her fascinating research on disinformation that we’ve been talking about so far, Alice has also been doing some work on another subject very near and dear to our hearts here at EFF – privacy.

Alice has a new book coming out in May 2023 called The Private is Political – so of course we couldn’t let her go without talking about that. 

ALICE MARWICK
I wanted to look at how you can't individually control privacy anymore, because all of our privacy is networked because of social media and big data. We share information about each other, and information about us is collected by all kinds of entities.

You know, you can configure your privacy settings till the cows come home, but it's not gonna change whether your photo gets swept up in, you know, some AI that then uses it for other kinds of purposes. And the second thing is to think about privacy as a political issue that has big impacts on everyone's lives, especially people who are marginalized in other areas.

I interviewed, oh, people from all kinds of places and spaces with all sorts of identities, and there's this really big misconception that people don't care about privacy. But people care very deeply about privacy, and the way that they show that care manifests in, like, so many different kinds of creative ways.

And so I'm hoping, I'm looking forward to sharing the stories of the people I spoke with.

CINDY COHN
That's great. Can you tell us one or I, I don't wanna spoil it, but -

ALICE MARWICK
Yeah, no. So I spoke with Jazz in North Carolina. These are all pseudonyms. And Jazz is an atheist, genderqueer person, and they come from a pretty conservative Southern Baptist family and they're also homeless. They have a child who lives with their sister and they get a little bit of help from their family, like, not a lot, but enough that it can make the difference between whether they get by or not.

So what they did is they created two completely different sets of internet accounts. They have two Facebooks, two Twitters, two email addresses. Everything is different and it's completely firewalled. So on one, they use their preferred name and their pronouns. On the other, they use the pronouns they were assigned at birth and the name that their parents gave them. And so the contrast between the two was just extreme. And so Jazz said that they feel like the Facebook page that really reflects them, that's their “me” page. That's where they can be who they really are because they have to kind of cover up who they are in so many other areas of their lives.

So they get this sort of big kick out of having this space on the internet where they can be like fiery and they can talk about politics and gender and things that they care about, but they have a lot to lose if the, if that, you know, seeps into their other life. So they have to be really cognizant of things like who does Facebook recommend that you friend, you know, who might see my other email address, who might do a Google search for my name?

And so I call this privacy work. It's the work that all of us do to maintain our privacy, and we all do it, but it's just much more intense for some kinds of people. Um, and so I see in Jazz, you know, a lot of these themes: somebody who is suffering from intersectional forms of marginalization, but is still kind of doing the best they can.

And, you know, moving forward in the world, somebody who's being very creative with the internet, they're using it in ways that none of the designers or technologists ever intended, and they're helping it work for them, but they're also not served well by these technologies because they don't have the options to set the technologies up in ways that would fit their life or their needs.

Um, and so what I'm really calling for here is to, rather than thinking about privacy as individual, as something we each have to solve, as seeing it as a political and a structural problem that cannot be solved by individual responsibility or individual actions.

CINDY COHN
I so support that. That is certainly what we've experienced in the world as well, you know, the fight against the Real Names policy, say at Facebook, which, which really impacted, um, the LGBTQ and trans community, especially because people are, they're changing their names, right? And that's important.

This real names policy, you know, first of all it's based on not good science. This idea that if you attach people's names to what they say, they will behave better. Which is, you know, belied by all of Facebook. Um, and, and, you know, it doesn't have any science behind it at all. But also these negative effects for, for, for people who, you know, for safety, you know, we work with a lot of domestic violence victims, you know, being able to separate out one identity from another is tremendously important. And, and again, can, can matter for people's very lives. Or it could just be like, you know, when I'm Cindy at the dog park, I, I, I'm not interested in being, you know, Cindy, who's the ED of EFF, and being able to segment out your life and show up as, as different people, like, there's, there's a lot of power in that, even if it's not, you know, um, necessary to save your life.

ALICE MARWICK
Yeah, absolutely. Sort of that, that ability to maintain our social roles and to play different aspects of ourselves at different times. That's like a very human thing, and that's sort of fundamental to privacy. It's what parts of yourself do you wanna reveal at any given time. And when you have these huge sites like Facebook where they want a real name and they want you to have a persistent identity, it makes that really difficult.

Whereas sites like Reddit where you can have a pseudonym and you can have 12 accounts and nobody cares, and the site is totally designed to deal with that. You know, that works a lot better with how most people, I think, want to use the internet.

CINDY COHN
What other things do you think we can do? I mean, I'm assuming that we need some legal support here as well as technical, um, uh, support for, uh, a more private internet. Really, a more privacy-protective internet.

ALICE MARWICK
I mean, we need comprehensive data privacy laws.

CINDY COHN
Yeah.

ALICE MARWICK
The fact that every different type of personal information is governed differently and some aren't governed at all. The fact that your email is not private, that, you know, anything you do through a third party is not private, whereas your video store records are private.

That makes no sense whatsoever. You know, it's just this complete amalgam. It doesn't have any underlying principle whatsoever. The other thing I would say is data brokers. We gotta get 'em out. We gotta get rid of them. You shouldn't be able to collect data for one purpose and then use it for God knows how many other purposes.

I think, you know, I was very happy under the Obama administration to see that the FTC was starting to look into data brokers. It seems like we lost a lot of that energy during the Trump administration, but you know, to me they're public enemy number one. Really don't like 'em.

CINDY COHN
We are with you.  And you know this isn’t new – as early as 1973 the federal government developed  something called the Fair Information Practice Principles that included recognizing that it wasn’t fair to collect data for one purpose and then use it for another without meaningful consent – but that’s the central proposition that underlies the data broker business model. I appreciate that your work confirms that those ideas are still good ones.  

ALICE MARWICK
Yeah, I think there's sort of a group of people doing critical privacy and critical surveillance studies, um, a more diverse group of people than we've typically seen studying privacy. For a long time it was just sort of the domain of, you know, legal scholars and computer scientists. And so now that it's being sort of opened up to qualitative analysis and sociology and other forms, you know, I think we're starting to see a much more comprehensive understanding, which hopefully at some point will, you know, affect policymaking and technology design as well.

CINDY COHN
Yeah, I sure hope so. I mean, I think we're in a time when our US Supreme Court is really not grappling with privacy harms and is effectively making it harder and harder to at least use the judicial remedies to try to address privacy harm. So, you know, this development of the rest of society and people's thinking about eventually, I think, will leak over into, into the judicial side.

But it's one of the things that a fixed internet would give us is the ability to have actual accountability for privacy harms at a level that much better than what we have now. And the other thing I hear you really developing out is that maybe the individual model, which is kind of inherent in a lot of litigation, isn't really the right model for thinking about how to remedy all of this either.

ALICE MARWICK
Well, a lot of it is just theatrical, right? It reminds me of, you know, security theater at the airport. Like the idea that by clicking through a 75-page, you know, terms of service change that's written at, you know, a level that would require a couple of years of law school, that if you actually sat and read those, it would take up like two weeks of your life every year.

Like that is just preposterous. Like, nobody would sit and be like, okay, well here's a problem. What's the best way to solve it? It's just a loophole that allows companies to get away with all kinds of things that I think are, you know, unethical and immoral by saying, oh, well we told you about it.

But I think often what I hear from people is, well, if you don't like it, don't use it. And that's easy to say if you're talking about something that is, you know, an optional extra to your life. But when we're talking about the internet, there aren't other options. And I think what people forget is that the internet has replaced a lot of technologies that kind of withered away. You know, I've driven across country three times, and the first two times was kind of pre-mobile internet or a pre, you know, ubiquitous internet. And you had a giant road atlas in your car. Every gas station had maps and there were payphones everywhere. You know, now most payphones are gone, you go to a gas station, you ask for directions, they're gonna look at you blankly, and no one has a road atlas. You know, there are all these infrastructures that existed pre-internet that allowed us to exist without smartphones in the internet. And now most of those are gone. What are you supposed to do if you're in college and you're not using, you know, at the very least, your course management system, which is probably already, you know, collecting information on you and possibly selling it to a third party.

You can't pass your class. If you're not joining your study group, which might be on Facebook or any other medium, or WhatsApp or whatnot. Like, you can't communicate with people. It's absolutely ridiculous that we're just saying, oh, well, if you don't like it, don't use it. Like you don't tell people, you know.

If you're being targeted by, like, a murderous sociopath, oh, just don't go outside, right? Just stay inside all the time. That's just not, it's terrible advice and it's not realistic.

CINDY COHN
No, I think that is true, and certainly in trying to find a job. I mean, there are benefits to the fact that all of this stuff is networked, but it really does shine a light on the fact that this terms of service approach to things, as if this is a contract, like a freely negotiated contract like I learned in law school with two equal parties having a negotiation and coming to a meeting of the minds, like this is, it's a whole other planet from that approach.

And to try to bring that frame to, you know, whether you enforce those terms or not, is, it's jarring to people. It's not how people live. And so it feels this way in which the legal system is kind of divorced from, from our lives. And, and if we get it right, the legal terms and the things that we are agreeing to will be things that we actually agree to, not things that are stuffed into a document that we never read or we really realistically can't read.

ALICE MARWICK
Yeah, I would love it if the terms of service was an actual contract and I could sit there and be like, all right, Facebook, if you want my business, this is what you have to do for me. And make some poor entry level employees sit there and go through all my ridiculous demands. Like, sure, you want it to be a contract, then I'm gonna be an equal participant.

CINDY COHN
You want those green M&Ms in the green room?

ALICE MARWICK
Yeah, I want, I want different content moderation standards. I want a pony, I want glittery gifs on every page. You know, give it all to me.

CINDY COHN
Yeah. I mean, you know, there's a, there's a way in which a piece of the fediverse strategy that I think, uh, we're kind of at the beginning of, uh, perhaps, uh, in this moment is, um, is that a little bit, you have a smaller community, you have people who run the servers, um, who you can actually interact with.

I mean, I don't know that, again, I don't know that there's ponies, but, um, but you know, one of the things that will help get us there is smaller, right? We can't do content moderation at scale. Um, and we can't do, you know, contractual negotiations at scale. So smaller might be helpful and I don't think it's gonna solve all the problems.

I'm, you know, but I think that there, there's a way in which you can at least get your arms around the problem. If you're dealing with a smaller community that then can interoperate with other communities, but isn't beholden to them with one rule to rule them all.

ALICE MARWICK
Yeah, I mean, I think the biggest problem right now is we need to get around usability and UX, and these platforms need to be just as easy to use as, like, the easiest social platform. You know, it needs to be something that if you don't have a college education, if you're not super techy, if you're only familiar with very popular social media platforms, you're still able to use things like Mastodon.

I don't think we're quite there yet, but I can see a future in which we get there.

CINDY COHN
Well thank you so much for continuing to do this work.

ALICE MARWICK
Oh, thank you. Thank you, Cindy. Thank you, Jason. It was great to chat today.

JASON KELLEY
I'm so glad we got to talk to Alice. That was a really fun conversation and one that I think really underscored a point that I've noticed, um, which is that over the last, I don't know, many years we've seen Congress and other legislators try to tackle these two separate issues that we talked with Alice about.

One being sort of like content on the internet and the other being privacy on the internet. And when we spoke with her about privacy, it was clear that there are a lot of obvious and simple and direct solutions for how we can make privacy on the internet something that actually exists, compared to content, which is a much stickier issue.

And, and it's, it's interesting that Congress and other legislators have consistently focused on one of these two topics, or let's say both of them, at the expense of the one that actually is fairly direct when it comes to solutions. That really sticks out for me, but I'm, I'm wondering, I've blathered on, what do you find most interesting about what we talked with her about? There was a lot there.

CINDY COHN
Well, I think that Alice does a great service to all of us by pointing out all the ways in which the kind of easy solutions that we reach to, especially around misinformation and disinformation and easy stories we tell ourselves are not easy at all and not empirically supported. So I think one of the things she does is just shine a light on the difference between the kind of stories we tell ourselves about how we could fix some of these problems and the actual empirical evidence about whether those things will work or not.

The other thing that I appreciated is she pointed to spaces on the internet where things aren't broken. She talked about Reddit, some of the fan fiction communities, and Facebook groups, pointing out that sometimes we can be overly focused on politics and the darker pieces of the internet, and that places that are supportive and loving and good communities doing the right thing already exist.

We don't have to create them, we just have to find a way to foster them and build more of them. Make more of the internet that experience. It's refreshing to realize that massive pieces of the internet were never broken and don't need to be fixed.

JASON KELLEY
That is 100% right. We're sort of tilted, I think, to focus on the worst things, which is part of our job at EFF. But it's nice when someone says, you know, there are actually good things. It reminds us that in a lot of ways it's working, and we can make it better by focusing on what's working.

Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member, donate, or look at hoodies, t-shirts, hats, and other merch, just in case you feel the need to represent your favorite podcast and your favorite digital rights organization.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time, in two weeks.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

MUSIC CREDIT ANNOUNCER
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:
Probably Shouldn’t by J.Lang featuring Mr_Yesterday

CommonGround by airtone featuring: simonlittlefield

Additional beds and alternate theme remixes by Gaëtan Harris

Josh Richman

New IPANDETEC Report Shows Panama’s ISPs Still Lag in Protecting User Data

2 months 1 week ago

Telecom and internet service providers in Panama are entrusted with the personal data of millions of users, bearing a responsibility to not only protect users’ privacy but also be transparent about their data handling policies. Digital rights organization IPANDETEC has evaluated how well companies have lived up to their responsibilities in ¿Quién Defiende Tus Datos? (“Who Defends Your Data?”) reports released in 2019, 2020, and 2022, which showed persistent deficiencies.

IPANDETEC’s new Panama report, released today, reveals that, with a few notable exceptions, providers in Panama continue to struggle to meet important best practice standards like publishing transparency reports, notifying users about government requests for their data, and requiring authorities to obtain judicial authorization for data requests, among other criteria.

As in its prior reports, IPANDETEC assessed mobile phone operators Más Móvil, Digicel, and Tigo. Claro, assessed in earlier reports, was acquired by Más Móvil in 2021 and as such was dropped. This year’s report also ranked fixed internet service providers InterFast Panama, Celero Fiber, and DBS Networks.

Companies were evaluated in nine categories, including disclosure of data protection policies and transparency reports, data security practices, public promotion of human rights, procedures for authorities seeking user data, publication of services and policies in native languages, and making policies and customer service available to people with disabilities. IPANDETEC also assessed whether mobile operators have opposed mandatory facial recognition for users' activation of their services.

Progress Made

Companies are awarded stars and partial stars for meeting parameters set for each category. Más Móvil scored highest with four stars, while Tigo received two and one-half stars and Digicel one and a half. Celero scored highest among fixed internet providers with one and three-quarters stars. Interfast and DBS received three-fourths of a star and one-half star, respectively.

The report showed progress on a few fronts: Más Móvil and Digicel now publish privacy policies for their services, while Más Móvil has committed to following relevant legal procedures before providing authorities with the content of its users’ communications, a significant improvement compared to 2021.

Tigo maintains its commitment to require judicial authorization or follow established procedures before providing data and to reject requests that don’t comply with legal requirements.

Más Móvil and Tigo also stand out for joining human rights-related initiatives. Más Móvil is a signatory of the United Nations Global Compact and belongs to SUMARSE, an organization that promotes Corporate Social Responsibility (CSR) in Panama.

Tigo, meanwhile, has projects aimed at digital and social transformation, including Conectadas: Empowering Women in the Digital World, Entrepreneurs in Action: Promoting the Success of Micro and Medium-sized Enterprises, and Connected Teachers: The Digital Age for teachers.

All three fixed internet service providers received partial credit for meeting some parameters for digital security.

Companies Lag in Key Areas

Still, the report showed that internet providers in Panama have a long way to go to incorporate best practices in most categories. For instance, no company published transparency reports with detailed quantitative data for Panama.

According to the report, neither the mobile nor the fixed internet companies commit to informing users about requests or orders from authorities to access their personal data. As for digital security, companies have chosen to maintain a passive position regarding its promotion.

None of the mobile providers have opposed requiring users to undergo facial recognition to register or access their mobile phone services. As the report underlines, companies' resignation "marks a significant step backwards and affects human rights, such as the right to privacy, intimacy and the protection of personal data." Mandating face recognition as a condition to use mobile services is "an abusive intrusion into the privacy of users, setting a worrying precedent with the supposed objective of fighting crime," the report says.

No company has a website or relevant documents available in native languages. Likewise, no company has a declaration and/or accessibility policy for people with disabilities (in physical and digital environments) or important documents in an accessible format.

But it's worth noting that Más Móvil has alternative channels for people with sensory disabilities and Contact Center services for blind users, as well as remote control with built-in voice commands to improve accessibility.  Tigo, too, stands out for being the only company to have a section on its website about discounts for retired and disabled people.

IPANDETEC’s ¿Quién Defiende Tus Datos? series of reports is part of a region-wide initiative, akin to EFF’s Who Has Your Back project, which tracks and rates ISPs’ privacy policies and commitments in Latin America and Spain.

Karen Gullo

Election Security: When to Worry, When to Not

2 months 1 week ago

This post was written by EFF intern Nazli Ungan as an update to a 2020 Deeplinks post by Cindy Cohn.

Everyone wants an election that is secure and reliable and that ensures voters’ actual choices are reflected in the results. That’s as true as we head into the 2024 U.S. general elections as it ever has been.

At the same time, not every problem in voting technology or systems is worth pulling the fire alarm—we have to look at the bigger story and context. And we have to stand down when our worst fears turn out to be unfounded.

Resilience is the key word when it comes to the security and the integrity of our elections. We need our election systems to be technically and procedurally resilient against potential attacks or errors. But equally important, we need the voting public to be resilient against false or unfounded claims of attack or error. Luckily, our past experiences and the work of election security experts have taught us a few lessons on when to worry and when to not.

See EFF's handout on Election Security here: https://www.eff.org/document/election-security-recommendations

We Need Risk-Limiting Audits

First, and most importantly, it is critical to have systems in place to support election technology and the election officials who run it. Machines may fail, and humans may make errors. We cannot simply assume that there will not be any issues in voting and tabulation. Instead, there must be built-in safety measures to catch any issues that may affect the official election results.

It is critical to have systems in place to support election technology and the election officials who run it.

The most important of these is performing routine, post-election Risk-Limiting Audits (RLAs) after every election. RLAs should occur even if there is no apparent reason to suspect the accuracy of the results. Risk-limiting audits are considered the gold standard of post-election audits, and they give the public justified confidence in the results. This type of audit entails manually checking randomly selected ballots until there is convincing evidence that the election outcome is correct. In many cases, it can be performed by counting only a small fraction of the ballots cast, making it cheap enough to perform in every election. When the margins are tighter, a greater fraction of the votes must be hand counted, but this is a good thing: we want to scrutinize close contests more strictly to make sure the right person won the race. Some states have started requiring risk-limiting audits, and the rest should catch up!
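
To make that sampling logic concrete, here is a minimal sketch of a BRAVO-style ballot-polling audit for a two-candidate contest. The contest size, reported winner share, and risk limit below are illustrative assumptions, not figures from any real election:

```python
import random

def ballot_polling_rla(ballots, reported_winner_share, risk_limit=0.05):
    """Simplified BRAVO-style ballot-polling risk-limiting audit.

    Draws ballots uniformly at random and updates a sequential
    likelihood ratio (Wald's SPRT). The audit stops and confirms the
    reported outcome once the statistic exceeds 1/risk_limit; if the
    evidence never gets that strong, it escalates to a full hand count.
    """
    assert reported_winner_share > 0.5, "reported winner needs a majority"
    statistic = 1.0
    threshold = 1.0 / risk_limit
    order = random.sample(range(len(ballots)), k=len(ballots))
    for examined, i in enumerate(order, start=1):
        if ballots[i] == "winner":
            statistic *= reported_winner_share / 0.5
        else:
            statistic *= (1 - reported_winner_share) / 0.5
        if statistic >= threshold:
            return f"outcome confirmed after hand-checking {examined} ballots"
    return "evidence insufficient: escalate to a full hand count"

# Hypothetical contest: 10,000 ballots, 55% reported for the winner.
ballots = ["winner"] * 5500 + ["loser"] * 4500
print(ballot_polling_rla(ballots, reported_winner_share=0.55))
```

With a comfortable margin like this one, the loop typically confirms the outcome after examining only a few hundred of the 10,000 ballots; as the reported share approaches 50%, the required sample grows sharply, which is exactly the escalation toward a full hand count described above.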

 We (and many others in the election integrity community) also continue to push for more transparency in election systems, more independent testing and red-team style attacks, including end-to-end pre-election testing.

And We Need A Paper Trail

Second, voting on paper ballots continues to be extremely important and the most secure strategy. Ideally, all voters should use paper ballots marked by hand, or with an assistive device, and verify their votes before casting. If there is no paper record, there is no way to perform a post-election audit, or recount votes in the event of an error or a security incident. On the other hand, if voters vote on paper, they can verify their choices are recorded accurately. More importantly, election officials can hand count a portion of the paper ballots to make sure they match with the electronic vote totals and confirm the accuracy of the election results. 

What happened in Antrim County, Michigan in the 2020 general elections illustrates the importance of paper ballots. Immediately after the 2020 elections, Antrim County published inaccurate unofficial results, and then restated these results three times to correct the errors, which led to conspiracy theories about the voting systems used there. Fortunately, Antrim County voters had voted on paper ballots, so Michigan was able to confirm the final presidential results by conducting a county-wide hand count and affirm them by a state-wide risk-limiting audit pilot. This would not have been possible without paper ballots.  

And we can’t stop there, because not every paper record is created equal. Some direct-recording electronic systems are equipped with a type of Voter-Verified Paper Audit Trail that makes it difficult for voters to verify their selections and for election officials to use in audits and recounts. The best practice is to have all votes cast on pre-printed paper ballots, marked by hand or with an assistive ballot marking device.

Third, it is important to have the entire voting technical system under the control of election officials so that they can investigate any potential problems, which is one of the reasons why internet voting remains a bad, bad idea. There are “significant security, privacy, and ballot secrecy challenges” associated with electronic ballot return systems, and they make it “possible for a single attacker to alter thousands or even millions of votes.” Maybe in the future we will have tools to limit the risks of internet voting. But until then, we should reject any proposal that includes electronic ballot return over the internet. Speaking of the internet: voting machines should never connect to the internet, dial a modem, or communicate wirelessly.

Internet voting remains a bad, bad idea

Fourth, every part of the voting process that relies on technology must have paper backups so that voting can continue even when the machines fail. This includes paper backups for electronic pollbooks, emergency paper ballots in case voting machines fail, and provisional ballots in case voter eligibility cannot be confirmed.

Stay Vigilant and Informed

Fifth, we should continue to be vigilant. Election officials have come a long way from when we started raising concerns about electronic voting machines and systems. But the public should keep watching and, when warranted, not be afraid to raise or flag things that seem strange. For example, if you see something like voting machines “flipping” the votes, you should tell the poll workers. This doesn’t necessarily mean there has been a security breach; it can be as simple as a calibration error, but it can mean lost votes. Poll workers can and should address the issue immediately by providing voters with emergency paper ballots. 

Sixth, not everything that seems out of the ordinary is reason to worry. We should build societal resistance to disinformation. CISA's Election Security Rumor vs. Reality website is a good resource that addresses election security rumors and educates us on when we do and don’t need to be alarmed. State-specific information is also available online. If we see or hear anything odd about what is happening at a particular locality, we should first hear what the election officials on the ground have to say about it. After all, they were there! We should also pay attention to what non-partisan election protection organizations, such as Verified Voting, say about the incident.

The 2024 presidential election is fast approaching and there may be many claims of computer glitches and other forms of manipulation concerning our voting systems in November. Knowing when to worry and when NOT to worry will continue to be extremely important.  

In the meantime, the work of securing our elections and building resilience must continue. While not every glitch is worrisome, we should not dismiss legitimate security concerns. As often said: election security is a race without a finish line!

Guest Author

A Sale of 23andMe’s Data Would Be Bad for Privacy. Here’s What Customers Can Do.

2 months 1 week ago

The CEO of 23andMe has recently said she’d consider selling the genetic genealogy testing company–and with it, the sensitive DNA data that it’s collected, and stored, from many of its 15 million customers. Customers and their relatives are rightly concerned. Research has shown that a majority of white Americans can already be identified from just 1.3 million users of a similar service, GEDMatch, due to genetic likenesses, even though GEDMatch has a much smaller database of genetic profiles. 23andMe has about ten times as many customers.

Selling a giant trove of our most sensitive data is a bad idea that the company should avoid at all costs. And for now, the company appears to have backed off its consideration of a third-party buyer. Before 23andMe reconsiders, it should at the very least make a series of privacy commitments to all its users. Those should include: 

  • Do not consider a sale to any company with ties to law enforcement or a history of security failures.
  • Prior to any acquisition, affirmatively ask all users if they would like to delete their information, with an option to download it beforehand.
  • Prior to any acquisition, seek affirmative consent from all users before transferring user data. The consent should give people a real choice to say “no.” It should be separate from the privacy policy, contain the name of the acquiring company, and be free of dark patterns.
  • Prior to any acquisition, require the buyer to make strong privacy and security commitments. That should include a commitment to not let law enforcement indiscriminately search the database, and to prohibit disclosing any person’s genetic data to law enforcement without a particularized warrant. 
  • Reconsider your own data retention and sharing policies. People primarily use the service to obtain a genetic test. A survey of 23andMe customers in 2017 and 2018 showed that over 40% were unaware that data sharing was part of the company’s business model.  

23andMe is already legally required to provide users in certain states with some of these rights. But 23andMe—and any company considering selling such sensitive data—should go beyond current law to assuage users’ real privacy fears. In addition, lawmakers should continue to pass and strengthen protections for genetic privacy. 

Existing users can demand that 23andMe delete their data 

The privacy of personal genetic information collected by companies like 23andMe is always going to be at some level of risk, which is why we suggest consumers think very carefully before using such a service. Genetic data is immutable and can reveal very personal details about you and your family members. Data breaches are a serious concern wherever sensitive data is stored, and last year’s breach of 23andMe exposed personal information from nearly half of its customers. The data can be abused by law enforcement to indiscriminately search for evidence of a crime. Although 23andMe’s policies require a warrant before releasing information to the police, some other companies do not. In addition, the private sector could use your information to discriminate against you. Thankfully, existing law prevents genetic discrimination in health insurance and employment.  

What Happens to My Genetic Data If 23andMe is Sold to Another Company?

In the event of an acquisition or liquidation through bankruptcy, 23andMe must still obtain separate consent from users in about a dozen states before it could transfer their genetic data to an acquiring company. Users in those states could simply refuse. In addition, many people in the United States are legally allowed to access and delete their data either before or after any acquisition. Separately, the buyer of 23andMe would, at a minimum, have to comply with existing genetic privacy laws and 23andMe's current privacy policies. It would be up to regulators to enforce many of these protections. 

Below is a general legal lay of the land, as we understand it.  

  • 23andMe must obtain consent from many users before transferring their data in an acquisition. Those users could simply refuse. At least a dozen states have passed consumer data privacy laws specific to genetic privacy. For example, Montana’s 2023 law would require consent to be separate from other documents and to list the buyer’s name. While the consent requirements vary slightly, similar laws exist in Alabama, Arizona, California, Kentucky, Nebraska, Maryland, Minnesota, Tennessee, Texas, Virginia, Utah, and Wyoming. Notably, Wyoming’s law has a private right of action, which allows consumers to defend their own rights in court. 
  • Many users have the legal right to access and delete their data stored with 23andMe before or after an acquisition. About 19 states have passed comprehensive privacy laws which give users deletion and access rights, but not all have taken effect. Many of those laws also classify genetic data as sensitive and require companies to obtain consent to process it. Unfortunately, most if not all of these laws allow companies like 23andMe to freely transfer user data as part of a merger, acquisition, or bankruptcy. 
  • 23andMe must comply with its own privacy policy. Otherwise, the company could be sanctioned for engaging in deceptive practices. Unfortunately, its current privacy policy allows for transfers of data in the event of a merger, acquisition, or bankruptcy. 
  • Any buyer of 23andMe would likely have to offer existing users privacy rights that are equal or greater to the ones offered now, unless the buyer obtains new consent. The Federal Trade Commission has warned companies not to engage in the unfair practice of quietly reducing privacy protections of user data after an acquisition. The buyer would also have to comply with the web of comprehensive and genetic-specific state privacy laws mentioned above. 
  • The federal Genetic Information Nondiscrimination Act of 2008 prevents genetic-based discrimination by health insurers and employers. 

What Can You Do to Protect Your Genetic Data Now?

Existing users can demand that 23andMe delete their data or revoke some of their past consent to research. 

If you don’t feel comfortable with a potential sale, you can consider downloading a local copy of your information to create a personal archive, and then deleting your 23andMe account. Doing so will remove all your information from 23andMe, and if you haven’t already requested it, the company will also destroy your genetic sample. Deleting your account will also remove any genetic information from future research projects, though there is no way to remove anything that’s already been shared. We’ve put together directions for archiving and deleting your account here. When you get your archived account information, some of your data will be in more readable formats than others. For example, your “Reports Summary” will arrive as a PDF that’s easy to read and includes information about traits and your ancestry report. Other information, like the family tree, arrives in a less readable format, like a JSON file.
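
If you are curious what is actually inside the less readable parts of your archive, any JSON file in the download can be inspected with a few lines of Python. This is a generic sketch under stated assumptions: the filename and structure below are hypothetical, since the exact contents of a 23andMe export vary.

```python
import json
from pathlib import Path

# Hypothetical filename: check your actual export for the real names.
archive_file = Path("family_tree.json")

data = json.loads(archive_file.read_text(encoding="utf-8"))

# Print the top-level structure so you can see what the file contains
# before deciding what, if anything, is worth keeping long term.
if isinstance(data, dict):
    for key, value in data.items():
        print(f"{key}: {type(value).__name__}")
else:
    print(f"top-level {type(data).__name__} with {len(data)} entries")
```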

You also may be one of the 80% or so of users who consented to having your genetic data analyzed for medical research. You can revoke your consent to future research as well by sending an email. Under this program, third-party researchers who conduct analyses on that data have access to this information, as well as some data from additional surveys and other information you provide. Third-party researchers include non-profits, pharmaceutical companies like GlaxoSmithKline, and research institutions. 23andMe has used this data to publish research on diseases like Parkinson’s. According to the company, this data is deidentified, or stripped of obvious identifying information such as your name and contact information. However, genetic data cannot truly be de-identified. Even if separated from obvious identifiers like name, it is still forever linked to only one person in the world. And at least one study has shown that, when combined with data from GenBank, a National Institutes of Health genetic sequence database, data from some genealogical databases can result in the possibility of re-identification. 

What Can 23andMe, Regulators, and Lawmakers Do?

Acquisition talk about a company with a giant database of sensitive data should be a wakeup call for lawmakers and regulators to act

As mentioned above, 23andMe must follow existing law. And it should make a series of additional commitments before ever reconsidering a sale. Most importantly, it must give every user a real choice to say “no” to a data transfer and ensure that any buyer makes real privacy commitments. Other consumer genetic genealogy companies should proactively take these steps as well. Companies should be crystal clear about where the information goes and how it’s used, and they should require an individualized warrant before allowing police to comb through their database. 

Government regulators should closely monitor the company’s plans and press the company to explain how it will protect user data in the event of a transfer of ownership, similar to the FTC’s scrutiny of Facebook’s earlier acquisition of WhatsApp.

Lawmakers should also work to pass stronger comprehensive privacy protections in general and genetic privacy protections in particular. While many of the state-based genetic privacy laws are a good start, they generally lack a private right of action and only protect a slice of the U.S. population. EFF has long advocated for a strong federal privacy law that includes a private right of action. 

Our DNA is quite literally what makes us human. It is inherently personal and deeply revealing, not just of ourselves but our genetic relatives as well, making it deserving of the strongest privacy protections. Acquisition talk about a company with a giant database of sensitive data should be a wakeup call for lawmakers and regulators to act, and when they do, EFF will be ready to support them. 

Mario Trujillo

Salt Typhoon Hack Shows There's No Security Backdoor That's Only For The "Good Guys"

2 months 1 week ago

At EFF we’ve long noted that you cannot build a backdoor that only lets in good guys and not bad guys. Over the weekend, we saw another example of this: The Wall Street Journal reported on a major breach of U.S. telecom systems attributed to a sophisticated Chinese-government backed hacking group dubbed Salt Typhoon.

According to reports, the hack took advantage of systems built by ISPs like Verizon, AT&T, and Lumen Technologies (formerly CenturyLink) to give law enforcement and intelligence agencies access to the ISPs’ user data. This gave China unprecedented access to data related to U.S. government requests to these major telecommunications companies. It’s still unclear how much communication and internet traffic Salt Typhoon accessed, or whose.

That’s right: the path for law enforcement access set up by these companies was apparently compromised and used by China-backed hackers. That path was likely created to facilitate smooth compliance with wrong-headed laws like CALEA, which require telecommunications companies to facilitate “lawful intercepts”—in other words, wiretaps and other orders by law enforcement and national security agencies. While this is a terrible outcome for user privacy, and for U.S. government intelligence and law enforcement, it is not surprising. 

The idea that only authorized government agencies would ever use these channels for acquiring user data was always risky and flawed. We’ve seen this before: in a notorious case in 2004 and 2005, more than 100 top officials in the Greek government were illegally surveilled for a period of ten months when unknown parties broke into Greece’s “lawful access” program. In 2024, with growing numbers of sophisticated state-sponsored hacking groups operating, it’s almost inevitable that these types of damaging breaches occur. The system of special law enforcement access that was set up for the “good guys” isn’t making us safer; it’s a dangerous security flaw. 

Internet Wiretaps Have Always Been A Bad Idea

Passed in 1994, CALEA requires that makers of telecommunications equipment provide the ability for government eavesdropping. In 2004, the government dramatically expanded this wiretap mandate to include internet access providers. EFF opposed this expansion and explained the perils of wiretapping the internet.  

The internet is different from the phone system in critical ways, making it more vulnerable. The internet is open and ever-changing.  “Many of the technologies currently used to create wiretap-friendly computer networks make the people on those networks more pregnable to attackers who want to steal their data or personal information,” EFF wrote, nearly 20 years ago.

Towards Transparency And Security

The irony should be lost on no one that the Chinese government may now possess more knowledge about who the U.S. government spies on, including people living in the U.S., than Americans do. The intelligence and law enforcement agencies that use these backdoor legal authorities are notoriously secretive, making oversight difficult.

Companies and people who are building communication tools should be aware of these flaws and implement privacy by default where possible. As bad as this hack was, it could have been much worse if it weren’t for the hard work of EFF and other privacy advocates in ensuring that more than 90% of web traffic is encrypted via HTTPS. For those hosting the 10% (or so) of the web that has yet to encrypt its traffic, now is a great time to consider turning on encryption, either by using Certbot or by switching to a hosting provider that offers HTTPS by default.
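
If you are unsure whether a site you run already serves valid HTTPS, one quick way to check is a verified TLS handshake using only Python's standard library. A minimal sketch; the hostname is a placeholder:

```python
import socket
import ssl

def check_https(host: str, port: int = 443, timeout: float = 5.0) -> None:
    """Attempt a verified TLS handshake and report the cert's expiry.

    Raises ssl.SSLCertVerificationError for an invalid certificate and
    OSError if the host is unreachable on the given port.
    """
    context = ssl.create_default_context()  # verifies against system CAs
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print(f"{host}: TLS OK, certificate expires {cert['notAfter']}")

check_https("example.com")  # placeholder hostname: use your own domain
```

Certbot automates the harder part: obtaining a certificate from a certificate authority such as Let's Encrypt and renewing it before it expires.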

What can we do next? We must demand real privacy and security.  

That means we must reject the loud law enforcement and other voices that continue to pretend that there are “good guy only” ways to ensure access. We can point to this example, among many others, to push back on the idea that the default in the digital world is that governments (and malicious hackers) should be able to access all of our messages and files. We’ll continue to fight against US bills like EARN IT, the EU “Chat Control” file-scanning proposal, and the UK’s Online Safety Act, all of which are based on this flawed premise. 

It’s time for U.S. policymakers to step up too. If they care about China and other foreign countries engaging in espionage on U.S. citizens, it’s time to speak up in favor of encryption by default. If they don’t want to see bad actors take advantage of their constituents, domestic companies, or security agencies, again—speak up for encryption by default. Elected officials can and have done so in the past. Instead of holding hearings that give the FBI a platform to make digital wiretaps easier, they should demand accountability for the digital lock-breaking it is already doing.

The lesson will be repeated until it is learned: there is no backdoor that only lets in good guys and keeps out bad guys. It’s time for all of us to recognize this, and take steps to ensure real security and privacy for all of us.

Joe Mullin

FTC Findings on Commercial Surveillance Can Lead to Better Alternatives

2 months 1 week ago

On September 19, the FTC published a staff report following a multi-year investigation of nine social media and video streaming companies. The report found myriad privacy violations against consumers, stemming largely from the ad-revenue-based business models of companies including Facebook, YouTube, and X (formerly Twitter), which prompted unbridled consumer surveillance practices. In addition to these findings, the FTC points out various ways in which user data can be weaponized to lock out competitors and dominate these companies’ respective markets.

The report finds that market dominance can be established and expanded through the acquisition and maintenance of user data, creating an unfair advantage and preventing new market entrants from fairly competing. EFF has found that this is true not only for new entrants who wish to compete by similarly siphoning off large amounts of user data, but also for consumer-friendly companies that carve out a niche by refusing to play the game of dominance-through-surveillance. Abusing user data in an anti-competitive manner means users may never even learn of alternatives that have their best interests in mind, rather than the best interests of the company’s advertising partners.

The relationship between privacy violations and anti-competitive behavior is elaborated upon in a section of the report which points out that “data abuse can raise entry barriers and fuel market dominance, and market dominance can, in turn, further enable data abuses and practices that harm consumers in an unvirtuous cycle.” In contrast with the recent United States v. Google LLC (2020) ruling, where Judge Amit P. Mehta found that the data collection practices of Google, though injurious to consumers, were outweighed by an improved user experience, the FTC highlighted a dangerous feedback loop in which privacy abuses beget further privacy abuses. We agree with the FTC and find the identification of this ‘unvirtuous cycle’ a helpful focal point for further antitrust action.

In an interesting segment on the protections that the European Union’s General Data Protection Regulation (GDPR) specifies for consumers’ data privacy rights, and which the U.S. lacks, the report explicitly mentions not only the right of consumers to delete or correct the data held by companies, but, importantly, also the right to transfer (or port) one’s data to the third party of their choice. This is a right EFF has championed time and again, pointing out that the strength of the early internet came from nascent technologies’ inherent need (and implemented ability) to play nicely with each other in order to make any sense—let alone be remotely usable—to consumers. It is this very concept of interoperability that can now be rediscovered to give users control over their own data, granting them the freedom to frictionlessly pack up their posts, friend connections, and private messages and leave when they are no longer willing to let an entrenched provider abuse them.

We hope and believe that the significance of the FTC staff report comes not only from the abuses it has meticulously documented, but also from the policy and technological possibilities that can follow from a willingness to embrace alternatives: alternatives where corporate surveillance that cements dominant players by selling out their users is not the norm. We look forward to seeing these alternatives emerge and grow.

Bill Budington

The X Corp. Shutdown in Brazil: What We Can Learn

2 months 1 week ago

Update (10/8/2024): Brazil lifted a ban on the X Corp. social media platform today after the country's Supreme Court said the company had complied with all of its orders. Regulators have 24 hours to reinstate the platform, though it could take longer for it to come back online.

The feud between X Corp. and Brazil’s Supreme Court continues to drag on: After a month-long standoff, X Corp. folded and complied with court orders to suspend several accounts, name a legal representative in Brazil, and pay 28.6 million reais ($5.24 million) in fines. That hasn’t cleared the matter up, though.

The Court says X paid the wrong bank, which X denies. Justice Alexandre de Moraes has asked that the funds be redirected to the correct bank and for Brazil’s prosecutor general to weigh in on X’s requests to be reinstated in Brazil.

So the drama continues, as does the collateral damage to millions of Brazilian users who rely on X Corp. to share information and expression. While we watch it unfold, it’s not too early to draw some important lessons for the future.

Let’s break it down.

How We Got Here

The Players

Unlike courts in many countries, the Brazilian Supreme Court has the power to conduct its own investigations in limited circumstances, and issue orders based on its findings. Justice Moraes has drawn on this power frequently in the past few years to target what he called “digital militias,” anti-democratic acts, and fake news. Many in Brazil believe that these investigations, combined with other police work, have helped rein in genuinely dangerous online activities and protect the survival of Brazil’s democratic processes, particularly in the aftermath of January 2023 riots.

At the same time, Moraes’ actions have raised concerns about judicial overreach. For instance, his work is less than transparent. And the resulting content blocking orders more often than not demand suspension of entire accounts, rather than specific posts. Other leaked orders include broad requests for subscriber information of people who used a specific hashtag.

X Corp.’s controversial CEO, Elon Musk, has publicly criticized the blocking orders. And while he may be motivated by concern for online expression, it is difficult to untangle that motivation from his personal support for the far-right causes Moraes and others believe threaten democracy in Brazil.

The Standoff

In August, as part of an investigation into coordinated actions to spread disinformation and destabilize Brazilian democracy, Moraes ordered X Corp. to suspend accounts that were allegedly used to intimidate and expose law enforcement officers. Musk refused, directly contradicting his past statements that X Corp. “can’t go beyond the laws of a country”—a stance that supposedly justified complying with controversial orders to block accounts and posts in Turkey and India.

After Moraes gave X Corp. 24 hours to fulfill the order or face fines and the arrest of one of its lawyers, Musk closed down the company’s operations in Brazil altogether. Moraes then ordered Brazilian ISPs to block the platform until Musk designated a legal representative. And people who use tools such as VPNs to circumvent the block can be fined 50,000 reais (approximately $9,000 USD) per day.

These orders remain in place unless or until pending legal challenges succeed. Justice Moraes has also authorized Brazil’s Federal Police to monitor “extreme cases” of X Corp. use. It’s unclear what qualifies as an “extreme case,” or how far the police may take that monitoring authority. Flagged users must be notified that X Corp. has been blocked in Brazil; if they continue to use it via VPNs or other means, they are on the hook for substantial daily fines.

A Bridge Too Far

Moraes’ ISP blocking order, combined with the user fines, has been understandably controversial. International freedom of expression standards treat these kinds of orders as extreme measures, permissible only in exceptional circumstances where provided by law and in accordance with necessary and proportionate principles. Justice Moraes said the blocking was necessary given upcoming elections and the risk that X Corp. would ignore future orders and allow the spread of disinformation.

But it has also meant that millions of Brazilians cannot access a platform that, for them, is a valuable source of information. Indeed, restrictions on accessing X Corp. ended up creating hurdles to understanding and countering electoral disinformation. The Brazilian Association of Newspapers has argued the restrictions adversely impact journalism. At the same time, online electoral disinformation holds steady on other platforms (while possibly at a slower pace).

Moreover, now that X Corp. has bowed to his demands, Moraes’ concerns that the company cannot be trusted to comply with Brazilian law are harder to justify. In any event, there are far more balanced options now to deal with the remaining fines that don’t create collateral damage to millions of users.

What Comes Next: Concerns and Open Questions

There are several structural issues that have helped fuel the conflict and exacerbated its negative effects. First, the mechanisms for legal review of Moraes’ orders are unclear and/or ineffective. The Supreme Court has previously held that X Corp. itself cannot challenge suspension of user accounts, thwarting a legal avenue for platforms to defend their users’ speech—even where they may be the only entities that even know about the order before accounts are shut down.

A Brazilian political party and the Federal Council of the Brazilian Bar Association filed legal challenges to the blocking order and user fines, respectively, but it is likely that courts will find these challenges procedurally improper as well.

Back in 2016, a single Supreme Court Justice held back a wave of blocking orders targeting WhatsApp. Eight years later, a single Justice may have created a new precedent in the opposite direction—with little or no means to appeal it.

Second, this case highlights what can happen when too much power is held by just a few people or institutions. On the one hand, in Brazil as elsewhere, a handful of wealthy corporations wield enormous power over online expression. Here, that problem is exacerbated by Elon Musk’s control of Starlink, an important satellite internet provider in Brazil.

On the other hand, the Supreme Court also has tremendous power. Although the court’s actions may have played an important role in preserving Brazilian democracy in recent years, powers that are not properly subject to public oversight or meaningful challenge invite overreach.

All of which speaks to a need for better transparency (in both the public and private sectors) and real checks and balances. Independent observers note that, despite challenges, Brazil has already improved its democratic processes. Strengthening this path includes preventing judicial overreach.

As for social media platforms, the best way to stave off future threats to online expression may be to promote more alternatives, so no single powerful person, whether a judge, a billionaire, or even a president, can dramatically restrict online expression with the stroke of a pen.


Corynne McSherry

Germany Rushes to Expand Biometric Surveillance

2 months 1 week ago

Germany is a leader in privacy and data protection, with many Germans being particularly sensitive to the processing of their personal data – owing to the country’s totalitarian history and the role of surveillance in both Nazi Germany and East Germany.

So, it is disappointing that the German government is trying to push through Parliament, at record speed, a “security package” that would increase biometric surveillance at an unprecedented scale. The proposed measures contravene the government’s own coalition agreement, and undermine European law and the German constitution.

In response to a knife attack in the western German town of Solingen in late August, the government has introduced a so-called “security package” consisting of a bouquet of measures to tighten asylum rules and introduce new powers for law enforcement authorities.

Among them, three stand out due to their possibly disastrous effect on fundamental rights online. 

Biometric Surveillance  

The German government wants to allow law enforcement authorities to identify suspects by comparing their biometric data (audio, video, and image data) to all data publicly available on the internet. Beyond the host of harms related to facial recognition software, this would mean that any photos or videos uploaded to the internet would become part of the government’s surveillance infrastructure.

This would include especially sensitive material, such as pictures taken at political protests or other contexts directly connected to the exercise of fundamental rights. This could be abused to track individuals and create nuanced profiles of their everyday activities. Experts have highlighted the many unanswered technical questions in the government’s draft bill. The proposal contradicts the government’s own coalition agreement, which commits to preventing biometric surveillance in Germany.

The proposal also contravenes the recently adopted European AI Act, which bans the use of AI systems that create or expand facial recognition databases. While the AI Act includes exceptions for national security, Member States may ban biometric remote identification systems at the national level. Given the coalition agreement, German civil society groups have been hoping for such a prohibition, rather than the introduction of new powers.

These sweeping new powers would be granted not just to law enforcement authorities: the Federal Office for Migration and Asylum would be allowed to identify asylum seekers who do not carry IDs by comparing their biometric data to “internet data.” Beyond the obvious disproportionality of such powers, it is well documented that facial recognition software is rife with racial biases, performing significantly worse on images of people of color. The draft law does not include any meaningful measures to protect against discriminatory outcomes, nor does it acknowledge the limitations of facial recognition.

Predictive Policing 

Germany also wants to introduce AI-enabled mining of any data held by law enforcement authorities, which is often used for predictive policing. This would include data from anyone who ever filed a complaint, served as a witness, or ended up in a police database for being a victim of a crime. Beyond this obvious overreach, data mining for predictive policing threatens fundamental rights like the right to privacy and has been shown to exacerbate racial discrimination.

The severe negative impacts of data mining by law enforcement authorities have been confirmed by Germany’s highest court, which ruled that the Palantir-enabled practices by two German states are unconstitutional.  Regardless, the draft bill seeks to introduce similar powers across the country.  

Police Access to More User Data 

The government wants to exploit an already-controversial provision of the recently adopted Digital Services Act (DSA). The law, which regulates online platforms in the European Union, has been criticized for requiring providers to proactively share user data with law enforcement authorities in potential cases of violent crime. Due to its unclear definition, the provision risks undermining freedom of expression online, as providers might be pressured to share more rather than less data to avoid DSA fines.

Frustrated by the low volume of cases forwarded by providers, the German government now suggests expanding the DSA to include specific criminal offences for which companies must share user data. While it is unrealistic to update European regulations as complex as the DSA so shortly after its adoption, this proposal shows that protecting fundamental rights online is not a priority for this government. 

Next Steps

Meanwhile, thousands have protested the security package in Berlin. Moreover, experts at the parliament’s hearing and German civil society groups are sending a clear signal: the government’s plans undermine fundamental rights, violate European law, and walk back the coalition parties’ own promises. EFF stands with the opponents of these proposals. We must defend fundamental rights more decidedly than ever.  


Svea Windwehr

EFF to Fifth Circuit: Age Verification Laws Will Hurt More Than They Help

2 months 2 weeks ago

EFF, along with the ACLU and the ACLU of Mississippi, filed an amicus brief on Thursday asking a federal appellate court to continue to block Mississippi’s HB 1126—a bill that imposes age verification mandates on social media services across the internet.

Our friend-of-the-court brief, filed in the U.S. Court of Appeals for the Fifth Circuit, argues that HB 1126 is “an extraordinary censorship law that violates all internet users’ First Amendment rights to speak and to access protected speech” online.

HB 1126 forces social media sites to verify the age of every user and requires minors to get explicit parental consent before accessing online spaces. It also pressures those sites to monitor and censor content on broad, vaguely defined topics—many of which involve constitutionally protected speech. These sweeping provisions create significant barriers to the free and open internet and “force adults and minors alike to sacrifice anonymity, privacy, and security to engage in protected online expression.” A federal district court already prevented HB 1126 from going into effect, ruling that it likely violated the First Amendment.

Blocking Minors from Vital Online Spaces

At the heart of our opposition to HB 1126 is its dangerous impact on young people’s free expression. Minors enjoy the same First Amendment right as adults to access and engage in protected speech online.

“No legal authority permits lawmakers to burden adults’ access to political, religious, educational, and artistic speech with restrictive age-verification regimes out of a concern for what minors might see. Nor is there any legal authority that permits lawmakers to block minors categorically from engaging in protected expression on general purpose internet sites like those regulated by HB 1126.”

Social media sites are not just entertainment hubs; they are diverse and important spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics. As our brief explains, minors’ access to these online spaces “is essential to their growth into productive members of adult society because it helps them develop their own ideas, learn to express themselves, and engage productively with others in our democratic public sphere.” 

Social media also “enables individuals whose voices would otherwise not be heard to make vital and even lifesaving connections with one another, and to share their unique perspectives more widely.” LGBTQ+ youth, for example, turn to social media for community, exploration, and support, while others find help in forums that discuss mental health, disability, eating disorders, or domestic violence.

HB 1126’s age-verification regime places unnecessary barriers between young people and these crucial resources. The law compels platforms to broadly restrict minors’ access to a vague list of topics—the majority of which concern constitutionally protected speech—that Mississippi deems “harmful” for minors.

First Amendment Rights: Protection for All

The impact of HB 1126 is not limited to minors—it also places unnecessary and unconstitutional restrictions on adults’ speech. The law requires all users to verify their age before accessing social media, which could entirely block access for the millions of U.S. adults who lack government-issued ID. Should a person who takes public transit every day need to get a driver’s license just to get online? Would you want everything you do online to be linked to your government-issued ID?

HB 1126 also strips away users’ protected right to online anonymity, leaving them vulnerable to exposure and harassment and chilling them from speaking freely on social media. As our brief recounts, the vast majority of internet users have taken steps to minimize their digital footprints and even to “avoid observation by specific people, organizations, or the government.”

“By forcibly tying internet users’ online interactions to their real-world identities, HB 1126 will chill their ability to engage in dissent, discuss sensitive, personal, controversial, or stigmatized content, or seek help from online communities.”

Online Age Verification: A Privacy Nightmare

Finally, HB 1126 forces social media sites to collect users’ most sensitive and immutable data, turning them into prime targets for hackers. In an era where data breaches and identity theft are alarmingly common, HB 1126 puts every user’s personal data at risk. Furthermore, the process of age verification often involves third-party services that profit from collecting and selling user data. This means that the sensitive personal information on your ID—such as your name, home address, and date of birth—could be shared with a web of data brokers, advertisers, and other intermediary entities.

“Under the plain language of HB 1126, those intermediaries are not required to delete users’ identifying data and, unlike the online service providers themselves, they are also not restricted from sharing, disclosing, or selling that sensitive data. Indeed, the incentives are the opposite: to share the data widely.”

No one—neither minors nor adults—should have to sacrifice their privacy or anonymity in order to exercise their free speech rights online.

Courts Continue To Block Laws Like Mississippi’s

Online age verification laws like HB 1126 are not new, and courts across the country have consistently ruled them unconstitutional. In cases from Arkansas to Ohio to Utah, courts have struck down similar online age-verification mandates because they burden users’ access to, and ability to engage with, protected speech.

While Mississippi may have a legitimate interest in protecting children from harm, as the Supreme Court has held, “that does not include a free-floating power to restrict the ideas to which children may be exposed.” By imposing age verification requirements on all users, laws like HB 1126 undermine the First Amendment rights of both minors and adults, pose serious privacy and security risks, and chill users from accessing one of the most powerful expressive mediums of our time.

For these reasons, we urge the Fifth Circuit to follow suit and continue to block Mississippi HB 1126.

Molly Buckley

Digital Inclusion Week, Highlighting CCTV Cambridge's Digital Equity Work

2 months 2 weeks ago

In honor of Digital Inclusion Week, October 7-11, 2024, it’s an honor to uplift one of our Electronic Frontier Alliance (EFA) members doing great work to make sure technology benefits everyone by addressing the digital divide: CCTV Cambridge. This year CCTV Cambridge partnered with the City of Cambridge and other local institutions to host a Digital Navigator program, which aims to help bridge the digital divide in Cambridge by assessing community needs and acting as a kind of technological social worker. Digital Navigators (DNs) have led to better outreach, assessment, and community connection.

Making a difference in communities affected by the digital divide is impactful work. So far the DNs have helped many people access resources online, distributed 50 ThinkPad laptops installed with Windows 10 and Microsoft Office, and distributed 15 Wi-Fi hotspots with two years of service paid for by T-Mobile. This is groundbreaking because typically people get Chromebooks on loan that have limited capabilities. The beauty of these devices is that you can work and learn on them with reliable, high-speed internet access, and they can be used anywhere.

Samara Murrell, Coordinator of CCTV’s Digital Navigator Program, states:

"Being part of a solution that attempts to ensure that everyone has equal access to information, education and job opportunities, so that we can all fully participate in our society, is some of the best, most inspiring and honorable work that one can do."

Left to Right: DN Coordinator Samara Murrell and DN’s Lida Griffin, Dana Grotenstein, and Eden Wagayehu

CCTV Cambridge is also slated to start hosting classes in 2025. They hope to offer intermediate Windows and Microsoft Office to the cohort as a first step, and then advanced Excel for returning members of the cohort.

Maritza Grooms, CCTV Cambridge’s Associate Director of Community Relations, says:

"CCTV is incredibly grateful and honored to be the hub and headquarters of the Digital Navigator Pilot Program in partnership with the City of Cambridge, Cambridge Public Library, Cambridge Public School Department, and Just-A-Start. This program is crucial to serving Cambridge's most vulnerable and marginalized communities and ensuring they have the access to resources they need to be able to fully participate in society in this digital age. We appreciate any and all support to help us make the Digital Navigator Program a continued sustainable program beyond the pilot. Please contact me at maritza@cctvcambridge.org to find out how you can support this program or visit cctvcambridge.org/support to support today."

There are countless examples of the impact CCTV’s DNs have already had. One library patron who came in to take a tech class had her own laptop because of the DNs, which enabled her to take a tech support class and advance her career. A young college student studying bioengineering needed a laptop and hotspot to continue his studies, and he recently got them from CCTV Cambridge.

Kudos to CCTV Cambridge for addressing the disparities of the digital divide in your community with your awesome digital inclusion work!
To connect with other members of the EFA doing impactful work in your area, please check out our allies page: https://efa.eff.org/allies

Christopher Vines

Join the Movement for Public Broadband in PDX

2 months 2 weeks ago

Did you know the City of Portland, Oregon, already owns and operates a fiber-optic broadband network? It's called IRNE (Integrated Regional Network Enterprise), and despite having it in place Portlanders are forced to pay through the nose for internet access because of a lack of meaningful competition. Even after 24 years of IRNE, too many in PDX struggle to afford and access fast internet service in their homes and small businesses.

EFF and local Electronic Frontier Alliance members Personal TelCo Project and Community Broadband PDX are calling on city council and mayoral candidates to sign a pledge to support an open-access business model, in which the city owns and leases the “dark” fiber. That way, services can be run by local nonprofits, local businesses, or community cooperatives. The hope is that these local services can then grow to support retail service and meet the needs of more residents.

This change will only happen if we show our support. Join the campaign today to stay up to date and find volunteer opportunities. Also, come out for fun and learning at The People's Digital Safety Fair on Saturday, October 19, for talks and workshops from the local coalition. Let's break the private ISP monopoly power in Portland!

Leading this campaign is Community Broadband PDX, whose mission is to 'guide Portlanders to create a new option for fast internet access: publicly owned and transparently operated, affordable, secure, fast and reliable broadband infrastructure that is always available to every neighborhood and community.' Jennifer Redman, board president and campaign manager of Community Broadband PDX (and formerly the Community Broadband Planning Manager in the City of Portland's Bureau of Planning and Sustainability), said this when asked about the campaign to expand IRNE into affordable, accessible internet for all Portlanders:

“Expanding access to the Integrated Regional Network Enterprise (IRNE) is the current campaign focus because within municipal government it is often easier to expand existing programs rather than create entirely new ones - especially if there is a major capital investment required. IRNE is staffed, there are regional partners and the program is highly effective. Yes it is limited in scope but there are active expansion plans.   

Leveraging IRNE allows us to advocate for policies like “Dig Once Dig Smart” every time the ground is open for any type of development in the City of Portland - publicly owned-fiber conduit must be included. The current governmental structure has made implementing these policies extremely difficult because of the siloed nature of how the City is run. For example, the water bureau doesn’t want to be told what to do by the technology services bureau. This should significantly improve with our charter change. Currently the City of Portland really operates as a group of disparate systems that sometimes work together. I hope that under a real city manager, the City is run as one system.

IRNE already partners with Link Oregon - which provides the “retail” network services for many statewide educational and other non-profit institutions.  The City is comfortable with this model - IRNE builds and manages the dark fiber network while partners provide the retail or “lit" service. Let’s grow local ISPs that keep dollars in Portland as opposed to corporate out-of-state providers like Comcast and Century Link.”

The time is now to move Portland forward and make access to the publicly owned fiber optic network available to everyone. As explained by Russell Senior, President and member of the Board of Directors of Personal TelCo Project, this would bring major economic and workforce development advantages to Portland:

“Our private internet access providers exploit their power to gouge us all with arbitrary prices, because our only alternative to paying them whatever they ask is to do without. The funds we pay these companies ends up with far away investors, on the order of $500 million per year in Multnomah County alone. Much of that money could be staying in our pockets and circulating locally if we had access they couldn't choke off.

I learned most of my professional skills from information I found on the Internet. I got a good job, and have a successful career because of the open source software tools that I received from people who shared it on the internet. The internet is an immense store of human knowledge, and ready access to it is an essential part of developing into a fruitful, socially useful and fulfilled person.”

Portland is currently an island of expensive, privately owned internet service infrastructure: every county surrounding Portland is building or operating affordable, publicly owned, publicly available, super-fast fiber-optic broadband networks. Fast internet access in Portland remains expensive and limited to neighborhoods that provide the highest profits for the few private internet service providers (ISPs). Individual prosperity and a robust local economy are driven by universal, affordable access to fast internet service. A climate-resilient city needs robust, publicly owned and available fiber-optic broadband infrastructure. Creating a digitally equitable and just city depends on providing access to fast internet service at an affordable cost for everyone. That is why we are calling on city officials to take the pledge to support open-access internet in Portland.


Join the campaign to make access to the city owned fiber optic network available to everyone. Let’s break the private ISP monopoly power in Portland!

Christopher Vines

Vote for EFF’s 'How to Fix the Internet’ Podcast in the Signal Awards!

2 months 2 weeks ago

We’re thrilled to announce that EFF’s “How to Fix the Internet” podcast is a finalist in the Signal Awards 3rd Annual Listener's Choice competition. Now we need your vote to put us over the top!

Vote now!

We’re barraged by dystopian stories about technology’s impact on our lives and our futures — from tracking-based surveillance capitalism to the dominance of a few large platforms choking innovation to the growing pressure by authoritarian governments to control what we see and say. The landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building a better future.

That’s where our podcast comes in. Through curious conversations with some of the leading minds in law and technology, “How to Fix the Internet” explores creative solutions to some of today’s biggest tech challenges.  

Over our five seasons, we've had well-known, mainstream names: Marc Maron discussed patent trolls, Adam Savage discussed the right to tinker and repair, Dave Eggers discussed when to set technology aside, and U.S. Sen. Ron Wyden, D-OR, discussed how Congress can foster an internet that benefits everyone. But we've also had lesser-known names who do vital, thought-provoking work: Taiwan's then-Minister of Digital Affairs Audrey Tang discussed seeing democracy as a kind of open-source social technology, Alice Marwick discussed the spread of conspiracy theories and disinformation, Catherine Bracy discussed getting tech companies to support (not exploit) the communities they call home, and Chancey Fleet discussed the need to include people with disabilities in every step of tech development and deployment.

 That’s just a taste. If you haven’t checked us out before, listen today to become deeply informed on vital technology issues and join the movement working to build a better technological future. 

 And if you’ve liked what you’ve heard, please throw us a vote in the Signal Awards competition! 

Vote Now!

Our deepest thanks to all our brilliant guests, and to the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible. 

Josh Richman

Digital ID Isn't for Everybody, and That's Okay | EFFector 36.13

2 months 2 weeks ago

Need help staying up-to-date on the latest in the digital rights movement? You're in luck! In our latest newsletter, we outline the privacy protections needed for digital IDs, explain our call for the U.S. Supreme Court to strike down an unconstitutional age verification law, and call out the harms of AI monitoring software deployed in schools.

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.13 - Digital ID Isn't for Everybody, and That's Okay

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

How to Stop Advertisers From Tracking Your Teen Across the Internet

2 months 2 weeks ago

This post was written by EFF fellow Miranda McClellan.

Teens between the ages of 13 and 17 are being tracked across the internet using identifiers known as Advertising IDs. When children turn 13, they age out of the data protections provided by the Children's Online Privacy Protection Act (COPPA). They then become targets for data brokers, which collect their information from social media apps, shopping history, location tracking services, and more, then process and sell it. Deleting Advertising IDs from your teen's devices can increase their privacy and stop advertisers from collecting their data.

What is an Advertising ID?

Advertising identifiers – Android's Advertising ID (AAID) and iOS's Identifier for Advertisers (IDFA) – enable third-party advertising by providing device and activity tracking information to advertisers. The advertising ID is a string of letters and numbers that uniquely identifies your phone, tablet, or other smart device.
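To make concrete how accessible this identifier is, here is a minimal Kotlin sketch, assuming the Google Play Services ads-identifier library is present on the device; the function name is ours, for illustration only:

```kotlin
import android.content.Context
import com.google.android.gms.ads.identifier.AdvertisingIdClient
import kotlin.concurrent.thread

// Illustrative sketch: any ordinary app (or an ad SDK bundled inside it) can
// read the advertising ID this way. The call must run off the main thread.
fun logAdvertisingId(context: Context) {
    thread {
        val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
        // A UUID-style string that uniquely identifies this device to advertisers.
        println("Ad ID: ${info.id}, limit ad tracking: ${info.isLimitAdTrackingEnabled}")
        // On Android 12+, after the user deletes the ID (see the steps below),
        // this returns a string of zeros instead of a usable identifier.
    }
}
```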

How Teens Are Left Vulnerable

In most countries, children must be over 13 years old to manage their own Google account without a supervisory parent account through Google Family Link. At 13, they gain the right to manage their own account and app downloads without parental supervision—and they also gain an Advertising ID.

At 13, children transition abruptly between two extremes—from potential helicopter parental surveillance to surveillance advertising that connects their online activity and search history to marketers serving targeted ads.

Thirteen is a historically significant age. In the United States, both Facebook and Instagram require users to be at least 13 years old to make an account, though many children pretend to be older. The Children’s Online Privacy Protection Act (COPPA), a federal law, requires companies to obtain “verifiable parental consent” before collecting personal information from children under 13 for commercial purposes.

But this means that teens can lose valuable privacy protections even before becoming adults.

How to Protect Children and Teens from Tracking

Here are a few steps we recommend to protect children and teens from behavioral tracking and other privacy-invasive advertising techniques:

  • Delete advertising IDs for minors aged 13-17.
  • Require schools using Chromebooks, Android tablets, or iPads to educate students and parents about deleting advertising IDs off school devices and accounts to preserve student privacy.
  • Advocate for extended privacy protections for everyone.
How to Delete Advertising IDs

Advertising IDs track devices and activity from connected accounts. Both Android and iOS users can reset or delete their advertising ID on the device. Doing so removes a key component advertisers use to identify audiences for targeted ad delivery. Users will still see ads after resetting or deleting their advertising ID, but those ads will be severed from their previous online behavior and will be less personally targeted.

Follow these instructions, updated from a previous EFF blog post:

On Android

With the release of Android 12, Google began allowing users to delete their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy > Privacy > Ads. Tap “Delete advertising ID,” then tap it again on the next page to confirm. This will prevent any app on your phone from accessing it in the future.

The Android opt out should be available to most users on Android 12, but may not be available on older versions. If you don't see an option to "delete" your ad ID, you can use the older version of Android's privacy controls to reset it and ask apps not to track you.

On iOS

Apple requires apps to ask permission before they can access your IDFA. When you install a new app, it may ask you for permission to track you.

Select “Ask App Not to Track” to deny it IDFA access.

To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking.

In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your IDFA.

You can set the "Allow Apps to Request to Track" switch to the "off" position (the slider is to the left and the background is gray). This will prevent apps from asking to track you in the future. If you granted apps permission to track you in the past, this will also prompt you to ask those apps to stop tracking. You also have the option to grant or revoke tracking access on a per-app basis.

Apple has its own targeted advertising system, separate from the third-party tracking it enables with IDFA. To disable it, navigate to Settings > Privacy > Apple Advertising and set the “Personalized Ads” switch to the “off” position to disable Apple’s ad targeting.

Miranda McClellan served as a summer fellow at EFF on the Public Interest Technology team. Miranda has a B.S. and M.Eng. in Computer Science from MIT. Before joining EFF, Miranda completed a Fulbright research fellowship in Spain to apply machine learning to 5G networks, worked as a data scientist at Microsoft where she built machine learning models to detect malware, and was a fellow at the Internet Society. In her free time, Miranda enjoys running, hiking, and crochet.

At EFF, Miranda conducted research focused on understanding the data broker ecosystem and enhancing children’s privacy. She received funding from the National Science Policy Network.

Guest Author

EFF Awards Night: Celebrating Digital Rights Founders Advancing Free Speech and Access to Information Around the World

2 months 2 weeks ago

Digital freedom and investigative reporting about technology have been at risk amid political and economic strife around the world. This year's annual EFF Awards honored the achievements of people helping to ensure that the power of technology, the right to privacy and free speech, and access to information are available to people all over the world.

On September 12 in San Francisco’s Presidio, EFF presented awards to investigative news organization 404 Media, founder of Latin American digital rights group Fundación Karisma Carolina Botero, and Cairo-based nonprofit Connecting Humanity, which helps Palestinians in Gaza regain access to the internet.

All our award winners overcame roadblocks to build organizations that protect and advocate for people’s rights to online free speech, digital privacy, and the ability to live free from government surveillance.  

If you missed the ceremony in San Francisco, you can still catch what happened on YouTube and the Internet Archive. You can also find a transcript of the live captions.

Watch Now

EFF Awards Ceremony on YouTube

EFF Executive Director Cindy Cohn kicked off the ceremony, highlighting some of EFF's recent achievements and milestones, including our How to Fix the Internet podcast, now in its fifth season, which won two awards this year and saw a 21 percent increase in downloads.

Cindy talked about EFF’s legal work defending a security researcher at this year’s DEF CON who was threatened for his planned talk about a security vulnerability he discovered. EFF’s Coders’ Rights team helped the researcher avoid a lawsuit and present his talk on the conference’s last day. Another win: EFF fought back to ensure that police drone footage was not exempt from public records requests. As a result, “we can see what the cops are seeing,” Cindy said.

EFF Executive Director Cindy Cohn kicks off the ceremony.

“It can be truly exhausting and scary to feel the weight of the world’s problems on our shoulders, but I want to let you in on a secret,” she said. “You’re not alone, and we’re not alone. And, as a wise friend once said, courage is contagious.” 

Cindy turned the program over to guest speaker Elizabeth Minkel, journalist and co-host of the long-running fan culture podcast Fansplaining. Elizabeth kept the audience giggling as she recounted her personal fandom history with Buffy the Vampire Slayer and later Harry Potter, and how EFF’s work defending fair use and fighting copyright maximalism has helped fandom art and fiction thrive despite attacks from movie studios and entertainment behemoths.

Elizabeth Minkel—co-host and editor of the Fansplaining podcast, journalist, and editor.

“The EFF’s fight for open creativity online has been helping fandom for longer than I’ve had an internet connection,” Minkel said. “Your values align with what I think of as the true spirit of transformative fandom, free and open creativity, and a strong push back against those copyright strangleholds in the homogenization of the web.”

Presenting the first award of the evening, EFF Director of Investigations Dave Maass took the stage to introduce 404 Media, winner of EFF’s Award for Fearless Journalism. The outlet’s founders were all tech journalists who worked together at Vice Media’s Motherboard when its parent company filed for bankruptcy in May 2023. All were out of a job, part of a terrible trend of reporter layoffs and shuttered news sites as media businesses struggle financially.

Journalists Jason Koebler, Sam Cole, Joseph Cox, and Emanuel Maiberg together resolved to go out on their own; in 2023 they started 404 Media, aiming to uncover stories about how technology impacts people in the real world.

Since its founding, journalist-owned 404 Media has published scoops on hacking, cybersecurity, cybercrime, artificial intelligence, and consumer rights. They uncovered the many ways tech companies and speech platforms sell users' data without their knowledge or consent to AI companies for training purposes. Their reporting led to Apple banning apps that help create non-consensual sexual AI imagery, and revealed a feature on New York City subway passes that enabled rider location tracking, leading the subway system to shut down the feature.

Jason Koebler remotely accepts the EFF Award for Fearless Journalism on behalf of 404 Media.

“We believe that there is a huge demand for journalism that is written by humans for other humans, and that real people do not want to read AI-generated news stories that are written for search engine optimization algorithms and social media,” said 404 Media's Jason Koebler in a video recorded for the ceremony.

EFF Director for International Freedom of Expression Jillian York introduced the next award recipient, Cairo-based nonprofit Connecting Humanity represented by Egyptian journalist and activist Mirna El Helbawi.

The organization collects and distributes embedded SIMs (eSIMs), a software version of the physical chip used to connect a phone to cellular networks and the internet. The eSIMs have helped thousands of Gazans stay digitally connected with family and the outside world, speak to loved ones at hospitals, and seek emergency help amid telecom and internet blackouts during Israel's war with Hamas.

Connecting Humanity has distributed 400,000 eSIMs to people in Gaza since October. The eSIMs have been used to save families from under the rubble, allow people to resume their online jobs and attend online school, connect hospitals in Gaza, and assist journalists reporting on the ground, Mirna said.

Mirna El Helbawi accepts EFF Award on behalf of Connecting Humanity.

“This award is for Connecting Humanity’s small team of volunteers, who worked day and night to connect people in Gaza for the past 11 months and are still going strong,” she told the audience. “They are the most selfless people I have ever met. Not a single day has passed without this team doing their best to ensure that people are connecting in Gaza.”

EFF Policy Director for Global Privacy Katitza Rodriguez took the stage next to introduce the night’s final honoree, Fundación Karisma founder and former executive director Carolina Botero. A researcher, lecturer, writer, and consultant, Carolina is among the foremost leaders in the fight for digital rights in Latin America.

Karisma has worked since 2003 to put digital privacy and security on policymaking agendas in Colombia and the region and ensure that technology protects human rights.

She played a key role in helping to defeat a copyright law that would have brought a DMCA-like notice-and-takedown regime to Colombia, threatening free expression. Her opposition to the measure made her a target of government surveillance, but even under intense pressure from the government, she refused to back down.

Karisma and other NGOs proposed amending Brazil’s intelligence law to strengthen monitoring, transparency, and accountability mechanisms, and fought to increase digital security for human rights and environmental activists, who are often targets of government tracking.

Carolina Botero receives the EFF Award for Fostering Digital Rights in Latin America.

“Quiet work is a particularly thankless aspect of our mission in countries like Colombia, where there are few resources and few capacities, and where these issues are not on the public agenda,” Carolina said in her remarks. She left her position at Karisma this year, opening the door for a new generation while leaving an inspiring digital rights legacy in Latin America.

EFF is grateful that it can honor and lift up the important work of these award winners, who work both behind the scenes and in very public ways to protect online privacy, access to information, free expression, and the ability to find community and communicate with loved ones and the world on the internet.

The night’s honorees saw injustices, rights violations, and roadblocks to information and free expression, and did something about it. We thank them.

And thank you to all EFF members around the world who make our work possible—public support is the reason we can push for a better internet. If you're interested in supporting our work, consider becoming an EFF member! You can get special gear as a token of our thanks and help support the digital freedom movement.

Of course, special thanks to the sponsors of this year’s EFF Awards: Dropbox and Electric Capital.

Karen Gullo

New Email Scam Includes Pictures of Your House. Don’t Fall For It.

2 months 3 weeks ago

You may have arrived at this post because you received an email with an attached PDF from a purported hacker who is demanding payment or else they will send compromising information—such as pictures sexual in nature—to all your friends and family. You’re searching for what to do in this frightening situation, and how to respond to an apparently personalized threat that even includes your actual “LastNameFirstName.pdf” and a picture of your house.

Don’t panic. Contrary to the claims in your email, you probably haven't been hacked (or at least, that's not what prompted that email). This is merely a new variation on an old scam—actually, a whole category of scams called "sextortion." This is a type of online phishing that targets people around the world and preys on digital-age fears. It generally uses publicly available information or information from data breaches, not information obtained from hacking the recipients of the emails specifically, and therefore it is very unlikely the sender has any "incriminating" photos or has actually hacked your accounts or devices.

We’ll talk about a few steps to take to protect yourself, but the first and foremost piece of advice we have: do not pay the ransom.

We have pasted an example of this email scam at the bottom of this post. The general gist is that a hacker claims to have compromised your computer and says they will release embarrassing information—such as images of you captured through your web camera or your pornographic browsing history—to your friends, family, and co-workers. The hacker promises to go away if you send them thousands of dollars, usually in bitcoin. This is different from a separate sextortion scam in which a stranger befriends a user and convinces them to exchange sexual content, then demands payment for secrecy; that is a much more perilous situation which requires a more careful response.

What makes the email especially alarming is that, to prove their authenticity, they begin the emails showing you your address, full name, and possibly a picture of your house. 

Again, this still doesn't mean you've been hacked. The scammers in this case likely found a data breach containing a list of names, emails, and home addresses, and are sending this email out to potentially millions of people, hoping that enough of them will be worried enough to pay up to make the scam profitable.

Here are some quick answers to the questions many people ask after receiving these emails.

They Have My Address and Phone Number! How Did They Get a Picture of My House?

Rest assured that the scammers were not in fact outside your house taking pictures. For better or worse, pictures of our houses are all over the internet. From Google Street View to real estate websites, finding a picture of someone’s house is trivial if you have their address. While public data on your home may be nerve-wracking, similar data about government property can have transparency benefits.

Unfortunately, in the modern age, data breaches are common, and massive sets of peoples’ personal information often make their way to the criminal corners of the Internet. Scammers likely obtained such a list or multiple lists including email addresses, names, phone numbers, and addresses for the express purpose of including a kernel of truth in an otherwise boilerplate mass email.

It’s harder to change your address and phone number than it is to change your password. The best thing you can do here is be aware that your information is out there and be careful of future scams using this information. Since this information (along with other leaked info such as your social security number) can be used for identity theft, it's a good idea to freeze your credit.

And of course, you should always change your password when you’re alerted that your information has been leaked in a breach. You can also use a service like Have I Been Pwned to check whether you have been part of one of the more well-known password dumps.
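For the technically inclined, the Pwned Passwords half of that service exposes a k-anonymity "range" API: you send only the first five characters of your password's SHA-1 hash and do the matching locally, so the password itself never leaves your machine. A minimal Kotlin (JVM) sketch, using only standard library classes:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.security.MessageDigest

// Returns how many times a password appears in known breach dumps, per
// Have I Been Pwned's range API. Only a 5-character hash prefix is sent.
fun pwnedCount(password: String): Int {
    val sha1 = MessageDigest.getInstance("SHA-1")
        .digest(password.toByteArray())
        .joinToString("") { "%02X".format(it) }
    val prefix = sha1.substring(0, 5)
    val suffix = sha1.substring(5)
    val response = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(URI("https://api.pwnedpasswords.com/range/$prefix")).build(),
        HttpResponse.BodyHandlers.ofString()
    ).body()
    // Each line is "HASH_SUFFIX:COUNT"; a match means the password has leaked.
    return response.lines()
        .firstOrNull { it.startsWith(suffix) }
        ?.substringAfter(':')?.trim()?.toIntOrNull() ?: 0
}

fun main() {
    println(pwnedCount("password123"))  // a large number, unsurprisingly
}
```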

Should I Respond to the Email?

Absolutely not. With this type of scam, the perpetrator relies on the likelihood that a small number of people will respond out of a batch of potentially millions. Fundamentally this isn't that much different from the old Nigerian prince scam, just with a different hook. By default they expect most people will not even open the email, let alone read it. But once they get a response—and a conversation is initiated—they will likely move into a more advanced stage of the scam. It’s better to not respond at all.

So, I Shouldn’t Pay the Ransom?

You should not pay the ransom. If you pay the ransom, you’re not only losing money, but you’re encouraging the scammers to continue phishing other people. If you do pay, then the scammers may also use that as a pressure point to continue to blackmail you, knowing that you’re susceptible.

What Should I Do Instead?

Unfortunately there isn’t much you can do. But there are a few basic security hygiene steps you can take that are always a good idea. Use a password manager to keep your passwords strong and unique. Moving forward, you should make sure to enable two-factor authentication whenever that is an option on your online accounts. You can also check out our Surveillance Self-Defense guide for more tips on how to protect your security and privacy online.
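As a concrete illustration of "strong and unique," here is a short Kotlin sketch of the kind of generator a password manager runs for you; the alphabet and length are arbitrary choices for this example:

```kotlin
import java.security.SecureRandom

// Generates a random password from a cryptographically secure source.
// Every account should get its own, so one breach can't unlock the rest.
fun randomPassword(length: Int = 20): String {
    val alphabet = ('a'..'z') + ('A'..'Z') + ('0'..'9') + "!@#$%^&*-_".toList()
    val rng = SecureRandom()
    return buildString(length) {
        repeat(length) { append(alphabet[rng.nextInt(alphabet.size)]) }
    }
}

fun main() {
    println(randomPassword())  // e.g. "q7!Rf2_xB9cLw0ZsMatv" -- never reused
}
```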

One other thing to do to protect yourself is apply a cover over your computer’s camera. We offer some through our store, but a small strip of electrical tape will do. This can help ease your mind if you're worried that a rogue app may be turning your camera on, or that you left it on yourself—unlikely, but possible scenarios. 

We know this experience isn't fun, but it's also not the end of the world. Just ignore the scammers' empty threats and practice good security hygiene going forward!

Overall this isn’t an issue that is up to consumers to fix. The root of the problem is that data brokers and nearly every other company have been allowed to store too much information about us for too long. Inevitably this data gets breached and makes its way into criminal markets where it is sold and traded and used for scams like this one. The most effective way to combat this would be with comprehensive federal privacy laws. Because, if the data doesn’t exist, it can’t be leaked. The best thing for you to do is advocate for such a law in Congress, or at the state level. 

Below are real examples of the scam that were sent to EFF employees. The scam text is similar across many different victims.

Example 1

[Name],

I know that calling [Phone Number] or visiting [your address] would be a convenient way to contact you in case you don't act. Don't even try to escape from this. You've no idea what I'm capable of in [Your City].

I suggest you read this message carefully. Take a moment to chill, breathe, and analyze it thoroughly. 'Cause we're about to discuss a deal between you and me, and I don't play games. You do not know me but I know you very well and right now, you are wondering how, right? Well, you've been treading on thin ice with your browsing habits, scrolling through those videos and clicking on links, stumbling upon some not-so-safe sites. I placed a Malware on a porn website & you visited it to watch(you get my drift). While you were watching those videos, your smartphone began working as a RDP (Remote Control) which provided me complete control over your device. I can peep at everything on your display, flick on your camera and mic, and you wouldn't even suspect a thing. Oh, and I have got access to all your emails, contacts, and social media accounts too.

Been keeping tabs on your pathetic life for a while now. It's simply your bad luck that I accessed your misdemeanor. I gave in more time than I should have looking into your personal life. Extracted quite a bit of juicy info from your system. and I've seen it all. Yeah, Yeah, I've got footage of you doing filthy things in your room (nice setup, by the way). I then developed videos and screenshots where on one side of the screen, there's whatever garbage you were enjoying, and on the other half, its your vacant face. With simply a single click, I can send this video to every single of your contacts.

I see you are getting anxious, but let's get real. Actually, I want to wipe the slate clean, and allow you to get on with your daily life and wipe your slate clean. I will present you two alternatives. First Alternative is to disregard this email. Let us see what is going to happen if you take this path. Your video will get sent to all your contacts. The video was lit, and I can't even fathom the humiliation you'll endure when your colleagues, friends, and fam check it out. But hey, that's life, ain't it? Don't be playing the victim here.

Option 2 is to pay me, and be confidential about it. We will name it my “privacy charges”. let me tell you what will happen if you opt this option. Your secret remains private. I will destroy all the data and evidence once you come through with the payment. You'll transfer the payment via Bitcoin only.

Pay attention, I'm telling you straight: 'We gotta make a deal'. I want you to know I'm coming at you with good intentions. My word is my bond.

Required Amount: $1950

BITCOIN ADDRESS: [REDACTED]

Let me tell ya, it's peanuts for your tranquility.

Notice: You now have one day in order to make the payment and I will only accept Bitcoins (I have a special pixel within this message, and now I know that you have read through this message). My system will catch that Bitcoin payment and wipe out all the dirt I got on you. Don't even think about replying to this or negotiating, it's pointless. The email and wallet are custom-made for you, untraceable. If I suspect that you've shared or discussed this email with anyone else, the garbage will instantly start getting sent to your contacts. And don't even think about turning off your phone or resetting it to factory settings. It's pointless. I don't make mistakes, [Name].

Can you notice something here?

Honestly, those online tips about covering your camera aren't as useless as they seem. I am waiting for my payment…

Example 2

[NAME],
Is visiting [ADDRESS] a better way to contact in case you don't act
Beautiful neighborhood btw
It's important you pay attention to this message right now. Take a moment to chill, breathe, and analyze it thoroughly. We're talking about something serious here, and I ain't playing games. You do not know anything about me but I know you very well and right now, you are thinking how, correct?
Well, You've been treading on thin ice with your browsing habits, scrolling through those filthy videos and clicking on links, stumbling upon some not-so-safe sites. I installed a Spyware called "Pegasus" on a app you frequently use. Pegasus is a spyware that is designed to be covertly and remotely installed on mobile phones running iOS and Android. While you were busy watching videos, your device started out working as a RDP (Remote Protocol) which gave me total control over your device. I can peep at everything on your display, flick on your cam and mic, and you wouldn't even notice. Oh, and I've got access to all your emails, contacts, and social media accounts too.
What I want?
Been keeping tabs on your pathetic existence for a while now. It's just your hard luck that I accessed your misdemeanor. I invested in more time than I probably should've looking into your personal life. Extracted quite a bit of juicy info from your system. and I've seen it all. Yeah, Yeah, I've got footage of you doing embarrassing things in your room (nice setup, by the way). I then developed videos and screenshots where on one side of the screen, there's whatever garbage you were enjoying, and on the other part, it is your vacant face. With just a click, I can send this filth to all of your contacts.
What can you do?
I see you are getting anxious, but let's get real. Wholeheartedly, I am willing to wipe the slate clean, and let you move on with your regular life and wipe your slate clean. I am about to present you two alternatives. Either turn a blind eye to this warning (bad for you and your family) or pay me a small amount to finish this mattter forever. Let us understand those 2 options in details.
First Option is to ignore this email. Let us see what will happen if you select this path. I will send your video to your contacts. The video was straight fire, and I can't even fathom the embarrasement you'll endure when your colleagues, friends, and fam check it out. But hey, that's life, ain't it? Don't be playing the victim here.
Other Option is to pay me, and be confidential about it. We will name it my “privacy fee”. let me tell you what happens when you go with this choice. Your filthy secret will remain private. I will wipe everything clean once you send payment. You'll transfer the payment through Bitcoin only. I want you to know I'm aiming for a win-win here. I'm a person of integrity.
Transfer Amount: USD 2000
My Bitcoin Address: [BITCOIN ADDRESS]
Or, (Here is your Bitcoin QR code, you can scan it):
[IMAGE OF A QR CODE]
Once you pay up, you'll sleep like a baby. I keep my word.
Important: You now have one day to sort this out. (I've a special pixel in this message, and now I know that you've read through this mail). My system will catch that Bitcoin payment and wipe out all the dirt I got on you. Don't even think about replying to this, it's pointless. The email and wallet are custom-made for you, untraceable. I don't make mistakes, [NAME]. If I notice that you've shared or discussed this mail with anyone else, your garbage will instantly start getting sent to your contacts. And don't even think about turning off your phone or resetting it to factory settings. It's pointless.
Honestly, those online tips about covering your camera aren't as useless as they seem.
Don't dwell on it. Take it as a little lesson and keep your guard up in the future.

 

Cooper Quintin

FTC Report Confirms: Commercial Surveillance is Out of Control

2 months 3 weeks ago

A new Federal Trade Commission (FTC) report confirms what EFF has been warning about for years: tech giants are widely harvesting and sharing your personal information to fuel their online behavioral advertising businesses. This four-year investigation into the data practices of nine social media and video platforms, including Facebook, YouTube, and X (formerly Twitter), demonstrates how commercial surveillance leaves consumers with little control over their privacy. While not every investigated company committed the same privacy violations, the conclusion is clear: companies prioritized profits over privacy. 

While EFF has long warned about these practices, the FTC’s investigation offers detailed evidence of how widespread and invasive commercial surveillance has become. Here are key takeaways from the report:

Companies Collected Personal Data Well Beyond Consumer Expectations

The FTC report confirms that companies collect data in ways that far exceed user expectations. They’re not just tracking activity on their platforms, but also monitoring activity on other websites and apps, gathering data on non-users, and buying personal information from third-party data brokers. Some companies could not, or would not, disclose exactly where their user data came from. 

The FTC found companies gathering detailed personal information, such as the websites you visit, your location data, your demographic information, and your interests, including sensitive interests like “divorce support” and “beer and spirits.” Some companies could only report high-level descriptions of the user attributes they tracked, while others produced spreadsheets with thousands of attributes. 

There’s Unfettered Data Sharing With Third Parties

Once companies collect your personal information, they don’t always keep it to themselves. Most companies reported sharing your personal information with third parties. Some companies shared so widely that they claimed it was impossible to provide a list of all third-party entities they had shared personal information with. For the companies that could identify recipients, the lists included law enforcement and other companies, both inside and outside the United States. 

Alarmingly, most companies had no vetting process for third parties before sharing your data, and none conducted ongoing checks to ensure compliance with data use restrictions. For example, when companies say they’re just sharing your personal information for something that seems unintrusive, like analytics, there's no guarantee your data is only used for the stated purpose. The lack of safeguards around data sharing exposes consumers to significant privacy risks.

Consumers Are Left in the Dark

The FTC report reveals a disturbing lack of transparency surrounding how personal data is collected, shared, and used by these companies. If companies can’t tell the FTC who they share data with, how can you expect them to be honest with you?

Data tracking and sharing happens behind the scenes, leaving users largely unaware of how much privacy they’re giving up on different platforms. These companies don't just collect data from their own platforms—they gather information about non-users and from users' activity across the web. This makes it nearly impossible for individuals to avoid having their personal data swept up into these vast digital surveillance networks. Even when companies offer privacy controls, the controls are often opaque or ineffective. The FTC also found that some companies were not actually deleting user data in response to deletion requests.

The scale and secrecy of commercial surveillance described by the FTC demonstrates why the burden of protecting privacy can’t fall solely on individual consumers.

Surveillance Advertising Business Models Are the Root Cause

The FTC report underscores a fundamental issue: these privacy violations are not just occasional missteps—they’re inherent to the business model of online behavioral advertising. Companies collect vast amounts of data to create detailed user profiles, primarily for targeted advertising. The profits generated from targeting ads based on personal information drive companies to develop increasingly invasive methods of data collection. The FTC found that the business models of most of the companies incentivized privacy violations.

FTC Report Underscores Urgent Need for Legislative Action

Without federal privacy legislation, companies have been able to collect and share billions of users’ personal data with few safeguards. The FTC report confirms that self-regulation has failed: companies’ internal data privacy policies are inconsistent and inadequate, allowing them to prioritize profits over privacy. In the FTC’s own words, “The report leaves no doubt that without significant action, the commercial surveillance ecosystem will only get worse.”

To address this, EFF advocates for federal privacy legislation. It should have many components, but these are key:

  1. Data minimization and user rights: Companies should be prohibited from processing a person’s data beyond what’s necessary to provide them what they asked for. Users should have the right to access their data, port it, correct it, and delete it.
  2. Ban on Online Behavioral Advertising: We should tackle the root cause of commercial surveillance by banning behavioral advertising. Otherwise, businesses will always find ways to skirt around privacy laws to keep profiting from intrusive data collection.
  3. Strong Enforcement with Private Right of Action: To give privacy legislation bite, people should have a private right of action to sue companies that violate their privacy. Otherwise, we’ll continue to see widespread violation of privacy laws due to limited government enforcement resources. 

Using online services shouldn't mean surrendering your personal information to countless companies to use as they see fit. When you sign up for an account on a website, you shouldn't need to worry about random third parties getting your information or every click being monitored to serve you ads. For now, our Privacy Badger extension can help you block some of the tracking technologies detailed in the FTC report. But the scale of commercial surveillance revealed in this investigation requires significant legislative action. Congress must act now and protect our data from corporate exploitation with a strong federal privacy law.

Lena Cohen

The UN General Assembly and the Fight Against the Cybercrime Treaty

2 months 3 weeks ago

Note on the update: The text has been revised to reflect the updated timeline for the UN General Assembly's consideration of the convention, which is now expected at the end of this year. The update also emphasizes that states should reject the convention. Additionally, a new section outlines the risks associated with broad evidence-sharing, particularly the lack of the robust safeguards needed to act as checks against the misuse of power. While most of the investigatory powers in Chapter IV of the convention use "shall" language, and are therefore mandatory, how the safeguards are applied is left to each state's discretion. Please note that our piece in Just Security and this post are based on the latest version of the UNCC.

The final draft text of the United Nations Convention Against Cybercrime, adopted last Thursday by the United Nations Ad Hoc Committee, is now headed to the UN General Assembly for a vote. The last hours of deliberations were marked by drama as Iran repeatedly, though unsuccessfully, attempted to remove almost all human rights protections that survived in the final text, receiving support from dozens of nations. Although Iran's efforts were defeated, the resulting text is still nothing to celebrate, as it remains riddled with unresolved human rights issues. States should vote no when the UNGA votes on the UN Cybercrime Treaty.

The Fight Moves to the UN General Assembly

States will likely consider adopting or rejecting the treaty at the UN General Assembly later this year. It is crucial for States to reject the treaty and vote against it. This moment offers a key opportunity to push back and build a strong, coordinated opposition. 

Over more than three years of advocacy, we consistently fought for clearer definitions, narrower scope, and stronger human rights protections. Since the start of the process, we made it clear that we didn’t believe the treaty was necessary, and, given the significant variation in privacy and human rights standards among member states, we raised concerns that the investigative powers adopted in the treaty may accommodate the most intrusive police surveillance practices across participating countries. Yet, we engaged in the discussions in good faith to attempt to ensure that the treaty would be narrow in scope and include strong, mandatory human rights safeguards.

However, in the end, the e-evidence sharing chapter remains broad in scope, and the rights section unfortunately falls short. Indeed, instead of merely facilitating cooperation on core cybercrime, this convention authorizes open-ended evidence gathering and sharing for any serious crime that a country chooses to punish with a sentence of at least four years or more, without meaningful limitations. While the convention excludes cooperation requests if there are substantial grounds to believe that the request is for the purpose of prosecuting or punishing someone based on their political beliefs or personal characteristics, it sets an extremely high bar for such exclusions and provides no operational safeguards or mechanisms to ensure that acts of transnational repression or human rights abuses are refused. 

The convention requires that these surveillance measures be proportionate, but it leaves critical safeguards such as judicial review, the need for grounds justifying surveillance, and the need for effective redress optional, despite the intrusive nature of the surveillance powers it adopts. Even more concerning, some states have already indicated that, in their view, the requirement for these critical safeguards is purely a matter of each state's domestic law, and many of those laws already fail to meet international human rights standards and lack meaningful judicial oversight or legal accountability.

The convention ended up accommodating the most intrusive practices. For example, blanket, generalized data retention is problematic under human rights law, but states that ignore these restrictions and have such powers under their domestic law can respond to assistance requests by sharing evidence retained through blanket data retention regimes. Similarly, encryption is protected under international human rights standards, but nothing in this convention prevents a state from employing encryption-breaking powers it has under its domestic law when responding to a cross-border request to access data.

The convention’s underlying flaw is the assumption that, in accommodating all countries' practices, states will act in good faith. This assumption is flawed, as it only increases the likelihood that the powerful global cooperation tools established by the convention will be abused.

The Unsettling Concessions in the Treaty Negotiations

The key function of the Convention, if ratified, will be to create a means of requiring legal assistance between countries that do not already have mutual legal assistance treaties (MLATs) or other cooperation agreements. This would include repressive regimes who may previously have been hindered in their attempts to engage in cross-border surveillance and data sharing, in some cases because their concerning human rights records have excluded them from MLATs. For countries that already have MLATs in place, the new treaty’s cross-border cooperation provisions may provide additional tools for assistance.

A striking pattern throughout the Convention as adopted is the leeway it gives states to decide whether or not to require human rights safeguards; almost all of the details of how human rights protections are implemented are left up to national law. For example, the scope and definition of many offenses "may"—or may not—include certain protective elements. In addition, states are not required to decline requests from other states to help investigate acts that are not crimes under their domestic law; they can choose to cooperate with those requests instead. Nor does the treaty obligate states to carefully scrutinize surveillance requests to ensure they are not pretextual attempts at persecution.

This pattern continues. For example, the list of core cybercrimes under the convention—a list that in the past swept in good-faith security research, whistleblowing, and journalistic activities—lets states choose whether specific elements must be present before an act is considered a crime, for example that the offense was done with dishonest intent or that it caused serious harm. Sadly, these elements are optional, not required.

Similarly, provisions on child sexual abuse material (CSAM) allow states to adopt exceptions that would ensure scientific, medical, artistic or educational materials are not wrongfully targeted, and that would exclude consensual, age-appropriate exchanges between minors, in line with international human rights standards. Again, these exceptions are optional, meaning that over-criminalization is not only consistent with the Convention but also qualifies for the Convention's cross-border surveillance and extradition mechanisms.

The broad discretion granted to states under the UN Cybercrime Treaty is a deliberate design intended to secure agreement among countries with varying levels of human rights protections. This flexibility, in certain cases, allows states with strong protections to uphold them, but it also permits those with weaker standards to maintain their lower levels of protection. This pattern was evident in the negotiations, where key human rights safeguards were made optional rather than mandatory, such as in the  list of core cybercrimes and provisions on cross-border surveillance.

These numerous options in the convention are also disappointing because they took the place of what would have been preferred: advancing the protections in their national laws as normative globally, and encouraging or requiring other states to adopt them. 

Exposing States’ Contempt For Rights

Iran’s last-ditch attempts to strip human rights protections from the treaty were a clear indicator of the challenges ahead. In the final debate, Iran proposed deleting provisions that would let states refuse international requests for personal data when there’s a risk of persecution based on political opinions, race, ethnicity, or other factors. Despite its disturbing implications, the proposal received 25 votes in support including from India, Cuba, China, Belarus, Korea, Nicaragua, Nigeria, Russia, and Venezuela.

That was just one of a series of proposals by Iran to remove specific human rights or procedural protections from the treaty at the last minute. Iran also requested a vote on deleting Article 6(2) of the treaty, another human rights clause that explicitly states that nothing in the Convention should be interpreted as allowing the suppression of human rights or fundamental freedoms, as well as Article 24, which establishes the conditions and safeguards—the essential checks and balances—for domestic and cross-border surveillance powers.

Twenty-three countries, including Jordan, India, and Sudan, voted to delete Article 6(2), with 26 abstentions from countries like China, Uganda, and Turkey. This means a total of 49 countries either supported or chose not to oppose the removal of this critical clause, showing a significant divide in the international community's commitment to protecting fundamental freedoms. And 11 countries voted to delete Article 24, with 23 abstentions.

These and other Iranian proposals would have removed nearly every reference to human rights from the convention, stripping the treaty of its substantive human rights protections and impacting both domestic legislation and international cooperation, leaving only the preamble and general clause, which states: "State Parties shall ensure that the implementation of their obligations under this Convention is consistent with their obligations under international human rights law.”

Additional Risks of Treaty Abuse

The risk that treaty powers can be abused to persecute people is real and urgent. It is even more concerning that some states have sought to declare (by announcing a future potential “reservation”) that they may intend to not follow Article 6.2 (general human rights clause), Article 24 (conditions and safeguards for domestic and cross border spying assistance), and Article 40(22) on human-rights-based grounds for refusing mutual legal assistance, despite their integral roles in the treaty.

Such reservations should be prohibited. According to the International Law Commission’s "Guide to Practice on Reservations to Treaties," a reservation is impermissible if it is incompatible with the object and purpose of the treaty. Human-rights safeguards, while not robust enough, are essential elements of the treaty, and reservations that undermine these safeguards could be considered incompatible with the treaty’s object and purpose. Furthermore, the Guide states that reservations should not affect essential elements necessary to the general tenor of the treaty, and if they do, such reservations impair the raison d’être of the treaty itself. Therefore, allowing reservations against human rights safeguards may not only undermine the treaty’s integrity but also challenge its legal and moral foundations.

All of the attacks on safeguards in the treaty process raise particular concerns when foreign governments use the treaty powers to demand information from U.S. companies, who should be able to rely on the strong standards embedded in US law. Where norms and safeguards were made optional, we can presume that many states will choose to forego them.

Cramming Even More Crimes Back In?

Throughout the negotiations, several delegations voiced concerns that the scope of the Convention did not cover enough crimes, including many that threaten online content protected by the rights to free expression and peaceful protest. Russia, China, Nigeria, Egypt, Iran, and Pakistan advocated for broader criminalization, including crimes like incitement to violence and desecration of religious values. In contrast, the EU, the U.S., Costa Rica, and others advocated for a treaty that focuses solely on computer-related offenses, like attacks on computer systems, and some cyber-enabled crimes like CSAM and grooming.

Despite significant opposition, Russia, China, and other states successfully advanced the negotiation of a supplementary protocol for additional crimes, even before the core treaty has been ratified and taken effect. This move is particularly troubling as it leaves unresolved the critical issue of consensus on what constitutes core cybercrimes—a ticking time bomb that could lead to further disputes and could retroactively expand application of the Convention's cross-border cooperation regime even further. 

Under the final agreement, it will take 40 ratifications for the treaty to enter into force and 60 before any new protocols can be adopted. While consensus remains the goal, if it cannot be reached, a protocol can still be adopted with a two-thirds majority vote of the countries present.

The treaty negotiations are disappointing, but civil society and human rights defenders can unite to urge states to vote against the convention at the next UN General Assembly, ensuring that these flawed provisions do not undermine human rights globally.

Katitza Rodriguez

Digital ID Isn't for Everybody, and That's Okay

2 months 3 weeks ago

How many times do you pull out your driver’s license a week? Maybe two to four times, to purchase age-restricted items, pick up prescriptions, or get into a bar. If you get a mobile driver’s license (mDL) or one of the other forms of digital identification (ID) being offered in Google and Apple wallets, you may end up sharing this information much more often than before, because this new technology may expand the scope of scenarios demanding your ID.

These mDLs and digital IDs are being deployed faster than states can draft privacy protections, even as they open the door to presenting your ID to more third parties than ever before. While proponents of these digital schemes emphasize convenience, the technology can easily expand into new territory, like controversial age verification bills that censor everyone. Digital ID is simultaneously being tested in sensitive situations and expanded into a potential regime of unprecedented data tracking.

In the digital ID space, the question of “how can we do this right?” often crowds out the more pertinent question of “should we do this at all?” While there are strongly recommended safeguards for these new technologies, we must always support each person’s right to keep using physical documentation instead of going digital. We must also do more to bring understanding of, and decision-making power over, these technologies to everyone, rather than overzealously promoting them as a potential equalizer.

What’s in Your Wallet?

With modern hardware, phones can now store sensitive data and credentials with higher levels of security. This is what enables functionality like Google Pay and Apple Pay exchanging transaction data with e-commerce sites. While each platform has its own terminology, the general term to know is “Trusted Platform Module” (TPM). This dedicated hardware enables “Trusted Execution Environments” (TEEs), isolated areas where sensitive data can be processed without being exposed to the rest of the system. Most modern phones, tablets, and laptops come with TPMs.
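To make this concrete, here is a minimal sketch of what hardware-backed storage looks like on Android, using the standard Android Keystore API and assuming a device with a StrongBox secure element. It is an illustration of the general technique, not code from any actual wallet app; the property that matters is that the private key is generated inside the secure hardware and can never be exported from it.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator

// Sketch: generate an EC signing key whose private half is created and kept
// inside the phone's dedicated secure hardware (StrongBox). This is the class
// of protection wallet apps rely on for on-device credentials like an mDL.
fun createHardwareBackedKey(alias: String) {
    val spec = KeyGenParameterSpec.Builder(
        alias,
        KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY
    )
        .setDigests(KeyProperties.DIGEST_SHA256)
        // Require the dedicated secure element; on devices without one this
        // throws rather than silently falling back to software key storage.
        .setIsStrongBoxBacked(true)
        .build()

    val generator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC,
        "AndroidKeyStore"
    )
    generator.initialize(spec)
    generator.generateKeyPair() // the private key never leaves the hardware
}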

Digital IDs are held to a higher level of security within the Google and Apple wallets (as they should be). If you provision an mDL on a device, the contents of the mDL are not “synced to the cloud.” Instead, they stay on that device, and you have the option to remotely wipe the credential if the device is stolen or lost.

Beyond the digital wallets already common on most phones, some states offer their own wallet apps for mDLs, which must be downloaded from an app store. The security of these applications can vary, as can the data they can and can’t see. Different private partners, including IDEMIA, Thales, and Spruce ID, have been building wallet/ID apps for different states. Digital identity frameworks, like Europe’s eIDAS, have been creating language and provisions for “open wallets,” so you don’t necessarily have to rely on big tech for a safe and secure wallet.

However, privacy and security need to be paramount. If privacy is an afterthought, digital IDs can quickly become yet another gold mine for data brokers and bad actors, and yet another source of data breaches.

New Announcements, New Scope

Digital ID has been moving fast this summer.

Proponents of digital ID frequently present the “over 21” example, which is often described like this:

You go to the bar, you present a claim from your phone that you are over 21, and a bouncer confirms the claim with a reader device, via a QR code or an NFC tap. Very private. Very secure. Said bouncer will never know your address or other information. Not even your name. This is called an “abstract claim”: rather than exchanging the more-sensitive information itself, the wallet sends the verifier a less-sensitive attestation, such as an age threshold instead of your date of birth and name.
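The plumbing behind this example is the ISO/IEC 18013-5 mDL standard, which defines a data element literally named age_over_21 in the standard mDL namespace, along with a per-element “intent to retain” flag in the reader’s request. The Kotlin sketch below shows the shape of such an exchange; the namespace and element identifier come from the standard, while the surrounding types are simplified stand-ins for illustration, not any vendor’s actual reader SDK.

// The mDL namespace and the age_over_21 element identifier are defined by
// ISO/IEC 18013-5; the request/response types here are simplified stand-ins.
const val MDL_NAMESPACE = "org.iso.18013.5.1"

// Reader's request: element identifier -> "intent to retain" flag.
// Here the verifier asks only for age_over_21 and promises not to store it.
val ageOnlyRequest: Map<String, Map<String, Boolean>> = mapOf(
    MDL_NAMESPACE to mapOf("age_over_21" to false)
)

// The wallet's response carries just the requested attestation, signed by
// the issuing authority, so the reader learns a yes/no and nothing else.
data class DisclosedElement(val namespace: String, val identifier: String, val value: Any)

val ageOnlyResponse: List<DisclosedElement> = listOf(
    DisclosedElement(MDL_NAMESPACE, "age_over_21", true)
)

In principle, the signed response can contain just that one boolean, which is what makes the bouncer scenario above possible.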

But there is a high privacy price to pay for this marginal privacy benefit. mDLs will not simply swap in as a one-to-one replacement for your physical ID. Rather, they are likely to expand the scenarios in which businesses and government agencies demand that you prove your identity before entering physical and digital spaces or accessing goods and services. Our personal data will be passed around more often than ever, with identity verified online multiple times a day or week, with multiple parties. That privacy menace far surpasses the minor danger of a bar bouncer collecting, storing, and using your name and address after glancing at your birth date on your plastic ID for five seconds in passing. Even in cases where bars do scan IDs, we are being asked to trade one contained privacy risk for a far broader one: digital ID presentation across the internet.

While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used with the TSA. In contracts and agreements we have seen with Apple, the company largely controls the marketing and visibility of mDLs.

In another push to boost adoption, Android now allows you to create a digital passport ID for domestic travel. This development must be seen through the lens of the federal government’s 20-year effort to impose “REAL ID” on state-issued identification systems. REAL ID is an objective failure as a program, one that pushes for regimes that strip privacy from everyone and further marginalize undocumented people. While federal-level use of digital identity is so far limited to TSA, that use can easily expand. TSA wants to propose rules for mDLs in an attempt (the agency says) to “allow innovation” by states while it contemplates uniform rules for everyone. This is concerning, because the scope of TSA (and its parent agency, the Department of Homeland Security) is very wide. Whatever they decide now for digital ID will have implications far beyond the airport.

Equity First > Digital First

We are seeing new digital ID plans being discussed for the most vulnerable among us. Digital ID must be designed for equity, as well as for privacy.

With Google’s Digital Credential API and Apple’s IP&V Platform (as named in the agreement with California), these two major companies will be in direct competition with current age verification platforms. This alarmingly sets up the capacity for anyone to ask for your ID online, and that capacity can spread well beyond content that is commonly age-gated today. Different states and countries may try to label additional content as harmful to children (such as LGBTQIA content or abortion resources) and require online platforms to conduct age verification before granting access to it.

For many of us, opening a bank account is routine, and digital ID sounds like a way to make it more convenient. But millions of working-class people are currently unbanked, and digital IDs won’t solve their problems. Many people can’t get basic services and documentation for the variety of reasons that come with having a low income, and millions of people in this country don’t have identification at all. We shouldn’t impose regimes built on age verification technology against people who often face barriers to compliance, such as license suspension over unpaid fines unrelated to traffic safety. Without regulation that accounts for nuanced lives, a new technical system that verifies age with far less friction will lead to an expedited, automated “NO.”

Another issue is that many people lack a smartphone, or an up-to-date smartphone, or share one with their family. Many proponents of “digital first” solutions assume a fixed ratio of one smartphone per person. While that assumption may hold for some, others will need a human to talk to, on the phone or face-to-face, to access vital services. And in the case of an mDL, you still need to upload your physical ID just to obtain one, and you still need to carry a physical ID on your person. Digital ID cannot bypass the problem that some people don’t have physical ID in the first place. Failing to account for this is a rush toward perceived solutions over real problems.

Inevitable?

No, digital identity shouldn’t be inevitable for everyone: many people don’t want it, or lack the resources to get it. The dangers posed by digital identity don’t have to be inevitable, either, if states legislate protections for people. It would also be great (for the nth time) to have a comprehensive federal privacy law. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement. At the very minimum, law enforcement should be prohibited from using consent for mDL scans to conduct otherwise illegal searches. Florida completely removed its mDL app from app stores and asked residents who had it to delete it; it is good that the state did not simply keep the app around for the sake of pushing digital ID without addressing a clear issue.

State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims for public assistance as to facilitate them. That’s why legal protections are at least as important as the digital IDs themselves.

Lawmakers should ensure better access for people both with and without a digital ID.

Alexis Hancock