Drone As First Responder Programs Are Swarming Across the United States

2 months ago

Law enforcement wants more drones, and we’ll probably see many more of them overhead as police departments seek to implement a popular project justifying the deployment of unmanned aerial vehicles (UAVs): the “drone as first responder” (DFR).

Police DFR programs involve a fleet of drones, which can range in number from four or five to hundreds. In response to 911 calls and other law enforcement calls for service, a camera-equipped drone is launched from a regular base (like the police station roof) to get to the incident first, giving responding officers a view of the scene before they arrive. In theory and in marketing materials, the advance view from the drone will help officers understand the situation more thoroughly before they get there, better preparing them for the scene and assisting them in things such as locating wanted or missing individuals more quickly. Police call this “situational awareness.”

In practice, law enforcement's desire to get “a view of the scene” becomes a justification for over-surveilling neighborhoods that produce more 911 calls and for collecting information on anyone who happens to be in the drone’s path. For example, a drone responding to a vandalism case may capture video footage of everyone it passes along the way. Also, drones are subject to the same mission-creep issues that already plague other police tools designed to record the public; what is pitched as a solution to violent crime can quickly become a tool for policing homelessness or low-level infractions that otherwise wouldn't merit police resources.

With their bird's-eye view, drones can observe individuals in previously private and constitutionally protected spaces, like their backyards, roofs, and even through home windows. And they can capture crowds of people, like protestors and other peaceful gatherers exercising their First Amendment rights. Drones can be equipped with cameras, thermal imaging, microphones, license plate readers, face recognition, mapping technology, cell-site simulators, weapons, and other payloads. Proliferation of these devices enables state surveillance even for routine operations and in response to innocuous calls, situations far removed from the terrorism and violent crime concerns originally used to justify their adoption.

Drones are also increasingly tied into other forms of surveillance. More departments — including those in Las Vegas, Louisville, and New York City — are toying with the idea of dispatching drones in response to ShotSpotter gunshot detection alerts, which are known to send many false positives. This could lead to drone surveillance of communities that happen to have a higher concentration of ShotSpotter microphones or other acoustic gunshot detection technology. Data revealed recently shows that a disproportionate number of these gunshot detection sensors are located in Black communities in the United States. Artificial intelligence is also being added to drone data collection; connecting what's gathered from the sky to what has been gathered on the street and through other methods is a trending part of the police panopticon plan.


A CVPD official explains the DFR program to EFF staff in 2022. Credit: Jason Kelley (EFF)

DFR programs have been growing in popularity since the Chula Vista Police Department launched the first one in 2018. Now there are a few dozen departments with known DFR programs among the approximately 1,500 police departments known to have any drone program at all, according to EFF's Atlas of Surveillance, the most comprehensive dataset of this kind of information. The Federal Aviation Administration (FAA) regulates the use of drones and is currently mandated to prepare new regulations for how they can be operated beyond the operator's visual line of sight (BVLOS), the kind of long-distance flight that currently requires a special waiver. All the while, police departments and the companies that sell drones are eager to move forward with more DFR initiatives.

Agencies with known DFR programs (agency, state):
  • Arapahoe County Sheriff's Office (CO)
  • Beverly Hills Police Department (CA)
  • Brookhaven Police Department (GA)
  • Burbank Police Department (CA)
  • Chula Vista Police Department (CA)
  • Clovis Police Department (CA)
  • Commerce City Police Department (CO)
  • Daytona Beach Police Department (FL)
  • Denver Police Department (CO)
  • Elk Grove Police Department (CA)
  • Flagler County Sheriff's Office (FL)
  • Fort Wayne Police Department (IN)
  • Fremont Police Department (CA)
  • Gresham Police Department (OR)
  • Hawthorne Police Department (CA)
  • Hemet Police Department (CA)
  • Irvine Police Department (CA)
  • Montgomery County Police Department (MD)
  • New York City Police Department (NY)
  • Oklahoma City Police Department (OK)
  • Oswego Police Department (NY)
  • Redondo Beach (CA)
  • Santa Monica Police Department (CA)
  • West Palm Beach Police Department (FL)
  • Yonkers Police Department (NY)
  • Schenectady Police Department (NY)
  • Queen Creek Police Department (AZ)
  • Greenwood Village Police Department (CO)

Transparency around the acquisition and use of drones will be important to the effort to protect civilians from government and police overreach and abuse as agencies commission more of these flying machines. A recent Wired investigation raised concerns about Chula Vista’s program, finding that roughly one in 10 drone flights lacked a stated purpose, and for nearly 500 of its recent flights, the reason for deployment was an “unknown problem.” That same investigation also found that each average drone flight exposes nearly 5,000 city residents to enhanced surveillance, primarily in predominantly Black and brown neighborhoods.

“For residents we spoke to,” Wired wrote, “the discrepancy raises serious concerns about the accuracy and reliability of the department's transparency efforts—and experts say the use of the drones is a classic case of self-perpetuating mission creep, with their existence both justifying and necessitating their use.”

Chula Vista's "Drone-Related Activity Dashboard" indicates that more than 20 percent of drone flights are welfare checks or mental health crises, while only roughly 6 percent respond to assault calls. Chula Vista Police claim that the DFR program lets them avoid potentially dangerous or deadly interactions with members of the public, reporting that drone responses allowed the department to avoid sending a patrol unit in response to 4,303 calls. However, this theory and the supporting data need to be meaningfully evaluated by independent researchers.

This type of analysis is not possible without transparency around the program in Chula Vista, which, to its credit, publishes regular details like the location and reason for each of its deployments. Still, that department has also tried to prevent the public from learning about its program, rejecting California Public Records Act (CPRA) requests for drone footage. This led to a lawsuit in which EFF submitted an amicus brief, and ultimately the California Court of Appeal correctly found that drone footage is not exempt from CPRA requests.

While some might take for granted that the government is not allowed to conduct surveillance — intentional, incidental, or otherwise — on you in spaces like your fenced-in backyard, this is not always the case. It took a lawsuit and a recent Alaska Supreme Court decision to ensure that police in that state must obtain a warrant for drone surveillance in otherwise private areas. While some states do require a warrant to use a drone to violate the privacy of a person’s airspace, Alaska, California, Hawaii, and Vermont are currently the only states where courts have held that warrantless aerial surveillance violates residents’ constitutional protections against unreasonable search and seizure absent specific exceptions.

Clear policies around the use of drones are a valuable part of holding police departments accountable for their drone use. These policies must include rules around why a drone is deployed and guardrails on the kind of footage that is collected, the length of time it is retained, and with whom it can be shared.

A few state legislatures have taken some steps toward providing some public accountability over growing drone use.

  • In Minnesota, law enforcement agencies are required to annually report their drone programs' costs and the number of times they deployed drones, including how many of those deployments occurred without a warrant.
  • In Illinois, the Drones as First Responders Act went into effect in June 2023, requiring agencies to report whether they own drones; how many are owned; the number of times the drones were deployed, as well as the date, location, and reason for each deployment; and whether video was captured and then retained from each deployment. Illinois agencies also must share a copy of their latest use policies; drone footage is generally supposed to be deleted after 24 hours; and the use of face recognition technology is prohibited except in certain circumstances.
  • In California, AB 481 — which took effect in May 2022 with the aim of providing public oversight over military-grade police equipment — requires police departments to publicly share a regular inventory of the drones that they use. Under this law, police acquisition of drones and the policies governing their use require approval from local elected officials following an opportunity for public comment, giving communities an important chance to provide feedback.

DFR programs are just one way police are acquiring drones, but law enforcement and UAV manufacturers are interested in adding drones in other ways, including as part of regular patrols and in response to high-speed vehicle pursuits. These uses also create the risk of law enforcement bypassing important safeguards.  Reasonable protections for public privacy, like robust use policies, are not a barrier to public safety but a crucial part of ensuring just and constitutional policing.

Companies are eager to tap this growing market. Police technology company Axon — known for its Tasers and body-worn cameras — recently acquired drone company Dedrone, specifically citing that company's efforts to push DFR programs as one reason for the acquisition. Axon has since established a partnership with Skydio to expand its DFR sales.

It’s clear that as the skies open up for more drone usage, law enforcement will push to procure more of these flying surveillance tools. But police and lawmakers must exercise far more skepticism over what may ultimately prove to be a flashy trend that wastes resources, infringes on people's rights, and results in unforeseen shifts in policing strategy. The public must be kept aware of how cops are coming for their privacy from above.

Beryl Lipton

Government Has Extremely Heavy Burden to Justify TikTok Ban, EFF Tells Appeals Court

2 months ago
New Law Subject to Strictest Scrutiny Because It Imposes Prior Restraint, Directly Restricts Free Speech, and Singles Out One Platform for Prohibition, Brief Argues

SAN FRANCISCO — The federal ban on TikTok must be put under the finest judicial microscope to determine its constitutionality, the Electronic Frontier Foundation (EFF) and others argued in a friend-of-the-court brief filed Wednesday to the U.S. Court of Appeals for the D.C. Circuit. 

The amicus brief says the Court must review the Protecting Americans from Foreign Adversary Controlled Applications Act — passed by Congress and signed by President Biden in April — with the most demanding legal scrutiny because it imposes a prior restraint that would make it impossible for users to speak, access information, and associate through TikTok. It also directly restricts protected speech and association, and deliberately singles out a particular medium for a blanket prohibition. This demanding First Amendment test must be used even when the government asserts national security concerns. 

The Court should see this law for what it is: “a sweeping ban on free expression that triggers the most exacting scrutiny under the First Amendment,” the brief argues, adding it will be extremely difficult for the government to justify this total ban. 

Joining EFF in this amicus brief are the Freedom of the Press Foundation, TechFreedom, Media Law Resource Center, Center for Democracy and Technology, First Amendment Coalition, and Freedom to Read Foundation. 

TikTok hosts a wide universe of expressive content from musical performances and comedy to politics and current events, the brief notes, and with more than 150 million users in the United States and 1.6 billion users worldwide, the platform hosts enormous national and international communities that most U.S. users cannot readily reach elsewhere. It plays an especially important and outsized role for minority communities seeking to foster solidarity online and to highlight issues vital to them. 

“The First Amendment protects not only TikTok’s US users, but TikTok itself, which posts its own content and makes editorial decisions about what user content to carry and how to curate it for each individual user,” the brief argues.  

Congress’s content-based justifications for the ban make it clear that the government is targeting TikTok because it finds speech that Americans receive from it to be harmful, and simply invoking national security without clearly demonstrating a threat doesn’t overcome the ban’s unconstitutionality, the brief argues. 

“Millions of Americans use TikTok every day to share and receive ideas, information, opinions, and entertainment from other users around the world, and that expression lies squarely within the protections of the First Amendment,” EFF Civil Liberties Director David Greene said. “By barring all speech on the platform before it can happen, the law effects the kind of prior restraint that the Supreme Court has rejected for the past century as unconstitutional in all but the rarest cases.” 

For the brief: https://www.eff.org/document/06-26-2024-eff-et-al-amicus-brief-tiktok-v-garland

For EFF’s stance on TikTok bans: https://www.eff.org/deeplinks/2023/03/government-hasnt-justified-tiktok-ban 

Contact: David Greene, Civil Liberties Director, davidg@eff.org
Josh Richman

The Global Suppression of Online LGBTQ+ Speech Continues

2 months ago

A global increase in anti-LGBTQ+ intolerance is having a significant impact on digital rights. As we wrote last year, censorship of LGBTQ+ websites and online content is on the rise. For many LGBTQ+ individuals the world over, the internet can be a safer space for exploring identity, finding community, and seeking support. But from anti-LGBTQ+ bills restricting free expression and privacy to content moderation decisions that disproportionately impact LGBTQ+ users, digital spaces that used to seem like safe havens are, for many, no longer so.

EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world, and that includes LGBTQ+ communities, which all too often face threats, censorship, and other risks when they go online. This Pride month—and the rest of the year—we’re highlighting some of those risks, and what we’re doing to help change online spaces for the better.

Worsening threats in the Americas

In the United States, where EFF is headquartered, recent gains in rights have been followed by an uptick in intolerance that has led to legislative efforts, mostly at the state level. In 2024 alone, 523 anti-LGBTQ+ bills have been proposed by state legislatures, many of which restrict freedom of expression. In addition to these bills, a drive in mostly conservative areas to ban books in school libraries—many of which contain LGBTQ themes—is creating an environment in which queer youth feel even more marginalized.

At the national level, an effort to protect children from online harms—the Kids Online Safety Act (KOSA)—risks alienating young people, particularly those from marginalized communities, by restricting their access to certain content on social media. EFF spoke with young people about KOSA, and found that many are concerned that they will lose access to help, education, friendship, and a sense of belonging that they have found online. At a time when many young people have just come out of several years of isolation during the pandemic and reliance on online communities for support, restricting their access could have devastating consequences.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Similarly, age-verification bills being put forth by state legislatures often seek to prevent access to material deemed harmful to minors. If passed, these measures would restrict access to vital content, including education and resources that LGBTQ+ youth without local support often rely upon. These bills often contain vague and subjective definitions of “harm” and are all too often another strategy in the broader attack on free expression that includes book bans, censorship of reproductive health information, and attacks on LGBTQ+ youth.

Moving south of the border, in much of South and Central America, legal progress has been made with respect to rights, but violence against LGBTQ+ people is particularly high, and that violence often has online elements to it. In the Caribbean, where a number of countries have strict anti-LGBTQ+ laws on the books, often stemming from the colonial era, online spaces can be risky, and those who express their identities in them often face bullying and doxxing, which can lead to physical harm.

In many other places throughout the world, the situation is even worse. While LGBTQ+ rights have progressed considerably over the past decade in a number of democracies, the sense of freedom and ease that these hard-won rights created for many is suffering serious setbacks. And in more authoritarian countries where the internet may have once been a lifeline, crackdowns on expression have coincided with increases in user growth and often explicitly target LGBTQ+ speech.

In Europe, anti-LGBTQ+ violence at a record high

In recent years, legislative efforts aimed at curtailing LGBTQ+ rights have gained momentum in several European countries, largely the result of a rise in right-wing populism and conservatism. In Hungary, for instance, the Orban government has enacted laws that restrict LGBTQ+ rights under the guise of protecting children. In 2021, the country passed a law banning the portrayal or promotion of LGBTQ+ content to minors. In response, the European Commission launched legal cases against Hungary—as well as some regions in Poland—over LGBTQ+ discrimination, with Commission President Ursula von der Leyen labeling the law as "a shame" and asserting that it clearly discriminates against people based on their sexual orientation, contravening the EU's core values of equality and human dignity​.

In Russia, the government has implemented severe restrictions on LGBTQ+ content online. A law initially passed in 2013 banning the promotion of “non-traditional sexual relations” among minors was expanded in 2022 to apply to individuals of all ages, further criminalizing LGBTQ+ content. The law prohibits the mention or display of LGBTQ+ relationships in advertising, books, media, films, and on online platforms, and has created a hostile online environment. Media outlets that break the law can be fined or shut down by the government, while foreigners who break the law can be expelled from the country. 

Among the first victims of the amended law were seven migrant sex workers—all trans women—from Central Asia who were fined and deported in 2023 after they published their profiles on a dating website. Also in 2023, six online streaming platforms were penalized for airing movies with LGBTQ-related scenes. The films included “Bridget Jones: The Edge of Reason”, “Green Book”, and the Italian film “Perfect Strangers.”

Across the continent, as anti-LGBTQ+ violence is at a record high, queer communities are often the target of online threats. A 2022 report by the European Digital Media Observatory reported a significant increase in online disinformation campaigns targeting LGBTQ+ communities, which often frame them as threats to traditional family values. 

Across Africa, LGBTQ+ rights under threat

In 30 of the 54 countries on the African continent, homosexuality is prohibited. Nevertheless, there is a growing movement to decriminalize LGBTQ+ identities and push toward achieving greater rights and equality. As in many places, the internet often serves as a safer space for community and organizing, and has therefore become a target for governments seeking to crack down on LGBTQ+ people.

In Tanzania, for instance, where consensual same-sex acts are prohibited under the country’s colonial-era Penal Code, authorities have increased digital censorship against LGBTQ+ content, blocking websites and social media platforms that provide support and information to the LGBTQ+ community. This crackdown is making it increasingly difficult for people to find safe spaces online. As a result of these restrictions, many online groups used by the LGBTQ+ community for networking and support have been forced to disband, driving individuals to riskier public spaces to meet and socialize.

In other countries across the continent, officials are weaponizing legal systems to crack down on LGBTQ+ people and their expression. According to Access Now, a proposed law in Kenya, the Family Protection Bill, seeks to ban a variety of actions, including public displays of affection, engagement in activities that seek to change public opinion on LGBTQ+ issues, and the use of the internet, media, social media platforms, and electronic devices to “promote homosexuality.” Furthermore, the prohibited acts would fall under the country’s Computer Misuse and Cybercrimes Act of 2018, giving law enforcement the power to monitor and intercept private communications during investigations, as provided by Section 36 of the National Intelligence Service Act, 2012. 

A draconian law passed in Uganda in 2023, the Anti-Homosexuality Act, introduced capital punishment for certain acts, while allowing for life imprisonment for others. The law further imposes a 20-year prison sentence for people convicted of “promoting homosexuality,” which includes the publication of LGBTQ+ content, as well as “the use of electronic devices such as the internet, mobile phones or films for the purpose of homosexuality or promoting homosexuality.”

In Ghana, if passed, the anti-LGBTQ+ Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill would introduce prison sentences for those who engage in LGBTQ+ sexual acts as well as those who promote LGBTQ+ rights. As we’ve previously written, the bill would ban all speech and activity on and offline that even remotely supports LGBTQ+ rights. Though the bill passed through parliament in March, Ghana's president has indicated he won’t sign it until the country’s Supreme Court rules on its constitutionality.

And in Egypt and Tunisia, authorities have integrated technology into their policing of LGBTQ+ people, according to a 2023 Human Rights Watch report. In Tunisia, where homosexuality is punishable by up to three years in prison, online harassment and doxxing are common, threatening the safety of LGBTQ+ individuals. Human Rights Watch has documented cases in which social media users, including alleged police officers, have publicly harassed activists, resulting in offline harm.

Egyptian security forces often monitor online LGBTQ+ activity and have used social media platforms as well as Grindr to target and arrest individuals. Although same-sex relations are not explicitly banned by law in the country, authorities use various morality provisions to effectively criminalize homosexual relations. More recently, prosecutors have utilized cybercrime and online morality laws to pursue harsher sentences.

In Asia, cybercrime laws threaten expression

LGBTQ+ rights in Asia vary widely. While homosexual relations are legal in a majority of countries, they are strictly banned in twenty, and same-sex marriage is only legal in three—Taiwan, Nepal, and Thailand. Online threats are also varied, ranging from harassment and self-censorship to the censoring of LGBTQ+ content—such as in Indonesia, Iran, China, Saudi Arabia, the UAE, and Malaysia, among other nations—as well as legal restrictions with often harsh penalties.

The use of cybercrime provisions to target LGBTQ+ expression is on the rise in a number of countries, particularly in the MENA region. In Jordan, the Cybercrime Law of 2023, passed last August, imposes restrictions on freedom of expression, particularly for LGBTQ+ individuals. Articles 13 and 14 of the law impose penalties for producing, distributing, or consuming “pornographic activities or works” and for using information networks to “facilitate, promote, incite, assist, or exhort prostitution and debauchery, or seduce another person, or expose public morals.” Jordan follows in the footsteps of neighboring Egypt, which instituted a similar law in 2018.

The LGBTQ+ movement in Bangladesh is impacted by the Cyber Security Act, quietly passed in 2023. Several provisions of the Act can be used to target LGBTQ+ sites; Section 8 enables the government to shut down websites, while Section 42 grants law enforcement agencies the power to search and seize a person’s hardware, social media accounts, and documents, both online and offline, without a warrant. And Section 25 criminalizes published content that tarnishes the image or reputation of the country.

The online struggle is global

In addition to national-level restrictions, LGBTQ+ individuals often face content suppression on social media platforms. While some of this occurs as the result of government requests, much of it is actually due to platforms’ own policies and practices. A recent GLAAD case study points to specific instances where content promoting or discussing LGBTQ+ issues is disproportionately flagged and removed, compared to non-LGBTQ+ content. The GLAAD Social Media Safety Index also provides numerous examples where platforms inconsistently enforce their policies. For instance, posts that feature LGBTQ+ couples or transgender individuals are sometimes taken down for alleged policy violations, while similar content featuring heterosexual or cisgender individuals remains untouched. This inconsistency suggests a bias in content moderation that EFF has previously documented and leads to the erasure of LGBTQ+ voices in online spaces.

Likewise, the community now faces threats at the global level, in the form of the impending UN Cybercrime Convention, currently in negotiations. As we’ve written, the Convention would expand cross-border surveillance powers, enabling nations to potentially exploit these powers to probe acts they controversially label as crimes based on subjective moral judgements rather than universal standards. This could jeopardize vulnerable groups, including the LGBTQ+ community.

EFF is pushing back to ensure that the Cybercrime Treaty's scope is narrow and that human rights safeguards are a priority. You can read our written and oral interventions and follow our Deeplinks Blog for updates. Earlier this year, along with Access Now, we also submitted comments to the U.N. Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) to inform the Independent Expert’s thematic report presented to the U.N. Human Rights Council at its fifty-sixth session.

But just as the struggle for LGBTQ+ rights and recognition is global, so too is the struggle for a safer and freer internet. EFF works year round to highlight that struggle and to ensure LGBTQ+ rights are protected online. We collaborate with allies around the world, and work to ensure that both states and companies protect and respect the rights of LGBTQ+ communities worldwide.

We also want to help LGBTQ+ communities stay safer online. As part of our Surveillance Self-Defense project, we offer a number of guides for safer online communications, including a guide specifically for LGBTQ+ youth.

EFF believes in preserving an internet that is free for everyone. While there are numerous harms online as in the offline world, digital spaces are often a lifeline for queer youth, particularly those living in repressive environments. The freedom of discovery, the sense of community, and the access to information that the internet has provided for so many over the years must be preserved. 



Jillian C. York

Hack of Age Verification Company Shows Privacy Danger of Social Media Laws

2 months ago

We’ve said it before: online age verification is incompatible with privacy. Companies responsible for storing or processing sensitive documents like drivers’ licenses are likely to encounter data breaches, potentially exposing not only personal data like users’ government-issued ID, but also information about the sites that they visit. 

This threat is not hypothetical. This morning, 404 Media reported that a major identity verification company, AU10TIX, left login credentials exposed online for more than a year, allowing access to this very sensitive user data. 

A researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents,” including “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents. Platforms reportedly using AU10TIX for identity verification include TikTok and X, formerly Twitter. 

Lawmakers pushing forward with dangerous age verification laws should stop and consider this report. Proposals like the federal Kids Online Safety Act and California’s Assembly Bill 3080 are moving further toward passage, with lawmakers in the House scheduled to vote in a key committee on KOSA this week, and California's Senate Judiciary committee set to discuss AB 3080 next week. Several other laws requiring age verification for accessing “adult” content and social media have already passed in states across the country. EFF and others are challenging some of these laws in court. 

In the final analysis, age verification systems are surveillance systems. Mandating them forces websites to require visitors to submit information such as government-issued identification to companies like AU10TIX. Hacks and data breaches of this sensitive information are not a hypothetical concern; as this breach shows, it is simply a matter of when the data will be exposed. 

Data breaches can lead to any number of dangers for users: phishing, blackmail, or identity theft, in addition to the loss of anonymity and privacy. Requiring users to upload government documents—some of the most sensitive user data—will hurt all users. 

According to the news report, so far the exposure of user data in the AU10TIX case did not lead to exposure beyond what the researcher showed was possible. If age verification requirements are passed into law, users will likely find themselves forced to share their private information across networks of third-party companies if they want to continue accessing and sharing online content. Within a year, it wouldn’t be strange to have uploaded your ID to a half-dozen different platforms. 

No matter how vigilant you are, you cannot control what other companies do with your data. If age verification requirements become law, you’ll have to be lucky every time you are forced to share your private information. Hackers will just have to be lucky once. 

Jason Kelley

EFF Livestream Series Coming to a Platform Near You!

2 months ago

EFF is excited to kick off a new series of livestream events this summer! Please join EFF staff and fellow digital freedom supporters as we dive into three topics near and dear to our hearts.

July 18: The U.S. Supreme Court Takes on the Internet

In the first segment of EFF's livestream series, we'll dive into the impact of the U.S. Supreme Court's recent opinions on technology and civil liberties. Get an expert's look at the court cases making the biggest waves for tech users with our panel featuring EFF Civil Liberties Director David Greene, Techdirt founder Mike Masnick, and Daphne Keller from the Stanford Center for Internet and Society.



August 28: Reproductive Justice in the Digital Age

This summer marks the two-year anniversary of the Dobbs decision overturning Roe v. Wade. Join EFF for a livestream discussion about restrictions to reproductive healthcare and the choices people seeking an abortion must face in the digital age where everything is connected, and surveillance is rampant. Learn what’s happening across the United States and how you can get involved.



October 17: How to Protest with Privacy in Mind

Do you know what to do if you’re subjected to a search or arrest at a protest? Join EFF for a livestream discussion about how to protect your electronic devices and digital assets before, during, and after a demonstration. Learn how you can avoid confiscation or forced deletion of media, and keep your movements and associations private.


We hope you can join us for all three events!
Be sure to share this post with any interested friends and tell them to join us! Thank you for helping EFF spread the word about privacy and free expression online.

We encourage everyone to join us live for these discussions. Please note that they will be recorded. Recordings will be available following each event.

Melissa Srago

EFF Welcomes Tarah Wheeler to Its Board of Directors

2 months ago
Wheeler Brings Perspectives on Information Security and International Conflict to the Board of Directors

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce today that Tarah Wheeler — a social scientist studying international conflict, an author, and a poker player who is CEO of the cybersecurity compliance company Red Queen Dynamics — has joined EFF’s Board of Directors. 

Wheeler has served on EFF’s advisory board since June 2020. She is the Senior Fellow for Global Cyber Policy at Council on Foreign Relations and was elected to Life Membership at CFR in 2023. She is an inaugural contributing cybersecurity expert for the Washington Post, and a Foreign Policy contributor on cyber warfare. She is the author of the best-selling “Women In Tech: Take Your Career to The Next Level With Practical Advice And Inspiring Stories” (2016). 

“I am very excited to have Tarah bring her judgment, her technical expertise and her enthusiasm to EFF’s Board,” EFF Executive Director Cindy Cohn said. “She has supported us in many ways before now, including creating and hosting the ‘Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON,’ which will have its third year this summer. Now we get to have her in a governance role as well.” 

"I am deeply honored to join the Board of Directors at the Electronic Frontier Foundation,” Wheeler said. “EFF's mission to defend civil liberties in the digital world is more critical than ever, and I am humbled to be invited to serve in this work. EFF has been there for me and other information security researchers when we needed a champion the most. Together, we will continue to fight for the rights and freedoms that ensure a free and open internet for all." 

Wheeler has been a US/UK Fulbright Scholar in Cyber Security and Fulbright Visiting Scholar at the Centre for the Resolution of Intractable Conflict at the University of Oxford, the Brookings Institution’s contributing cybersecurity editor, a Cyber Project Fellow at the Belfer Center for Science and International Affairs at Harvard University’s Kennedy School of Government, and an International Security Fellow at New America leading a new international cybersecurity capacity building project with the Hewlett Foundation’s Cyber Initiative. She has been Head of Offensive Security & Technical Data Privacy at Splunk and Senior Director of Engineering and Principal Security Advocate at Symantec Website Security. She has led projects at Microsoft Game Studios (Halo and Lips) and architected systems at encrypted mobile communications firm Silent Circle. She has two cashes and $4,722 in lifetime earnings in the World Series of Poker.

Members of the Board of Directors ensure EFF’s sustainability by adopting sound, ethical, and legal governance and financial management policies so that the organization has adequate resources to advance its mission.  

Shari Steele — who had been on EFF’s Board since 2015 when she ceased being EFF’s Executive Director — has rotated off the Board. Gigi Sohn has been elected Chair of the Board. 

For the full roster of EFF’s Board of Directors: https://www.eff.org/about/board

Josh Richman

EFF Statement on Assange Plea Deal

2 months ago

The United States has now, for the first time in the more than 100-year history of the Espionage Act, obtained an Espionage Act conviction for basic journalistic acts. Here, the criminal information against Assange charges him with obtaining newsworthy information from a source, communicating it to the public, and expressing an openness to receiving more highly newsworthy information. This sets a dangerous practical precedent, and all those who value a free press should work to make sure that it never happens again. While we are pleased that Assange can now be freed for time served and return to Australia, these charges should never have been brought.

Additional information about this charge: 

David Greene

EFF Opposes the American Privacy Rights Act

2 months 1 week ago

Protecting people's privacy is the first step we should take to create meaningful online regulation. That's why EFF has previously expressed concerns about the American Privacy Rights Act (APRA) which, rather than set up strong protections, instead freezes consumer data privacy protections in place, preempts existing state laws, and would prevent states from creating stronger protections in the future.

While the bill has not yet been formally introduced, subsequent discussion drafts of the bill have not addressed our concerns; in fact, they've only deepened them. So, earlier this month, EFF told Congress that it opposes APRA and signed two letters to reiterate why overriding stronger state laws—and preventing states from passing stronger laws—hurts everyone.

EFF has a clear position on this: federal privacy laws should not roll back state privacy protections. And there is no reason that we must trade strong state laws for weaker national privacy protection. Companies that collect and use data—and have worked to kill strong state privacy bills time and again—want Congress to believe a "patchwork" of state laws is unworkable for data privacy, even though existing federal privacy and civil rights laws operate as regulatory floors and do not prevent states from enacting and enforcing their own stronger statutes. In a letter opposing the preemption sections of the bill, our allies at the American Civil Liberties Union (ACLU) stated it this way: "the soundest approach to avoid the harms from preemption is to set the federal standard as a national baseline for privacy protections — and not a ceiling." Advocates from ten states signed on to the letter warning how APRA, as written, would preempt dozens of stronger state laws. These include laws covering AI regulation in Colorado, internet privacy in Maine, healthcare and tenant privacy in New York, and biometric privacy in Illinois, just to name a handful. 

APRA would also override a California law passed to rein in data brokers and replace it with weaker protections. EFF last year joined Privacy Rights Clearinghouse (PRC) and others to support and pass the California Delete Act, which gives people an easy way to delete information held by data brokers. In a letter opposing APRA, several organizations that supported California's law highlighted ways that APRA falls short of what's already on the books in California. "By prohibiting authorized agents, omitting robust transparency and audit requirements, removing stipulated fines, and, fundamentally, preempting stronger state laws, the APRA risks leaving consumers vulnerable to ongoing privacy violations and undermining the progress made by trailblazing legislation like the California Delete Act," the letter said.

EFF continues to advocate for strong privacy legislation and encourages APRA's authors to center strong consumer protections in future drafts.

To view the coalition letter on the preemption provisions of APRA, click here: https://www.eff.org/document/aclu-letter-apra-preemption

To view the coalition letter opposing APRA because of its data broker provisions, click here: https://www.eff.org/document/prc-letter-apra-data-broker-provisions

Hayley Tsukayama

🌜 A voice cries out under the crescent moon...

2 months 1 week ago

EFF needs your help to defend privacy and free speech online. Learn why you're crucial to the fight in this edition of campfire tales from our friends, The Encryptids. These cunning critters have come out of hiding to help us celebrate EFF’s summer membership drive for internet freedom.

Through EFF's 34th birthday on July 10, you can be a member for just $20 and receive 2 rare gifts (including a Bigfoot enamel pin!), and as a bonus, new recurring monthly or annual donations get a free match! Join us today.

Today’s post comes from international vocal icon Banshee. She may not be a beast like many cryptids, but she is a *BEAST* when it comes to free speech and local activism...

-Aaron Jue
EFF Membership Team

_______________________________________

What's that saying about being well behaved and making history? Most people picture me shrieking across the Irish countryside. It's a living, but my voice has real power: it can help me speak truth to power, and it can lend support to the people in my communities.

Free expression is a human right, full stop. And it’s tough to get it right on the internet. Just look at messy content moderation from social media giants. Or the way politicians, celebrities, and companies abuse copyright and trademark law to knock their critics offline. And don’t get me started on repressive governments cutting the internet during protests. Censorship hits disempowered groups the hardest. That’s why I raise my voice to prop up the people around me, and why EFF is such an important ally in the fight to protect speech in the modern world.


The things you create, say, and share can change the world, and there’s never been a better megaphone than the internet. A free web carries your voice whether your cause is the environment, workers’ rights, gender equality, or your local parent-teacher group. For all the sewage that people spew online, we must fight back with better ideas and a brighter vision for the future.

EFF’s lawyers, policy analysts, tech experts, and activists know free speech, creativity, and privacy online better than anyone. Hell, EFF even helped establish computer code as legally protected speech back in the 90s. I hope you’ll use your compassion to protect our freedom online with even a small donation to EFF (or even start a monthly donation!).

Join EFF


So the next time someone tells you that you’re being shrill, remind them to STFU because you have something to say. And be grateful that people around the world support EFF to protect our rights online.

Down for the Cause,

Banshee

_______________________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Banshee

For The Bragging Rights: EFF’s 16th Annual Cyberlaw Trivia Night

2 months 1 week ago

This post was authored by the mysterious Raul Duke.

The weather was unusually cool for a summer night. Just the right amount of bitterness in the air for attorneys from all walks of life to gather in San Francisco’s Mission District for EFF’s 16th annual Cyberlaw Trivia Night.

Inside Public Works, attorneys filled their plates with chicken and waffles, grabbed a fresh tech-inspired cocktail, and found their tables—ready to compete against their colleagues in obscure tech law trivia. The evening started promptly six minutes late, 7:06 PM PT, with Aaron Jue, EFF's Director of Member Engagement, introducing this year’s trivia tournament.

A lone Quizmaster, Kurt Opsahl, took the stage, noting that his walk-in was missing a key component, until The Blues Brothers started playing, filling the quizmaster with the valor to thank EFF’s intern fund supporters Fenwick and Morrison Foerster. The judges begrudgingly took the stage as the quizmaster reminded them that they have jobs at this event.

One of the judges, EFF’s Civil Liberties Director David Greene, gave some fiduciary advice to the several former EFF interns that were in the crowd. It was anyone’s guess as to whether they had gleaned any inside knowledge about the trivia.

I asked around as to what the attorneys had to gain by participating in this trivia night. I learned that not only were bragging rights on the table, but additionally teams had a chance to win champion steins.

The prizes: EFF steins!

With formalities out of the way, the first round of trivia - “General” - started with a possibly rousing question about the right to repair. Round one ended with the eighth question, which included a major typo calling the “Fourth Amendment is Not for Sale Act” the “First Amendment...” The proofreaders responsible for this mistake have been dealt with.

I was particularly struck by the names of each team: “Run DMCA,” “Ineffective Altruists,” “Subpoena Colada,” “JDs not LLM,” “The little VLOP that could,” and “As a language model, I can't answer that question.” Who knew attorneys could come up with such creative names?

I asked one of the lawyers if he could give me legal advice on a personal matter (I won’t get into the details here, but it concerns both maritime law and equine law). The lawyer gazed at me with the same look one gives a child who has just proudly thrown their food all over the floor. I decided to drop the matter.

Back to the event. It was a close game until the sixth and final round, though we wouldn’t hear the final winners until after the tiebreaker questions.

After several minutes, the tiebreaker was announced. The prompt: which team could get the closest to Pi without going over. This sent your intrepid reporter into an existential crisis. Could one really get to the end of pi? I’m told you could get to Pluto with just the first four and didn’t see any reason in going further than that. During my descent into madness, it was revealed that team “JDs not LLMs” knew 22 digits of pi.

After that shocking revelation, the final results were read, with the winning trivia masterminds being:

1st Place: JDs not LLMs

2nd Place: The Little VLOP That Could

3rd Place: As A Language Model, I Can't Answer That Question

EFF Membership Advocate Christian Romero taking over for Raul Duke.

EFF hosts Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for tech users. Among the many firms that dedicate their time, talent, and resources to the cause, we would especially like to thank Fenwick and Morrison Foerster for supporting EFF’s Intern Fund!

If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist.

Are you interested in attending or sponsoring an upcoming EFF Trivia Night? Please reach out to tierney@eff.org for more information.

Be sure to check EFF’s events page and mark your calendar for next year’s 17th annual Cyberlaw Trivia Night.

Christian Romero

Opposing a Global Surveillance Disaster | EFFector 36.8

2 months 1 week ago

Join EFF on a road trip through the information superhighway! As you choose the perfect playlist for the trip we'll share our findings about the latest generation of cell-site simulators; share security tips for protestors at college campuses; and rant about the surveillance abuses that could come from the latest UN Cybercrime Convention draft.

As we reach the end of our road trip, know that you can stay up-to-date on these issues with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.8 - Opposing A Global Surveillance Disaster

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Police are Using Drones More and Spending More For Them

2 months 1 week ago

Police in Minnesota are buying and flying more drones than ever before, according to an annual report recently released by the state’s Bureau of Criminal Apprehension (BCA). Minnesotan law enforcement flew their drones without a warrant 4,326 times in 2023, racking up a state-wide expense of over $1 million. This marks a 41 percent increase from 2022, when departments across the state used drones 3,076 times and spent $646,531.24 on using them. The data show that more was spent on drones last year than in the previous two years combined. Minneapolis Police Department, the state’s largest police department, implemented a new drone program at the end of 2022 and reported that its 63 warrantless flights in 2023 cost nearly $100,000.
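As a quick sanity check on those year-over-year figures, here is a minimal Python sketch; it simply re-runs the arithmetic using the numbers quoted from the BCA report, and approximates the 2023 spending total as $1,000,000 since the report is quoted only as saying "over $1 million."

    # Year-over-year comparison using the figures quoted above from the BCA report.
    # The 2023 spending figure is a lower-bound approximation ("over $1 million").
    flights_2022, flights_2023 = 3_076, 4_326
    spend_2022, spend_2023 = 646_531.24, 1_000_000.00

    flight_growth = (flights_2023 - flights_2022) / flights_2022 * 100
    spend_growth = (spend_2023 - spend_2022) / spend_2022 * 100

    print(f"Warrantless flights rose about {flight_growth:.0f}%")  # ~41%
    print(f"Spending rose at least {spend_growth:.0f}%")           # ~55%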

Since 2020, the state of Minnesota has been obligated to put out a yearly report documenting every time and reason law enforcement agencies in the state — local, county, or state-wide — used unmanned aerial vehicles (UAVs), more commonly known as drones, without a warrant. This is partly because Minnesota law requires a warrant for law enforcement to use drones except for specific situations listed in the statute. The State Court Administrator is also required to provide a public report of the number of warrants issued for the use of UAVs, and the data gathered by them. These regular reports give us a glimpse into how police are actually using these devices and how often. As more and more police departments around the country use drones or experiment with drones as first responders, it offers an example of how transparency around drone adoption can be done.

You can read our blog about the 2021 Minnesota report here.

According to EFF’s Atlas of Surveillance, 130 of Minnesota’s 408 law enforcement agencies have drones. Of the Minnesota agencies known to have drones prior to this month’s report, 29 of them did not provide the BCA with 2023 use and cost data.

One of the more revealing aspects of drone deployment provided by the report is the purpose for which police are using them. A vast majority of uses, almost three-quarters of all flights by Minnesota police, were either related to obtaining an aerial view of incidents involving injuries or death, like car accidents, or for police training and public relations purposes.

Are drones really just a $1 million training tool? We’ve argued many times that tools deployed by police for very specific purposes often find punitive uses that reach far beyond their original, possibly more innocuous intention. In the case of Minnesota’s drone usage, that can be seen in the other exceptions to the warrant requirement, such as surveilling a public event where there’s a “heightened risk” for participant security. The warrant requirement is meant to prevent using aerial surveillance in violation of civil liberties, but these exceptions open the door to surveillance of First Amendment-protected gatherings and demonstrations. 

Matthew Guariglia

New ALPR Vulnerabilities Prove Mass Surveillance Is a Public Safety Threat

2 months 1 week ago

Government officials across the U.S. frequently promote the supposed, and often anecdotal, public safety benefits of automated license plate readers (ALPRs), but rarely do they examine how this very same technology poses risks to public safety that may outweigh the crimes they are attempting to address in the first place. When law enforcement uses ALPRs to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats.

The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials.

To give a sense of the scale of the data collected with ALPRs, EFF found that just 80 agencies in California, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates. An EFF analysis from 2021 found that 99.9% of this data is unrelated to any public safety interest when it's collected. If accessed by malicious parties, the information could be used to harass, stalk, or even extort innocent people.
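To put that in rough perspective, here is a minimal sketch that assumes the 1.6 billion scan count and the 99.9 percent figure from EFF's 2021 analysis apply uniformly across the whole dataset; the numbers are illustrative only.

    # Rough scale of ALPR collection versus possible "hits," under the assumptions above.
    total_scans = 1_600_000_000   # plate scans by ~80 California agencies in 2022
    unrelated_share = 0.999       # share with no nexus to any public safety interest

    unrelated = int(total_scans * unrelated_share)
    possibly_related = total_scans - unrelated

    print(f"Scans unrelated to any public safety interest: ~{unrelated:,}")              # ~1,598,400,000
    print(f"Scans possibly tied to an investigation or hot list: ~{possibly_related:,}")  # ~1,600,000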

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems. And while a person can turn off their phone if they are engaging in a sensitive activity, such as visiting a reproductive health facility or attending a protest, tampering with your license plate is a crime in many jurisdictions. Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology.

It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote in Police Chief magazine this month that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial." 

That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023.

Yet, the temptation to "collect it all" continues to overshadow the responsibility to "protect it all." What makes the latest CISA disclosure even more outrageous is it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs.

In 2015, building off the previous work of University of Arizona researchers, EFF published an investigation that found more than 100 ALPR cameras in Louisiana, California, and Florida were connected to the internet unsecured, many with publicly accessible websites that anyone could use to manipulate the controls of the cameras or siphon off data. Just by visiting a URL, a malicious actor, without any specialized knowledge, could view live feeds of the cameras, including one that could be used to spy on college students at the University of Southern California. Some of the agencies involved fixed the problem after being alerted to it. However, 3M, which had recently bought the ALPR manufacturer PIPS Technology (which has since been sold to Neology), claimed zero responsibility for the problem, saying instead that it was the agencies' responsibility to manage the devices' cybersecurity. "The security features are clearly explained in our packaging," they wrote. Four years later, TechCrunch found that the problem still persisted.

In 2019, Customs & Border Protection's vendor providing ALPR technology for Border Patrol checkpoints was breached, with hackers gaining access to 105,000 license plate images, as well as more than 184,000 images of travelers from a face recognition pilot program. Some of those images made it onto the dark web, according to reporting by journalist Joseph Cox.

If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage. The initial discovery was made by the Michigan State Police's Michigan Cyber Command Center, which passed the information on to CISA, which then worked with Motorola Solutions to address the problems.

The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices: two of medium severity and five of high severity.

One of the most severe vulnerabilities (scored 8.6 out of 10) is that every camera sold by Motorola had a Wi-Fi network turned on by default that used the same hardcoded password as every other camera, meaning that anyone who learned the password for one camera could connect to any other camera, as long as they were near it.

Someone with physical access to a camera could also easily install a backdoor, which would allow them to access the camera even if the Wi-Fi was turned off. An attacker could even log into the system locally using a default username and password. Once connected to a camera, they could watch live video, control the camera, or even disable it. They could also view historical recordings of license plate data stored without any kind of encryption, as well as logs containing authentication information that could be used to connect to a back-end server where more information is stored. Motorola claims it has mitigated all of these vulnerabilities.
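The default-credential problem generalizes to any camera fleet: a network of devices that share one factory password is only as strong as that single secret. The sketch below is a purely illustrative audit script, with hypothetical addresses, endpoints, and credential pairs (nothing here reflects Vigilant's actual interface), showing how an agency's IT staff might check whether devices on its own network still answer to factory-default logins.

```python
# Hypothetical audit sketch: flag devices that still accept factory-default
# credentials over HTTP Basic auth. Addresses, paths, and credential pairs
# are invented placeholders, not details of any real ALPR product.
import requests

DEFAULT_CREDENTIALS = [("admin", "admin"), ("operator", "password")]
DEVICES = ["192.0.2.10", "192.0.2.11"]  # documentation-range example addresses

def accepts_default_login(host: str) -> bool:
    """Return True if the device answers 200 OK to any default credential pair."""
    for username, password in DEFAULT_CREDENTIALS:
        try:
            resp = requests.get(f"http://{host}/login", auth=(username, password), timeout=5)
        except requests.RequestException:
            continue  # unreachable or erroring hosts are skipped, not flagged
        if resp.status_code == 200:
            return True
    return False

for host in DEVICES:
    if accepts_default_login(host):
        print(f"{host}: still accepts a factory-default login; rotate credentials and disable unused Wi-Fi")
```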

When vulnerabilities are found, it's not enough for them to be patched: they must serve as stark warnings for policymakers and the courts. Following EFF's report in 2015, Louisiana Gov. Bobby Jindal spiked a statewide ALPR program, writing in his veto message:

Camera programs such as these that make private information readily available beyond the scope of law enforcement, pose a fundamental risk to personal privacy and create large pools of information belonging to law abiding citizens that unfortunately can be extremely vulnerable to theft or misuse.

In May, a Norfolk Circuit Court Judge reached the same conclusion, writing in an order suppressing the data collected by ALPRs in a criminal case:

The Court cannot ignore the possibility of a potential hacking incident either. For example, a team of computer scientists at the University of Arizona was able to find vulnerable ALPR cameras in Washington, California, Texas, Oklahoma, Louisiana, Mississippi, Alabama, Florida, Virginia, Ohio, and Pennsylvania. (Italics added for emphasis.) … The citizens of Norfolk may be concerned to learn the extent to which the Norfolk Police Department is tracking and maintaining a database of their every movement for 30 days. The Defendant argues “what we have is a dragnet over the entire city” retained for a month and the Court agrees.

But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife. And the Orrville (Ohio) Police Department recently released a driver's raw ALPR scans to a total stranger in response to a public records request, 404 Media reported.

Public safety agencies must resist the allure of marketing materials promising surveillance omniscience and instead collect only the data they need for actual criminal investigations. They must never store more data than they can adequately protect with their limited resources, or else they must keep the public safe from data breaches by not collecting the data at all.

Dave Maass

California Lawmakers Should Reject Mandatory Internet ID Checks

2 months 1 week ago

California lawmakers are debating an ill-advised bill that would require internet users to show their ID in order to look at sexually explicit content. EFF has sent a letter to California legislators encouraging them to oppose Assembly Bill 3080, which would have the result of censoring the internet for all users. 

If you care about a free and open internet for all, and are a California resident, now would be a good time to contact your California Assemblymember and Senator and tell them you oppose A.B. 3080. 

Adults Have The Right To Free And Anonymous Internet Browsing

If A.B. 3080 passes, it would make it illegal to show websites with one-third or more “sexually explicit content” to minors. These “explicit” websites would join a list of products or services that can’t be legally sold to minors in California, including things like firearms, ammunition, tobacco, and e-cigarettes. 

But these things are not the same, and should not be treated the same under state or federal law. Adults have a First Amendment right to look for information online, including sexual content. One of the reasons EFF has opposed mandatory age verification is because there’s no way to check ID online just for minors without drastically harming the rights of adults to read, get information, and to speak and browse online anonymously. 

As EFF explained in a recent amicus brief on the issue, collecting ID online is fundamentally different—and more dangerous—than in-person ID checks in the physical world. Online ID checks are not just a momentary display—they require adults “to upload data-rich, government-issued identifying documents to either the website or a third-party verifier” and create a “potentially lasting record” of their visit to the establishment. 

The more information a website collects about visitors, the more chances there are for such data to get into the hands of a criminal or other bad actor, a marketing company, or someone who has filed a subpoena for it. So-called “anonymized” data can be reassembled, especially when it consists of data-rich government ID together with browsing data like IP addresses. 
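A tiny, invented example makes the re-identification point concrete. Suppose an age-verification vendor keeps a log of ID checks and a website keeps a "de-identified" browsing log; joining the two on something as mundane as an IP address puts names back on the browsing records. The sketch below is illustrative only, with fabricated data throughout.

```python
# Conceptual sketch: re-identifying an "anonymized" browsing log by joining it
# against an ID-verification log on IP address. All records are fabricated.
id_checks = [
    {"ip": "198.51.100.7", "time": "2024-06-01T22:14", "name": "Jane Doe", "id_number": "D1234567"},
]
browsing_log = [
    {"ip": "198.51.100.7", "time": "2024-06-01T22:15", "url": "/sensitive-page"},
]

for visit in browsing_log:
    for check in id_checks:
        if visit["ip"] == check["ip"]:
            # The "anonymous" visit now has a name and an ID number attached.
            print(check["name"], check["id_number"], "visited", visit["url"], "at", visit["time"])
```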

Data breaches are a fact of life. Once governments insist on creating these ID logs for visiting websites with sexual content, those data breaches will become more dangerous. 

This Bill Mandates ID Checks For A Wide Range Of Content 

The bar is set low in this bill. It’s far from clear what websites prosecutors will consider to have one-third content that’s not appropriate for minors, as that can vary widely by community and even family standards. The bill will surely rope in general-use websites that allow some explicit content. A sex education website for high-school seniors, for instance, could be considered “offensive” and lacking in educational value for young minors. 

Social media sites, online message forums, and even email lists may have some portion of content that isn't appropriate for younger minors, but also a large amount of general-interest content. Bills like California's, which require ID checks for any site with 33% content that prosecutors deem explicit, are similar to having Netflix require ID checks at login, whether a user wants to watch a G-rated movie or an R-rated movie. 

Adults’ Right To View Websites Of Their Choice Is Settled Law 

U.S. courts have already weighed in numerous times on government efforts to age-gate content, including sexual content. In Reno v. ACLU, the Supreme Court struck down the anti-indecency provisions of the Communications Decency Act, a 1996 law that was intended to keep “obscene or indecent” material away from minors. 

The high court considered the issue again in 2004 in Ashcroft v. ACLU, when it found that a federal law of that era, which sought to impose age-verification requirements on sexual online content, was likely unconstitutional. 

Other States Will Follow 

In the past year, several other state legislatures have passed similarly unwise and unconstitutional “online ID check” laws. They are now subject to legal challenges working their way through the courts, including a challenge to a Texas age-verification law that EFF has asked the Supreme Court to review. 

Elected officials in many other states, however, wisely refused to enact mandatory online ID laws, including Minnesota, Illinois, and Wisconsin. In April, Arizona’s governor vetoed a mandatory ID-check bill that was passed along partisan lines in her state, stating that the bill “goes against settled case law” and insisting any future proposal must be bipartisan and also “work within the bounds of the First Amendment.” 

California is not only the largest state, it is the home of many of the nation’s largest creative industries. It has also been a leader in online privacy law. If California passes A.B. 3080, it will be a green light to other states to pass online ID-checking laws that are even worse. 

Tennessee, for instance, recently passed a mandatory ID bill that includes felony penalties for anyone who “publishes or distributes” a website with one-third adult content. Tennessee’s fiscal review committee estimated that the state will incarcerate one person per year under this law, and has budgeted accordingly. 

California lawmakers have a chance to restore some sanity to our national conversation about how to protect minors online. Mandatory ID checks, and fines or incarceration for those who fail to use them, are not the answer. 


Joe Mullin

How to Clean Up Your Bluesky Feed

2 months 1 week ago

In our recent comparison of Mastodon, Bluesky, and Threads, we detail a few of the ways the similar-at-a-glance microblogging social networks differ, and one of the main distinctions is how much control you have over what you see as a user. We’ve detailed how to get your Mastodon feed into shape before, and now it’s time to clean up your Bluesky feed. We’ll do this mostly through its moderation tools.

Currently, Bluesky is mostly a single experience that operates on one set of flagship services operated by the Bluesky corporation. As the AT Protocol expands and decentralizes, so will the variety of moderation and custom algorithmic feed options. But for the time being, we have Bluesky.

Bluesky’s current moderation filters operate on two levels: the default options built into the Bluesky app, and community-created filters called “labelers.” The company’s default system includes options and company-run labelers that hide the sorts of things we’re all used to having restricted on social networks, like spam or adult content. It also includes defaults for hiding other categories, like engagement farming and certain extremist views. Community options use Bluesky’s own moderation tool, Ozone, and are built on exactly the same system as the company’s defaults; the only difference is which ones are built into the app. All this choice ends up being both powerful and overwhelming. So let’s walk through how to use it to make your Bluesky experience as good as possible.

Familiarize Yourself with Bluesky’s Moderation Tools

Bluesky offers several ways to control what appears in your feed: labeling and curation tools to hide (or warn about) the content of a post, and tools to block accounts from your feed entirely. Let’s start with customizing the content you see.

Get to Know Bluesky’s Built-In Settings

By default, Bluesky offers a basic moderation tool that allows you to show, hide, or warn about a range of content related to everything from topics like self-harm, extremist views, or intolerance, to more traditional content moderation like security concerns, scams, or inauthentic accounts.

This build-your-own filter approach is different from other social networks, which tend to control moderation on a platform level, leaving little up to the end user. This gives you control over what you see in your feed, but it’s also overwhelming to wrap your head around. We suggest popping into the moderation screen to see how it’s set up, and tweak any options you’d like:

Tap > Settings > Moderation > Bluesky Moderation Service to get to the settings. You can choose from three display options for each type of post: off (you’ll see it), warn (you’ll get a warning before you can view the post), or hide (you won’t see the post at all).

blueskymoderation.png

There’s currently no way to entirely opt out of Bluesky’s defaults, though the company does note that any separate client app (i.e., not the official Bluesky app) can set up its own rules. However, you can subscribe to custom label sets to layer on top of the Bluesky defaults. These labels are similar to the Block Together tool formerly supported by Twitter, and allow individual users or communities to create their own moderation filters. As with the default moderation options, you can choose to have anything that gets labeled hidden, or to see a warning when it’s flagged. These custom services can include all sorts of highly specific labels, like whether an image is suspected to be made with AI, includes content that may trigger phobias (like spiders), and more. There’s currently no easy way to search for these labeling services, but Bluesky notes a few here, and there’s a broad list here.

To enable one of these, search for the account name of a labeler, like “@xblock.aendra.dev” and then subscribe to it. Once you subscribe, you can toggle any labeling filters the account offers. If you decide you no longer want to use the service or you want to change the settings, you can do so on the same moderation page noted above.

blueskylabeler_2.png

Build Your Own Mute and Block Lists (or Subscribe to Others)

Custom moderation and labels don’t replace one of the most common tools in all of social media: the ability to block accounts entirely. Here, though, Bluesky mixes something new in with the old. Not only can you block and mute users, you can also subscribe to block lists published by other users, similar to tools like Block Party.

To mute or block someone, tap their profile picture to get to their profile, then the three-dot icon, then choose “Mute Account,” which means they won’t appear in your feed but can still see yours, or “Block Account,” which means they won’t appear in your feed and can’t view yours. Note that the list of your muted accounts is private, but your blocked accounts are public: anyone can see whom you’ve blocked, but not whom you’ve muted.

blueskymute.png

You can also use built-in algorithmic tools like muting specific words or phrases, as in the rough sketch below. Tap > Settings > Moderation and then tap “Mute words & tags.” Type in any word or phrase you want to mute, select whether to mute it when it appears in “text & tags” or in “tags only,” and it will be hidden from your feed.
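Under the hood, a mute-word filter is just a client-side check applied before a post is rendered. The sketch below is a conceptual illustration, not Bluesky’s actual implementation, showing the “text & tags” versus “tags only” distinction.

```python
# Conceptual mute-word filter: hide a post if any muted term appears in its
# hashtags, or (optionally) anywhere in its text. Not Bluesky's real code.
MUTED_WORDS = {"spoilers", "crypto"}

def visible(post_text: str, tags: list[str], tags_only: bool = False) -> bool:
    """Return False if the post should be hidden by the mute list."""
    tokens = {t.lower() for t in tags}
    if not tags_only:
        tokens |= set(post_text.lower().split())
    return MUTED_WORDS.isdisjoint(tokens)

print(visible("no spoilers please", ["tv"]))       # False: hidden
print(visible("lovely day outside", ["weather"]))  # True: shown
```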

Users can also experiment with more elaborate algorithmic curation options, such as using tools like Blacksky to completely reshape your feed.

If all this manual work makes you tired, then mute lists might be the answer. These are curated lists made by other Bluesky users that mass mute accounts. These mute lists, unlike muted accounts, are public, though, so keep that in mind before you create or sign up for one.

As with community-run moderation services, there’s currently no great way to search for these lists. To sign up for a mute list, you’ll need to know the username of someone who has created a block or mute list that you want to use. Search for their profile, tap the “Lists” option on their profile page, tap the list you’re interested in, then “Subscribe.” Confusingly, from this screen a “List” can either be a feed of posts you want to see (for example, a list of “people who work at EFF”) or a block or mute list. If it’s referred to as a “user list” and has the option to “Pin to home,” it’s a feed you can follow; otherwise it’s a mute or block list.

blueskymodlist.png

Clean Up Your Timeline

Is there some strange design decision in the app that makes you question why you use it? Perhaps you hate seeing reposts? Bluesky offers a few ways to choose how information is displayed in the app that can make it easier to use. These are essentially custom algorithms, which Bluesky calls “Feeds,” that filter and focus your content however you want.

Subscribe to (or Build Your Own) Custom Feeds

bluskyfeeds.png

Unlike most social networks, Bluesky gives you control over the algorithm that displays content. By default, you’ll get a chronological feed, but you can pick and choose from other options using custom feeds. These let you tinker with your feed, create entirely new ones, and more. Custom feeds make it so you can look at a feed of very specific types of posts, like only mutuals (people who also follow you back), quiet posters (people who don’t post much), news organizations, or just photos of cats. Here, unlike with some of the other custom tools, Bluesky does at least provide a way to search for feeds to use.

Tap > Settings > Feeds. You’ll find a list of your current feeds here, and if you scroll down you’ll find a search bar to look for new ones. These can be as broad as “Posters in Japan” or as focused as “Posts about Taylor Swift.” Once you pick a few, these custom feeds will appear at the top of your main timeline. If you ever want to rearrange the order they appear in, head back to the Feeds page, then tap the gear icon in the top right to get to a screen where you can change the order. If you’re still struggling to find useful feeds, this search engine might help.

Customize How Replies Work, and Other Little Things in Your Feed

blueskyfeedcleanup.png

Bluesky has one last trick to making it a little nicer to use than other social networks, and that’s the amount of control you get over your main “following” feed. From your feed, tap the controls icon in the top right to get to the “Following Feed Preferences” page.

Here, you can do everything from hiding replies to controlling which replies you do see (like only seeing replies to posts from people you follow, or only for posts with more than two replies). You can also hide reposts and quote posts, and even allow posts from some of your custom feeds to get injected into your main feed. For example, if you enable the “Show Posts from My Feeds” option and you have subscribed to “Quiet Posters,” you’ll occasionally get a post from someone you follow outside the strictly chronological timeline.

Final bonus tip: enable two-factor authentication. Bluesky rolled out email-based two-factor authentication well after many people signed up, so if you’ve never looked at your settings, you probably never noticed it was offered. We suggest you turn it on to better secure your account. Head to > Settings, then scroll down to “Require email code to log into your account,” and enable it.

Phew, if that all felt a little overwhelming, that’s because it is. Sure, many people can sign up for Bluesky and never touch any of this stuff, but for those who want a safe, customizable experience, the whole thing feels a bit too crunchy in its current state. And while this sort of empowerment, which gives users so many levers to control the content they see, is great, it’s also a lot. The good news is that Bluesky’s defaults are currently good enough to get started. But one of the benefits of community-based moderation, like we see on Mastodon or certain subreddits, is that volunteers do a lot of this heavy lifting for everyone. The AT Protocol is still new, however, and perhaps as more developers shape its future through new tools and services, these difficulties will be eased.

Thorin Klosowski

What’s the Difference Between Mastodon, Bluesky, and Threads?

2 months 1 week ago

The ongoing Twitter exodus sparked life into a new way of doing social media. Instead of a handful of platforms trying to control your life online, people are reclaiming control by building more open and empowering approaches to social media. Some of these you may have heard of: Mastodon, Bluesky, and Threads. Each is distinct, but their differences can be hard to understand as they’re rooted in their different technical approaches. 

The mainstream social web arguably became “five websites, each consisting of screenshots of text from the other four,” but in just the last few years radical and controversial changes to major platforms were a wake-up call to many, and are driving people to seek alternatives to the billionaire-driven monocultures.

Two major ecosystems have emerged in the wake of that exodus, both encouraging the variety and experimentation of the earlier web. The first, built on the ActivityPub protocol, is called the Fediverse. While it includes many different kinds of websites, Mastodon and Threads have taken off as Twitter alternatives that use this protocol. The other is the AT Protocol, powering the Twitter alternative Bluesky.

These protocols, a shared language between computer systems, allow websites to exchange information. It’s a simple concept you’re benefiting from right now, as protocols enable you to read this post in your choice of app or browser. Opening this freedom to social media has a huge impact, letting everyone send and receive posts their own preferred way. Even better, these systems are open to experiment and can cater to every niche, while still connecting to everyone in the wider network. You can leave the dead malls of platform capitalism, and find the services which cater to you.

To save you some trial and error, we have outlined some differences between these options and what that might mean for them down the road.

ActivityPub and AT Protocols

ActivityPub

The Fediverse goes a bit further back, but ActivityPub’s development by the World Wide Web Consortium (W3C) started in 2014. The W3C is a public-interest nonprofit organization that has played a vital role in developing open international standards that define the internet, like HTML and CSS (for better or worse). Its commitment to ActivityPub gives some assurance that the protocol will be developed in a stable and ostensibly consensus-driven process.

This protocol requires a host website (often called an “instance”) to maintain an “inbox” and “outbox” of content for all of its users, and selectively share this with other host websites on behalf of the users. In this federation model users are accountable to their instance, and instances are accountable to each other. Misbehaving users are banned from instances, and misbehaving instances are cut off from others through “defederation.” This creates some stakes for maintaining good behavior, for users and moderators alike.
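A toy model helps illustrate that federation pattern. The sketch below is conceptual only (real ActivityPub exchanges signed JSON activities over HTTP, and delivery follows follower relationships): each instance keeps inboxes for its users, accepts deliveries from other instances, and can defederate from hosts it no longer trusts.

```python
# Toy federation model: instances deliver posts to each other's user inboxes
# unless the receiving instance has defederated from the sender. Follower
# relationships and signatures are omitted for the sake of illustration.
class Instance:
    def __init__(self, domain: str):
        self.domain = domain
        self.inboxes = {}             # username -> list of received posts
        self.blocked_domains = set()  # hosts this instance has defederated from

    def receive(self, post: dict, from_domain: str) -> None:
        if from_domain in self.blocked_domains:
            return  # defederated: nothing from that host is accepted
        for inbox in self.inboxes.values():
            inbox.append(post)

b = Instance("b.example")
b.inboxes["alice"] = []
b.blocked_domains.add("spam.example")

b.receive({"text": "hello fediverse"}, from_domain="a.example")  # delivered
b.receive({"text": "buy pills"}, from_domain="spam.example")     # dropped
print(b.inboxes["alice"])  # [{'text': 'hello fediverse'}]
```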

ActivityPub handles a wide variety of uses, but the application most associated with the protocol is Mastodon. However, ActivityPub is also integral to Meta’s own Twitter alternative, Threads, which is taking small steps to connect with the Fediverse. Threads is a totally different application, solely hosted by Meta, and is ten times bigger than the Fediverse and Bluesky networks combined—making it the 500-pound gorilla in the room. Meta’s poor reputation on privacy, moderation, and censorship, has driven many Fediverse instances to vow they’ll defederate from Threads. Other instances still may connect with Threads to help users find a broader audience, and perhaps help sway Threads users to try Mastodon instead.

AT Protocol

The Authenticated Transfer (AT) Protocol is newer, sparked by Twitter co-founder Jack Dorsey in 2019. Like ActivityPub, it is an open source protocol. However, it is developed unilaterally by a private for-profit corporation, Bluesky PBLLC, though it may be handed over to a web standards body in the future. Bluesky remains mostly centralized. While it has recently opened up to small hosts, there are still some restrictions preventing major alternatives from participating. As developers further loosen control, we will likely see rapid changes in how people use the network.

The AT Protocol network design doesn’t put the same emphasis on individual hosts as the Fediverse does, and breaks up hosting, distribution, and curation into distinct services. It’s easiest to understand in comparison to traditional web hosting. Your information, like posts and profiles, is held in Personal Data Servers (PDSes)—analogous to the hosting of a personal website. This content is then fetched by relay servers, like web crawlers, which aggregate a “firehose” of everyone’s content without much alteration. To sort and filter this on behalf of the user, like a “search engine,” AT has Appview services, which give users control over what they see. When accessing the Appview through a client app or website, the user has many options to further filter, sort, and curate their feed, as well as “subscribe” to filters and labels someone else made.
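That division of labor is easier to see in a toy pipeline. The sketch below is conceptual only (the real AT Protocol uses signed data repositories and streaming APIs, and none of these function names mirror the real services): PDSes hold user records, a relay aggregates them into a firehose, and an appview curates that firehose for one user.

```python
# Toy AT Protocol pipeline: PDS hosting -> relay firehose -> appview curation.
# Purely illustrative; this does not reflect the real services' APIs.
pds_servers = {
    "pds1.example": {"alice.example": ["photo of a cat"]},
    "pds2.example": {"bob.example": ["close-up of a spider"]},
}

def relay_firehose(pdses):
    """Aggregate every record from every PDS without curation."""
    for host, users in pdses.items():
        for author, posts in users.items():
            for text in posts:
                yield {"author": author, "host": host, "text": text}

def appview(firehose, following, muted_words):
    """Curate the firehose for one user; labels and block lists would also apply here."""
    for record in firehose:
        if record["author"] in following and not any(w in record["text"] for w in muted_words):
            yield record

for record in appview(relay_firehose(pds_servers),
                      following={"alice.example", "bob.example"},
                      muted_words={"spider"}):
    print(record["author"], "->", record["text"])  # only the cat photo survives
```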

The result is a decentralized system that can be highly tailored while still offering global reach. However, this atomized system may also mean the community accountability encouraged by the host-centered model is missing, leaving users ultimately responsible for their own experience and moderation. Much will depend on how the network opens up to major hosts other than the Bluesky corporation.

User Experience

Mastodon, Threads, and Bluesky have a number of differences that are not essential to their underlying protocols but that affect users looking to get involved today. Mastodon and Bluesky are very customizable, so these differences just describe the prevalent trends.

Timeline Algorithm

Mastodon and most other ActivityPub sites prefer a straightforward chronological timeline of content from accounts you follow. Threads has a Meta-controlled algorithm, like Instagram. Bluesky defaults to a chronological feed, but opens algorithmic curation and filtering up to apps and users. 

User Design

All three services present a default appearance that will be familiar to anyone who has used Twitter. Both Mastodon and Bluesky have alternative clients with the only limit being a developer’s imagination. In fact, thanks to their open nature, projects like SkyBridge let users of one network use apps built for the other (in this case, Bluesky users using Mastodon apps). Threads does not have any alternate clients and requires a developer API, which is still in beta.

Onboarding 

Threads has the greatest advantage to getting people to sign up, as it has only one site which accepts an Instagram account as a login. Bluesky also has only one major option for signing up, but has some inherent flexibility in moving your account later on. That said, diving into a few extra setup steps can improve the experience. Finally, one could easily join Mastodon by joining the flagship instance, mastodon.social. However, given the importance of choosing the right instance, you may miss out on some of the benefits of the Fediverse and want to move your account later on. 

Culture

Threads has a reputation for being more brand-focused, with more commercial accounts and celebrities, and Meta has made no secret about their decisions to deemphasize political posts on the platform. Bluesky is often compared to early Twitter, with a casual tone and a focus on engaging with friends. Mastodon draws more people looking for community online, especially around shared interests, and each instance will have distinct norms.

Privacy Considerations

Neither ActivityPub nor the AT Protocol currently supports private, end-to-end encrypted messages, so they should not be used for sensitive information. For all services here, the majority of content on your profile will be accessible from the public web. That said, Mastodon, Threads, and Bluesky differ in how they handle user data.

Mastodon

Everything you do as a user is entrusted to the instance host, including posts, interactions, DMs, settings, and more. This means the owner of your instance can access this information, and is responsible for defending it against attackers and law enforcement. Tech-savvy people may choose to self-host, but users generally need to find an instance run by someone they trust.

The Fediverse muffles content sharing through a myriad of permissions set by users and instances. If your instance blocks a poorly moderated instance for example, the people on that other site will no longer be in your timelines nor able to follow your posts. You can also limit how messages are shared to further reduce the intended audience. While this can create a sense of community and closeness,  remember it is still public and instance hosts are always part of the equation. Direct messages, for example, will be accessible to your host and the host of the recipient.

If content needs to be changed or deleted after being shared, your instance can request these changes, and this is often honored. That said, once something is shared to the network, it may be difficult to “undo.”

Threads

All user content is entrusted to one host, in this case Meta, with a privacy policy similar to Instagram. Meta determines when information is shared with law enforcement, how it is used for advertising, how well protected it is from a breach, and so on.

Sharing with instances works differently for Threads, as Meta has more restricted interoperability. Currently, content sharing is one-way: Threads users can opt in to sharing their content with the Fediverse, but won’t see likes or replies. By the end of this year, Meta will allow Threads users to follow accounts on Mastodon.

Federation on Threads may always be restricted, and features like transferring one’s account to Mastodon may never be supported. Limits on sharing should not be confused with enhanced privacy or security, however. Public posts are just that—public—and you are still trusting your host (Meta) with private data like DMs (currently handled by Instagram). Instead, these restrictions, should they persist, should be seen as the minimum level of control over users that Meta deems necessary.

Bluesky

Bluesky, in contrast, is a very “loud” system. Every public message, interaction, follow, and block is hosted by your PDS and freely shared with everyone in the network. Every public post is for everyone, and is discovered only according to each user’s own app and filter preferences. There are ways to imitate smaller spaces with filtering and algorithmic feeds, such as with the Blacksky project, but these are open to everyone, and your posts will not be restricted to that curated space.

Direct messages are limited to the flagship Bluesky app, and can be accessed by the Bluesky moderation team. The project plans to eventually incorporate DMs into the protocol, including end-to-end encryption, but that is not currently supported. Deletion on Bluesky is handled simply by removing the content from your PDS, but once a message is shared with Relay and Appview services it may remain in circulation a while longer, according to their retention settings.

Moderation

Mastodon

Mastodon’s approach to moderation is often compared to subreddits, where the administrators of an instance are responsible for creating a set of rules and empowering a team of moderators to keep the community healthy. The result is a lot more variety in moderation experience, with the only boundary being an instance’s reputation in the broader Fediverse. Instances coordinating and “defederating” from problematic hosts has already been effective in the Fediverse. One former instance, Gab, was successfully cut off from the Fediverse for hosting extreme right-wing hate. The threat of defederation sets a baseline of behavior across the Fediverse, and from there users can choose instances based on reputation and on how aligned the hosts are with their own moderation preferences.

At their best, instances prioritize things other than growth. New members are welcomed and onboarded carefully as new community members, and hosts only grow the community if their moderation team can support it. Some instances even set a permanent cap on participation at a few thousand members to ensure a quality, intimate experience. Current members, too, can vote with their feet, and if needed can split off into their own new instance without disconnecting entirely.

While Mastodon has a lot going for it by giving users a choice, avoiding automation, and avoiding unsustainable growth, there are other evergreen moderation issues at play. Decisions can be arbitrary, inconsistent, and come with little recourse. These aren't just decisions impacting individual users, but also those affecting large swaths of them, when it comes to defederation. 

Threads

Threads, as alluded to when discussing privacy above, aims for a moderation approach more aligned with pre-2022 Twitter and Meta’s other current platforms like Instagram. That is, an impossible task of scaling moderation with endless growth of users.

As the largest of these services, however, Meta is in a position to set norms around moderation as it enters the Fediverse. A challenge for decentralized projects will be to ensure Meta’s size doesn’t make it the ultimate authority on moderation decisions, a pattern of re-centralization we’ve seen happen in email. Spam detection tools have created an environment where email, though an open standard, is in practice dominated by Microsoft and Google, as smaller services are frequently marked as spammers. A similar dynamic could play out with the federated social web, where Meta has the capacity to exclude smaller instances with little recourse. Other instances may copy these decisions or fear not to do so, lest they also be excluded. 

Bluesky

While in beta, Bluesky received a lot of praise and criticism for its moderation. However, up until recently, all moderation was handled by the centralized Bluesky company—not throughout the distributed AT network. The true nature of moderation structure on the network is only now being tested.

The AT Protocol relies on labeling services, aka “labelers,” for moderation. These special accounts use Bluesky’s Ozone tool to label posts with small pieces of metadata. You can also filter accounts with block lists published by other users, a lot like the Block Together tool formerly available on Twitter. The Appview aggregating your feed uses these labels and block lists to filter content. Arbitrary and irreconcilable moderation decisions are still a problem, as are some of the risks of automated moderation, but the impact is smaller because users are not deplatformed and remain accessible to people with different moderation settings. This also means problematic users don’t go anywhere and can still follow you; they are just less visible.
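As a rough sketch of how those pieces fit together (conceptual only, with invented data; this is not the actual Ozone or Appview code), labels published by a labeler are matched against each user’s per-label settings, and block lists drop accounts entirely:

```python
# Conceptual label-based moderation: a labeler attaches metadata to posts, and
# the user's settings decide whether a labeled post is shown, warned, or hidden.
posts = [
    {"uri": "at://example/post/1", "author": "friendly.example", "text": "nice sunset"},
    {"uri": "at://example/post/2", "author": "spammy.example", "text": "click here!!!"},
]
labels = {"at://example/post/2": ["spam"]}        # published by a labeler account
blocklist = {"harasser.example"}                  # from a subscribed block list
user_prefs = {"spam": "hide", "adult": "warn"}    # per-user, per-label settings

def render(post):
    if post["author"] in blocklist:
        return None                               # blocked accounts never render
    for label in labels.get(post["uri"], []):
        action = user_prefs.get(label, "off")
        if action == "hide":
            return None
        if action == "warn":
            return f"[warning: {label}] {post['text']}"
    return post["text"]

for post in posts:
    rendered = render(post)
    if rendered is not None:
        print(rendered)  # prints "nice sunset"; the spam-labeled post is hidden
```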

The AT network is censorship resistant, and conversely, it is difficult to meaningfully ban users. To be propagated in the network one only needs a PDS to host their account, and at least one Relay to spread that information. Currently Relays sit out of moderation, only scanning to restrict CSAM. In theory Relays could be more like a Fediverse instance and more accurately curate and moderate users. Even then, as long as one Relay carries the user they will be part of the network. PDSes, much like web hosts, may also choose to remove controversial users, but even in those cases PDSes are easy to self-host even on a low-power computer.

As with the internet generally, removing content relies on the fragility of those targeted: with enough resources and support, a voice will remain online. Without user-driven approaches to limit or deplatform content (like defederation), Bluesky services may instead be targeted by censorship at the infrastructure level, such as by ISPs.

Hosting and Censorship

With any internet service, there are some legal obligations when hosting user generated content. No matter the size, hosts may need to contend with DMCA takedowns, warrants for user data, cyber attacks,  blocking from authoritarian regimes, and other pressures from powerful interests. This decentralized approach to social media also relies on a shared legal protection for all hosts, Section 230.  By ensuring they are not held liable for user-generated content, this law provides the legal protection necessary for these platforms to operate and innovate.

Given the differences in the size of hosts and their approach to moderation, it isn’t surprising that each of these platforms will address platform liability and censorship differently.

Mastodon

Instance hosts, even for small communities, need to navigate these legal considerations as we outlined in our Fediverse legal primer. We have already seen some old patterns reemerge with these smaller, and often hobbyist, hosts struggling to defend themselves from legal challenges and security threats. While larger hosts have resources to defend against these threats, an advantage of the decentralized model is censors need to play whack-a-mole in a large network where messages flow freely across the globe. Together, the Fediverse is set up to be quite good at keeping information safe from censorship, but individual users and accounts are very susceptible to targeted censorship efforts and will struggle with rebuilding their presence.

Threads

Threads is the easiest to address, as Meta is already several platforms deep into addressing liability and speech concerns, and have the resources to do so. Unlike Mastodon or Bluesky, they also need to do so on a much larger scale with a larger target on their back as the biggest platform backed by a multi-billion dollar company. The unique challenge for Threads however will be how Meta decides to handle content from the rest of the Fediverse. Threads users will also need to navigate the perks and pitfalls of sticking with a major host with a spotty track record on censorship and disinformation.

Bluesky

Bluesky is not yet tested beyond the flagship Bluesky services, and raises a lot more questions. PDSes, Relays and even Appviews play some role in hosting, and can be used with some redundancies. For example your account on one PDS may be targeted, but the system is designed to be easy for users to change this host, self-host, or have multiple hosts while retaining one identity on the network.

Relays, in contrast, are more computationally demanding and may remain the most “centralized” service, acting as natural monopolies: users have some incentive to mostly follow the biggest relays. The result is a potential bottleneck susceptible to influence and censorship. However, if we see a wide variety of relays with different incentives, it becomes more likely that messages can be shared throughout the network despite censorship attempts.

You Might Not Have to Choose

With this overview, you can start diving into one of these new Twitter alternatives leading the way in a more free social web. Thanks to the open nature of these new systems, where you set up will become less important with improved interoperability.

Both ActivityPub and AT Protocol developers are receptive to making the two better at communicating with one another, and independent projects like Bridgy Fed, SkyBridge, RSS Parrot, and Mastofeed are already letting users get the best of both worlds. Today a growing number of projects speak both protocols, along with older ones like RSS. It may be that these paths toward a decentralized web become increasingly trivial as they converge, despite some early growing pains. Or the two may be eclipsed by yet another option. But their shared trajectory is moving us toward a more free, more open, and refreshingly weird social web, free of platform gatekeepers.

Rory Mir

Ah, Steamboat Willie. It’s been too long. 🐭

2 months 1 week ago

Did you know Disney’s Steamboat Willie entered the public domain this year? Since its 1928 debut, U.S. Congress has made multiple changes to copyright law, extending Disney’s ownership of this cultural icon for almost a century. A century.

Creativity should spark more creativity.

That’s not how intellectual property laws are supposed to work. In the United States, these laws were designed to give creators a financial incentive to contribute to science and culture. Then eventually the law makes this expression free for everyone to enjoy and build upon. Disney itself has reaped the abundant benefits of works in the public domain including Hans Christian Andersen’s “The Little Mermaid" and "The Snow Queen." Creativity should spark more creativity.

In that spirit, EFF presents to you this year’s EFF member t-shirt simply called “Fix Copyright":

Copyright Creativity is fun for the whole family.

The design references Steamboat Willie, but also tractor owners’ ongoing battle to repair their equipment despite threats from manufacturers like John Deere. These legal maneuvers are based on Section 1201 of the Digital Millennium Copyright Act or DMCA. In a recent appeals court brief, EFF and co-counsel Wilson Sonsini Goodrich & Rosati argued that Section 1201 chills free expression, impedes scientific research, and to top it off, is unenforceable because it’s too broad and violates the First Amendment. Ownership ain’t what it used to be, so let’s make it better.

We need you! Get behind this mission and support EFF's work as a member. Through EFF's 34th anniversary on July 10, you can help cut through the BS and make the world a little brighter—whether online or off.

Join EFF

Defend Creativity & Innovation Online

_________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Aaron Jue

Podcast Episode: AI in Kitopia

2 months 1 week ago

Artificial intelligence will neither solve all our problems nor likely destroy the world, but it could help make our lives better if it’s both transparent enough for everyone to understand and available for everyone to use in ways that augment us and advance our goals — not for corporations or government to extract something from us and exert power over us. Imagine a future, for example, in which AI is a readily available tool for helping people communicate across language barriers, or for helping vision- or hearing-impaired people connect better with the world.

(Embedded audio player.) Privacy info. This embed will serve content from simplecast.com

   

(You can also find this episode on the Internet Archive and on YouTube.)

This is the future that Kit Walsh, EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects, and EFF Senior Staff Technologist Jacob Hoffman-Andrews, are working to bring about. They join EFF’s Cindy Cohn and Jason Kelley to discuss how AI shouldn’t be a tool to cash in, or to classify people for favor or disfavor, but instead to engage with technology and information in ways that advance us all. 

In this episode you’ll learn about: 

  • The dangers in using AI to determine who law enforcement investigates, who gets housing or mortgages, who gets jobs, and other decisions that affect people’s lives and freedoms. 
  • How “moral crumple zones” in technological systems can divert responsibility and accountability from those deploying the tech. 
  • Why transparency and openness of AI systems — including training AI on consensually obtained, publicly visible data — is so important to ensure systems are developed without bias and to everyone’s benefit. 
  • Why “watermarking” probably isn’t a solution to AI-generated disinformation. 

Kit Walsh is a senior staff attorney at EFF, serving as Director of Artificial Intelligence & Access to Knowledge Legal Projects. She has worked for years on issues of free speech, net neutrality, copyright, coders' rights, and other issues that relate to freedom of expression and access to knowledge, supporting the rights of political protesters, journalists, remix artists, and technologists to agitate for social change and to express themselves through their stories and ideas. Before joining EFF, Kit led the civil liberties and patent practice areas at the Cyberlaw Clinic, part of Harvard University's Berkman Klein Center for Internet and Society; earlier, she worked at the law firm of Wolf, Greenfield & Sacks, litigating patent, trademark, and copyright cases in courts across the country. Kit holds a J.D. from Harvard Law School and a B.S. in neuroscience from MIT, where she studied brain-computer interfaces and designed cyborgs and artificial bacteria. 

Jacob Hoffman-Andrews is a senior staff technologist at EFF, where he is lead developer on Let's Encrypt, the free and automated Certificate Authority; he also works on EFF's Encrypt the Web initiative and helps maintain the HTTPS Everywhere browser extension. Before working at EFF, Jacob was on Twitter's anti-spam and security teams. On the security team, he implemented HTTPS-by-default with forward secrecy, key pinning, HSTS, and CSP; on the anti-spam team, he deployed new machine-learned models to detect and block spam in real-time. Earlier, he worked on Google’s maps, transit, and shopping teams.

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here

Transcript

KIT WALSH
Contrary to some marketing claims, AI is not the solution to all of our problems. So I'm just going to talk about how AI exists in Kitopia. And in particular, the technology is available for everyone to understand. It is available for everyone to use in ways that advance their own values rather than hard coded to advance the values of the people who are providing it to you and trying to extract something from you and as opposed to embodying the values of a powerful organization, public or private, that wants to exert more power over you by virtue of automating its decisions.
So it can make more decisions classifying people, figuring out whom to favor, whom to disfavor. I'm defining Kitopia a little bit in terms of what it's not, but to get back to the positive vision, you have this intellectual commons of research development of data that we haven't really touched on privacy yet, but data that is sourced in a consensual way and when it's, essentially, one of the things that I would love to have is a little AI muse that actually does embody my values and amplifies my ability to engage with technology and information on the Internet in a way that doesn't feel icky or oppressive and I don't have that in the world yet.

CINDY COHN
That’s Kit Walsh, describing an ideal world she calls “Kitopia”. Kit is a senior staff attorney at the Electronic Frontier Foundation. She works on free speech, net neutrality and copyright and many other issues related to freedom of expression and access to knowledge. In fact, her full title is EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects. So, where is Kitopia, you might ask? Well we can’t get there from here - yet. Because it doesn’t exist. Yet. But here at EFF we like to imagine what a better online world would look like, and how we will get there and today we’re joined by Kit and by EFF’s Senior Staff Technologist Jacob Hoffman-Andrews. In addition to working on AI with us, Jacob is a lead developer on Let's Encrypt, and his work on that project has been instrumental in helping us encrypt the entire web. I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

JACOB HOFFMAN-ANDREWS
I think in my ideal world people are more able to communicate with each other across language barriers, you know, automatic translation, transcription of the world for people who are blind or for deaf people to be able to communicate more clearly with hearing people. I think there's a lot of ways in which AI can augment our weak human bodies in ways that are beneficial for people and not simply increasing the control that their governments and their employers have over their lives and their bodies.

JASON KELLEY
We’re talking to Kit and Jacob both, because this is such a big topic that we really need to come at it from multiple angles to make sense of it and to figure out the answer to the really important question which is, How can AI actually make the world we live in, a better place?

CINDY COHN
So while many other people have been trying to figure out how to cash in on AI, Kit and Jacob have been looking at AI from a public interest and civil liberties perspective on behalf of EFF. And they’ve also been giving a lot of thought to what an ideal AI world looks like.

JASON KELLEY
AI can be more than just another tool that’s controlled by big tech. It really does have the potential to improve lives in a tangible way. And that’s what this discussion is all about. So we’ll start by trying to wade through the hype, and really nail down what AI actually is and how it can and is affecting our daily lives.

KIT WALSH
The confusion is understandable because AI is being used as a marketing term quite a bit, rather than as an abstract concept, rather than as a scientific concept.
And the ways that I think about AI, particularly in the decision-making context, which is one of our top priorities in terms of where we think that AI is impacting people's rights, is first I think about what kind of technology are we really talking about because sometimes you have a tool that actually no one is calling AI, but it is nonetheless an example of algorithmic decision-making.
That also sounds very fancy. This can be a fancy computer program to make decisions, or it can be a buggy Excel spreadsheet that litigators discover is actually just omitting important factors when it's used to decide whether people get health care or not in a state health care system.

CINDY COHN
You're not making those up, Kit. These are real examples.

KIT WALSH
That’s not a hypothetical. Unfortunately, it’s not a hypothetical, and the people who litigated that case lost some clients because when you're talking about not getting health care that can be life or death. And machine learning can either be a system where you – you, humans, code a reinforcement mechanism. So you have sort of random changes happening to an algorithm, and it gets rewarded when it succeeds according to your measure of success, and rejected otherwise.
It can be training on vast amounts of data, and that's really what we've seen a huge surge in over the past few years, and that training can either be what's called unsupervised, where you just ask your system that you've created to identify what the patterns are in a bunch of raw data, maybe raw images, or it can be supervised in the sense that humans, usually low paid humans, are coding their views on what's reflected in the data.
So I think that this is a picture of a cow, or I think that this picture is adult and racy. So some of these are more objective than others, and then you train your computer system to reproduce those kinds of classifications when it makes new things that people ask for with those keywords, or when it's asked to classify a new thing that it hasn't seen before in its training data.
So that's really a very high level oversimplification of the technological distinctions. And then because we're talking about decision-making, it's really important who is using this tool.
Is this the government which has all of the power of the state behind it and which administers a whole lot of necessary public benefits - that is using decisions to decide who is worthy and who is not to obtain those benefits? Or, who should be investigated? What neighborhoods should be investigated?
We'll talk a little bit more about the use in law enforcement later on, but it's also being used quite a bit in the private sector to determine who's allowed to get housing, whether to employ someone, whether to give people mortgages, and that's something that impacts people's freedoms as well.

CINDY COHN
So Jacob, two questions I used to distill down on AI decision-making are, who is the decision-making supposed to be serving and who bears the consequences if it gets it wrong? And if we think of those two framing questions, I think we get at a lot of the issues from a civil liberties perspective. That sound right to you?

JACOB HOFFMAN-ANDREWS
Yeah, and, you know, talking about who bears the consequences when an AI or technological system gets it wrong, sometimes it's the person that system is acting upon, the person who's being decided whether they get healthcare or not and sometimes it can be the operator.
You know, it's, uh, popular to have kind of human in the loop, like, oh, we have this AI decision-making system that's maybe not fully baked. So there's a human who makes the final call. The AI just advises the human and, uh, there's a great paper by Madeleine Clare Elish describing this as a form of moral crumple zones. Uh, so, you may be familiar in a car, modern cars are designed so that in a collision, certain parts of the car will collapse to absorb the force of the impact.
So the car is destroyed but the human is preserved. And, in some human in the loop decision making systems often involving AI, it's kind of the reverse. The human becomes the crumple zone for when the machine screws up. You know, you were supposed to catch the machine screwup. It didn't screw up in over a thousand iterations and then the one time it did, well, that was your job to catch it.
And, you know, these are obviously, you know, a crumple zone in a car is great. A moral crumple zone in a technological system is a really bad idea. And it takes away responsibility from the deployers of that system who ultimately need to bear the responsibility when their system harms people.

CINDY COHN
So I wanna ask you, what would it look like if we got it right? I mean, I think we do want to have some of these technologies available to help people make decisions.
They can find patterns in giant data probably better than humans can most of the time. And we'd like to be able to do that. So since we're fixing the internet now, I want to stop you for a second and ask you how would we fix the moral crumple zone problem or what were the things we think about to do that?

JACOB HOFFMAN-ANDREWS
You know, I think for the specific problem of, you know, holding say a safety driver or like a human decision-maker responsible for when the AI system they're supervising screws up, I think ultimately what we want is that the responsibility can be applied all the way up the chain to the folks who decided that that system should be in use. They need to be responsible for making sure it's actually a safe, fair system that is reliable and suited for purpose.
And you know, when a system is shown to bring harm, for instance, you know, a self-driving car that crashes into pedestrians and kills them, you know, that needs to be pulled out of operation and either fixed or discontinued.

CINDY COHN
Yeah, it made me think a little bit about, you know, kind of a change that was made, I think, by Toyota years ago, where they let the people on the front line stop the line, right? Um, I think one thing that comes out of that is you need to let the people who are in the loop have the power to stop the system, and I think all too often we don't.
We devolve the responsibility down to that person who's kind of the last fair chance for something but we don't give them any responsibility to raise concerns when they see problems, much less the people impacted by the decisions.

KIT WALSH
And that’s also not an accident of the appeal of these AI systems. It's true that you can't hold a machine accountable really, but that doesn't deter all of the potential markets for the AI. In fact, it's appealing for some regulators, some private entities, to be able to point to the supposed wisdom and impartiality of an algorithm, which if you understand where it comes from, the fact that it's just repeating the patterns or biases that are reflected in how you trained it, you see it's actually, it's just sort of automated discrimination in many cases and that can work in several ways.
In one instance, it's intentionally adopted in order to avoid the possibility of being held liable. We've heard from a lot of labor rights lawyers that when discriminatory decisions are made, they're having a lot more trouble proving it now because people can point to an algorithm as the source of the decision.
And if you were able to get insight in how that algorithm were developed, then maybe you could make your case. But it's a black box. A lot of these things that are being used are not publicly vetted or understood.
And it's especially pernicious in the context of the government making decisions about you, because we have centuries of law protecting your due process rights to understand and challenge the ways that the government makes determinations about policy and about your specific instance.
And when those decisions and when those decision-making processes are hidden inside an algorithm then the old tools aren't always effective at protecting your due process and protecting the public participation in how rules are made.

JASON KELLEY
It sounds like in your better future, Kit, there's a lot more transparency into these algorithms, into this black box that's sort of hiding them from us. Is that part of what you see as something we need to improve to get things right?

KIT WALSH
Absolutely. Transparency and openness of AI systems is really important to make sure that as it develops, it develops to the benefit of everyone. It's developed in plain sight. It's developed in collaboration with communities and a wider range of people who are interested in and affected by the outcomes, particularly in the government context, though I'll speak to the private context as well. When the government passes a new law, that's not done in secret. When a regulator adopts a new rule, that's also not done in secret. Sure, there are exceptions.

CINDY COHN
Right, but that’s illegal.

JASON KELLEY
Yeah, that's the idea. Right. You want to get away from that also.

KIT WALSH
Yeah, if we can live in Kitopia for a moment where, where these things are, are done more justly, within the framework of government rulemaking, if that's occurring in a way that affects people, then there is participation. There's meaningful participation. There's meaningful accountability. And in order to meaningfully have public participation, you have to have transparency.
People have to understand what the new rule is that's going to come into force. And because of a lot of the hype and mystification around these technologies, they're being adopted under what's called a procurement process, which is the process you use to buy a printer.
It's the process you use to buy an appliance, not the process you use to make policy. But these things embody policy. They are the rule. Sometimes when the legislature changes the law, the tool doesn't get updated and it just keeps implementing the old version. And that means that the legislature's will is being overridden by the designers of the tool.

JASON KELLEY
You mentioned predictive policing, I think, earlier, and I wonder if we could talk about that for just a second because it's one way where I think we at EFF have been thinking a lot about how this kind of algorithmic decision-making can just obviously go wrong, and maybe even should never be used in the first place.
What we've seen is that it's sort of, you know, very clearly reproduces the problems with policing, right? But how does AI or this sort of predictive nature of the algorithmic decision-making for policing exacerbate these problems? Why is it so dangerous I guess is the real question.

KIT WALSH
So one of the fundamental features of AI is that it looks at what you tell it to look at. It looks at what data you offer it, and then it tries to reproduce the patterns that are in it. Um, in the case of policing, as well as related issues around decisions for pretrial release and parole determinations, you are feeding it data about how the police have treated people, because that's what you have data about.
And the police treat people in harmful, racist, biased, discriminatory, and deadly ways that it's really important for us to change, not to reify into a machine that is going to seem impartial and seem like it creates a veneer of justification for those same practices to continue. And sometimes this happens because the machine is making an ultimate decision, but that's not usually what's happening.
Usually the machine is making a recommendation. And one of the reasons we don't think that having a human in the loop is really a cure for the discriminatory harms is that humans are more likely to follow the AI if it gives them cover for a biased decision that they're going to make. And relatedly, some humans, a lot of people, develop trust in the machine and wind up following it quite a bit.
So in these contexts, if you really wanted to make predictions about where a crime was going to occur, well it would send you to Wall Street. And that's not, that's not the result that law enforcement wants.
But, first of all, you would actually need data about where crimes occur, and generally people who don't get caught by the police are not filling out surveys to say, here are the crimes I got away with so that you can program a tool that's going to do better at sort of reflecting some kind of reality that you're trying to capture. You only know how the system has treated people so far and all that you can do with AI technology is reinforce that. So it's really not an appropriate problem to try to solve with this technology.

CINDY COHN
Yeah, our friends at Human Rights Data Analysis Group who did some of this work said, you know, we call it predictive policing, but it's really predicting the police because we're using what the police already do to train up a model, and of course it's not going to fix the problems with how police have been acting in the past. Sorry to interrupt. Go on.

KIT WALSH
No, to build on that, by definition, it thinks that the past behavior is ideal, and that's what it should aim for. So, it's not a solution to any kind of problem where you're trying to change a broken system.

CINDY COHN
And in fact, what they found in the research was that the AI system will not only replicate what the police do, it will double down on the bias because it's seeing a small trend and it will increase the trend. And I don't remember the numbers, but it's pretty significant. So it's not just that the AI system will replicate what the police do. What they found in looking at these systems is that the AI systems increase the bias in the underlying data.
It's really important that we continue to emphasize the ways in which AI and machine learning are already being used, often in ways that people may not see but that dramatically impact them. But right now, what's front of mind for a lot of people is generative AI. And I think many, many more people have started playing around with that. And so I want to start with how we think about generative AI and the issues it brings. And Jacob, I know you have some thoughts about that.

JACOB HOFFMAN-ANDREWS
Yeah. To call back to, at the beginning you asked about, how do we define AI? I think one of the really interesting things in the field is that it's changed so much over time. And, you know, when computers first became broadly available, you know, people have been thinking for a very long time, what would it mean for a computer to be intelligent? And for a while we thought, wow, you know, if a computer could play chess and beat a human, we would say that's an intelligent computer.
Um, if a computer could recognize, uh, what's in an image, is this an image of a cat or a cow - that would be intelligence. And of course now they can, and we don't consider it intelligence anymore. And you know, now we might say if a computer could write a term paper, that's intelligence and I don't think we're there yet, but the development of chatbots does make a lot of people feel like we're closer to intelligence because you can have a back and forth and you can ask questions and receive answers.
And some of those answers will be confabulations, but some percentage of the time they'll be right. And it starts to feel like something you're interacting with. And I think, rightly so, people are worried that this will destroy jobs for writers and for artists. And to an earlier question about, you know, what does it look like if we get it right, I think, you know, the future we want is one where people can write beautiful things and create beautiful things and, you know, still make a great living at it and be fulfilled and safe in their daily needs and be recognized for that. And I think that's one of the big challenges we're facing with generative AI.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. How to Fix the Internet is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our discussion with Kit and Jacob about AI: the good, the bad, and what could be better.

CINDY COHN
There’s been a lot of focus on the dark side of generative AI and the idea of using copyright to address those problems has emerged. We have worries about that as a way to sort out between good and bad uses of AI, right Kit?

KIT WALSH
Absolutely. We have had a lot of experience with copyright being used as a tool of censorship, not only against individual journalists and artists and researchers, but also against entire mediums for expression, against libraries, against the existence of online platforms where people are able to connect and copyright not only lasts essentially forever, it comes with draconian penalties that are essentially a financial death sentence for the typical person in the United States. So in the context of generative AI, there is a real issue with the potential to displace creative labor. And it's a lot like the issues of other forms of automation that displace other forms of labor.
And it's not always the case that an equal number of new jobs are created, or that those new jobs are available to the people who have been displaced. And that's a pretty big social problem that we have. In Kitopia, we have AI and it's used so that there's less necessary labor to achieve a higher standard of living for people, and we should be able to be excited about automation of labor tasks that aren't intrinsically rewarding.
One of the reasons that we're not is because the fruits of that increased production flow to the people who own the AI, not to the people who were doing that labor, who now have to find another way to trade their labor for money or else become homeless and starve and die, and that's cruel.
It is the world that we're living in so it's really understandable to me that an artist is going to want to reach for copyright, which has the potential of big financial damages against someone who infringes, and is the way that we've thought about monetization of artistic works. I think that way of thinking about it is detrimental, but I also think it's really understandable.
One of the reasons why the particular legal theories in the lawsuits against generative AI technologies are concerning is because they wind up stretching existing doctrines of copyright law. So in particular, the very first case against Stable Diffusion argued that you were creating an infringing derivative work when you trained your model to recognize the patterns in five billion images.
It's a derivative work of each and every one of them. And that can only succeed as a legal theory if you throw out the existing understanding of what a derivative work is, that it has to be substantially similar to a thing that it's infringing and that limitation is incredibly important for human creativity.
The elements of my work that you might recognize from my artistic influences in the ordinary course of artistic borrowing and inspiration are protected. I'm able to make my art without people coming after me because I like to draw eyes the same way as my inspiration or so on, because ultimately the work is not substantially similar.
And if we got rid of that protection, it would be really bad for everybody.
But at the same time, you can see how someone might say, why should I pay a commission to an artist if I can get something in the same style? To which I would say, try it. It's not going to be what you want because art is not about replicating patterns that are found in a bunch of training data.
It can be a substitute for stock photography or other forms of art that are on the lower end of how much creativity is going into the expression, but for the higher end, I think that part of the market is safe. So I think all artists are potentially impacted by this. I'm not saying only bad artists have to care, but there is this real impact.
Their financial situation is precarious already, and they deserve to make a living, and this is a bandaid because we don't have a better solution in place to support people and let them create in a way that is in accord with their values and their goals. We really don't have that either in the situation where people are primarily making their income doing art that a corporation wants them to make to maximize its products.
No artist wants to create assets for content. Artists want to express and create new beauty and new meaning and the system that we have doesn't achieve that. We can certainly envision better ones but in the meantime, the best tool that artists have is banding together to negotiate with collective power, and it's really not a good enough tool at this point.
But I also think there's a lot of room to ethically use generative AI if you're working with an artist and you're trying to communicate your vision for something visual. Maybe you're going to use an AI tool in order to make something that has some of the elements you're looking for, and then say: this is what I want to pay you to draw. I want this kind of pose, right? But more unicorns.

JASON KELLEY
And I think while we're talking about these sort of seemingly good, but ultimately dangerous solutions for the different sort of problems that we're thinking about now more than ever because of generative AI, I wanted to talk with Jacob a little bit about watermarking. And this is meant to solve a sort of problem of knowing what is and is not generated by AI.
And people are very excited about this idea that through some sort of, well, actually you just explain Jacob, cause you are the technologist. What is watermarking? Is this a good idea? Will this work to help us understand and distinguish between AI-generated things and things that are just made by people?

JACOB HOFFMAN-ANDREWS
Sure. So a very real and closely related risk of generative AI is that it is - it will, and already is - flooding the internet with bullshit. Uh, you know, many of the articles you might read on any given topic, these days the ones that are most findable are often generated by AI.
And so an obvious next step is, well, what if we could recognize the stuff that's written by AI or the images that are generated by AI, because then we could just skip that. You know, I wouldn't read this article cause I know it's written by AI or you can go even a step further, you could say, well, maybe search engines should downrank things that were written by AI or social networks should label it or allow you to opt out of it.
You know, there's a lot of question about, if we could immediately recognize all the AI stuff, what would we do about it? There's a lot of options, but the first question is, can we even recognize it? So right off the bat, you know, when ChatGPT became available to the public, there were people offering ChatGPT detectors. You know, you could look at this content and, you know, you can kind of say, oh, it tends to look like this.
And you can try to write something that detects its output, and the short answer is it doesn't work and it's actually pretty harmful. A number of students have been harmed because their instructors have run their work through a ChatGPT detector, an AI detector that has incorrectly labeled their writing as AI-generated.
There's not a reliable pattern in the output that you can always see. Well, what if the makers of the AI put that pattern there? And, you know, for a minute, let's switch from text based to image based stuff. Jason, have you ever gone to a stock photo site to download a picture of something?

JASON KELLEY
I sadly have.

JACOB HOFFMAN-ANDREWS
Yeah. So you might recognize the images they have there. They want to make sure you pay for the image before you use it. So there's some text written across it in a kind of ghostly white diagonal. It says, this is from, say, shutterstock.com. So that's a form of watermark. If you just went and downloaded that image rather than paying for the cleaned up version, there's a watermark on it.
So the concept of watermarking for AI provenance is that it would be invisible. It would be kind of mixed into the pixels at such a subtle level that you as a human can't detect it, but a computer program designed to detect that watermark could. So you could imagine the AI might generate a picture and then, in the top left pixel, increase its shade by the smallest amount, and then the next one, decrease it by the smallest amount, and so on throughout the whole image.
And you can encode a decent amount of data that way, like what system produced it, when, all that information. And actually EFF has published some interesting research in the past on a similar system in laser printers, where little yellow dots are embedded by most laser printers that you can get, as an anti-counterfeiting measure.
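To make that idea concrete, here is a minimal sketch of the kind of imperceptible pixel-level watermark Jacob is describing, written in Python with NumPy. The function names and the one-bit-per-pixel scheme are illustrative assumptions, not any vendor's actual provenance system; real AI watermarks use far more robust and redundant encodings.

```python
# Illustrative toy example only: a least-significant-bit watermark of the kind
# described above, where pixel values are nudged by amounts too small to see.
# The function names and one-bit-per-pixel layout are assumptions for this sketch.
import numpy as np

def embed_watermark(image: np.ndarray, payload_bits: list[int]) -> np.ndarray:
    """Hide payload_bits in the least-significant bits of the first pixels."""
    flat = image.astype(np.uint8).flatten()
    for i, bit in enumerate(payload_bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the lowest bit, then set it to the payload bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> list[int]:
    """Read the hidden bits back out of the least-significant bits."""
    flat = image.astype(np.uint8).flatten()
    return [int(flat[i] & 1) for i in range(n_bits)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for an AI-generated picture: a random 64x64 RGB image.
    picture = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    payload = [1, 0, 1, 1, 0, 0, 1, 0]  # e.g. an ID for the model that made the image
    marked = embed_watermark(picture, payload)
    assert extract_watermark(marked, len(payload)) == payload
    # Each channel value changes by at most 1, which is why viewers can't see the mark,
    # and why ordinary edits such as resizing or re-compressing can easily destroy it.
```

Even a toy scheme like this shows why stripping the mark is easy: any edit that touches those low-order bits, like recompressing or resizing the image, wipes the signal out, which is the problem the conversation turns to next.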

JASON KELLEY
This is one of our most popular discoveries that comes back every few years, if I remember right, because people are just gobsmacked that they can't see them, but they're there, and that they have this information. It's a really good example of how this works.

CINDY COHN
Yeah, and it's used to make sure that they can trace back to the printer that printed anything on the off chance that what you're printing is fake money.

JACOB HOFFMAN-ANDREWS
Indeed, yeah.
The other thing people really worry about is that AI will make it a lot easier to generate disinformation and then spread it and of course if you're generating disinformation it's useful to strip out the watermark. You would maybe prefer that people don't know it's AI. And so you're not limited to resizing or cropping an image. You can actually, you know, run it through a program. You can see what the shades of all the different pixels are. And you, in theory probably know what the watermarking system in use is. And given that degree of flexibility, it seems very, very likely - and I think past technology has proven this out - that it's not going to be hard to strip out the watermark. And in fact, it's not even going to be hard to develop a program to automatically strip out the watermark.

CINDY COHN
Yep. And you, you end up in a cat and mouse game where the people who you most want to catch, who are doing sophisticated disinformation, say to try to upset elections, are going to be able to either strip out the watermark or fake it and so you end up where the things that you most want to identify are probably going to trick people. Is that, is that the way you're thinking about it?

JACOB HOFFMAN-ANDREWS
Yeah, that's pretty much what I'm getting at. I wanted to say one more thing on, um, watermarking. I'd like to talk about chainsaw dogs. There's this popular genre of image on Facebook right now of a man and his chainsaw-carved wooden dog, often accompanied by a caption like, look how great my dad is, he carved this beautiful thing.
And these are mostly AI generated and they receive, you know, thousands of likes and clicks and go wildly viral. And you can imagine a weaker form of the disinformation claim of say, ‘Well, okay, maybe state actors will strip out watermarks so they can conduct their disinformation campaigns, but at least adding watermarks to AI images will prevent this proliferation of garbage on the internet.’
People will be able to see, oh, that's a fake. I'm not going to click on it. And I think the problem with that is even people who are just surfing for likes on social media actually love to strip out credits from artists already. You know, cartoonists get their signatures stripped out and in the examples of these chainsaw dogs, you know, there is actually an original.
There's somebody who made a real carving of a dog. It was very skillfully executed. And these are generated using kind of image to image AI, where you take an image and you generate an image that has a lot of the same concepts. A guy, a dog, made of wood and so they're already trying to strip attribution in one way.
And I think likely they would also find a way to strip any watermarking on the images they're generating.

CINDY COHN
So Jacob, we heard earlier about Kit's ideal world. I'd love to hear about the future world that Jacob wants us to live in.

JACOB HOFFMAN-ANDREWS
Yeah. I think the key thing is, you know, that people are safer in their daily lives than they are today. They're not worried about their livelihoods going away. I think this is a recurring theme when most new technology is invented that, you know, if it replaces somebody's job, and that person's job doesn't get easier, they don't get to keep collecting a paycheck. They just lose their job.
So I think in the ideal future, people have a means to live and to be fulfilled in their lives, to do meaningful work still. And also in general, human agency is expanded rather than restricted. The promise of a lot of technologies is that, you know, you can do more in the world, you can achieve the conditions you want in your life.

CINDY COHN
Oh that sounds great. I want to come back to you Kit. We've talked a little about Kitopia, including at the top of the show. Let's talk a little bit more. What else are we missing?

KIT WALSH
So in Kitopia, people are able to use AI if it's a useful part of their artistic expression, they're able to use AI if they need to communicate something visual when I'm hiring a concept artist, when I am getting a corrective surgery, and I want to communicate to the surgeon what I want things to look like.
There are a lot of ways in which words don't communicate as well as images. And not everyone has the skill or the time or interest to go and learn a bunch of photoshop to communicate with their surgeon. I think it would be great if more people were interested and had the leisure and freedom to do visual art.
But in Kitopia, that's something that you have because your basic needs are met. And in part, automation is something that should help us do that more. The ability to automate aspects of, of labor should wind up benefiting everybody. That's the vision of AI in Kitopia.

CINDY COHN
Nice. Well that's a wonderful place to end. We're all gonna pack our bags and move to Kitopia. And hopefully by the time we get there, it’ll be waiting for us.
You know, Jason, that was such a rich conversation. I'm not sure we need to do a little recap like we usually do. Let's just close it out.

JASON KELLEY
Yeah, you know, that sounds good. I'll take it from here. Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we would love to hear from you. You can visit EFF.org slash podcasts to click on listener feedback and let us know what you think of this or any other episode.
You can also get a transcript or information about this episode and the guests. And while you're there of course, you can become an EFF member, pick up some merch, or just see what's happening in digital rights this or any other week. This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.
In this episode, you heard Kalte Ohren by Alex featuring starfrosch & Jerry Spoon; lost Track by Airtone; Come Inside by Zep Hume; Xena's Kiss/Medea's Kiss by MWIC; Homesick by Siobhan D; and Drops of H2O (The Filtered Water Treatment) by J.Lang. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We’ll see you next time. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

 

Josh Richman

California’s Facial Recognition Bill Is Not the Solution We Need

2 months 2 weeks ago

California Assemblymember Phil Ting has introduced A.B. 1814, a bill that would supposedly regulate police use of facial recognition technology. The problem is that it would do little to actually change the status quo of how police use this invasive and problematic technology. Police use of facial recognition poses a massive risk to civil liberties, privacy, and even our physical safety, as the technology has been known to wrongfully sic armed police on innocent people, particularly Black men and women. That’s why this issue is too important to address with inadequate half-measures like A.B. 1814.

The bill dictates that police should examine facial recognition matches “with care” and that a match should not be the sole basis for probable cause for an arrest or search warrant. And while we agree it is a big issue that police repeatedly use the matches spit out by a computer as the only justification for arresting people, the limit this bill imposes is, in theory, already in place: police departments and facial recognition companies alike maintain that police cannot justify an arrest using only algorithmic matches. So what would this bill really change? It only gives the appearance of addressing face recognition technology's harms, while in effect allowing the practice to continue.

Additionally, A.B. 1814 gives defendants no real recourse against police who violate its requirements. There is neither a suppression remedy nor a usable private cause of action. The bill also lacks transparency requirements that would compel police departments to reveal whether they used face recognition in the first place. This means that if police wrongfully arrested someone because a computer said they looked similar to a suspect, that person would likely never even know they had grounds to sue the department for damages, unless they uncovered it while being prosecuted.

Despite these attempts at leaky bureaucratic reform, police may continue to use this technology to identify people at protests, to track marginalized individuals when they visit doctors or have other personal encounters, and to deploy it, overtly or inadvertently, in any number of other civil liberties-chilling ways. For this reason, EFF continues to advocate for a complete ban on government use of face recognition, an approach that cities across the United States have already taken by enacting their own bans. Until the day comes that California lawmakers recognize the urgent need to ban government use of face recognition, we will continue to differentiate between bills that will make a serious difference in the lives of the surveilled and those that will not. That is why we are urging Assemblymembers to vote no on A.B. 1814.

Matthew Guariglia

The Surgeon General's Fear-Mongering, Unconstitutional Effort to Label Social Media

2 months 2 weeks ago

Surgeon General Vivek Murthy’s extraordinarily misguided and speech-chilling call this week to label social media platforms as harmful to adolescents is shameful fear-mongering that lacks scientific evidence and turns the nation’s top physician into a censor. This claim is particularly alarming given the far more complex and nuanced picture that studies have drawn about how social media and young people’s mental health interact.

The Surgeon General’s suggestion that speech be labeled as dangerous is extraordinary. Communications platforms are not comparable to unsafe food, unsafe cars, or cigarettes, all of which are physical products that can cause physical injury. Government warnings on speech implicate our fundamental rights to speak, to receive information, and to think. Murthy’s effort will harm teens, not help them, and the announcement puts the surgeon general in the same category as censorial public officials like Anthony Comstock.

There is no scientific consensus that social media is harmful to children's mental health. Social science shows that social media can help children overcome feelings of isolation and anxiety. This is particularly true for LGBTQ+ teens. EFF recently conducted a survey in which young people told us that online platforms are the safest spaces for them, where they can say the things they can't in real life ‘for fear of torment.’ They say these spaces have improved their mental health and given them a ‘haven’ to talk openly and safely. This comports with Pew Research findings that teens are more likely to report positive than negative experiences in their social media use.

Additionally, Murthy’s effort to label social media creates significant First Amendment problems in its own right, as any government labeling effort would be compelled speech and courts are likely to strike it down.

Young people’s use of social media has been under attack for several years. Several states have recently introduced and enacted unconstitutional laws that would require age verification on social media platforms, effectively banning some young people from them. Congress is also debating several federal censorship bills, including the Kids Online Safety Act and the Kids Off Social Media Act, that would seriously impact young people’s ability to use social media platforms without censorship. Last year, Montana banned the video-sharing app TikTok, citing both the app’s Chinese ownership and the state’s interest in protecting minors from harmful content. That ban was struck down as unconstitutionally overbroad; despite that, Congress passed a similar federal law forcing TikTok’s owner, ByteDance, to divest the company or face a national ban.

Like Murthy, lawmakers pushing these regulations cherry-pick the research, nebulously citing social media’s impact on young people, and dismissing both positive aspects of platforms and the dangerous impact these laws have on all users of social media, adults and minors alike. 

We agree that social media is not perfect, and can have negative impacts on some users, regardless of age. But if Congress is serious about protecting children online, it should enact policies that promote choice in the marketplace and digital literacy. Most importantly, we need comprehensive privacy laws that protect all internet users from predatory data gathering and sales that target us for advertising and abuse.

Aaron Mackey