U.S. Border Surveillance Towers Have Always Been Broken

3 months 2 weeks ago

A new bombshell scoop from NBC News revealed an internal U.S. Border Patrol memo claiming that 30 percent of camera towers that compose the agency's "Remote Video Surveillance System" (RVSS) program are broken. According to the report, the memo describes "several technical problems" affecting approximately 150 towers.

Except, this isn't a bombshell. What should actually be shocking is that Congressional leaders are acting shocked, like those who recently sent a letter about the towers to Department of Homeland Security (DHS) Secretary Alejandro Mayorkas. These revelations simply reiterate what people who have been watching border technology have known for decades: Surveillance at the U.S.-Mexico border is a wasteful endeavor that is ill-equipped to respond to an ill-defined problem.

Yet, after years of bipartisan recognition that these programs were straight-up boondoggles, there seems to be a competition among political leaders to throw the most money at programs that continue to fail.

Official oversight reports about the failures, repeated breakages, and general ineffectiveness of these camera towers have been public since at least the mid-2000s. So why haven't border security agencies confronted the problem in the last 25 years? One reason is that these cameras are largely political theater; the technology dazzles publicly, then fizzles quietly. Meanwhile, communities that should be thriving at the border are treated like a laboratory for tech companies looking to cash in on often exaggerated—if not fabricated—homeland security threats.

The Acronym Game

EFF is mapping surveillance at the U.S.-Mexico border.

In fact, the history of camera towers at the border is an ugly cycle. First, Border Patrol introduces a surveillance program with a catchy name and big promises. Then a few years later, oversight bodies, including Congress, conclude it's an abject mess. But rather than abandon the program once and for all, border security officials come up with a new name, slap on a fresh coat of paint, and continue on. A few years later, history repeats.

In the early 2000s, there was the Integrated Surveillance Intelligence System (ISIS), with the installation of RVSS towers in places like Calexico, California, and Nogales, Arizona; it later became the America's Shield Initiative (ASI). After those failures, there was Project 28 (P-28), the first stage of the Secure Border Initiative (SBInet). When that program was canceled, there were various new programs like the Arizona Border Surveillance Technology Plan, which became the Southwest Border Technology Plan. Border Patrol introduced the Integrated Fixed Tower (IFT) program and the RVSS Update program, then the Autonomous Surveillance Tower (AST) program. And now we've got a whole slew of new acronyms, including the Integrated Surveillance Tower (IST) program and the Consolidated Towers and Surveillance Equipment (CTSE) program.

Feeling overwhelmed by acronyms? Welcome to the shell game of border surveillance. Here's what happens whenever oversight bodies take a closer look.

ISIS and ASI

An RVSS from the early 2000s in Calexico, California.

Let's start with the Integrated Surveillance Intelligence System (ISIS), a program composed of towers, sensors, and databases, originally launched in 1997 by the Immigration and Naturalization Service (INS). A few years later, INS was reorganized into the newly formed U.S. Department of Homeland Security (DHS), and ISIS became part of the newly created Customs & Border Protection (CBP).

It was only a matter of years before the DHS Inspector General concluded that ISIS was a flop: "ISIS remote surveillance technology yielded few apprehensions as a percentage of detection, resulted in needless investigations of legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts."

During Senate hearings, Sen. Judd Gregg (R-NH) complained about a "total breakdown in the camera structures" and said that the U.S. government "bought cameras that didn't work."

Around 2004, ISIS was folded into the new America's Shield Initiative (ASI), which officials claimed would fix those problems. CBP Commissioner Robert Bonner even promoted ASI as a "critical part of CBP’s strategy to build smarter borders." Yet less than a year later, Bonner stepped down, and the Government Accountability Office (GAO) found that ASI had numerous unresolved issues necessitating a total reevaluation. CBP disputed none of the findings and explained that it was dismantling ASI in order to move on to something new that would solve everything: the Secure Border Initiative (SBI).

Reflecting on the ISIS/ASI programs in 2008, Rep. Mike Rogers (R-MI) said, "What we found was a camera and sensor system that was plagued by mismanagement, operational problems, and financial waste. At that time, we put the Department on notice that mistakes of the past should not be repeated in SBInet." 

You can guess what happened next. 

P-28 and SBInet

The subsequent iteration was called Project 28, which then evolved into the Secure Border Initiative's SBInet, starting in the Arizona desert. 

In 2010, the DHS Chief Information Officer summarized its comprehensive review: "'Project 28,' the initial prototype for the SBInet system, did not perform as planned. Project 28 was not scalable to meet the mission requirements for a national comment [sic] and control system, and experienced significant technical difficulties." 

A DHS graphic illustrating the SBInet concept

Meanwhile, bipartisan consensus had emerged about the failure of the program, due to the technical problems as well as contracting irregularities and cost overruns.

As Rep. Christopher Carney (D-PA) said in his prepared statement during Congressional hearings:

P–28 and the larger SBInet program are supposed to be a model of how the Federal Government is leveraging technology to secure our borders, but Project 28, in my mind, has achieved a dubious distinction as a trifecta of bad Government contracting: Poor contract management; poor contractor performance; and a poor final product.

Rep. Rogers' remarks were even more cutting: "You know the history of ISIS and what a disaster that was, and we had hoped to take the lessons from that and do better on this and, apparently, we haven’t done much better."

Perhaps most damning of all was yet another GAO report that found, "SBInet defects have been found, with the number of new defects identified generally increasing faster than the number being fixed—a trend that is not indicative of a system that is maturing." 

In January 2011, DHS Secretary Janet Napolitano canceled the $3-billion program.

IFTs, RVSSs, and ASTs

Following the termination of SBInet, the Christian Science Monitor ran the naive headline, "US cancels 'virtual fence' along Mexican border. What's Plan B?" Three years later, the newspaper answered its own question with another question, "'Virtual' border fence idea revived. Another 'billion dollar boondoggle'?"

Boeing was the main contractor blamed for SBInet's failure, but Border Patrol ultimately awarded one of the biggest new contracts to Elbit Systems, which had been one of Boeing's subcontractors on SBInet. Elbit began installing IFTs (again, that stands for "Integrated Fixed Towers") in many of the exact same places slated for SBInet. In some cases, the equipment was simply swapped out on an existing SBInet tower.

Meanwhile, another contractor, General Dynamics Information Technology, began installing new RVSS towers and upgrading old ones as part of the RVSS-U program. Border Patrol also started installing hundreds of "Autonomous Surveillance Towers" (ASTs) by yet another vendor, Anduril Industries, embracing the new buzz of artificial intelligence.

An Autonomous Surveillance Tower and an RVSS tower along the Rio Grande.

In 2017, the GAO complained the Border Patrol's poor data quality made the agency "limited in its ability to determine the mission benefits of its surveillance technologies." In one case, Border Patrol stations in the Rio Grande Valley claimed IFTs assisted in 500 cases in just six months. The problem with that assertion was there are no IFTs in Texas or, in fact, anywhere outside Arizona.

A few years later, the DHS Inspector General issued yet another report indicating not much had improved:

CBP faced additional challenges that reduced the effectiveness of its existing technology. Border Patrol officials stated they had inadequate personnel to fully leverage surveillance technology or maintain current information technology systems and infrastructure on site. Further, we identified security vulnerabilities on some CBP servers and workstations not in compliance due to disagreement about the timeline for implementing DHS configuration management requirements.

CBP is not well-equipped to assess its technology effectiveness to respond to these deficiencies. CBP has been aware of this challenge since at least 2017 but lacks a standard process and accurate data to overcome it.

Overall, these deficiencies have limited CBP’s ability to detect and prevent the illegal entry of noncitizens who may pose threats to national security.

Around that same time, the RAND Corporation published a study funded by DHS that found "strong evidence" the IFT program was having no impact on apprehension levels at the border, and only "weak" and "inconclusive" evidence that the RVSS towers were having any effect on apprehensions.

And yet, border authorities and their supporters in Congress continue to promote unproven, AI-driven technologies as the latest remedy for years of failures, including the ones described in the memo obtained by NBC News. These systems involve cameras controlled by algorithms that automatically identify and track objects or people of interest. But in an age when algorithmic errors and bias are being identified nearly every day in every sector, including law enforcement, it is unclear how this technology has earned the government's trust.

History Keeps Repeating

That brings us to today, with reportedly 150 or more towers out of service. So why does Washington keep supporting surveillance at the border? Why do lawmakers propose record-level funding for a system that seems irreparable? Why have they abandoned their duty to scrutinize federal programs?

Well, one reason may be that treating problems at the border as humanitarian crises or pursuing foreign policy or immigration reform measures isn't as politically useful as promoting a phantom "invasion" that requires a military-style response. Another reason may be that tech companies and defense contractors wield immense amounts of influence and stand to make millions, if not billions, profiting off border surveillance. The price is paid by taxpayers, but also in the civil liberties of border communities and the human rights of asylum seekers and migrants.

But perhaps the biggest reason this history keeps repeating itself is that no one is ever really held accountable for wasting potentially billions of dollars on high-tech snake oil.

Dave Maass

EFF to Third Circuit: TikTok Has Section 230 Immunity for Video Recommendations

UPDATE: On October 23, 2024, the Third Circuit denied TikTok's petition for rehearing en banc.

EFF legal intern Nick Delehanty was the principal author of this post.

EFF filed an amicus brief in the U.S. Court of Appeals for the Third Circuit in support of TikTok’s request that the full court reconsider the case Anderson v. TikTok after a three-judge panel ruled that Section 230 immunity doesn’t apply to TikTok’s recommendations of users’ videos. We argued that the panel was incorrect on the law, and that this case has wide-ranging implications for the internet as we know it today. EFF was joined on the brief by the Center for Democracy & Technology (CDT), the Foundation for Individual Rights and Expression (FIRE), Public Knowledge, the Reason Foundation, and the Wikimedia Foundation.

At issue is the panel’s misapplication of First Amendment precedent. The First Amendment protects the editorial decisions of publishers about whether and how to display content, such as the videos TikTok displays to users through its recommendation algorithm.

Additionally, because common law holds publishers liable for other people’s content that they publish (for example, defamatory letters to the editor in print newspapers), and First Amendment protection in that context is limited, Congress passed Section 230 to protect online platforms from liability for harmful user-generated content.

Section 230 has been pivotal for the growth and diversity of the internet—without it, internet intermediaries would potentially be liable for every piece of content posted by users, making them less likely to offer open platforms for third-party speech.

In this case, the Third Circuit panel erroneously held that since TikTok enjoys protection for editorial choices under the First Amendment, TikTok’s recommendations of user videos amount to TikTok’s first-party speech, making it ineligible for Section 230 immunity. In our brief, we argued that First Amendment protection for editorial choices and Section 230 protection are not mutually exclusive.

We also argued that the panel’s ruling does not align with what every other circuit has found: that Section 230 also immunizes the editorial decisions of internet intermediaries. We made four main points in support of this argument:

  • First, the panel ignored the text of Section 230 in that editorial choices are included in the commonly understood definition of “publisher” in the statute.
  • Second, the panel created a loophole in Section 230 by allowing plaintiffs who were harmed by user-generated content to bypass Section 230 by focusing on an online platform’s editorial decisions about how that content was displayed.
  • Third, it’s crucial that Section 230 protects editorial decisions notwithstanding additional First Amendment protection because Section 230 immunity is not only a defense against liability, it’s also a way to end a lawsuit early. Online platforms might ultimately win lawsuits on First Amendment grounds, but the time and expense of protracted litigation would make them less interested in hosting user-generated content. Section 230’s immunity from suit (as well as immunity from liability) advances Congress’ goal of encouraging speech at scale on the internet.
  • Fourth, TikTok’s recommendations specifically are part of a publisher’s “traditional editorial functions” because recommendations reflect choices around the display of third-party content and so are protected by Section 230.

We also argued that allowing the panel’s decision to stand would harm not only internet intermediaries, but all internet users. If internet intermediaries were liable for recommending or otherwise deciding how to display third-party content posted to their platforms, they would end useful content curation and engage in heavy-handed censorship to remove anything that might be legally problematic from their platforms. These responses to a weakened Section 230 would greatly limit users’ speech on the internet.

The full Third Circuit should recognize the error of the panel’s decision and reverse to preserve free expression online.

Sophia Cope

A Flourishing Internet Depends on Competition

Antitrust law has long recognized that monopolies stifle innovation and gouge consumers on price. When it comes to Big Tech, harm to innovation—in the form of “kill zones,” where major corporations buy up new entrants to a market before they can compete with them—has been easy to find. Consumer harms have been harder to quantify, since many of the services the Big Tech companies offer are “free.” This is why we must move beyond price as the major determinant of consumer harm. And once that’s done, it’s easier to see the even greater benefits competition brings to the broader internet ecosystem.

In the decades since the internet entered our lives, it has changed from a wholly new and untested environment to one where a few major players dominate everyone's experience. Policymakers have been slow to adapt and have equated what's good for the whole internet with what is good for those companies. Instead of a balanced ecosystem, we have a monoculture. We need to eliminate the buildup of power around the giants and instead cultivate fertile soil for new growth.

Content Moderation 

In content moderation, for example, it’s basically rote for experts to point out that moderation is impossible at scale. Facebook reports over three billion active users and is available in over 100 languages. However, Facebook is an American company that primarily does its business in English. Communication, in every culture, is heavily dependent on context. Even if Facebook were hiring experts in every language it operates in (which it manifestly is not), the company itself runs on American values. Being able to choose a social media service rooted in your own culture and language is important. It’s not that people have to choose that service, but it’s important that they have the option.

This sometimes happens in smaller fora. For example, the knitting website Ravelry, a central hub for patterns and discussions about yarn, banned all discussions about then-President Donald Trump in 2019, as it was getting toxic. A number of disgruntled users banded together to make their disallowed content available in other places. 

In a competitive landscape, instead of demanding that Facebook, Twitter, or YouTube adopt the exact content rules you want, you could pick a service with the rules you want. If you want everything protected by the First Amendment, you could find it. If you want an environment with clear rules, consistently enforced, you could find that too. Especially since smaller platforms could actually enforce their rules, unlike the current behemoths.

Product Quality 

The same thing applies to product quality and the “enshittification” of platforms. Even if all of Facebook’s users spoke the same language, that’s no guarantee that they share the same values, needs, or wants. But Facebook is an American company, and it conducts its business largely in English and according to American cultural norms. As it is, Facebook’s feeds are designed to maximize user engagement and time on the service. Some people may like the recommendation algorithm, but others may want the traditional chronological feed. There’s no incentive for Facebook to offer the choice, because it is not concerned with losing users to a competitor that does; it’s concerned with serving as many ads to as many people as possible. In general, Facebook lacks user controls that would allow people to customize their experience on the site: the ability to reorganize your feed chronologically, to eliminate posts from anyone you don’t know, and so on. Those who like the current, ad-focused algorithm are served; no one else can get a product they would like.

Another obvious example is how much the experience of googling something has deteriorated. It’s almost a cliché to complain about it now, but when it started, Google was revolutionary in its ability to a) find exactly what you were searching for and b) allow natural-language searching (that is, not requiring you to use Boolean queries to get the desired result). Google’s secret sauce was, for a long time, the ability to find the right result for a totally unique search query. If you could remember some specific string of words in the thing you were looking for, Google could find it. However, in the endless hunt for “growth,” Google moved away from quality search results and toward quantity. It also clogged the first page of results with ads and sponsored links.

Morals, Privacy, and Security 

There are many individuals and small businesses that would like to avoid using Big Tech services, whether because the services are poor or because of ethical and moral concerns. But the bigger these companies are, the harder they are to avoid. For example, even if someone decides not to buy products from Amazon.com because they don’t agree with how it treats its workers, they may not be able to avoid patronizing Amazon Web Services (AWS), which funds the commerce side of the business. Netflix, The Guardian, Twitter, and Nordstrom all pay for Amazon’s services. The Mississippi Department of Employment Security moved its data management to Amazon in 2021. Trying to avoid Amazon entirely is functionally impossible. This means there is no way for people to “vote with their feet” by withholding their business from companies they disagree with.

Security and privacy are also at risk without competition. For one thing, it’s easier for a malicious actor or oppressive state to get what they want when it’s all in the hands of a single company—a single point of failure. When a single company controls the tools everyone relies on, an outage cripples the globe. This digital monoculture was on display during this year’s CrowdStrike outage, when one badly-thought-out update crashed networks across the world and across industries. The personal danger of digital monoculture shows itself when Facebook messages are used in a criminal investigation against a mother and daughter discussing abortion, and in “geofence warrants” that demand Google turn over information about every device within a certain distance of a crime. For another thing, when everyone can share expression in only a few places, it is easier for regimes to target certain speech and for gatekeepers to maintain control over creativity.

Another example of the relationship between privacy and competition is Google’s so-called “Privacy Sandbox.” Google marketed it as removing the “third-party cookies” that track you across the internet. However, the change actually just moved that data into the sole control of Google, helping cement its ad monopoly. Instead of eliminating tracking, the Privacy Sandbox does the tracking within the browser directly, allowing Google to charge advertisers and websites for access to the insights gleaned from your browsing history, rather than letting those companies gather it themselves. It’s not more privacy; it’s just concentrated control of data.

You can see the same thing at play with Apple’s App Store in the saga of Beeper Mini, an app that allowed secure communication through iMessage between Apple and non-Apple phones. In doing so, it eliminated the dreaded “green bubbles” that indicate messages are not encrypted (i.e., not between two iPhones). While Apple’s design choice was, in theory, meant to flag that your conversation wasn’t secure, it ended up motivating people to get iPhones just to avoid the stigma. Beeper Mini made messages more secure and removed the need to buy a whole new phone to get rid of the green bubble. So Apple moved to break Beeper Mini, effectively choosing monopoly over security. If Apple had moved to secure non-iPhone messages on its own, that would be one thing. But it didn’t; it just prevented users from securing them on their own.

Obviously, competition isn’t a panacea. But, like privacy, prioritizing it means less emergency firefighting and more fire prevention. Think of it as a controlled burn—removing the dross that smothers new growth and allows fires to rage larger than ever before.

Katharine Trendacosta

California Attorney General Issues New Guidance on Military Equipment to Law Enforcement

California law enforcement should take note: the state’s Attorney General has issued a new bulletin advising them on how to comply with AB 481—a state law that regulates how law enforcement agencies can use, purchase, and disclose information about military equipment at their disposal. This important guidance comes in the wake of an exposé showing that despite awareness of AB 481, the San Francisco Police Department (SFPD) flagrantly disregarded the law. EFF applauds the Attorney General’s office for reminding police and sheriff’s departments what the law says and what their obligations are, and urges the state’s top law enforcement officer to monitor agencies’ compliance with the law.

The bulletin emphasizes that law enforcement agencies must seek permission from governing bodies like city councils or boards of supervisors before buying any military equipment, or even applying for grants or soliciting donations to procure that equipment. The bulletin also reminds all California law enforcement agencies and state agencies with law enforcement divisions of their transparency obligations: they must post on their website a military equipment use policy that describes, among other details, the capabilities, purposes and authorized uses, and financial impacts of the equipment, as well as oversight and enforcement mechanisms for violations of the policy. Law enforcement agencies must also publish an annual military equipment report that provides information on how the equipment was used the previous year and the associated costs.

Agencies must cease use of any military equipment, including drones, if they have not sought the proper permission to use them. This is particularly important in San Francisco, where the SFPD has been caught, via public records, purchasing drones without seeking the proper authorization first, over the warnings of the department’s own policy officials.

In a climate where few cities and states have laws governing what technology and equipment police departments can use, Californians are fortunate to have regulations like AB 481 requiring transparency, oversight, and democratic control by elected officials of military equipment. But those regulations are far less effective if there is no accountability mechanism to ensure that police and sheriff’s departments follow them.


The SFPD and all other California law enforcement agencies must re-familiarize themselves with the rules. Police and sheriff’s departments must obtain permission and justify purchases before they buy military equipment, have use policies approved by their local governing body, and provide yearly reports about what they have and how much it costs.

Matthew Guariglia

Prosecutors in Washington State Warn Police: Don’t Use Gen AI to Write Reports

The King County Prosecuting Attorney’s Office, which handles all prosecutions in the Seattle area, has instructed police in no uncertain terms: do not use AI to write police reports...for now. This is a good development. We hope prosecutors across the country will exercise similar caution as companies continue to peddle generative artificial intelligence (genAI) tools for writing police reports, technology that could harm people who come into contact with the criminal justice system.

Chief Deputy Prosecutor Daniel J. Clark said in a memo about AI-based tools that draft narrative police reports from body camera audio that the technology, as it exists, is “one we are not ready to accept.”

The memo continues, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” We would add that, while EFF embraces advances in technology, we doubt genAI in the near future will be able to help police write reliable reports.

We agree with Chief Deputy Clark that: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

This is a well-reasoned and cautious approach. Some police want to cut the time they spend writing reports, and Axon’s new product DraftOne claims to do so by exporting the labor to machines. But the public, and other local agencies, should be skeptical of this tech. After all, these documents are often essential for prosecutors to build their case, for district attorneys to recommend charges, and for defenders to cross examine arresting officers.

To read more on generative AI and police reports, click here

Matthew Guariglia

Preemption Playbook: Big Tech’s Blueprint Comes Straight from Big Tobacco

Big Tech is borrowing a page from Big Tobacco's playbook to wage war on your privacy, according to Jake Snow of the ACLU of Northern California. We agree.  

In the 1990s, the tobacco industry attempted to use federal law to override a broad swath of existing state laws and prevent states from taking future action in those areas. For Big Tobacco, it was the “Accommodation Program,” a national campaign ultimately aimed at overriding state indoor smoking laws with weaker federal law. Big Tech is now attempting the same with federal privacy bills, like the American Privacy Rights Act (APRA), that would preempt many state privacy laws.

In “Big Tech is Trying to Burn Privacy to the Ground–And They’re Using Big Tobacco’s Strategy to Do It,” Snow outlines a three-step process that both industries have used to weaken state laws. Faced with a public relations crisis, the industries:

  1. Muddy the waters by introducing various weak bills in different states.
  2. Complain that those state bills are too confusing to comply with.
  3. Ask for “preemption” of the grassroots efforts.

“Preemption” is a legal doctrine that allows a higher level of government to supersede the power of a lower level of government (for example, a federal law can preempt a state law, and a state law can preempt a city or county ordinance).  

EFF has a clear position on this: we oppose federal privacy laws that preempt current and future state privacy protections, especially by a lower federal standard.  

Congress should set a nationwide baseline for privacy, but should not take away states' ability to react to current and unforeseen future problems. Earlier this year, EFF joined the ACLU and dozens of digital and human rights organizations in opposing APRA’s preemption sections. The letter points out that "the soundest approach to avoid the harms from preemption is to set the federal standard as a national baseline for privacy protections — and not a ceiling.” EFF led a similar coalition effort in 2018.

Companies that collect and use our data—and have worked to kill strong state privacy bills time and again—want Congress to believe a "patchwork" of state laws is unworkable for data privacy. But many existing federal laws concerning privacy, civil rights, and more operate as regulatory floors and do not prevent states from enacting and enforcing their own stronger statutes. Complaints about this “patchwork” have long been part of the strategy of both Big Tech and Big Tobacco.

States have long been the “laboratories of democracy” and have led the way in the development of innovative privacy legislation. Because of this, federal laws should establish a floor and not a ceiling, particularly as new challenges rapidly emerge. Preemption would leave consumers with inadequate protections, and make them worse off than they would be in the absence of federal legislation.  

Congress never preempted states' authority to enact anti-smoking laws, despite Big Tobacco’s strenuous efforts. So there is hope that Big Tech won’t be able to preempt state privacy law, either. EFF will continue advocating against preemption to ensure that states can protect their citizens effectively. 

Read Jake Snow’s article here.

Rindala Alajaji

Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

Some people just don’t know how to take a hint. For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country—the codes that protect product, building, and environmental safety—and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York to Missouri to the District of Columbia, judges understand that this is an absurd and undemocratic proposition.

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes—like the National Electrical Code—can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that, whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme to the ruling is the public interest in accessing law: 

incorporation by reference creates serious notice and accountability problems when the law is only accessible to those who can afford to pay for it. … And there is significant evidence of the practical value of providing unfettered access to technical standards that have been incorporated into law. For example, journalists have explained that this access is essential to inform their news coverage; union members have explained that this access helps them advocate and negotiate for safe working conditions; and the NAACP has explained that this access helps citizens assert their legal rights and advocate for legal reforms.

We’ve seen similar rulings around the country, from California to New York to Missouri. Combined with two appellate rulings, these amount to a clear judicial consensus. And it turns out the sky has not fallen; SDOs continue to profit from their work, thanks in part to the volunteer labor of the experts who actually draft the standards and don’t do it for the royalties.  You would think the SDOs would learn their lesson, and turn their focus back to developing standards, not lawsuits.

Instead, SDOs are asking Congress to rewrite the Constitution and affirm that SDOs retain copyright in their standards no matter what a federal regulator does, as long as they make them available online. We know what that means because the SDOs have already created “reading rooms” for some of their standards, and they show us that the SDOs’ idea of “making available” is “making available as if it were 1999.” The texts are not searchable; cannot be printed, downloaded, highlighted, or bookmarked for later viewing; and cannot be magnified without becoming blurry. Cross-referencing and comparison are virtually impossible. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text. As if that weren’t bad enough, these reading rooms are inaccessible to print-disabled people altogether.

It’s a bad bargain that would trade our fundamental due process rights in exchange for a pinky promise of highly restricted access to the law. But if Congress takes that step, it’s a comfort to know that we can take the fight back to the courts and trust that judges, if not legislators, understand why laws are facts, not property, and should be free for all to access, read, and share. 

Corynne McSherry

EFF and IFPTE Local 20 Attain Labor Contract

3 months 2 weeks ago
First-Ever, Three-Year Pact Protects Workers’ Pay, Benefits, Working Conditions, and More

SAN FRANCISCO—Employees and management at the Electronic Frontier Foundation have achieved a first-ever labor contract, they jointly announced today. EFF employees have joined the Engineers and Scientists of California Local 20, IFPTE.

The EFF bargaining unit includes more than 60 non-management employees in teams across the organization’s program and administrative staff. The contract covers the usual scope of subjects including compensation; health insurance and other benefits; time off; working conditions; nondiscrimination, accommodation, and diversity; hiring; union rights; and more. 

"EFF is its people. From the moment that our staff decided to organize, we were supportive and approached these negotiations with a commitment to enshrining the best of our practices and adopting improvements through open and honest discussions,” EFF Executive Director Cindy Cohn said. “We are delighted that we were able to reach a contract that will ensure our team receives the wages, benefits, and protections they deserve as they continue to advance our mission of ensuring that technology supports freedom, justice and innovation for all people of the world.” 

“We’re pleased to have partnered with EFF management in crafting a contract that helps our colleagues thrive both at work and outside of work,” said Shirin Mori, a member of the EFF Workers Union negotiating team. “This contract is a testament to creative solutions to improve working conditions and benefits across the organization, while also safeguarding the things employees love about working at EFF. We deeply appreciate the positive working relationship with management in establishing a strong contract.” 

The three-year contract was ratified unanimously by EFF’s board of directors Sept. 18, and by 96 percent of the bargaining unit on Sept. 25. It is effective Oct. 1, 2024 through Sept. 30, 2027. 

EFF is the largest and oldest nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.  

The Engineers and Scientists of California Local 20, International Federation of Professional and Technical Engineers, is a democratic labor union representing more than 8,000 engineers, scientists, licensed health professionals, and attorneys at PG&E, Kaiser Permanente, the U.S. Environmental Protection Agency, Legal Aid at Work, numerous clinics and hospitals, and other employers throughout Northern California.  

For the contract: https://ifpte20.org/wp-content/uploads/2024/10/Electronic-Frontier-Foundation-2024-2027.pdf 

For more on IFPTE Local 20: https://ifpte20.org/ 

Josh Richman