“Guardrails” Won’t Protect Nashville Residents From AI-Enabled Camera Networks

1 month ago

Nashville’s Metropolitan Council is one vote away from passing an ordinance that’s being branded as “guardrails” against the privacy problems that come with giving the police a connected camera system like Axon’s Fusus. But Nashville locals are right to be skeptical of just how much protection from mass surveillance products they can expect.  

"I am against these guardrails," council member Ginny Welsch told the Tennessean recently. "I think they're kind of a farce. I don't think there can be any guardrail when we are giving up our privacy and putting in a surveillance system." 

Likewise, Electronic Frontier Alliance member Lucy Parsons Labs has inveighed against Fusus, and against the notion that these guardrails address legislators’ and residents’ concerns, in a letter to the Metropolitan Council.

While the ordinance doesn’t name the company specifically, it was introduced in response to privacy concerns over the city’s possible contract for Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. In particular, local opponents are concerned about data-sharing—a critical part of Fusus—that could impede the city’s ability to uphold its values against the criminalization of some residents, like undocumented immigrants and people seeking reproductive or gender-affirming care.

Fusus, which was acquired by the police surveillance giant Axon in 2024, facilitates two major functions for police:

  • With the click of a button—or the tap of an icon on a map—officers can get access to live camera footage from public and private cameras, including the police’s Axon body-worn cameras, that have been integrated into the Fusus network.
  • Data feeds from a variety of surveillance tools—like body-worn cameras, drones, gunshot detection, and the connected camera network—can be aggregated into a system that makes those streams quickly accessible and susceptible to further analysis by features marketed as “artificial intelligence.”

From 2022 through 2023, the Metropolitan Nashville Police Department (MNPD) had, unbeknownst to the public, already been using Fusus. When the contract came back under consideration, a public outcry and unanswered questions about the system led to its suspension, and the issue was deferred multiple times before the contract renewal was voted down late last year. Nashville council members determined that the Fusus system posed too great a threat to vulnerable groups that the council has sought to protect with city policies and resolutions, including pregnant residents, immigrants, and residents seeking gender-affirming care, among others. The state has criminalized some of the populations that the city of Nashville has passed ordinances to protect. 

Unfortunately, the fight against the sprawling surveillance of Fusus continues. The city council is now making its final consideration of the aforementioned ordinance, which some of its members say will protect city residents in the event that the mayor and other Fusus fans are able to get a contract signed after all.

These so-called guardrails include:

  • restricting the MNPD from accessing private cameras or installing public safety cameras in locations “where there is a reasonable expectation of privacy”; 
  • prohibiting using face recognition to identify individuals in the connected camera system’s footage; 
  • policies addressing authorized access to and use of the connected camera system, including how officers will be trained, and how they will be disciplined for any violations of the policy;
  • quarterly audits of access to the connected camera system; 
  • mandatory inclusion of a clause in procurement contracts allowing for immediate termination should violations of the ordinance be identified; 
  • mandatory reporting to the mayor and the council about any violations of the ordinance, the policies, or other abuse of access to the camera network within seven days of the discovery. 

Here’s the thing: even if these limited “guardrails” are in place, the only true protection from the improper use of the AI-enabled Fusus system is to not use it at all. 

We’ve seen that when law enforcement has access to cameras, they will use them, even if there are clear regulations prohibiting those uses: 

  • Black residents of a subsidized housing development became the primary surveillance targets for police officers with Fusus access in Toledo, Ohio. 

Firms such as Fusus and its parent company Axon are pushing AI-driven features and databases with interjurisdictional access. Surveillance technology is bending toward a future in which all of our data are captured: our movements by street cameras (like those that would be added to Fusus), our driving patterns by ALPRs, our living habits by apps, and our actions online by web trackers. That data is then combined, sold, and shared.

When Nashville first started its relationship with Fusus in 2022, the company featured only a few products, primarily focused on standardizing video feeds from different camera providers. 

Now, Fusus is aggressively leaning into artificial intelligence, claiming that its “AI on the Edge” feature is built into the initial capture phase and processes video as soon as it is taken. Even if the city bans use of face recognition for the connected camera system, Fusus boasts that its system can detect humans and objects, combine other characteristics to identify individuals, detect movements, and trigger notifications based on certain characteristics and behaviors. Marketing material claims that the system comes “pre-loaded with dozens of search and analysis variables and profiles that are ready for action,” including a "robust & growing AI library.” It’s unclear how these AI recognition options are generated, how they are vetted, if at all, or whether they can even be removed, as the ordinance would require.

A page from Fusus marketing materials, released through a public records request, featuring information on the artificial intelligence capabilities of its system

The proposed “guardrails” in Nashville are insufficient to address the danger posed by mass surveillance systems, and the city of Nashville shouldn’t assume it has protected its residents, tourists, and other visitors by passing them. Nashville residents and other advocacy groups have already raised concerns.

The only true way to protect Nashville’s residents against dragnet surveillance and overcriminalization is to block access to these invasive technologies altogether. Though this ordinance has passed its second reading, Nashville should not adopt Fusus or any other connected camera system, regardless of whether the ordinance is ultimately adopted. If council members care about protecting their constituents, they should hold the line against Fusus. 

Beryl Lipton

[Relay Commentary] 80 Years After the Battle of Okinawa: The Reality of “This Country” = 金城正洋 (JCJ Okinawa Coordinator)

1 month ago
Okinawa observes many of its annual events by the lunar calendar. This year, the lunar New Year fell on January 29. At fishing ports across the prefecture, it is customary for fishers (uminchu) to fly large catch banners from their boats, praying for safe voyages, abundant catches, and their families’ health. Around the lunar New Year, Taiwan, China, Hong Kong, and the rest of the Chinese-speaking world celebrate the Spring Festival; in Vietnam it is Tet; South Korea and Southeast Asia likewise ring in the new year. During the Spring Festival, often described as a great migration of peoples, Okinawa too overflows with overseas tourists. With the pale pink blossoms of the kanhizakura (also called the scarlet winter cherry) standing out against the blue sky, professional baseball spring training camps, a seasonal fixture of early spring in Okinawa, are in full swing. At every ballpark, fans from inside and outside the prefecture..
JCJ

[B] “NO to America!” [Latest News from Western Sahara]  平田伊都子

1 month ago
Voices around the world are crying “Oh NO!!” at the words U.S. President Trump fires off on whatever whim strikes him. But the “no”s go unheard, and day after day Trump gleefully enjoys his one-man show, with the White House as his studio. The world’s ordinary people, who bear the brunt of the damage, seem to have little recourse but to vent by poking at Trump’s Achilles’ heel: “Trump, who tries to bulldoze through the Ukraine war, the Gaza war, and the tariff wars alike with empty promises, in the end cannot say no to the Jewish financial conglomerates.”
日刊ベリタ

EFF to NSF: AI Action Plan Must Put People First

1 month ago

This past January, the new administration issued an executive order on artificial intelligence (AI) that replaced the now-rescinded Biden-era order and called for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and with removing “engineered social agendas” from the industry. The president’s new action plan is currently being developed, and the National Science Foundation (NSF) is accepting public comments on it.

EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no secret and untested algorithms should decide who keeps their job or who is denied safe haven in the United States. Second, Generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye on protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.

Government Automated Decision Making

US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies, but risks infringing the civil liberties of all people subject to these automated decisions.

These harms aren’t theoretical: we have already seen a move to adopt experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and to use the results to make decisions about their continued employment.

Automating important decisions about people is reckless and dangerous. At best, these new AI tools are ineffective nonsense machines that require more labor to correct their inaccuracies; at worst, they produce irrational and discriminatory outcomes obscured by the black-box nature of the technology.

Instead, the adoption of such tools must be done through a robust public notice-and-comment practice, as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil and identifies when the use of such AI tools is inappropriate or harmful.

Additionally, the AI action plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models, and ensure they uphold a more fair and scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.

Don’t Enable Powerful Gatekeepers

Spurred by the general anxiety about Generative AI, lawmakers have drafted sweeping regulations based on speculation, and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we warned against back in 2023.

For example, bills like NO FAKES and NO AI Fraud expand copyright laws to favor corporate giants over everyone else’s expression. NO FAKES even includes a scheme for a DMCA-like notice takedown process, long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements like watermarking that are riddled with practical points of failure.

Among these dubious solutions is the growing prominence of AI licensing schemes which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to give their bully.

This is the wrong approach. Easy solutions like expanding copyright hurt everyone, particularly the smaller artists, researchers, and businesses who cannot compete with the industry’s big gatekeepers. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies.

People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.

You can read our comments here.

Rory Mir

[B] Mysteries of Modern Italian History, Part 2: “The Ustica Tragedy” (Part 2) ~ Ciao! Italy Report (Noriko Sato)

1 month ago
Giorgio Santacroce, who was a prosecutor in Rome at the time of the incident, took charge of the investigation. In November 1980, Santacroce traveled to Washington to have the radar data analyzed by John McKindle, a former U.S. Navy fighter pilot and expert in aviation radar analysis. (Noriko Sato, resident in Italy)
日刊ベリタ

EFF Thanks Fastly for Donated Tools to Help Keep Our Website Secure

1 month ago

EFF’s most important platform for welcoming everyone to join us in our fight for a better digital future is our website, eff.org. We thank Fastly for their generous in-kind contribution of services helping keep EFF’s website online.

The eff.org domain was first registered in 1990, just three months after the organization was founded, and long before the web was an essential part of daily life. Our website and the fight for digital rights grew rapidly alongside each other. However, along with rising threats to our freedoms online, threats to our site have also grown.

It takes a village to keep eff.org online in 2025. Every day our staff work tirelessly to protect the site from everything from DDoS attacks to automated hacking attempts, and everything in between. As AI has taken off, so have crawlers and bots that scrape content to train LLMs, sometimes without respecting rate limits we’ve asked them to observe. Newly donated security add-ons from Fastly help us automate DDoS prevention and rate limiting, preventing our servers from getting overloaded when misbehaving visitors abuse our sites. Fastly also caches the content from our site around the globe, meaning that visitors from all over the world can access eff.org and our other sites quickly and easily.
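Rate limiting of the kind described above is commonly implemented with a token-bucket algorithm: each client gets a budget of "tokens" that refills at a steady rate, and requests beyond that budget are throttled. The sketch below is a generic, minimal illustration of the idea in Python; it is not Fastly's actual mechanism, and the class name and parameters are our own.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows roughly `rate` requests
    per second, with bursts up to `capacity` requests."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 15 back-to-back requests against a bucket that allows
# 5 requests/second with a burst capacity of 10: the first 10 pass,
# and the excess is throttled.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
```

In a CDN, a limiter like this runs per client (keyed by IP or session) at the edge, so abusive traffic is rejected before it ever reaches the origin servers.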

EFF is member-supported by people who share our vision for a better digital future. We thank Fastly for showing their support for our mission to ensure that technology supports freedom, justice, and innovation for all people of the world with an in-kind gift of their full suite of services.

Allison Morris