New Business Strategy Seminar for the Beyond 5G Era (24th Session): Announcement of "IP and Standardization Strategy in the AI Era: Industry Insiders on the Front Lines of Transformation"
Information and Communications Council, Telecommunications Business Policy Committee, Telecommunications Numbering Policy Committee (43rd Meeting): Meeting Announcement
Advisory Regarding Retail Prices of Mobile Phone Handsets
FY2025 "Emergency Services Day" and "Emergency Medical Care Week"
Holding of the 25th Japan-Korea Seminar for Internal Affairs Officials
Summary of Minister for Internal Affairs and Communications Murakami's Post-Cabinet-Meeting Press Conference
Advance Payment of Ordinary Local Allocation Tax (November Regular Distribution) for Damage from Typhoon No. 12 (2025)
Working Group on User Information (30th Meeting)
Family Income and Expenditure Survey Report (Households of Two or More Persons), July 2025
Information and Communications Council, Information and Communications Technology Subcommittee, New Generation Mobile Communications Systems Committee (36th Meeting)
Determination of the September Distribution Amount of FY2025 Special Local Allocation Tax for Earthquake Disaster Reconstruction
California Lawmakers: Support S.B. 524 to Rein in AI Written Police Reports
EFF urges California state lawmakers to pass S.B. 524, authored by Sen. Jesse Arreguín. This bill is an important first step in regaining control over police using generative AI to write their narrative police reports.
This bill does several important things: It mandates that police reports written by AI include disclaimers, on every page or within the body of the text, making clear that the report was written in part or in total by a computer. It also requires that any report written by AI retain its first draft, so that defense attorneys, judges, police supervisors, or any other auditing entity can more easily see which portions of the final report were written by AI and which were written by the officer. Further, the bill requires officers to sign and verify that they read the report and that its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.
These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, a popular AI police report writing tool, Axon's Draft One, would be out of compliance with this bill, which would require Axon to redesign the tool to make it more transparent.
Draft One takes audio from an officer's body-worn camera and uses AI to turn that dialogue into a narrative police report. Because independent researchers have been unable to test it, there are important questions about how the system handles things like sarcasm, out-of-context comments, or interactions with members of the public who speak languages other than English. Another major concern is Draft One's inability to keep track of which parts of a report were written by people and which parts were written by AI. By design, the product does not retain different iterations of the draft—making it easy for an officer to say, "I didn't lie in my police report, the AI wrote that part."
All lawmakers should pass regulations on AI-written police reports. This technology could be nearly everywhere, and soon. Axon is a top supplier of body-worn cameras in the United States, which means it has a massive ready-made customer base. Through product bundling, AI-written police reports could soon reach a vast share of police departments.
AI-written police reports are unproven in terms of both their accuracy and their overall effects on the criminal justice system. Vendors still have a long way to go to prove this technology can be transparent and auditable. While it would not solve all of the many problems of AI encroaching on the criminal justice system, S.B. 524 is a good first step toward reining in an unaccountable piece of technology.
We urge California lawmakers to pass S.B. 524.
[Photo Angle] 370 people attend an event commemorating the 80th anniversary of the end of the Sino-Japanese War, August 11, Nerima Ward, Tokyo; photo by Ryohei Ito
EFF Awards Spotlight ✨ Erie Meyer
In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!
All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!
GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35
If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.
We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. This time—Erie Meyer, winner of the EFF Award for Protecting Americans' Data:
Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.”
We're excited to celebrate Erie Meyer and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.
Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.
Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.
EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.
Questions? Email us at events@eff.org.
From Libraries to Schools: Why Organizations Should Install Privacy Badger
In an era of pervasive online surveillance, organizations have an important role to play in protecting their communities’ privacy. Millions of people browse the web on computers provided by their schools, libraries, and employers. By default, popular browsers on these computers leave people exposed to hidden trackers.
Organizations can enhance privacy and security on their devices by installing Privacy Badger, EFF’s free, open source browser extension that automatically blocks trackers. Privacy Badger is already used by millions to fight online surveillance and take back control of their data.
Why Should Organizations Install Privacy Badger on Managed Devices?

Protect People from Online Surveillance

Most websites contain hidden trackers that let advertisers, data brokers, and Big Tech companies monitor people's browsing activity. This surveillance has serious consequences: it fuels scams, government spying, predatory advertising, and surveillance pricing.
By installing Privacy Badger on managed devices, organizations can protect entire communities from these harms. Most people don’t realize the risks of browsing the web unprotected. Organizations can step in to make online privacy available to everyone, not just the people who know they need it.
Ad Blocking is a Cybersecurity Best Practice

Privacy Badger helps reduce cybersecurity threats by blocking ads that track you (unfortunately, that's most ads these days). Targeted ads aren't just a privacy nightmare. They can also be a vehicle for malware and phishing attacks. Cybercriminals have tricked legitimate ad networks into distributing malware, a tactic known as malvertising.
The risks are serious enough that the U.S. Cybersecurity and Infrastructure Security Agency (CISA) recommends that federal agencies deploy ad-blocking software. The NSA, CIA, and other intelligence agencies already follow this guidance. These agencies use advertising systems to surveil others, yet block ads for their own employees.
All organizations, not just spy agencies, should make ad blocking part of their security strategy.
A Tracker Blocker You Can Trust

Four million users already trust Privacy Badger, which has been recommended by The New York Times' Wirecutter, Consumer Reports, and The Washington Post.
Trust is crucial when choosing an ad-blocking or tracker-blocking extension because they require high levels of browser permissions. Unfortunately, not all extensions deserve that trust. Avast’s “privacy” extension was caught collecting and selling users’ browsing data to third parties—the very practice it claimed to prevent.
Privacy Badger is different. EFF released it over a decade ago, and the extension has been open source—meaning other developers and researchers can inspect its code—that entire time. Because it is built by a nonprofit with a 35-year history of fighting for user rights, organizations can trust that Privacy Badger works for its users, not for profit.
Which Organizations Should Deploy Privacy Badger?

All of them! Installing Privacy Badger on managed devices improves privacy and security across an organization. That said, Privacy Badger is most beneficial for two types of organizations: libraries and schools. Both can better serve their communities by safeguarding the computers they provide.
Libraries

The American Library Association (ALA) already recommends installing Privacy Badger on public computers to block third-party tracking. Librarians have a long history of defending privacy. The ALA's guidance is a natural extension of that legacy for the digital age. While librarians protect the privacy of books people check out, Privacy Badger protects the privacy of websites they visit on library computers.
Millions of Americans depend on libraries for internet access. That makes libraries uniquely positioned to promote equitable access to private browsing. With Privacy Badger, libraries can ensure that safe and private browsing is the default for anyone using their computers.
Libraries also play a key role in promoting safe internet use through their digital literacy trainings. By including Privacy Badger in these trainings, librarians can teach patrons about a simple, free tool that protects their privacy and security online.
Schools

Schools should protect their students from online surveillance by installing Privacy Badger on the computers they provide. Parents are rightfully worried about their children's privacy online, with a Pew survey showing 85% worry about advertisers using data about what kids do online to target ads. Deploying Privacy Badger is a concrete step schools can take to address these concerns.
By blocking online trackers, schools can protect students from manipulative ads and limit the personal data fueling social media algorithms. Privacy Badger can even block tracking in Ed Tech products that schools require students to use. Alarmingly, a Human Rights Watch analysis of Ed Tech products found that 89% shared children’s personal data with advertisers or other companies.
Instead of deploying invasive student monitoring tools, schools should keep students safe by keeping their data safe. Students deserve to learn without being tracked, profiled, and targeted online. Privacy Badger can help make that happen.
How Can Organizations Deploy Privacy Badger On Managed Devices?

System administrators can deploy and configure Privacy Badger on managed devices by setting up an enterprise policy. Chrome, Firefox, and Edge provide instructions for automatically installing extensions organization-wide. You'll be able to configure certain Privacy Badger settings for all devices. For example, you can specify websites where Privacy Badger is disabled or prevent Privacy Badger's welcome page from popping up on computers that get reset after every session.
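As an illustration, here is a minimal sketch of what a Chrome policy file could look like on Linux (dropped into /etc/opt/chrome/policies/managed/). The extension ID is Privacy Badger's Chrome Web Store ID, and the disabledSites and showIntroPage setting names are assumptions modeled on the settings described above; verify both against Privacy Badger's current deployment documentation before rolling this out.

```json
{
  "ExtensionInstallForcelist": [
    "pkehgijcmpdhfbdbbnkijodmdjhbjlgp;https://clients2.google.com/service/update2/crx"
  ],
  "3rdparty": {
    "extensions": {
      "pkehgijcmpdhfbdbbnkijodmdjhbjlgp": {
        "disabledSites": ["intranet.example.edu"],
        "showIntroPage": false
      }
    }
  }
}
```

The same pattern of force-install plus managed settings carries over to other browsers: on Windows, Chrome reads these values from the registry, and Firefox supports force-installing extensions through its own policies.json.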
We recommend educating users about the addition of Privacy Badger and what it does. Since some websites deeply embed tracking, privacy protections can occasionally break website functionality. For example, a video might not play or a comments section might not appear. If this happens, users should know that they can easily turn off Privacy Badger on any website. Just open the Privacy Badger popup and click “Disable for this site.”
Don't hesitate to reach out if you're interested in deploying Privacy Badger at scale. Our team is here to help you protect your community's privacy. And if you're already deploying Privacy Badger across your organization, we'd love to hear how it’s going!
Make Private Browsing the Default at Your Organization

Schools, libraries, and other organizations can make private browsing the norm by deploying Privacy Badger on devices they manage. If you work at an organization with managed devices, talk to your IT team about Privacy Badger. You can help strengthen the security and privacy of your entire organization while joining the fight against online surveillance.
Precariat Union Newsletter (9/3): Achieving a monthly wage increase of about 16,000 yen with Fresh Logistics
Verifying Trust in Digital ID Is Still Incomplete
In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the second in a short series that explains digital ID and the pending use case of age verification. Upcoming posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.
Digital identity encompasses various aspects of an individual's identity that are presented and verified either over the internet or in person. This could mean a digital credential issued by a certification body, or a mobile driver's license provisioned to someone's mobile wallet. These credentials can be presented in plain text on a device, as a scannable QR code, or by tapping your device against a Near Field Communication (NFC) reader. There are other ways to present credential information that are a little more privacy-preserving, but in practice those three methods are how digital ID is being used today.
Advocates of digital ID often use a framework they call the "Triangle of Trust." This is usually presented as a triangle of exchange between the holder of an ID—those who use a phone or wallet application to access a service; the issuer of an ID—normally a government entity, like the state Departments of Motor Vehicles in the U.S., or a banking system; and the verifier of an ID—the entity that wants to confirm your identity, such as law enforcement, a university, a government benefits office, a porn site, or an online retailer.
This triangle implies that the issuer and verifier—for example, the government that provides the ID and the website checking your age—never need to talk to one another. In theory, this avoids tracking and surveillance threats by preventing your ID, by design, from "phoning home" to the issuer every time you verify it with another party.
But it also makes a lot of questionable assumptions, such as:
1) the verifier will only ever ask for a limited amount of information.
2) the verifier won’t store information it collects.
3) the verifier is always trustworthy.
The third assumption is especially problematic. How do you trust that the verifier will protect your most personal information and not use, store, or sell it beyond what you have consented to? Any of the following could be verifiers:
- Law enforcement when doing a traffic stop and verifying your ID as valid.
- A government benefits office that requires ID verification to sign up for social security benefits.
- A porn site in a state or country which requires age verification or identity verification before allowing access.
- An online retailer selling products like alcohol or tobacco.
Looking at the triangle again, this isn't quite an equal exchange. A personal ID like a driver's license or government ID is among the most centralized and sensitive documents you have: you can't control how it is issued or create your own, and you have to go through your government to obtain one. This relationship will always be imbalanced. But we have to make sure digital ID does not exacerbate these imbalances.
The effort to answer the question of how to prevent verifier abuse is ongoing. But instead of first addressing the harms these systems cause, governments around the world are fast-tracking the technology, scrambling to solve what they see as a crisis of online harms by mandating age verification. And current implementations of the Triangle of Trust have already proven disastrous.
One key example of implementation speed outpacing proper protections is the Digital Credential API. Initially launched by Google and now supported by Apple, the API lets apps and websites request information from your digital ID at scale. It arrived on people's devices with no limits or checks on what information verifiers can seek—incentivizing verifiers to over-ask for ID information beyond the question of whether a holder is over a certain age, simply because they can.
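To make the over-asking problem concrete, below is a rough verifier-side sketch of a Digital Credential API request. The API is still a draft whose shape has changed between versions (earlier drafts used a providers field rather than requests), so the protocol value and claim names here are illustrative assumptions, not a stable interface.

```typescript
// Illustrative verifier-side Digital Credential API call (draft API).
// The request shape and the "openid4vp" protocol value are assumptions
// based on one draft of the spec and may differ in practice.
async function verifyAge(): Promise<void> {
  const credential = await navigator.credentials.get({
    digital: {
      requests: [
        {
          protocol: "openid4vp", // presentation protocol (assumed value)
          data: {
            // Hypothetical query: an age gate only needs an over-18
            // signal, but nothing in the API stops a verifier from
            // also demanding name, birthdate, and home address.
            claims: [
              "age_over_18",
              "given_name",
              "family_name",
              "birth_date",
              "resident_address",
            ],
          },
        },
      ],
    },
  } as any); // draft API: not yet in standard TypeScript DOM typings
  console.log("wallet response:", credential);
}
```

The point of the sketch is what is missing: no enforced scope. The browser passes the whole request to the wallet, and it is left to the user to notice and refuse the extra claims.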
The Digital Credential API also incentivizes a variety of websites to ask for ID information they don't need and did not commonly request before. Food delivery services, medical services, gaming sites, and literally anyone else interested in being a verifier could become one tomorrow with digital ID and the Digital Credential API. This is both an erosion of personal privacy and a pathway to further surveillance. There must be established limitations and scope, including:
- verifiers establishing who they are and what they plan to ask from holders. There should also be an established plan for transparency on verifiers and their data retention policies.
- ways to identify and report abusive verifiers, as well as real consequences, like revoking or blocking a verifier from requesting IDs in the future.
- unlinkable presentations that prevent verifier-issuer collusion and data sharing between the verifiers you attest to, so that your movements cannot be tracked, in person or online, every time you attest your age.
A further point of concern arises in cases of abuse or deception. A malicious verifier can send a request with no limiting mechanisms or checks, and a user who rejects the request could be blocked entirely from the website or application. There must be provisions that ensure people retain access to vital services that require age verification from visitors.
Governments' efforts to tackle verifiers abusing digital ID requests haven't come to fruition yet. For example, the EU Commission recently launched its age verification "mini app" ahead of the EU ID wallet planned for 2026. The mini app will not have a registry of verifiers, which EU regulators had promised and then withdrew. Without verifier accountability, the wallet cannot tell if a request is legitimate. As a result, verifiers and issuers can demand verification from the people who want to use online services, but those same people are unable to insist on verification and accountability from the other sides of the triangle.
While digital ID is pushed as the solution to the problem of uploading your ID to every site you access, the security and privacy of these systems vary by implementation. Where privacy is at stake, regulators must make room for negotiation: there should be more thoughtful and protective measures for holders as they interact with more and more potential verifiers over time. Otherwise, digital ID solutions will just exacerbate existing harms and inequalities rather than improving internet accessibility and information access for all.