[Book Guide] January's "Recommended Books" = 萩山 拓 (writer)

2 months 3 weeks ago
A selection of intriguing titles from the nonfiction genre (in order of publication; prices exclude tax). ◆ 丹羽典生, 『ガラパゴスを歩いた男─朝枝利男の太平洋探検記』 (The Man Who Walked the Galápagos: 朝枝利男's Record of Pacific Exploration), 教育評論社, published 1/8, 2,400 yen. In the storeroom of a museum the author visited lay a great many photographs of the Galápagos Islands taken by an unfamiliar figure named 朝枝利男, along with his diaries and watercolors. Drawing on these materials, the book traces the life of 朝枝利男, a "Japanese pioneer of Galápagos exploration" who nevertheless remains virtually unknown, and the Ga..
JCJ

[Focus] The Supreme Court and Giant Law Firms: A Structure of Collusion Laid Bare by the Nuclear Accident; Trust in the Judiciary Falls to the Ground. Online Lecture by 後藤秀典 = 橋詰雅博

2 months 3 weeks ago
The 2024 JCJ Prize winner 『東京電力の変節』 (旬報社) brought to light the deep ties binding the state, Supreme Court justices, Tokyo Electric Power, and giant law firms. That mutually dependent structure, the book argues, surfaced in the "anomalous" Supreme Court ruling of June 17 two years ago, which absolved the state of liability for the Fukushima nuclear accident. Through the nuclear accident trials and various other residents' lawsuits, voices of distrust in the judiciary are growing, and not a few legal experts call it a crisis for judicial independence. On November 30, the book's author, journalist 後藤秀典, spoke about the collusive structure in det..
JCJ

[Noto Peninsula Earthquake] Recovery and Reconstruction Lag Even a Year On, at Odds with Victims' Hopes = 須藤 春夫

2 months 3 weeks ago
It will soon be a year since the Noto Peninsula earthquake struck on New Year's Day this year. In surveys of affected residents that media outlets conducted six months after the quake, more than seventy percent expressed a strong wish to "return to Noto and keep living there." But in September, torrential rains struck the disaster area, compounding the obstacles to recovery. As of December 6, 2024, 40 people remained in primary evacuation shelters in Wajima and Suzu, and 24 in secondary shelters at inns and hotels in Kanazawa, still unable to return home. The prefecture and municipalities tout "creative reconstruction," but publicly funded demolition and repair of damaged hou..
JCJ

[B] "The Trump Real-Estate Dealer Should Inspect Gaza First" [Western Sahara Latest] 平田伊都子

2 months 3 weeks ago
As the inauguration of Trump as the 47th U.S. president drew near, I was astonished to see Democratic supporters convert to Trump one after another! Every one of them is a fine <democrat> and <humanitarian>; they called Trump a <fool> and tried at every turn to drive him from national politics. But the real-estate dealer has made his comeback and begun grading the land. Before you turn Gaza into a vacant lot, President-elect Trump, please go and see the Gaza battlefield for yourself. Meet the children of Gaza who are starving to death! Visit the newborns of Gaza who are freezing to death!!
日刊ベリタ

Meta’s New Content Policy Will Harm Vulnerable Users. If It Really Valued Free Speech, It Would Make These Changes

2 months 3 weeks ago

Earlier this week, when Meta announced changes to their content moderation processes, we were hopeful that some of those changes—which we will address in more detail in this post—would enable greater freedom of expression on the company’s platforms, something for which we have advocated for many years. While Meta’s initial announcement primarily addressed changes to its misinformation policies and included rolling back over-enforcement and automated tools that we have long criticized, we expressed hope that “Meta will also look closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ+ speech, political dissidence, and sex work.”

However, shortly after our initial statement was published, we became aware that rather than addressing those historically over-moderated subjects, Meta was taking the opposite tack and, as reported by the Independent, was making targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups.

It was our mistake to formulate our responses and expectations on what is essentially a marketing video for upcoming policy changes before any of those changes were reflected in their documentation. We prefer to focus on the actual impacts of online censorship felt by people, which tends to be further removed from the stated policies outlined in community guidelines and terms of service documents. Facebook has a clear and disturbing track record of silencing and further marginalizing already oppressed peoples, and then being less than forthright about their content moderation policy. These first changes to actually surface in Facebook's community standards document seem to be in the same vein.

Specifically, Meta’s hateful conduct policy now contains the following:

  • People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech. 

But the implementation of this policy shows that it is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging the legitimacy of LGBTQ+ rights. For example, 

  • While allegations of mental illness against people based on their protected characteristics remain a tier 2 violation, the revised policy now allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism [sic] and homosexuality.”
  • The revised policy now specifies that Meta allows speech advocating gender-based and sexual orientation-based exclusion from military, law enforcement, and teaching jobs, as well as from sports leagues and bathrooms.
  • The revised policy also removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics.

These changes reveal that Meta seems less interested in freedom of expression as a principle and more focused on appeasing the incoming U.S. administration, a concern we mentioned in our initial statement with respect to the announced move of the content policy team from California to Texas to address “appearances of bias.” Meta said it would be making some changes to reflect that these topics are “the subject of frequent political discourse and debate” and can be said “on TV or the floor of Congress.” But if that is truly Meta’s new standard, we are struck by how selectively it is being rolled out, and particularly allowing more anti-LGBTQ+ speech.

We continue to stand firmly against hateful anti-trans content remaining on Meta’s platforms, and strongly condemn any policy change directly aimed at enabling hate toward vulnerable communities—both in the U.S. and internationally.

Real and Sincere Reforms to Content Moderation Can Both Promote Freedom of Expression and Protect Marginalized Users

In its initial announcement, Meta also said it would change how policies are enforced to reduce mistakes, stop reliance on automated systems to flag every piece of content, and add staff to review appeals. We believe that, in theory, these are positive measures that should result in less censorship of expression for which Meta has long been criticized by the global digital rights community, as well as by artists, sex worker advocacy groups, LGBTQ+ advocates, Palestine advocates, and political groups, among others.

But we are aware that these problems, at a corporation with a history of biased and harmful moderation like Meta, need a careful, well-thought-out, and sincere fix that will not undermine broader freedom of expression goals.

For more than a decade, EFF has been critical of the impact that content moderation at scale—and automated content moderation in particular—has on various groups. If Meta is truly interested in promoting freedom of expression across its platforms, we renew our calls to prioritize the following much-needed improvements instead of allowing more hateful speech.

Meta Must Invest in Its Global User Base and Cover More Languages 

Meta has long failed to invest in cultural and linguistic competence in its moderation practices, often leading to the inaccurate removal of content and a greater reliance on (faulty) automation tools. This has been apparent to us for a long time. In the wake of the 2011 Arab uprisings, we documented our concerns with Facebook's reporting processes and their effect on activists in the Middle East and North Africa. More recently, the need for cultural competence across the industry was emphasized in the revised Santa Clara Principles.

Over the years, Meta’s global shortcomings became even more apparent as its platforms were used to promote hate and extremism in a number of locales. One key example is the platform’s failure to moderate anti-Rohingya sentiment in Myanmar—the direct result of having far too few Burmese-speaking moderators (in 2015, as extreme violence and violent sentiment toward the Rohingya was well underway, there were just two such moderators).

If Meta is indeed going to roll back the use of automation to flag and action most content and ensure that appeals systems work effectively, which will solve some of these problems, it must also invest globally in qualified content moderation personnel to make sure that content from countries outside of the United States and in languages other than English is fairly moderated. 

Reliance on Automation to Flag Extremist Content Allows for Flawed Moderation

We have long been critical of Meta's over-enforcement against terrorist and extremist speech, and specifically of its impact on human rights content. Part of the problem is Meta's over-reliance on automation to flag extremist content. A 2020 document reviewing moderation across the Middle East and North Africa claimed that the algorithms used to detect terrorist content in Arabic incorrectly flagged posts 77 percent of the time.

More recently, we have seen this with Meta’s automated moderation to remove the phrase “from the river to the sea.” As we argued in a submission to the Oversight Board—with which the Board also agreed—moderation decisions must be made on an individualized basis because the phrase has a significant historical usage that is not hateful or otherwise in violation of Meta’s community standards.

Another example of this problem that has overlapped with Meta’s shortcomings with respect to linguistic competence is in relation to the term “shaheed,” which translates most closely to “martyr” and is used by Arabic speakers and many non-Arabic-speaking Muslims elsewhere in the world to refer primarily (though not exclusively) to individuals who have died in the pursuit of ideological causes. As we argued in our joint submission with ECNL to the Meta Oversight Board, use of the term is context-dependent, but Meta has used automated moderation to indiscriminately remove instances of the word. In their policy advisory opinion, the Oversight Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”

Marginalized communities that experience persecution offline often face disproportionate censorship online. It is imperative that Meta recognize the responsibilities it has to its global user base in upholding free expression, particularly of communities that may otherwise face censorship in their home countries.

Sexually-Themed Content Remains Subject to Discriminatory Over-censorship

Our critique of Meta's removal of sexually themed content goes back more than a decade. The company's policies on adult sexual activity and nudity affect a wide range of people and communities, but most acutely impact LGBTQ+ individuals and sex workers. Typically aimed at keeping sites "family friendly" or "protecting the children," these policies are unevenly enforced, often classifying LGBTQ+ content as "adult" or "harmful" when similar heterosexual content isn't. These policies have often been written and enforced discriminatorily, at the expense of gender-fluid and nonbinary speakers; we joined the We the Nipple campaign aimed at remedying this discrimination.

In the midst of ongoing political divisions, issues like this have a serious impact on social media users. 

Most nude content is legal, and engaging with such material online gives individuals a safe and open framework to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With Meta intervening to become the arbiter of how people create and engage with nudity and sexuality—both offline and in the digital space—a crucial form of engagement for all kinds of users has been removed, and the voices of people with less power have regularly been shut down.

Over-removal of Abortion Content Stifles User Access to Essential Information 

The removal of abortion-related posts containing the word "kill" from Meta's platforms has failed to meet the criteria for restricting users' right to freedom of expression. Meta has regularly over-removed abortion-related content, hamstringing its users' ability to voice their political beliefs. The use of automated tools for content moderation leads to the biased removal of this language, as well as of essential information. In 2022, Vice reported that a Facebook post stating "abortion pills can be mailed" was flagged within seconds of being posted.

At a time when bills are being tabled across the U.S. to restrict the exchange of abortion-related information online, reproductive justice and safe access to abortion, like so many other aspects of managing our healthcare, is fundamentally tied to our digital lives. And with corporations deciding what content is hosted online, the impact of this removal is exacerbated. 

What was benign data online is effectively now potentially criminal evidence. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant, LGBTQ+ people and other traditionally marginalized communities, and the healthcare providers serving these communities. Meta must adhere to its responsibility to respect international human rights law, and ensure that any abortion-related content removal be both necessary and proportionate.

Meta’s symbolic move of its content team from California to Texas, a state aiming to make the distribution of abortion information illegal, also raises serious concerns that Meta will backslide on this issue—in line with Texas state law banning abortion—rather than make improvements.

Meta Must Do Better to Provide Users With Transparency 

EFF has been critical of Facebook’s lack of transparency for a long time. When it comes to content moderation, the company’s transparency reports lack many of the basics: How many human moderators are there, and how many cover each language? How are moderators trained? The company’s community standards enforcement report includes rough estimates of how many pieces of content in each category are removed, but does not tell us why or how those decisions are made.

Meta makes billions from its exploitation of our data, too often choosing its profits over our privacy—opting to collect as much as possible while denying users intuitive control over their data. In many ways this problem underlies the rest of the corporation’s harms: its core business model depends on collecting as much information about users as possible, then using that data to target ads, as well as to target competitors.

That’s why EFF, with others, launched the Santa Clara Principles on how corporations like Meta can best provide meaningful transparency and accountability around the increasingly aggressive moderation of user-generated content. And as platforms like Facebook, Instagram, and X occupy an ever bigger role in arbitrating our speech and controlling our data, there is increased urgency to ensure that their reach is not merely checked, but reduced.

Flawed Approach to Moderating Misinformation with Censorship 

Misinformation thrives on social media platforms, including Meta’s. As we said in our initial statement, and have written before, Meta and other platforms should use the variety of fact-checking and verification tools available to them, including both community notes and professional fact-checkers, and should have robust systems in place to review any flagging that results.

Meta and other platforms should also employ media-literacy tools, such as encouraging users to read articles before sharing them and providing resources to help users assess the reliability of information on the site. We have also called on Meta and others to stop privileging government officials by providing them with greater opportunities to lie than other users.

While we expressed some hope on Tuesday, the cynicism expressed by others seems warranted now. Over the years, EFF and many others have worked to push Meta to make improvements. We've had some success with its "Real Names" policy, for example, which disproportionately affected the LGBTQ community and political dissidents. We also fought for, and won improvements on, Meta's policy on allowing images of breastfeeding, rather than marking them as "sexual content." If Meta truly values freedom of expression, we urge it to redirect its focus to empowering historically marginalized speakers, rather than empowering only their detractors.

Jillian C. York