Facebook updates Instagram's nudity policy after Oversight Board recommendation

It's not necessarily a win for the #FreeTheNipple movement, but Instagram’s policy now clarifies that "health-related nudity" is allowed.
By Christianna Silva

A month after the fresh-faced Facebook Oversight Board published 17 recommendations for how the platform could improve its content moderation, Facebook has responded with a resounding “OK, but it depends.”

One of the more significant changes Facebook will adopt relates to Instagram’s nudity policy, clarifying that "health-related nudity" is allowed. The Oversight Board, an independent entity that rules on knotty content cases across all of Facebook’s platforms, recommended that Facebook update Instagram’s community guidelines on adult nudity to clarify that some nudity is allowed on the platform — when it is health-related, such as photos of breastfeeding, giving birth, breast cancer awareness, or gender confirmation surgery, or when it is an act of protest. Facebook agreed. It will take a while for the change to take effect across the platform, but Facebook says it'll provide progress updates. It's not necessarily a win for the #FreeTheNipple movement, but it's at least a step toward nuance.


Including the Instagram nudity policy change, Facebook is acting on 11 of the board’s recommendations, “assessing feasibility” on five of them, and is dismissing one of them. This comes after the board published decisions on its first cases, which were immediately implemented.



Some of the other areas where Facebook says it is “committed to action” appear to be largely commitments to transparency. Facebook said it would clarify its community standards to include how it treats COVID-19 misinformation that could cause immediate physical harm and how it handles humor, satire, and personal experiences. Facebook is also launching a transparency center to help users better understand the platform’s community standards.

A recommendation Facebook won’t be implementing is one in which the board intriguingly asked for less oversight regarding COVID-19 misinformation. The board recommended that Facebook should “adopt a range of less intrusive measures” when users post information about COVID-19 treatments that contradict the advice of health authorities.

“In consultation with global health authorities, we continue to believe our approach of removing COVID-19 misinformation that might lead to imminent harm is the correct one during a global pandemic,” Facebook said in a statement.

A few of the board’s recommendations had to do with the automated tools that make content moderation decisions. Facebook said it'd work to ensure its algorithms don’t automatically remove nudity posts that it does allow, by refining its systems and sampling more training data. The board also recommended that users be able to appeal decisions made by automated systems and ask that they be re-reviewed by an employee; that's still under consideration as Facebook assesses its feasibility. Finally, the board recommended that users be informed when automation is used to make decisions about their content, and Facebook said it would test "the impact of telling people more about how an enforcement action decision was made." Facebook is still weighing how feasible it is to disclose how many photos were removed automatically and how many of those decisions were reversed after humans checked the algorithms' work.

Facebook is also tackling some details of its Dangerous Individuals and Organizations Community Standard, the policy that captures everything from human trafficking to terrorist rhetoric. In one of the board's first cases, Facebook had removed a post by a U.S. user quoting Joseph Goebbels, the Reich minister of propaganda in Nazi Germany, for violating this policy.

The Oversight Board recommended that Facebook explain to the user which community standard it was enforcing when a post is removed and give examples of why it goes against that standard. Facebook agreed, adding that it will increase transparency around the standards by adding definitions of “praise,” “support” and “representation” over the next few months, since the Dangerous Individuals and Organizations Community Standard removes content that expresses "support or praise for groups, leaders, or individuals involved in these activities." Facebook is also “assessing the feasibility” of providing a public list of “dangerous” organizations and individuals that are classified under the Dangerous Individuals and Organizations Community Standard.

These responses give us interesting insight into how Facebook will interact with the Oversight Board, which is still getting its sea legs after its first report. Only the board's individual content decisions are binding, which is why Facebook had some wiggle room in its responses to these broader recommendations.

The 20-member board, which includes a Nobel Prize winner, academics, digital rights advocates, and a former prime minister, is also tasked with eventually ruling on whether Donald Trump’s ban from Facebook is permanent.


Christianna Silva
Senior Culture Reporter

Christianna Silva is a senior culture reporter covering social platforms and the creator economy, with a focus on the intersection of social media, politics, and the economic systems that govern us. Since joining Mashable in 2021, they have reported extensively on meme creators, content moderation, and the nature of online creation under capitalism.

Before joining Mashable, they worked as an editor at NPR and MTV News, as a reporter at Teen Vogue and VICE News, and as a stablehand at a mini-horse farm. You can follow them on Bluesky @christiannaj.bsky.social and Instagram @christianna_j.
