
Inside Meta, Debate Over What’s Fair in Suppressing Comments in the Palestinian Territories

That’s the headline on my latest story, an October 21 exclusive with my colleagues Sam Schechner and Jeff Horwitz.

It begins (🎁 free link):

After Hamas stormed Israel and murdered civilians on Oct. 7, hateful comments from the region surged through Instagram. Meta Platforms managers cranked up automatic filters meant to slow the flood of violent and harassing content.

But still the comments kept appearing—especially from the Palestinian territories, according to a Meta manager. So Meta turned up its filters again, but only there.

In an internal forum for Muslim employees, objections poured in.

“What we’re saying and what we’re doing seem completely opposed at the moment,” one employee posted, according to documents viewed by The Wall Street Journal. Meta has publicly pledged to apply its policies equally around the world.

The social media giant has been wrestling with how best to enforce its content rules in the midst of the brutal and chaotic war. Meta relies heavily on automation to police Instagram and Facebook, but those tools can stumble: They have struggled to parse the Palestinian Arabic dialect, and in some cases they don’t have enough Hebrew-language data to work effectively.

In one recent glitch, Instagram’s automatic translations of users’ profiles started rendering the word “Palestinian” along with an emoji and an innocuous Arabic phrase as “Palestinian terrorists.”

And when Meta turns to human employees to fill the gaps, some teams have different views on how the rules should be applied, and to whom.

Click through to read the rest.
