Facebook’s crisis management algorithm runs on outrage

San Francisco: Last year, a Facebook user in Sri Lanka posted an angry message to the social network. “Kill all the Muslim babies without sparing even an infant,” the person wrote in Sinhala, the language of the country’s Buddhist majority. “F—ing dogs!” The post went up early in 2018, in white text and on one of the playful pink and purple backgrounds that Facebook began offering in 2016 to encourage its users to share more with one another. The sentiment about killing Muslims got 30 likes before someone else found it troubling enough to click the “give feedback” button instead. The whistleblower selected the option for “hate speech”, one of nine possible categories for objectionable content on Facebook.

For years, non-profits in Sri Lanka have warned that Facebook posts were playing a role in escalating ethnic tensions between the Sinhalese Buddhist majority and the country’s Muslim minority, but the company ignored them. It took six days for Facebook to respond to the hate speech report. “Thanks for the feedback,” the company told the whistleblower, who posted the response to Twitter. The content, Facebook continued, “doesn’t go against one of our specific Community Standards.”
