Twitter tests warning prompt on replies with offensive, harmful language
Twitter is experimenting with a new feature that prompts users to revise replies containing potentially offensive or hateful language before they are published. It is not the "edit tweet" option users have long been asking for, but a warning message notifying users that a reply contains questionable content they may want to reconsider.
“When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” tweeted Twitter Support.