SOCIAL
Twitter Tests New Self-Reporting Option for Potentially Sensitive Images and Videos in Tweets
Twitter’s testing a new option that enables users to add their own sensitive content warning screens to visuals attached to tweets, providing another way to limit unwanted exposure to graphic content.
People use Twitter to discuss what’s happening in the world, which sometimes means sharing unsettling or sensitive content. We’re testing an option for some of you to add one-time warnings to photos and videos you Tweet out, to help those who might want the warning. pic.twitter.com/LCUA5QCoOV
— Twitter Safety (@TwitterSafety) December 7, 2021
As shown in the example above, when you attach a photo or video to a tweet, you'll be able to select a flagging option from the three-dots function menu. From there, you can indicate whether the visual includes nudity, violence, or otherwise sensitive content, helping other users avoid unwanted exposure.
The warning screen will then include a note saying that the tweet author has flagged the content, with the visual hidden behind a blurred pane.
Twitter already has a sensitive content screening system, which relies on users marking their own media as sensitive in their account settings, supplemented by automated detection of violations, though neither approach is perfect. This extra, per-tweet measure provides more protection and could further limit exposure, with more people able to flag their own content and avoid potential restrictions or penalties for posting it.
Twitter could also look to expand the reasons for self-reporting in the future, further increasing the value of the tool, and any measure that improves user safety by reducing unwanted exposure can only be a positive.
The new feature is currently in testing with some users.