Twitter has unveiled a feature that will allow users to temporarily block accounts that are sending them abuse.
The social media platform announced it has begun testing Safety Mode, a feature which, once turned on, will block for seven days any accounts that Twitter's systems spot using harmful language or sending repetitive, uninvited replies.
The announcement comes as the firm, and wider social media, continues to face questions over how to better protect users from online abuse following a number of high-profile incidents, including the targeting of black footballers during Euro 2020.
Turned on via the Twitter settings, Safety Mode will assess the likelihood of an interaction between two users being negative based on the context of the tweets in question, as well as any existing relationship between them – with Twitter confirming that accounts a user already follows or frequently interacts with will not be automatically blocked.
Those accounts which are autoblocked by the tool will be temporarily unable to follow a user, see their tweets or send them direct messages.
Twitter said it will keep information about the tweets flagged through Safety Mode so that users can view those details at any time if they wish.
The tool will also allow users to undo autoblocks at any time themselves if they feel Twitter has made a mistake.
“We want you to enjoy healthy conversations, so this test is one way we’re limiting overwhelming and unwelcome interactions that can interrupt those conversations,” Twitter’s product lead Jarrod Doherty said in a blog post.
“Our goal is to better protect the individual on the receiving end of tweets by reducing the prevalence and visibility of harmful remarks.”
Instagram has recently introduced a similar tool called Limits, which enables users to temporarily restrict unwanted interactions.
For now, Twitter said its Safety Mode will be tested among a small feedback group on iOS, Android and desktop.