Twitter is planning to impose new restrictions on pornographic and hateful imagery as part of a renewed effort to tackle abuse on its social network.
The US company has also said it intends to review user complaints more quickly.
The efforts are outlined in a leaked email from the company’s head of safety, which was published by Wired.
But one UK charity has already said the company needs to go further than “tinkering” with its existing rules.
Twitter’s chief executive, Jack Dorsey, had said on Friday that he planned to announce a “more aggressive stance” against online abuse this week, after Twitter was criticised for temporarily blocking the account of Rose McGowan – an actress who had accused Hollywood producer Harvey Weinstein of rape.
Harvey Weinstein denies all allegations of non-consensual sex.
The UK government had also recently urged social-media leaders to do more to tackle the problem, and suggested Twitter and others would have to pay a levy to fund anti-abuse campaigns in the future.
Twitter has confirmed that Wired’s report is accurate.
“Although we planned on sharing these updates later this week, we hope our approach and upcoming changes, as well as our collaboration with the Trust and Safety Council, show how seriously we are rethinking our rules and how quickly we’re moving to update our policies and how we enforce them,” a spokesman told the BBC.
The council referred to is a new body of 50 independent organisations that Twitter intends to consult to ensure its users can “express themselves with confidence”.
Its members include the Internet Watch Foundation, EU Kids Online and the UK Safer Internet Centre.
The leaked email had been addressed to members of the newly formed council.
Twitter’s struggle to grow its number of active users has been linked to the abuse some have faced.
Among the new steps detailed are:
- the immediate and permanent suspension of accounts identified as the original source of nude imagery taken and shared without the subject’s permission. In the past, perpetrators faced only a temporary lockout if it was the first time they had committed the offence
- the definition of non-consensual nudity has been expanded to include hidden-camera content and “upskirt” imagery that might have been captured without the victim being aware
- hate symbols and other hateful imagery will now be treated as sensitive media and should be marked as such by the poster, allowing the content to be initially hidden behind warning alerts
- action may be taken against unwanted sexual advances even if the complaint was made by someone who was not a participant in the conversation
Twitter adds that unspecified “enforcement action” is planned against account-holding groups that have historically used violence to advance their causes.
Furthermore, it promises to start taking steps against those who post messages that glorify or condone violence, even if the users do not issue threats of their own.
In both these and other cases, “more details to come” are promised.
Likewise, the email says Twitter will be “investing heavily” in shortening the time it takes to handle complaints, but is not specific about what its new targets will be.
The proposals have been welcomed by the Fawcett Society – a UK-based gender-rights campaign group that previously accused Twitter of “failing women”.
But the charity – which is not a member of the new council – said Twitter’s managers should go further.
“These are positive changes that do more to recognise the impact of abusive behaviour online,” said Jemima Olchawski, the society’s head of policy.
“However, [our] research shows that it takes Twitter far, far too long to respond to abusive tweets – if they do at all.
“As a minimum, abusive content should be removed within 24 hours of being reported. Twitter must genuinely commit the resources to make this policy meaningful – tinkering will not be enough.”