Last March, Twitter CEO Jack Dorsey admitted that, despite the company’s intentions, Twitter wasn’t the best at encouraging productive or meaningful conversations among users. More often, the platform served to further abuse, harassment, and the spread of misinformation, while plunging users deeper into echo chambers. The solution, Dorsey said, was to shift away from banning the most egregious offenders—a task that many would argue Twitter hadn’t excelled at—and toward promoting “conversational health.”
A year later, little has changed. Twitter is still a largely toxic place riddled with extremists, sock-puppet armies, and fake accounts; and the company’s top brass is still talking about conversational health. Its most recent attempt to improve discourse on the platform was confirmed last week: a feature that would allow users to selectively hide replies to their tweets from public view, so that other users can’t see the offending reply when interacting with the initial tweet.
First uncovered by Jane Manchun Wong, a software engineer known for discovering yet-to-be-announced features being tested by social media companies like Twitter and Facebook, the new feature doesn’t delete hidden replies outright; other users can still view them through a menu option.
“We think the transparency of the hidden replies would allow the community to notice and call out situations where people use the feature to hide content they disagree with,” said senior Twitter product manager Michelle Yasmeen Haq. In a Twitter thread responding to Wong’s discovery, Haq confirmed that the feature would be tested publicly in the coming months. “We think this can balance the product experience between the original Tweeter and the audience.” Twitter did not respond to a request for additional comment.
Wong agrees that the new feature could be helpful. On Twitter, she said that it could benefit those looking for peace of mind on the platform. In response to comments suggesting it will only reinforce ideological echo chambers, she said the visibility of hidden replies could serve as a check on people who use the feature to silence those with different views. “Given Twitter is the best debate platform for politicians to fall from their grace, I’m sure people will sure check the hidden tweets to call them out,” she tweeted.
At present, Twitter users have three tools to address toxic or unproductive replies: They can block the user, mute them, or report the account. Blocking or muting only changes the experience of the user who does the blocking or muting—the offending reply remains publicly visible to everyone else—and Twitter’s reporting tools are only useful if the comment in question violates the platform’s policies governing user behavior. That means if a user is being harassed in a way that isn’t explicitly against Twitter’s guidelines—like when sex workers were targeted by the so-called “ThotAuditors” last November—there’s little they can do to stop the abuse from appearing publicly.
The hidden replies feature aims to change that by giving users another tool in their digital toolbox to tamp down on unwanted comments, Haq said. However, despite what the wording in the Twitter app’s code might indicate, it clearly isn’t a substitute for proper moderation.