Making the ‘Report Abuse’ Option Meaningful

Jane Austen, the British 10-pound note and a Twitter sh*tstorm.

Jude Ellison Sady Doyle

British journalist Caroline Criado-Perez received 50 rape threats per hour on Twitter after her successful campaign to get a woman on British currency. (Shawn Campbell / Flickr / Creative Commons)

On July 24, journalist Caroline Criado-Perez enjoyed what may have been the most polite, inoffensive, charmingly British victory in feminist history: She led a petition drive to increase female representation on British currency, and the British government complied, placing a picture of novelist Jane Austen on the 10-pound note. Criado-Perez was rewarded for this delightfully genteel accomplishment with 50 rape threats per hour, every hour, for a 12-hour stretch.

A thus-far-unnamed man has already been arrested in Manchester on suspicion of coordinating the online attack. But the larger issue of gendered online harassment won’t end with the assault on Criado-Perez. Supporters have set up a Change.org petition, asking Twitter to make its “report abuse” button — a feature that’s already available on Twitter’s iPhone platform — more widely available, and to revise its terms and conditions so that gendered harassment can be dealt with effectively.

The terms of that petition, which has over 129,000 signatures to date, matter immensely. Because although Twitter has responded to the petition, and promised to improve, the campaigners seem to have gotten only half of what they asked for — and it might be the less important half.

What they got was a promise to roll out the “report abuse” button to every available platform, and a statement from Twitter that the company is “talking with our users, advocacy groups, and government officials to see how we can improve Twitter.” But, as critics have already pointed out, the presence of a “report abuse” button doesn’t constitute a reliable benefit without the corresponding policy to support it. In addition to asking for the “report abuse” button, the Change.org petitioners specifically request the following:

It is time Twitter took a zero tolerance policy on abuse, and learns to tell the difference between abuse and defence. Women standing up to abuse should not fear having their accounts cancelled because Twitter fail[s] to see the issue at hand. 

The report abuse button needs to be accompanied by Twitter reviewing the [terms and conditions] on abusive behaviour to reflect an awareness of the complexity of violence against women, and the multiple oppressions women face[.]

Blogger Lindsey Bieda points to a troubling history of harassers using “report for abuse” policies to shut down marginalized people’s work. She cites policies on platforms like YouTube, where an account can be “suspended” with a large enough number of abuse notifications, regardless of its content.

“The first major faulty assumption [of these policies],” Bieda writes, “is that if a large enough group of people think that something is abusive it must be abusive. This might be true in a[n] egalitarian society… but when discussing oppression as a member of an oppressed group the masses have the numbers and the ability to silence those trying to get the word out.”

The other problem with the policies, as Bieda and others have noted, is that the terms and conditions of the platforms in question, or the reporting procedures themselves, may be too vaguely defined or user-unfriendly to benefit users who are being harassed.

For illustration’s sake: I ran into this problem myself, shortly after Facebook supposedly revised its policies to allow the banning of “sexist hate speech.” At that time, I was dealing with an imposter account, apparently set up by an anti-feminist who was angry at me for writing critically about rape jokes. The imposter account — as I recall, it was filled with “jokes” about how I was secretly a man, couldn’t get laid, and hated penises — popped up almost immediately after I blocked a comedian with whom I’d argued. While he didn’t explicitly claim credit for creating the imposter account, the comedian began claiming that he’d invented me — and the Internet was littered with claims from other people, including fellow rape-joke protester Lindy West, that he’d created imposter accounts for them as well.

Sexism aside, Facebook’s policies forbid creating accounts in order to impersonate another user. But when I reported the account, I was informed that it did not violate Facebook’s policies: “It looks like you share the same name with the person who owns this timeline. Because so many people use Facebook, there’s a good chance that there are multiple people with the same name on Facebook,” the chipper rejection message informed me. (This Sady Doyle was listed as “male,” yet another example of the high-caliber humor to be found within the page.) When a friend reported the account, he was given a message telling him to direct the complaints to me, so that I could report them to Facebook, so that I could be told, again, that the imposter account didn’t constitute harassment.

That account was shut down, eventually, after I complained about it on Twitter and attracted the attention of a staff member, but other women report that everything from gendered harassment to direct rape threats has been deemed “not in violation” of the policies of the social media platforms that made them possible. All of which speaks to the fact that the option to “report abuse” doesn’t work unless the company in question has a solid screening policy and a clear, coherent, useful definition of what constitutes “abuse” in the first place.

Criado-Perez’s case may have been easy for the public to rally around simply because it was incredibly clear-cut: Not only was she reporting direct threats of violence, she was reporting a torrential number of them. Fifty rape threats per hour is not a “free speech” issue — threats of rape, death, or other violence are not considered to be protected speech — nor can a reasonable person argue that their recipient is “hypersensitive” or irrational if she’s troubled by a deluge of this size.

But if we can all agree that 50 direct rape threats per hour constitutes abuse, what about 50 rape threats per day? What about 10 rape threats in a week? What about one rape threat, on one occasion? These are the sorts of questions that social media platforms will have to answer when they craft abuse policy.

Even if the platforms in question take a zero-tolerance policy toward threats, deciding that even one direct threat of rape or death will result in account suspension or shutdown — which is, to be clear, the most sensible policy — they will have to decide whether “direct threats” constitute the only suspension-worthy offense, or simply the outer limits of harassment. What about comments that don’t directly threaten rape, but do say that the recipient ought to be raped, or that joke or fantasize about the rape or death of the recipient? Do these count?

And, setting violence aside, what about gendered or racialized language? This would probably be the next clear barrier; someone flinging slurs is being abusive in the eyes of most reasonable people. But if someone calls you a “bitch” once, will you be able to shut down their account? And what if they don’t call you “bitch,” but “asshole” or “shitheel” or anything else you find personally hurtful? What if you’re a feminist woman who’s calling someone a “sexist asshole” because, in your view, he’s been sexist enough to merit the description? Are you being abusive?

The more one looks into the question of screening abuse out of online spaces, the foggier and less satisfying the answers get. Most misogynist abuse — thank God — is less dramatic than the abuse experienced by Criado-Perez. Most sexist or otherwise bigoted interactions are complex, situational, and rely on a series of stereotypes and power imbalances that are too complicated and subtle to fit easily into pre-defined macros. You can suspend people for threats, and perhaps for slurs, but you can’t screen out the casual objectification, condescension, or hostility that make Internet spaces — and the world at large — unsafe for women. Only widespread cultural change can accomplish that.

Which isn’t to object to the “report abuse” option, on Twitter, or anywhere else. It’s only to say that, well, the option to report it is better than nothing. But the option to end it is still pretty far out of our reach.


Jude Ellison Sady Doyle is an In These Times contributing writer. They are the author of Trainwreck: The Women We Love to Hate, Mock, and Fear… and Why (Melville House, 2016) and the founder of the blog Tiger Beatdown. You can follow them on Twitter at @sadydoyle.
