This article was originally published on Lenny.
The musician Mia Matsumiya has documented a lot of the online harassment she's suffered on Instagram at @perv_magnet. Her bio expresses her mission succinctly: "4'9" violinist & perv magnet. I've archived 1,000+ messages from creeps, weirdos & fetishists over the past 10 years. I've decided to post them all." An alarming amount of the abuse comes through Facebook messages, invisible to anyone but her. "Anyone ever tell you how sexy you are and how bad they wanna let you face fuck them," begins one unsolicited message from a random person that ends with a smiley face. She also posts submissions from others, one of which reads: "Hey asian whore want to get raped? I know where you live."
Matsumiya has reported her harassment to the police, but it's unclear whether she's reported the repeated abuse to the social networks themselves (as of press time, she hadn't responded to requests for comment). But Facebook knew its structure was to blame: last fall, the company changed its messaging features to remove the dreaded "Other" folder, where any user could contact you. Now, users must send requests to contact someone before they can message them, Facebook tells us, though there's still a "filtered requests" tab buried two levels deep on the desktop version of Facebook that provides a home for potentially hateful noise. Though extreme, what Matsumiya is experiencing isn't rare — a full quarter of women ages 18 to 24 report being sexually harassed online, and 26 percent report being stalked.
If you're experiencing abuse in this or one of its many, many other forms, this article's for you: it is a practical guide to understanding how to report harassment and abuse online, and what to expect from various social networks when you do.
The platforms, unfairly, expect a lot of you, and it sucks that so much of the burden of protecting yourself falls on you, the person being harassed. Per Matsumiya's account above, sometimes the very structure of the app you're using unnecessarily enables abuse. But many social-media services have started beefing up their trust-and-safety teams and expanding their understanding of all the ways a person can be harassed on their platforms.
The way you'll have to deal with harassment will vary across different platforms, but there are a few best practices that can help keep your case strong.
- Screenshot, screenshot, screenshot. A lot of attacks online come from burner accounts that may be reported and suspended before you can report them yourself, and attacks may also take the form of content that is posted and then deleted, such that it may show up in an alert very briefly and then disappear once the damage is done. Keeping your own record of what is happening not only helps quantify things and keep them straight but also may help you or the platforms establish patterns across accounts or services.
- Report accounts, not just content, where applicable. If a user is being relentless or conducting an attack against you, it's often appropriate to report both the content they're posting as well as the account they're using.
- Escalate to law enforcement if you're in danger. Unfortunately, most platforms can't respond substantively to harassment quickly; sometimes they may take only a few hours, but most don't guarantee a response time. If your situation is urgent, know that it's within the purview of law enforcement to respond to a direct threat (unfortunately, many officers may not understand what's going on; you may not be able to count on them to know what Twitter is, but you should at least try to get a report filed). Online harassers don't often act on their threats, but it's not worth the risk to assume they won't, or that since it's "just online" it should not be taken seriously.
The basics: Twitter's reporting forms are here. Under "Report a violation," there are different options depending on whether you wish to report harassment, impersonation, or privacy violations.
The details: Twitter has been a flash point for discussions about abuse, for good reason. Twitter "makes harassment so visible," said Anita Sarkeesian, the founder of Feminist Frequency. "The same metric we use to judge expression is the same one we use to judge harassment." That is, someone harassing you about a tweet you made is as visible to you as your own tweet.
On the back end, reports get routed to different teams depending on the content — for instance, child porn goes to a different place than someone directing violent threats at you. You can report both accounts and individual tweets, but if an account is repeatedly tweeting at you, it will be simpler to report the account rather than each individual tweet. Twitter used to notify the person you were reporting when you did so. It no longer does this.
When you report someone, Twitter will generate an email to you that becomes a thread with the support team. When Twitter decides whether what you've reported is or is not harassment, it will email you and tell you so. Twitter initially turning down even fairly obvious cases is not unusual, but that doesn't mean it's the end of the exchange: the platform allows users to reply to the support email chain to challenge decisions and provide additional evidence where possible. Twitter is also unusual in that it allows users to report harassment happening to someone else and communicates just as actively with the person who files those reports.
An annoying thing about this process (which will hopefully change any minute) is that Twitter does not identify what you reported in the emails it generates, which can make things very confusing if you are reporting many people or tweets at once. This makes it difficult to follow up with relevant information.
Anecdote time: Last spring, I tweeted a screenshot of a rude DM sent to me by a random account. The user saw it and immediately marshaled about a dozen sock puppets to tweet repeatedly at me that I deserved to die. As I recall, Twitter found one of the sock-puppet accounts to be in violation and suspended it, but it didn't get the rest and didn't understand how to go after the account running the attack. At the time, Twitter's reporting structure couldn't accommodate this type of attack; now it can.
The basics: For violations where you cannot report content on Facebook in context, here is the form you can use. For violations where there is context, steps are below.
The details: Facebook's real-name policy is meant to curtail certain kinds of abuse, like truly psychotic hate speech or direct threats, by making it harder to maintain an anonymous identity. However, this underestimates the crazy stuff people don't mind having attached to their real names (see below). The fact that victims' profiles must be tied to their real names also leaves them vulnerable.
Reporting content on Facebook varies slightly depending on whether the item is a posted status, a link, a photo, or something else, though pop-up menus offered through reporting and flagging buttons make it relatively easy; all the community violations options are under "This [content] doesn't belong on Facebook." Once something is reported, it's routed to your own personal "support inbox" on the service, which is useful for keeping track of what you reported and when you reported it, and lets Facebook thread replies into individual reported items.
However, if Facebook turns your request down, there's no opportunity for you to follow up; Facebook only provides shortcut buttons to deal with the problem on your own by, for instance, blocking the user.
Facebook's support terms give people a lot of leeway — the satire/humor/social commentary clause in its community standards is interpreted pretty broadly — and it can be hard to get the company to affirm violations and remove them. The company recently made statements about cracking down harder on hate speech, expanding beyond the direct-threat threshold.
Anecdote time: Facebook came under scrutiny in Germany this March for not adequately policing xenophobic and racist hate speech, a type of content condemned by the community standards, but the problem persists at home, too. Even a publicly posted NBC News story gets vicious public comments on Facebook. A March 18 post titled "Latino, Immigrant Advocates to Protest All Trump Arizona Events" received a comment from one user, Paul: "Let's keep all of the Central American immigrants, and deport Donald Trump and his racist doofus supporters." Another user, Pamela Thomas Jones, responded, "Paul lets LOCK all of them up in a cage and send them to a jungle. They are animals that belong I a cage." As a user, I had to first "hide" this comment (from only myself) in order to go through the motions of reporting it as hate speech. By the time I saw it, the comment had been up for four days. Facebook responded a couple of days later saying it did not violate the community standards.
The basics: For a long time, Instagram did not have a form for reporting content directly to the staff; if you couldn't see the content you wanted to report (because of blocking) or couldn't describe it through built-in forms, you were out of luck. But now there is one, buried deep, and there is a separate online form for reporting harassment or bullying.
The details: Instagram's format leaves some of its users particularly vulnerable to harassment and bullying, in part because the bully's tactics are highly visible, but only to the user they are attacking. A harasser might tag the victim in a disgusting photo or leave an offensive comment on an old photo, for instance, so the victim can see the harassment, but it's mostly invisible to other users. Because there's not an easy way for others to see this behavior, community enforcement doesn't help here. The fact that users must be logged in to view otherwise-public content means that if they're blocked by an account harassing them, they can't even access the usual tools for reporting, and Instagram has limited web functionality.
Additionally, Instagram has one of the woolier reporting mechanisms. The dialog that pops up when you select a photo or account to report is quick and straightforward, but there is no room for elaboration. While Instagram is owned by Facebook, its harassment-reporting dialog structure is different. "This photo shouldn't be on Instagram" leads to the hate-speech or graphic-violence reporting options, but if you need to report harassment or bullying, you must select "This photo puts people at risk."
When you file a report with Instagram, it does not generate any feedback or confirmation beyond the "Thanks for your report" dialog: no emails, no messages. Likewise, Instagram does not generate follow-up emails to let you know what decision it's made, and there's no way to appeal a decision. Generally, Instagram decides whether to take action within 24 to 72 hours.
Anecdote time: Reports of bullying among teens are extremely common on Instagram; in one case a couple of years ago, parents sued a few boys running an account that allegedly targeted their daughter with nude photos and gave space for others to leave harassing comments. The page no longer exists, at least not in the form it did (deleting old Instagram accounts and setting up new ones is an extremely common practice). In another case, a user's ex-boyfriend used Instagram to mock her for having cancer and tell her to "kill [her]self." The parents in question did not respond to queries, and Instagram would not comment on these specific cases, but a representative stated that "Instagram has zero tolerance for threats of violence, bullying, and harassment to our community, and when instances are reported, we move swiftly to take down violating content."
The basics: The general reporting form for YouTube is here, though you may not be able to get all the way through, depending on whether you can use its auto-generated fill-ins, which don't capture harassment happening in, for instance, a third-party video's comments.
The details: Poor YouTubers. The video service has one of the worst frameworks for reporting harassment. A lot of it is automated, but in a regimented way that burdens the reporting user, and a lot of the infrastructure makes harassers uniquely visible.
Any individual comment can be reported on a page, but reporting an individual user is three clicks deep (their profile > about > the flag icon > report user). After you work your way through the dialogs, YouTube gives you an auto-generated form that pulls the user's videos and comments on your own channel or videos and asks you to identify which of them you're complaining about. Notably, this does not allow you to systematically report a user if, say, they are leaving comments about you strewn across others' videos.
Reporting an individual comment on someone else's YouTube video generates vague feedback that sounds like nothing is being done, but the company tells us the complaint does get submitted. Per a support page, YouTube uses a "strike" system invisible to other users that will sometimes result in account termination.
The only feedback users receive indicating that their reports have done anything is if the offending video or comment is removed; reports do not generate any paper trail, and there is no dedicated interface for managing reports. YouTube does not specify how long it takes to act on reports but notes only that a staff of specialists monitors them 24/7.
Anecdote time: Again, what constitutes a threat relies on YouTube's interpretation. For instance, footage of someone playing a game called "Beat Up Anita Sarkeesian" remains posted. One commenter on the video writes, "can't you just kill the bitch instead." Despite reports, both the video and comment remain posted.
Sarkeesian also pointed out another unique form of harassment: the majority of videos that appear in the "recommended" sidebar next to her own are made and circulated by abusers. "If you watch one of my videos, you will then be recommended all of these anti-feminist videos," she said. "The related-channels function gets defaulted onto everyone's YouTube page and is populated by [YouTube's] algorithms. On my channel, it's all harassers." To stop this from happening, Sarkeesian must opt out of the recommended networks entirely, meaning her videos can never appear in a recommended sidebar, which denies her a big source of traffic to her content.
The basics: Tumblr's page for reporting harassment or abuse.
The details: Tumblr is an extremely popular platform for anonymous users, and it has its share of problems, including cultural pockets of self-harm obsession, like pro-ana blogs. It can be a target for "raids" by subfactions of users on Reddit or 4chan, where they launch abusive attacks against users they find distasteful. The abuse policy states Tumblr will remove "overtly malicious" material or, in the case of self-harm, "active promotion or glorification."
Tumblr allows users to enter a report form from within their main dashboard, by selecting "flag post" from the three-dot menu. A short and simple set of menu options allows users to frame a report, and there's a text box at the end for contextualizing problems with the post. However, on a post's web page, there is no reporting button. In that case, users can use the abuse form directly, which has no menu options, just a text box (meaning it will be an extra step for Tumblr to sort it appropriately).
According to Tumblr, the company tries to field most complaints within 24 hours, and the ending dialog to filing a complaint says it can take "a day or two." After that period, Tumblr will follow up with an email letting you know it's looking at your complaint, though it will not follow up again to let you know what its decision was.
Anecdote time: Per usual, the lines around abuse involve a lot of interpretation on Tumblr's end. I ran across a post discussing the terms transtrender and genderspecial that a user had reblogged, telling the original poster to "run five miles into traffic in the middle of the freeway." Tumblr's abuse-and-harassment team said the team didn't find it bad enough to be taken down, though Tumblr would not explain its reasoning any further.
Casey Johnston is an editor at the Wirecutter and a freelance journalist.