WHY HASN’T SOCIAL MEDIA DONE MORE?

SOCIAL MEDIA’S ROLE IN THE SPREAD OF HATE

It can be difficult for users to accept their place in the business plan of a social media or other internet provider. We all like to think of ourselves as the top of the food chain, the valued customer, the consumer. But we aren’t. For social media and internet service providers, we are the product. Companies that buy our data or buy access to us through targeted ads are the valued customers. With roughly 7.7 billion people on the planet, and about 50% of them online, internet providers are intensely focused on growth and revenue. Besides, even a complaining user is still a user and therefore still a product. Moderating content is time-consuming and hiring people to review potentially offensive posts is expensive, so in the absence of a financial incentive, social media platforms have historically done little to address online abuse.

However, major social media providers have recently found themselves under unprecedented scrutiny. Investors, journalists, and politicians want to know why they haven’t done more to prevent the spread of online hate, harassment, and misinformation. Facebook, Twitter, and Google executives have all been questioned by Congress; media around the globe have reported on suicides, rapes, and murders conducted, incited, or promoted online; and many countries are creating laws to compel providers to better protect online users’ privacy and safety.

HOW DO SOCIAL MEDIA PLATFORMS DEAL WITH ONLINE HATE AND HARASSMENT AND WHY ISN'T IT WORKING?

Many social media platforms have adopted a “flag” system of user regulation. The idea is simply that users should police the platform’s content and “flag,” or report, posts and photos that are objectionable. Once enough people flag a post, it is reviewed and potentially removed. Perhaps a good idea in theory, in reality this leaves victims of online harassment and hate campaigns with little protection. Able to flag an item only once from their own account, victims are left with two options: try to rally others to read and then flag the objectionable content, which brings further anguish and humiliation, or try to ignore the harassment.

In addition to the flagging system, most hosts and social media platforms do offer a way to report content that may be harassing or illegal, either through an online form or a dedicated complaint email address. But the sheer volume of reported content means that even time-sensitive reports (live-streamed suicides or violence) can go unaddressed for days, weeks, even months. After well-publicized criticism, many platforms are attempting to prioritize reports of crimes against children and live-streamed suicides and violence.

When called to answer for the lack of protection offered to victims, major platforms have typically defaulted to a “we don’t want to chill free expression” position and pointed to their flagging systems. In reality, many platforms are ill-equipped to deal with the sheer number of complaints from people being harassed, bullied, and stalked online. Instead of hiring more human reviewers, many have turned to AI programs, which often miss serious issues. In some cases, photos women posted of themselves in swimwear or breastfeeding were removed, while revenge porn and extreme violence were left up.

IS ONLINE HATE AND HARASSMENT REALLY THAT SIGNIFICANT OF A PROBLEM?

In a recent New York Times article, it was revealed that a former Facebook content moderator was suing the social media giant for PTSD caused by the volume and extreme nature of the thousands of posts she was compelled to watch as part of her job. In addition to hateful posts, content moderators are compelled to watch child sexual abuse, beheadings, murders, suicide, bestiality, and all manner of violent videos, photos, and graphics. Of course, repeated exposure to this type of content can cause significant emotional and psychological damage.

The article also reveals that at Facebook, roughly 7,500 people worldwide review over TEN MILLION reported posts EACH WEEK! That works out to well over a thousand reported items per moderator every week, or thousands of instances of reported content each month.

Facebook has responded that it (like other platforms) will be hiring more content moderators in the future. But this also gives us, as users, a glimpse of the scope of the problem. Millions of people are being harassed, exposed, and abused en masse online, and their only line of defense seems to be a significantly overworked, emotionally fragile, similarly abused group of moderators. If moderators are suffering to this degree over posts targeting strangers, it should be easy to understand the effect on the victims themselves.

With first-quarter net profits of $4.9 billion, it seems that Facebook could invest significantly more in hiring an appropriate number of moderators, establishing clear rules for removal (one of the cited issues), and providing better mental health support for employees. This in turn would allow it to better protect users.

WHY AREN’T SOCIAL MEDIA AND INTERNET PROVIDERS RESPONSIBLE FOR THE CONTENT PEOPLE POST ON THEIR PLATFORMS?

This is one of the most common questions posed by victims of online harassment and hate. Social media and internet providers in the United States are largely shielded from responsibility for the content that third parties post on their sites, thanks to Section 230 of the Communications Decency Act of 1996. This law essentially differentiates between a provider (Facebook, Google, WordPress, Twitter, etc.) and the person who uses the provider’s service to publish content.

There are some exceptions to the rule, and certain activities can invalidate Section 230 protections. If a provider takes an active role in creating or editing content, it risks losing its immunity if that content violates someone’s rights.

Many countries have adopted their own versions of the Communications Decency Act to better address the challenges that rapidly advancing technology has created. In the US there is a constant and growing call for the Act to be reviewed and updated for a modern, internet-based society.

79% believe online service providers should be responsible for stepping in when harassing behavior occurs on their platform, and that, other than tougher laws and more effective law enforcement, online services play the major role in combating online harassment.*

Together, through empowerment, education, and advocacy, we can effect change and create a safer, more inclusive online experience for all.

Help us achieve this goal.
