On April 16th, Facebook users witnessed a shocking live stream that would help change the conversation about the company’s legal status. The video of Steve Stephens murdering another man in Cleveland remained on Facebook for two hours before it was taken down. The backlash was so immediate that the following day, Justin Osofsky, Facebook’s vice president for global operations and media partnerships, released a statement saying the company would be reviewing its process for reporting and removing such videos. Writing about the incident for Reuters, David Ingram explains that “Facebook relies largely on its 1.9 billion users to report items that violate its terms of service,” adding that “the shooting was the latest violent incident shown on Facebook, raising questions about how the company moderates huge amounts of content uploaded from around the world.”
Brooke Masters of the Financial Times called Facebook’s handling of the incident an “abject failure,” citing a 1996 law that essentially relieves internet content providers of all responsibility for user postings. A January 3rd article on Wired.com notes that the Telecommunications Act of 1996 is “often cited as the most important tool ever created for free speech on the internet.” Quoting directly from Section 230 of the Communications Decency Act, which makes up part of the larger Telecommunications Act, Christopher Zara highlights the 26 words that, in his account, have “allowed today’s biggest internet companies to flourish”:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
However, Masters is not impressed. She points out that a 1998 law obligates internet companies to take down copyrighted material and urges Congress to pass a similar law that would “quickly remove illegal postings — child porn, hate speech and libel”. Moreover, Facebook and similar companies already take down material they deem in violation of their terms and conditions, so it is misleading to think of internet companies as bastions of free speech that bear no responsibility for what their users post online.
And the need for accountability is only growing.
Some governments, for example, are adjusting how they classify these companies to adapt to the changing media landscape. In India, District Magistrate Yogeshwar Ram Mishra and Senior Superintendent of Police Nitin Tiwari issued a joint order on April 20th warning Facebook and WhatsApp users that they could be held accountable for spreading libel and fake news, stating, “There are several groups on social media which are named on news groups and also groups with other names which are propagating news and information which is not authentic”. Similarly, Dame Patricia Hodgson, chair of Ofcom, the UK’s media regulator, said that “she believes internet businesses such as Google and Facebook are publishers, raising the prospect that they could eventually face more regulation”, according to The Guardian. In an article the following day in the same newspaper, Graham Ruddick goes further, reporting that the British culture secretary, Karen Bradley, announced that “the [UK] government is considering changing the legal status of Google, Facebook and other internet companies amid growing concerns about copyright infringement and the spread of extremist material online”.
While the outcry for more regulation of internet companies like Facebook grows louder, Facebook has not stood idly by. On November 16th, the company announced a new initiative it dubbed “Trust Indicators”. The system, devised by the Trust Project at Santa Clara University, “wants to essentially tag publishers for reliability, so ‘digital platforms, such as Google, Facebook, and Bing, will be able to use machine-readable signals from the Trust Indicators to surface quality news to their users’”. But even this falls short, according to Jake Swearingen of New York magazine, who argues that the “fake news problem on Facebook is still rooted in how easy it is to start a fake web news site and then use Facebook’s incredible scale and the tendency of its users to share inflammatory headlines to reach many, many people”. He points out that “left to my own devices, if Facebook ever opened up its trust indicators to all publishers, I could conceivably rate my fact checking as impeccable, my correction policy as top notch, my ownership structure as being made up of ‘true patriots,’ and my masthead as being comprised of Abraham Lincoln, Sean Hannity, and Jesus Christ”. Without human beings actually fact-checking Facebook’s postings, the process would be essentially meaningless.
It is not just politicians and journalists who are worried about Facebook’s growing influence. Even Chamath Palihapitiya, Facebook’s former vice president for user growth, expressed deep concern over the company’s tightening hold, not just on the media landscape but on society’s psyche, at an event run by the Stanford Graduate School of Business on November 10th. Palihapitiya claimed that “we have created tools that are ripping apart the social fabric of how society works”, adding that there is “no civil discourse, no cooperation, misinformation, mistruth and it’s not just an American problem, it’s not just about Russian ads, this is a global problem”.
In an article for the New York Times, John Herrman further illustrates Facebook’s ubiquity: “Facebook, in the years leading up to this election, hasn’t just become nearly ubiquitous among American users; it has centralized online news consumption in an unprecedented way. According to the company, its site is used by more than 200 million people in the United States each month, out of a total population of 320 million. A 2016 Pew study found that 44 percent of Americans read or watch news on Facebook.”
The argument that there should be mechanisms in place to regulate Facebook and somehow make it more accountable for users’ postings has been steadily gaining steam in the mainstream. Opponents counter that the phone company is not held liable when a caller makes a threat or admits to committing illegal acts over the line, so why should Facebook and similar companies be held accountable for what users freely say or do on their platforms?
Part of the answer lies in the fact that Facebook is not merely a conduit for social media interactions. Its core business is selling warm bodies to advertisers, making big money from what its users post online. Media companies, meanwhile, “have become vassals to essentially unregulated, monopolistic distribution mechanisms” like Facebook, according to Matt Taibbi, “who additionally appropriate the lion’s share of the profits that used to fund things like investigative journalism”. Taibbi is describing the “life-or-death power” that Facebook holds over media companies, where “they can steer traffic wherever they please simply by tweaking their algorithms” with their “monstrous influence”.
The idea of gigantic media monopolies billing themselves as unaccountable entities simply because they are mere conduits for material that users choose to share willingly should alarm anybody who studies the matter carefully. This is not simply a matter of piling more government regulation onto what was essentially the last bastion of free speech, the Internet. It is imperative to stand up to companies like Facebook and tell them that they cannot have it both ways: they cannot wash their hands of the responsibility that comes with providing a distribution platform while making huge profits selling advertisers access to their users.
Writing in The Atlantic, Adrienne LaFrance concludes by citing a 1915 column in the Chicago Daily Book under the heading of “fake news”: “The people of this country will demand as much protection against adulterated news as they now get against adulterated food for the stomach”. One would hope so, but Facebook is making so much money off of fake news, libel and quite possibly highly illegal content, not to mention videos of murders, terrorist recruitment groups and child pornography, that it will take massive public outrage to finally turn the tide on this issue. Surely there must be some legal recourse for regulating Facebook on the grounds of profiting off of illegal content and, if there is not, there should be. That a society would allow Facebook to disclaim responsibility while reaping huge profits should be worrisome and enraging.
Ingram, David. “Cleveland killing leads Facebook to review handling of videos.” Reuters. https://www.reuters.com/article/us-cleveland-murder-facebook/cleveland-killing-leads-facebook-to-review-handling-of-videos-idUSKBN17J1Q6
Masters, Brooke. “Facebook is more than just a pipe — it is a publisher too.” Financial Times. https://www.ft.com/content/da427af2-2670-11e7-8691-d5f7e0cd0a16
Zara, Christopher. “The Most Important Law in Tech Has a Problem.” Wired. https://www.wired.com/2017/01/the-most-important-law-in-tech-has-a-problem/
“Offensive WhatsApp posts can now land group administrator in jail.” The Economic Times. https://economictimes.indiatimes.com/news/politics-and-nation/offensive-whatsapp-posts-can-now-land-group-administrator-in-jail/articleshow/58281149.cms
Ruddick, Graham. “Ofcom chair raises prospect of regulation for Google and Facebook.” The Guardian. https://www.theguardian.com/media/2017/oct/10/ofcom-patricia-hodgson-google-facebook-fake-news
Ruddick, Graham. “UK government considers classifying Google and Facebook as publishers.” The Guardian. https://www.theguardian.com/technology/2017/oct/11/government-considers-classifying-google-facebook-publishers
Swearingen, Jake. “Can Facebook Trust Publishers to Say How Trustworthy They Are?” New York Magazine. http://nymag.com/selectall/2017/11/facebook-trust-indicators-fake-news.html
Taibbi, Matt. “RIP Edward Herman, Who Co-Wrote a Book That’s Now More Important Than Ever.” Rolling Stone. http://www.rollingstone.com/politics/features/matt-taibbi-on-the-death-of-edward-herman-w511766
LaFrance, Adrienne. “How the Fake News Crisis of 1896 Explains Trump.” The Atlantic. https://www.theatlantic.com/technology/archive/2017/01/the-fake-news-crisis-120-years-ago/513710/