In 1995 – three years before Google was founded, nine years before Facebook, a decade before YouTube and 11 years before Twitter – a US court ruled that internet provider Prodigy Services was akin to a publisher because the company vetted and deleted inappropriate material from online message boards that attracted about 60,000 posts a day. The ruling meant that companies that interfered with content were liable for all material on their websites, whereas passive hosts of content were not.
Two US lawmakers, concerned the ruling would stifle innovation, introduced an amendment to the Communications Decency Act to ensure “providers of an interactive computer service” were not liable for what people might say and do on their websites. The amendment contrasted with how publishers and broadcasters are legally accountable in the US and elsewhere for the content they make public in traditional or online form.
The amendment, which became Section 230 of the Communications Decency Act within the Telecommunications Act of 1996 (and is known as CDA 230), enabled companies such as Facebook, Google, LinkedIn (owned by Microsoft since 2016), Reddit, Snapchat, Tumblr, Twitter and YouTube (owned by Google since 2006) to emerge as human ingenuity allowed.
But the growth of these companies appears to have outpaced their ability to police misuse of their products, without any legal penalty for the lapses. Technology is a neutral instrument in a moral sense, but that neutrality blurs when even a few bad actors misuse it. The internet’s drawbacks include that it encourages excess to gain attention and that it can be a tool for extremists. Terrorist propaganda and how-to manuals can be found via Google search or on YouTube. On Facebook’s platforms, fake news thrives and its algorithms can be used to spread misinformation to influence elections, as many claim occurred in the US in 2016 – or groups can use these algorithms to find fringe elements such as people who identify as ‘Jew haters’. Across the platforms the world over, fake news, the manipulation of algorithms to push articles to ‘trending’ status, troll armies, bogus ‘likes’, web-based smear campaigns and viral conspiracy theories have hyped partisanship, cheapened facts and amplified the role of emotion in discourse on these for-profit ‘public squares’, to the point where social media is accused of being a ‘threat to democracy’.
The controversies have roused policymakers, egged on by traditional media outlets that have lost advertising income to the newcomers. Moves are underway in the US to extend to the internet the regulations that govern political advertising in traditional media. Some even question the rationale behind CDA 230. But while regulation of political ads is feasible, US lawmakers are restrained from taking on the tech giants over content for two main reasons. The first is that the products of these companies are beloved by their billions of users, so anything that disrupted these services would prove unpopular. The other is that digital platforms, whatever their size, are difficult to regulate because they differ from traditional publishers and broadcasters. The content-heavy business models of the platforms are likely safe for now.
That said, the tech companies (as distinct from their products) have shed much goodwill in recent years as these and other controversies have swirled, and regulators in many countries already have enough legal power to pressure the platforms. In many ways, the influence of the internet on politics is exaggerated; at worst, the platforms have magnified conflicts, not caused them. But with so many controversies raging of late, the platforms are under pressure to limit abuses of inventions that have a more sinister side than their creators perhaps expected. The platform operators must ensure that recent steps to limit misuse work, to stave off US lawmakers and preserve public support. If the platforms don’t assume more control, regulators the world over will force them to.
Before the internet, perhaps the most famous episode of fake news in the electronic age was The War of the Worlds, a radio drama aired in the US in 1938. The broadcast was blamed for triggering alarm when listeners believed false reports, purportedly from the US military, that aliens had invaded. While the panic appears to be myth, the incident was used to justify curbs on radio content by policymakers concerned that the medium had helped authoritarians in Europe mobilise the masses – hardly the vision of radio’s investors. Just as democracies in the 1930s entrusted regulators and industry self-regulation to oversee radio content, politicians across the political spectrum are pressuring the digital platforms to prevent the internet’s openness from being twisted against the public interest, as radio’s reach was eight decades ago.
The digital platforms are responding. Facebook has cracked down on false accounts and is taking steps to reduce fake news. Google is curbing problematic searches and trying to promote ‘authoritative’ content. Search ‘did the Holocaust happen’, for instance, and denials no longer appear on the first search page. Twitter has introduced rules around hate symbols, revenge porn and the glorification of violence, and is suppressing bots that mass-tweet to game trending topics. Reddit is seeking to rid the internet forum of content that incites violence. And tech companies have teamed with G7 countries to block extremist Islamist content.
But the platforms face challenges in defusing the controversies around content. To maximise user numbers and time on site, Facebook’s algorithms are coded to send people content that inspires ‘comments’, ‘likes’ and ‘shares’ with friends. Users end up fragmented into like-minded clusters in which agreeable – and fake – news spreads easily. On top of that, news stories that are objective to some are biased to others. Facebook, for instance, was accused by former staff of suppressing news stories that would please conservatives on the influential ‘trending’ sidebar on user home pages, an allegation a Facebook investigation disputed.
An overarching challenge for the platforms is how to balance the trade-off between controlling content and keeping their networks open to all to preserve free speech. When Facebook, Google or Twitter censor something, they often only provoke a backlash and stir debate about why they hold a power that belongs to governments. Twitter, for example, was criticised when it hobbled actor Rose McGowan’s account at a pivotal moment in the scandal surrounding Hollywood producer Harvey Weinstein, after she attacked men distancing themselves from Weinstein by tweeting: “You all knew”. To limit hate speech, Google has partnered with the controversial Southern Poverty Law Center, which condemns many groups and individuals on disputed rationales. Google is accused by left- and right-wing fringe political outlets of censorship by tampering with search results to suppress visits to their sites. The opaqueness of how Google derives its search results only inflames its opponents; Google says it keeps its search algorithms secret so people can’t manipulate them.
Even though limiting abuses around content remains problematic, US politicians are more focused on supervising political ads. The US$1.2 billion spent digitally on local, state and nationwide elections in the US in 2016 is too big an amount to ignore, especially when no one knows what people are seeing, whereas campaign ads in mainstream media are visible and regulated. US Republican and Democratic senators are pushing (via the Honest Ads Act) to end the exemption from laws governing advertising that online platforms have enjoyed since 2006. Back then, the Federal Election Commission left almost all political activity on the internet unregulated because the web was “unique” and “distinct”.
Ahead of any new law, tech companies are overhauling practices around political ads – Facebook has announced steps to boost “transparency” while Twitter will label political ads and say who paid for them. Many US lawmakers, though, are sceptical Facebook can properly screen its five million advertisers each month, a business that is largely handled via software.
While legislation on political ads stands a fair chance of being passed, the challenge for lawmakers on content remains that internet platforms are “unique” and “distinct”, as the US election body put it. They are not publishers or broadcasters, even though many people go to them for news. While Facebook CEO Mark Zuckerberg concedes misinformation on Facebook may have influenced the US election, the company argues it is not a media company, even with its News Feed, a stance that implicitly means Facebook deserves the CDA 230 protection. “We’re a tech company. We don’t hire journalists,” Facebook COO Sheryl Sandberg said recently. Twitter likewise forswears any ability to regulate content on such an open and real-time platform, though Snapchat, which operates the Discover publisher portal, says it’s a publisher.
The tech industry says CDA 230 is a needed protection for online services that host third-party content and for bloggers who host comments from readers. Without the exemption, sites would either forgo hosting user content or be forced to ensure it didn’t breach laws – a burden that would apply differently across the platforms. “Given the sheer size of user-generated websites …, it would be infeasible for online intermediaries to prevent objectionable content from cropping up on their site(s),” says US digital-rights group the Electronic Frontier Foundation. “Rather than face potential liability for their users' actions, most would likely not host any user content at all or would need to protect themselves by being actively engaged in censoring what we say, what we see, and what we do online.”
The traditional media derides the tech line as resting on too narrow a definition of a publisher or broadcaster – see WIRED’s “Memo to Facebook: How to tell if you are a media company”: Are you the country’s largest source of news? Do you commission content? Employ content moderators? Censor content? Use fact checkers? Does your CEO sort of admit to running a media company? Have you partnered with a media company to attract viewers? Yes, yes, yes, yes, yes, yes and yes, WIRED concludes.
The solution for US politicians would seem to be to impose content rules on the digital platforms that are forceful but less stringent than those governing traditional media. Germany’s new Network Enforcement Law is a portent of regulation to come – it is regarded as the toughest of the internet-content laws passed recently in more than 50 countries, by the count of The New York Times. Under the German law, effective from October 1, digital platforms face fines for hosting for more than 24 hours any content that “manifestly” violates the country’s Criminal Code, which bars incitement to hatred or crime.
In the US, a workable compromise on regulating content could take years to work out. With the public still enamoured of their favourite platforms, the tech companies will enjoy for a while yet the protections that flowed from that US court case 22 years ago. How soon those protections are watered down could come down to how well the tech giants police their platforms from here.
By Michael Collins, Investment Specialist
This material contains the opinions of the manager and such opinions are subject to change without notice. This material has been distributed for informational purposes only. Forecasts, estimates and certain information contained herein are based upon independent research and should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed.
 Electronic Frontier Foundation (a US lobby group). ‘CDA 230. The most important law protecting internet speech.’ eff.org/issues/cda230/legislative-history
 For perspective, a BuzzFeed News analysis found that top fake election news stories of the 2016 US election generated more total engagement on Facebook than the most-read valid election stories from 19 major news outlets combined. BuzzFeed News. ‘This analysis shows how viral fake election news stories outperformed real news on Facebook.’ 17 November 2016. buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook?utm_term=.oxOmz9909#.emYNlqqEq
 ProPublica. ‘Facebook enabled advertisers to reach ‘Jew haters’.’ 14 September 2017. propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters
 The debunked ‘Pizzagate’ allegations of 2016 about a Democratic Party paedophile ring even inspired a shooting at a pizza restaurant in Washington.
 See cover of The Economist dated 4 to 10 November 2017 that stated ‘Social media’s threat to democracy’ for but one example. economist.com/ap/printedition/2017-11-04
 The New Yorker. ‘The fake-news fallacy’ by Adrian Chen. 4 September 2017. newyorker.com/magazine/2017/09/04/the-fake-news-fallacy
 Facebook. ‘Improvements in protecting the integrity of activity on Facebook.’ 12 April 2017. facebook.com/notes/facebook-security/improvements-in-protecting-the-integrity-of-activity-on-facebook/10154323366590766/
 Facebook newsroom. ‘Working to stop misinformation and false news.’ 6 April 2017. newsroom.fb.com/news/2017/04/working-to-stop-misinformation-and-false-news/
 Search Engine Land. ‘Google’s Project Owl – a three-pronged attack on fake news & problematic content.’ 25 April 2017. searchengineland.com/googles-project-owl-attack-fake-news-273700
 WIRED. ‘Here are Twitter’s latest rules for fighting hate and abuse.’ 17 October 2017. wired.com/story/here-are-twitters-latest-rules-for-fighting-hate-and-abuse/
 Colin Crowell, vice president public policy at Twitter. ‘Our approach to bots & misinformation.’ 14 June 2017. blog.twitter.com/official/en_us/topics/company/2017/Our-Approach-Bots-Misinformation.html
 Reddit help page. ‘Do not post violent content.’ https://www.reddithelp.com/en/categories/rules-reporting/account-and-community-restrictions/do-not-post-violent-content
 Agence France-Presse. ‘G7, tech giants agree on plan to block jihadist content online.’ 20 October 2017. afp.com/en/news/205/g7-tech-giants-agree-plan-block-jihadist-content-online
 Gizmodo. ‘Former Facebook workers: We routinely suppressed conservative news.’ 10 May 2016. gizmodo.com.au/2016/05/former-facebook-workers-we-routinely-suppressed-conservative-news/
 The Wall Street Journal. ‘Facebook to revamp ‘trending topics’ feature to reduce bias risk.’ 23 July 2016. wsj.com/articles/facebook-shifts-trending-topics-feature-amid-bias-fears-1464051610
 The New York Times. ‘Rose McGowan’s Twitter account locked after posts about Weinstein.’ 12 October 2017. nytimes.com/2017/10/12/arts/rose-mcgowan-twitter-weinstein.html
 Borrell Associates (a US-based advertising research firm). ‘The final analysis. What happened to political advertising in 2016 (and forever).’ Free PDF of executive summary can be downloaded at borrellassociates.com/shop/the-final-analysis-political-advertising-in-2016-detail
 Federal Election Commission. ‘Rules and regulations.’ Federal Register. Vol 71, No 70. 12 April 2006. transition.fec.gov/law/cfr/ej_compilation/2006/notice_2006-8.pdf
 Facebook post. ‘Update on our advertising transparency and authenticity efforts.’ 27 October 2017. newsroom.fb.com/news/2017/10/update-on-our-advertising-transparency-and-authenticity-efforts/
 Twitter blog. ‘New transparency for ads on Twitter.’ 24 October 2017. blog.twitter.com/official/en_us/topics/product/2017/New-Transparency-For-Ads-on-Twitter.html
 Vanity Fair. HIVE. ‘Republican eviscerates Facebook executives: “Do you have a profile on me”’. 31 October 2017. vanityfair.com/news/2017/10/senator-kennedy-grills-facebook-lawyer-senate-intelligence-committee-hearing?mbid=nl_th_59f8e7b35f46de0ae14fce22&CNDID=49424385&spMailingID=12263801&spUserID=MjI2Njc4OTU2NzYxS0&spJobID=1280044640&spReportId=MTI4MDA0NDY0MAS2
 Zuckerberg said: “Calling that (allegation) crazy was dismissive and I regret it.” Facebook post. Mark Zuckerberg. 28 September 2017. facebook.com/zuck/posts/10104067130714241
 Axios. ‘Exclusive interview with Facebook’s Sheryl Sandberg.’ 12 October 2017. axios.com/exclusive-interview-facebook-sheryl-sandberg-2495538841.html
 Electronic Frontier Foundation. ‘Section 230 of the Communications Decency Act.’ eff.org/issues/cda230
 WIRED. ‘Memo to Facebook: How to tell if you are a media company.’ 12 October 2017. wired.com/story/memo-to-facebook-how-to-tell-if-youre-a-media-company/
 The New York Times. ‘Facebook faces a new world as offices rein in a wild web.’ 17 September 2017. nytimes.com/2017/09/17/technology/facebook-government-regulations.html?emc=edit_ca_20170919&nl=california-today&nlid=55638422&te=1
 The Atlantic. ‘Can Germany fix Facebook?’ 3 November 2017. theatlantic.com/international/archive/2017/11/germany-facebook/543258/?utm_source=nl-atlantic-daily-110217&silverid=MzYyMTYwMTE1ODM1S0