Can/Should the Internet Be Regulated?

Trolling. Phishing. Online scams. Dark web. Identity theft. Clickbait. Ransomware. Spyware. Adware. Malware. Cyberattacks. Zero-day attacks. Trojan horses. User data tracking, collection and selling. Algorithm amplification and manipulation. Election hacking. Deepfakes. Anonymity.

The growing misuse and outright lawlessness of the internet make the Wild West look like an unruly kindergarten class.

But what to do?

Historically, the law lags years, sometimes decades, behind new technologies. Congress passed the Communications Act of 1934 nearly 15 years after radio had established itself as a major force in American life, then took more than 60 years to overhaul that by-then ancient act. Even so, the word “internet” appears only 11 times in the resulting Telecommunications Act of 1996, primarily to define the term and to encourage its development. The Act contains a single reference to “electronic mail” and no mention of the “world wide web,” even though millions of Americans were already surfing the net.

Over the subsequent quarter century, Congress has approached regulating the internet with the reticence of a child faced with broccoli. For one thing, when it comes to the internet and social media (or SoMe, as the cool kids now apparently call it; no, I don’t know whether that’s pronounced “SEW-me” or “sum”) or even technology issues generally, most legislators have no idea what they’re talking about. This legislative ignorance reached its nadir on April 10, 2018, when then-Utah Senator Orrin Hatch asked Mark Zuckerberg: “So, how do you sustain a business model in which users don’t pay for your service?” An initially thunderstruck Zuckerberg replied with a smirk, “Senator, we run ads.”

This combination of legislative tech ignorance, mistrust of regulation, and the now too-big-to-fail social media business model thwarts industry and governmental action, both domestically and internationally.

But we’re getting ahead of ourselves.

Scapegoating Section 230

Internet regulatory challenges can be boiled down to four A-B-C-D categories:

  • Access: How easy it is for nefarious actors worldwide to get online and operate without vetting or oversight
  • Business: How algorithmic “engagement” is designed to generate and increase advertising revenue and profits for public companies that have become linchpins of the global economy
  • Content: How social media algorithms aggressively target, push and amplify content regardless of source or veracity, with only token moderation (a toy sketch of this engagement-first ranking follows the list)
  • Data: How most tech companies track, collect and monetize personal user data
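
To make the Business and Content categories concrete, here is a deliberately crude, purely hypothetical sketch of engagement-weighted ranking. The fields, weights and scoring below are invented for illustration and describe no actual platform; they simply show how a feed that optimizes for clicks and shares, while ignoring source credibility, will tend to surface the least trustworthy post first.

# Toy illustration only: a hypothetical engagement-weighted feed ranker.
# Nothing here reflects any real platform's code; the fields and weights
# are invented to show how "engagement" and veracity can pull apart.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int            # proxy for ad impressions served
    shares: int            # proxy for amplification
    source_verified: bool  # tracked here, but never enters the score

def engagement_score(post: Post) -> float:
    # Rank purely on predicted engagement; veracity is ignored.
    # Shares are weighted more heavily because they spread content further.
    return 1.0 * post.clicks + 5.0 * post.shares

def rank_feed(posts):
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Measured, sourced report", clicks=120, shares=4, source_verified=True),
        Post("Outrage-bait rumor", clicks=90, shares=60, source_verified=False),
    ])
    for p in feed:
        print(f"{engagement_score(p):6.1f}  verified={p.source_verified}  {p.text}")

Run it and the unverified rumor outranks the sourced report (390 points to 140), because amplification, not accuracy, drives the score. That, in miniature, is the Business and Content problem.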

As far as U.S. law is concerned, the First Amendment is not a major roadblock to internet content regulation. The U.S. government, for instance, restricts broadcast media speech via licenses and myriad regulations, such as the prohibition of “obscene, indecent and profane content from being broadcast on the radio or TV” and equal-time rules. Individual media companies and industries self-regulate (the movie business’s Hays Code and later MPAA ratings, for instance, and ESRB ratings for video games), and the invisible hand of the market tamps down remaining media excesses everywhere except the internet, where excess is a business model.

In the U.S., most internet regulatory effort is directed at a single aspect of just one of the four aforementioned A-B-C-D categories: social media content. And most federal legislative effort at regulating social media content is more a reactionary hammer than a customized scalpel, aimed at either eliminating or amending so-called Section 230.

“Section 230,” technically 47 U.S. Code § 230, is an add-on to the anti-pornography Communications Decency Act (CDA) of 1996, itself part of the Telecommunications Act of 1996 and written seven years before the launch of Friendster, the first modern social media site. Section 230 was designed primarily to indemnify ISPs that hosted legitimate medical and educational discussions of human physiology and function, distinguishing such material from plain old pornography, and to foster free expression in the new medium. For 25 years, Section 230 has shielded ISPs, content providers and social media services from liability for all manner of content.

Now, however, a bipartisan consensus seems to have formed that Section 230’s protections have reached a point of diminishing returns. Considering the law’s limited original intentions, and given the scope of problems we face from an unfettered internet, focusing solely on Section 230 is shortsighted.

U.S. Legislative Attempts

On April 27, 2021, the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing called “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds.” Testifying were representatives of Facebook, Twitter and YouTube, along with well-known social media critics. Conservatives and liberals alike found the representatives’ defenses of the social media status quo deficient, to say the least.

Some lawmakers call for eliminating Section 230 outright, without offering a replacement regulatory framework. Yet internet regulation faces hurdles far more formidable than legislative unwillingness, laziness or ignorance: defining “bad” content, deciding who makes and enforces those determinations, and weathering the inevitable charges of government censorship. And, of course, unlike traditional media, the internet does not recognize national borders.

Regardless of their scope, merits or flaws, piecemeal domestic regulatory attempts are mere fig leaves where far more comprehensive coverage is needed to contain the expanding online misbehavior that threatens our civil and political structures, if not our lives. We’ll explore the perhaps even more intractable international internet issues in an upcoming Weekly Riff post.