How to fix Big Tech the old-fashioned way

April 3, 2024

When it comes to tech companies, lawmakers often seem to be talking about several things at once. Sometimes they are angry that tech companies do too little to protect their users and want to hold the companies accountable for harm, especially harm to children. At other times, they complain that tech companies do too much, unfairly banning certain users and content from social media.

These two issues – liability and deplatforming – are often treated as separate, but they are closely linked.

In January, social media executives appeared before the Senate Judiciary Committee during hearings on the harm the platforms have done to children. Tech companies currently enjoy broad immunity from liability for content posted on their platforms thanks to a provision in federal law known as Section 230. Senators from both parties have said they want to repeal Section 230, and during the hearing they castigated the executives, demanding to know why the companies should not be held financially liable for the injuries they cause and pressing them to do more to protect children from harmful messages and algorithmically curated content.

If Section 230 were repealed and platforms could be sued, they would invariably remove content that harms children to avoid future lawsuits. In other words, they would deplatform harmful content and possibly the people posting it.

But while many elected officials may welcome such actions, others are moving in the opposite direction. For example, state lawmakers in Texas and Florida have passed laws that would severely limit social media platforms’ ability to deplatform individuals and content. The platforms have challenged the laws in the Supreme Court, and during an oral argument in February, the justices appeared to struggle with whether states can prevent deplatforming.

This tension between content moderation and deplatforming also affects the people who run social media sites. Elon Musk, for example, was skeptical of deplatforming and content moderation before he bought Twitter (now X), yet after taking over he was himself accused of deplatforming users on the left, prompting some of them to decamp to a competing app, Mastodon.

These different actions raise a big question, perhaps one of the central riddles in the age of social media: how should we think about deplatforming? Should technology platforms allow all content and all users to post, in accordance with the principles of equal access? Or should they restrict content and users that are harmful to children, other users or society?

This question may seem new, but it is actually very old. For hundreds of years, American law has grappled with the issue of whether, when, and how to deplatform individuals and content. Not on social media platforms, which obviously didn’t exist, but at public utilities and other infrastructure companies.

Laws in these areas provide important insights into how social media and other technology platforms should be regulated. The first step is to start thinking of technology a little differently, less as a product or service and more as a utility.

Companies that provide infrastructure or utility-like services have long been treated differently from businesses that sell ordinary goods. In the 18th and 19th centuries, ferries, mills, blacksmiths, inns and wharves belonged to this category. With industrialization, successive generations of Americans added the telegraph, telephone, railroads, pipelines, airlines, and other businesses, mostly in the transportation, communications, and energy sectors. In a new book, my co-authors Morgan Ricks, Shelley Welton, and Lev Menand and I call this category of companies NPUs: networks, platforms, and utilities.

NPU companies form the backbone of modern commerce, communication and social connection – just like today’s technology platforms. They also tend to end up with monopoly or oligopoly power, much like our modern technology giants. As a result, courts, state legislatures, and Congress have throughout history imposed a distinctive set of regulations on these companies. To ensure access to these important services, NPUs typically had to accept all comers, including even their competitors, on non-discriminatory terms.

Some old cases provide instructive examples. In 1881, just five years after Alexander Graham Bell invented the telephone, a transportation company in Louisville, Kentucky, wanted telephone service so customers could reach it. But the phone company refused to serve it. Why? Because the telephone company owned a competing carrier. The transportation company sued, and a Kentucky court ruled that the phone company was “obligated to serve the general public … on reasonable terms and with impartiality.” Over time, policymakers went further and barred many NPUs from owning other businesses, preventing them from leveraging power in one sector into other lines of business.

But that obligation to serve all users was never absolute. NPUs were also allowed to exclude some customers, as long as the exclusion was reasonable. For example, in 1839 the New Hampshire Supreme Court declared that innkeepers and common carriers “are not bound to receive every one. The character of the applicant, or his condition at the time, may be a good reason for his exclusion.” A passenger who attempted to “commit an attack” or “damage the business of the owners” could be excluded. In another case, a court allowed a telephone company to deny service to a user who repeatedly used a shared phone line to disrupt the calls of others.

These and other examples of deplatforming across a wide range of networks, platforms, and utilities in American history offer lessons for thinking about today’s social media platforms. Reasonable exclusions from NPUs generally fit into three categories. The first was ensuring the quality of service. A railroad didn’t have to take a passenger who refused to pay for a ticket or someone who wanted to stop the train. Second, American tradition generally allowed exclusion to protect other users or society itself from harm. A steamboat did not have to allow a known thief to come aboard. A streetcar had an obligation to exclude a rider if it was reasonably foreseeable that he would injure another person. Current law prohibits carrying weapons on planes, and airlines can ban passengers who have been violent in the past. Finally, and most controversially, the American tradition allowed a limited degree of exclusion based on social norms. In the Jim Crow era, this meant unacceptable and discriminatory exclusions based on race. But exclusions based on social norms sometimes have more reasonable justifications: broadcasters, for example, are subject to rules restricting the airing of indecent programming.

What history shows is that deplatforming is an endemic problem for any network, platform or utility – it’s not a challenge unique to social media or even technology platforms. The reason is simple: for any network, platform or utility company, it is unworkable to serve literally everyone, without exception, because there will always be some bad actors.

At the same time, giving the owners of essential utilities the ability to exclude whoever or whatever they want is problematic. Should one person have the power to exclude individuals from services essential to modern commerce and social life? Should a corporation whose legal duties run to its shareholders, not to the public good, have a free hand even when increasing its profits hurts people?

Historically, Americans did not leave such decisions solely to self-regulation by those who controlled the platforms. In the 19th century, judges imposed a duty to serve on NPU companies while also allowing reasonable exceptions. If a platform violated the duty to serve – or if a user was injured or harmed – the user could file a lawsuit, exposing the platform to liability. That liability, in turn, created an incentive for platforms to exclude those likely to harm others.

In other words, liability and deplatforming can work together to strike a balance between the duty to serve and the need to protect users from harm.

This reasoning suggests that it would make sense to remove the Section 230 liability shield from social media platforms and allow individuals to sue them for all kinds of damages and injuries. Lawsuits after the fact, if history is any guide, would likely push the platforms to change their behavior ahead of time. Over time, a case-by-case process would yield a set of stable rules about what types of behavior are allowed.

At the same time, lawsuits have serious disadvantages. Unlike the relatively rare deplatforming decisions involving telephones or railcars, technology platforms must deal with millions of users and many millions more posts. The 19th-century approach also puts courts and judges at the center of the action, which could produce a patchwork of idiosyncratic rules across the country.

Another alternative – one better suited to that scale – would be for our elected representatives in Congress to write rules that govern technology platforms as utilities, including rules on deplatforming and content moderation. These could cover procedural rights and responsibilities and could include some substantive rules for extreme cases, such as incitement to violence and indecency.

This approach also brings challenges. Debating and adopting such rules would be difficult. In our polarized environment, people have different views on what should be deplatformed. For social media platforms, the First Amendment will also appropriately limit what is possible. But the First Amendment is not absolute even today, and it is unclear how the Supreme Court will apply it to the social media regulatory cases coming up this term.

Whatever path lies ahead, history shows that reasonable deplatforming is not only common but inevitable. Without it, platforms can become socially destructive rather than beneficial. Accepting that reality is the first step in figuring out how to balance the desire for platforms to serve the public with the need for platforms to protect the public.
