Few, if any, corners of federal communications law draw as much attention as Section 230. Dubbed “the 26 words that created the internet” by a book of that title, the law holds that online platforms are generally not responsible for the content users post on their sites. In the words of the Electronic Frontier Foundation, the law “creates a broad protection that has allowed innovation and free speech online to flourish.”

The law is also nearly 25 years old and is frequently under fire from politicians and commentators on opposite sides of the political spectrum. Why does this unique statute engender such fevered debate?

David Touve, senior director at the University of Virginia Darden School of Business Batten Institute, recently answered five questions on Section 230, its impact and what a future iteration could look like.

What is “Section 230” and what does it mean for social media?

Section 230, or 47 U.S. Code § 230, is the corner of U.S. law that clarifies that internet companies are not liable for content posted by users, and that these companies can remove content they consider objectionable without losing this protection.

Importantly, companies can still be found liable for content that may involve certain criminal activity or intellectual property infringement. The law also places an obligation on these service providers to inform users that tools and services exist to filter content.

In practice, and perhaps far too bluntly stated, Section 230 is the law that most would agree establishes that social media companies like Twitter are not immediately responsible for the content — words, images, videos — you post to their service. Additionally, these companies can moderate and remove your posts across a range of circumstances.

Nearly all of the 54 titles of the U.S. Code contain a section numbered 230. The fact that “Section 230” now refers, in most conversations, to a single one of these sections stands as evidence of just how important and even controversial this law can be.

There seems to be widespread discussion that this element of the law, drafted in 1996, needs to be updated. Why?

I think the discussion regarding Section 230 is so widespread because there are multiple and quite different perspectives on the law and whether it needs to be updated. Not to sound like an armchair economist, but on the one hand, there are those who argue the law affords internet companies — and social media platforms, in particular — too much freedom when it comes to the moderation and removal of content posted to their platforms, resulting in bias. From this perspective, internet companies are moderating more speech than they should.

On the other hand, there are those who argue these companies are not sufficiently stringent in their moderation and removal of content, particularly comments, images, or videos considered hateful or truly offensive. From this point of view, internet companies are moderating less speech than they should.

Those two positions point in completely opposite directions as far as their motivation for any reform of Section 230 and their preferences for what reform might look like.

There are also those who argue that the properties managed by internet companies are akin to private property, rather than a truly public square. From this perspective, these companies should have the right to moderate content in accordance with any rules or guidelines in place for their online communities — no matter how difficult that task may be or how disappointing the results.

This third position sees no reason for reform. The law is working as it should, even if these companies continually struggle with how to best manage the conversations on their platforms.

How should we see the relationship between the events of recent weeks, during which significant numbers of social media accounts were suspended, and the move to reform Section 230?

In the context of the events of recent weeks, the debate over Section 230 is highly relevant, yet also overshadowed by what I view as more important issues. Lives were lost, and we experienced a transfer of power that few people would describe as truly “peaceful.”

But to the question at hand: internet services have made these sorts of decisions — removing content and even suspending accounts — every day for the past two and a half decades. During that time, the debate over Section 230, while heated at times, held a steady temperature.

The stakes and the temperature of the debate were raised to entirely new levels, however, when the social media account in question was that of the President of the United States.

Furthermore, the full scope of the technology stack impacted by Section 230 became clear, with all the big players making news: Apple, Google, Facebook, Stripe, Shopify, Amazon and even UVA alumni-founded Reddit, among others.

If the debate over Section 230 were a musical, the events of early January would be the end of the first act, just before intermission. The stakes have been raised, every cast member is on stage singing at the top of their lungs, and all of the prior musical themes merge in a crescendo that builds quickly and loudly until the highest of the high notes are hit.

And now, it is like we are waiting in the lobby during that intermission for the lights in the theater to flicker, so we can get back to our seats. What happens next?

What do you think a natural evolution of the law may look like?

My sense of the debate is that the only clear common ground on Section 230 reform is the call for companies to provide clear and visible policies for moderation and account suspension, and for the process through which these policies are enacted to be transparent and fair.

Even Jack Dorsey, co-founder and CEO of Twitter, highlighted during U.S. Senate testimony that much of the debate over Section 230 can be linked to the “in good faith” language of the law.

Additionally, while likely outside the bounds of Section 230, there appears to be shared interest in better disclosure regarding the algorithms used on these platforms for the promotion of content — in particular, apparent misinformation.

Beyond that common ground, I think it is very difficult to anticipate a specific evolution of the law here in the U.S. As described above, there is no clear consensus regarding either why the law should be changed or how the law would be changed.

Repealing the law in whole would, in essence, send us back in a time machine to the mid-1990s. We would again face the “moderator’s dilemma,” which the law that is now Section 230 was created to resolve: services would not be liable for content posted to their platforms, but that protection from liability could erode the moment they moderated or removed any content.

Trying to further refine, rather than revoke, the law also presents a nontrivial challenge, not just because the Senate is now split 50/50.

Trying to define explicitly in the law how and which content should or should not be moderated simply places Congress in the same awkward situation the various social media platforms are themselves struggling to navigate.

Not to mention, some would argue that it is not up to Congress to legislate the nuances of speech — that nuance is navigated through the courts. Remember, tucked into the language of the law is perhaps the most controversial phrase: “whether or not such material is constitutionally protected.”

In essence, a completely unmoderated social media landscape seems as unlikely an outcome as Congress passing a law that defines which content should or should not be moderated.

Do you think a rewritten Section 230 dramatically changes anything among the handful of hugely powerful internet companies? Is anything different at Facebook on the “day after?”

I think things would undeniably be different for Facebook after any revision of Section 230 took effect. Perhaps more importantly, any decision to rescind or rewrite the law would impact nearly every service and site accessible from within the United States, not just a handful of big-name internet companies.

“Interactive computer services” are defined broadly in the law and include everything from internet service providers like Comcast and Verizon, to search and social media companies like Facebook and Google, to that parenting blog with a comments section run by your neighbor down the street.

This incredibly broad scale of potential impact is exactly why I think the reality of Section 230 reform can be far more complicated than just about anyone could fully anticipate.

Additionally, the scale of this complexity expands further as soon as we consider that any company — large or small — on the internet inherently operates globally. Yet the laws governing internet companies are not consistent across country borders.

For example, it was not a formal change in U.S. law that made so many websites suddenly very interested in whether we liked cookies. Instead, these pop-up requests are largely the result of new privacy legislation outside the United States — the General Data Protection Regulation (GDPR) in the EU, in particular.

When the Telecommunications Act of 1996 was passed, less than 2 percent of the world’s population was online, and the majority of those early netizens — like me — dialed up and dialed in from the United States.

A quarter of a century later, the majority of people with access to the internet are located outside the U.S. Going forward, the U.S. becomes an ever smaller portion of that universe as the remaining 50 percent of the world’s population without access to the internet gains that access.

Thus, the less obvious takeaway from the debate over Section 230 is that it is just one of many debates and potential legal reforms taking place across an internet that is now fully global. The United States is no longer the center of this online universe.