
I Love Section 230. Got a Problem With That?

In a special bonus episode of “The Argument,” Jane Coaston defends the law that made the internet as we know it.

ross douthat

Hey “Argument” listeners, it’s Ross. We’ve got a very special bonus episode this week about Section 230, the internet law that’s back in the spotlight after Facebook and Twitter banned Donald Trump. Michelle and I will be with you tomorrow at the usual time. But for this episode, our colleague Jane Coaston will take it from here.

jane coaston

I’m Jane Coaston. And this is a special episode of “The Argument.” [THEME MUSIC] So just to get started, we are in the midst of quite a week in American democracy. In response to the events that took place when a number of pro-Trump forces attempted to stage an insurrection slash protest at the Capitol, Facebook, Twitter and a number of other websites have banned Trump. We’ve also seen Amazon ban the social media site Parler from its cloud services and the removal of roughly 70,000 QAnon-associated accounts from Twitter. So we’re seeing renewed calls to either eliminate Section 230 of the Communications Decency Act or amend its protections, with the argument that this is viewpoint discrimination and that these companies are being protected as they discriminate against the president.

I think that Section 230 is awesome and perfect and great and should not be changed. And I want to stand in front of it and protect it and hold it. I am libertarian. I’m a registered libertarian. So my perspective on Section 230 is that it’s a short, well-written piece of legislation that has been misinterpreted, I think, purposefully. My guests today don’t agree with me. Klon Kitchen is the director of tech policy at the Heritage Foundation. It’s a conservative think tank here in D.C. And at the other end of the seesaw is Danielle Citron, a law professor at the University of Virginia. She’s also the author of “Hate Crimes in Cyberspace.” Danielle, I’m interested to see what you think about the past couple of days with regard to the president’s accounts, or to people who are tied to this movement that helped kind of cause what happened at the Capitol.

danielle citron

So the purpose of Section 230 was to incentivize monitoring, right, to have good Samaritans block and filter offensive speech. And so in some respects, in many respects, what we’ve seen over the past week is Section 230 at its finest. The idea of incentivizing self-monitoring to address destructive activity online is how I would frame the removal of President Trump. For the last eight months, he had basically been a one-way route to disinformation. And so Section 230 provides a legal shield for the over- and under-filtering of speech, so that moderators can make mistakes and can remove destructive speech. And at least vis-à-vis Twitter’s response and its operationalizing of its content moderation policy, which keeps up public officials and figures and asks the question, is it in the public’s best interest? It became pretty damn clear it was no longer in the public’s interest to keep Trump up. In having this conversation, I think it’s pretty important to note that these are private actors. So the First Amendment doesn’t apply to them naturally, right? They’re not state actors. They’re not state agents. They’re not being jawboned and coerced in any meaningful way.

jane coaston

Right, you don’t have a fundamental right to a Twitter account.

danielle citron

Right, it’s a privilege. And it’s a privilege that’s assessed by their content moderation practices and policies, in the way that Cox and Wyden wanted when they conceived of Section 230 of the Communications Decency Act in 1996.

jane coaston

It was then-Representative Chris Cox and then-Representative, now Senator, Ron Wyden who wrote Section 230. And it’s interesting because the rest of the Communications Decency Act was struck down as being unconstitutional. But Section 230 was saved.

danielle citron

Right, so, you know, when you say, you can’t be on Twitter, you can be somewhere else. The president can have a press conference, right? I’m not too worried about his access to a speech platform. But as we move down the stack, and let’s say a cloud service provider like Amazon says to Parler, we’re not going to, you know — you have to work pretty hard to find another cloud service provider. And it’s going to take some time. And it’s expensive. And so my goal is, I hug Section 230, right, in some respects. But I want to reform it. I want to keep it, but I want to condition it on reasonable content moderation practices. And I think what’s reasonable further down the stack, where there’s more power, is going to be different from what’s reasonable at the top of the stack.

jane coaston

Klon, I want to get your thoughts on this. Has anything that’s happened in the last couple of days changed your mind on 230 in any way?

klon kitchen

No, this doesn’t change my mind on 230. And in one sense, I’m going to agree with what’s already been said. So, for example, with Amazon making the decision it did regarding Parler. Parler is making its brand the idea that it’s going to be a moderation-less social media network. Well, when you choose that, a couple of things happen. Number one, you tend to be a gross place on the internet. Because when there’s no moderation, lots of terrible things show up on your platform. And because you’re not getting rid of it, it kind of metastasizes and kind of takes over everything. But then, two, you begin exposing yourself to existential liabilities. And that’s what actually happened here recently. The actions that Twitter and others have taken are specifically in response not necessarily to the political content, to what the president or anyone else was saying, but to the follow-on interpretations and actions that some would-be violent people were taking on those networks. And because Parler was choosing not to address that violent content, that exposed Apple and Google and Amazon to potential liability concerns as well. You had a legitimately assessed violent center of gravity developing on apps like Parler. It wasn’t being addressed. And while Parler can make whatever decision it wants in terms of what content moderation policies it’s going to adopt, it has no right to drag Apple and Google and Amazon down with it. And so they made a completely coherent business decision: OK, look, if you’re going to do that, we’re not going to be a part of that. So that, to me, isn’t a market failure. That’s a market decision. And I think a lot of people confuse that. And the reason that gets confused — and this is where I’m going to kind of circle back and maybe make a slightly different point than what Danielle was making — is that the last couple of days aren’t happening in a vacuum. A lot of conservatives have just concluded that they’re not treated fairly online. And we can argue the individual merits of any one assertion about bias or mistreatment. But the reality is that polling demonstrates that distrust of these platforms, in terms of their political activities and how they treat people online, is overwhelming and bipartisan. Seventy percent of polled Americans say that these companies exercise too much political power. And that’s Republicans and Democrats. And so the reality is that these companies have absolutely failed to win and to keep the public trust, and in an especially heightened context like what we’re experiencing right now, that is a real problem. Because everything that happens, regardless of the fact pattern, gets interpreted through that lens. And that is a real challenge that we have not yet begun to deal with well.

jane coaston

Klon, I want to jump in there. Do you think in some ways that some of the elites on the right — and I know I’m using the term elites and I hate it too — but do you think that some people encourage that line of thinking? When you have Matt Gaetz essentially arguing that the Twitter terms of service should not be used as a replacement for the Constitution, a lot of people are making it seem as if getting banned from Twitter, which is essentially a no shirt, no shoes, no service type issue, is a First Amendment issue. And they’re doing so for political gain, for financial gain. It just seems to be the kind of encouraged misunderstanding, the fanning of the flames of misunderstanding, that helped us get into this mess.

klon kitchen

Yeah, so I really just, out of kind of general respect, try not to judge people’s motives. I will simply say that the point you’re making is true. But there is a lot of confusion about where the First Amendment falls in this conversation, why some of these decisions are made, how they’re being applied. And it is undoubtedly true that there are some, frankly, on both the political right and the left, but certainly on the political right, who have seen political advantage in stoking not only confusion but large-scale frustration over these issues, and have sought to kind of wring the political utility out of that. And what we’re realizing now — and this shouldn’t be a surprise — is that you can’t constantly describe this industry as a monster and not eventually expect a mob with pitchforks to show up. And that’s what’s happened.

danielle citron

Klon, you were talking about how, you know, these companies — and I love your very clear framing that these are just market decisions these companies are making. And I am in wholehearted agreement. But under Section 230, the threat isn’t a legal incentive. They’re immune from liability.

jane coaston

Do you think it’s true that there’s a misunderstanding of Section 230 happening on the left, or is it more of a, “this is what we want Section 230 to be, we want the moderation that 230 permits”?

danielle citron

There’s always this question of what’s the problem. Conservatives think the problem is one thing. And liberals think the problem is another thing. And it’s like they’re not talking to each other at all.

jane coaston

I mean, these are private entities. I always use the example — because people get annoyed at it — I always use the example of the comments on Infowars. Infowars has a terms of service in which you’re not supposed to be threatening or racist in Infowars comments, as determined by Infowars. Breitbart has the same thing. They have a terms of service. You can go to the terms of service of any of these websites. So if Facebook was being anti-conservative, or if Twitter was being anti-conservative, or if any of these websites were being, quote, unquote, “biased,” that’s fine.

danielle citron

Absolutely.

jane coaston

There’s no neutrality. They don’t have to be neutral. They can edit as they want to. And so—

danielle citron

Absolutely.

jane coaston

— I think that the entire conversation about this, you know, you get into these horrific — there’s a “neutrality provision of 230” myth, which we can get into a little bit, about how people misunderstand this. But there isn’t one. If any of these websites that allow user content wanted to be biased, they could be.

klon kitchen

So on the point that’s being made right now in terms of these are private companies, they can do what they want, that’s absolutely true. There’s no denying that.

jane coaston

It’s like we are true free market conservatives, Klon.

klon kitchen

Well, OK, and that’s — I mean, look, I get it. But there are two points here, though. One, there’s still no denying the reality of these companies and their role and their power and capacity to shape economic, social, and political conversation in the United States. It’s how people consume information. It’s how people educate themselves. It’s how people share their views. Now, I understand that in the American context, that is an inherently messy process. But when people say, look, if I’m not treated fairly online, that has an impact on the news that I am able to find and the way in which I am able to participate in this portion of the public square. Now, that’s not an irrational point to be made. We can look back at them and say, well, yeah, but you kind of signed up for this. It’s a messy operation. You don’t have to have Twitter to participate fully in the United States, so on and so forth. And that’s all technically true. But that does nothing to lessen the real sense of aggrievement, rightly or wrongly, and the sense of kind of being pushed further and further out. And then all of that underlying angst is amplified by, I think, at least some very clear examples of at least some confused, if not biased, application of some of these rules. And so I’m simply articulating the social realities behind what we’re seeing here. And on the libertarian point, you know, I’ll simply say that it can be credibly argued that Section 230 is itself a government intervention in the market. It’s not the natural state of things. It was an action taken by the government to intervene in how the market was acting. And so it’s not an inherently libertarian position to identify Section 230 as this kind of sacrosanct thing. There are lots of good libertarian reasons to argue that. But I also think that it’s not incoherent for a libertarian to argue otherwise.

jane coaston

When we’re talking about Facebook, or about Section 230 as a whole, I think there are arguments that this is how we have conversations online. But a lot of the conversation we’re actually having right now is about Twitter. And Twitter, while being the president’s favorite social media outlet, is actually not used by most Americans.

klon kitchen

But it is the way media talks to itself.

jane coaston

It is the way media talks to itself, which I think makes it seem more important and then, I think in some ways, exaggerates its importance to ourselves. And so I think that there’s been a host of people saying, you know, media needs to get off Twitter, which I think is completely true. But we need to be clear here: this is a conversation in which a lot of people are not participating in the same internet that I might be participating in, that you might be participating in, that you, Danielle, might be participating in. As a libertarian thinking about Section 230: Section 230 ensures that speech made on these platforms, third-party user speech, is free, and that those platforms can moderate. We know about the court cases that led to this. We know that those protections are there so that these entities can act as private actors. That’s why I think, as a libertarian, it’s worth defending. But we invited you both on here because you both disagree with me in many ways when I say that Section 230 is a perfect shining diamond of legal genius. So I’d like to actually start with you, Klon, about reforming Section 230, which is a surprisingly short piece of legislation. It basically just says that if you have user content, if you have third-party content, you are not liable for that content. And you can edit, moderate, or delete that content, or the users who post it, however you see fit. So I’m interested to see from both of you what you think a reform to that might look like.

klon kitchen

Yeah, so I can give a couple of things. One of the big issues that needs to be addressed: we need to clarify the line between acceptable editing and labeling, on the one hand, and becoming a publisher who no longer enjoys the protections of Section 230, right? That’s really what a lot of this comes down to. Because the reality is that, you know, some labels are simply that. They’re just labels. And, you know, it actually may be helpful to a user to see that label on a piece of content as they consume it. There are other labels, and there are other contexts, in which that label clearly has an influence on how that content is understood in terms of its relative truthfulness.

jane coaston

Hang on, hang on. OK, when we’re thinking about labeling this content and deciding, you know, what editing looks like, by whose measure are we talking about going too far? Because I think the piece of this that actually really gets me is that we each have our own interpretations. I always used to joke that no one knows what general knowledge is, because what I think is general knowledge — just assuming everyone has all these thoughts on the Battle of Stalingrad, when they don’t have those thoughts. This is a battle of perceptions. There is no objective understanding of whether Twitter or Facebook or Infowars.com or Disqus, which runs comments for a lot of these websites, has gone too far. It’s not a neutral understanding, based on the voice of God, of where the editing goes too far here.

klon kitchen

Well, I mean, that’s true. But let’s also understand that Section 230 wasn’t handed down by the hand of God. It was handed down by government. And so the government is a legitimate stakeholder in this conversation and has the ability and, I think, the responsibility to kind of speak into, “OK, here’s this benefit. We’re going to more clearly define the expectations for receiving that benefit.” I mean, it’s very important that people understand that Section 230 immunity is a privilege. Now people will futz with that word and they’ll want to argue about that. But at the end of the day, it is a privilege. And it’s earned by complying with conditions. And if an entity chooses to comply with those conditions, then by all means, enjoy this. But at the same time, if they choose not to, that doesn’t mean that they’re being denied a right or being punished. It means they’re making a choice. And so when I talk about labels and how that needs to be better considered, that’s the government as a legitimate stakeholder coming into the conversation and saying, “We are happy to extend this. We’re going to think more clearly and more explicitly about what those conditions are.”

jane coaston

Danielle, I see you nodding a little bit. This is a very visual medium, podcasting. But can you jump in here?

danielle citron

So I think the problem is that we under-filter speech and we don’t condition the liability on responsible practices.

jane coaston

Again, though: responsible as defined by whom?

danielle citron

Well, I would say it should be reasonable content moderation practices in the face of clear illegality that causes serious harm. And that’s my proposal for how we would amend the part of Section 230 that has to do with under-filtering speech, which is Section 230(c)(1). But where Klon, I think, is focused is on the over-filtering of speech, which you’ve got to do in good faith. And in some sense, that license, that legal shield, is conditioned on acting in good faith. Now, we haven’t litigated it. Why? Because what’s the cause of action for taking down someone’s speech? There is none, right? It’s a private actor, not the government, as Jane importantly underscores when she talks about private parties. And I like your no shirt, no shoes, right? That’s what terms of service are: you want to play in our sandbox, here are our rules. And so I think where we’re focused, Klon, are two different problems. I think the over-filtering of speech is something we can interrogate to figure out what is good faith. And you’re saying, maybe it’s not good faith to have what I would call counter speech and you’re calling a label. Do you know what I’m saying? Because maybe some labels are not counter speech; they really are shaping the message. In some ways, that makes the platforms content creators. And so it takes them out of 230 altogether. Sometimes folks in 230 debates sort of talk past each other, because we’re not diagnosing the same problem. And so by my lights, Section 230 does a lot of great things. And it encourages moderation. But it also provides, what I would say, a safe haven for despicable actors.

jane coaston

And I think you and I — I know, with your work and my work, we’ve both experienced the work of despicable actors on the internet. I was joking darkly last week that I remember the first time I got photoshopped into a gas chamber, in about 2015, on Twitter. That was, well, what a time.

danielle citron

Right? But I mean despicable actors, not just individuals who you could sue —

jane coaston

Right.

danielle citron

— and who could be criminally prosecuted because of their words and actions, but I mean sites whose raison d’être is abuse, like non-consensual porn sites.

jane coaston

Yeah.

danielle citron

Those kinds of bad actors aren’t just, of course, in the business of non-consensual porn, but also deep fake sex videos, the illegal sale of guns. You talked about, Jane, the overbroad interpretation of Section 230, which I think has really run amok, so that we are not just immunizing speech, legally protected speech or otherwise, but gun sales. What does that have to do with speech and wanting to preserve a diverse set of voices, as the purpose section of the statute describes, right? The sale of guns ain’t that. And so where we ought to focus is on the overbroad interpretation of Section 230, on the provision that has so licensed these really bad actors — whether it’s non-consensual porn sites, deep fake sex video sites, the sort of 4chans and 8chans that enable so much abuse and do nothing about it. So I want to focus our energy there. Because I think that’s actually the real problem. I think we keep Section 230 and the kind of moderation it permits and encourages and provides a license for, as, Klon, you would say. It’s not a right. It’s a privilege. It’s a license by the government, this immunity. But I think you’ve got to earn it. And I think Cox and Wyden were right about that.

klon kitchen

Well, and this — at least on portions of this, there’s actually some good overlap between us, which shows you that when you take Section 230 out of the political context, there’s actually real room here for improvement. So in something I’ve written, I specifically call for what I call a bad Samaritan carveout. And I specifically say that we should remove liability protections for any interactive computer service that acts purposefully, with the conscious object of promoting, soliciting, or facilitating material or activity that the service provider knew or had reason to believe would violate federal criminal law. That’s an obvious thing. And that doesn’t have to be politically aligned. That’s simply about a good society and bringing 230 into compliance with its original intent. [MUSIC PLAYING]

jane coaston

So we were just talking about Klon’s bad Samaritan carveout. And one of the problems I think we have here is that all of these ideas depend on interpretation. What Representative Alexandria Ocasio-Cortez thinks is good faith or reasonable and what Senator Ted Cruz thinks is good faith or reasonable are going to differ widely. And so I think my concern here is that, for instance, I would prefer not to be photoshopped into gas chambers or have my life threatened on the internet. That’s just me. But I’m aware that there are many people who would find that to be less offensive, obviously, unless it violates federal law, but would find, say, sex workers on Instagram, or antifa groups talking on the internet, or John Brown gun groups talking on the internet, far more objectionable, and would say that it’s a good faith interpretation that we should curtail their speech, while the people yelling at me for being mixed race on the internet, their speech is fine. My concern here is that this seems to be a battle of interpretations. Obviously, federal law notwithstanding, Section 230 does not protect you if you want to post child pornography on the internet. But my concern here is about these interpretations. How do you both think about that when you’re thinking about your political opposite working on these reforms when they are working from their own interpretation of what reasonable is, or even what hate speech is?

klon kitchen

Yeah, so I would say you’re describing democracy. I mean, A.O.C. and Ted Cruz are part of an institution where that kind of stuff gets worked out and we come to, you know, whatever the acceptable compromise is. And you’re right, it’s messy. And it’s not clean. And it allows for the types of ambiguities that we’re all kind of pulling the string on right now in this conversation. At the same time, at the point where the federal government decides that it’s going to involve itself in this conversation, that is the inevitable reality that we have to deal with. Now, as companies, they get to decide what they want to do. I’m fine with a company deciding whatever rules it wants in terms of who it will and will not allow on its platform. I do think that there’s another related question that is very important: OK, well, are you going to enforce those rules fairly and evenly across the board? That’s what some of this gets into. But just generally, to your broader point about who decides what: when we’re talking about law and this provision that the government has provided to industry, well, the government is a key stakeholder in that. And so it certainly has a voice in that definition.

danielle citron

So reasonableness, as I’ve sort of described in my work and in my proposal for Section 230, would be interpreted by the courts. So what’s reasonable in the face of clear illegality that causes serious harm is something courts decide all the time, right? Reasonableness is imbued in tort law. It’s in the Fourth Amendment. It’s in statutory interpretation. Literally, reasonableness is the lifeblood of legal commitments. And we’re good at it. Courts are good at it. And so I think that if we’re going to amend Section 230 and condition it on reasonable content moderation practices in the face of clear illegality, you know, courts do that all the time. What’s reasonable, right? And what I’ve argued — so in my book, “Hate Crimes in Cyberspace,” I sort of went a middle ground and said, yeah, bad Samaritans that principally host cyberstalking and nonconsensual porn, right, they should not enjoy the immunity. And I sort of think, you know what? I wasn’t ambitious enough in 2014. Because I’m still frustrated that we don’t have enough of what I would call technological due process. We don’t have enough transparency and accountability. And at the core of reasonableness would be having transparent procedures, clear policies, mechanisms of robust accountability that we just don’t have. And I think the frustration, no matter where you are on the political spectrum, is that these really powerful companies are making decisions and we don’t understand their rules. They have really clear, very comprehensive rules inside. But they don’t tell us on the outside. They just say: hate speech, we ban bullying and stalking. And they don’t say much more. They should say much more.

jane coaston

What would that look like? I’m interested: if there were what you term technological due process, how would that change how people interact with one another on Reddit, or in the comments on a newspaper website, or on Twitter or Facebook?

danielle citron

So all these private companies, right, it’s not the government, would have to be really clear about what their rules of the road are, explaining very specifically, with detailed examples, what they mean by the term hate speech. Because it could mean anything from very narrow to very broad. What do you mean by stalking? What do you mean by threats? And then have processes of accountability. So when your speech is removed, you, A, are notified it’s removed. And you notify the public — you know, it says this has been removed. And you give that person a chance to appeal, some mechanism of appeal, so that they then understand why it was removed and what the terms are. OK, so it’s removed, but we’re going to put it back up, or we’re going to give you a chance, or we’re going to suspend you after three strikes. And I think some of this distrust is in large part because those rules, those processes of accountability, aren’t transparent.

klon kitchen

Well, and what’s interesting is that if you ask the companies why they don’t publish their kind of detailed rules of the road or community standards, one of the arguments they’ll make is, well, by doing that, then we give a roadmap to bad actors on how to avoid our tripwires. So they’ll say —

danielle citron

They figured it out already, Klon, right? Like that whole, “they’ll game us —”

klon kitchen

Right, yeah. When we sat down with Senator Kyl, who headed the review for Facebook at Covington & Burling, that was the argument: we can’t put it out there because, you know, they’re going to learn the rules of the road and they’ll just navigate through the seams.

jane coaston

If we are thinking about a positive example for these companies, is there anyone who you think is good at content moderation? What would it look like in an ideal scenario for you?

klon kitchen

Woo, no one is coming to mind [LAUGHING] as being particularly good. Look, let me start with what this isn’t, in my view. I don’t think that this is an outcome where I’m anticipating that conservatives are completely left alone and they can say whatever they want, wherever they want, whenever they want. That is not an expectation that I have. I do think — and Danielle was touching on this point — companies like Facebook and Twitter and everybody else, they do have to answer the very real dynamic that no one trusts them. This isn’t just the Heritage Foundation, right? No one trusts them. And that is an existential challenge for them as a business entity. And they’re either going to get with it and begin addressing the underlying, you know, lack of trust, or they are going to — independent of what the three of us think — get regulated into oblivion. That is the inevitable place where this goes. And so as someone who is a free marketer, and as someone who understands these companies, frankly, to be at the core of some national security issues that I focus on and other things, I don’t want that to happen. That is a bad outcome. But the status quo is unsustainable.

jane coaston

So without Section 230, a lot of our conversations about user-generated content are moot. But what does good regulation look like? What does fair regulation look like to you?

klon kitchen

So, one, define good faith more clearly. I think that’s essential. That’s fundamental. Two, strike “otherwise objectionable.” In the part of the text where it lists the types of things that companies are being incentivized to remove from their channels, it has this list: material a provider reasonably believes is obscene, lewd, lascivious, filthy, excessively violent, or harassing. And then it tacks on this last phrase, “or otherwise objectionable.” Now, in a reasonable context, that’s actually fine language. But it has been used and interpreted so broadly as to include essentially anything they want. And I think taking another hard look at that specific language, and possibly just striking it from the provision, is something worth debating. I also want to make it very clear that there should be no effect on antiterrorism, child sex abuse, or cyberstalking laws. So I don’t want to disincentivize these companies from taking that content down. I think we need to make sure that they know that they are completely protected in being very aggressive in taking terrorism, sex abuse, and that kind of stuff down. I talked previously about the bad Samaritan carveout. I think that’s huge. And then here’s one that not a lot of people talk about, but I think is really important. I don’t think that Section 230 protections should be made contingent on exceptional access or similar law enforcement cooperation. Exceptional access is this notion of, essentially, you know, government back doors into these programs so that, you know, terrorists and other bad guys can be identified and found. I think that there are sufficient laws already on the books for dealing with this. There are some very real technical limitations on that idea. And I think it just confuses the issue when you take Section 230 and start intermingling that conversation with exceptional access or even things like antitrust.

jane coaston

Danielle, what do you think? If we had the reforms that you’re thinking about, what would that look like as a positive?

danielle citron

Right, so just note, though, you know, Klon and I are really kind of focused on two really different things.

jane coaston

Right.

danielle citron

And I’m focused, by contrast, on the under-filtering of speech. Because in some respects, it is a free pass for the worst actors and doesn’t incentivize enough clarity of policies. So you asked, how would it operate? What would reasonableness give us, right, if we’re going to condition the immunity on reasonable content moderation practices in the face of clear illegality? I think it would give us policies that are coherent and clear in the face of clear illegality. I think it would give us opportunities to respond. And so there are sites, I think, that we might see disappear because they’re being rightfully sued for illegality that’s happening on their platforms, illegality that they’re making money off of. And I think something that’s really important that we haven’t acknowledged clearly enough is that these platforms are not free speech platforms. Let’s be clear, they’re advertising hubs. They’re making money off of our personal data. They’re not really giving a rip about democracy. So I want to make sure that the disciplining function of reasonableness would at least give us room for conversation with these platforms in a way that’s much more clear and transparent, and would make sure that if there’s enough indication that there’s illegality happening on, let’s say, the dating app Grindr, you can’t just say, I’m reasonable, even though I have no content moderators. Do you know what I’m saying? Like, Jane, it’s actually much more modest than — in some sense, you might say, well, Danielle, you should be ambitious. Have them all face liability, all of these sites, and too bad, so sad. And then the market, with legal incentives, will ensure they have to respond. But I think Section 230 has given us a lot.

jane coaston

Right, I’m actually the Goldilocks in between both of you, in that my concern is that over-filtering would throw out clearly bad faith actors that are doing this for financial gain, but it would also throw out sites that provide support. I keep going back to what we’ve seen with FOSTA and SESTA, the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act, which essentially poke a hole in Section 230, creating an exemption that means that website publishers would be responsible if third-party users are found to be posting ads for prostitution. But there are also websites that provide support to sex workers, or to people who are involved in that industry, who are like, we are adults, this is consensual activity, it might even be legal where we are, but we are not able to use this website because of these laws. That’s a concern for me. However, I also think that with the points you referenced, Danielle, and both of you, about the idea of what should be filtered out: the idea of the internet that I think we had in the early 1990s, before Section 230, was either that this is going to be the age of Aquarius and just this great bringer of wisdom, or that it’s the Wild West where if you want to find bomb manuals, you can — which you still can. Neither of those turned out to be true. I think we’re still building the airplane we’re flying in when it comes to online content. I still have to say, though, that I am concerned with how speech has been interpreted repeatedly by the state writ large, whether that’s the government or the courts. I remain concerned about what that would actually look like. There is no means by which you can say, you guys are cool, not you guys, that doesn’t at some point come down to someone making a decision about what content is permissible and what content is not permissible, and I think they would somehow find a way to make me mad.

klon kitchen

Well, I’m confident that’s true. And I’m confident they would find a way to make me mad. Danielle and I certainly are focusing on different aspects of Section 230. But the point is that Section 230 actually deals with both aspects, both the under-moderation and the over-moderation. And so the fact that we’re kind of emphasizing one aspect over the other in this conversation, one, means this is a more balanced conversation, but, two, demonstrates that those things aren’t inherently at cross purposes, right? These things are complementary. And they both need to be addressed. The thing that makes this even harder, though — and this is kind of bringing us back to the top of the conversation — is that there are some on the political right who are engaged in this conversation in an effort to kind of settle scores. And then there are some on the political left who are engaging in this conversation trying to engage in social engineering. And all the while, there are American people caught in the middle who just want to be able to get online and not be manipulated or abused. And I think that’s a fair expectation.

danielle citron

I think we should also think about privacy legislation. I know. Klon, don’t get scared of me. So much of this goes back to our data and the fact that we live in a land of collect it all, use it all. So long as you don’t lie to us, right, you can do it. And I just want to change that land that we live in, right? We need to constrain the collection of lots of our intimate data and restrain its use. The free-for-all isn’t serving us. And it’s allowing for a lot of, you know, business models based on manipulation, based on our data. And so I think any meaningful conversation about Section 230 should always be paired with a conversation about data: not only are we the sort of ordered, I think, Wild West in Section 230 land, but we’re also kind of the Wild West as to data protection. And that’s got to change.

jane coaston

Well, both of you, thank you so much for joining me.

danielle citron

Thank you.

klon kitchen

My pleasure.

jane coaston

The Argument is a production of The New York Times Opinion Section. The team includes Alison Bruzek, Vishakha Darbha, Elisa Gutierrez, Phoebe Lett, Isaac Jones, Paula Szuchman, Kate Sinclair, and Kathy Tu. Special thanks to Michelle Harris.

I can say that I would probably agree with both of you more than I would agree with a certain Missouri Senator who talks about this issue sometimes. But that’s neither here nor there.


In this special bonus episode, Jane Coaston makes her hosting debut on “The Argument” to discuss one of her favorite subjects: Section 230. The law, which the scholar Jeff Kosseff described as the “26 words that created the internet,” is part of the Communications Decency Act of 1996, and it protects websites from liability for content posted by third parties. It also allows internet companies to moderate third-party content on their sites.

The banning of President Trump from many social media platforms has led to renewed calls from both political parties to amend or revoke Section 230. Jane debates what changing the law might mean with Klon Kitchen, director of the Center for Technology Policy at the Heritage Foundation, and Danielle Keats Citron, a professor at the University of Virginia School of Law and author of “Hate Crimes in Cyberspace.”



Meet the Host

Jane Coaston is the host of “The Argument.” Previously, she was the senior politics reporter at Vox, with a focus on conservatism and the Republican Party. Her work has appeared on MSNBC, CNN and NPR and in National Review, The Washington Post, The Ringer and ESPN the Magazine, among others. She is also a former resident fellow at the University of Chicago’s Institute of Politics. She attended the University of Michigan, and lives in Washington, D.C.

