The Best of Both Worlds: Free Speech Is Easy When It Isn't Free
Unpacking the TikTok deal's free speech nightmare, social media's legal immunity, government coercion tactics, and the quest for consistent free speech standards.
This past month has brought a few schedule changes on my end, and some of them, unfortunately, make publishing on Monday impractical. I will try to publish on Wednesday mornings, maintaining the same cadence as during the summer: two “Roundup”-type articles per month and two more commentary-based pieces focused on security and democracy.
Last week, President Trump announced a TikTok deal that would finally comply with the law passed in April 2024. The best summary I have seen so far is from Consensus Drift.
Here is a part of the summary:
The proposed deal
The administration’s description of the current framework says TikTok’s U.S. recommendation system will be “retrained from the ground up—reviewed and analyzed under U.S. supervision” and “operated in the United States outside of ByteDance’s control.”
The U.S. buyers would “lease a copy of the algorithm” from ByteDance, then have Oracle oversee the re‑creation and security of a U.S. version. This U.S. version would be “fully inspected” and retrained using U.S. data that would not be shared outside of the United States.
What is the upshot of this structure?
Together, what all of this means is that, to certify ongoing compliance with PAFACA’s no‑cooperation rule while running a licensed and retrained algorithm, U.S. officials (through Oracle and the new TikTok’s auditors) will likely need to verify how that TikTok recommendation algorithm functions. That verification would be an oversight mechanism into the very system that ranks and amplifies TikTok users’ speech.
He goes on to recommend some guardrails to protect against what is clearly an opening for the federal government to monitor and moderate speech via algorithmic oversight.
Even if oversight is kept out of government hands, there is still a clear political lean among those who would be overseeing the algorithm, just as there is with social media ownership in general:
The addition to the TikTok ownership group of the Murdochs, whose media empire has been a bedrock of the conservative movement in the US, would further cement right-leaning owners atop the world’s largest social media properties.
Elon Musk, Trump’s largest financial backer in the 2024 presidential election, controls X, the parent company of Twitter. Meanwhile, Meta Platforms, the owner of Facebook and Instagram, is controlled by Mark Zuckerberg, who donated $1mn to Trump’s inauguration. (Financial Times Subscribers. Gift Article)
What is more bothersome than consolidated media control is that social media companies get to have the best of both worlds when it comes to speech and content moderation. On the one hand, they have Section 230 immunity for content posted on “their platform.” On the other hand, they receive First Amendment speech protection to curate our “personal” feeds as they see fit.
I can’t think of a better position to be in: the ability to moderate content with impunity while facing zero consequences for the content that is “published.” This is not how the rest of the publishing world works. The NYTimes was sued for publishing a “series of articles before the 2024 election” that were allegedly “aimed at hurting [President Trump’s] candidacy” and caused “enormous” damage to his “professional and occupational interests.” There, at least, a plausible avenue existed for the case to advance on the merits of the published content.1 No such thing happens to social media platforms.
The government also seems to get the best of both worlds: it can censor speech as long as the censorship is done in an accepted way. Its ability to coerce others into suppressing speech seems to follow a “do as I say, not as I do” approach, or, more cynically, “don’t get caught.” Here are some examples of what I mean:
National Rifle Association of America v. Vullo, decided May 30, 2024, was a unanimous Supreme Court decision in which a New York State official used her regulatory power to punish the NRA, after the Parkland school shooting, for its gun-rights advocacy. The Supreme Court said:
The takeaway is that the First Amendment prohibits government officials from wielding their power selectively to punish or suppress speech, directly or (as alleged here) through private intermediaries.
This follows what I think most would expect from a government speech-coercion case. But what about Murthy v. Missouri? That case was decided less than a month later, on June 26, 2024, by the exact same justices. It was about “jawboning,” where the federal government used various methods to pressure social media companies into removing information on COVID-19 that it deemed “misinformation.” While the Court found that the plaintiffs lacked standing, there were some troubling exchanges about the merits during oral arguments.
Justice Kavanaugh, who served in the George W. Bush White House, said government press aides “regularly call up the media and berate them.”
Kagan, who served in the Obama administration, put it this way: “Like Justice Kavanaugh, I’ve had some experience encouraging the press to suppress their own speech,” she said. “You just wrote a story that’s filled with factual errors. Here are the 10 reasons why you shouldn’t do that again. I mean, this happens literally thousands of times a day in the federal government.” (Gateway Journalism Review)
These quotes from the two justices are perplexing to me for several reasons, one of which is that these same justices ruled, in the very same term, that this exact behavior was inappropriate. And before you split hairs, yes, Justice Kagan, the 45th Solicitor General, could absolutely be described as a “government official” who “wielded [her] power to selectively punish speech.”
The Supreme Court’s inability to maintain even a modicum of consistent judicial philosophy mirrors that of many in the political arena. I started reading “The Argument” a few weeks ago. The Substack describes itself as a publication that “make[s] a positive, combative case for liberalism.” I’ve read a few of their pieces, like “A left-wing Trump isn’t the answer. This is.”, and found myself nodding along through most of it. Here is an excerpt that really resonated with me:
The political writer Ross Barkan recently argued at New York Magazine: “And beyond retribution itself, a Democratic president could simply implement progressive policy goals with far more ease in this new era.”
The idea is fundamentally misguided. But the thinking that leads there is not hard to understand. […]
As a matter of principle, I don’t particularly want to live in a country where presidential candidates vie to run the country as elected strongmen, with each party looking to exact revenge on the other.
Yet this same writer went on to publish another article a week later, titled “Am I a big fat hypocrite on speech?” I think this piece was an attempt to distinguish degrees of severity between the Biden Administration’s censorship and Trump’s cancellation of Jimmy Kimmel; the author thinks the former was acceptable but the latter was not. I’m not sure whether he is a hypocrite, because many people and organizations (see above) do the exact same thing he is doing. He is drawing free speech restraints around the speech he likes.
You can’t really get the best of both worlds when it comes to speech: you don’t get to defend only the stuff you like. The problem with stances like The Argument’s is that their starting point is a position on an issue, not a legal standard. How the author feels about COVID policies colors his view on speech, and if one were to extrapolate a legal standard from that view, the government would suddenly have the right to censor all types of speech.
So let’s try it. What would the standard be if we applied his criteria across the board? Two items in particular stood out to me: “bad information” and “public health emergencies” as reasons to support government censorship.
What if “public health emergencies” were the legal standard? Well, COVID-19 was declared a public health emergency on January 31, 2020. It ended May 11, 2023. Surely, an “emergency” that lasts for over three years cannot be a valid reason for government censorship? Nor, I hope, would it be acceptable for that censorship to last the entire time.
I think someone could make a reasonable argument that there are limited, valid reasons for restraining broad swaths of speech during a national emergency like September 11, when the government is trying to stabilize a crisis, but those restraints should be measured in hours, not years. Even then, it would be hard to craft a standard that wouldn’t provide carte blanche in other scenarios. Congress doesn’t do well clawing back power from the President or enforcing notification requirements.2 After all, we have 48 active “National Emergencies,” one of which dates back to the Carter Administration.
“Bad information” is another problematic standard. How is the information bad? Presumably, it means the information is false. If bad speech is false speech, then the legal standard would be something along the lines of: unprovable speech may be censored. Perhaps we quantify it: if a claim has a 49% chance of being bad/false/wrong, it can be censored. But who says it is false? Will it be the government that determines this? That may well be the case with the potential TikTok algorithm oversight committee. If so, I think the entire Democratic Caucus in both houses would be in trouble right now, because the government apparently believes the Dems are lying when they say they do not want to extend healthcare to illegal immigrants. Allowing those in power to determine truth is the wrong way to go.
But what if the information is just complicated? What if there is doubt, or a possibility that the censor could be wrong? Is the correct answer, not just from a liberal perspective but a practical one, to stifle debate? Shouldn’t there be more speech to determine whether there is a better answer? In a democracy, if the electorate doesn’t like your policy answer, you can’t get frustrated and shut off the debate; you need to do a better job of framing the problem. It may feel futile, but if people don’t trust you on vaccines, I don’t think they are going to trust you more when you start censoring them.
What if the stakes are high? The author goes on to say, “trying to save hundreds of thousands of lives is not the same as silencing Jimmy Kimmel.” But this trivializes political speech. Speech is the lifeblood of a democracy, and just because some speech has severe policy consequences doesn’t mean it is somehow more important speech that warrants a different threshold for censorship. While “lives saved” is a compelling standard, I’m curious whether we would be ok censoring speech around cheeseburgers, soda, alcohol, or motor vehicles, all of which carry equally high consequences for Americans.3
While I generally believe the legal standards we have in place are sufficient, I am open to some adjustments. I think privacy should be given more weight than free speech defenses in some very specific instances, for example, recorded executions or assassinations, like those of Daniel Pearl or Charlie Kirk. It’s admittedly a tough call; both were newsworthy events that shaped political discourse, but I would be ok with a court weighing the speech value of footage of the actual killing against the interests of the victims’ family members. No child should have to worry about stumbling across a video of her father being murdered. I think I could extend the same argument to people who want certain images removed from the internet. Those who make pornography, for example, particularly in non-professional settings, should have the right to have it removed, even if it was consensual at the time.
Do these additional restrictions make me a big fat hypocrite? Of course I don’t think so, and I never claimed to be a “free speech absolutist.” I’m arguing for agnostic legal standards that don’t hinge on the censor’s priors about a particular piece of content. The scenarios above aren’t content- or ideology-based. Privacy legal standards already exist, and a narrow application of a non-content-specific standard could conceivably pass strict scrutiny. That is a much different standard than “bad information.”
I am not a conservative because I think it is fine for traditions, institutions, and cultures to change. I don’t think we have to pick a point and say “we’ve peaked” and call it quits. I am not a leftist because I think individuals should not be subservient to the state, politically or economically. I am a liberal because I believe that individuals should have a choice in politics, culture, and the market, and that the levers of governmental power should amplify this choice.
The only way to ensure that liberalism survives is to rebuild our laws and institutions so that they reflect this politically and ideologically agnostic, choice-driven worldview, and that rebuilding begins with how we frame our views on classic liberal ideals, like free speech. There will always be guardrails around these ideals, but the guardrails need to be crafted carefully and applied evenly across the spectrum. If your ideas can’t survive debate, then it follows that you are going to have a hard time surviving democracy.
The case was tossed 4 days after it was filed, albeit without prejudice.
The War Powers notification requirements are a prime example.
I don’t think we need to live and die by the First Amendment in an absolutist sense. Anarchy is not liberalism. There are valid, legal restrictions on advertising, for example, and those are much different from executive branch coercion to censor differing viewpoints. There is also a tendency to conflate policy with speech. Vaccine mandates may be the best course of action, and while speech will be critical to the efficacy of such policies, they are not the same thing.