
How the War on Disinformation Became a War on Truth

Americans did not encounter the term “disinformation” until the Cold War neared its end, and they might be surprised to learn that the word has Bolshevik origins. It derives from the Russian dezinformatsiya, which, in the 1920s, described false and distorted information from an enemy bent on undermining the Soviet people’s faith in the Party’s decisions. In the class-struggle paradigm, sowing doubt about officially approved wisdom was synonymous with disinformation.

Soviet journalists and party instructors studied “bourgeois” propaganda and supposed disinformation in order to combat it. Select party cadres also studied the methods of “bourgeois” disinformation, so as to turn them against capitalist enemies abroad. Thus, any engagement with disinformation involved recruitment to fight enemies—either by combating those enemies’ disinformation or by wielding it against them.


Abstract academic study of disinformation is impossible; the subject forces researchers into an agonistic mode. Their work product assumes use against an enemy, whether offensively or defensively. In these dual purposes lies the essence of the concept: researchers become not disinterested observers but political operatives. Moreover, combating disinformation tends to evolve into producing it.

The term “disinformation” took root in America along these lines but gained widespread use only after 2016, when Hillary Clinton’s unexpected loss to Donald Trump stunned progressives. As reports emerged that Russian hackers had targeted Democratic National Committee servers, a theory of Russian interference in Trump’s favor quickly took hold. While some Russian social-media activity was detected, the idea that it had swayed the election became a dominant narrative—promoted by partisans and the media.

Eight years later, with tech industry leaders shifting toward a conservative coalition, the arbiters of information have switched sides. If the war on disinformation—as we knew it—has ended, a postmortem is in order. The concept served the progressive Left well, first as a way to explain Trump’s 2016 victory, and then as a tool to suppress inconvenient stories. But it came at a steep cost: the war on disinformation ultimately became a war on truth.

In media studies, the concept of “affordances” holds that one’s environment or technology dictates a certain mode of operation. You can’t walk or climb on a lake, but you can swim in it. Nomen est omen: as the ship is named, so will she sail. Once the concept of disinformation arose in the West, it imposed its own affordances: its ideological power, its transformative spell. By design, the crackdown on disinformation was guided not by truth but by political approval.

Targeting foreign interference seemed, at first, like a legitimate end. But blaming social media for promoting hostile ideas became a handy tool in domestic political struggles. The focus shifted to so-called political extremists, conveniently located on the opposite side of the political spectrum from the progressive generals in the disinformation war. Fearing stricter regulation as retaliation for Trump’s rise (a threat that frequently became explicit), digital platforms created proxy mechanisms to combat disapproved messages, outsourcing the function of targeting to “fact-checking” agencies and disinformation researchers. Such groups appeared rapidly, signing lucrative contracts with social-media companies and winning plush sinecures at elite universities. (See “The Plot to Manage Democracy,” Autumn 2024.)

Illustrations by Dante Terzigni

Seductive was the power not just to distinguish right from wrong but to silence the wrong on a mass scale. This political capture was accompanied by a kind of mission creep: “combating disinformation” steadily broadened its scope, from targeting disfavored theories about Covid-19’s origins to enforcing consensus on vaccines and, eventually, to censoring dissent on issues like the binary nature of sex and the racial history of the United States. Predictably, the platforms’ self-imposed regulatory censorship evolved into outright suppression of individual users.

As demand drew many intelligent minds to the field, the concept of disinformation spawned nuanced sub-concepts. For example, if disinformation means the deliberate distortion of reality, misinformation refers to unknowingly false or distorted information. Misleading information, meanwhile, guides the public toward incorrect conclusions or actions. These variants, too, came into the censors’ sights.

Perhaps the apogee of information-control ambitions was the concept of malinformation—malevolent information that is essentially true but that might be harmful if shared publicly. This could include true personal stories about vaccine side effects, as they might discourage vaccination. Yet here the self-appointed information authorities committed an obvious overreach. Listing side effects is legally required, for example, in pharmaceutical commercials and on product labels. But disinformation experts, working alongside bureaucrats, platforms, and vaccine producers, decided that such transparency didn’t align with promoting vaccination.

Hiding vaccine side effects was an act of suppressing unsanctioned truth. The fight against disinformation fully revealed its inherently censorious nature. The project no longer sought to promote truth over falsehood; it sought to achieve specific social goals. In 2021, Katherine Maher, the former chief executive officer of the Wikimedia Foundation and now president and CEO of National Public Radio, captured this shift with a revealing statement: “In fact, our reverence for the truth might be a distraction that’s getting in the way of finding common ground and getting things done.”

Another key characteristic of the disinformation concept was its capacity to identify and target enemies. In the Soviet Union, people were conditioned to believe that “enemy voices”—such as the Voice of America and Deutsche Welle radio stations—equaled dezinformatsiya. Such framing works in reverse, too: disinformation must come from enemy voices. The notion conjures enemies and portrays them as existential threats.

For the Soviets, these voices came from foreign rivals across the border; for American disinformation experts, the enemies sat across the aisle. Claiming recently that disfavored Silicon Valley firms experienced “shadow debanking,” the tech entrepreneur Marc Andreessen offered a striking analogy: denying bank services to customers based on their political views resembles sanctions on enemy regimes like Iran, except that here “you are sanctioning American citizens—with no law, no due process, no appeal.” Platform censorship worked the same way. Curtailing disinformation among fellow citizens, rather than foreign enemies, assigned them the status of enemies and sowed discord and division. It did just what disinformation was alleged to do.

The direct involvement of the state in censorship, which would mean violating the First Amendment, remains unproven. What is undeniable is that digital platforms communicated not just with researchers and fact-checkers but also with public officials. Meta CEO Mark Zuckerberg has acknowledged pressure from the Biden administration over Covid content and regrets that “we were not more outspoken about it.” He mentioned FBI warnings about “potential Russian disinformation” in the Hunter Biden laptop story, admitting that “in retrospect, we shouldn’t have demoted the story.” He emphasized that, ultimately, it was Meta’s decision “whether or not to take content down.” Formally, no state censorship may have occurred, and the Supreme Court declined to reach the merits in Murthy v. Missouri (2024), a First Amendment challenge to Biden officials’ pressuring of social-media firms. (The Court sent the challenge back to the lower courts for lack of standing.) Nonetheless, the interactions with officials might be described as advisory censorship. And what happens if you don’t follow the advice?

Combating disinformation evolved into a massive machine. The January 6, 2021, Capitol riot triggered decisive measures. Major social-media platforms “deplatformed” President Trump, along with tens of thousands of his supporters. Twitter had developed an interface capable of monitoring 50 million tweets daily and invited disinformation experts to access it, a system revealed only after Elon Musk acquired the platform (since renamed X) and opened its records to the public in the Twitter Files.

Censorship affected a vast population of users. Polling from 2020 showed intense antipathy toward social-media moderation, with nine in ten Republicans saying it was likely that platforms intentionally censored political viewpoints they found objectionable. Unlike the post-journalistic legacy media, whose biased coverage frustrates broad, underrepresented social groups from, say, the American hinterland, social-media platforms reach each user individually. Banning a user or demoting his posts therefore lands as a personal affront.

Not all these bans and restrictions were politically motivated. But the perception that progressive elites, digital platforms, and the “deep state” were engaged in systemic suppression of heterodox politics created a massive backlash and opened new channels for dissent. The reach of the social-media moderation machine far exceeded the viewership of CNN and Fox News combined. Never before had so many Americans been individually restricted or banned from public venues for expressing unsanctioned views. This personalized voice suppression outraged millions and likely contributed to Trump’s win in 2024.

Accompanying the fight against unsanctioned disinformation was the production of officially sanctioned disinformation. Indeed, one might call the Hunter Biden laptop story the most efficient disinformation campaign in recent history, involving social media, legacy media, disinformation experts, former intelligence officials, and the eventual president. The campaign was a full-court press to discredit the New York Post article that (accurately, it turned out) described the contents of Biden’s laptop hard drive, acquired through ordinary journalistic means. It framed the story as the product of irremediably tainted sources or as already debunked. Subsequently, coverage of then-President Biden’s questionable mental condition regularly asked Americans to deny the plainly obvious. These disinformation campaigns came not from those usually accused of disinformation but from those who curated and sponsored the combating of it.

The war on disinformation breached the principles of free speech and democracy. But even by its own logic, it was a tactical failure that consistently backfired. Combating Covid “disinformation,” for instance, eroded trust in health-care officials and in government generally. Public confidence in the recommendations of public-health agencies has plummeted, as Kaiser Family Foundation surveys show. Vaccination rates have fallen. The suppression of conservative “disinformation” on gender issues only intensified opposition to transgender activism. The long-running fight against Trumpist “disinformation” culminated in Trump’s strongest electoral performance. It hardly needs pointing out that these campaigns undermined trust in the people waging them.

The practice of combating disinformation has discredited itself ideologically and failed practically. If the field is to survive, it may need to rebrand as “trust studies” or “truth studies.” Better not to engage with the term “disinformation” at all: it drags “researchers” into political struggle and the abuse of power. The short saga of battling disinformation has delivered this lesson: any attempt to curate “wrong speech” risks either producing outcomes more antidemocratic than the forces it aims to fight, or failing outright, backfiring and amplifying contrarian energy.

Meantime, the problem of digital speech abuse, online hate, and extremism persists, eroding social relationships. There is no silver-bullet solution, because the problem stems from the very design of the media environment. In the absence of solutions, though, some trends are worth watching.

First, the fact-checking and combating-disinformation industry will be dismantled or transformed. Digital platforms have already largely cut ties, and political pressure has now shifted to academia and NGOs. Some universities withdrew even before Trump’s victory; more will follow. NGOs and activists will naturally resist.

Dismantling the industry risks a shift toward prosecuting individuals. Cancel culture could devolve into a semblance of McCarthyism, this time from the opposite side of the political spectrum. If that happens, an ideological witch hunt will deepen polarization, and polarization will make the next power shift even more dramatic. As the political pendulum swings, the hunters may become the hunted.

Curiously, unlike combating disinformation, which is inherently politicized, objective fact-checking may find a market niche. The uncertainty of online information could drive demand for fact-checking of viral stories—if fact-checkers accept that they are content producers for readers, not a flagging service for censors.

Second, attempts to suppress digital speech will increase. Social-media platforms are convenient levers for such regulation: as profit-seeking corporations, they are vulnerable to political and state pressure. Political elites will use these levers to punish both platforms and users for noncompliance, as is already happening in Europe.

The U.S., of course, has a unique condition: the First Amendment protects speech from direct state interference. But digital platforms are controlled by their owners and managers, who can take a political side and implement it through a platform’s policies, as Mark Zuckerberg at Facebook and Elon Musk at X have done. Whatever their motives, control is possible, and so is its political slant. For now, X may be freer than Twitter was, but nothing, including the First Amendment, prevents platform oligarchs from adjusting the filters as they please or as the political climate dictates.

The principles of free-speech protection originated in the print era and often fail to meet digital challenges. The conflict between freedom of speech and the vast digital opportunities for abusing it will either call forth new regulatory principles or provoke censorship. The fight over freedom of speech is far from over.

Third, society displays a natural immune response to increasing online hate and polarization. It first surfaced as so-called news avoidance: the more disturbing the news becomes, the more people avoid particular stories or abandon news consumption altogether. A similar tendency can be predicted on social media: engagement avoidance.

It started with self-censorship: users refrained from expressing contrarian opinions, fearing cancel culture. This mood seems to be growing. People try to engage less, abstaining from comments, reposts, and even “likes.” Many have learned that algorithms watch their behavior, exposing them each time they click. In times of political instability, and given the psychological hazards of online engagement, more people will abstain from any social-media activity except, perhaps, scrolling.

This choice is healthy for the individual but harmful to democracy. Those who feel psychological discomfort from engagement are likely to be moderates. By withdrawing, they leave the arena to radicals, contributing to polarization. Another outcome: reduced engagement will hurt social-media platforms’ profits, while the growing online rage of the users who remain active will invite harsher state regulation. Platforms will somehow have to confront this double threat; it is unclear how, or whether it is solvable at all.

Fourth, the growing abuse of digital speech, combined with rising state and corporate control over online behavior, will lead to further digital balkanization. The once-unified digital space will disintegrate along the lines of political preferences, national regulations, demographic strata, and technologies of access.

This is already evident in the split between Bluesky and X/Twitter, and between American and European social-media regulation. Paradoxically, after electronic and digital media made the world global, the problem of digital abuse now seems to be pushing globalization back toward national stages, suspiciously coinciding with a similar deglobalization in international politics.

For now, the most viable (perhaps the only viable) method of addressing digital speech abuse remains media literacy. After two decades of social media, people have learned its risks and pitfalls. Most are aware by now of the low credibility of online information and of the risks of manipulation, whether commercial, political, or criminal. This awareness increases receptiveness to media-literacy programs, which strengthen individual and social immunity and offer the best defense against fakes, hatred, and polarization. So far, media literacy is the only way to improve the digital public sphere without compromising democratic principles.


