Confronting Disinformation Warfare

By John S. Ehrett*
April 18, 2017

The hotly contested 2016 presidential election was replete with misinformation, including widely shared news stories suggesting that Pope Francis had endorsed Donald Trump for president[1] and that a Washington, D.C. pizzeria was secretly a front for child traffickers.[2] “Fake news” has consequently become an increasingly prominent subject in American political debate.[3] The “fake news” controversy centers on the widespread dissemination of provably false news stories and their influence in shaping American voters’ perceptions of candidates and of the race as a whole.[4]

In light of this emerging problem, many have called for a more robust fact-checking regime. Facebook—the locus of much of the conversation about “fake news”—has, for its part, recently introduced a feature allowing users to see whether a news story’s accuracy is “disputed.”[5] But where potentially stronger measures are concerned, the conversation must expand to include more factors: there is no public consensus, for instance, about whether the government should involve itself in the “fake news” debate.[6] And in cases where foreign actors may be involved in spreading misinformation, should the federal government take independent measures to defend against such under-the-radar propaganda?

At first blush, First Amendment guarantees of speech and press freedom may seem to prevent government actors from playing a major role in this realm.[7] Unbeknownst to many Americans, however, existing law does allow for state regulation of foreign “propaganda” and its purveyors by way of the Foreign Agents Registration Act (FARA).[8] A heavy-handed application of that doctrine in the “fake news” context would certainly prove controversial.

Given these intersecting dynamics, this Essay examines the problem posed by “fake news” and propaganda[9] in light of the unique challenges of an increasingly digital media landscape. Much has already been written about the impact of online “fake news” on recent American political upheavals, but little analysis has contextualized this problem in view of existing constitutional and statutory doctrine.[10] This Essay aims to fill that gap, and ultimately concludes that deploying the existing anti-propaganda legal framework against “fake news” is likely to do more harm than good. First Amendment law and principles of cyber conflict both militate against state targeting of “fake news” content. A preferable approach would leverage existing diplomatic machinery against likely state facilitators of “fake news” in accordance with the foreign-affairs prerogatives of the executive branch.


Geopolitics and Online “Fake News”


Tracking down the origins of “fake news” stories has proven difficult. Some of the online “fake news” that proliferated in the 2016 election has been traced to Macedonian teenagers who apparently acted independently, seemingly interested more in reaping advertising dollars than in advancing political causes.[11] But more disquietingly from an American geopolitical standpoint, “fake news” dissemination has been widely associated with the government of the Russian Federation.[12] In a 2014 report, investigative journalists Peter Pomerantsev and Michael Weiss argued that “[s]ince at least 2008, Kremlin military and intelligence thinkers have been talking about information not in the familiar terms of ‘persuasion,’ ‘public diplomacy’ or even ‘propaganda,’ but in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze.”[13] According to Pomerantsev and Weiss, this information-warfare capacity is being deployed on an international scale, and other nations will follow suit:

Given the frequency and amount of Kremlin propaganda and how transnational it is—affecting events from the crisis in Syria to nuclear negotiations with Iran to the war in Ukraine—news organizations should establish “counter-propaganda editors” to pick apart what might be called all the news unfit to print. We stand before a deluge of disinformation—the Kremlin’s use of disinformation is, and will be increasingly, used by other states.[14]


While Pomerantsev and Weiss suggest that independent media outlets covering Russian politics are the institutions best suited to engage in anti-propaganda strategies, the question of appropriate responses to propagandistic foreign messaging and disinformation also implicates U.S. governmental interests.[15] Indeed, the American government has already taken steps to regulate content of this kind, and the Supreme Court has weighed in on the permissible extent of such regulation.


American Judicial Regulation of Foreign Propaganda


The Foreign Agents Registration Act mandates that any “agent of a foreign principal,” with some limited exceptions, must register as such prior to operating in the U.S.[16] Further, any communications with an intent to exert influence “with reference to formulating, adopting, or changing the domestic or foreign policies of the United States or with reference to the political or public interests, policies, or relations of a government of a foreign country or a foreign political party” must be identified as “propaganda.”[17] This would plausibly extend to purveyors of online “fake news” working under the direction or oversight of a foreign power.

FARA’s mandatory labeling requirement proved controversial, and the Supreme Court weighed in on the issue in the 1987 case Meese v. Keene.[18] Meese centered on a U.S. citizen who wished to screen several Canadian films that, under FARA, were classified as “foreign political propaganda.”[19] The citizen challenged the permissibility of that classification on First Amendment grounds, alleging that his prospective association with media classified as “propaganda” would damage his reputation.[20]

The Court ruled that FARA’s mandatory “propaganda” classification was not per se pejorative, explaining that “the term political propaganda includes misleading advocacy of that kind [casualty reports of enemy belligerents] . . . [but] also includes advocacy materials that are completely accurate and merit the closest attention and the highest respect.”[21] For its part, FARA’s official disclosure requirement was permissible because “[b]y compelling some disclosure of information and permitting more, the Act’s approach recognizes that the best remedy for misleading or inaccurate speech contained within materials subject to the Act is fair, truthful, and accurate speech.”[22]

Despite its apparent applicability to contemporary debates, Meese v. Keene cannot undergird an effort by governmental actors to challenge “fake news” without presenting three distinct problems: first, the problematic First Amendment implications of designating certain expressive content as propagandistic “fake news”; second, the problem of attribution in cyber conflict; and third, the problem of contextual indeterminacy.


1. The Problem of Ex Post Propaganda Designation


As noted at length in Justice Blackmun’s dissent in Meese, an immediate First Amendment objection can be made to a mandatory “propaganda” label that many will reflexively view as pejorative.[23] But even assuming the Meese majority’s logic withstands contemporary scrutiny, it ought not be stretched beyond the particular facts of the case.[24] The Meese Court’s First Amendment argument was predicated on the fact that the materials in question had already been identified as “propaganda” under FARA—that is, the Canadian government had already registered its film distribution office as a foreign agent whose product constituted “propaganda.”[25] Meese did not sanction the ex post classification of a questionable “news” item as “propaganda”: to apply such a classification retroactively to certain “suspect” speech would raise questions of content-based speech regulation.[26]

Furthermore, allowing executive-branch organs to make ex post designations of suspected “fake news” items as “propaganda” subject to FARA would likely open the door to partisan hijacking of the “propaganda” label. In a politically charged context, where the public at large lacks the investigative resources available to the executive branch, any ability to meaningfully challenge government labeling of “fake news” as “propaganda” would be severely limited. Where democratic integrity is concerned, opening this door would be a potentially dangerous step for a court to take: the political salience of the “fake news” issue risks turning the Meese precedent into a vehicle of electoral controversy.[27]


2. The Problem of Attribution in Cyber Conflict


Beyond the constitutional concerns, tracking down the precise origins of individual “fake news” reports may prove difficult for any investigator. This is a variation of the pervasive attribution problem in cyber conflict.[28] Put simply, the widespread availability of location-masking technologies and worldwide relaying systems can make it very difficult to determine where an Internet transmission originated.[29]

An even more fundamental problem also might exist: state-sponsored distributors of propagandistic “fake news” content may operate from within countries that are not the sponsors of their propaganda. A successful trace of a propagandistic “fake news” report to Lithuania, for example, does not prove that the propaganda purveyor acted at the behest of the Lithuanian government. For all the observer knows, the purveyor might be a Russian national who has crossed the border temporarily. Unless other clues exist that suggest a foreign power has facilitated the spread of disinformation, efforts to digitally trace back the origins of a propagandistic “fake news” story have limited utility.[30]
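The tracing difficulty described above can be illustrated with a toy sketch (all host and relay names here are hypothetical): an outside observer following a transmission backward sees only the final relay in a chain of location-masking hops, never the true sender.

```python
# Toy illustration of the attribution problem: a transmission routed through
# location-masking relays reveals only its last hop to an outside observer.
# All names are hypothetical; this models the concept, not any real network.

def observed_origin(relay_chain):
    """Return what a network observer can actually see: the final relay."""
    return relay_chain[-1]

# A hypothetical transmission: true sender, two intermediate relays, and an
# exit node located in a third country.
true_origin = "origin-host"  # the actual sender (hidden from the observer)
relay_chain = [true_origin, "relay-A", "relay-B", "exit-node-LT"]

print(observed_origin(relay_chain))  # prints "exit-node-LT", not "origin-host"
```

The exit node's apparent location says nothing reliable about the sender's identity or sponsor, which is precisely why a successful trace to a given country proves so little.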


3. The Problem of Contextual Indeterminacy


Compounding the issue, not all forms of Internet “fake news” are created equal. For example, attempts to classify individual news items as “fake” or “propagandistic” inevitably run into the problem posed by intentional satire. According to the Internet-coined “Poe’s Law,” “unless there are unmistakable cues that one is being ironic or sarcastic, many parodies are not only likely to be interpreted as earnest contributions, they will, in fact, be identical to sincere expressions of the view.”[31] News items that are factually untrue may simply be forms of satire, not “fake news” as the term is generally used. The many people and institutions who have reacted poorly to articles in the satirical newspaper The Onion illustrate that, absent clear statements of veracity, the lines between truth and fiction may not always be clear.[32]


A Constitutionally Sound Approach to Combating Propagandistic “Fake News”


The intersection of these three forces—First Amendment constitutional boundaries, varying degrees to which cyber conduct can be attributed to particular actors, and contextual indeterminacy—cuts strongly against governmental anti-propagandist intervention in the online “fake news” problem. Under conditions of persistent epistemic uncertainty—essentially, the fundamental inability in many cases to know exactly where a particular “fake news” item came from, and whether or not a foreign power is responsible for promulgating it—it is likely imprudent to legitimize a powerful labeling tool that could readily be turned against American citizens.

Notwithstanding the obstacles to directly targeted intervention, however, alternative angles for approaching this problem probably exist. For one, while item-specific attribution may be exceedingly difficult, this in no way rules out the possibility of engaging in pattern-based attribution.[33] Without needing to classify individual online news items as “propagandistic” or “fake,” executive-branch intelligence agencies could plausibly find certain states responsible for sponsoring the online dissemination of propagandistic “fake news” to affect U.S. political affairs.[34]
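Pattern-based attribution of this kind can be sketched in miniature (the indicator names below are hypothetical): rather than attributing any single item, an analyst counts indicators—shared hosting infrastructure, timing, recurring phrasing—that recur across many items, and treats only sufficiently frequent indicators as evidence of a coordinated campaign.

```python
# Minimal sketch of pattern-based attribution, assuming hypothetical
# indicator labels: recurring indicators across many items, not any one
# item's origin, form the basis for attributing a campaign.
from collections import Counter

def shared_indicators(items, threshold=3):
    """Return indicators appearing in at least `threshold` items."""
    counts = Counter(ind for item in items for ind in item["indicators"])
    return {ind for ind, n in counts.items() if n >= threshold}

# Hypothetical "fake news" items, each tagged with observed indicators.
items = [
    {"id": 1, "indicators": {"hosting-X", "template-Y"}},
    {"id": 2, "indicators": {"hosting-X", "timing-Z"}},
    {"id": 3, "indicators": {"hosting-X", "template-Y"}},
]

print(shared_indicators(items))  # prints {'hosting-X'}
```

No individual item is labeled “propaganda” here; only the aggregate pattern supports a finding about the campaign's sponsor, which is what distinguishes this approach from the ex post labeling the Essay criticizes.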

Accordingly, policymakers concerned about the proliferation of “fake news” could adopt some variation of the following prudential maxim: where credible evidence suggests that propagandistic “fake news” is the product of state-sponsored actors, diplomatic engagement with the allegedly sponsoring state should be the primary response. This type of diplomatic engagement could take a variety of forms, from bilateral discussions to UN complaints to economic sanctions.[35] But the bottom line is that this principle would sidestep any potential First Amendment violation. Diplomatic pressures may be aimed at facilitating the compliance of propaganda sponsor states with FARA’s mandatory labeling requirements, but no mandatory ex post labeling of content—or chilling effect on American oppositional speech not within FARA’s purview—need follow.




Ultimately, in contexts where the “fake news” debate takes on an international dimension, the government—consistent with its executive-branch prerogatives to manage issues of foreign affairs[36]—may reasonably push back against providers of state-sponsored disinformation. The exact avenues for such pushback, within the authority of the executive, may vary. That said, “fake news” reports will inevitably continue to circulate under any proposed regime, a necessary consequence of a free press.[37] Policymakers should resist the temptation to employ existing anti-propaganda legal tools as a means of combating online “fake news,” no matter how appealing those doctrines might seem. To do so risks pouring the gasoline of government intervention onto an already-blazing sociocultural fire.

* Student Fellow, Information Society Project; J.D. Candidate, Yale Law School. This piece was inspired by the “Weaponizing Information: Propaganda to Cyber Conflict” Conference, sponsored by the Center for Global Legal Challenges and the Information Society Project.

[1] See Hannah Ritchie, Read All About It: The Biggest Fake News Stories of 2016, CNBC (Dec. 30, 2016, 2:04 AM ET),

[2] Id.

[3] See, e.g., Krysten Crawford, Stanford Study Examines Fake News and the 2016 Presidential Election, Stan. U. (Jan. 18, 2017),

[4] As used in this essay, the term “fake news” refers to “completely fabricated information that has little or no intersection with real-world events.” See David Mikkelson, We Have a Bad News Problem, Not a Fake News Problem, Snopes (Nov. 17, 2016),

[5] See Mark Bergen, Facebook Rolls Out Tools to Curb Fake News After Uproar, Bloomberg Technology (Dec. 15, 2016, 3:00 PM ET),

[6] See Michael Barthel et al., Many Americans Believe Fake News Is Sowing Confusion, Pew Res. Ctr. (Dec. 15, 2016), (“Fully 45% [of Americans] say government, politicians and elected officials have a great deal of responsibility [to prevent fake news from gaining attention.]”).

[7] U.S. Const. amend. I.

[8] 22 U.S.C. §§ 611-621 (1938).

[9] Philosopher Jason Stanley has described such propaganda as that which “presents itself as an embodiment of cherished political ideals.” Jason Stanley, How Propaganda Works 81 (2015). The manipulative effect of propagandistic “fake news,” then, is found in the fact that such media presents itself in a way that tends to bolster existing partisan tendencies.

[10] Professor Michael Dorf has advanced some initial thoughts on the subject, though he stops short of suggesting any framework for response by governmental actors. See Michael Dorf, Fake News, Facebook, and Free Speech, Dorf on Law (Dec. 29, 2016, 7:00 AM),

[11] See Craig Silverman & Lawrence Alexander, How Teens in the Balkans Are Duping Trump Supporters with Fake News, BuzzFeed News (Nov. 3, 2016, 7:02 PM),

[12] See Craig Timberg, Russian Propaganda Effort Helped Spread “Fake News” During Election, Experts Say, Wash. Post (Nov. 24, 2016); Michael Weiss, Russia’s Long History of Messing with Americans’ Minds Before the DNC Hack, Daily Beast (July 26, 2016, 1:00 AM ET). But see Ben Norton & Glenn Greenwald, Washington Post Disgracefully Promotes a McCarthyite Blacklist from a New, Hidden, and Very Shady Group, The Intercept (Nov. 26, 2016, 1:17 PM) (disagreeing with this characterization).

[13] See Peter Pomerantsev & Michael Weiss, The Menace of Unreality: How the Kremlin Weaponizes Information, Culture, and Money 4, Inst. Modern Russ. (2015),

[14] Id. at 41.

[15] Id.

[16] 22 U.S.C. § 612(a).

[17] 22 U.S.C. § 611(o).

[18] 481 U.S. 465 (1987).

[19] Id. at 467.

[20] Id. at 468.

[21] Id. at 477.

[22] Id. at 481.

[23] Id. at 485-89.

[24] See Viereck v. United States, 318 U.S. 236, 243-44 (1943) (holding, in interpreting FARA, that although “Congress undoubtedly had a general purpose to regulate agents of foreign principals in the public interest by directing them to register and furnish such information as the Act prescribed, we cannot add to its provisions other requirements merely because we think they might more successfully have effectuated that purpose”). Furthermore, it bears mention that actions under FARA typically take the form not of ex post mandatory labeling but rather of prosecution. See Attorney Gen. v. Irish People, Inc., 684 F.2d 928, 956 (D.C. Cir. 1982); United States v. German-Am. Vocational League, 153 F.2d 860 (3d Cir. 1946).

Particularly notable, for this essay’s purpose, is Judge Bazelon’s concurrence in Irish People. See Irish People, 684 F.2d at 956 (Bazelon, J., concurring) (disagreeing with “the notion that the power of prosecution may be used selectively to manage the information put before the American people in debates over foreign policy. In this context, i.e., selectively labelling a newspaper as an organ of propaganda for a foreign agent, issues of constitutionally improper motivation surface disturbingly”).

[25] Meese, 481 U.S. at 469-70.

[26] Cf. Reed v. Town of Gilbert, 135 S. Ct. 2218, 2226 (2015) (“Content-based laws—those that target speech based on its communicative content—are presumptively unconstitutional and may be justified only if the government proves that they are narrowly tailored to serve compelling state interests.”).

[27] A possible worst-case scenario might involve the juxtaposition of factually erroneous government messaging against other communications labeled by the government as “fake news” or “propaganda.” This outcome is not necessarily farfetched: laws regulating U.S. political organs’ ability to directly transmit pro-United States media content to U.S. citizens have been severely cut back in recent years. See, e.g., John Hudson, U.S. Repeals Propaganda Ban, Spreads Government-Made News to Americans, Foreign Pol. (July 14, 2013, 7:06 PM), (“For decades, a so-called anti-propaganda law prevented the U.S. government’s mammoth broadcasting arm from delivering programming to American audiences. But on July 2, that came silently to an end with the implementation of a new reform passed in January.”).

[28] See, e.g., Jon R. Lindsay, Tipping the Scales: The Attribution Problem and the Feasibility of Deterrence Against Cyberattack, 1 J. Cyber Security 53 (2015); Thomas Rid & Ben Buchanan, Attributing Cyber Attacks, 38 J. Strategic Stud. 4 (2015); Nicholas Tsagourias, Cyber Attacks, Self-Defence and the Problem of Attribution, 17 J. Conflict & Security L. 229 (2012).

[29] Proxies and the Tor network are examples of such technologies. See, e.g., Brad Chacos, How (and Why) to Surf the Web in Secret, PC World (Nov. 7, 2012, 3:30 AM PT), (explaining how these tools are used).

[30] See Graeme Park & Mauno Pihelgas, Cyber Information Exchange—Collaboration for Attribution of Malicious Cyber Activity, in Mitigating Risks Arising from False-Flag and No-Flag Cyber Attacks 8, 8 (Mauno Pihelgas ed., 2015) (“It is not enough to just locate a source IP address (unless looking solely at active defence): the identity of the attackers must be determined, as well as the parties they were acting on behalf of must also be unmasked.”).

[31] Scott F. Aikin, Poe’s Law, Group Polarization, and the Epistemology of Online Religious Discourse 2 (Jan. 22, 2009) (unpublished manuscript),

[32] See, e.g., Kevin Fallon, Fooled by “The Onion”: 9 Most Embarrassing Fails, Daily Beast (Nov. 27, 2012, 5:55 PM ET),

[33] Cf. Robert M. Lee, The Problems with Seeking and Avoiding True Attribution to Cyber Attacks, SANS Digital Forensics & Incident Response Blog (Mar. 4, 2016), (“[I]dentifying patterns of activity . . . [is] a starting place in how we search the network for threats. Then tactical level threat intelligence analysts aren’t biased by true attribution but can use some element of attribution to learn from threats they've observed before while attempting to avoid cognitive biases.”).

[34] This could be accomplished by gathering widespread evidence of state-sponsored disinformation campaigns aimed at achieving a particular political outcome. Such disinformation campaigns, taking the form of “fake news” reports, would constitute propaganda that ought, under FARA, to have been registered as such. See Pomerantsev & Weiss, supra note 13, at 4.

[35] Executive-branch governmental actors additionally have access to diplomatic tools that might exert indirect pressure on state facilitators of propagandistic “fake news.” See, e.g., James J. Carafano et al., U.S. Comprehensive Strategy Toward Russia 15, Heritage Found. (2015) (“Use public diplomacy to counter anti-American and pro-Russian propaganda by the Russian government. U.S. efforts should include international broadcasting, a new Russian satellite channel, the Internet, social networking, print media, and revamped academic, student, and business exchange programs.”).

[36] See Saikrishna B. Prakash & Michael D. Ramsey, The Executive Power over Foreign Affairs, 111 Yale L.J. 231, 355 (2001) (“Although the executive power essentially meant the power to execute the law, we have demonstrated that the phrase also had a secondary, foreign affairs meaning. The power to represent the nation and its citizens in the international arena was a potent part of the executive power.”).

[37] See, e.g., Eugene Volokh, Fake News and the Law, from 1798 to Now, Wash. Post (Dec. 9, 2016),