Disinformation: How will it manifest?

George Khoury
10 min read · Aug 7, 2020

Repetition is the message

Disinformation is effective only if it becomes instantiated in our order as a legitimate commentary on it, only if it becomes true enough. And the primary way that can happen is through the same process by which regular, open-sourced information comes to be included: repetition.

The purveyors of disinformation are well aware that it’s not necessarily about the contents of a message, nor so much its point of origin, as it is about its repetition. To be sure, the content can be so outrageous and debunkable that the more rational reader is dumbfounded by the message’s growing traction. “How can so many people ascribe truth to such claims…and who came up with this?” But we need only look to Hitler’s propaganda minister to understand its central law, or as the line attributed to Goebbels puts it: “Repeat a lie often enough and it becomes the truth.” Once a lie achieves omnipresence and becomes part of the recognizable familiar order, it comes closer to being perceived as a truth. However, what the repetition of disinformation engenders is not simply a single trail of a meme’s iterations, a single disinformed truth; rather, together with iterations of other disinformed messages, those trails come to constitute a whole other ecosystem of (dis)information that stands opposite to modern reason, or what’s been termed illiberalism.

Repetition’s Avatars

Identifying and sanctioning the source of disinformation is important, to be sure, but in answer to the question presented, it is secondary to repetition. If we can quickly find the originating source of a seemingly novel piece of disinformation, as was the case with the recent “Plandemic” video that went viral on YouTube, that is of course desirable. In that instance, it was immediately clear who the originating source was. But more often, what we will encounter is an iteration of a disinformed message that is already deep into a series of repetitions.[1] Thus it is the targeting of repetition that will be core to any strategy of curbing such messages, which I will discuss under the next question. Furthermore, even if we were to identify the originating source, it is highly likely that a meme would find its way into the larger channels of the information ecosystem through other sources, or avatars of the same source (and we haven’t even spoken of bots yet). So in the end, repetition is the most fundamental force driving disinformation’s manifestation.

Level 1 — Primary Sources

Going deeper, we are exposed to disinformation’s repetition through primary sources, but also indirectly or virally through secondary ones. As for direct exposure, the speed, reach, and cost-effectiveness of online news organizations make this source important to monitor. News.orgs are not beholden to a costly media business model with high overhead but can exist in multiple avatars while obfuscating their true identity. According to a Brookings Institution study in 2017, online newspapers are the most commonly reported source through which consumers are exposed to disinformation. Further, these bad-faith organizations often mirror each other’s messages, creating an echo-chamber effect that can unmoor consumers from the more objective open-sourced information sphere. But how do they repeat their disinforming memes specifically? Is it solely through their primary sites? First off, primary sources are not limited to .orgs or .coms; they include any and all portals established by that news.org. A Twitter handle belonging to Your News Wire,[2] for instance, which was among the first to break Pizzagate, or its Facebook page, Reddit identity, or Discord account, can all be considered primary sources. Thus, it is through their various avatars that they can repeat their disinformation or “reports.” Even within a single organization we can see how a message can be repeated into concrete existence, not to mention how this repetition can be, and often is, furthered by affiliate organizations with similar interests (a branching pattern the sketch below makes concrete). The genesis of a single disinforming meme, or an entire campaign, is invariably accomplished through this cascading or branching effect, where disinforming stories or messages are perpetually repeated across the varying avatars of primary sources.
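
To make that branching effect concrete, here is a minimal sketch, with invented outlet names and messages, of how one might flag near-duplicate repetitions of a claim across a source’s various avatars using word-shingle overlap. The threshold is arbitrary; real repetition tracking is far more involved:

```python
# Minimal sketch: flag near-duplicate "repetitions" of a message across
# hypothetical outlets using word-shingle Jaccard similarity.
# All outlet names and posts below are invented for illustration.

def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles for a piece of text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

posts = [
    ("outlet_main",      "Secret lab leak covered up by officials, insiders say"),
    ("outlet_twitter",   "Insiders say secret lab leak covered up by officials"),
    ("unrelated_source", "City council approves new budget for road repairs"),
]

seed_outlet, seed_text = posts[0]
seed = shingles(seed_text)
for outlet, text in posts[1:]:
    score = jaccard(seed, shingles(text))
    if score > 0.3:  # arbitrary threshold for this sketch
        print(f"{outlet} repeats {seed_outlet} (similarity {score:.2f})")
```

The point of the sketch is simply that lightly reworded copies of the same claim still collide when compared at the shingle level, which is one way a meme’s iterations can be traced back through their repetitions.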

So at one level, the most important way disinformation will manifest is through its repetition by primary sources established by particular organizations, most importantly, (fake) news.orgs. Looking closer, however, reveals a far more insidious repeater: the useful idiot.

Level 2 — Influencers

Zooming in here on the streams of repetition, the power of the opinion leader or “influencer” is key to secondary-source exposure. Politicians are exceptionally notorious for lending validity to this alternative truth if it means they can augment or maintain their power. Donald Trump is perhaps the most flagrant example, often repeating disinforming claims and championing their cause. Take the birther movement, for example: Trump may not have originated that strain of disinformation, but he certainly added to its perceived veracity by using his national influence as a credibility marker. Furthermore, Trump is increasingly being censored by social media, which is inadvertently pushing the platforms toward better oversight. I’ve used the term “super repeater” in the past to identify such a high-profile repeater. The chain reaction a super repeater sets off can be seen in how other aspiring influencers quickly swarm in to use the same disinformed position for a similar power grab, albeit at varying levels of office or prestige.

Thus, the most useful idiots are of course those with a large following. However, there are other important opinion leaders or influencers, such as credentialed experts, who may not have a massive following but, due to their perceived ethos or credibility, can be championed quickly in the illiberal disinformation sphere if they’re willing to add credence to such logic. Take the Michigan internist Dr. Sam Fawaz as a prime example of such a voice. Dr. Fawaz is currently waging a YouTube video campaign against the expert community’s handling of Covid-19. Although not fully disinformed, Dr. Fawaz does use tactics such as gaslighting to suggest the expert community is practicing pseudo-science, among other easily debunkable claims. But given his credentials and political platitudes, he’s increasingly attempting to position himself as the expert Truther, opposite figures like Dr. Fauci, who has increasingly been the target of disinformation campaigns. Dr. Fawaz has seen his viewership grow significantly in proportion to his inflammatory rhetoric and projected sense of urgency, a dynamic that creates the micro-celebrity fallacy and can often encourage such an agent along. However, the most powerful useful idiot is the collective of followers who repost or retweet false news. According to the largest study ever conducted on the topic, by data researchers out of MIT,[3] false news travels much faster than accurate information, adding a modern nuance to the oft-cited quote, “A lie travels around the globe while the truth is putting on its shoes.”
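
To see why even a modest edge in shareability matters, consider a toy branching-process sketch. The share probabilities and audience sizes below are invented for illustration; they are not the MIT study’s figures:

```python
# Toy branching-process sketch: a small per-viewer edge in share probability
# compounds across generations of resharing. The numbers are invented for
# illustration and are NOT the MIT study's estimates.
import random

def cascade_size(share_prob: float, viewers_per_share: int = 10,
                 generations: int = 6, seed: int = 42) -> int:
    """Simulate total shares over several generations of resharing."""
    rng = random.Random(seed)
    sharers, total = 1, 1
    for _ in range(generations):
        new_sharers = sum(
            1 for _ in range(sharers * viewers_per_share)
            if rng.random() < share_prob
        )
        total += new_sharers
        sharers = new_sharers
    return total

print("accurate story:", cascade_size(share_prob=0.08))
print("false story   :", cascade_size(share_prob=0.12))  # modest edge, large gap
```

With these made-up numbers, the accurate story’s cascade fizzles out while the false story’s compounds generation after generation, which is the intuition behind the lie outrunning the truth.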

Television also plays a vital role in the legitimating function of disinformation due to its perceived ethos or credibility. Fox is, of course, the most prominent station to have legitimated disinformation campaigns, whether through direct repetition (the birther movement), uncritical coverage (Pizzagate), and/or by priming (or reinforcing) its audience with generalized skepticism.[4] However, the One America News Network (OAN) is fast becoming a fixture of broadcast television news, and its growth relies on the old adage: any publicity is good publicity. The entire Sinclair media system is also hard at work reifying and recirculating disinformation, but at the local television news level, where a less scrutinized level of trust is formed between the viewer and those news teams.

Level 3 — Individual Users, artificial or otherwise

So far, we’ve identified the fundamental force that instantiates disinformation’s truthiness, i.e., repetition. We’ve also zoomed in to gleam the most important primary and secondary sources through which it will manifest in the coming election, i.e. news.orgs, television news, super repeaters, or useful idiots. Lastly, we’ve come to understand how all these components help to legitimate each other vis-à-vis repetition but also through their perceived ethos, thereby orchestrating an entire sphere of illiberalism, which attempts to at least posit itself as an alternative market of truth but is in actuality devoid of any objectivity, and acts instead as a lacuna of veiled ideological dogmatisms. However, I have yet to speak on a specific phenomenon that can work on all these levels, with seemingly infinite reach, all controlled by a single source.

The malicious social media bot, or what is referred to as a “Sybil,” is in essence a social media account controlled by a central agent. However, we mustn’t think of this agent as controlling the account’s every move; rather, the agent programs the automation that runs the account, a kind of once-removed master of puppets. Most bots are designed to mimic a certain demographic identity, and the sophistication of how this works is evolving. This “identity,” however, is in actuality an algorithm derived from data harvested from real users’ online behaviors.

Thus, bot collectives or armies have been assembled in attempts to sway opinion on virtually any issue, e.g., presidential elections, Brexit, the pandemic, world/national markets, etc., in increasingly sophisticated manners. The taxonomy of these strategies includes what is called astroturfing, wherein a bot army is deployed across a particular social media platform, programmed to voice support for a particular person or policy, giving real users the impression of some kind of grassroots (as opposed to astroturf, or fake grass) upswelling or movement among a certain demographic.[5] Smoke screening is another strategy, aimed at muddying a truthful message or policy. These highly coordinated attacks may repel certain demographics from engaging with important policy information and expert perspectives.[6] These are only two of many Sybil strategies that have been identified, but there are seemingly endless variations whose detection and dissolution are vital. What’s missing altogether right now is a coordinated response informed by what we do know about Sybil detection. Until very recently, social media platforms have been largely reluctant to integrate bot detection and dissolution techniques. Reasons vary, but they are mostly rooted in economic incentive, whether direct expenditure or the indirect cost to image (being seen as censoring). Private corporations are under no obligation to take proactive steps for anything other than increasing their company’s value.
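
As a minimal sketch of one detection signal, consider flagging clusters of accounts that post near-identical text within minutes of each other. The accounts, posts, and threshold below are hypothetical; production Sybil detection combines many more features (account age, follower graphs, posting cadence, and so on):

```python
# Minimal sketch of one astroturf signal: many accounts posting near-identical
# text within a short time window. Accounts, posts, and the threshold are
# hypothetical; real Sybil detection combines many more features.
from collections import defaultdict
from datetime import datetime

posts = [
    ("acct_001",  "2020-08-07 09:00", "Candidate X is the people's choice!"),
    ("acct_002",  "2020-08-07 09:01", "Candidate X is the people's choice!"),
    ("acct_003",  "2020-08-07 09:01", "candidate x is the peoples choice"),
    ("acct_real", "2020-08-07 14:30", "Anyone know a good taco place downtown?"),
]

def normalize(text: str) -> str:
    """Crude normalization so trivially edited copies collide."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ")

clusters = defaultdict(list)
for account, ts, text in posts:
    clusters[normalize(text)].append((account, datetime.fromisoformat(ts)))

for text, hits in clusters.items():
    if len(hits) >= 3:  # arbitrary coordination threshold for this sketch
        span = max(t for _, t in hits) - min(t for _, t in hits)
        print(f"possible astroturf: {len(hits)} accounts within {span}: {text!r}")
```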

However important bot detection is, we still have to reckon with a rival army that is by some counts even more destructive to the information ecosystem: our warm-blooded selves. In fact, real users were found to have spread more disinformation than bots on Twitter between 2006 and 2017.[7] The research into human psychology as it applies to why users spread disinformation is endlessly fascinating, and of course ongoing. Certain theories have been promising in this arena, such as those proffered by the sociologist Pierre Bourdieu, who saw such action as having largely to do with issues of identity, self-aggrandizement, and group affirmation. These ideas are echoed in the literature on information theory and Bayesian decision theory, wherein information is seen more as a commodity or currency used for social status than as a means of informing and/or self-correcting. Taken to the extreme, then, we can see how users may totally neglect the civic spirit of information sharing and implode further into their group or into themselves. A major contributor to this implosion can be seen in the misuse of YouTube.

The season of YouTube radicalization is in full swing, and yet the platform has yet to significantly detect and inhibit the spread of disinforming content, while the measures it has taken are rather draconian. While removing or banning content from certain sources is desirable, there remains seemingly endless disinforming content that carries little to no marker as such. The integration of truth meters that rate particular tweets or posts, such as those utilized by Twitter and Facebook, is being looked at, according to the Google subsidiary, but a major obstacle to their integration is the chance of producing “false positives,” wherein content is deemed disinforming when it actually is not, which can rapidly deteriorate a platform’s image and thus its economic bottom line. As of today, human detection of disinforming content, especially video, remains far more accurate than A.I.
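
The false-positive worry has a simple arithmetic core: when disinforming content is a small fraction of all uploads, even a fairly accurate classifier mislabels a large amount of legitimate content. A back-of-envelope sketch, with invented rates:

```python
# Back-of-envelope sketch of the false-positive problem described above.
# All rates are invented for illustration. Even a seemingly accurate classifier
# mislabels much legitimate content when disinformation is rare (base-rate effect).

base_rate = 0.01            # assume 1% of uploads are disinforming (hypothetical)
true_positive_rate = 0.90   # classifier catches 90% of disinformation
false_positive_rate = 0.05  # and wrongly flags 5% of legitimate content

uploads = 1_000_000
disinfo = uploads * base_rate
legit = uploads - disinfo

flagged_true = disinfo * true_positive_rate
flagged_false = legit * false_positive_rate

precision = flagged_true / (flagged_true + flagged_false)
print(f"correctly flagged: {flagged_true:,.0f}")
print(f"wrongly flagged legitimate posts: {flagged_false:,.0f}")
print(f"precision: {precision:.1%}")  # most flags land on legitimate content
```

With these assumed numbers, roughly five of every six flags land on legitimate posts, which is exactly the image problem the platforms fear.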

Level 4 — Big Data

Engineered social tampering works on an even more expansive scale with the help of big data firms. Big data is not malicious in itself, but it provides the information that disinformation campaigns utilize in order to manipulate perception. In the 2016 election, for instance, Cambridge Analytica used its massive arsenal of data mining to identify a rather small number of on-the-fence voters in key areas. This triangulation provided the logistics necessary to pinpoint a vulnerability in U.S. elections, which was successfully exploited in Donald Trump’s favor. The campaign focused mainly on Facebook, which has led the social media giant to severely restrict outside access to user data ever since. Thus, researchers have yet to gain access to Facebook’s social graph in total and are left to speculate on Facebook’s word alone.
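
A minimal sketch of that triangulation logic, with hypothetical voters, scores, and thresholds (this illustrates the general idea of persuadability targeting, not Cambridge Analytica’s actual method):

```python
# Minimal sketch of "triangulation": from a scored voter file, select
# persuadable voters in battleground regions. All names, scores, and
# thresholds are hypothetical illustrations.

voters = [
    {"id": 1, "region": "battleground", "support_score": 0.52},
    {"id": 2, "region": "battleground", "support_score": 0.95},
    {"id": 3, "region": "safe",         "support_score": 0.49},
    {"id": 4, "region": "battleground", "support_score": 0.47},
]

def on_the_fence(voter, low=0.40, high=0.60):
    """A voter whose modeled support sits near 50% is considered persuadable."""
    return low <= voter["support_score"] <= high

targets = [v for v in voters if v["region"] == "battleground" and on_the_fence(v)]
print(f"{len(targets)} of {len(voters)} voters targeted:", [v["id"] for v in targets])
```

The insight the paragraph describes is that a campaign need not move everyone; it only needs to find and message the small, pivotal slice the model surfaces.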

I haven’t spent much time discussing the power of the disinforming meme (as in the visual aids) that attempt to reduce complex issues to a simplistic representation that appeals to “common sense.” Nor have we reaffirmed in detail how powerful Youtube is in this endeavor, or the deployment of deep fakes therein. Nor have I discussed in depth the various social-mediated messaging options available, and how to manipulate upvotes on Reddit or other similarly designed “meritocratic communities.” Nor, have I gone into discussing the various rhetorical fallacies deployed in the content of said messages, such as strawman fallacies, gaslighting, and/or false equivalences. All of these modes and the contents therein deserve deep scrutiny no doubt, but what I’ve attempted to answer here is the most important ways false claims can become truth in the upcoming election season, which is through their primary and secondary repetitions.

[1] At the time of this writing, Sinclair Broadcasting is planning to recirculate the “Plandemic” theory and give voice to its advocates.

[2] At the time of this writing, Twitter is reviewing the possibility of sanctioning any and all handles that are linked to fake news organizations.

[3] See Vosoughi et al., 2018.

[4] Fox will soon be hosting a television program headed by a QAnon conspiracy theorist.

[5] See Ratkiewicz et al., 2011.

[6] See Abokhodair et al., 2015.

[7] See Vosoughi et al., 2018.
