3. MITIGATING FACTORS FOR THE DISSEMINATION OF DISINFORMATION

Introduction

The current chapter explores the most recent research into ways of countering the spread of disinformation online. It begins with an overview of the narrative, argumentative and discursive mechanisms that make disinformation attractive to audiences and proceeds to map out the most important developments in the field of combatting disinformation. Its aim is to identify the most relevant approaches and skills that informed citizens need in order to develop resilience to online disinformation and to become active in stopping its spread.

The chapter first analyses the reasons why disinformation is so attractive to audiences, by looking into the underlying mechanisms that support it and enhance its appeal. This analysis provides a clearer understanding and a solid foundation for investigating the most recent approaches in the development of skills, competences and attitudes that allow individuals to develop resilience to disinformation.

Countering disinformation is a multi-dimensional endeavour that presupposes an extensive and multi-layered involvement on the part of each individual who accesses online sites and social media. Developing the awareness and skills needed to identify, assess and report disinformation attempts is one of the main objectives of the DOMINOES project.
Moreover, the current section also examines the most relevant institutional efforts to flag disinformation attempts and to develop the media literacy competences of both the younger generation and adult citizens.
Overall, the chapter summarises best practices in countering online disinformation and can be further used in developing teaching and researching competences in the field. 

Digital competences addressed

1.1 Browsing, searching and filtering data, information and digital content;
1.2 Evaluating data, information and digital content;
1.3 Managing data, information and digital content;
3.3 Copyright and licenses. 

3.1 Understanding the Narrative, Argumentative and Discursive Mechanisms that Make Disinformation Attractive

Kanchi Ganatra, Aitana Radu, Ruxandra Buluc

Abstract

This section examines three types of mechanisms which make fake news attractive to readers: narrative, argumentative, and discursive mechanisms. In the first part, these mechanisms are described in detail, revealing the psychological and social appeal they lend to misinformation. We define and discuss the various mechanisms at play and demonstrate why they are so successful in captivating and convincing an audience. In the second half, these mechanisms are further illustrated through the case study of a real fake news article. This section shows clear examples of narrative, argumentative, and discursive mechanisms in action, providing insight into how fake news can be identified and avoided on social media and on the Internet in general. More broadly, an understanding of these mechanisms can allow individuals to engage more critically with the claims they encounter in day-to-day life, leading to a more informed public. The aim of this section is to shed light on the mechanisms that draw people to fake news so as to better understand how to counter their influence. The section also provides an exercise in identifying fake news on the basis of the narrative, discursive and argumentative mechanisms employed.

Main research questions addressed

● What are the narrative characteristics of fake news?
● What are the argumentative characteristics of fake news?
● What are the discursive characteristics of fake news?
● How can they be identified in a real-life situation? 

Narrative Mechanisms

Research on fake news often speaks about “alternative narratives” (Brites et al., 2019), “storytelling” (Barber, 2020) or “narratology” (Ryan et al., 2004). In this context, narratives can be understood as an essential process in the framing and distribution of fake news. However, it can sometimes be difficult to define what “narrative” means. Abbott (2002) defines narrative, in an encompassing view, as relating to a story or shared understanding within a group, or as representations of events. Similarly, Stein and Glenn’s (1979) schema represents narratives as compositions of one or more episodes that can be chained together in various ways. Storytelling, then, as a narrative mechanism, is a powerful tool. Our lives are profoundly shaped by the stories around us, narratives that instill identities, connections, values, and beliefs. Storytelling is central to education, as characters and events can make abstract principles relatable and memorable. Drama and conflict are compelling, engaging our imagination and pushing us to think about alternative futures and creative solutions.

In the wrong hands, however, storytelling can become a weapon, used to mislead, divide, and dehumanise others. Narrative appeals can offer simple solutions to complex problems, framing nuanced events as antagonistic ‘us vs them’ battles, such as in fake news stories relating to migration. Terre des hommes, an NGO working on humanitarian issues, has identified several such antagonistic narratives related to migrants: (1) Migrants are criminals; (2) Migrants do not need help and/or protection; and (3) All migrants want to come to Europe. Fake news often manipulates in this way, using a narrative mechanism to immerse readers in an alternate reality. In the following sections, we can see the various tools or mechanisms that are employed in order to harness narrative engagement for fake news.

Prominence
The Prominence-Interpretation Theory developed by Fogg (2002) unpacks what makes fake news attractive, engaging or seemingly credible. It does so by breaking down the process of approaching information, focusing on the stages in which users consume (fake) news. Proponents of this theory claim that a user first notices something (prominence) and then interprets what they see (interpretation). For the reader or viewer to engage successfully, both processes must take place; otherwise, the user simply scrolls over the piece of news and no assessment is performed. In light of this, it is important for a news article to be ‘noticeable’ or ‘prominent’, to ‘grab the viewer’s attention’. If it is not prominent, the process of news consumption cannot progress.

The ways in which fake news can attract consumers' attention are creative and expansive. Factors which make fake news attractive or prominent include sensationalisation, hypernarrativity, calls to curiosity (such as click-baiting), celebrity endorsement, etc. Budak (2019), for example, conducted an in-depth study on the fake news phenomenon surrounding the 2016 presidential election in the US. She explains that while traditional news focused more on policies related to the economy, elections, women or the environment, the more popular ‘news’ found on channels like social media was fake and of a negative or hyper-partisan nature. Building on Budak’s work, Baptista and Gradim (2020) have gathered the most frequent words used in detected fake news. These include “sex”, “death”, “corrupt”, “illegal”, “alien” or “lie”, referring to sensational or outrageous content, unlike traditional media. This further illustrates that to inspire and sustain narrative attractiveness, fake news needs to be prominent and dramatic.

Framing
Another particularly interesting tactic used in making fake news prominent and attractive is framing. Framing refers to the use of “words, images, phrases, and presentation style” to present information, in order to influence “an individual’s understanding of a given situation” (Druckman 2001). Not only do we see examples of this in our everyday lives, e.g., in the form of advertising and marketing, but a large number of studies have also shown the use of framing within journalism and policy-making. Media researchers such as Tandoc and Seet (2022), for example, have provided compelling support for what Druckman (2001) calls the framing effect through their work on “Wording as Framing”. Tandoc and Seet argue that although several terms can have similar denotations, these terms may evoke different meanings in the consciousness of people based on the beliefs and values individuals associate with them.

Mikołajczak and Bilewicz (2015) have found, for example, that in the context of the abortion debate, using the term “foetus” will result in higher support for abortion compared to using the term “child”, because people attribute greater humanness to “child” than to “foetus”. Similarly, when “climate change” is used instead of “global warming,” people are more likely to believe the phenomenon to be true (e.g., Benjamin, Por, and Budescu 2017; Schuldt, Konrath, and Schwarz 2011).

Framing an issue in a certain way (e.g., climate change as a political issue) can activate certain information, beliefs, and cognitions that the recipient already possesses in their consciousness (Nelson, Oxley, and Clawson 1997). In this way, frame-setting is a mechanism which allows fake news narratives to interact with pre-existing beliefs, identities and priorities of those exposed to the message, i.e., their own personal narratives.

Comprehension and Attentional Focus
When the user finds a certain article or post sufficiently prominent, they move on to reviewing or engaging with the element, something Fogg calls ‘interpretation’. At this point, if the goal is to sustain the reader’s interest, it is important to gain narrative engagement. To explore this aspect further, we draw on the work of Busselle and Bilandzic (2009), who have written extensively on how and why narrative mechanisms result in engagement. They identify several dimensions of engagement which can be interpreted as representing distinct but interrelated engagement processes. They argue that in order to encourage engagement with a narrative, the presented narrative should be easy to comprehend. Interestingly, however, they also claim that although the primary activity of narrative engagement is comprehension, the audience should be unaware of it when comprehension progresses smoothly and become aware only when comprehension falters. They describe this as ‘attentional focus’: a state in which a truly engaged viewer does not even notice that they are not distracted. Essentially, when a consumer reads a piece of news that is so engaging (or so mindless) that they do not notice themselves drifting in and out of it, smooth narrative engagement proceeds without any barriers to ease of comprehension.

Fake news articles often exploit this mechanism of easy narrative engagement by using simple language that is easy to understand and stay engaged with. There is rarely, if ever, technical vocabulary or field-specific jargon. Past research has demonstrated that, in fact, the lexicon used by fake news is more informal and simpler in detail and in technical production, not only in the title of the piece, but also throughout the text. Horne and Adali (2017) state that “[r]eal news articles are significantly longer than fake news articles”, and “fake news articles use fewer technical words, smaller words, less punctuation, fewer quotes, and more lexical redundancy.” In other words, fake news requires less cognitive effort and attention to process and is, therefore, attractive to readers (Horne and Adali 2017; Baptista and Gradim 2020). This is not only in line with the use of heuristics; the use of an informal lexicon may also encourage readers to engage with topics more personally or emotionally because the text is written in a manner which they find familiar or even comforting.
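These surface cues are measurable in principle. As a purely illustrative sketch (not the method used by Horne and Adali or by any of the studies cited here), the following Python snippet computes a few of the lexical features mentioned above for a given text: word count, average word length, punctuation density, number of quotation marks, and lexical redundancy. The function name, feature definitions and sample text are assumptions made for demonstration; a real classifier would need labelled data and validated features.

```python
import re
import string

def lexical_profile(text: str) -> dict:
    """Compute simple surface features often discussed in fake news research:
    length, word size, punctuation use, quoting, and lexical redundancy.
    Illustrative only; not a validated detector."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    n_words = len(words)
    n_unique = len(set(words))
    punctuation = sum(ch in string.punctuation for ch in text)
    quotes = text.count('"') + text.count('\u201c') + text.count('\u201d')
    return {
        "word_count": n_words,
        "avg_word_length": sum(len(w) for w in words) / n_words if n_words else 0.0,
        "punctuation_per_100_words": 100 * punctuation / n_words if n_words else 0.0,
        "quote_marks": quotes,
        # Redundancy: share of word tokens that repeat an earlier word.
        "lexical_redundancy": 1 - n_unique / n_words if n_words else 0.0,
    }

if __name__ == "__main__":
    sample = 'Shocking lie! They lie and lie again. "Experts" will not tell you this.'
    for feature, value in lexical_profile(sample).items():
        print(f"{feature}: {value:.2f}")
```

According to the findings quoted above, fake articles would tend to show lower word counts, shorter words, fewer quote marks and higher lexical redundancy than real news; comparing such profiles across many labelled articles is the kind of analysis those studies perform.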

Emotional Engagement
Emotional engagement (feeling for and with characters) is another component of narrative engagement which Busselle and Bilandzic discuss in their 2009 research. This factor appears to be specific to the emotional arousal component of narrative engagement, but not necessarily to any specific emotion. They, alongside others, hypothesise that the emotional reaction provoked by fake news likely represents arousal rather than the intensity of any particular emotion. Put simply, users confronted with fake news may feel an emotional response to the story but may not act on these emotions until enough repeated exposure is established; or, in other words, until the issue becomes part of the audience’s personal narratives.

Deep Stories
To make misinformation feel engaging, personal, and intuitive, even when the content presented seems far-fetched, the most effective fake news pieces use stories that are tailored to and targeted at specific audiences. Polletta & Callahan (2017) call these targeted pieces “deep stories”: narratives that reinforce what people believe describes their lives. Fake news articles may present stories that resonate with the economic or cultural anxieties of certain groups, for example, and frame their struggles in relatable (if misleading) terms. Once that narrative hook is established, the reader is more open to accepting scenarios they might otherwise consider implausible or even offensive (Polletta & Callahan, 2017). If, for example, a user is repeatedly exposed to fake news stories that fallaciously link their economic concerns with ‘threats’ from migrants and minorities, they might begin to internalise this prejudice, becoming more and more vulnerable to content that reinforces the story they want to believe. The impersonal complexities of economic decline are a much less satisfying narrative than a story that blames a clear ‘other’. In fact, this newfound explanation for their experiences may even lead to a long-lasting belief system which encourages people to ignore other points of view on the matter. Proponents of cognitive heuristics call this deliberate ignorance the ‘expectancy violation heuristic,’ a strong negative heuristic. It presupposes that if a reader comes across information which does not align with their beliefs and values, they are less likely to find it attractive or credible and may even completely ignore it.

Expectancy violation and targeted news, in turn, are further compounded by narrative mechanisms which propagate false or conspiratorial theories, claiming to offer ‘special knowledge’ that mainstream or expert sources wish to hide. The fact that official sources do not report the same information might be presented - in a self-fulfilling way - as further confirmation of the conspiracy. Of course, the opposite is usually true - any article that claims to reveal a shocking ‘hidden truth’ or tell a conspiratorial, sensational story should be viewed with particular skepticism. The reason evidence and corroborating articles cannot be found is probably because the story presented is false, not because it is a private truth available only to the chosen few.

Verisimilitude
In more subtle ways, narrative mechanisms can “muddy the line” between isolated events and larger trends (Polletta & Callahan). Fake news may begin with anecdotes or events that are true, but later present them as connected points in a larger, untrue (or unverifiable) story about the world. Fake news can misleadingly pull a ‘signal from the noise’, presenting patterns and narratives that are not supported by evidence.

In fact, Introne and colleagues have brought an important perspective to the emerging conversation about fake news and false narratives through their work on pseudo-knowledge (PK). Building on previous work (Introne, Iandoli, DeCook, Yildirim, & Elzeini, 2017), they describe PK as false narratives that have begun to take on the heightened status of a plausible reality within a community. Drawing on cognitive psychologist Bruner (1986), they note that narrative and argumentative reasoning are two separate modes of human thought, each subject to different criteria. While arguments are judged according to their veracity (i.e., whether or not they are true), narratives are judged according to their verisimilitude (i.e., whether or not they seem plausible) (Bruner, 1986). In other words, narratives do not actually have to be true, only plausible or convincing.

This opens up a compelling avenue for further research on fake news, conspiracies and deepfakes. As Introne et al. have indicated, so far researchers have focused on how misinformation spreads, how to detect it, and how to reduce its credibility. Underlying this approach, however, is the assumption that misinformation and fake news carry or reinforce false narratives. Introne et al. respond to this assumption by stating that “[T]his may be the case, but our findings demonstrate that fake news is certainly not a requirement for false narratives. Rather, the Internet allows the architects of false narratives to manufacture credibility by drawing information from many credible sources”. Put simply, falsehoods do not necessarily have to be made of complete untruths. In reality, false narratives can be (possibly even more) attractive when built upon elements of real, verifiable news, just so long as they seem plausible.

Argumentative Mechanisms

In addition to using narrative mechanisms, fake news also appeals to readers by using argumentative mechanisms. In contrast with the personalised ‘hook’ of storytelling, argumentative mechanisms use rhetorical flourishes and misapply logic in an attempt to intellectually convince readers of a ‘truth’. Without a clear understanding of how these argumentative mechanisms look and which logical fallacies are commonly employed, fake news can appear authoritative, convincing, and rational, even while making spurious claims.

Argumentative fallacies used in disinformation
Argumentative fallacies more often than not form a very prolific base for fake news in particular and disinformation more generally. There are numerous types of fallacies; however, we will present here several that are most often employed in disinformation campaigns. The list below presents definitions and examples of the most common fallacies:

Slippery slope
Definition: a claim that a series of events will unstoppably occur and culminate in one major, negative event.
Example: Liz Wheeler, an American news anchor for OAN, commenting on an aquarium’s decision not to announce the gender of a penguin:
“We should ask, where does radical leftist gender ideology lead? Do liberals want human children to be genderless? If so, why? Is this based on biology? And if not, then what? What happens when human children are raised genderless? If gender is destroyed, doesn’t that destroy traditional gender roles? And if gender roles are destroyed, doesn’t that destroy gendered relationships? And if gendered relationships are destroyed, doesn’t that destroy traditional marriage? And if traditional marriage is destroyed, doesn’t that destroy the family units? And if people aren’t dependent on their families, then who do they depend on? That’s right, the government. Which is the goal of liberals in the first place. Don’t let transgender penguins fool you.”

Ad hominem
Definition: an attack directed against a person’s character, integrity or motives rather than the position they hold or the arguments they present.
Example: “[I]f Hillary Clinton were a man, I don't think she'd get 5 percent of the vote. The only thing she's got going is the woman's card, and the beautiful thing is, women don't like her.” (Donald Trump in the 2016 election campaign)

False dichotomy
Definition: oversimplification of a complex situation and forceful reduction to only two options, out of which only one could be correct.
Example: “I had a choice, as well: either to trust the word of a madman, or to defend the American people. Faced with that choice, I will defend America every time.” (President George W. Bush regarding the invasion of Iraq to prevent Saddam Hussein from using WMD)

Post hoc ergo propter hoc
Definition: “after this, therefore because of this”; a faulty causal relationship based on the idea that if something happened before something else, then the first event caused the second one.
Example: 5G towers became operational and then the COVID-19 pandemic started. Therefore, the 5G towers caused the COVID-19 pandemic.

Cum hoc ergo propter hoc
Definition: “with this, therefore because of this”; if two events are happening at the same time, a causality is falsely assumed, and one is said to cause the other.
Example: Hospitals are full of sick people. Therefore, hospitals make people sick.

Attacking the strawman
Definition: creates the impression that an argument has been refuted without engaging with the actual argument, which is instead replaced with a distorted, fake argument.
Example:
Chuck Todd: You sent the press secretary out there to utter a falsehood on the smallest, pettiest thing. And I don’t understand why.
Kellyanne Conway: Maybe this is me as a pollster talking, Chuck, and you know data well, I don’t think you can prove those numbers one way or the other, there’s simply no way to really quantify the crowds, we all know that. You can laugh at me all you want [Chuck Todd is starting to laugh], but…
Chuck Todd: I’m not laughing, but the photos are showing…
Kellyanne Conway: Well, but you are, and I think it’s actually symbolic of the way we are treated by the press, the way you just laughed at me is actually symbolic of the way we’re represented and treated by the press. I’ll just ignore it, I’m bigger than that.

Red herring
Definition: something small or inconsequential distracts attention from a relevant aspect, idea or argument.
Example:
Reporter’s question: Can you envision a way of supporting the universal background checks bill?
Senator Lamar Alexander’s answer: Video games are a bigger problem than guns because video games affect people.

Zompetti (2019) outlines a few different types of argumentative mechanisms. One common strategy involves distraction and deflection by pointing out an error related to the opposing view. In this mechanism, the fake news creator employs the ‘ad hominem’ fallacy by attacking the credibility of the opponent instead of addressing the actual topic of discussion. This approach is especially effective in making fake news attractive when a trustworthy journalistic source falters or publishes incomplete data, only to later revise an article with updated or newly verified information. As an argumentative strategy, a fake news piece might amplify or distort the significance of a ‘mainstream’ source publishing inaccurate information, setting up the mainstream as a strawman. This can then be generalised as a sign that all associated mainstream sources are untrustworthy or corrupt, implying that only ‘alternative’ (i.e., fake) sources can deliver credible information (Zompetti 2019). If readers can be convinced that traditional outlets cannot be trusted, fake news sources can appear more credible or attractive by offering a ‘different perspective’. This style of argumentation is particularly effective at undermining the credibility of scientists, experts, and public institutions.

In addition, ‘straw manning’ can be even more effective if a fake news piece quotes (or misrepresents) its own ‘expert’ who disagrees with the mainstream consensus, claiming to ‘prove’ opponents wrong by ‘cherry-picking’ (Musi and Reed 2022), or arbitrarily choosing, rare instances where traditional media commits an error. In this way the straw man can be found and set up retroactively using the ‘Texas sharpshooter’ approach. Generally used in the context of uncovering fallacies, the Texas sharpshooter phenomenon gets its amusing name from folklore about an inept marksman who fires a random pattern of bullets at the side of a barn and then draws a target around the bullet holes, pointing proudly at his success. Similarly, in (fake) news and in science (Nuzzo 2015), authors may pick self-fulfilling data to match their biases and lend credibility to their arguments.

In a classic example of misrepresentation and feigning authority, the example here involves an opinion piece by Piers Corbyn being passed off as “fact” and “science”. As AFP Factcheck, a fact-checking website, explains: “Piers Corbyn is the brother of UK politician and former Labour Party leader Jeremy Corbyn. The former has spread false claims about climate change for decades. He obtained a degree in physics from Imperial College London in 1968, as well as a postgraduate degree in astrophysics from Queen Mary College in 1981. However, he is not a climate scientist and has very little related scientific research or peer-reviewed papers to his name.” So, even though Piers Corbyn is, in fact, a trained scientist, his field does not overlap with that of climatology, and his perceived authority or training cannot compensate for that.

Besides, context is important: Corbyn presents climate change as “propaganda” on his weather forecast platform WeatherAction. In a March 2020 tweet, Corbyn falsely claimed the health crisis was a “simulation” by “mega-rich control freaks” Bill Gates and George Soros. In February 2021, he was arrested over leaflets comparing the vaccination programme to the Holocaust, according to The Guardian.

Equivalency and Emphasis Framing
Consistent with the above mechanisms, two other methods used to misrepresent and argue for fake news are equivalency framing and emphasis framing where:
➔ Equivalency framing occurs when the communicator uses an alternate word or phrase to describe an event or issue; while the meaning and the indicated outcome of each term are logically identical, using one term instead of the other results in different preferences among message recipients (Chong and Druckman 2007; Druckman 2001). In simpler words, the same news is adjusted semantically in order to make the argument more preferable or agreeable for different sets of viewers.
➔ Emphasis framing, also called value and issue framing, involves the communicator using certain words and concepts when making a statement with the purpose of emphasising specific considerations (Druckman 2001; Druckman 2004). That is, issues are framed with added emphasis on the values and beliefs of the intended audiences, making the news more attractive and harder to ignore.

Gish-galloping
As an alternative argumentative strategy, a fake news piece may not seek to directly ‘convince’ or ‘prove’ a falsehood to readers. Instead, fake news might seek to confuse or disorient readers, bombarding them with contradictory or irrelevant information. Readers may encounter a rhetorical strategy known as “gish-galloping”, where a writer or speaker “careens through topics, rattling through half-truth after half-truth… [aiming] both to overwhelm opponents’ ability to respond and to introduce doubt into the minds of audiences” (Johnson 2017). This kind of rhetoric can be intimidating for readers and viewers, establishing an air of authority for purveyors of misinformation. Gish-galloping may be especially successful in disorienting consumers who feel that they do not have enough background or experience with the topic to challenge what they are seeing.

Fact Signalling

A related tactic involves what Hong & Hermann (2020: 1) call “fact signalling”: the “performative invocation of the idea of Fact and Reason”. Instead of presenting concrete evidence or using sound reasoning, a fake news piece might condescendingly wield ‘facts’ as a weapon against perceived opponents. The relevance or truthfulness of these ‘facts’ is irrelevant; what matters for this strategy is the affective performance of authority and the belittling of opposing viewpoints. Put differently, this is an appeal to emotion masquerading as rationality. Fact signalling appeals to readers by manipulating “what looks like truth, what sounds authentic, [and] what feels reasonable” (Hong & Hermann 2020: 3).
The following quote from Hong & Hermann’s paper summarises the risks of fact signalling concisely: “Scholars are increasingly attentive to the ways in which what was once popularised as a ‘fake news’ epidemic is not simply a virulent strain of bad information in a fundamentally rational online ecosystem, but rather a broader crisis and transformation of what counts as truthful, trustworthy and authentic (e.g. Boler & Davis, 2018; also see Banet-Weiser, 2012).”

Impression of Expertise
In order to have the desired effect, however, there is one more piece of the puzzle which must fit. This factor involves the ways in which a fake news distributor can assert their credibility or give an impression of expertise. Van Zyl and colleagues (2020) summarise, for example, the ‘Checklist for Information Credibility Assessment’ put forth by proponents of digital literacy. In their summary, they list parameters such as accuracy, authority, objectivity, currency, and coverage or scope, where:
➔ Accuracy indicates the degree to which the news content is free from errors - this may include both superficial errors such as spelling or punctuation, as well as errors within the message itself. In addition, accuracy refers to whether the information can be verified elsewhere. In fact, not only is it an indication of the reliability of the information at hand, but also by extension, the reliability of the website or news source itself.
➔ Objectivity is an exercise in deciding whether the content being presented is opinion or fact, and whether there is commercial interest, indicated for example by a sponsored link.
➔ Lastly, coverage or scope refers to the depth and comprehensiveness of the information presented. One tries to decide if the coverage of the subject at hand is rather superficial, or the author demonstrates an adept understanding of the topic.

According to the proponents of this checklist, therefore, not only does the content matter, but also the way in which it is presented (error-free, grammatically correct) and the kind of impression it gives of the author (do they seem proficient or knowledgeable in the field?).
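For illustration only, the checklist criteria summarised above could be written down as a simple structure that a reader (or a classroom exercise) works through. The yes/no questions, field names and equal weighting in the sketch below are assumptions added for demonstration; they are not part of the checklist as described by Van Zyl and colleagues.

```python
from dataclasses import dataclass

@dataclass
class CredibilityCheck:
    """Record a reader's answers to checklist-style questions.
    Criteria follow the list summarised above; questions and weights are illustrative."""
    accuracy: bool = False      # free of errors, verifiable elsewhere?
    authority: bool = False     # identifiable, qualified author or source?
    objectivity: bool = False   # fact rather than opinion, no hidden commercial interest?
    currency: bool = False      # recent enough, clearly dated?
    coverage: bool = False      # treats the topic in depth rather than superficially?

    def score(self) -> float:
        answers = [self.accuracy, self.authority, self.objectivity,
                   self.currency, self.coverage]
        return sum(answers) / len(answers)

if __name__ == "__main__":
    check = CredibilityCheck(accuracy=True, currency=True)
    print(f"Checklist score: {check.score():.0%}")  # prints 40% for this example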

Interestingly, however, in a series of studies conducted by Metzger and her colleagues (2007), it was found that even when supplied with a checklist, users rarely used it as intended (Van Zyl et al., 2020: 27). Currency, comprehensiveness, and objectivity were only occasionally verified, while checking an author’s credentials was the least preferred method of verification by users. This correlates with findings by Eysenbach and Köhler (2002), who indicate that the users in their study did not search for the sources behind the presented website, nor were they interested in learning more about how the presented information was compiled. This lack of thoroughness is ascribed to the users’ lack of willingness to expend cognitive effort (Eysenbach and Köhler 2002, 29).

These studies are compelling, especially in the context of the ‘post-truth’ era, where we are constantly bombarded with new ‘news’. The unprecedented spread of misinformation raises the question: how often and how reliably can we perform an information credibility check in an atmosphere saturated with so-called news? Following a mental checklist, however rudimentary, would presumably get tiring if performed with such frequency. In order to minimise cognitive and decision-making effort, people often skip the arduous task of executing a credibility assessment, as shown by the studies mentioned above.

This apparent attempt by users to minimise mental effort has given rise to other studies on how users apply cognitive heuristics as well as other ‘short-cut’ means to assess news more quickly and with less effort. Research on authority and expertise in the field of fake news has led to the development of a number of descriptive models and theories of how users assess credibility in practice.

Discursive Mechanisms

As described in the previous sections, fake news can gain credibility and appeal to readers through narrative mechanisms (involving storytelling and identity) and argumentative mechanisms (involving claims to authority, logic, and rationality). Yet there are still different - and sometimes more subtle - ways in which fake news can attract readers. Discursive mechanisms involve the ‘big picture’ aspects which affect consumption of fake news. These mechanisms relate to the social and technological contexts in which fake news gets noticed and thrives. We can analyse these mechanisms by examining the form, function, and distribution method of a given fake news piece.
As mentioned in the previous section, the unprecedented increase in the volume of ‘news’ we encounter today makes it nearly impossible to verify every piece of information we come across. To combat this, people rely on wider mechanisms or discourses that form the background for news consumption alongside mental short-cuts also known as cognitive heuristics. Put simply, we rely on shared knowledge and group judgement when judging news which may be false, as in, ‘What do others think about this?’

Reputation, Endorsement and Repeated Exposure
The foundation for the ‘cognitive heuristics’ approach was established in the 1950s by economist and cognitive psychologist Herbert A. Simon. In refuting the concept of ‘rational choice theory,’ Simon claimed that although people use reasoning to perform a cost-to-benefit analysis, they can never really determine the true costs or benefits of each action, because knowing all the costs and outcomes is a human impossibility; something he called ‘bounded rationality’. If we consider bounded rationality a fundamental feature of cognition, then problem solving cannot be exhaustive: we cannot explore all the possibilities which confront us, and search must be constrained in ways that facilitate search efficiency even at the expense of search effectiveness (Richardson 2017). In simple words, the use of mental shortcuts or heuristics is a method that we apply when faced with a problem, such as deciding whether a certain piece of news is fake or real.
Since we physically cannot perform a detailed investigation of every post we see online, we ‘satisfice’, that is, we rely on mental shortcuts or ‘rules of thumb’ until the acceptability threshold is met. Van Zyl and others (2020) have applied the theory of cognitive heuristics to fake news consumption. According to them, during the examination of an online news article, we go through metrics such as reputation, endorsement, consistency, expectancy violation and persuasive intent.

➔ The reputation heuristic may be exercised in many different circumstances. It may refer literally to the reputation of the source as a news reporter, or it may call upon tropes of brand loyalty, indicating that the source is a website or brand which the user recognises or is familiar with. For example, one of the first markers which makes news attractive to readers is its reliability, and one of the best ways to ‘guess’ whether a piece of news is reliable is whether it was featured on a reputable news outlet and/or an outlet that the reader regards highly.
➔ The endorsement heuristic applies the logic of ‘if others believe it, it must be true’. This may include both people whom the user is familiar with, such as friends and family, and other people on the internet whom the user does not personally know but who have left a review or comment sharing their own experience with the source. For example, when a user sees news from an unfamiliar region or country, they may look to what others have said; if they can find local support for the source of the news, they are likely to find it valuable and believable even if it contradicts common sense.
➔ The consistency heuristic indicates that if similar information about an element appears on other sources or websites, it is credible and therefore attractive. For example, if a person hears seemingly unbelievable news from a friend, they may not think much about it. However, if this news later reaches them through another, unrelated source, it may pique their interest, and they may find it more attractive than someone encountering this information for the first time would.

Fogg’s (2002) web credibility framework is another model through which we can understand what makes fake news attractive. It is built on three categories - operator, content and design where:
➔ Operator refers to the source of the website - the person who runs and maintains the website. According to Fogg, a user makes a credibility judgement based on the person or organisation operating the website.
For example, a user faced with news regarding the discovery of a new planet may not be particularly interested in astronomy but may become interested when they see that the source of this information is NASA. In essence, the user may find the news more attractive due to the perceived trustworthiness of the source.
➔ Content refers to the content and functionality of the website where the news is found. Of importance is the currency, accuracy and relevance of the content and the endorsements from external organisations that are deemed respectable.
For example, if a reader comes across a news article on Twitter which was re-tweeted by a reputable news outlet such as the BBC, the original article becomes more attractive, even if it was not actually written by the BBC. In this example, the fact that the original writer of the article is unknown or unpopular is compensated for (and made attractive) by the reputation of Twitter as a household name for ‘serious’ or ‘academic’ social media rife with debate, as well as by the endorsement of a big-name journalistic outlet, the BBC.
In a similar vein, we would like to point out that there is something to be said also about the channel through which (dis)information is received. In many cultures, fake news can obtain a layer of legitimacy if received through a trustworthy channel - for example through a family member or someone more educated.

Design and Visual Markers
Penultimately, borrowing again from Fogg’s (2002) framework, there is another superficial but highly interesting feature: design. Fogg describes design as the structure and layout of the website. Through mutual consensus, and with online spaces (particularly social media) increasingly becoming an integral part of our daily lives, we have come to recognise a certain ‘look’ that news is expected to have. Fogg breaks this element of design down into four aspects, namely:
● Information design: how the information is structured on the website; does it make sense, is it logical, does it follow a chronological pattern, etc.
● Technical design: the functioning of the website/source on a technical level, e.g., whether it has a search function, whether the hyperlinks work as intended, and so on.
● Aesthetic design: the visual presentation of the source, including the look, feel and professionality of the design. When a website presents news in an appropriately ‘intelligent’ style, using sombre or ‘academic’ colours, the reader is more likely to see the source as smart or attractive.
● Interaction design: the ease of navigation and interaction with the source as well as the user interface. Is it obvious where the reader must click to move on to the next page? Are all the photos readily visible? Is the post interactive in the sense of including motion or visual aids such as graphs to make the article easy to follow?

Click baiting
The most familiar of these discursive mechanisms, in terms of visual design, can be seen in the use of ‘clickbait’ in fake news pieces. Clickbait, essentially an advertising tactic, describes a “[headline] whose main purpose is to attract the attention of readers and encourage them to click on a link to a particular webpage” (Zhou et al 2021: 2). Effective clickbait titles are outrageous, challenging, combative, even amusing, and tempt individuals scrolling past to click and see what the media has to say. They might offer a dramatic question (e.g., “Have you seen what the Prime Minister has to say about THIS EVENT?”) or share only part of the information, causing a ‘cliff-hanger effect’ (e.g., “Queen Elizabeth Says: “Muslim Refugees Are Dividing Nationality, I Fully Agree With Donald Trump We Should...”).


Cable and Mottershead (2018) have studied the cliff-hanger effect specifically in the context of sports news. They explain: “If a headline features a cliff-hanger, for instance, then we will be inclined to click because we want to find out the answers. It is this feeling of deprivation which provokes the reader into making these decisions.” They offer the following example from The Guardian:

[Screenshot of a Guardian sports headline constructed as a cliff-hanger; image not reproduced here]

This example is typical of a sports article. It claims to have uncovered new information, but the headline construction is careful not to give any of this new material away. The title is short and tantalising, a cliff-hanger and points towards a knowledge gap as it gives no answers. These traits of ‘clickbait’ make news more attractive as readers look to bridge the distance between their new-found curiosity and promised knowledge.
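As a hedged illustration of how the clickbait cues discussed in this subsection could be checked automatically, the short sketch below flags headlines that contain sensational wording, dramatic questions, direct address, or the cliff-hanger ellipsis. The cue lists, function name and rules are assumptions chosen for demonstration, not lists taken from the studies cited above.

```python
import re

# Illustrative cue lists; real systems would learn these from labelled headlines.
SENSATIONAL_WORDS = {"bombshell", "shocking", "you won't believe", "exposed", "secret"}
CLIFFHANGER_ENDINGS = ("...", "\u2026")  # a trailing ellipsis withholds the pay-off

def clickbait_cues(headline: str) -> list:
    """Return the clickbait cues present in a headline (illustrative heuristic only)."""
    cues = []
    lower = headline.lower()
    if any(word in lower for word in SENSATIONAL_WORDS):
        cues.append("sensational wording")
    if "?" in headline:
        cues.append("dramatic question")
    if headline.rstrip().endswith(CLIFFHANGER_ENDINGS):
        cues.append("cliff-hanger ending")
    if re.search(r"\byou\b|\bTHIS\b", headline):
        cues.append("direct address / unexplained 'THIS'")
    return cues

if __name__ == "__main__":
    examples = [
        "Have you seen what the Prime Minister has to say about THIS EVENT?",
        "Climate Bombshell: Greenland Ice Sheet Recovers...",
        "Council approves annual budget for road maintenance",
    ]
    for headline in examples:
        print(headline, "->", clickbait_cues(headline) or "no obvious cues")
```

A headline that trips several of these cues is not necessarily false, but it is designed to exploit the curiosity gap described above and deserves closer scrutiny before clicking or sharing.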

Case study - Climate denial 

[Screenshot: article from The Daily Sceptic (TDS) as shared on social media]

In this section, we demonstrate features of fake news by exploring the following example.

In October 2022, a link to this article, published by The Daily Sceptic (TDS), was posted on Facebook and widely shared on social media platforms including Twitter. The post attempts to refute the position that the unprecedented rise in Arctic heat during the last few decades has been caused by anthropogenic climate change. Instead, the article falsely contends that “natural warming” can fully explain the loss of ice in the Arctic.

This article provides a compelling showcase for several types of narrative, argumentative, and discursive mechanisms commonly used in fake news:

The Daily Sceptic’s article is prominent and visually attractive, designed to catch the attention of social media users scrolling past. The title is a good example of ‘clickbait’: “Climate Bombshell: Greenland Ice Sheet Recovers as Scientists Say Earlier Loss was Due to Natural Warming Not CO2 Emissions”. Sensational cues such as “Bombshell” and tropes of credibility such as “Scientists Say…” make casual users scrolling past curious, appealing to the knowledge gap pointed out by Cable & Mottershead (2018).

Once the quippy title has brought the reader to the TDS website, many practical and design elements come into play. These cues are subtle but encourage the reader to trust what they are seeing. The colour scheme and layout used by the website, that is, its information and aesthetic design, although eye-catching (red), are simple, streamlined, and similar to those of many legitimate news outlets. In addition, the by-line includes a name likely to be familiar to English-language readers, along with a date and a time stamp, providing currency and operator information as added layers of legitimacy. In both the post shared on social media platforms and the website version, a large portion of the screen space (especially on mobile phones) is occupied by a large, demanding photo of an ice sheet: attractive ‘evidence’ of the recovered ice sheet in Greenland. Once the reader begins to read, they experience ease of comprehension, an essential requirement for narrative engagement. The language used in the article is itself rather informal, simplified and easy to understand, despite addressing an issue as complex as climate change, something that readers may appreciate and be able to focus attention on.

In terms of building the narrative, TDS’s article begins with an emotionally charged indictment of “the media”, suggesting that most reporting on climate change is baseless:

“A popular scare story running in the media is that the Greenland ice sheet is about to slip its moorings under ferocious and unprecedented Arctic heat and arrive in the reader’s front room any day now (I exaggerate, but not much).”

The text then describes the “scientific world” as confused and “scrambling” to understand a recent slow-down in Greenland’s ice loss. The slow-down in ice loss is real (relating to the El Niño phenomenon), but its significance and link with climate change are greatly misrepresented. The authors describe a world where ill-intentioned ‘mainstream’ scientists collude with activists and governments to enact the “emissions agenda”. This kind of broad, conspiratorial framing of scientists, governments, and ‘the media’ is common in fake news. A story is presented here which suggests that institutions cannot be trusted, that the government (along with ‘others’) wants to change society in ways that might harm you, the reader. Why tolerate emissions regulations and petrol taxes for an ‘agenda’ that might make life more expensive for you? Why believe the ‘popular’ take on climate change when this writer (apparently) proves inconsistencies in climatology? The narrative framing of this article is emotionally engaging, exploiting a disillusionment many readers may feel towards governments and experts, specifically in the context of the inflation and higher energy costs caused by the aftermath of the COVID-19 pandemic and the war in Ukraine.

Even if the reader is not entirely convinced by this narrative framing, they may come away confused and unsure what to believe. This article treats academic discourse and anecdotal ‘evidence’ with the same legitimacy, following up out-of-context quotes from climatologists with vague ‘refutations’ from other Daily Sceptic posts. The implication is that expert sources or scientific journal articles are no more trustworthy than opinion pieces, social media posts, or personal blog posts. Fake news articles use these false equivalencies to create confusion and doubt in readers.

This article presents a clear example of several logical fallacies and appeals that are common in fake news. We can begin with the website’s tagline: “Question everything. Stay sane. Live Free.”, an attempt to give the impression of neutrality, apoliticality and stoic skepticism. When readers see this kind of tagline, they may regard the website as trustworthy, put together by authors interested in finding out the ‘real truth’ about climate change.

This is a sophisticated example of fake news, as the author does, in fact, cite two peer-reviewed sources, from Nature (a study conducted by Japanese climatologists) and Quaternary Science Reviews. At first glance, then, this seems to be a post supported by scientific research. However, the author selectively quotes, or cherry-picks, and misinterprets the content of the scientific articles, employing the Texas sharpshooter fallacy. If we follow the article link to the Japanese study, for example, we immediately see a finding that contradicts The Daily Sceptic’s article: “Both natural variability and anthropogenic forcing contribute to recent Greenland warming by reducing cloud cover”.

It is interesting that The Daily Sceptic’s website provides a link to the original study at all. If a reader does not click through to read the original source (or if they are inexperienced in reading scientific articles), it may be easy for them to trust The Daily Sceptic’s misrepresentation. An average reader approaching their website, especially through a social media channel, would be very unlikely to click through several pages to get to the study due to the concepts of heuristics and satisficing. As discussed above, a majority of online news consumers are likely to expend as little cognitive effort as possible. In this case, simply seeing that a link to the original article is provided might be enough to ‘satisfice’ and meet the minimum acceptability threshold. The author from TDS does not cite and link these articles for their scientific content, but instead ‘name-drops’ them as an appeal to authority, lending the post credibility. In addition to these citations, the author also includes a graph later on in the body of the article as well as several mentions of different Professors and Doctorate holders at prestigious and well-known universities (MIT, University of East Anglia, Manchester Met University and Aarhus) to increase the endorsement value of the article.

In discussing climate change, the author also creates a false dilemma between anthropogenic climate change and short-term phenomena like El Niño. The author presents these as opposing or competing events, when in fact they have little relation. Climate change does not imply warming temperatures everywhere all the time, and a year of slowing ice loss does not ‘disprove’ climate change as a trend. The author uses this false dilemma, straw manning or oversimplifying the situation, to jump to the erroneous conclusion that corrupt scientists are fighting to “preserve the fiction” that human activity is responsible for climate change.

At this point, we have an appealing narrative framing (us vs them, distrust of experts and the ‘mainstream’, etc.) and an argument that relies on logical fallacies. Along with design elements and discursive support from ‘viral’ sharers on social media, this makes the article successful in attracting viewers to land and stay on the web page when arriving from social media. Whether the article actually convinces readers of its narrative or argument is not necessarily measurable, but it is easy to imagine how the article could, at least, successfully sow seeds of doubt in the mind of an unsuspecting reader.
In conclusion, fake news is not a new phenomenon. Yet, the current proliferation of fake news presents a unique challenge, something distinct from ‘typical’ misinformation in its complexity, distribution, and decentralization. The unprecedented increase in availability of technological devices, internet connections, and online sources of information means that any person who is in possession of a device with an internet connection can potentially become a consumer or distributor of fake news.

This phenomenon makes the current era of fake news particularly challenging. We are faced with the dilemmatic convergence of inclusivity in publishing and increased difficulty in judging credibility of online information. The internet (and social media in particular) has ‘democratised’ media creation, allowing users to quickly and easily share stories, articles, photos, and videos with others. This has brought clear benefits, as social media can empower individuals to express themselves, organise, and access information in ways that would not have been possible otherwise. Creating and sharing content globally has never been easier. Yet there are also risks and drawbacks with this changing communication landscape. It can be difficult to verify sources of information, and the structure of social media websites incentivises sensationalism and emotional engagement.
Despite extensive efforts to establish gatekeeping or verification mechanisms, the fleeting nature of digital information simply does not allow for policing of shared information. And while this lack of policing is an excellent development from the perspective of including the public and traditionally marginalised populations, the layperson and the citizen journalist, there is also wide scope for manipulation of this freedom. As Van Zyl and colleagues point out, digital content is easy to publish anonymously, and easily plagiarised and altered.

1. Abbott (2002) as quoted in Tamul, Daniel J. and Jessica Hotter. (2019) “Exploring Mechanisms of Narrative Persuasion in a News Context: The Role of Narrative Structure, Perceived Similarity, Stigma, and Affect in Changing Attitudes.” Collabra: Psychology
2. Banet-Weiser, S. (2012). Authentic™. In Authentic™. New York University Press.
3. Baptista, J. P., & Gradim, A. (2020). Understanding fake news consumption: A review. Social Sciences, 9(10), 185.
4. Barber, J. F. (2020). Fake News or Engaging Storytelling?. Radio's Second Century: Past, Present, and Future Perspectives, 96.
5. Benjamin, D., Por, H. H., & Budescu, D. (2017). Climate change versus global warming: who is susceptible to the framing of climate change?. Environment and Behavior, 49(7), 745-770.
6. Boler, M., & Davis, E. (2018). The affective politics of the “post-truth” era: Feeling rules and networked subjectivity. Emotion, Space and Society, 27, 75-85.
7. Brites, M. J., Amaral, I., & Catarino, F. (2019). The era of fake news: digital storytelling as a promotion of critical reading. In INTED2019 Proceedings (pp. 1915-1920). IATED.
8. Turner, V. W., & Bruner, E. M. (1986). The anthropology of experience.
9. Bryanov, K., & Vziatysheva, V. (2021). Determinants of individuals’ belief in fake news: A scoping review determinants of belief in fake news. PLoS One, 16(6), e0253717.
10. Burkhardt, J. M. (2017). History of fake news. Library Technology Reports, 53(8), 5-9.
11. Busselle, R., & Bilandzic, H. (2009). Measuring narrative engagement. Media psychology, 12(4), 321-347.
12. Cable, J., & Mottershead, G. (2018). 'Can I click it? Yes you can': Football journalism, Twitter, and clickbait. Ethical Space, 15(1/2).
13. Druckman, J. N. (2001). The implications of framing effects for citizen competence. Political behavior, 23, 225-256.
14. Eysenbach, G., & Köhler, C. (2002). How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews. Bmj, 324(7337), 573-577.
15. Fogg, B. J. (2003, April). Prominence-interpretation theory: Explaining how people assess credibility online. In CHI'03 extended abstracts on human factors in computing systems (pp. 722-723).
16. Hong, S. H. (2020). “Fuck Your Feelings”: The Affective Weaponization of Facts and Reason. In Affective Politics of Digital Media (pp. 86-100). Routledge.
17. Horne, B. D., & Adali, S. (2017, May). This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. In Eleventh international AAAI conference on web and social media.
18. Introne, J., Iandoli, L., DeCook, J., Yildirim, I. G., & Elzeini, S. (2017, July). The collaborative construction and evolution of pseudo-knowledge in online conversations. In Proceedings of the 8th International Conference on Social Media & Society (pp. 1-10).
19. Johnson, A. (2017). The multiple harms of sea lions. Harmful Speech Online, 13.
20. Machete, P., & Turpin, M. (2020). The use of critical thinking to identify fake news: A systematic literature review. In Responsible Design, Implementation and Use of Information and Communication Technology: 19th IFIP WG 6.11 Conference on e-Business, e-Services, and e-Society, I3E 2020, Skukuza, South Africa, April 6–8, 2020, Proceedings, Part II 19 (pp. 235-246). Springer International Publishing.
21. Meneses, J. P. (2018). Sobre a necessidade de conceptualizar o fenómeno das fake news. Observatorio (obs*), (1), 37-53.
22. Metzger, M. J. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American society for information science and technology, 58(13), 2078-2091.
23. Mikołajczak, M., & Bilewicz, M. (2015). Foetus or child? Abortion discourse and attributions of humanness. British Journal of Social Psychology, 54(3), 500-518.
24. Musi, E., & Reed, C. (2022). From fallacies to semi-fake news: Improving the identification of misinformation triggers across digital media. Discourse & Society, 33(3), 349-370.
25. Nuzzo, R. (2015). Fooling ourselves. Nature, 526(7572), 182.
26. Pengnate, S. F., Chen, J., & Young, A. (2021). Effects of clickbait headlines on user responses: An empirical investigation. Journal of International Technology and Information Management, 30(3), 1-18.
27. Polletta, F., & Callahan, J. (2019). Deep stories, nostalgia narratives, and fake news: Storytelling in the Trump era. Politics of meaning/meaning of politics: Cultural sociology of the 2016 US presidential election, 55-73.
28. Quinn, Ben. “Piers Corbyn Arrested Over Leaflets Comparing Vaccine Programme to Auschwitz.” The Guardian, 5 Feb. 2021, www.theguardian.com/uk-news/2021/feb/04/piers-corbyn-arrested-over-leaflets-comparing-covid-vaccine-programme-to-auschwitz.
29. Ravaja, N., Saari, T., Kallinen, K., & Laarni, J. (2006). The role of mood in the processing of media messages from a small screen: Effects on subjective and physiological responses. Media Psychology, 8(3), 239-265.
30. Richardson, R. C. (2017). Heuristics and satisficing. A companion to cognitive science, 566-575.
31. Ryan, M. L., Ruppert, J., & Bernet, J. W. (Eds.). (2004). Narrative across media: The languages of storytelling. U of Nebraska Press.
32. Schuldt, J. P., Konrath, S. H., & Schwarz, N. (2011). “Global warming” or “climate change”? Whether the planet is warming depends on question wording. Public opinion quarterly, 75(1), 115-124.
33. Stein, N. L. (1982). What's in a story: Interpreting the interpretations of story grammars. Discourse Processes, 5(3-4), 319-335.
34. Tandoc, E. C., & Seet, S. K. (2022). War of the Words: How Individuals Respond to “Fake News,”“Misinformation,”“Disinformation,” and “Online Falsehoods”. Journalism Practice, 1-17.
35. Watson, C. A. (2018). Information literacy in a fake/false news world: An overview of the characteristics of fake news and its historical development. International Journal of Legal Information, 46(2), 93-96.
36. Zompetti, J. P. (2019). The Fallacy of Fake News: Exploring the Commonsensical Argument Appeals of Fake News Rhetoric through a Gramscian Lens. Journal of Contemporary Rhetoric, 9.
37. Van Zyl, A., Turpin, M., & Matthee, M. (2020). How can critical thinking be used to assess the credibility of online information?. In Responsible Design, Implementation and Use of Information and Communication Technology: 19th IFIP WG 6.11 Conference on e-Business, e-Services, and e-Society, I3E 2020, Skukuza, South Africa, April 6–8, 2020, Proceedings, Part II 19 (pp. 199-210). Springer International Publishing.


Project: DOMINOES Digital cOMpetences INformatiOn EcoSystem  ID: 2021-1-RO01-KA220-HED-000031158
The European Commission’s support for the production of this publication does not constitute an endorsement of the contents, which reflect the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.


Ivan, Cristina; Chiru, Irena; Buluc, Ruxandra; Radu, Aitana; Anghel, Alexandra; Stoian-Iordache, Valentin; Arcos, Rubén; Arribas, Cristina M.; Ćuća, Ana; Ganatra, Kanchi; Gertrudix, Manuel; Modh, Ketan; Nastasiu, Cătălina. (2023). HANDBOOK on Identifying and Countering Disinformation. DOMINOES Project https://doi.org/10.5281/zenodo.7893952