6. PUBLIC POLICY ANALYSIS

WHOLE-OF-SOCIETY APPROACH

Valentin Stoian-Iordache

Introduction

The chapter analyzes the main policies against disinformation adopted by the European Union and by Romania, Malta and Spain.

It identifies two policy models and argues that most actors have chosen an approach based on increasing media literacy competences, on making disinformation more difficult to spread, and on making high-quality information easily accessible. Finally, the section proposes the "whole-of-society" approach as a solution to disinformation, going beyond policies enacted by authorities to empower actors across society to confront the challenge of disinformation. In the first part, the literature in the field is summarized and a secondary analysis of the relevant policy models is conducted.

The literature is presented in chronological order, focusing on recent academic works in the field. Insights from disciplines such as communication sciences, law and political science are leveraged to understand how policy models evolved and were implemented by national and supra-national actors.

Secondly, the section presents EU-level policies, with a focus on the Code of Practice on Disinformation and on the several Communications issued by the European Commission on the topic. These documents are analyzed to highlight the European Commission's "light touch" approach, designed to avoid coercive content removal. The next sections are dedicated to the national models in three countries: Romania, Malta and Spain. These are presented with the aim of understanding how different countries approached the issue of disinformation in their own ways, despite being part of the EU and having to follow the same overall legislation.

Finally, the theoretical model elaborated by Ivan, Chiru and Arcos (2021), entitled the "whole-of-society approach", is adapted and employed to ground a solution to the problem of disinformation. While the model was developed to support a nation's intelligence efforts, here it is adapted to the national-level effort to combat disinformation.

Digital competences addressed

1.1 browsing, searching and filtering data, information and digital content;
1.2 evaluating data, information and digital content;
2.3 engaging in citizenship through digital technologies.

Main research questions addressed

● What policy models to combat disinformation have been adopted by the EU and by member states?
● What solutions can we propose to address the problem of disinformation?  

Combating the effects of disinformation in the online environment – setting the scene

Saurwein and Spencer-Smith (2020) analyze European and national-level policies against disinformation by using two theoretical frameworks: "assemblage" and "multi-level governance". The ecosystem of disinformation-producing actors is described as a socio-technical assemblage in which producers and sharers of disinformation, driven by both political and financial motivations, interact. Social media users receive and re-share disinformation, while algorithms keep people in the same "content bubble". Policy-makers who engage in the struggle against disinformation are, in turn, presented as part of a multi-level governance network.
According to the authors, the EU does not aim to forbid disinformation, but to raise its costs to the point where spreading it is no longer worthwhile. The European Commission understands disinformation as a form of market failure, provoked by the fact that "bad information" is "cheap" while "good information" is expensive. Thus, the overall policy direction has been to "demonetize bad information" and subsidize high-quality journalism (Saurwein and Spencer-Smith, 2020, 5-6).

By contrast, the authors present the cases of France, Germany and the UK. The first two adopted a "tough approach", including the banning of pieces of disinformation, while the latter undertook a far lighter form of regulation. France introduced a law allowing judges to immediately order the removal of online content proved to be disinformation. The UK, on the other hand, established guidelines for potential future regulation, including the imposition of a "duty of care" on the platforms. Similarly to France, Germany adopted a tough approach based on the removal of unlawful content (Saurwein and Spencer-Smith, 2020, 9-10).

Marsden, Meyer and Brown (2020) also discuss EU-level policies against disinformation. They argue in favor of a model called co-regulation, which should include actors such as state regulators, civil society and social media providers. According to the authors, this is the only way to ensure that policies against disinformation are compatible with freedom of speech, as articulated in several CJEU decisions that disallowed AI-enhanced content-based filters. Jason Pielemeier (2020) praises policies undertaken by platforms to decrease the costs of high-quality information. This is, in his view, the only way to counter disinformation in a manner compatible with freedom of speech, given that sanctioning disinformation under criminal law is almost impossible. Pielemeier (2020, 933-944) also praises the EU Code of Practice on Disinformation, which he presents as a way to induce platforms to self-regulate before any regulation is imposed on them.

Durach, Bargaoanu and Nastasiu (2020) follow Marsden, Meyer and Brown's (2020) approach in identifying four models of regulation: a) self-regulation by platforms; b) co-regulation between supra-national and national authorities, on the one hand, and private actors, on the other; c) direct regulation; and d) audience-centered regulation (or demand-side solutions), which focuses on the audience through measures such as increasing media literacy and supporting fact-checking. Audience-based regulation is exemplified by fact-checking organizations such as EUvsDisinfo, Correctiv (Germany) or Demagog (Czech Republic) and by media literacy programs promoted through "Media Literacy Week". Similarly to Marsden, Meyer and Brown (2020), Durach, Bargaoanu and Nastasiu (2020) also defend co-regulation as the best equilibrium between efficiency and legitimacy (for more on media literacy see also 3.3).

Pherson, Mort Ranta and Cannon (2021) employ scenario analysis to present possible futures of the regulation models on disinformation. They rely on two drivers: who initiates the policy (government or private sector) and the type of policy adopted (content-based or user-focused). Crossing the two drivers, the authors elaborate four possible scenarios (see the illustrative sketch after the list):
      ● "Pinocchio warnings" - government-mandated warnings on suspicious content;
      ● "the Alt-net" - and alternative internet created by the government which one can access only after extensive verification;
      ● "Rigid gateways" in which Internet providers establish a protocol for verifying content and a standards board;
      ● "T-cloud", a space which is handled by internet providers where only certified users can post information and which is accessible for a fee.

Hedvig Ördén (2019, 2020) claims that EU policies on disinformation have pursued incoherent aims: both a unified narrative and content pluralism, insofar as the latter includes only "high-quality content". The main reason for this, according to Ördén (2019, 2020), is a competition between different epistemic communities: the security/defense establishment, which sees informational coherence as the key value to be defended, and the media/journalistic/fact-checker community, which focuses primarily on information pluralism as the relevant referent object. Ördén foresees incoherence in the implementation phase, which will give rise to more conflict between these communities.
  

Case study 1 - EU-level policies  

Valentin Stoian-Iordache.

The EU Commission first showed interest in the problem of disinformation after becoming aware of its impact on the political events of 2016 in the US and the UK. The Commission requested the elaboration of a report, issued in 2018 and entitled "A multi-dimensional approach to disinformation". The report discusses how disinformation harms a democratic society, presents the measures taken by platforms and proposes five directions of action and policy goals. These can be summarized as: increasing transparency in online information, especially by flagging paid or misleading posts; creating a transparency index for sources of information; improving media literacy and helping young adults identify fake news; developing tools for assessing the veracity of information, both by empowering journalists and through software tools; and creating a diverse media ecosystem free of government interference (EU Commission 2018a).

The overarching document setting out the Commission's policies in the area is the "Tackling online disinformation: a European approach" Communication (EU Commission 2018b). It showcases a number of principles on which the Commission planned to ground its actions and several measures it intended to adopt. The principles can be summarized as: transparency regarding the origin of information; diversity in the information ecosystem, so that citizens can make informed decisions; fostering the credibility of information by showing which sources are trustworthy; and fashioning inclusive solutions by increasing media literacy and raising awareness (EU Commission 2018b).

The Communication (EU Commission 2018b) and its follow-up report, issued in December 2018 (EU Commission 2018c), presented a series of measures and the way in which they would subsequently be implemented. For example, before adopting the Code of Practice on Disinformation, the Commission convened a multi-stakeholder forum which represented the initial meeting of the signatories. The Commission also aimed to create a network of fact-checkers and began workshops with the fact-checking community. It proposed new funding calls to support technological innovation in the area of combating disinformation, such as blockchain and algorithms that automatically identify disinformation (see also Chapter 5 for more information on blockchain solutions). Further, the Commission supported the integrity of the 2019 European elections by helping electoral authorities exchange good practices, promoted media literacy by organizing a week dedicated to it and by making its promotion mandatory for member states, and established funding calls to support independent journalists.

In addition to the "Tackling online disinformation: a European approach" Communication, the European Commission and the High Representative for Foreign Affairs jointly adopted the Action Plan against Disinformation in December 2018 and evaluated its implementation in 2019 (EU Commission and HRVP 2018; EU Commission and HRVP 2019). The Action Plan was divided into four major pillars: improving the capabilities to detect, analyze and expose disinformation; strengthening coordinated and joint responses; mobilizing the private sector; and raising awareness and improving societal resilience.

The two institutions increased their strategic communication capabilities, especially those of the EEAS, and implemented a Rapid Alert System on disinformation, which proved especially useful in the case of the Notre-Dame Cathedral fire. They also held seminars for journalists on the topic of disinformation and improved the communication of EU-level policies on the topic. The Commission also supported the creation of a European branch of the International Fact-Checking Network, launched the Social Observatory for Disinformation and Social Media Analysis (SOMA) and helped national election authorities improve security in the 2019 European Parliament elections.

Another direction of action for the European Commission has been coordinating the self-regulation of social media through the Code of Practice on Disinformation (EU Commission 2018d). The implementation of the initial version of the Code was assessed, and a revised, improved version was issued in 2022. The Code relies on five directions of action:

1. Improving the scrutiny of ad placements, i.e., stopping the monetization of fake news by not allowing sites which misrepresent themselves to place ads on the platforms;
2. Making political and issue-based advertising more transparent by labelling it as such and showing who funded it;
3. Eliminating the automated behavior of fake accounts (bots);
4. Empowering consumers by making quality content more visible;
5. Empowering the research community by making data available and easy to use.

The Commission assessed the implementation of the Code of Practice in 2020 and was not satisfied with the progress achieved (EU Commission 2020a). Despite advances in some areas, such as eliminating misleading advertisements (hundreds of thousands of actions by Google and tens of thousands by Twitter), improved labeling or outright bans of political ads, and crackdowns on inauthentic behavior, the overall appraisal was rather negative. One of the main criticisms was that platforms evaluated only the advertisements they hosted themselves and not the content of materials shared by users. The Commission was also dissatisfied with the insufficient verification of issue-based advertising and the inadequate use of tools to increase the visibility of high-quality news. The lack of proper cooperation with fact-checkers and the insufficient release of data to researchers were also negatively appraised. Finally, the Code was seen as lacking clear definitions, which inhibited coordinated action in ambiguous areas, and as paying insufficient attention to specific issues such as the micro-targeting of political advertising and fairness of access to political advertising (EU Commission 2020a).

Another document, issued specifically for the period of the pandemic, was the "Tackling COVID-19 disinformation - Getting the facts right" Communication (EU Commission and HRVP 2020). It focuses on the need for better strategic communication aimed at combating disinformation narratives, enhancing the Rapid Alert System, improving the exchange of best practices on issues such as micro-targeting, and cooperating with the WHO to promote correct information on COVID-19. The Communication also grants an important role to the platforms, especially as promoters of correct content on the virus, such as that issued by the WHO, and through the promotion of fact-checked opinions on the pandemic. Further, the Commission aims to raise awareness and increase media literacy, especially by funding Erasmus+ and European Solidarity Corps projects on the issue of disinformation (EU Commission and HRVP 2020).

The updated and improved Code of Practice on Disinformation was issued by the European Commission in 2022 (EU Commission 2022a). It was signed by 33 social media companies and trade federations (EU Commission 2022b). Building on the previous version, the reinforced Code includes 44 commitments across the five pillars. The novelties include strengthening the demonetisation of disinformation: better oversight of those buying advertising on the platforms, better cooperation with fact-checkers (including a commitment to report the number of third-party audits of buyers of disinformation), better control of intermediaries buying advertising in the name of other websites, and improved verification of the content of third-party messages, including advertisements, with the aim of removing disinformation.

On the issue of political advertising, no agreement on the definition of political and issue-based advertising could be reached. In the enhanced Code, parties committed to cooperating towards such a definition and to putting in place mechanisms that clearly distinguish political advertising and paid-for content, even when these are further relayed by individuals through messaging apps. According to the Code, sponsors of political ads must be clearly identifiable and the main terms of the contracts signed with them have to be made public. Further, political and issue-based ads must be archived in a repository which should be made public (EU Commission 2022a).

Platforms also undertook to eliminate a wider array of inauthentic behavior, such as malicious deep fakes, hack-and-leak operations, fake accounts, bot-driven amplification and the use of influencers, and to combat AI-generated content through legally acceptable automated verification systems. Finally, signatories agreed to exchange information about malicious practices (EU Commission 2022a).

On the issue of media literacy and critical thinking, platforms agreed to support their development, especially by conducting campaigns highlighting the modus operandi of malicious actors. Moreover, parties to the Code undertook to prioritize quality content and to release information about the criteria used to prioritize and de-prioritize information. Signatories also committed to verifying the authenticity of digital content through automated tools and to cooperating more closely with fact-checkers, who would flag disinformation and even receive direct access to the platforms. In order to improve the quality of information online, platforms agreed to attach warnings from authoritative sources to pieces labeled as disinformation, but also to include a system for contesting abusive flagging. In order to improve cooperation with the scientific community, the Code foresaw the release of anonymized datasets to interested researchers and, in very specific cases, of personal data, but only to researchers who have received security vetting (EU Commission 2022a).

With reference to fact-checking, signatories agreed to financially support independent fact-checking organizations and to compile and publicly issue reports on the way the decisions of fact-checkers were implemented. To better enforce the Code, the Commission will create a Transparency Centre and a task force (EU Commission 2022a).

While the issue of information attacks is not mentioned in the Security Union Strategy (EU Commission 2020b), its implementation report (EU Commission 2022c) focuses on the measures taken against Russian propaganda, such as the banning of RT and Sputnik. Finally, the Strategic Compass (European Council 2022) lists the actions which the EU will take against disinformation: increasing the ability to understand and analyze the threat, imposing significant costs on perpetrators, and helping independent media support itself financially. The Compass also envisions the creation of a toolbox to strengthen the Union's strategic communication capacities, especially those of CSDP missions abroad, and the expansion of the Rapid Alert System. Eventually, according to the Compass, a data space storing information on all relevant information-related incidents will be created (European Council 2022).

In the next sections, we will examine the efforts made at the national level to counter disinformation in three countries: Spain, Malta and Romania.

Case study 2 - Spain  

Cristina Arribas, Manuel Gertrudix, Ruben Arcos

In addition to the measures coordinated with the European Union, since 2018 Spain has carried out various actions in the fight against disinformation through its institutions and has established permanent coordination mechanisms between the different bodies of the Public Administration, most notably the Permanent Commission to Combat Disinformation, established in March 2019. Disinformation is addressed in the National Cybersecurity Strategy of 2019, which focuses more on malicious actors - state and non-state - than on disinformation as a specific risk. The National Security Strategy of November 2021 anticipates the further development of a national strategy to combat disinformation campaigns.

As mentioned in section 4.1, in 2020 Spain also implemented a Procedure of Action against Disinformation, adopted through a ministerial order. It aims to transpose three European documents into Spanish legislation and to implement them at the national level. It assigns the main responsibility for the fight against disinformation to a Permanent Commission established by the Department of Homeland Security, which coordinates inter-ministerial actions.

The procedure relies on four levels of action, ranging from passive monitoring to detect disinformation attacks, through the analysis of any suspected incidents and the briefing of decision-makers on the nature of an attack, to deciding on a response and coordinating it through the National Security Council. For management within the framework of the European Union, four levels are implemented (a minimal illustrative sketch follows the list):

● Level I (national) involves collaboration with StratCom to identify and analyze disinformation events, especially those related to Spain;
● Level II refers to the exchange of information to support actions against disinformation campaigns using the RAS system;
● Level III concerns the exchange of information to support the decision-making process at the political level through the State Secretary of Communication;
● Level IV involves decision-making and coordination at the political level of the National Security Council.
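To make the escalation logic easier to follow, here is a minimal, purely illustrative Python sketch that models the four levels as an ordered enumeration. The names and the escalate() helper are hypothetical shorthand for the procedure described above; the actual procedure is an administrative workflow, not software.

# Illustrative model of the four EU-framework levels in Spain's Procedure
# of Action against Disinformation. All identifiers are hypothetical.
from enum import IntEnum

class Level(IntEnum):
    IDENTIFY_AND_ANALYZE = 1  # Level I: collaboration with StratCom
    RAS_EXCHANGE = 2          # Level II: information exchange via the RAS
    POLITICAL_SUPPORT = 3     # Level III: informing political decision-making
    NSC_COORDINATION = 4      # Level IV: decision and coordination by the NSC

def escalate(current: Level) -> Level:
    """Move one step up the ladder, capping at the National Security Council."""
    return Level(min(current + 1, Level.NSC_COORDINATION))

print(escalate(Level.IDENTIFY_AND_ANALYZE).name)  # -> RAS_EXCHANGE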

The institutional system addressing disinformation is structured as follows:

  1. The National Security Council.
  2. The Situation Committee.
  3. The Secretary of State for Communication.
  4. The Standing Committee against Disinformation.
  5. The competent public authorities: Secretary of State for Communication, Presidency of the Government (DSN), National Intelligence Center (CNI), Communication offices of Ministries, and other relevant bodies.
  6. The private sector and civil society, the media, digital platforms, academia, the technology sector, non-governmental organizations, and society at large.  

Case study 3 - Malta  

Aitana Radu

As previously discussed in Chapter 4, Article 82 of the Maltese Criminal Code criminalises the spreading of false news, and makes it an offence to “maliciously spread false news which is likely to alarm public opinion or disturb public good order or the public peace or to create a commotion among the public or among certain classes of the public” (Criminal Code (Malta), Art. 82).

The offence carries a possible three-month prison sentence. This provision is aligned with Wardle and Derakhshan's definition of disinformation, as there is a requirement of (a) false information, (b) intention (i.e., maliciously), and (c) harm (i.e., to public opinion or good order). Furthermore, the provision is also partially aligned with the European Commission's definition of disinformation ('false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm'), but lacks the element of economic gain.

Similar provisions can be found in Article 9(1) of the Press Act, which stipulates that
“whosoever shall maliciously, by any means mentioned in article 3, spread false news which is likely to alarm the public opinion, or disturb public order or the public peace, or to create a commotion among the public or among certain classes of the public, shall on conviction be liable to imprisonment for a term not exceeding 3 months or to a fine (multa) or to both such imprisonment and a fine: Provided that, if any disturbance ensues in consequence of the offence or if the offence has contributed to the occurrence of any disturbance, the offender shall be liable to imprisonment for a term of not less than one month but not exceeding six months and to a fine (multa)."

Moreover, while at present Malta has no specific internet content blocking/filtering laws, it should be noted that the vast majority of its laws, including criminal and civil laws, are largely technologically neutral and can therefore be interpreted to cover related online activities (Camilleri, 2021), especially if Article 82 of the Criminal Code is read together with Article 9(1) of the Press Act.

Within the context of the COVID-19 pandemic, the Maltese state carried out various actions to ensure a continuous flow of official information on the evolution of the pandemic:
» Holding daily briefings by the Superintendent of Public Health, Charmaine Gauci, which were transmitted on both television and social media and included a Q&A session where both journalists present and people viewing online could ask questions;
» Publishing daily information updates on the number of cases/vaccines on the official Sahha page.

In addition to these measures, the government, through the Department of Information, ran a social media campaign developed by the WHO, aimed at flagging disinformation in the context of the COVID-19 pandemic.
 

Case study 4 - Romania 

Valentin Stoian-Iordache

Romania did not adopt legislation allowing the state to prohibit disinformation, with the sole exception of the Decree establishing a state of emergency adopted in March 2020 (Decree 195/2020). Romania declared a state of emergency on the 16th of March 2020, which was in force until the 15th of May the same year (Law 55/2020). Between May 2020 and March 2022, Romania maintained a “state of alert”, which included less severe restrictions. The only legislation that expressly allowed authorities to forbid content online and to eliminate websites was in force during the two months of the state of emergency.

Decree 195/2020 permitted authorities to block access to websites considered to spread disinformation. On this basis, the National Authority for the Administration and Regulation of Communications (ANCOM) blocked 15 websites which published disinformation related to COVID-19 (ANCOM, 2020). After the end of the state of emergency, there was no legal basis to keep the websites closed and they reopened (Europa Liberă, 2020). No new legal acts were adopted in the wake of the Russian invasion of Ukraine, but Romania directly applied EU Regulation 2022/350 of 1 March 2022 and thus banned Russian government-linked websites such as RT and Sputnik.

An EU-funded project entitled "Strategic Planning for consolidating resilience against disinformation and hybrid threats" was launched by the Ministry of Foreign Affairs in cooperation with the National University of Political Studies and Public Administration. The project will produce a public policy designed to strengthen the MFA's ability to combat disinformation in its area of responsibility (Ministry of Foreign Affairs, 2020).

Romania approaches disinformation as part of the wider concept of hybrid threats. As such, it is mentioned in the Strategy for National Defence adopted in 2020, where disinformation is described as one of the possible tools employed by hostile actors. According to the document, disinformation can be used to weaken public support for the state's policies, and the lack of a robust legal framework for combating disinformation is considered a vulnerability of the Romanian state (Romanian Presidency 2020).

Within the context of the COVID-19 pandemic, the government initiated a public information campaign encouraging people to rely only on official sources, supplemented by several information campaigns on the benefits of vaccination and of wearing masks (Romanian Government 2020). The first campaign featured a video explaining the main markers of fake news, such as emotionally charged expressions, miracle cures for the coronavirus and the recommendation to share the content further through social media and messaging apps. The second featured posters of people wearing masks, asking citizens to show that "they care" by wearing masks; the person in the picture addressed viewers, saying that they were wearing a mask so that "everyone's effort was not in vain" (Timpul, 2020). Further, daily government briefings presented the number of coronavirus cases and deaths and recommended actions for the general population (Ministry of Internal Affairs, 2022).

The whole-of-society approach - a possible solution to the problem of disinformation 

Irena Chiru

The policies recently promoted at the European level certainly illustrate the political determination to fight disinformation while integrating and correlating this effort with the EU's strategic communication strategies and plans. In addition, in different national contexts, proactive and reactive counter-disinformation strategies have been implemented; however, the analysis of different national case studies shows significant differences, not so much in scope as in pace, reach, systematization and impact.

As the existing debate on the topic has already highlighted, the complexity of disinformation and the rapid and multiple shifts in its manifestations require not just "more initiatives" but anticipatory, innovative and inclusive approaches. Given the specificity of the phenomenon, these cannot be attributed or assigned only to policy-makers, media platforms or education providers. Instead, a whole-of-society approach is needed, one that embraces the principles of multi-stakeholderism (Ivan, Chiru, Arcos 2021) and of proactive and adaptive behaviors:

» Multi-stakeholderism - promoting joint instead of siloed initiatives, collaborative instead of competitive approaches, and diverse social, cultural and professional backgrounds instead of the traditionally invoked expertise in countering disinformation (e.g. media studies, but also first-line practitioners working with vulnerable groups);
» Proactive behaviors - anticipating new disinformation trends and investigating the side-effects or unintended outcomes of well-meaning interventions. This must be preceded by a good understanding of the vested interests, the actors, and the economic and political mechanisms of disinformation;
» Adaptive behaviors - designing measures of intervention based on a cultural and historical awareness of local specificities that may make people more or less skillful in detecting disinformation (e.g. culturally embedded fears and ideas about global power struggles may predispose people to believe distorted content while also offering them a psychological payoff).

Beyond involving all relevant actors - governmental actors, digital and tech companies, media outlets, civil society organizations (CSOs), citizens, and education and research entities - a whole-of-society model of intervention promotes interaction and jointly generated education and knowledge that build societal resilience. According to this model, a new framework is created, including channels that facilitate the rapid and impactful communication of relevant information between state and non-state actors, one in which, concurrently:
» Governments encourage independent, professional journalism, avoid censoring content, make online platforms liable for misinformation, and fund efforts to enhance news literacy and independent academic research that can inform media interventions and public understanding;
» The media industry focuses on high-quality journalism that builds trust and attracts larger audiences, and calls out fake news and disinformation without legitimizing them;
» Technology companies strengthen online accountability and invest in technology to detect fake news and identify it for users through algorithms and crowdsourcing;
» Educational institutions develop news literacy programs and create opportunities for expert-led independent analysis, along with business models underpinning this work;
» Citizens take cognitive distance, develop meta-cognitive awareness and dissociative thinking, question institutions and agents of power, and protect themselves from false news and disinformation by following a diversity of people and perspectives.

In conclusion, countries have adopted different approaches to tackling disinformation. As observed in the literature, two main groups of policy models can be discerned: one based on a strong response, including removing content from the internet, and another relying on a softer approach. The latter involves supporting high-quality journalism, empowering fact-checkers and making disinformation more "expensive" to spread. While the European Commission, Romania and Spain opted for the second model, Malta comes closer to the first, even if it did not go as far as Germany or France (fake content is not removed immediately, but only if negative consequences occur).

While policies to combat disinformation have taken many forms, addressing the phenomenon comprehensively requires including actors from across the whole of society: journalists, governments, social media platforms and, especially, education institutions, which need to promote media literacy and critical thinking. This approach, dubbed "whole-of-society", is what the chapter proposes as the only solution wide-ranging enough to meaningfully combat disinformation.

1. ANCOM 2020, Decizii ANCOM pentru implementarea prevederilor Decretului nr. 195 din 16 martie 2020 și Decretului nr. 240 din 14 aprilie 2020, https://www.ancom.ro/decizii-decret-stare-de-urgenta_6253 , accessed 17.08.2022
2. Bennett, A., & Checkel, J. T. (Eds.). (2015). Process tracing. Cambridge University Press.
3. Camilleri, C. (2021). Regulating disinformation on social media: a European perspective (Bachelor's thesis, University of Malta), https://www.um.edu.mt/library/oar/handle/123456789/87671, Accessed 8.02.2023
4. Criminal Code, Chapter 9 of the Laws of Malta, available at https://justice.gov.mt/en/pcac/Documents/Criminal%20code.pdf;
5. Decree 195/2020, https://legislatie.just.ro/Public/DetaliiDocumentAfis/223831, accessed 19.08.2022
6. Department of Homeland Security of Spain https://www.dsn.gob.es/es/actualidad/sala-prensa/procedimiento-actuaci%C3%B3n-contra-desinformaci%C3%B3n
7. Durach, F., Bârgăoanu, A., & Nastasiu, C. (2020). Tackling disinformation: EU regulation of the digital space. Romanian journal of European affairs, 20(1).
8. Europa Liberă (2020), ANCOM: Au fost deblocate toate site-urile închise pentru fake-news [ANCOM: All websites closed for fake news have been unblocked], 15 May 2020, accessed 17.08.2022
9. European Commission (2018a), A multi-dimensional approach to disinformation: Report of the independent high level group on fake news and online disinformation. Publications Office of the European Union, 2018.
10. European Commission (2018b). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Tackling Online Disinformation: A European Approach COM (2018) 236 final.
11. European Commission (2018c) Report from the Commission the European Parliament, the European Council, the European Economic and Social Committee and the Committee of Regions on the implementation of the Communication "Tackling online disinformation: a European Approach" COM(2018) 794 final
12. European Commission (2018d), Code of Practice on Disinformation, https://digital-strategy.ec.europa.eu/en/library/2018-code-practice-disinformation, Accessed 25.07.2022
13. European Commission (2020a) Commission Staff Working Document: Assessment of the Code of Practice on Disinformation - Achievements and areas for further improvement, 10.9.2020, SWD(2020) 180 final
14. European Commission (2021) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: European Commission Guidance on Strengthening the Code of Practice on Disinformation 26.5.2021 COM(2021) 262 final
15. European Commission (2022a) The Strengthened Code of Practice on Disinformation 2022, https://digital-strategy.ec.europa.eu/en/library/2022-strengthened-code-practice-disinformation, Accessed 25.07.2022
16. European Commission (2022b) Signatories of the 2022 Strengthened Code of Practice on Disinformation, https://digital-strategy.ec.europa.eu/en/library/signatories-2022-strengthened-code-practice-disinformation, Accessed 25.07.2022
17. European Commission and HRVP (2018), Joint Communication to the European Parliament, the European Council, the European Economic and Social Committee and the Committee of Regions: Action plan against disinformation 5.12.2018 JOIN(2018) 36 final
18. European Commission and HRVP (2019), Joint Communication to the European Parliament, the European Council, the European Economic and Social Committee and the Committee of Regions Brussels: Report on the implementation of the Action Plan Against Disinformation 14.6.2019, JOIN(2019) 12 final
19. European Commission and HRVP (2020), Joint Communication to the European Parliament, the European Council, the European Economic and Social Committee and the Committee of Regions: Tackling COVID-19 disinformation - Getting the facts right 10.6.2020, JOIN(2020) 8 final
20. Ivan, C., Chiru, I., & Arcos, R. (2021). A whole of society intelligence approach: critical reassessment of the tools and means used to counter information warfare in the digital age. Intelligence and National Security, 36(4), 495-511.
21. Law 55/2020, https://legislatie.just.ro/Public/DetaliiDocument/225620, accessed 19.08.2022
22. Marsden, C., Meyer, T., & Brown, I. (2020). Platform values and democratic elections: How can the law regulate digital disinformation?. Computer law & security review, 36, 105373.
23. Ministry of Foreign Affairs 2020, “Planificare strategică privind consolidarea rezilienței în fața dezinformării și a amenințărilor de tip hibrid” [ Strategic planning on consolidating resilience against disinformation and hybrid threats], https://www.mae.ro/node/55926, accessed 19.08.2022
24. Ministry of Foreign Affairs, European Union, and Cooperation of Spain https://www.exteriores.gob.es/es/PoliticaExterior/Paginas/LaLuchaContraLaDesinformacion.aspx
25. Ministry of Internal Affairs, 2022, Informare COVID-19, Grupul de comunicare strategică [COVID information. Group for strategic communication], 3.03.2022, https://www.mai.gov.ro/informare-covid-19-grupul-de-comunicare-strategica-3-martie-ora-13-00-2/, accessed 19.08.2022
26. National Security Strategy 2021 https://www.dsn.gob.es/es/documento/estrategia-seguridad-nacional-2021
27. Ng, K. C., Tang, J., & Lee, D. (2021). The effect of platform intervention policies on fake news dissemination and survival: an empirical examination. Journal of Management Information Systems, 38(4), 898-930 .
28. Ördén, H. (2019). Deferring substance: EU policy and the information threat. Intelligence and National Security, 34(3), 421-437.
29. Ördén, H. (2020). Securing Judgement: Rethinking security and online information threats. Doctoral dissertation, Department of Political Science, Stockholm University.
30. Order PCM/1030/2020/ Procedure of Action against Disinformation https://www.boe.es/diario_boe/txt.php?id=BOE-A-2020-13663
31. Pherson, R. H., Mort Ranta, P., & Cannon, C. (2021). Strategies for combating the scourge of digital disinformation. International Journal of Intelligence and CounterIntelligence, 34(2), 316-341.
32. Pielemeier, J. (2020). Disentangling disinformation: What makes regulating disinformation so difficult?. Utah L. Rev., 917.
33. Press Act, Chapter 248 of the Laws of Malta, available at https://legislation.mt/eli/cap/248;
34. Romanian Government 2020, Get informed only from official sources https://gov.ro/ro/media/video/informeaza-te-doar-din-surse-oficiale&page=1, accessed 17.08.2022
35. Romanian Presidency, Strategia Națională de Apărare a Țării pentru perioada 2020-2024 [Strategy for National Defense 2020-2024], https://legislatie.just.ro/Public/DetaliiDocumentAfis/227499, 2020 accessed 17.08.2022
36. Saurwein, F., & Spencer-Smith, C. (2020). Combating disinformation on social media: Multilevel governance and distributed accountability in Europe. Digital Journalism, 8(6), 820-841.
37. Tangcharoensathien, V., Calleja, N., Nguyen, T., Purnat, T., D’Agostino, M., Garcia-Saiso, S., ... & Briand, S. (2020). Framework for managing the COVID-19 infodemic: methods and results of an online, crowdsourced WHO technical consultation. Journal of medical Internet research, 22(6), e19659.
38. Timpul 2020, “Poartă mască. Arată că îți pasă”, 08.07.2020, http://timpul.info/articol/23278/poarta-masca-arata-ca-iti-pasa.html, accessed 13.02.2023  


Project: DOMINOES Digital cOMpetences INformatiOn EcoSystem  ID: 2021-1-RO01-KA220-HED-000031158
The European Commission’s support for the production of this publication does not constitute an endorsement of the contents, which reflect the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.


Ivan, Cristina; Chiru, Irena; Buluc, Ruxandra; Radu, Aitana; Anghel, Alexandra; Stoian-Iordache, Valentin; Arcos, Rubén; Arribas, Cristina M.; Ćuća, Ana; Ganatra, Kanchi; Gertrudix, Manuel; Modh, Ketan; Nastasiu, Cătălina. (2023). HANDBOOK on Identifying and Countering Disinformation. DOMINOES Project https://doi.org/10.5281/zenodo.7893952