
Digital platforms as quasi-sovereign actors: Influence operations and Global Security Law


ESSAY

19 | 03 | 2026


Although platforms remain subject to national legislation, states struggle to exert complete control over the cyberspaces these platforms govern

In the image

A Facebook fans party organized in New York [Facebook]

Introduction

“These (Influence Operations) can use a combination of methods, such as false news, disinformation, or networks of fake accounts aimed at manipulating public opinion (we refer to these as “false amplifiers”). […] While information operations have a long history, social media platforms can serve as a new tool of collection and dissemination for these activities. Through the adept use of social media, information operators may attempt to distort public discourse, recruit supporters and financiers, or affect political or military outcomes.”[1]

These words were not written in an academic journal, a policy report, or a strategic plan: they come from a document published by Facebook in April 2017. Now, in 2025, Influence Operations (IOs) are taking on ever greater prominence in the context of hybrid warfare.

 Currently, IOs occur primarily through cyberspace, where human vulnerabilities are exploited to exert control over cognitive domains. As Facebook indicates, digital platforms serve as a new tool for collecting and disseminating information. However, as various authors have argued, digital platforms, as intermediaries, “acquire a quasi-sovereignty over the cyberspaces under their control.”[2]

This paper hypothesizes that, with regard to the transnational information environment, digital platforms act as quasi-sovereign actors: they shape information flows, amplify state and non-state narratives, and wield de facto regulatory power, establishing, enforcing, and adjudicating the rules. This entails a struggle to assert legal and political control over their operations, posing significant challenges to traditional concepts of state sovereignty and responsibility in Global Security Law. The EU has demonstrated, in an ambitious exercise of co-production, its ability to establish a legal framework that harnesses the quasi-sovereignty of digital platforms to defend its digital resilience. This model contrasts sharply with American “libertarianism” and Russian authoritarianism.

This leads to the following question: How do digital platforms function as quasi-sovereign actors in transnational influence operations, what does this imply for Global Security Law, and how has the EU sought to regulate their role in shaping information environments?

The methodology for this essay consisted of the compilation and analysis of legal sources, policy documents, and academic literature. Structurally, the paper consists of three parts: the first conceptualizes digital platforms as quasi-sovereign actors; the second examines the implications of this quasi-sovereign status for influence operations and Global Security Law; the third analyzes the EU regulatory model.

1. Digital platforms as quasi-sovereign actors: norm-setting, enforcement and adjudication.

1.1. Defining digital platforms

Defining “digital platforms” poses a challenge, since there is no universally accepted definition. The absence of a definition, however, does not preclude regulation at a global level. Platform law is understood as “comprehensive regulation between systems supported by international law and specific local regulation applicable to specific platforms (ecosystems).”[3]

Fabio Bassan notes that one of the great challenges is the multidisciplinarity of this field of study, which draws on perspectives from disciplines such as economics, sociology, computer science, and law; but this also leads him to emphasize that the concept must be understood from a perspective adjusted to the needs of Global Law. This leads him to define digital platforms as: “Hardware or software structures that provide technological services and tools, programs and applications for the distribution management and creation of free or paid digital content and services, including through the integration of multiple media (integrated digital platforms)”[4]

However, among the existing legislation, the definition provided in Art. 3 of the EU Digital Services Regulation (DSA) stands out:

“‘online platform’ means a hosting service that, at the request of a recipient of the service, stores and disseminates information to the public, unless that activity is a minor and purely ancillary feature of another service or a minor functionality of the principal service and, for objective and technical reasons, cannot be used without that other service, and the integration of the feature or functionality into the other service is not a means to circumvent the applicability of this Regulation.”[5]

For the purposes of this essay, this definition is the most useful: on the one hand, the EU legislative framework tends to influence international standards; on the other, the definition highlights the relevance of information and excludes offline services, allowing the analysis to focus on hosting services.

Likewise, A. Minbaleev notes that there are two simultaneous approaches to digital platforms: the first understands them as legal persons; the second, as information environments where agents interact.[6] Minbaleev favours the second approach.

1.2. Private governance and the exercise of quasi-sovereign authority

Wherever a virtual community exists, digital platforms have assumed forms of authority over their users that increasingly resemble functions traditionally associated with sovereignty.[7] This occurs in a cyberspace that, in its initial formation, promised to remain free from such exercises of authority. Although theirs is not a conventional territorial sovereignty, their capacity to control most transnational information has led various authors to characterize them as quasi-sovereign actors.[8]

This phenomenon resembles what Shoshana Zuboff describes in The Age of Surveillance Capitalism, where she identifies a new form of sovereignty administered by private actors:

“What we have seen in Facebook is a living example of the third modernity that instrumentarianism proffers, defined by a new collectivism owned and operated by surveillance capital. The God view drives the computations. The computations enable tuning. Tuning replaces private governance and public politics, without which individuality is merely vestigial. And just as the uncontract bypasses social mistrust rather than healing it, the post-political societal processes that bind the hive rely on social comparison and social pressure for their durability and predictive certainty, eliminating the need for trust. Rights to the future tense, their expression in the will to will, and their sanctification in promises are drawn into the fortress of surveillance capital. On the strength of that expropriation, the tuners tighten their grasp, and the system flourishes.”[9]

This quasi-sovereign character manifests above all in three interrelated practices: norm-setting, enforcement, and adjudication.[10] Together, these shape how information is produced, how it circulates, and how it is responded to.

Regarding the first of these practices, digital platforms must be understood as influential norm-setting actors in the digital ecosystem, not as neutral legal intermediaries or merely economic entities. Platforms establish standards through a wide range of regulatory instruments, including terms of service, community guidelines, content moderation rules, data governance policies, and algorithmic design choices. Although many of these rules lack official legal status, they operate as de facto binding standards for platform users, defining acceptable behavior in cyberspace interactions. On many occasions, states find their ability to regulate these spaces limited, or act too late.[11]

Regarding enforcement, this is carried out through content moderation systems, algorithmic ranking, labeling practices, and account sanctions. These mechanisms can amplify, marginalize, or eliminate narratives,[12] and therefore directly influence the probability of success of an influence operation. The algorithmic governance exercised by these platforms constitutes a form of power distinct from traditional censorship: it does not prohibit the circulation of information, but rather structures visibility, prioritizing exposure to some content over other.[13] In hybrid war contexts, knowing how to play the algorithm can be crucial for expanding a propaganda network and making a specific narrative prevail.
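The distinction between censorship and visibility-structuring described above can be illustrated with a deliberately simplified toy model. The sketch below is not any platform's actual ranking algorithm (real feed-ranking systems are proprietary and far more complex); it merely shows how an engagement-weighted ordering can leave every post available while still deciding which narrative reaches the audience first. All names and weights are hypothetical.

```python
# Toy illustration of engagement-weighted feed ranking.
# Assumption: posts carry normalized "engagement" and "recency" scores in [0, 1];
# the weights below are arbitrary, chosen only to demonstrate the effect.

def rank_feed(posts, engagement_weight=0.7, recency_weight=0.3):
    """Order posts by a weighted blend of engagement and recency.

    No post is removed: the function only reorders, which is precisely
    how visibility can be structured without formal censorship.
    """
    def score(post):
        return (engagement_weight * post["engagement"]
                + recency_weight * post["recency"])
    return sorted(posts, key=score, reverse=True)

# Hypothetical content items
posts = [
    {"id": "sober-analysis", "engagement": 0.2, "recency": 0.9},
    {"id": "outrage-bait",   "engagement": 0.9, "recency": 0.5},
    {"id": "fact-check",     "engagement": 0.4, "recency": 0.8},
]

ranked = rank_feed(posts)
# Nothing is deleted, yet the high-engagement item is surfaced first,
# so a narrative optimized for engagement gains disproportionate reach.
print([p["id"] for p in ranked])
```

Under these weights the engagement-heavy post outranks both the newer, more sober items, which is the structural effect the text attributes to algorithmic governance: exposure, not existence, is what the platform controls.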

Finally, in terms of adjudication, digital platforms have established mechanisms for resolving disputes over content moderation. Facebook's Oversight Board is a clear example of how digital platforms have internalized resolution processes that resemble judicial proceedings; notably, the Oversight Board has jurisdiction over conflicts involving extremist language or disinformation. But unlike ordinary courts, its adjudicative criteria ultimately depend on the private interests of the company rather than the common good.[14]

Taken together, these three practices position digital platforms as key actors in de facto digital governance, and therefore as actors able to exercise digital sovereignty. Although they are not Direct Participants in Hostilities, their control over the cognitive domain allows them to shape the scenario through which states carry out their influence operations, and even to participate in the escalation or de-escalation of tensions. This emphasizes their role as bellatores,[15] since through their regulatory power they are able to bring stability to states, or the opposite.

2. Implications of Quasi-Sovereignty on Influence Operations and Global Security Law

2.1. How Influence operations work

Influence Operations (IOs), generally applied in the context of hybrid conflicts, are strategic efforts that seek to alter the perceptions, decisions, or actions of a target audience. Generally, they exploit information environments rather than kinetic force.[16] In traditional military doctrine, “information and warfare operations, also known as influence operations, include the gathering of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent.”[17]

These operations are crucial for co-production in the scope of Global Security Law, since they tend to be carried out during periods of “peace” while having strong implications for political processes, public order, and the stability of the state. This raises challenges about how pre-existing rules, such as the principles on the use of force, non-intervention, and sovereignty, should be approached. By exploiting transnational information networks, IOs create grey areas in which traditional legal sources become difficult to apply.[18]

2.2. Influence operations through the lens of International Law

As Dale Stephens indicates, there is currently no treaty or customary law source directly applicable to IOs. However, different branches of public international law are invoked indirectly to assess their legality, including jus ad bellum, International Human Rights Law, and the Law of Armed Conflict (LOAC). Under each of these branches, IOs are generally considered lawful.[19] However, there remains an ambiguous terrain greatly influenced by the quasi-sovereignty of digital platforms as normative gatekeepers of information flows.

2.2.1. Jus ad Bellum

In the case of jus ad bellum, the prohibition of the use of force in Art. 2(4) of the UN Charter has been interpreted, in most cases, as applying only to the use of kinetic weapons. The interpretive framework under which an IO could be considered unlawful under Art. 2(4) is very limited. Based on the conclusions of the Nicaragua case, a “campaign that clearly manipulated individuals to organize into armed groups and to undertake physical attacks against another government” could be interpreted as unlawful. An example would be February 1991, when George H.W. Bush incited the Iraqi army to overthrow the Saddam Hussein regime.[20]

In this context, the principle of non-intervention has been advanced as a more plausible legal basis for assessing the legality of influence operations. According to Brian Egan, “cyber operation by a State that interferes with another country's ability to hold an election or that manipulates another country's election results would be a clear violation of the rule of non-intervention.” However, despite increasing academic speculation about cases in which an IO violates the principle of non-intervention, no state has formally pursued an international legal claim for unlawful intervention based solely on influence operations before an international court.[21]

The quasi-sovereignty of digital platforms further complicates the application of the principles of non-use of force and non-intervention. IOs are rarely driven by the direct action of states; rather, they are mediated by private actors operating through social platforms. According to Liivoja & Väljataga, although non-state actors are the ones operating, if they are linked to a state, their actions through digital platforms must be directly attributable to that state.[22]

In the case of Russia, for example, the legal framework regulating public administration has been adjusted to facilitate cooperation between public authorities, the private sector, and civil society, producing the structure required for this kind of strategic approach. This is known as network governance. In the media, non-state actors (NSAs) are described as having the resources that the state needs to meet its foreign policy objectives, while Russia provides the financing and the competence to decide. In some cases this is characterized as subcontracting the provision of services to non-state actors, which could raise doubts about Russia's own capabilities; far from it, however, the relationship with non-state actors is understood as one of “mutual dependence,” since the structures organized by the state are the only ones capable of sustaining the political network, which includes the propaganda network.[23]

Moreover, as the Tallinn Manual 2.0 clarifies: “Coercion must be distinguished from persuasion, criticism, public diplomacy, propaganda, retribution, mere maliciousness and the like in the sense that, unlike coercion, such activities merely involve influencing the voluntary actions of the target State”.[24] Therefore, an IO would be unlawful only if it coerced the target state in an exclusively sovereign matter such as domestic or foreign policy. But here a problem arises: by the time an outside actor manages to control a country's foreign policy, the target state is no longer in a position to respond.

When platforms exert quasi-sovereign control over the informational environment, the differentiation between voluntary persuasion and structural coercion becomes increasingly indistinct. If the choices made by platform governance consistently affect or limit a state's ability to carry out foreign or domestic policy, this usual legal standard for coercion may not be enough to describe how influence works in digital spaces.[25]

 2.2.2. Human Rights Law

In the digital sphere, International Human Rights Law (IHRL) provides the most important framework for assessing influence operations, notably through Article 17 ICCPR (right to privacy) and Article 19 ICCPR (freedom of expression).[26]

In the case of Art. 17, this protects the right to privacy against arbitrary or unlawful interference and applies equally in online and offline contexts. In the digital sphere, human rights bodies have recognized that the collection, storage, and analysis of online information by the state may also implicate privacy. This applies even when the information is public, and seeks to protect the individual from common IO practices such as profiling, monitoring, or the covert use of information. An example is the Cambridge Analytica scandal, where the data of millions of Facebook users were exploited to build psychographic profiles for political persuasion campaigns.[27]

But it is important to emphasize that there are three grounds on which this right can be legitimately limited: when the interference is lawful under national legislation, when it concerns national security, or when it is necessary and proportionate.[28] Even so, legal grey areas remain.

Regarding Art. 19, freedom of expression also covers seeking, receiving, and disseminating information through digital platforms. In democratic regimes, the free flow of information and the “marketplace of ideas” take on special relevance. This right generates a complex duality. On the one hand, it is thanks to this right that disinformation (the key component of malign influence operations) can spread; on the other hand, it is also what makes it possible to counter disinformation, through the dissemination of fact-checking and truthful information. That is why the right is not absolute and may be restricted where necessary for national security or public order. Furthermore, Art. 20 prohibits propaganda for war and advocacy of hatred and violence (especially in cases of ethnic cleansing).[29] Likewise, Articles 19 and 20 sometimes come into tension.

In electoral contexts, platforms have had both successes and limitations in preventing intervention through IOs. For example, during the US elections TikTok identified and removed several fake accounts even though national legislation did not require it to do so.[30] In the 2024 European Parliament elections, by contrast, digital platforms encountered many limitations in their self-regulatory approaches.[31]

The quasi-sovereignty of digital platforms creates a normative gap: platforms regulate speech with great impact but are not formally bound by international human rights obligations or subject to international judicial oversight, even though their policies shape the informational landscape in which rights are exercised or restricted.

2.2.3. Law of the Armed Conflict (LOAC)

IOs have proven to be key elements in the course of armed conflicts. In the case of international armed conflicts, Shoigu himself noted that the Russian victory in the occupation of Abkhazia and South Ossetia (2008) was thanks to IOs. In the case of non-international armed conflicts, it was due to an IO that 1,500 ISIS fighters were able to capture Mosul.

In this sense, the applicable rule would be Article 51(2) of Additional Protocol I, which prohibits “acts or threats of violence the primary purpose of which is to spread terror among the civilian population.”

3. Regulatory Responses to Digital Platforms’ Quasi-Sovereignty by the EU

Faced with the heightened tensions of the current context of hybrid war, sovereign actors have adopted various regulatory models to manage the quasi-sovereign power of digital platforms, especially, though not exclusively, in relation to IOs. These responses range from comprehensive regulatory frameworks (EU) to more libertarian and informal governance (US), and even state control and subordination of platforms (Russia). This section analyzes these three approaches, through which states seek to assert their own sovereignty over that of privately governed information environments.

The EU has developed the most ambitious legislative framework, in the sense that it is the largest example of co-production in this area. It rests on two main pillars: the Digital Markets Act (DMA) and the Digital Services Act (DSA).[32]

The first of these, the DMA, sets clear rules for online platforms. Its objective is to prevent large platforms designated as “gatekeepers” from abusing their position in digital markets, and to that end it imposes specific obligations and prohibitions on them.[33]

Regarding the DSA, as indicated in a background note from the Parliamentary Assembly of the Mediterranean, the EU's efforts to combat disinformation and illegal content associated with IOs have been based on this regulation. The events that underscored the need for it were the war in Ukraine and the Israel-Hamas conflict. The DSA is the fundamental pillar of the EU's digital strategy, seeking to ensure the accountability of quasi-sovereign digital platforms regarding disinformation, illegal content such as hate speech, and various societal risks. It incorporates overarching principles and strong safeguards for freedom of expression and the rights of other users.[34]

Likewise, another great innovation has been the Strengthened Code of Practice on Disinformation. Previously, the Code of Practice on Disinformation functioned as a voluntary self-regulatory instrument for digital platforms. Now, however, platforms are required to implement the measures it establishes, including: allowing users to flag “harmful false and/or misleading information” and to take follow-up actions; providing tools to identify “authenticity or accuracy”; offering “factual accuracy of sources through fact-checks from fact-checking organizations that have flagged potential Disinformation”; and defunding the diffusion of disinformation.[35] In the first six months of the code's application, especially in the context of the 2024 European elections, the main digital platforms removed a large amount of content: TikTok (250,000 videos), YouTube (19,000 videos), and LinkedIn (20,000 posts). Even so, according to Mündges & Park, “overall, platforms are only partly compliant with the Code.”

There is a diplomatic element that makes these regulations especially interesting: the Brussels Effect. As an article in the Chicago Journal of International Law puts it: “EU regulators wield powerful influence on how social media platforms moderate content on the global scale. This is because the DSA's regulatory regime will incentivize the platforms to skew their global content moderation policies toward the EU's instead of the U.S.'s balance of speech harms versus benefits.” The clearest recent example of the Brussels Effect is found in the Moldovan elections of 2025: owing to Moldova's EU accession process, the country has adopted part of the European legislation,[36] and this has been one of the key elements in preventing election rigging by Russia.

Conclusion

This paper has argued that digital platforms act as quasi-sovereign actors in cyberspace and the cognitive domain. Their quasi-sovereignty stems primarily from their capacity to establish, enforce, and adjudicate rules within information environments. Although platforms remain subject to national legislation, states struggle to exert complete control over them, often having to relinquish authority in these matters. This makes platforms highly relevant actors in the context of hybrid warfare, as they serve as instruments of intervention that, by exploiting human vulnerabilities, seek to interfere in the internal affairs of states.

The prominent role of digital platforms in the new geopolitical paradigm of the 21st century challenges the major sources of Global Security Law to adapt their interpretation to these new circumstances. In the case of jus ad bellum, the principles on the use of force and non-intervention should be interpreted beyond kinetic weaponry, primarily because digital platforms complicate attribution and present new transnational ways of exercising force and intervening illegitimately, whether by state-affiliated actors or not. International Human Rights Law (IHRL), for its part, has proven to be the source with the greatest impact on digital platforms, especially Article 19. However, its interpretive potential is still far from being realized, as platforms like Facebook end up operating in favour of power structures, whether for or against freedom of expression and the right to privacy. Regarding LOAC, cases like that of Georgia demonstrate that influence operations are relevant for understanding when a conflict begins.

The EU, at the sovereign level, represents a valid response, through the DMA and the DSA, to the question of how the quasi-sovereignty of digital platforms should be regulated so as to comply with Articles 17 and 19 of the ICCPR under International Human Rights Law (IHRL). It also represents the best defense mechanism against malicious influence operations by strategic rivals. Similarly, through the Brussels Effect, the EU is, in part, engaging in norm-setting. This European model is being implemented in states that suffer IOs within asymmetric power relations, as is the case of Moldova.

Although this research has covered the EU legislative framework, it opens the possibility of a comparative study of the three models for managing the quasi-sovereignty of digital platforms: the European regulatory framework, American “libertarianism,” and Russian authoritarianism. Similarly, the DMA and the DSA have only recently been implemented, and it would be important to review their adaptability to new challenges in information environments.

-----------

Disclaimer use of AI

During the preparation of this work, generative artificial intelligence was used for two purposes. The first was support in the compilation of sources, as a supplementary measure to Google Scholar searches. The second, due to lack of time, was spelling, grammar, and fluency correction of some sections. Because of the latter, false positives may occur when using detection tools. All the arguments presented in this work are the author's own.


[1] Jen Weedon et al., «Information Operations and Facebook», Facebook, 2017.

[2] Luca Belli and Jamila Venturini, «Private Ordering and the Rise of Terms of Service as Cyber-Regulation», Internet Policy Review 5, no. 4 (2016), https://doi.org/10.14763/2016.4.441.

[3] Ludmila Konstantinovna Tereschenko et al., «Digital Platforms in the Focus of National Law», Legal Issues in the Digital Age, 2024, https://cyberleninka.ru/article/n/digital-platforms-in-the-focus-of-national-law/viewer.

[4] Fabio Bassan, Digital Platforms and Global Law (Edward Elgar Publishing, 2021).

[5] «Article 3, the Digital Services Act (DSA)», accessed December 19, 2025, https://www.eu-digital-services-act.com/Digital_Services_Act_Article_3.html.

[6] Konstantinovna Tereschenko et al., «Digital Platforms in the Focus of National Law».

[7] Virginia Haufler and Catherine Waddams, «A Public Role for the Private Sector: Industry Self-Regulation in a Global Economy», Political Studies 50, no. 3 (2002): 652-53.

[8] Michael Bollerman, «Digital Sovereigns: Big Tech and Nation-State Influence», arXiv:2507.21066, preprint, arXiv, June 1, 2025, https://doi.org/10.48550/arXiv.2507.21066.

[9] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Hachette UK, 2019).

[10] Hannah Bloch-Wehba, «Global Platform Governance: Private Power in the Shadow of the State», Dedman School of Law, 2019.

[11] Josep Ibáñez Múñoz, «The Normative Dimension of Platform Governance: Big Tech and Digital Platforms as Normative Actors», Spanish Yearbook of International Law 25 (December 2021): 128-37.

[12] Robert Gorwa et al., «Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance», Big Data & Society 7, no. 1 (2020): 205395171989794, https://doi.org/10.1177/2053951719897945.

[13] Emiliano De Cristofaro et al., «Revealing The Secret Power: How Algorithms Can Influence Content Visibility on Social Media», accessed December 19, 2025, https://www.researchgate.net/publication/385177342_Revealing_The_Secret_Power_How_Algorithms_Can_Influence_Content_Visibility_on_Social_Media.

[14] Kate Klonick, The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression, n.d.

[15] Among some authors, the term technofeudalism has become popular to describe the capabilities of digital platforms in cyberspace. In this sense, some have even gone so far as to define the state as a primus inter pares in the information environment. Although the state enjoys authority, the quasi-sovereignty of the platforms grants them power. In this vassalic relationship, the state relegates control over a cyberspace it can hardly govern to these lords, which turns digital platforms into bellatores. In the same way, users would no longer be the laboratores.

[16] «Influence Operations in Cyberspace and the Applicability of International Law», accessed December 19, 2025, https://www.e-elgar.com/shop/gbp/influence-operations-in-cyberspace-and-the-applicability-of-international-law-9781035307289.html.

[17] Rand Waltzman, The Weaponization of Information: The Need for Cognitive Security (RAND Corporation, 2017), https://doi.org/10.7249/CT473.

[18] «Influence Operations in Cyberspace and the Applicability of International Law».

[19] Dale Stephens, «Influence Operations & International Law», Journal of Information Warfare 19, no. 4 (2020): 1-16.

[20] Stephens, «Influence Operations & International Law».

[21] «Remarks on International Law and Stability in Cyberspace», U.S. Department of State, accessed December 19, 2025, //2009-2017.state.gov/s/l/releases/remarks/264303.htm.

[22] Samuli Haatja, «Autonomous Cyber Capabilities under International Law», NATO CCDCOE Publications, 2021, https://ccdcoe.org/uploads/2021/05/Autonomous_Cyber_Capabilities_210525.pdf.

[23] Marthe Handå Myhre and Mikkel Berg-Nordlie, «“The state cannot help them all”. Russian media discourse on the inclusion of non-state actors in governance», East European Politics 32, no. 2 (2016): 192-214, https://doi.org/10.1080/21599165.2016.1168299.

[24] Michael N. Schmitt, ed., Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, 2nd ed. (Cambridge University Press, 2017), https://doi.org/10.1017/9781316822524.

[25] A specific case is Belarus. Russia has managed to establish a puppet government through the Union State and, as various studies have shown, has used Telegram and Meta apps to advance its influence over the country. Currently there is a digital resistance expressed through programs such as Digital Belarus, which operate from the European diaspora, confronting both narratives within Belarus. This raises the question of how such a situation could be legally managed: technically Minsk allows these interferences, but some of them could even be related to “soft ethnic cleansing” insofar as they discourage Belarusian culture.

[26] Stephens, «Influence Operations & International Law».

[27] Jonathan Heawood, «Pseudo-Public Political Speech: Democratic Implications of the Cambridge Analytica Scandal», Information Polity 23, no. 4 (2018): 429-34, https://doi.org/10.3233/IP-180009.

[28] These exceptions tend to generate public debate. One example is the debate surrounding messaging platforms with encryption models such as Telegram. The application is mired in controversy, since it is one of the platforms preferred by the Kremlin in its information operations, and crimes are also committed through it thanks to the privacy it offers. On the other hand, in authoritarian states that same anonymity has made Telegram a platform through which citizens can counter those information operations and mobilize democratically, as in the recent case of the Georgia protests. Furthermore, instruments with the ability to decrypt have been controversial; see the case of the CNI using Pegasus against the Catalan leader Carles Puigdemont.

[29] A problematic case has been that of Nagorno-Karabakh. In order to influence the region, Russia has launched influence operations to promote extremism on both sides. The EU, in contrast, has used EU EastStratcom to promote post-conflict reconstruction and transitional justice.

[30] «El nuevo Informe de Transparencia de TikTok detalla los últimos intentos de influencia política en la plataforma», La Vanguardia, May 23, 2024. https://www.lavanguardia.com/ciencia/20240523/9670941/nuevo-informe-transparencia-tiktok-detalla-ultimos-intentos-influencia-politica-plataforma-ep-agenciaslv20240523.html.

[31] Gautam Kishore Shahi et al., A Year of the DSA Transparency Database: What It (Does Not) Reveal About Platform Moderation During the 2024 European Parliament Election, n.d.

[32] Maria Luisa Chiarella, «Digital Markets Act (DMA) and Digital Services Act (DSA): New Rules for the EU Digital Environment», Athens Journal of Law (AJL) 9, no. 1 (2023): 33-58.

[33] «Digital Markets Act», December 12, 2025, https://digital-markets-act.ec.europa.eu/index_en.

[34] Marie-Therese Sekwenz et al., «Doing Audits Right? The Role of Sampling and Legal Content Analysis in Systemic Risk Assessments and Independent Audits in the Digital Services Act», SSRN Scholarly Paper no. 5235646 (Social Science Research Network, April 29, 2025), https://doi.org/10.2139/ssrn.5235646.

[35] Ronan Ó Fathaigh et al., «The Regulation of Disinformation Under the Digital Services Act», Media and Communication 13 (2025), https://doi.org/10.17645/mac.9615.

[36] Mariana Tacu, Mass Media in the Republic of Moldova and Building Resilience Against Disinformation in Covering the European Path, n.d.
