For citation, please use:
Kolotaev, Yu.Yu., 2026. Digital Securitization: How Online Platforms Change the Security Landscape. Russia in Global Affairs, 24(1), pp. 81–97. DOI: 10.31278/1810-6374-2026-24-1-81-97
Over the last two decades, the development of digital technologies has fundamentally altered the nature of political communication and the very architecture of international security. Online platforms—from social media to generative artificial intelligence (AI) operators—have turned from elements of communication infrastructure into active participants and even arbiters of global discourse. As they operate beyond state jurisdiction amid geopolitical conflicts, their influence on threat perceptions, narratives, and agendas is reshaping the interaction of states, societies, and digital organizations, challenging traditional conceptions of state sovereignty, information autonomy, and agency.
The ‘digital divide’ is now a matter not of varying access to technology, but of information barriers, digital fragmentation, and regulatory collisions—both among platforms and between platforms and states. In parallel, academic discussion of digital security has shifted from ‘objective’ threats to matters of language, discourse, and social practice.
Digital securitization can be viewed as a discursive and intersubjective (Mikelis, 2021) process, in which communication between agents—the securitizing actor (a government or online platform) and the audience—leads to agreement regarding (or imposition of) a perceived threat and the means of neutralizing it. Within this process, political actors compete and clash over the interpretation of events, using rhetoric, symbols (emojis, icons, and memes), and technical means of content management.
Digital securitization has a distinct two-level nature, in that platforms are both agents and objects of securitization: they shape threat narratives and influence the interpretation of security issues, but they are also themselves framed as threats subject to government regulation and control.
This paper examines the mechanism and implications of platforms’ involvement in digital securitization.
Digital Intermediaries as Agents and Targets of Securitization
Traditional media of the 20th century were characterized by one-way, “non-interactive” transmission (Apuke, 2017, p. 135) and by a fairly clear division between national and international outlets. However, starting in the late 20th century, digital platforms began undermining traditional media’s monopoly. Social networks, video hosting services, and other types of digital intermediaries broadened, accelerated, and leveled communication.
New media have clearly demonstrated their ability to shape the information agenda (Gilardi et al., 2022). By facilitating the distribution of content, digital platforms have also assumed the power to restrict it through moderation and regulation.
Online platforms act less like individual digital services, and more like institutionalized agents with their own infrastructure (Plantin et al., 2018, p. 301). Their decision-making—including about content and moderation—is distributed among owners, developers, automated systems, and specialized boards. This distinguishes platforms from more regulated infrastructure (such as cloud hosting) and from traditional media conglomerates with a centrally determined editorial policy. Thus, a modern digital platform is both a market service and an information arbiter. Such platforms include YouTube, Telegram, Twitter/X, Google Maps, TikTok, and generative AI services (e.g., ChatGPT, Midjourney).
These functional properties of digital media determine the way users perceive the international environment. For the general public and even experts, global politics is a ‘mediated reality.’ Intermediaries wield control over the perception of international “reality,” becoming a crucial instrument for framing security.
The growing influence of online platforms on global politics has forced diplomats and political parties to adapt their discourse, leading to ‘Twitter diplomacy’ (Collins et al., 2019, p. 78). Yet this influence is most clearly evident in the phenomenon of the online space’s two-level securitization.
The first level of securitization features platforms as subjects of securitization that define threats by prioritizing certain security narratives. By engaging users in multilateral communication and “democratizing” the securitization process (Umansky, 2022; Vultee, 2022), social media have allowed the general public not only to consume but also to produce security messages. For example, during the COVID-19 pandemic (Shrum et al., 2025) and the 2025 California wildfires (Leingang, 2025), users created threat narratives challenging official versions of events. As a result, new media and their users jointly become agents of securitization.
However, the audience’s opportunities depend on a platform’s moderation rules and interests. Terms of use structure user communication. By providing a platform for expressing opinions, digital intermediaries determine the admissibility of statements and, to some extent, their nature. Messengers practice fact verification and alert users to emergency events, while information aggregators (generative and conventional news services) influence discourse through their algorithmic selection and distribution of information.
Online media have multiple levels of agency, within which competing acts of securitization are possible. An act of securitization can be initiated either ‘from above’ by government or expert structures, or ‘from below’ by spontaneous grassroots networks. The diversity of practices complicates a clear and strict classification of securitization acts in the digital space into official or alternative, institutionalized (regulatory) or situational. Digital platforms’ acquisition of excessive power and the diversity of securitizing agents raise the risk of speech being limited under “surveillance capitalism” (York, 2021).
The second level of securitization features platforms as objects of securitization by states. Of particular concern for most states is the foreign (often U.S.) jurisdiction of the largest and most influential online platforms (Kohl, 2021). Such platforms can influence the security discourse of sovereign states, yet they are not themselves subject to those states’ standards. This challenge to traditional political mechanisms and to states’ communicative power has forced states to respond.
As a result, in the mid-2010s, the digital space became an arena for competition between states and platforms. States, as well as supranational structures, began to securitize not only threats’ migration into the digital sphere, but also the activities of digital intermediaries. State regulators are particularly concerned about personal data processing and the removal of illegal content. Yet they must strike a balance: regulating the digital space so as to retain the ability to control security narratives, without stifling the development of the digital economy.
Thus, online platforms have become an effective resource of domestic governance and a soft-power tool of foreign policy. Their capability to frame security through moderation mechanisms and algorithms has challenged states’ agency, prompting state regulation of online media and giving rise to a ‘two-level securitization’ of the online space.
The First Level of Securitization: Digital Platforms’ Impact on Political Discourse and Security
Digital platforms’ growing influence has seriously challenged many states’ national security, including Russia’s. The Arab Spring, election fraud, online protests, and digital activism (Sorce and Dumitrica, 2022) were all driven by the proliferation of new media. The surge in online activity during the COVID-19 pandemic enhanced the status of digital intermediaries (Schaupp, 2023). By the early 2020s, they had (1) consolidated as gatekeepers, (2) developed increasingly powerful ‘media effects’ tools, and (3) begun a widespread practice of deplatforming.
The notion of the ‘digital gatekeeper’ emerged as digital platforms used ‘soft’ and ‘hard’ moderation (Dias Oliva, 2020; Gorwa, 2024) to regulate political content. ‘Hard’ moderation includes filtering algorithms that suppress or remove content outright.
For instance, when the Gaza War began in 2023, Meta[1] and TikTok were accused of deprioritizing and ‘shadow banning’ Palestinians (Shankar et al., 2023). Arabic-language posts were removed much more often than Hebrew-language ones, as the Hebrew-language hate speech detector was dysfunctional (Wall Street Journal, 2023), and systematic Israeli takedown requests were largely satisfied (Drop Site, 2025).
When the Ukraine conflict escalated in 2022, Meta selectively suspended its rules on hate speech to permit some forms of Russophobic content (Vengattil and Culliford, 2022). Similarly, chatbots built on large language models (offered by OpenAI, Anthropic, DeepSeek, and xAI) may apply hard moderation by rejecting requests or by providing responses shaped by biased algorithms or presets.
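The logic of a ‘hard’ moderation gate can be sketched in a few lines of Python. The sketch is purely illustrative: the classifier, flagged terms, scores, and threshold are invented rather than any platform’s actual system; it shows only how pre-publication filtering works, and why a dysfunctional detector for one language produces the asymmetric enforcement described above.

```python
# A minimal, hypothetical sketch of a 'hard' moderation gate.
# Real platform pipelines are proprietary; all names and numbers are invented.

# Per-language classifiers of unequal quality: if one language's detector
# is inert (no flagged terms), enforcement becomes asymmetric.
def toxicity_score(text: str, language: str) -> float:
    flagged_terms = {"ar": {"attack"}, "he": set()}  # 'he' detector inert
    terms = flagged_terms.get(language, set())
    hits = sum(term in text.lower() for term in terms)
    return min(1.0, 0.5 * hits)

REMOVAL_THRESHOLD = 0.4  # assumed; platforms do not publish such values

def hard_moderate(text: str, language: str) -> bool:
    """Return True if the post is suppressed before reaching any audience."""
    return toxicity_score(text, language) >= REMOVAL_THRESHOLD

# The same gate applies to generative services: a request is refused
# outright rather than a published post being removed.
def answer(prompt: str, language: str = "en") -> str:
    if hard_moderate(prompt, language):
        return "Request rejected by policy."  # refusal, not dialogue
    return "...generated response..."
```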
Soft moderation involves commenting on messages (e.g., via Community Notes) without deleting them (Chuai et al., 2024). For example, Twitter labeled as false certain claims regarding the COVID-19 pandemic (Allen et al., 2024) and the 2020 U.S. presidential election (Bradshaw et al., 2023), attaching links to ‘fact-checking’ articles and flagging Donald Trump’s tweets as inaccurate (“false information, manipulated content, and possible incitement of violence”) (Chipidza and Yan, 2022, pp. 1642, 1646).
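By contrast, the logic of soft moderation can be sketched as attaching context rather than removing content. Continuing the hypothetical setup above, the field names and label texts below are invented:

```python
# Hypothetical sketch of 'soft' moderation: the post stays up, but a
# label and a fact-checking link are attached to it.
def soft_moderate(text: str, note: str, fact_check_url: str) -> dict:
    return {
        "text": text,                      # content remains visible
        "label": note,                     # e.g., "Disputed claim"
        "fact_check_url": fact_check_url,  # link to a fact-checking article
        "visible": True,
    }

post = soft_moderate(
    "Claim about election results...",
    note="This claim is disputed",
    fact_check_url="https://example.org/fact-check",  # placeholder URL
)
```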
Beyond labeling, user agreements and community rules have a similar effect. YouTube, for example, requires that creators label AI-generated content (Rob — TeamYouTube, 2024), especially on the subjects of conflict and elections. Some platforms may even establish quasi-judicial bodies to resolve content disputes (Wong and Floridi, 2023). However, due to the proprietary nature of most platforms, their procedures remain largely opaque.
Users, and even specialists, have no direct control over the probabilistic response-generation process (even with reasoning LLMs), other than adjusting results through repeated requests. Documented terms of use and platform policies do not provide sufficient transparency.
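The point about probabilistic generation can be made concrete with a toy sketch: because the next token is sampled from a distribution rather than computed deterministically, identical prompts can yield different outputs. The vocabulary and logits below are invented for illustration:

```python
# Toy illustration of why identical requests to a generative service can
# yield different outputs: the next token is sampled from a probability
# distribution, not computed deterministically.
import math
import random

def sample_next_token(logits: dict, temperature: float = 1.0) -> str:
    """Softmax sampling over a toy vocabulary."""
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

toy_logits = {"safe": 1.2, "neutral": 1.0, "risky": 0.6}
print([sample_next_token(toy_logits) for _ in range(5)])  # varies run to run
# Lower temperature concentrates probability on the top token, but users
# of hosted chatbots cannot inspect or set these parameters directly.
```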
Platforms’ regulatory actions are not necessarily politicized, but a platform’s biased decisions may result from political partiality, commercial interests, owners’ personal motives, or—especially in the case of the U.S. (Khilji, 2025)—political pressure from the platform’s home state.
As digital gatekeepers, online platforms exert media effects on their audiences, including framing, priming, and agenda-setting (Kolotaev and Kollnig, 2021). While the validity and generalizability of the media-effects concept are debatable (Bodrunova, 2019, p. 133), it illustrates some of the media’s informational power, which platforms and their operators can exploit.
The 2016 Brexit referendum was one of the most striking examples of digital media’s use for political framing. The Vote Leave campaign paid Facebook[2] over £2.7 million for targeted advertising (Manthorpe, 2018) that framed the referendum in terms of immigration and economic sovereignty. Despite mixed opinions about the actual level of online influence on the vote, the episode prompted public and academic securitization of platforms’ discursive influence, leading to the regulation of political advertising. It also showed that platforms’ commercial interests cannot be separated from their own or others’ political interests.
Digital news aggregators and messengers play a special role in agenda-setting, through the selection of topics (Lukyanova and Solovev, 2024) and prioritization of news.
Early studies of Google News (Segev, 2008) emphasized that the greater visibility of news about major powers and crises makes them more salient to users. Although recent observations have questioned this thesis, they nonetheless point to certain indicators of discursive influence, such as the impact of tonality in aggregated news on public opinion about individual countries. The public may “perceive the nations that received the most negative coverage less favorably” (Young and Atkin, 2023, p. 126), which increases the problematization of certain topics in the public consciousness to the detriment of other newsworthy topics.
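As a rough illustration of how tonality can interact with ranking, consider a toy scoring rule in which negative sentiment adds to a story’s visibility. All weights and scores below are invented and do not describe any real aggregator:

```python
# Toy sketch of tonality-weighted news ranking: stories are ordered by a
# mix of newsworthiness and negative sentiment, so persistently negative
# coverage of a country gains visibility. All numbers are invented.
stories = [
    {"country": "A", "newsworthiness": 0.6, "sentiment": -0.8},
    {"country": "B", "newsworthiness": 0.7, "sentiment": +0.2},
    {"country": "C", "newsworthiness": 0.5, "sentiment": -0.3},
]

NEGATIVITY_BONUS = 0.5  # assumed: negative tone boosts ranking

def rank_score(story: dict) -> float:
    return story["newsworthiness"] + NEGATIVITY_BONUS * max(0.0, -story["sentiment"])

for story in sorted(stories, key=rank_score, reverse=True):
    print(story["country"], round(rank_score(story), 2))
```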
Soft moderation in the form of content labeling produces media effects akin to priming and agenda-setting. Community Notes on Twitter/X and warning labels on TikTok shape users’ associations with the (un)trustworthiness of content through so-called ‘emotional labels’ (Chuai et al., 2024). Yet warning labels may actually increase engagement with the labeled content (Chipidza and Yan, 2022, p. 1645): “less active Twitter users tended to engage more with labeled tweets, depending on how the label was written” (Papakyriakopoulos and Goodman, 2022, p. 2548). This creates a paradoxical effect whereby platforms simultaneously label and promote topics, influencing both the agenda and audience engagement.
Aside from general media effects, digital platforms have increasingly practiced deplatforming (Jhaver et al., 2021)—the exclusion of individuals, political forces, and even entire countries. Examples include the deletion of Donald Trump’s Twitter account in 2021 and of Russian media channels on YouTube in 2021 and 2022.
Deplatforming is the most obvious manifestation of platforms’ agency, as such decisions bear directly on states’ domestic or foreign policy and intrude on the space of state sovereignty. In other cases, platforms’ agency manifests in upholding their own standards and resisting foreign states’ interference. For instance, Google Maps (operating under U.S. jurisdiction) partially renamed the Gulf of Mexico and the Persian Gulf under pressure from the new U.S. administration (The Guardian, 2025), but Telegram’s founder Pavel Durov refused Western European requests to “silence” conservative voices in Romania ahead of a presidential election run-off there (Reuters, 2025a). Telegram says in its FAQ that “no single government or bloc of like-minded countries can intrude on people’s privacy and freedom of expression. Telegram can be forced to give up data only if an issue is grave and universal enough to pass the scrutiny of several different legal systems around the world.”
All the attributes of agency discussed (gatekeeper status, media effects, and the potential for deplatforming) illustrate how the digital platform acquires discursive power.
Platforms’ securitization practices can be divided into classic discursive practices (prioritization, fact-checking, commentary) and non-discursive ones: algorithmic content suppression, deplatforming (overt or via shadow-banning), and the use of linguistic classifiers to block ‘threatening’ messages. The latter constitute new socio-technological practices of securitization.
The structure of the modern media landscape reflects a clear imbalance, with the largest platforms wielding a key influence on discursive and non-discursive representation and perception of politics and international relations.
The Second Level of Securitization: State Control of Digital Platforms
The conflict between platforms’ interests and states’ sovereignty has led to a second level of digital securitization. Digital intermediaries are perceived as a threat to national interests and security. Leading countries have increasingly implemented measures for ‘digital sovereignty,’ such as data protection, the independence of infrastructure and internal services, and the creation of their own alternative platforms (Sytnik, 2025). States lacking their own digital infrastructure have fallen behind, but ever more countries are developing various digital regulation strategies. These range from standardization and platform self-regulation to the outright blocking of resources.
For Russia, this problem is particularly acute in the context of mutual Internet blocking: individual resources have been blocked within the country not only for violations of Russian law, but also in response to restrictions imposed on Russian political forces and media (Lukyanov, 2024) on foreign platforms.
Russia’s struggle for digital sovereignty has involved a protracted conflict with Western social networks, especially Facebook, Twitter, and YouTube, which repeatedly blocked Russian content (Alekseev, 2022). Russia faced the challenge of both maintaining its participation in the global digital space and protecting and standardizing its own information space. It responded with legislation like the Sovereign Internet Law (Russian State Duma News, 2019), certain restrictions, and requirements for the domestic storage of Russians’ user data.
Other countries followed suit. For instance, in 2020, India banned 59 Chinese apps, including TikTok, citing national security threats (Phartiyal, 2021). Its Intermediary Guidelines and Digital Media Ethics Code forced “significant” platforms to appoint domestic content managers and promptly remove illegal content (The Hindu, 2021). When the Indo-Pakistani conflict escalated in May 2025 (Khilji, 2025), Pakistani channels were blocked en masse in India.
In Europe, much regulation occurs at the supranational level. In the 2010s, the EU focused on protecting user rights and limiting market monopolization. The 2018 General Data Protection Regulation established strict standards for processing personal data, applicable to all platforms, including foreign ones (Fahey, 2025). Subsequently, under the 2022 Digital Services Act, large platforms must disclose their recommendation algorithms and undergo external audits (Pathak, 2024). Furthermore, platforms that meet the criteria of ‘digital gatekeepers’ must not discriminate against third-party services. Antitrust investigations (Chee, 2025)—for instance, the fines of €2.42 billion imposed on Google in 2017 and €1.49 billion in 2019 (European Commission, 2019)—have sought to protect European startups and digital infrastructure from large American players.
Platforms have responded to government restrictions in different ways. In 2025, Meta decided to stop political, election, and social advertising on its platforms in the EU from October (Reuters, 2025b), citing the legal uncertainty created by the new EU rules on political advertising. The ability of Europe’s largest commercial platforms to self-prohibit political campaigning further attests to their agency.
The countries that first embarked on digital securitization have developed unique experience. China’s Great Firewall has long blocked access to Twitter/X and similar services, which have been replaced by national platforms such as WeChat (Kalathil, 2017)—platforms that China now itself exports. The Digital Silk Road initiative, promoting Chinese technological standards abroad, demonstrates the geopolitical importance of securitization and digital sovereignty. China’s growing presence in the global digital space has prompted Western states to impose export controls (on AI and AI-related components) and increase pressure on Chinese platforms such as TikTok.
Platforms’ agency may create not only international but also domestic political challenges, as was the case for Twitter (given its political loyalties) at the end of Donald Trump’s first presidency (Zhang et al., 2025, p. 229). A platform’s ideological loyalty to a particular political group can itself become an object of securitization.
Governments are also actively developing regulations for AI agents and data aggregators. The EU’s 2024 AI Act aims to ensure safety, transparency, and oversight of AI use (European Commission, 2024). It classifies AI systems by risk level, bans dangerous applications, provides new rules for general-purpose models, and requires certification, data audits, and algorithm transparency.
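The Act’s risk-based logic can be sketched as a simple tier mapping. The tier names below reflect the Act’s published categories; the example systems and the mapping function itself are invented for illustration:

```python
# Hypothetical sketch of the AI Act's risk-tier logic: a system is mapped
# to a tier, and obligations follow from the tier. Tier names reflect the
# Act's published categories; the example systems are invented.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "certification, data audits, human oversight"
    LIMITED = "transparency duties (e.g., disclosure, labeling)"
    MINIMAL = "no specific obligations"

def classify(system: str) -> RiskTier:
    if system == "social_scoring":
        return RiskTier.UNACCEPTABLE
    if system in {"biometric_id", "critical_infrastructure"}:
        return RiskTier.HIGH
    if system in {"chatbot", "deepfake_generator"}:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("chatbot").value)  # transparency duties (e.g., disclosure, labeling)
```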
Synthetic platforms and deepfake technologies are also subject to regulation: such platforms are prohibited from using artificially created content without clear labeling (Zheng, Shu, and Li, 2025).
Regulatory lag vis-à-vis AI is particularly evident in states’ formalization of already-established architectures. America’s July 2025 AI Action Plan (Executive Office of the President, 2025) defines AI as strategically important and proposes simplifying regulatory procedures and institutionalizing interagency cooperation. Meanwhile, China’s Global AI Governance Action Plan (Permanent Mission of the PRC to the UN, 2025)—also presented in late July 2025, at the World Artificial Intelligence Conference in Shanghai—focuses on building a multilateral regulatory environment: from standardization mechanisms and information exchange to the creation of dedicated open platforms. Despite the differences in their approaches (the U.S. plan is a national initiative, while the Chinese one proposes international cooperation), both cases show states’ desire to institutionalize an already securitized space and to shift from ad hoc governance to a full-fledged regulatory architecture. In the context of securitization, this means a governmental shift towards preventive governance, in which regulation begins to set (though does not yet fully determine) the parameters of digital security.
Thus, the securitization of digital platforms is forcing states not only to restrict foreign services but also to create alternative models, which intensifies competition among them (Rebro et al., 2021). This creates new challenges for states (for instance, European startups face high GDPR compliance costs) and complicates international cooperation.
The complex interplay of national interests, local conflicts, and foreign information agents leads to a general reconceptualization of state sovereignty.
* * *
Online platforms’ infrastructural, moderating, and political-economic power gives them influence over the representation and perception of international life. They are involved in securitization as agents, articulating threats and framing perceptions of security, and as objects, perceived as threats to state sovereignty. These interrelated processes constitute two-level digital securitization.
These interactions create a new digital international order, featuring conflicts between states and platforms. Given the interaction of securitization and commercial interests, the proliferation of AI, and the external vulnerability of states’ information environments, digital regulation is an issue of not only law, but also geopolitics.
Currently, government responses have mostly consisted of localization requirements, bans on individual services, lawsuits and antitrust investigations, the creation of sovereign infrastructures, and cross-border pressure. In the future, however, governments will likely shift from reactive to preventive measures, including: a fixed level of acceptable moderation autonomy for platforms; algorithmic transparency standards; user rights to appeal platform decisions; and (inter)national mechanisms for auditing platform decisions, particularly concerning AI.
Given the fragmentation of the global digital environment, Russia faces the task of forming a coherent doctrine and regulatory strategy for operating in the digital space, which will undergird its digital sovereignty.
This article is part of Project #116471555 supported by Saint-Petersburg State University.
[1] Meta Platforms Inc. is recognized as an extremist organization and banned in the Russian Federation.
[2] Facebook is owned by Meta Platforms Inc., which is recognized as an extremist organization and banned in the Russian Federation.
__________________________
Alekseev, D., 2022. Идите в бан: Почему Роскомнадзор заблокировал Фейсбук и Твиттер [Go Banned: Why Roskomnadzor Blocked Facebook and Twitter]. Izvestia, 5 March. https://iz.ru/1300962/dmitrii-alekseev/idite-v-ban-pochemu-roskomnadzor-zablokiroval-facebook-i-twitter
Allen, M.R. et al., 2024. Characteristics of X (Formerly Twitter) Community Notes Addressing COVID-19 Vaccine Misinformation. JAMA, 331(19), pp. 1670-1672.
Apuke, O.D., 2017. Social and Traditional Mainstream Media of Communication: Synergy and Variance Perspective. Online Journal of Communication and Media Technologies, 7(4), pp. 132-140.
Bodrunova, S.S., 2019. Термин “фрейминг” в политической коммуникативистике: рождение и созревание большой идеи в теории медиаэффектов [‘Framing’ as a Term in Political Communication Studies: How a Big Idea Grew and Matured in the Media Effects Theory]. Zhurnal politicheskikh issledovanii, 3(4), pp. 127-141.
Bradshaw, S., Grossman, S., and McCain, M., 2023. An Investigation of Social Media Labeling Decisions Preceding the 2020 U.S. Election. PLOS ONE, 18(11), p. e0289683.
Chee, F.Y., 2025. Record EU Fine Punished Google’s Innovation, It Tells Court as It Seeks to Annul Decision. Reuters, 28 January. https://www.reuters.com/technology/record-45-bln-eu-fine-punished-its-innovation-google-tells-eu-court-2025-01-28/
Chipidza, W. and Yan, J. K., 2022. The Effectiveness of Flagging Content Belonging to Prominent Individuals: The Case of Donald Trump on Twitter. Journal of the Association for Information Science and Technology, 73(11), pp. 1641-1658.
Chuai, Y. et al., 2024. Did the Roll-Out of Community Notes Reduce Engagement with Misinformation on X/Twitter? Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), pp. 1-52.
Collins, S.D., DeWitt, J.R., and LeFebvre, R.K., 2019. Hashtag Diplomacy: Twitter as a Tool for Engaging in Public Diplomacy and Promoting US Foreign Policy. Place Branding and Public Diplomacy, 15(2), pp. 78-96.
Dias Oliva, T., 2020. Content Moderation Technologies: Applying Human Rights Standards to Protect Freedom of Expression. Human Rights Law Review, 20(4), pp. 607-640.
Drop Site, 2025. Leaked Data Reveals Massive Israeli Campaign to Remove Pro-Palestine Posts on Facebook and Instagram. Drop Site News, 11 April. https://www.dropsitenews.com/p/leaked-data-israeli-censorship-meta
European Commission, 2019. Antitrust: Commission Fines Google €1.49 billion for Abusive Practices in Online Advertising. Press corner, 20 March. https://ec.europa.eu/commission/presscorner/detail/en/ip_19_1770
European Commission, 2024. AI Act. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Executive Office of the President, 2025. America’s AI Action Plan. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf
Fahey, E., 2025. Data Protection and Regulation of Social Media. In: S. Lucarelli and J. Sperling (eds). Handbook of European Union Governance. Edward Elgar Publishing, pp. 143-155.
Gilardi, F. et al., 2022. Social Media and Political Agenda Setting. Political Communication, 39(1), pp. 39-60.
Gorwa, R., 2024. The Politics of Platform Regulation: How Governments Shape Online Content Moderation. Oxford University Press.
Gorwa, R., Binns, R., and Katzenbach, C., 2020. Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance. Big Data & Society, 7(1).
Jhaver, S. et al., 2021. Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), pp. 381:1-381:30.
Kalathil, S., 2017. Beyond the Great Firewall: How China Became a Global Information Power. Center for International Media Assistance. https://www.cima.ned.org/publication/beyond-great-firewall-china-became-global-information-power/
Khilji, U., 2025. How Platform Shifts on Content Moderation Are Escalating Harm in the India-Pakistan Crisis. Tech Policy Press. https://techpolicy.press/how-platform-shifts-on-content-moderation-are-escalating-harm-in-the-indiapakistan-crisis
Kohl, U., 2021. Jurisdiction in Network Society. In: N. Tsagourias and R. Buchan (eds). Research Handbook on International Law and Cyberspace. Edward Elgar Publishing, pp. 69-96.
Kolotaev, Y. and Kollnig, K., 2021. Political Influence of Online Platforms: YouTube’s Place in European Politics. Vestnik Sankt-Peterburgskogo Universiteta. Mezhdunarodnye otnosheniya, 14(2), pp. 225-240.
Leingang, R., 2025. ‘A Flood of Disinformation’: Rumors and Lies Abound amid Los Angeles Wildfires. The Guardian, 16 January. https://www.theguardian.com/us-news/2025/jan/16/disinformation-los-angeles-wildfires
Lukyanov, F.A., 2024. Here’s the Real Reason Why the U.S. Sanctioned RT. Russia in Global Affairs, 19 September. https://eng.globalaffairs.ru/articles/sanctions-rt-lukyanov/
Lukyanova, G.V. and Solovev, A.Yu., 2024. Особенности формирования повестки дня в мессенджере Телеграм [Peculiarities of Agenda Setting in Telegram]. Izvestiya Saratovskogo Universiteta. Sotsiologiya. Politologiya, 24(1), pp. 90-97.
Manthorpe, R., 2018. Remain v Leave: Scale of Facebook Ad War Revealed. Science, Climate & Tech News. Sky News, 21 October. https://news.sky.com/story/remain-v-leave-scale-of-facebook-ad-war-revealed-11530148
Mikelis, K., 2021. Securitization: Theoretical Underpinnings and Implications. In: G. Voskopoulos (ed.) European Union Security and Defence: Policies, Operations and Transatlantic Challenges. Cham: Springer International Publishing, pp. 39-54.
Papakyriakopoulos, O. and Goodman, E., 2022. The Impact of Twitter Labels on Misinformation Spread and User Engagement: Lessons from Trump’s Election Tweets. In: Proceedings of the ACM Web Conference 2022. ACM, pp. 2541-2551.
Pathak, M., 2024. Data Governance Redefined: The Evolution of EU Data Regulations from the GDPR to the DMA, DSA, DGA, Data Act and AI Act. SSRN Scholarly Paper. https://papers.ssrn.com/abstract=4718891
Permanent Mission of the PRC to the UN, 2025. Global AI Governance Action Plan. https://un.china-mission.gov.cn/eng/zgyw/202507/t20250729_11679232.htm
Phartiyal, S., 2021. India Retains Ban on 59 Chinese Apps, Including TikTok. Reuters, 26 January. https://www.reuters.com/article/technology/india-retains-ban-on-59-chinese-apps-including-tiktok-idUSKBN29U2G6/
Plantin, J.C., Lagoze, C., Edwards, P.N., and Sandvig, C., 2018. Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook. New Media & Society, 20(1), pp. 293-310.
Rebro, O. et al., 2021. Категория цифрового суверенитета в современной цифровой мировой политике: вызовы и возможности для России [The Notion of ‘Digital Sovereignty’ in Modern World Politics: Challenges and Opportunities for Russia]. Mezhdunarodnye protsessy, 19(4), pp. 47-67.
Reuters, 2025a. Telegram’s Durov Says French Spy Chief Asked Him to Ban Conservative Romanian Voices. Reuters, 19 May. https://www.reuters.com/world/europe/telegram-founder-says-france-asked-him-ban-conservative-romanian-voices-2025-05-19/
Reuters, 2025b. Meta to halt political advertising in EU from October, blames EU rules. Reuters, 25 July. https://www.reuters.com/sustainability/meta-halt-political-advertising-eu-october-blames-eu-rules-2025-07-25/
Rob — TeamYouTube, 2024. New Disclosures and Labels for Generative AI Content on YouTube — YouTube Community. https://support.google.com/youtube/thread/264550152/new-disclosures-and-labels-for-generative-ai-content-on-youtube?hl=en
Russian State Duma News, 2019. Принят закон о “суверенном интернете” [Sovereign Internet Law Adopted]. Russian State Duma News. http://duma.gov.ru/news/44551/
Schaupp, S., 2023. COVID-19, Economic Crises and Digitalisation: How Algorithmic Management Became an Alternative to Automation. New Technology, Work and Employment, 38(2), pp. 311-329.
Segev, E., 2008. The Imagined International Community: Dominant American Priorities and Agendas in Google News. Global Media Journal, 7(13).
Shankar, P., Dixit, P., and Siddiqui, U., 2023. Are Social Media Giants Censoring Pro-Palestine Voices amid Israel’s War? Al Jazeera, 24 October. https://www.aljazeera.com/features/2023/10/24/shadowbanning-are-social-media-giants-censoring-pro-palestine-voices
Shrum, W. et al., 2025. Alternative Theories of COVID-19: Social Dimensions and Information Sources. Journal of Public Health Policy, 19 February. https://doi.org/10.1057/s41271-025-00560-2
Sorce, G. and Dumitrica, D., 2022. Transnational Dimensions in Digital Activism and Protest. Review of Communication, 22(3), pp. 157-174.
Sytnik, A.N., 2025. The World Majority’s Social Media versus Data Colonialism. Russia in Global Affairs, 23(2), pp. 71-74.
The Guardian, 2025. Mexico Sues Google over Changing Gulf of Mexico’s Name for US Users. The Guardian, 9 May. https://www.theguardian.com/world/2025/may/09/mexico-google-lawsuit-gulf-of-mexico
The Hindu, 2021. Govt Announces New Social Media Rules to Curb Its Misuse. The Hindu, 25 February. https://www.thehindu.com/news/national/govt-announces-new-social-media-rules/article33931290.ece
Ulrich, A., Kramer, O., and Till, D., 2022. Populism and the Rise of the AfD in Germany. In: C. Kock and L. Villadsen (eds) Populist Rhetorics: Case Studies and a Minimalist Definition. Cham: Springer International Publishing. pp. 107-139.
Umansky, N., 2022. Who Gets a Say in This? Speaking Security on Social Media. New Media & Society. https://journals.sagepub.com/doi/full/10.1177/14614448221111009
Vengattil, M. and Culliford, E., 2022. Facebook Allows War Posts Urging Violence against ‘Russian Invaders’. Reuters, 10 March. https://www.reuters.com/world/europe/exclusive-facebook-instagram-temporarily-allow-calls-violence-against-russians-2022-03-10/
Vultee, F., 2022. A Media Framing Approach to Securitization: Storytelling in Conflict, Crisis and Threat. 1st ed. New York: Routledge.
Wall Street Journal, 2023. Inside Meta, Debate Over What’s Fair in Suppressing Comments in the Palestinian Territories. The Wall Street Journal. https://www.wsj.com/tech/inside-meta-debate-over-whats-fair-in-suppressing-speech-in-the-palestinian-territories-6212aa58?st=au8oyn512lz4w1x&reflink=mobilewebshare_permalink
Wong, D. and Floridi, L., 2023. Meta’s Oversight Board: A Review and Critical Assessment. Minds and Machines, 33(2), pp. 261-284.
York, J.C., 2021. Silicon Values: The Future of Free Speech Under Surveillance Capitalism. Verso Books.
Young, A. and Atkin, D., 2023. An Agenda-Setting Test of Google News World Reporting on Foreign Nations. Electronic News, 17(2), pp. 113-132.
Zhang, Y. et al., 2025. Trump, Twitter, and Truth Social: How Trump Used Both Mainstream and Alt-Tech Social Media to Drive News Media Attention. Journal of Information Technology & Politics, 22(2), pp. 229-242.
Zheng, G., Shu, J. and Li, K., 2025. Regulating Deepfakes between Lex Lata and Lex Ferenda: A Comparative Analysis of Regulatory Approaches in the U.S., the EU and China. Crime, Law and Social Change, 83(1), p. 1.