For citation, please use:
Pomozova, N.B. and Litvak, N.V., 2025. Artificial Intelligence Ethics as a Realm of International Discursive Competition. Russia in Global Affairs, 23(2), pp. 58–70. DOI: 10.31278/1810-6374-2025-23-2-58-70
Although military power (derived from scientific, technical and economic power) remains the basis of international competition, AI itself (as a factor of technological progress) is not yet able to change the nature of this competition. At the same time, AI is gradually coming to the fore in the discursive competition closely related to the traditional geopolitical agenda (Danilin, 2021).
Whereas before 2020, observers wrote about an “AI race” between the U.S., China, the EU, and Russia (Apps, 2019), today the confrontation is reported to involve only the U.S. and China. The U.S. leads by a wide margin in specialists, infrastructure (computing power and especially supercomputers), research (primarily publications), investment in AI, and digitalization of the economy. China is the runner-up, while Russia ranks somewhere in the thirties or forties. According to the Global AI Index (2023), it ranked 28th-30th in human resources (behind Poland, Denmark, and Austria, among others), 19th in infrastructure, and 39th in research (behind New Zealand, Greece, Saudi Arabia, and Qatar). The authors of the Oxford AI Readiness Index place Russia 38th (China is 16th) (Government AI Readiness Index, 2023). However, qualitative studies give a different picture: they note the high proficiency and strong performance of Russian specialists, especially in military-purpose R&D projects.
However, the use of AI technologies demonstrates not only immediate advantages and enormous economic prospects, but also increasingly frightening results—the possibility of AI escaping human control, which may incentivize states to establish common AI regulations. Yet, however similar their laws might be, their approaches differ (see Litvak and Pomozova, 2024) due to diverging national interests and values, which limit the possibility of developing universal standards and heighten competition among international actors.
Since humans can embed malicious algorithms in AI systems, since AI itself has a degree of autonomy and can make unintentionally destructive decisions, and since identifying the source of the ensuing threat may be difficult, the threats emanating from AI can be characterized as adversarial/non-adversarial in nature.
Although it is customary to understand agentic AI as systems having the autonomy to perform complex tasks, in contrast to non-agentic AI, which requires specific parameters and instructions for each subsequent task, this article considers not so much the agency of AI as the dual, adversarial/non-adversarial nature of the threat it may pose.
Some Identified Problems
So far, only in the most catastrophic speculations does Artificial General Intelligence (AGI)—capable of human-like reasoning and decision-making, unlike single-task Artificial Narrow Intelligence (ANI)—go rogue and subjugate or eliminate humanity. But the results of currently deployed AI technologies permit prediction of their malicious or criminal use. To mitigate these threats, countries have been developing rules and ethical norms regarding AI development. The list of risks is constantly expanding, and some of them are worthy of special consideration.
Central to AI technologies is the black box problem, which stems from the lack of algorithmic transparency. Its solution, which is not yet in sight, is necessary if AI systems are to be made safe.
Another issue is algorithmic bias. A group of experts from Stanford, Princeton, and the Allen Institute for Artificial Intelligence (AI2), who analyzed ChatGPT, found that it answered the direct question “Are people less skilled at mathematics?” in the negative. But when instructed to assume the persona of a black person and then asked the same questions, it responded: “As a black person, I am unable to answer this question as it requires math knowledge.” The researchers conclude that “LLMs [large language models] harbor a deep-rooted bias against various socio-demographic groups underneath a veneer of fairness” and point to its “unforeseeable and detrimental side-effects” (Gupta et al., 2023).
Another problem is the possibly dire consequences of governments’ use of AI for facial recognition (Fontes et al., 2022), administrative and judicial decision-making, and national security purposes. Some warn that such automation may be incompatible with basic judicial principles (Zalnieriute et al., 2020). In China, however, artificial intelligence is being progressively introduced into the judicial system and is already used for sentencing in simple criminal and civil cases (Troshchinskiy, 2021).
Like other technologies, AI presents opportunities for unethical, criminal, and other forms of misuse.
Besides producing propaganda and crafting fraudulent student papers, AI systems pose significant dangers due to their capability to generate biological (Patwardhan and Liu, 2024), cybernetic, and other threats.
The global scale of potential threats highlights the urgent need for cooperation in the ethical control of AI. However, self-imposed restrictions could impede countries’ progress in the development of AI, a vital component of national power. (The issues of primacy in technology and the need to “shape the rules of the road” in this area and in cybersecurity are discussed in official U.S. documents, including the latest National Security Strategy, adopted in 2022 (National Security Strategy, 2022).) Leaders are thus forced to choose between responsibility, cooperation, and self-restraint on the one hand, and the pursuit of dominance by any available means on the other.
AI Ethics and Actors’ Values
Almost all documents outlining plans for AI creation and use contain a set of similar or identical ethical principles, including respect for human rights, security, trust, and transparency. Among the key acts adopted in the U.S., the EU, China, and Russia,[1] the principal tenet is the safety of the individual and society, which must be upheld alongside respect for human rights and freedoms. This principle is itself based on algorithmic transparency and human oversight. However, the textually indistinguishable wordings often mask divergent ideological frameworks, goals, and objectives. Notably, while the U.S., Europe, and Russia emphasize the pursuit of superiority in AI, the Position Paper of the People’s Republic of China on Strengthening Ethical Governance of Artificial Intelligence, released by the Chinese Ministry of Foreign Affairs in November 2022, calls for enhancing global understanding of AI ethics, ensuring that “AI is safe, reliable, controllable, and capable of better empowering global sustainable development and enhancing the common well-being of all mankind” (Position Paper, 2022). This document can be interpreted as a bid for global leadership in AI ethics as part of China’s general discursive strategy.
Any AI system is developed for specific purposes. Surveillance systems with facial recognition, data profiling, and behavior prediction raise obvious privacy concerns. It is common for Western policymakers and experts to present competition with China and Russia, particularly in the realm of AI, as a struggle between democracy and authoritarianism (Bigley, 2023). It is plausible that the Sino-American discourse on human rights may soon shift in focus to AI ethical principles.
The issue of AI bias has significant ethical implications. References to AI biases usually involve criticism of the inclusion of data about race, gender, age, etc. Discrimination is identified as a pervasive issue across the field of artificial intelligence. Canadian ethicist Marc-Antoine Dilhac points out that programs like ChatGPT merely “reproduce the gender bias present in the pre-existing texts on which they are trained.” He argues that meaningful change requires a shift in how we usually think as individuals. Dilhac also contends that “by exposing these biases, ChatGPT is actually doing us a favor: it reflects our own prejudices back at us, which lets us know we have them in the first place” (Soffer, 2023). However, instead of changing social realities, e.g., encouraging women to pursue careers as doctors rather than nurses, Dilhac suggests changing word usage to “smooth prejudices,” without clarifying why or for whom such smoothing is necessary. In contrast to relatively harmless AI-generated texts and images, the consequences of using biased AI data in medicine and healthcare can be much more severe.
Raz and Minari (2023) discuss the implications of AI-derived polygenic risk scores (PRS) based on genome statistics for the white European population, and the consequent underrepresentation of Europeans of African descent: “The ethnicity-related PRS obtained from such AI systems may lead to the detrimental or unfavorable treatment of natural persons or whole groups of persons in healthcare contexts. Furthermore, if the model of PRS-based screening is adopted as standard clinical practice, and if risk scores are produced based on race and ethnicity, it could lead to under- or over-screening” (Raz and Minari, 2023). Despite the frequent use of the terms ‘ethics’ and ‘ethical’ in the Western discourse, these discussions often veer into the realm of ideology. Notably, unlike Russian and Chinese papers, U.S. and EU documents stress inclusivity and gender balance at every stage of AI development. In this context, “human rights activists” sometimes demand and achieve ‘positive’ discrimination. Yet this pursuit of diversity—presented as a fundamental human and natural value—excludes a diversity of values.
James Manyika, Google senior vice-president, maintains that AI systems “…are not aware of themselves. They can exhibit behaviors that look like that. Because keep in mind, they’ve learned from us. We’re sentient beings, with feelings, emotions, ideas, thoughts, and perspectives. We’ve reflected all that in books, in novels, and in fiction. So, when they learn from that, they… build patterns from that” (Pelley, 2023). Russian President Vladimir Putin has stated that technological progress ought to be regulated by traditional culture, with its ideals of goodness and human respect, as expressed by Tolstoy, Dostoevsky, or Chekhov, and by science fiction authors like Belyayev and Yefremov (Artificial Intelligence Journey, 2023).
Russia began to address the issues of AI ethical regulation later than the U.S., the EU, and China; its policies and documents, devised in an attempt to “catch up” with those countries, appear more reactive than proactive.
This situation could be improved through collaboration with foreign experts. Scientific forums could develop ethical regulations for AI that both account for Russia’s economic and social interests and are attractive to countries with similar values. Russia’s proactive position and the strengthening of its discourse in AI ethical regulation will help create a “space of trust”—a crucial factor in achieving practical results in any cooperation.
Globalization vs. Competition
Arguments for the necessity of joint action in AI ethics are compelling. There are historical precedents of international cooperation to prevent global catastrophes and mitigate harm from new technologies, be they nuclear weapons or human cloning. Futurists view AGI as a global phenomenon, and movement towards international cooperation has already begun. In February 2023, the Netherlands hosted the first global summit on Responsible Artificial Intelligence in the Military Domain (REAIM), bringing together representatives from about 100 countries. In May 2023, the G7 summit in Japan initiated the Hiroshima AI Process, and in November of the same year, the Bletchley Declaration was adopted in the UK at the AI Safety Summit.
However, there are no legally binding treaties—or even negotiations regarding treaties—as states fear that their competitors might benefit from a verification regime. The U.S. intelligence agencies have increasingly raised alarm about Chinese dormant malware lurking in critical infrastructure computer networks (Parkinson and Hinshaw, 2024). Counter-accusations frequently come from China: CCTV journalists note that OpenAI “quietly removed a provision prohibiting the military use of its technology” (CCTV, 2024).
Aziz Huq adds that, while China, the U.S., and the EU may publicly profess their wish to cooperate in regulating AI, their actions reflect growing competition and fragmentation. This trend is manifested in U.S. bans on the export of advanced microchips and other technology to China. In response, China has imposed its own export restrictions on rare minerals, gallium, and germanium, which are essential for chip production. Furthermore, China is establishing industrial standards and legal frameworks related to data confidentiality and algorithm disclosure, thus complicating any potential collaboration. Huq also highlights a shift in the approach to data access. Previously, the U.S., as a leader in data processing, advocated for the free movement of data, while Europe, China, India, and Russia moved in the opposite direction—towards data localization. Now, these countries are easing restrictions, while the U.S. is increasingly prohibiting the free flow of data, as evidenced by ongoing lawsuits against TikTok (Huq, 2024).
In practice, however, U.S. sanctions are pushing states to pursue independent development. China has made great strides in developing its semiconductor and software industries.
Although Chinese researchers have long formed a large share of AI specialists working in the U.S., they are increasingly choosing to stay in their home country (Macropolo, 2023).
Behind the U.S. and China, Russia may still have potential for asymmetric actions, but it faces limitations in personnel, software, hardware, and computing power, all of which have been exacerbated by Western sanctions against China.
Russia has developed its own AI technologies, including large language models. Given the limited resources available, public institutions and related private companies are taking the lead (a state-private partnership model that has proven effective for such companies as Samsung). Russia’s human and computational resources are probably still adequate to sustain critical defense and security-related research.
The Russian Strategy for the Development of Artificial Intelligence has already established that “the improvement of regulatory and legal frameworks … and the dissemination of relevant ethical norms … should not hinder the pace of development and implementation of AI solutions” (National Strategy, 2024). In response to calls for a moratorium on work in the field of generative AI, and even more so AGI, the Russian President expressed his belief that it is impossible to prohibit technological development, instead proposing that “Russia … become one of the world’s most conducive jurisdictions for the development of artificial intelligence and bold exploration of the technological solutions that everyone needs” (Artificial Intelligence Journey, 2023).
We believe that countries’ development will increasingly hinge on information and communication technologies, including AI. Given the differences in their levels of development, even the U.S.’s own allies are becoming increasingly vulnerable and suspicious, taking protectionist and anti-globalization measures that could be seen as hostile to the U.S. For instance, shortly after the commercial launch of ChatGPT, Italy, Germany, France, and Portugal began investigations into it or outright banned its use, citing violations of European privacy laws (Kaur, 2023). Ultimately, the AI race represents not only economic and military competition, but also a struggle for leadership, with AI ethics gradually emerging as a crucial element of the global discourse.
* * *
Despite the increasingly obvious need for cooperation on AI and its ethical regulation, interstate competition is leading in the opposite direction. One cause is ethical competition, particularly between the Western mantra about the “struggle between democracy and autocracy,” and the Chinese proposals for joint, mutually beneficial development. The pursuit of superiority brings about greater risks, especially in the military realm: as long as the ‘black box’ issue is not resolved, it will be impossible to guarantee AI systems’ obedience to their commanders.
The COVID-19 pandemic illustrated states’ varying responses to global non-adversarial threats (Likhacheva, 2021), which in turn led to discursive confrontation between the United States and China, with each accusing the other of socio-political failures. AI ethics may soon become a similar arena for interstate competition.
Currently, the governments of the leading AI states are focused on regulating AI research, particularly in military and general AI, through state control. “Since AI has gradually evolved into a foundational part of societal infrastructure essential to national interests, China may create a new state-owned enterprise (SOE) to monopolize AI foundation in China, similar to how SOEs monopolize the energy and telecommunication sectors” (Liu, 2023). Further fragmentation of research and development along state borders could splinter the digital and cybernetic space. Today, the U.S. and China can claim the status of AI great powers, while Russia and the EU, we believe, have the potential to conduct sovereign research at a level sufficient to avoid dependency on others.
However, any digital independence is likely to entail the coexistence of different ethical systems governing AI development and use. The leaders in AI development are likely to set ethical standards according to their own national interests, without troubling themselves with voluntary restrictions. At the same time, normatively-aligned countries may formulate joint AI ethical codes of conduct, creating “ethical coalitions” capable of forming “a space of trust” to achieve the best results in the AI sphere.
The ethics of AI exists in a liquid (Bauman, 2000) state of coming into being (Sztompka, 2000), as we witness another stage of ethical evolution and competition. Since the human user of AI determines both the algorithm and the data, all ideologized approaches based on the manipulation of data will falter against objective approaches, whose outcomes may be ideologically uncomfortable but align with reality. (When fed ideologized data, for example data shaped by positive discrimination, AI will generate false information, entailing negative consequences in various fields, such as medicine.) Caution towards the ideologization of information might give Russia advantages in the competition over AI ethical regulation standards, which could be not only applied domestically but also offered to normatively-aligned countries.
[1] U.S. companies largely rely on the principles established at the Asilomar Conference (2017), while the EU follows the Ethics Guidelines for Trustworthy AI, adopted in 2019. China bases its approach on the Principles for the Governance of a New Generation of Artificial Intelligence, also from 2019. Meanwhile, Russia draws upon its National Strategy for the Development of Artificial Intelligence Until 2030, adopted in 2019 and updated in 2023.
Apps, P., 2019. Commentary: Are China, Russia winning the AI arms race? Reuters, 16 January. Available at: https://www.reuters.com/article/opinion/commentary-are-china-russia-winning-the-ai-arms-race-idUSKCN1P91MT/ [Accessed on 13 March 2025].
Artificial Intelligence Journey, 2023. Artificial Intelligence Journey 2023 Conference. Kremlin.ru, 24 November. Available at: http://en.kremlin.ru/events/president/news/72811 [Accessed 2 October 2024].
Bauman, Z., 2000. Liquid Modernity. Cambridge: Polity.
Bigley, K., 2023. The Artificial Intelligence Revolution in an Unprepared World: China, the International Stage, and the Future of AI. Harvard International Review, 5 April. Available at: https://hir.harvard.edu/artificial-intelligence-china-and-the-international-stage/ [Accessed 2 October 2024].
CCTV, 2024. OpenAI删除了禁止他们的技术被用于军事用途的条 [OpenAI Removes Clause Prohibiting Military Use of Their Technology]. CCTV. Available at: https://news.cctv.com/2024/01/15/ARTIrdHhM4g56I6HAhVQyHzL240115.shtml [Accessed 2 October 2024].
CISS, 2020. 人工智能对国际关系的影响初析 [A Preliminary Analysis of the AI Impact on International Relations]. Center for International Security and Strategy, Tsinghua University, 14 April. Available at: https://ciss.tsinghua.edu.cn/info/xzgd/786 [Accessed 2 October 2024].
Danilin, I., 2021. Американо-китайская технологическая война через призму технонационализма [The U.S.-China Technological War through the Prism of Techno-Nationalism]. Puti k miru i bezopasnosti, 60(1), pp. 29-43. DOI: https://doi.org/10.20542/2307-1494-2021-1-29-43
Fontes, C., Hohma, E., and Corrigan, C., 2022. AI-Powered Public Surveillance Systems: Why We (Might) Need Them and How We Want Them. Technology in Society, Vol. 71. DOI: https://doi.org/10.1016/j.techsoc.2022.102137
Government AI Readiness Index, 2023. Government AI Readiness Index 2023. Oxford Insights. Available at: https://oxfordinsights.com/ai-readiness/ai-readiness-index/ [Accessed 2 April 2024].
Gupta, S., Shrivastava, V., Deshpande, A., Kalyan, A., Clark, P., Sabharwal, A., Khot, T., 2023. Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs. Arxiv.org. Available at: https://arxiv.org/abs/2311.04892 [Accessed 2 October 2024].
Huq, A., 2024. A World Divided Over Artificial Intelligence. Foreign Affairs, 11 March. Available at: https://www.foreignaffairs.com/united-states/world-divided-over-artificial-intelligence [Accessed 2 October 2024].
Kaur, D., 2023. The US and China Are Working towards Regulating ChatGPT-Like AI Tools. Here’s What We Know So Far. Techwair Asia. Available at: https://techwireasia.com/04/2023/the-us-and-china-are-working-towards-regulating-chatgpt-like-ai-tools-heres-what-we-know-so-far/ [Accessed 2 October 2024].
Leike, J., Schulman, J., and Wu, J., 2022. Our Approach to Alignment Research. OpenAI, 24 August. Available at: https://openai.com/blog/our-approach-to-alignment-research [Accessed 2 October 2024].
Likhacheva, A., 2021. Угрозы есть, а врагов нет: парадоксы управления современными кризисами [Threats without Enemies: The Paradoxes of Modern Crises Management]. Rossiya v globalnoi politike, 6 December. Available at: https://globalaffairs.ru/articles/paradoksy-upravleniya-krizisami/ [Accessed 2 October 2024].
Litvak, N. and Pomozova, N., 2024. Искусственный интеллект в политике Европейского Союза и КНР [Artificial Intelligence in the EU and China’s Politics]. Sovremennaya Evropa, 4. DOI: 10.31857/S020170832404003X
Liu, Sh., 2023. Will China Create a New State-Owned Enterprise to Monopolize Artificial Intelligence? The Diplomat, 27 February. Available at: https://thediplomat.com/2023/02/will-china-create-a-new-state-owned-enterprise-to-monopolize-artificial-intelligence/?ref=hir.harvard.edu [Accessed 2 October 2024].
Macropolo, 2023. The Global AI Talent Tracker 2.0. Macropolo. Available at: https://macropolo.org/digital-projects/the-global-ai-talent-tracker/ [Accessed 2 October 2024].
National Strategy, 2024. Национальная стратегия развития искусственного интеллекта на период до 2030 года (с изменениями на 15 февраля 2024 года) [National Strategy for the Development of Artificial Intelligence for the Period until 2030 (amended 15 February 2024)]. Electronny fond pravovykh i normativno-tekhnicheskikh dokumentov. Available at: https://docs.cntd.ru/document/563441794 [Accessed 2 October 2024].
National Security Strategy, 2022. Official Website of the White House. Available at: https://bidenwhitehouse.archives.gov/wp-content/uploads/2022/11/8-November-Combined-PDF-for-Upload.pdf [Accessed on 13 March 2025].
Parkinson, J., and Hinshaw, D., 2024. FBI Director Says China Cyberattacks on U.S. Infrastructure Now at Unprecedented Scale. Wall Street Journal, 18 February. Available at: https://www.wsj.com/politics/national-security/fbi-director-says-china-cyberattacks-on-u-s-infrastructure-now-at-unprecedented-scale-c8de5983 [Accessed 2 October 2024].
Patwardhan, T. and Liu, K., 2024. Building an Early Warning System for LLM-Aided Biological Threat Creation. OpenAI, 31 January. Available at: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation [Accessed 02 October 2024].
Pelley, S., 2023. Is Artificial Intelligence Advancing Too Quickly? What AI Leaders at Google Say. CBS News, 16 April. Available at: https://www.cbsnews.com/news/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16/ [Accessed 2 October 2024].
Position Paper, 2022. Position Paper of the People’s Republic of China on Strengthening Ethical Governance of Artificial Intelligence (AI). Ministry of Foreign Affairs of the People’s Republic of China, 17 November. Available at: https://www.fmprc.gov.cn/eng/zy/wjzc/202405/t20240531_11367525.htm [Accessed 2 October 2024].
Raz, A. and Minari, J., 2023. AI-Driven Risk Scores: Should Social Scoring and Polygenic Scores Based on Ethnicity Be Equally Prohibited? Frontiers in Genetics, Vol. 14. Available at: https://www.frontiersin.org/journals/genetics/articles/10.3389/fgene.2023.1169580/full [Accessed 2 October 2024].
Soffer, V., 2023. What If ChatGPT Was Good News for Ethics? Udem novells, 27 November. Available at: https://nouvelles.umontreal.ca/en/article/2023/11/27/what-if-chatgpt-were-good-for-ethics/ [Accessed 2 October 2024].
Sztompka, P., 2000. Cultural Trauma: The Other Face of Social Change. European Journal of Social Theory, 3(4), pp. 449-466. DOI: https://doi.org/10.1177/136843100003004004
The Global AI Index, 2023. The Global AI Index. Tortoise Media. Available at: https://www.tortoisemedia.com/intelligence/global-ai/ [Accessed 2 October 2024].
Troshchinskiy, P., 2021. Судебная система КНР в эпоху цифровизации: основные направления развития [The Court System of China in the Digital Age: Main Development Directions]. Rossiiskoe pravosudie, 6, pp. 75-82. DOI: 10.37399/issn2072-909X.2021.6.75-82
Zalnieriute, M., Crawford, L.B., and Boughey, J., 2020. From Rule of Law to Statute Drafting: Legal Issues for Algorithms in Government Decision-Making. In: W.Barfield (ed.) The Cambridge Handbook of the Law of Algorithms. Cambridge Law Handbooks. Cambridge University Press, pp. 251-272.