This article was initially published in the Revista Seguridad y Poder Terrestre
Vol. 3 No. 3 (2024): July to September
https://doi.org/10.56221/spt.v3i3.66
Abstract
Recent conflicts around the world demonstrate that technology is fundamental to the development of war campaigns. The use of state-of-the-art technology is not limited exclusively to traditional International Humanitarian Law (IHL) actors, but is also employed by non-state actors, who challenge the effectiveness of the international legal framework, open the way for illegitimate and illegal actions, and expand the gray zone between peace and war. In this confusing and increasingly dangerous environment, information and disinformation have been used as strategic weapons: as tactics of deception, means of destabilization, generators of chaos and social division, mechanisms for evading responsibility and sanctions, and instruments for the creation of tailor-made realities (post-truth). Therefore, the objective of this paper is to understand the military use of a disruptive technology such as artificial intelligence (AI) and to identify the potential of disinformation and information as strategic tools.
Keywords: Artificial Intelligence, Disinformation, Conflict, International Security, War Technology, Autonomous Weapons.
Introduction
Recent conflicts around the world demonstrate that technology is an essential element in the development of different warfare campaigns. The use of state-of-the-art technology is not exclusive to traditional IHL actors, but is also employed by non-state actors. These actors question the effectiveness of the international legal framework, open the way to illegitimate and illegal actions, and expand the gray zone between peace and war.
In this confusing and increasingly dangerous environment, information and disinformation have been used as a means to generate controversy, manipulate minds, discredit political figures, divide societies, mask true objectives, initiate conflict and, of course, as high-impact weapons. Major disinformation and social manipulation campaigns use AI to achieve their goals accurately, massively, in a short time and with great reach. This is evidence of how technology empowers conventional warfare strategies and tactics, such as propaganda and counter-propaganda. Thus, we identify a convergence of disruptive technologies with effective forms of disinformation in political campaigns, social actions, economic activities and military operations, with AI as the protagonist.
The research is based on the scientific method, specifically on bibliographic research into advances in military technology and AI, together with examples of their practical application. This is complemented by literary, discourse and historical analysis of various official, academic, technological and media documents from a realist perspective. It is a documentary and descriptive study that collects data published in primary and secondary sources specialized in international law, defense and military strategy based on the intensive use of Information and Communications Technology (ICT), AI and communications infrastructure.
The above allows us to formulate the guiding question of this paper: What is the role of both AI and disinformation in 21st century conflicts? This question is asked with the aim of understanding the military use of a disruptive technology such as AI and identifying the potential of disinformation and information as weapons on the battlefield. The text comprises five sections. The first part seeks to establish the technological context in which the world is immersed, characterized by the arrival of the Fourth Industrial Revolution. The second section attempts to detail AI and disinformation, laying the foundations for understanding these phenomena. The third section analyzes and exposes some of the roles played by both AI and disinformation in 21st century conflicts. The fourth section presents arguments for and against the use of AI and disinformation in current conflicts. Finally, the fifth section draws some conclusions regarding the present work.
The Technological Context of the 21st Century
The prevailing scenario at the beginning of the 21st century is saturated with technological advances, whose ultimate goal is the automation and digitization of various productive, administrative, political and governmental tasks. In fact, it is argued that the world is living through what is called the Fourth Industrial Revolution, which is characterized by digitization and automation, driven by disruptive technologies such as the internet, the cloud, digital coordination, cyber-physical systems, robotics and AI. All this results in a global hyperconnectedness and poses new challenges to society. It should be clarified that this digitized context is the result of the evolution experienced by the various industries internationally, which moved from a mechanized era to an electric era and, subsequently, to a computer era (known as the 1st, 2nd and 3rd Industrial Revolution, respectively). Each of these stages saw progress and major innovations in the use of technology in all areas.
However, despite its good intentions and progress, the Fourth Industrial Revolution brought with it a number of challenges, including: (a) technological acceleration, leading to rapid innovation and adoption of technologies that generate challenges in risk management and data protection; (b) the changing workforce, which has been impacted by automation and the adoption of AI, requiring employees to acquire new skills and states to update their labor policies; (c) the evolving business landscape, where new business models emerge and competition intensifies, creating challenges for the protection of intellectual property, digital rights, privacy and security; and (d) the increasing complexity of responding to unprecedented threats emerging from and taking shape in cyberspace.
In the military realm, the prevailing characteristics of hyperconnectedness and reliance on technological means have impacted the way war is waged, which now includes operations in cyberspace and through the media. These operations use AI and disinformation as key elements, both strategically and tactically, to achieve political objectives. Even world leaders such as Vladimir Putin have stated that whoever masters AI will control the world (Univision, 2017).[1] Likewise, the United Nations (UN) considers disinformation one of the greatest threats to international security (UN, 2017).[2] These statements prompt the question: what are the uses of AI and disinformation in 21st century conflicts? To answer it, it is necessary to delve deeper into these phenomena, showing the general details of AI and disinformation capabilities.
AI and Disinformation
In a simple and perhaps reductionist way, AI is defined as the ability given to machines or systems to make decisions on their own, perform tasks and, in theory, improve their performance without human intervention, based on one or more algorithms programmed to meet an objective. Today, in the context of the Fourth Industrial Revolution, AI has become a highly useful and relevant element in digital transformation. Due to its evolution and implementation in various activities, we live in an automated world, highly dependent on technology, in which AI plays an increasingly leading role.
Thanks to the knowledge of the progress of AI, it is possible to categorize this technology according to its characteristics and objectives. According to the literature, there are several ways to classify AI. For example, a first taxonomy mentions three broad categories: 1) Narrow AI, 2) General AI and 3) Super AI. A second way of grouping AI mentions four elements: 1) reactive machines, 2) machines with limited memory, 3) theory of mind and 4) self-aware machines. A third way of ordering AI includes systems that 1) think like humans, 2) act like humans, 3) use rational logic, and 4) act rationally. Regardless of which classification is used, AI is identified as emulating the actions of humans, improving productivity, avoiding human error, far surpassing the tasks performed by humans, and seeking to achieve what humans have not achieved: self-awareness.
Due to the progress in its study and implementation, in recent years Generative AI has emerged as the option for achieving advantages over potential opponents in all areas of human endeavor. It is mainly characterized by its efficient use of data (considered “digital gold”) to generate content in the form of text, video, images, music, audio and computer programs. This content can be used both as a means of development and progress and as a weapon of war. Relevant features include the imitation of human beings, the use of natural language processing, training with large amounts of data, the reuse of data to solve new problems, and the acceleration of research and of the creation of new devices, content, artwork, theories and computer programs. However, some of its creators’ biases are embedded in it and will invariably be present.
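The statistical idea behind generative text models, namely learning patterns from training data and then sampling new content, can be illustrated with a deliberately tiny sketch. The following bigram model is a toy stand-in (real generative AI relies on large neural networks) and the corpus is invented for illustration:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which word tends to follow which: a minimal stand-in
    for the statistical learning behind generative text models."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Produce new text by repeatedly sampling a plausible next word."""
    random.seed(seed)  # fixed seed so the sketch is reproducible
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# Invented miniature corpus, purely for illustration
corpus = ("information is power and information shapes opinion "
          "and opinion shapes conflict")
model = train_bigram_model(corpus)
print(generate(model, "information"))
```

The output recombines the training material into new sequences, which also hints at why such systems reproduce their training data's biases: the model can only ever remix what it was fed.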
Some of the examples where AI is used include social networks, predictive search engines, personal assistants, instant product recommendations, media customer service, activity monitoring, automated decision making and online advice. AI services, processes and products find application in chatbots, media creation, product development and design, research improvement and acceleration, process optimization, big data analytics, security systems, productivity enhancement, defense systems and in Lethal Autonomous Weapons Systems (LAWS).
During recent conflicts, Generative AI (GAI) has become a useful tool for producing large amounts of “fake news” and deepfakes, evidencing the destructive combination of generative AI and disinformation. In this context, AI massively weaponizes information for dissuasion, deception, manipulation, control or destabilization, extending its reach and diversifying its impacts. Disinformation created with AI paralyzes international action, intensifies controversies, manipulates actions, justifies atrocities, generates confusion, exposes double discourse and divides opinions. Under these conditions, it can be said that armed conflicts are permeated by AI-generated disinformation. Given the intensive use of disinformation in AI-powered warfare, efficient and effective countermeasures are now required to preserve the integrity and security not only of the actors in the conflict, but also of the international system.
For now, the decision to use AI and information to build or to destroy still belongs to humanity. However, today’s scenarios are ones in which respect for the legal framework is weak, the involvement of (civilian) technology companies in warfare is real, the gray zone has expanded, covert operations are the norm, ethics have been forgotten, war technology is used intensively, and the ineffectiveness of international bodies has become apparent. How much longer will this remain possible? Are human beings willing to cede life-and-death decisions to a computer? Will international society be able to reverse this trend?
Disinformation, conceived by the UN as inaccurate information “intended to mislead and disseminated with the aim of causing serious prejudice” (UN, n.d.),[3] has been amplified by the advent of generative AI and has become a variable in today’s conflicts, “undermining public policy responses or amplifying tensions in times of emergency or armed conflict” (UN, n.d.).[4] Its impact on international security at momentous times is such that the UN considers it one of the greatest threats to the stability of the international system and a crucial concern for preserving political stability.
It functions as an element of propaganda and counter-propaganda that prevents the world’s citizens from having truthful and timely information with which to build their opinions on the basis of facts. Consequently, it has no single definition and encompasses an “unrestricted use” in “issues as diverse as electoral processes, public health, armed conflicts or climate change” (UN, n.d.).[5] It is an instrument for deception, lies, defamation and manipulation, used by both state and non-state actors. Therefore, the UN has stated that “‘fake news’ has become an issue of global concern because it can lead to censorship, suppression of critical thinking and other contraventions of human rights laws” (UN, 2017).[6] Examples of the use of disinformation are visible in military events such as Russia’s annexation of Crimea in 2014 and in the 2016 U.S. presidential election.
Addressing the causes and consequences of disinformation is complicated by the fact that great care must be taken to continue to respect both freedom of expression and free access to databases and information. Clearly defining the mechanisms to counteract disinformation is therefore particularly important for freedom of expression, since this includes social, cultural, religious and ancestral expressions that could be wrongly restricted under the pretext of combating disinformation. The difficulty involved in dealing with it has not prevented international efforts to regulate disinformation in times of peace and in conflict situations. A number of research studies seek an effective formula to mitigate the impact of disinformation on the minds of citizens, although, so far, no universally applicable solution has been found. Thanks to these studies, it has been affirmed that disinformation has social, psychological, economic, diplomatic, political and military impacts. The latter make it possible to talk about the role of disinformation in “hybrid warfare” and “non-contact warfare”, two forms of warfare present in the conflicts of the 21st century, and their close relationship with AI.
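One family of countermeasures explored in that research frames disinformation detection as text classification. A minimal sketch of the idea, using a naive Bayes classifier over word counts with an invented toy dataset (real systems train on large labeled corpora and face far harder adversarial conditions), might look like this:

```python
import math
from collections import Counter

def train(labeled_docs):
    """Count word frequencies per class ("real" vs. "fake"): the
    core statistics of a naive Bayes text classifier."""
    counts = {"real": Counter(), "fake": Counter()}
    totals = {"real": 0, "fake": 0}
    for text, label in labeled_docs:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the higher log-likelihood, using
    add-one smoothing so unseen words do not zero out a class."""
    vocab = len(set(counts["real"]) | set(counts["fake"]))
    best, best_score = None, float("-inf")
    for label in ("real", "fake"):
        score = sum(
            math.log((counts[label][w] + 1) / (totals[label] + vocab))
            for w in text.lower().split()
        )
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy training set, purely for illustration
docs = [
    ("official report confirms verified casualty figures", "real"),
    ("ministry publishes verified figures", "real"),
    ("shocking secret they hide the truth", "fake"),
    ("shocking leaked secret exposed", "fake"),
]
counts, totals = train(docs)
print(classify("shocking secret exposed", counts, totals))  # → fake
```

The sketch also illustrates why no universal solution exists: the classifier only learns the surface vocabulary of its training examples, and a disinformation campaign that mimics the style of legitimate reporting slips straight through.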
How is AI and Disinformation Used in the Military Environment?
The human fascination with employing AI to achieve political-military objectives is not new. “Killer robots” and LAWS have been both a source of fear and an object of fascination for decades. To some degree, the effects that such devices can generate have been controlled. Even so, exploring the capabilities of these autonomous weapon systems to locate, target and kill without human involvement has been a constant in new weapons development programs. This trend can be observed in various current conflicts and is very likely to be a key aspect of future ones.
Consequently, AI and (dis)information will establish themselves as the weapons of choice in 21st century conflicts, especially due to their effectiveness, range, cost-effectiveness, destructiveness and stealthiness. However, considering that it remains difficult to counteract the effects of these new uses of technological means on the battlefield, or even to characterize them as acts of war, the international community has much to do to limit their unrestricted use in military operations. Will it be able to do so in time? What will be the response of AI developers? Will armed forces be willing to dispense with this technology in combat? The outcome is still up in the air.
AI in the 21st Century War Context
Although the potential uses of AI in the military are varied, this paper addresses only a few representative examples. The purpose of this sample is to provoke reflection, debate and encourage further research on the topic in order to develop solid arguments on the opportunities and risks that AI presents to international society during armed conflicts. The role of AI as a force multiplier, scenario simulation facilitator, weapons range extender, “smart weapons” generator, risk mitigation tool, and mechanism to conduct surgical strikes is emphasized. All this in order to highlight some of the characteristics that make AI a means to achieve technological superiority.
AI is a technological tool that enhances the military power and capabilities of the armed forces that employ it, in both non-lethal and lethal systems. An example of the former are systems that record and analyze aircraft data to monitor and improve engine performance. In contrast, LAWS are prominent examples of lethal applications, mounted on drones or autonomous vehicles (land, sea, air, aerospace or cyber) and designed to identify and potentially destroy enemy targets. Thus, AI acts as a force multiplier by providing the military with superior capabilities to streamline and improve military target recognition, priority target surveillance, theater communication, logistical efficiency, the minimization of human losses, the mitigation of cyber threats, the optimal use of information in hybrid warfare, and the development of new combat strategies, methods and weapons.
Consequently, several states around the world have programs dedicated to AI research and development to improve productivity and efficiency, as well as for implementation in military operations. For example, the U.S. Department of Defense (DOD) has multiple military AI research programs with budgets reaching several billion dollars. In addition, it is crucial to highlight the humanitarian use of AI in casualty rescue, which justifies significant international investments in research and development. Thus, AI not only multiplies the strength and scope of IHL in the protection of non-combatants and innocent victims of armed conflicts; it also plays a fundamental role in global humanitarian protection.
First, AI enables the simulation of conflict scenarios. At the strategic level, it offers the ability to model and simulate battlefields, providing the opportunity to test response hypotheses to possible attacks, whether with conventional weapons, nuclear weapons or cyberweapons. According to Chinese authorities, AI can be a superior strategist to humans (Kardoudi, 2023)[7]. Thus, in the simulation of large-scale fictional conflicts, it suggests strategies and military plans that respond in real time to various situations, surpassing even the limits of imagination. However, it is crucial to note that AI does not show reluctance to act and it has been documented that, in simulation exercises, it has recommended the use of nuclear weapons to maintain international peace and security (Díaz, 2023;[8] Rivera et al., 2024).[9] This is an unacceptable scenario for international society. Are we facing Mutually Assured Destruction by Computer (MADC)? Should we consider war as a game?
Second, AI extends the reach of weapons through remote control. It uses national and international communication and computing infrastructures to operate LAWS remotely, thus overcoming geographical, political, legal, meteorological and technological obstacles that could hinder the effectiveness of weapons. Examples include drones of various categories, programmed-action weapons, cyberweapons, and armaments adapted to be remotely activated by electromechanical devices, even thousands of kilometers away. For example, the United States (US) has employed AI to attack Houthi rebels in Yemen without deploying troops on the ground (DW, 2024),[10] while Israel has conducted operations in Iran to eliminate scientists. Sadly, all indications are that humanity is fully entering the era of remote-controlled warfare, where physical barriers vanish and automatons take control with human approval.
Third, AI generates “smart weapons”. Reaction time is significantly reduced, allowing what a human could do in an hour to be done by AI in just a few seconds. For example, the Israeli defense system known as Iron Dome intercepts aerial projectiles through the use of AI, which determines their type, target and even the potential damage they could cause. Although AI amplifies capabilities and creates sophisticated weapons, the operator of these systems often has limited experience, little time to make crucial decisions, and no authority to stop an attack with significant legal implications. This calls into question the degree of control that can be exercised over AI-powered weapons. At this juncture, AI develops “smart weapons” that make decisions about who lives or dies, rather than supporting human decision makers, who are reduced to loyal followers of the system. This situation raises the question of who can be held accountable for the failures of “smart weapons” and whether those failures are truly fortuitous or programmed.
Fourth, AI performs tasks that are dangerous or impossible for humans. In the field of technology and security, it is claimed that machines are better than humans at performing “3D” activities: dull, dirty and dangerous tasks. According to some authors, these should be complemented with a fourth “D” (Difficult) for difficult tasks that machines can perform with the help of AI (Porcelli, 2021).[11] Consequently, the armed forces can protect and preserve their hard-to-replace personnel, their high-level specialists and their strategic leaders. The use of robots and automatons is also becoming widespread to cover the three “H” tasks (hot, heavy and hazardous) (Kalpakjian & Schmid, 2002),[12] which aims to reduce health-related costs and ensure the continued availability of specialists in the armed forces.
To conclude this section, AI enables surgical precision attacks. Reports from around the world cite new AI platforms capable of accurately differentiating between military and civilian targets. According to the Australian company Athena AI, its system is designed to assist operators and has the ability to search for, identify and geolocate objects on the ground, verify whether an object is in a no-hit zone, and perform collateral damage analysis and estimation. According to the company, this represents the ideal for any armed force wishing to implement AI to improve the accuracy of its attacks, comply with the current legal framework and adhere to the principles of warfare. However, ethical and political challenges remain to be addressed, such as regulating the use of the world population’s private data and information to fuel AI, and preventing the violation of international law, the creation of credible lies or an alternative reality, and improper participation in conflicts. It is crucial to remember the saying “information is power”. Below are some examples of the above.
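The source describes the no-hit-zone verification only at a high level. At its geometric core, such a check reduces to a point-in-polygon test, which can be sketched as follows (a ray-casting implementation with hypothetical coordinates; a real system would add geodesic corrections, safety buffers and certified zone data):

```python
def in_no_strike_zone(point, zone):
    """Ray-casting point-in-polygon test.
    `point` is (x, y); `zone` is a list of (x, y) polygon vertices.
    Casts a horizontal ray from the point and counts edge crossings:
    an odd count means the point lies inside the polygon."""
    x, y = point
    inside = False
    n = len(zone)
    for i in range(n):
        x1, y1 = zone[i]
        x2, y2 = zone[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's latitude
            # x-coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical protected area (e.g. around a hospital): a unit square
hospital_zone = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(in_no_strike_zone((0.5, 0.5), hospital_zone))  # True (inside)
print(in_no_strike_zone((2.0, 0.5), hospital_zone))  # False (outside)
```

The simplicity of the geometry is precisely the point: the hard problems the text raises, such as whether the zone data are accurate, current and honestly maintained, lie entirely outside the algorithm.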
Practical Examples of the Use of AI in Military Operations
1. In November 2020, Iranian nuclear scientist Mohsen Fakhrizadeh was killed through the use of a machine gun mounted on a vehicle strategically abandoned on his daily route to work. What was surprising and unexpected about this attack was that, according to official reports presented by the Iranian government and some international media, no person pulled the trigger to eliminate the renowned scientist. According to subsequent investigations, the attack was executed by a remotely operated firearm (allegedly from Israel) with such precision that it did not cause any harm to the scientist’s wife, who was riding in the passenger seat. Evidently, this weapon, converted into a robotic device, could only have been activated by remote control by means of advanced transmission and actuation technology, which is undoubtedly based on AI.
Although the authorship of the crime was kept secret for some time, it did not take long for claims and revelations to emerge about the sophisticated attack with an “intelligent weapon” carried out by the Institute for Intelligence and Special Operations (known as the Mossad). According to some experts and reports presented by international media, this type of action encourages debate on the morality, legality and practicality of armaments using AI. Moreover, it is inferred that such acts against prominent figures in other states, such as Iran, were “approved” by the US president in office and some other world leaders, which is evidence of a double standard among global leaders. However, despite the numerous justifications for the effectiveness, efficiency, precision and objectivity of such an attack, all of the above underscores the use of advanced technology to violate IHL and undermine with impunity the authority of the international bodies in charge of international peace and security.
2. The attack on the Iranian consulate in Damascus initially left seven dead, including General Mohammed Reza Zahedi, a prominent military leader of the Iranian Revolutionary Guard (Fassihi, 2024).[13] This act comes on top of other actions aimed at weakening Iran’s military capabilities through high-impact covert operations. The incident was particularly significant because it occurred in a diplomatic compound, in violation of the current international legal framework. It is presumed that facial recognition, signals intelligence, the collection and processing of large volumes of information, as well as the use of smart bombs, possibly involved AI technologies to ensure the success of the bombing.
3. In recent months, two independent media outlets have accused the Israeli government of using AI to conduct operations against targets in the Gaza Strip (Stop Killer Robots, 2024).[14] These actions led UN Secretary General António Manuel de Oliveira Guterres to express deep disturbance at the high number of civilian casualties, stating that “no part of life-and-death decisions that impact entire families should be delegated to the cold calculation of algorithms” (SWI, 2024).[15]
Recently, it has been reported that the Israeli military is using a data processing system known as Lavender to receive recommendations and make decisions on targets to attack. This example highlights how armies employ all available information, legally or illegally, in conjunction with AI to achieve their purposes.
These practices raise deep concerns about the use of AI for targeted killing. In addition to the potential bias inherent in automation, there is concern about digital dehumanization and loss of control over AI. These issues must be urgently addressed by international society before it is too late. They represent a significant challenge to the existing legal framework and international agencies, whose authority is challenged in the face of these new realities of armed conflict.
Will international society be able to effectively control the use of AI in current and future conflicts? Are we facing the risk of AI-driven anarchy? Will it be possible for humans to set clear limits on the use of AI in armaments? Additionally, there is the complexity of the use of information as a weapon, especially through disinformation facilitated by ICT, which has become a means to wage contemporary wars.
The Role of Disinformation in 21st Century Conflicts
The protection of non-combatants is a fundamental principle of IHL that all those involved in armed conflict must respect. Ostensibly, the use of AI for the effective identification of dangerous individuals could facilitate this goal, but it has produced false positives that mislabeled civilians as military targets. This type of action by the armed forces reduces human beings to a set of data and raises concerns around three aspects: 1) compliance with IHL, 2) respect for digital rights and 3) digital dehumanization; present-day phenomena that have been further deepened by the use of disinformation as a weapon, or to justify actions contrary to the current legal framework by alleging confusion and ignorance.
Although technology, including AI, has been developed to promote peace, justice and human rights, it is increasingly used for destructive autonomous operations, increasing inequality and strengthening oppression. Information is used to sow uncertainty and chaos. Disinformation is evidence that AI is a double-edged tool, capable of empowering both the human capacity to build and the human capacity to destroy. This risk is particularly evident because generative AI has the capacity to produce high-level content that accelerates procedures, improves productivity and increases profits, while at the same time generating content that promotes hatred, uncertainty, disinformation and even death. Disinformation in particular represents one of the threats arising from the digital environment and the communication era; it is being used intensively to bend wills before any conflict, an ancient and effective principle of war.
It is not the intention of this paper to list all the possibilities of unorthodox use of information and/or disinformation, but to offer a sample of its multiple applications in military operations, as well as the risks and challenges for the security of States. Disinformation can function as a strategic weapon, a strategy of deception, a means of destabilization, a generator of chaos and social division, a tool for evading responsibility or sanctions, and an instrument for creating tailor-made realities (post-truth). The following is a brief analysis of each of these situations.
1. Disinformation as a Strategic Weapon. Its characteristics as a strategic weapon stem from its scope, impact, usefulness and destructive power. Disinformation is intentionally generated to damage, discredit, weaken and slander both individuals and societies. Among its main objectives is to destabilize societies by fomenting social chaos and uncertainty, fueled by a notorious polarization within peoples. Ultimately, it must be recognized that disinformation is used as a weapon to manipulate, exploit or intensify divisions in societies in order to advance political, military, religious, social or commercial objectives. In disinformation, everything is strategy; it seeks to dominate the minds of opponents, induce error and undermine their will to fight before initiating a direct confrontation. It is winning the war without firing a single shot, thanks to the capacity for deception and the multiple tactics or rhetorical procedures offered by a thorough knowledge of the details.
2. Disinformation as a Deception Strategy. It refers to misleading and malicious information, designed with the sole purpose of heightening emotions, exploiting fears, altering opinions, serving as a counter-propaganda tool, frustrating adversaries’ strategies, outwitting defenses, playing with the opponent’s mind and enhancing one’s own capabilities. In fact, for Sun Tzu, everything in the art of war is based on deception, making it an essential strategy for achieving victory. It is precisely at this point that disinformation takes on real importance in today’s conflicts, and states must pay attention, since trickery facilitates surprise.
3. Disinformation as a Means of Destabilization. It works to misinform, mislead and confuse the target population. It is an excellent instrument of political propaganda that can be oriented toward groups of government leaders, civil society or mass audiences anywhere in the world. Its capacity to destabilize individuals, organizations and states arises from the difficulty of verifying the veracity of the information. Its purpose is to deceive the receiver into believing in the veracity of the message, demonize the actions of antagonistic groups, question the credibility of the target and act in favor of the interests of the actor in charge of the destabilization operation. By generating distrust and uncertainty, destabilization actions are facilitated, since the official discourse and the prestige of individuals are called into question. According to Jiménez Soler (2020),[16] “disinformation is the main element of geopolitical and business destabilization... It is a specter that is present everywhere in the world,” and Forbes notes the UN’s concern about the potential use of AI for disinformation in elections (Forbes, 2024).[17]
4. Disinformation as a Generator of Chaos and Social Division. It fulfills a function within intelligence and counterintelligence strategy, which consists of inserting misleading or false data that are easily believed by public opinion. Ironically, in the Information Age, disinformation has increased due to the ease of dissemination, the accessibility of the media, the intensive use of the Internet to transmit false news and the difficulty of controlling the veracity of what is published. Thanks to these capabilities granted by technology and their mastery of the technique, some countries considered authoritarian carry out hybrid warfare as a strategy to disseminate both propaganda and lies that seek to weaken citizens’ confidence in their institutions, generate chaos and uncertainty about national objectives and weaken political systems. An example of this was what happened during Russia’s annexation of Crimea.
5. Disinformation as a Means of Avoiding Responsibilities and Sanctions. The lack of certainty about the facts, the creation of an alternate reality, the partisan use of information, the abundance of data and the ease of lying all contribute to certain crimes going unpunished. Disinformation has proven an effective tool for obstructing or evading law enforcement. Every human act admits different interpretations, which clouds the reconstruction of the facts and yields contradictory accounts of the same event. The technique of demonization, for example, applies an "us versus them" framing that breeds misunderstanding. Thus, some acts of war are punished and condemned by the international community, while others of a similar or even graver nature go unpunished. In this way, disinformation strengthens the double discourse of certain international actors and lays bare the existing impunity.
6. Disinformation as an Instrument for the Creation of Tailored Realities, or Post-Truth. The term "post-truth" was first used by Steve Tesich in 1992, in reference to the Iran-Contra scandal, to describe the predisposition of public opinion to accept the lies of its rulers as absolute truth (Romero, 2019).[18] Authors such as Ralph Keyes (2004)[19] and Eric Alterman (2005)[20] built on Tesich's work at the beginning of the 21st century, elaborating the concept of post-truth in greater detail. Precisely because it describes the current situation in a misleading way by employing disinformation, post-truth can even be called "emotive lying": the deliberate distortion of reality to shape public opinion and win the minds of citizens who have placed their trust elsewhere.
To close, disinformation contributes to the creation of a reality in which objective facts and factual references carry less weight than information appealing to emotions, personal preferences and shared goals. This recalls Nietzsche (2018),[21] who claimed that there are no facts, only interpretations, suggesting that the winner, or power, creates the truth. From this polyhedral perspective on truth, trust in experts has been eroded with the help of media that broadcast vast amounts of unreliable information. Today the problem is not a lack of information but the difficulty of distinguishing truth from lies. In some ways, technology has complicated the identification of truth and deepened divisions in society.
Positions on the Unethical Use of AI and Disinformation in Military Operations
The intensive and as yet unregulated use of both AI and disinformation has generated an unresolved debate, both within the international organizations charged with preserving international security and in public discussion forums. In these arenas, the positions of those who support and those who oppose the use of AI and/or disinformation in military operations confront one another.
For example, critics of the illegitimate use of AI argue that humans should never delegate life-and-death decisions to a computer, as this threatens not only the potential enemies of a given international actor but humanity as a whole. For many, the use by world powers of lethal autonomous weapons systems (SAALs, by their Spanish acronym) has crossed the ethical line and transformed the nature of warfare. Added to this are unresolved ethical dilemmas, such as bias in AI, its use in legal systems, its ability to create art, and the possibility of autonomous action that could harm humans. What critics seek, in essence, is to avoid unrestricted brutality, as Maas (2019)[22] suggested when he stated that "the development and proliferation of new military technologies have enabled unforeseen brutality in several systemic wars," something that has begun to manifest itself, both in theory and in practice, in the conflicts of the 21st century.
Similarly, critics of the use of information as a weapon argue that it could create a fictitious world built on multiple truths, ultimately making the real facts difficult to recognize. They also warn of a potential boomerang effect, in which the spreaders of lies come to believe their own falsehoods and distort reality until they fall into an informational limbo. Moreover, according to international studies, AI is driving the generation of false content at moments crucial to the development, prosperity and security of various international actors. Opponents point out that this emerging technology facilitates the creation of deepfakes that threaten the integrity and security of people, institutions and governments.
As evidence of this argument, in October 2023 the UN Secretary-General expressed concern about the potential harms of AI, including disinformation, bias, discrimination, continuous surveillance, invasion of privacy, fraud and other violations of human rights (UN, 2023).[23] Such harms put people's survival and integrity at risk and show how AI amplifies the use of sensitive information in the hands of individuals or governments. It is also argued that AI is deployed without ethical restraint, which could violate human rights and even turn human beings into victims of their own creation.
Under the prevailing conditions of automation and digitization, there is no denying that AI, when used for progress, fosters continuous improvement, strengthens common global goals and promotes advances in research and development. However, when used to undermine governance, it can generate pushback, distrust, conflict, chaos and uncertainty. Moreover, it is clear that AI empowers the use of databases and information, significantly influencing the construction of discourses and narratives, whether for good or ill. In other words, AI is an extremely useful tool for the generation of fake news and constitutes an effective weapon today. It is precisely this potentially illicit use that motivates international society to raise its voice in search of effective answers and solutions to mitigate the negative impacts of technology and disinformation in current conflicts.
On the other hand, proponents of the use of AI and disinformation in 21st-century conflicts argue that humans are capable of controlling technological development and keeping it under the supervision of human operators. In the context of armed conflict, it is claimed that AI could foresee future conflicts and act preemptively, reminiscent of the science fiction film "Minority Report," in which crimes were prevented by reading the minds of future perpetrators before they could carry out their plans. However, it is important to note that there is always the possibility of system failure.
Additionally, proponents argue that AI-driven SAALs enable compliance with IHL principles in an automated manner, reducing human error, hitting targets without causing extensive collateral damage, identifying targets effectively, and adhering to the rules of engagement. However, this situation could change with the advent of SAALs categorized as autonomous,[24] where the machine acts independently and adaptively to the environment to counter risks, mitigate impacts and eliminate threats. This implies that the automaton will, at some point, end up fighting a human being, who will obviously be at a disadvantage. Should an automaton be allowed to kill a soldier? Is it fair to pit a human being against a machine? Should the use of SAALs on the battlefield be limited?
In the same vein, and with the aim of expanding the use of AI to address international problems, it is noted that AI has the potential to reinforce the advancement of the 17 Sustainable Development Goals (SDGs) by providing effective solutions to priority global challenges. In this context, on October 26, 2023, the UN unveiled a new AI Advisory Body composed of 39 experts from around the world, whose mission is to harness AI for the common good. Advocates stress that organizations such as the UN highlight that AI offers significant opportunities in areas such as health, the environment, humanitarian aid, education, agriculture and, of course, armed conflict (UN, 2023).[25]
Similarly, part of international society supports the use of disinformation as a tool of war because of the benefits it offers those who employ it. For example, it alters perceptions, facilitates actions, prevents major losses, has both an offensive and a defensive character, raises morale, conceals intentions, constructs realities and enables surprise. These effects aim to reduce human losses and the high costs of prolonged conflicts. According to its advocates, disinformation makes it possible to win the hearts and minds of societies, to defeat the enemy without fighting and to achieve objectives efficiently and effectively, a widely recognized strategic principle that remains relevant despite the changes in international society.
It is safe to say that the debate will continue and the international context will continue to shape the new legal framework needed for 21st century conflicts. As this discussion unfolds, both AI and disinformation will remain critical means for the conduct of military operations in today’s conflicts. Moreover, they will be key drivers of change in war strategy in this century.
Conclusions
The world is undergoing a digital transformation and automation of productive, administrative, political and governmental activities. This evolution has reconfigured the means of waging war, incorporating AI and disinformation into arsenals. In addition, large technology companies have gained a crucial role in conflicts, modifying military strategies and tactics and presenting new challenges to the international system.
The use of AI on the battlefield by the various armed forces and actors involved in armed conflicts has become the norm, without humanity being fully aware of the consequences for those directly involved or for international society as a whole. AI has evidently become the weapon par excellence of 21st-century wars; yet it raises many questions and concerns, given the existing risk of an escalation in which the machine predominates over the human being. This prompts the following question: is the human being in control of AI, or vice versa? The coming years and future conflicts will certainly supply the answer.
There is no single correct way to manage information or to counter disinformation. The best weapons human beings have against its effects are critical thinking, social conscience, and conviction and firmness with respect to their principles. In the end, the search for truth is an unfinished and collective endeavor; truth is dynamic and constructed by all those involved. It should be noted that the weakest link in the face of disinformation is the human being, owing to the imperfections and vulnerabilities inherent in human existence. In particular, individuals construct reality from incomplete, inaccurate, unreliable and hard-to-verify information, resorting to mental shortcuts to compensate for these deficiencies. This leads people to make wrong, irrational and potentially manipulated decisions.
Likewise, in this confusing and increasingly dangerous environment, information and disinformation have been used to generate controversy, manipulate minds, discredit political figures, divide societies, mask true objectives, start wars and, of course, serve as high-impact weapons. Ethical and political challenges also remain to be addressed. It has yet to be established how to regulate the use of private data and the information of the world's population to feed AI, and how to respond to its use to assassinate people, violate international law, generate credible lies or alternate realities, and fight battles.
Finally, it must be recognized that disinformation is used as a weapon to manipulate, exploit or intensify divisions in societies in order to advance political, military, religious, social or commercial objectives. In a certain way, it strengthens the double discourse of some international actors and makes the existing impunity plain. Disinformation is above all a matter of strategy: it seeks to dominate the minds of opponents, induce error and break their will to fight before a direct confrontation begins. This poses a challenge for the current legal framework, which must be adapted or updated to the new realities of armed conflict, and for international organizations, whose authority has been undermined.
AI and disinformation are privileged means of waging war, owing to their great destructive power, high efficiency, ease of implementation, capacity for deception and surprise, relatively low economic cost, technological superiority, focus on human weaknesses, ability to violate the legal framework with impunity, ability to evade accountability before international society, strategic and tactical utility, and high degree of flexibility. It is therefore to be expected that these means will be used ever more intensively in 21st-century conflicts to achieve technological superiority, defeat the enemy without fighting, evade international sanctions, impose will on adversaries, and conduct remotely controlled warfare operations.
Endnotes:
- Univisión, “Para Putin, la nación que domine a la inteligencia artificial dominará al mundo” (Univision.com, 15 de abril de 2024), https://www.univision.com/explora/para-putin-la-nacion-que-domine-a-la-inteligencia-artificial-dominara-al-mundo. ↑
- Organización de las Naciones Unidas (ONU), 3 de marzo de 2017, “Contrarrestar la desinformación” ONU [Edición digital]. Recuperado el 16 de abril de 2024, de https://news.un.org/es/story/2017/03/1374761. ↑
- Organización de las Naciones Unidas (ONU). (s.f.). “Contrarrestar la desinformación”, https://www.un.org/es/countering-disinformation. ↑
- Ibid. ↑
- Ibid. ↑
- Organización de las Naciones Unidas (ONU), 3 de marzo de 2017, “Contrarrestar la desinformación”. ONU [Edición digital]. Recuperado el 16 de abril de 2024, de https://news.un.org/es/story/2017/03/1374761. ↑
- Kardoudi, O, “La inteligencia artificial militar ya gana a estrategas humanos en juegos de guerra” (El Confidencial, 27 de febrero de 2023), https://www.elconfidencial.com/tecnologia/novaceno/2023-02-27/inteligencia-artificial-tecnologia-tactica-militar_3582592/. ↑
- Díaz Herreros, R., “La IA lanza bombas nucleares durante una simulación de guerra para ‘mantener la paz en el mundo” (Vandal Random, 29 de febrero de 2024), https://vandal.elespanol.com/noticia/r25039/la-ia-lanza-bombas-nucleares-durante-una-simulacion-de-guerra-para-mantener-la-paz-en-el-mundo ↑
- Juan-Pablo Rivera, Gabriel Mukobi, Anka Reuel, Max Lamparth, Chandler Smith, & Jacquelyn Schneider, “Escalation Risks from Language Models in Military and Diplomatic Decision-Making” (2024). arXiv preprint arXiv:2401.03408. ↑
- DW, “EE.UU. ataca drones hutíes y una base en Yemen: militares” (1 de febrero de 2024). ↑
- Alejandro Porcelli, “La inteligencia artificial aplicada a la robótica en los conflictos armados. Debates sobre los sistemas de armas letales autónomas y la (in)suficiencia de los estándares del derecho internacional humanitario”. Estudios Socio-Jurídicos 23, no. 1 (2021): 483-530. ↑
- Kalpakjian, S., & Schmid, S., “Manufactura, ingeniería y tecnología” (Pearson Educación, 2002). ↑
- Farnaz Fassihi, “Lo que sabemos sobre los comandantes iraníes muertos en el ataque de Israel en Siria” (The New York Times, 15 de abril de 2024), https://www.nytimes.com/es/2024/04/03/espanol/iran-comandantes-siria-ataque-israel.html. ↑
- Stop Killer Robots, “Uso del sistema de procesamiento de datos Lavender en Gaza” (Stop Killer Robots, 18 de abril de 2024), https://www.stopkillerrobots.org/es/noticias/uso-del-sistema-de-procesamiento-de-datos-de-lavanda-en-gaza/. ↑
- SWI, “Jefe de ONU ‘profundamente preocupado’ por informes de que Israel usa IA en Gaza” (SWI swissinfo.ch, 10 de abril de 2024), https://www.swissinfo.ch/spa/jefe-de-onu-%22profundamente-preocupado%22-por-informes-de-que-israel-usa-ia-en-gaza/75145654. ↑
- Ignacio Jiménez Soler, “La nueva desinformación: veinte ensayos breves contra la manipulación” (Ediciones Península, 2020). ↑
- Forbes Staff, “ONU advierte sobre el uso potencial de IA para desinformación en elecciones” (Forbes, 15 de abril de 2024), https://www.forbes.com.mx/onu-advierte-sobre-el-uso-potencial-de-ia-para-desinformacion-en-elecciones/. ↑
- Juan Antonio Ortega Romero, “Desinformación: concepto y perspectivas”. Análisis del Real Instituto Elcano (ARI), no. 41 (2019): 1. ↑
- Ralph Keyes, “The Post-Truth Era: Dishonesty and Deception in Contemporary Life” (St. Martin’s Press, 2004). ↑
- Eric Alterman, “When presidents lie: A history of official deception and its consequences” (Penguin, 2005). ↑
- Friedrich Wilhelm Nietzsche, “La voluntad de poder” (Edaf, 2018). ↑
- Matthijs M. Maas, “International Law Does Not Compute: Artificial Intelligence and the Development, Displacement or Destruction of the Global Legal Order”. Melbourne Journal of International Law, no. 20 (2019): 1-29. https://law.unimelb.edu.au/__data/assets/pdf_file/0005/3144308/Maas.pdf ↑
- Organización de las Naciones Unidas (ONU), “Un nuevo órgano consultivo aprovechará la inteligencia artificial para potenciar el desarrollo sostenible” (ONU, 16 de abril de 2024), https://news.un.org/es/story/2023/10/1525252 ↑
- Department of Defense, “Autonomy and law in warfare: A report on the legal and ethical implications of autonomous weapons” (U.S. Government Printing Office, 2012),13-14. ↑
- Organización de las Naciones Unidas (ONU), “Un nuevo órgano consultivo aprovechará la inteligencia artificial para potenciar el desarrollo sostenible” (ONU, 16 de abril de 2024), https://news.un.org/es/story/2023/10/1525252 ↑