
Artificial Intelligence in National Security: Legal Limits

Summary

Concepts and technologies such as big data, mechatronics and sensorization, automated decision-making, neuromorphic computing, and computer perception are increasingly integrated into National Security and are constantly evolving. Existing national and international legislation stands as the only limit preventing their indiscriminate use. In this sense, this article analyzes the current and potential uses of Artificial Intelligence as a means added to different attack and defense strategies within the framework of National Security, and examines whether the existing legislation limiting its use fulfills its purpose.

Keywords: Artificial intelligence, national security, national and international legislation, artificial intelligence in national security.

Introduction

In certain spaces of analysis it is common to look for a meta-message in every event or image, as is often the case with the covers of the British international affairs and economics magazine The Economist, which in July 2020 published “Towards a new normality 2021–2030.” In that piece, 50 world experts projected how life in society would change based on current and future trends, and among the 20 points discussed they considered the implementation of Artificial Intelligence (AI) in every human act, from domestic matters to transcendental decision-making by governments. This information cannot go unnoticed, since technological advances in recent years have accelerated what many specialists consider a “Fourth Industrial Revolution”.

These changes affect every structure of life in human communities, and expectations range from the fully controlled benefit of scientific advances to apocalyptic visions of a future in which their autonomy is exceeded, given their speed and efficiency. Faced with these possibilities, industrialized countries have begun discussions on the influence of AI, including the first outlines to regulate it or to set specific guidelines for its use by States in defense of their interests.

In the current circumstances, where security has become one of the critical aspects of development, Peru must be at the forefront and make use of every instrument that guarantees the safeguarding of the integrity of its citizens and fulfills the objective of peaceful coexistence, which makes it necessary to rethink the legal limits on the use of AI. Without minimum requirements, the indiscriminate use of AI could threaten, directly or indirectly, the dignity of citizens (in matters such as private life and privacy) and even their physical integrity (weapons with high degrees of autonomy), which is why it is necessary to anticipate these approaching circumstances.

In this sense, this article presents a general panorama of the use of AI in National Security, including its areas of greatest development, and analyzes the current legal criteria that limit its use, evaluating whether these are likely to be effective in the face of upcoming developments.

Start of the Fourth Industrial Revolution

Academics and specialists point out that, since the beginning of the current millennium, the world has entered the first phase of what they call the “Fourth Industrial Revolution”. In this regard, Klaus Schwab, founder of the World Economic Forum, notes that this new restructuring of the planet is tied to the digital world and is characterized by “…a more ubiquitous and mobile internet, by smaller and more powerful sensors that are getting cheaper and cheaper, and by artificial intelligence and machine learning”[1]. Faced with this statement, António Manuel de Oliveira Guterres, Secretary-General of the United Nations, warned in 2018 that the world is not yet ready for this latest revolution, because the possibilities of social chaos and of the indiscriminate use of AI are increasing in every human domain, which would lead to human workforces being replaced by more efficient machines and to the unrestricted use of autonomous weapons.

Since the development of AI is the axis of this process of social transformation, a first problem appears: how to conceptualize it. There is a permanent debate over its definition, since the mere mention of the word “intelligence” complicates matters. Nevertheless, when talking about AI there are recurring elements that, although they do not yield an absolute definition, do help to frame the idea.

For Nicolas Miailhe and Cyrus Hodes, AI is a set of “agents” (programs that run on computer systems) capable of learning, adapting and developing in dynamic and uncertain environments[2]. In this sense, the notion of intelligence is joined to those of autonomy and adaptability through the ability to learn from a dynamic environment. Indeed, the Ibero-American Data Protection Network points out that, although there is no single definition of AI, it can be described as an “umbrella” term, since it covers different ideas and computing techniques that are still evolving, from algorithms to Deep Learning systems[3].
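
To make this notion of an “agent” that learns and adapts in a dynamic environment more concrete, the following minimal sketch (a hypothetical illustration, not taken from the cited authors) shows a simple agent that updates its estimate of each action's value from the feedback it receives, and therefore shifts its behavior when the environment changes:

```python
import random

class LearningAgent:
    """Minimal illustrative agent: it learns the value of each action from feedback."""

    def __init__(self, actions, learning_rate=0.1, exploration=0.1):
        self.values = {a: 0.0 for a in actions}   # estimated value of each action
        self.learning_rate = learning_rate
        self.exploration = exploration

    def choose(self):
        # Occasionally explore; otherwise exploit the best-known action.
        if random.random() < self.exploration:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Move the estimate toward the observed reward (adaptation).
        self.values[action] += self.learning_rate * (reward - self.values[action])


# Hypothetical dynamic environment: the rewarding action changes halfway through.
agent = LearningAgent(actions=["A", "B"])
for step in range(1000):
    rewarding_action = "A" if step < 500 else "B"
    action = agent.choose()
    agent.learn(action, reward=1.0 if action == rewarding_action else 0.0)

print(agent.values)  # the estimates shift with the environment; "B" ends up valued higher
```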

On the other hand, AI can be categorized into four approaches: (1) systems that think like humans (computational systems that process prior information in order to predict events or behaviors), (2) systems that think rationally (which use an analogue of human logic as an alternative for solving problems through inference), (3) systems that act like humans (systems that can perform human functions requiring intelligence), and (4) systems that act rationally (the so-called “technological singularity”: machines with the ability to automate intelligent behavior)[4].

The repercussions of the use of AI are amplified day by day and are becoming part of reality, to such a degree that both countries and international organizations seek common ground to regulate its use; among them, the United Nations, the European Union, and the Organisation for Economic Co-operation and Development (OECD) stand out[5]. The result is a panorama in which AI is becoming ubiquitous in various private and/or state domains, such as National Security.

Artificial Intelligence in National Security

In this context, there are several public and private settings in which the various AI techniques are being developed. Among State Policies, for example, National Security is a primary concern; the Center for Higher National Studies defines it as “…the situation in which the State is guaranteed its existence, presence and validity, as well as its sovereignty, independence and territorial integrity and its heritage, its national interests, its peace and internal stability, to act with full authority and free from all subordination, in the face of all kinds of threats”[6].

Considering that the State is obliged to safeguard its sovereignty, it must have the greatest possible pool of resources and technologies to face all kinds of threats. It is here that AI constitutes a decisive factor, owing to the speed and effectiveness of its results in the attack-defense binomial, a basic element of any National Security protection strategy. In the case of attack, results with no or few collateral effects are sought, and in the case of defense, the total protection of the safeguarded asset is prioritized. However, all of this is no longer limited solely to the physical plane but encompasses the virtual field as well, which is why the figure of cybersecurity appears[7].

At the global level, nations and supranational organizations seek to develop projects that generate new knowledge and contribute to improving the security of countries, both at the level of a single dimension and at the multidimensional level or, as the Higher Center for National Defense Studies (CESEDEN) puts it, across the perfect trinomial that balances the actions of AI applied to the military field: the logical factor (data processing), the physical factor (weapons), and the human factor (continuous knowledge of the situation)[8].

Risks of the use of AI in aspects related to National Security

Among the possible risks of the use of AI in the context of National Security, it cannot be ignored that Peru is a country that acquires external technology and that, currently, there is a technological “race” between two powers: the People’s Republic of China and the United States. On the one hand, Chinese investment flows through clusters built on a triangulation between State support, the party, and private companies, with the aim of consolidating its presence by 2035 and, ten years later, defeating its main rival. On the other hand, the United States continues to lead research in this field. The result of such “competition” is uncertain, although it can be affirmed that there will be regional monopolies and “floating” information in National Security matters that will become a vulnerability when one depends on another State[9].

Peru’s position as a technology-importing country invites the reflection that no human act or creation is infallible and that, in the case of new technologies, the level of uncertainty is high. This does not rule out that, over time, these errors may be relatively overcome by AI itself, which is characterized by its rapid evolution.

In this regard, in cases concerning data protection, the Ibero-American Data Protection Network identifies two essential risks: the preconfiguration of the algorithm and the quality of the information. With the first, the prejudices of the creator could be introduced, consciously or unconsciously; with the second, the way in which the AI is programmed could produce generalizations in its data processing that harm third parties, not to mention that the AI could also ingest false data that would skew its performance[10].
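
The data-quality risk can be illustrated with a deliberately simplified sketch (invented data and labels, not drawn from the cited report): a trivial model “trained” on historical decisions in which one group is under-represented among positive outcomes simply reproduces that imbalance when it predicts.

```python
# Hypothetical historical records: (group, qualified, approved). Group "B" is
# systematically under-approved in the data the model learns from.
historical = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 30 + [("B", True, False)] * 70
)

def train(records):
    """'Learn' an approval rate per group from past decisions."""
    rates = {}
    for group in {g for g, _, _ in records}:
        decisions = [approved for g, _, approved in records if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

def predict(rates, group, threshold=0.5):
    # The model simply reproduces whatever pattern its training data contains.
    return rates[group] >= threshold

model = train(historical)
print(model)                # {'A': 0.8, 'B': 0.3}
print(predict(model, "A"))  # True:  equally qualified applicant is approved
print(predict(model, "B"))  # False: equally qualified applicant is rejected
```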

These risks could arise both from access to low-quality information and from strategic maneuvers by an adversary intended to skew the system’s behavior. This is not mere speculation: systems can fail, and both biases and human errors (intentional or not) can magnify these failures with catastrophic consequences, because AI carries threats into the digital and informational planes. In this way, the use of algorithms modifies the security risk scenarios for citizens, organizations, and States[11].

Another risk to take into account is AI’s vulnerability to cyberattacks, which would allow the system to be manipulated remotely and made to act contrary to its objectives when it is “hacked”, illegally exposing private information. This risk increases if a single algorithm is relied upon, since it also carries risks associated with the security of its code: if the code is corrupted or altered, it could leave organizations and States defenseless[12]. From another perspective, those responsible for managing AI could use and abuse such systems through cyber surveillance as a method of social control, or by carrying out disinformation campaigns, since this software can analyze huge amounts of data from citizens or companies and “profile” the monitored subject for purposes other than State protection.
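
The idea of an adversary making a system act contrary to its objectives can likewise be sketched with a hypothetical data-poisoning example (simplified and purely illustrative): flipping a fraction of the training labels shifts the decision threshold of a basic detector so that some malicious events slip through as benign.

```python
import random

random.seed(0)

def train_threshold(samples):
    """Learn a 1-D detection threshold as the midpoint between the class means."""
    benign = [score for score, label in samples if label == "benign"]
    malicious = [score for score, label in samples if label == "malicious"]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2

# Hypothetical training data: a single "risk score" per network event.
clean = ([(random.gauss(2.0, 0.5), "benign") for _ in range(200)] +
         [(random.gauss(5.0, 0.5), "malicious") for _ in range(200)])

# An attacker who can tamper with the training pipeline flips some labels.
poisoned = [(score, "benign" if label == "malicious" and random.random() < 0.4 else label)
            for score, label in clean]

print(f"clean threshold:    {train_threshold(clean):.2f}")     # about 3.5
print(f"poisoned threshold: {train_threshold(poisoned):.2f}")  # pushed higher, so some
                                                               # malicious events now pass
```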

International and national legislation

AI should not be thought of as a weapon per se, since it is only a man-made means of solving certain tasks. The use made of it is another matter, turning it into a sophisticated instrument or intermediary for the protection of National Security, which brings AI into the field of ethical debate and, subsequently, the legal one.

In the legal sphere, it can be noted that, after the Second World War, any discussion dealing specifically with technology described as “intelligent” was very remote. Authors such as José Luis Calvo Pérez trace the original debate on the matter to the “Convention on prohibitions or restrictions on the use of certain conventional weapons that may be considered excessively harmful or of indiscriminate effects”, dated September 15, 1980[13]. To this may be added the “Martens Clause”, a declaration of principles in favor of humanity whose conclusions are extensively applicable to similar situations[14].

With the passage of time and in the face of the possibilities of error, vulnerability, or misuse of AI, concern about rapid scientific progress outpacing a sluggish legal debate leads States to seek to expand regulation or make it more specific. This has been occurring in the European Union, where the issue is perceived as going beyond the protocols governing the “race” between Chinese and American advances, with the aim of avoiding becoming a technological colony of either power[15]. In this regard, the analysis of the Federal Polytechnic School of Zurich concludes that a global agreement on the use of AI must be based on five ethical principles: transparency, justice and equity, non-maleficence, responsibility, and privacy. However, the same analysis underscores that there are substantial disagreements over how these principles are interpreted, why they are considered important, and how they should be implemented[16].

Additionally, it should be emphasized that the regulation of AI should address four important basic axes of the debate: (1) the black box (regulation should not be generalized, because the algorithms used “are qualitatively different … much of the processing, storage and use of the information is carried out by the algorithm itself and in a not very transparent way within a practically inscrutable black box of processing”[17]); (2) the biases of the algorithms (related to the system’s programmer and the biases that could be programmed in, or to a poorly developed algorithm that can cause situations of discrimination); (3) the ethics of selection (understood as cases in which the machine makes decisions in a conflictive scenario that, for a human being, would pose a moral dilemma; for such systems it must be defined who, in that situation, would be responsible); and (4) the handling of information (as with Big Data systems that handle and analyze large amounts of data in a short time, regulation should focus on “how and to whom -people or organizations- are granted access to such data. Information is power and, indeed, an undue burden is placed on the individual to manage their privacy rights”[18]).

In this regard, countries with the technological capacity to research and develop AI as a weapon or security asset seek to confine any legislation affecting them to International Humanitarian Law, since they consider that greater specificity in the law would be an obstacle to future research[19]. However, António Guterres, Secretary-General of the UN, speaking at the technology conference in Lisbon, was emphatic in pointing out that any autonomous weapon based on AI must be prohibited by International Law: if AI has a function in favor of humanity, it lies in uses that involve replacing men with machines, and consequently, in the case of AI as an autonomous weapon, “weapons […] will have the possibility of killing by themselves”[20].

In the Peruvian case, the formulation of legislative proposals on issues related to AI in Defense and National Security is still incipient. Its advances are focused instead on Cybersecurity; that is, on AI not as a weapon of war but as part of the defense mechanisms in cyberspace. This is corroborated by the “Cybersecurity Report 2020: risks, progress and the way forward in Latin America and the Caribbean”, which notes that Peru does not have a National Cybersecurity Strategy but has nonetheless made progress on Cybersecurity matters[21].

In 2000, the Congress of the Republic of Peru promulgated the “Law that Incorporates Computer Crimes into the Penal Code – Law No. 27309”, which penalized those who violated a database in some way; five years later, the National Police created the High Technology Crime Investigation Division (DIVINDAT). Subsequently, in 2011, the “Personal Data Protection Law – Law No. 29733” was enacted to guarantee the fundamental right to the protection of personal data, while in 2013 the “Law on Computer Crimes – Law No. 30096” updated the regime, given that systems and databases were being compromised by new cyberattacks that may evolve with the advancement of science and technology[22]. Finally, in 2019, the committee opinion on the Cybersecurity Law bill, which seeks to establish the regulatory framework for digital security in Peru, was approved, and one of the most specific regulations on the subject was promulgated: the “Cyber Defense Law – Law No. 30999”, which regulates “military operations in and through cyberspace in charge of the executing bodies of the Ministry of Defense within their sphere of competence, in accordance with the law”. With this norm, the Peruvian State can conduct operations with its forces in cyberspace, always grounded in the Charter of the United Nations (Article 51) and in the provisions of International Human Rights Law and International Humanitarian Law[23].

Conclusions

From what has been analyzed, it is clear that the process of adaptation to the so-called “Fourth Revolution” has begun, although specialists point out that the exponential growth of technological advances, set against the almost non-existent debate on the subject, leaves the world unprepared for such social change. The characteristic par excellence of the “Fourth Revolution” is the presence of AI, which lacks a definitive concept, although there are aspects that serve as a basis for defining it, such as autonomy and the ability to adapt through learning. Since the industrialized powers hold the monopoly on next-generation AI, countries like Peru become “captive acquirers” of the technology, which means higher costs for programming and updates.

The international debate on the legality of AI when used in autonomous weapons is usually led by the powers that research and develop it, which try to confine it to the principles of International Humanitarian Law; in the area of AI applied to Cybersecurity, by contrast, the rules are constantly being updated. In the Peruvian case, since 2000 the aim has been to stay at the legislative forefront of scientific advances, though directed especially at safeguards and penalties against database violations.

Final Notes

  1. Klaus Schwab, “The Fourth Industrial Revolution” (Madrid: Penguin Random House, 2016), 13.
  2. Nicolas Miailhe and Cyrus Hodes, «La troisième ère de l’intelligence artificielle», Comprendre l’essor de l’intelligence artificielle (Veolia Institute), https://www.institut.veolia.org/sites/g/files/dvc2551/files/document/2018/03/Facts-AI-03_La_troisieme_ere_de_lintelligence_artificielle_-_Nicolas_Miailhe_Cyrus_Hodes.pdf (consulted on May 17, 2021).
  3. Ibero-American Data Protection Network, “General recommendations for the treatment of Data in Artificial Intelligence.”, (Ministry of Justice and Human Rights. June 21, 2019) https://www.minjus.gob.pe/wp-content/uploads/2019/12/RECOMENDACIONES-GENERALES-PARA-EL-TRATAMIENTO-DE-DATOS-EN-LA-IA.pdf (cited May 20, 2021).
  4. Jairo Andrés Villalba Gómez, “Emerging Bioethical Problems of Artificial Intelligence.” Diversitas: Perspectives in Psychology (Universidad Santo Tomás) XII, nº 1 (February 2016): 137 – 147.
  5. For example, countries like Canada, China, Denmark, the United States, France, Finland, India, Italy, Japan, Mexico, Singapore, South Korea, Sweden, Taiwan, the United Arab Emirates, and the United Kingdom have national plans on AI. See María Belén Abdala, Santiago Lacroix Eussler, and Santiago Soubie. “The politics of Artificial Intelligence: its uses in the public sector and its regulatory implications.” (Center for the Implementation of Public Policies for Equity and Growth. October 2019), https://www.cippec.org/wp-content/uploads/2019/10/185-DT-Abdala-Lacroix-y-Soubie-La-pol%C3%ADtica-de-la-Inteligencia-Artifici….pdf (cited June 23, 2021).
  6. Center for Higher National Studies, “Research lines”, (Center for Higher National Studies. 2019), https://www.caen.edu.pe/wordpress/direccion-de-investigacion/lineas-de-investigacion/ (accessed June 14, 2021).
  7. Abdala, Lacroix Eussler and Soubie, The politics of Artificial Intelligence.
  8. In the same text, the case of the European Defense Agency (EDA) is analyzed, which is the conjunction of professionals of the specialty in groups called “CapTechs” (Capability Technology Group), which currently number twelve, see Higher Center for National Defense Studies (CESEDEN) “Artificial intelligence applied to defense.” Higher Center for National Defense Studies (June 2018). http://www.ieee.es/Galerias/fichero/docs_trabajo/2019/DIEEET0-2018La_inteligencia_artificial.pdf (cited June 14, 2021).
  9. Joaquín Fournier Guimbao, «Artificial Intelligence: a race towards a technological future» (Spanish Institute for Strategic Studies. July 13, 2021), http://www.ieee.es/Galerias/fichero/docs_opinion/2021/DIEEEO89_2021_JOAFOU_Inteligencia.pdf (cited on August 4, 2021).
  10. Ibero-American Data Protection Network, “General recommendations for the treatment of Data in Artificial Intelligence” (Ministry of Justice and Human Rights. June 21, 2019), https://www.minjus.gob.pe/wp-content/uploads/2019/12/RECOMENDACIONES-GENERALES-PARA-EL-TRATAMIENTO-DE-DATOS-EN-LA-IA.pdf (cited May 20, 2021).
  11. Abdala, Lacroix Eussler and Soubie, The politics of Artificial Intelligence.
  12. Ibid., 13.
  13. This Convention has its origins in 1968, the year in which the Secretary-General of the United Nations, as well as international organizations and the International Committee of the Red Cross, dealt with “the need to prohibit and limit the use of certain methods and means of war, and asked him to take whatever measures were necessary to comply with the provisions of the resolution”. United Nations Organization, “Convention on prohibitions or restrictions on the use of certain conventional weapons that may be considered excessively harmful or having indiscriminate effects” (United Nations Audiovisual Library of International Law. September 15, 1980), https://legal.un.org/avl/pdf/ha/cprccc/cprccc_ph_s.pdf (cited July 28, 2021).
  14. Based on the statement of the Russian delegate to the Hague Peace Conference of 1899, von Martens, whose original version, drawn up in the Convention regarding the laws of land warfare (Hague II) of July 29, 1899, may be translated as follows: “While a more complete Code of the laws of war is being formed, the High Contracting Parties deem it appropriate to declare that, in cases not included in the regulatory provisions adopted by them, the populations and belligerents remain under the guarantee and regime of the principles of the Law of Nations recommended by the customs established among civilized nations, by the laws of humanity and by the demands of public conscience”. Rupert Ticehurst, “The Martens Clause and the Law of Armed Conflict” (International Review of the Red Cross, 1996), 324-339.
  15. Fournier, Artificial Intelligence: A Race to a Technological Future.
  16. Rafael De Asis, “Artificial Intelligence and Human Rights.” (E- Archive of the Carlos III University of Madrid. April 2020), https://e-archivo.uc3m.es/bitstream/handle/10016/30453/WF-20-04.pdf?sequence=1&isAllowed=y (cited June 16, 2021).
  17. Abdala, Lacroix Eussler and Soubie, The politics of Artificial Intelligence, 17.
  18. Ibid.
  19. José Luis Calvo Pérez, “International debate on lethal autonomous weapons systems. Technological, legal and ethical considerations” (Ministry of Defense – Spanish Navy. General Marine Magazine 278, nº 3, 2020), 457-469.
  20. His statement was as follows: “As Secretary-General of the United Nations, my concern is to ensure that the UN is capable of supporting cutting-edge technologies to make the most of their positive impact, both on people and on the planet, and, in turn, limit its misuse”. He added that “the militarization of artificial intelligence represents a serious danger… [because] it will make it very difficult to avoid the escalation of conflicts and guarantee respect for international humanitarian law on the battlefields”, and concluded by noting: “Machines that have the power and discretion to take human lives are politically unacceptable, they are morally disgusting and must be prohibited by international law”. UN News, “Autonomous weapons must be prohibited under international law” (United Nations, November 5, 2018), https://news.un.org/es/story/2018/11/1444982 (cited June 16, 2021).
  21. Inter-American Development Bank; Organization of American States, “Cybersecurity Report 2020: risks, progress and the way forward in Latin America and the Caribbean.” (Inter-American Development Bank. July 2020), https://publications.iadb.org/publications/spanish/document/Reporte-Ciberseguridad-2020-riesgos-avances-y-el-camino-a-seguir-en-America-Latina-y-el-Caribe.pdf (cited July 16, 2021).
  22. The purpose of the Law is “… to guarantee the fundamental right to the protection of personal data, provided for in article 2 numeral 6 of the Political Constitution of Peru, through its adequate treatment, within a framework of respect for the other fundamental rights that are recognized in it”, Congress of the Republic, “Law on the protection of personal data – Law No. 29733.” (El Peruano – Legal Norms. June 21, 2011). https://diariooficial.elperuano.pe/pdf/0036/ley-proteccion datos-personales.pdf (consulted on July 16, 2021).
  23. Article 51.- No provision of this Charter shall impair the inherent right of self-defense, individual or collective, in the event of an armed attack against a Member of the United Nations, until the Security Council has taken the necessary measures to maintain international peace and security. The measures taken by the Members in exercise of the right of self-defense shall be immediately communicated to the Security Council, and shall not affect in any way the authority and responsibility of the Council, in accordance with this Charter, to exercise at any time the action it deems necessary in order to maintain or restore international peace and security. United Nations Organization, “Charter of the United Nations” (United Nations. June 26, 1945), https://www.un.org/es/about-us/un-charter/chapter-7 (cited May 23, 2021).


The ideas contained in this analysis are the sole responsibility of the author, without necessarily reflecting the thoughts of the CEEEP or the Peruvian Army.

Image: CEEEP
