By:

William J. Perry Center for Hemispheric Defense Studies, Washington, DC

Generative Artificial Intelligence: A Prospective Analysis of its Implications for Security and Defense

This article was originally published in the journal Security and Land Power:

Vol. 4, No. 1 (2025): January to April

Summary

Humanity is currently experiencing the so-called Fourth Industrial Revolution (4IR), which represents a profound break with the past. While previous revolutions unfolded in the intellectual, economic, technological and, in some cases, political and social fields, this new stage is distinguished by the accelerated, exponential and convergent deployment of disruptive technologies. The first 25 years of the 21st century have seen what Fareed Zakaria calls “the age of revolutions,” characterized by profound transformations in politics, culture, identity and geopolitics, driven by a digital environment that has replicated the advances of the previous 250 years. It is essential to understand the physical and psychological effects of these changes, which carry both positive and negative consequences for society. Among the most significant innovations are generative artificial intelligence (GAI) and quantum computing, which constitute, on the one hand, an existential challenge and, on the other, an opportunity for progress in various fields of knowledge. In particular, GAI stands as a key element in the military and strategic domain, transforming decision-making and conflict management while introducing unprecedented risks in terms of cybersecurity, disinformation and the arms race. A lack of balance between technological progress and human oversight could seriously affect global stability. Finally, this article analyzes GAI from a prospective perspective, assessing the threats, challenges and opportunities it poses for security and defense, both in the present and in the near future.

Keywords: Generative artificial intelligence (GAI), security and defense, cybersecurity, Fourth Industrial Revolution (4IR), threats and opportunities.

Introduction

A few years ago, artificial intelligence (AI), and in particular generative artificial intelligence (GAI), occupied a marginal space in public debate. In November 2022, however, the technology company OpenAI presented a computer program capable of simulating a conversation with a user: the conversational chatbot ChatGPT, whose name derives from the acronym for Generative Pre-trained Transformer.[1] Today, after the advances recorded over the last two years, AI has become a central topic in the global media, generating concern among leaders in science, business, journalism, public service, education, security, defense and international politics.

In their book Superagency, Reid Hoffman and Greg Beato highlight the uniqueness of ChatGPT compared to other technologies, emphasizing its ability to interact in a fluid and accessible way with the public. This versatile and compelling system allows complex concepts such as quantum mechanics or the composition of consumer price indexes to be clearly explained. Nevertheless, the phenomenon of “hallucinations” – errors in the generation of information – raises both astonishment and concern.[2]

Within the context of accelerated digital transformation, GAI stands as a fundamental technology in the military and strategic sphere. Its capacity to generate knowledge, analyze large volumes of data and create advanced simulations makes it an essential tool for tactical and strategic decision-making. However, its use entails significant risks, such as the spread of disinformation, vulnerability to cyberattacks and the acceleration of an AI-based arms race. Global competition for leadership in this technology could generate geopolitical tensions, redefining power dynamics and transforming security and defense into a highly automated and unpredictable environment.

If humanity fails to strike a balance between technological advancement and human control, its own creation risks becoming a threat. A gap between the development of GAI and the human capacity to oversee it could lead to scenarios in which technology makes decisions without the mediation of reason and ethics. This risk is particularly critical in the security and defense domain, where unsupervised automation could compromise global stability. In this sense, GAI is not only an opportunity to improve operational efficiency, but also an urgent challenge in terms of governance, regulation and strategic control.

The general public, as well as many specialists in the field, remains unaware of significant aspects of this new era of GAI. Innovative capabilities, and human reactions to them, have the potential to change the relationship between people, reality and truth. The pursuit of knowledge, the physical evolution of humanity, the dynamics of diplomacy and the international system, as well as security and defense, are fundamental issues for the coming decades that should already be a priority for leaders in all sectors.

GAI's capabilities, impressive as they are, may soon seem modest given its accelerated growth. Powers not yet imagined, and whose scope is not yet understood, are destined to affect everyday life. Future systems will enable significant and mostly beneficial advances, resulting in improved health and the generation of wealth. However, these capabilities come with technical and human risks, both known and unknown. GAI and other technologies already converge and operate in ways their creators did not foresee and, in some cases, do not understand, and this pattern is likely to continue.

Many of the current challenges in this technology persist, as in the case of conversational agents similar to ChatGPT. These are based on large language models (LLMs), a specific type of machine-learning construct designed for language processing: it can recognize and produce text. These models process and generate language using a neural network architecture, in which multiple interconnected nodes (artificial neurons) perform a complex series of computations.
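
The architecture described above can be illustrated with a deliberately tiny sketch. This is not a real language model: the layer sizes, weights and three-word "vocabulary" are invented for illustration only. It shows the basic mechanic the paragraph describes, layers of interconnected nodes whose weighted computations turn an input into a probability over possible next words.

```python
import math

# Illustrative sketch of the neural-network idea behind an LLM: layers of
# "nodes" (neurons) whose interconnected computations score a vocabulary.
# All weights and the vocabulary are invented stand-ins, not a trained model.

VOCAB = ["peace", "security", "defense"]

def layer(inputs, weights, biases):
    # Each node sums its weighted inputs, then applies a nonlinearity (tanh).
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A two-layer network: 2 inputs -> 3 hidden nodes -> 3 vocabulary scores.
hidden = layer([0.5, -1.0],
               weights=[[0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]],
               biases=[0.0, 0.1, -0.1])
probs = softmax(layer(hidden,
                      weights=[[0.2, -0.5, 0.3],
                               [0.6, 0.1, -0.2],
                               [-0.4, 0.9, 0.5]],
                      biases=[0.0, 0.0, 0.0]))
print(max(zip(probs, VOCAB)))  # the "next word" this toy network favors
```

Real LLMs differ in scale (billions of nodes, a transformer architecture, weights learned from vast text corpora) but the core operation, weighted sums passed through nonlinearities and normalized into a distribution over the vocabulary, is the same.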

GAI seems to compress human time scales: goals proposed for the future appear closer than one might think. Machines capable of defining their own objectives, for example, are closer than expected. If we are to meet the risks involved, we need to react and act in the shortest possible time. As the relationship between humans and machines becomes ubiquitous, society will need to determine the appropriate nature of that relationship.

Despite the obstacles, it is essential to regulate GAI, and the studies and legislative proposals under way in democratic countries require greater political will. Some political leaders have weighed in, including the president of Colombia, Gustavo Petro, who stated in remarks to the media that the world's wealthiest powers control digital spaces called “clouds” in a discretionary manner and charge for their services. Faced with this dystopian scenario, Petro proposed that “the clouds should be common property, in order to ensure that people have access to the knowledge developed by humanity without being charged for it.” Such statements reflect a limited grasp of the magnitude of the challenge facing humanity, stemming from unfamiliarity with the potential of emerging, accelerating, exponential and convergent digital technology, in particular GAI, as described in this article.[3]

The most ambitious legislation to date is the Artificial Intelligence Act adopted by the European Union (EU) in 2024 after an extensive deliberation process. Under it, research and deployment of GAI are classified on a scale of risk levels. Any use that causes direct harm, violates fundamental human rights or compromises critical systems and infrastructure, including public transport, health or welfare, is considered unacceptable. High-risk applications are subject to strict oversight and accountability mechanisms: high-risk GAI must be transparent, secure, subject to human control and adequately documented.
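
The risk-based logic described above can be sketched as a simple classification routine. This is an illustrative simplification, not the legal text: the tier names follow the regulation's broad categories, but the example use cases and obligation labels are assumptions chosen for the sketch.

```python
# Illustrative sketch of the EU AI Act's risk-based logic: proposed uses are
# sorted into tiers, and high-risk uses carry obligations such as transparency,
# human oversight and documentation. Example uses are simplified assumptions.

RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulation causing harm"},
    "high": {"critical infrastructure", "border control", "medical triage"},
    "limited": {"chatbots"},
    "minimal": {"spam filtering"},
}

HIGH_RISK_OBLIGATIONS = ["transparency", "security",
                         "human oversight", "documentation"]

def assess(use_case: str) -> dict:
    """Return the tier, permissibility and obligations for a proposed use."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            if tier == "unacceptable":
                return {"tier": tier, "allowed": False, "obligations": []}
            obligations = HIGH_RISK_OBLIGATIONS if tier == "high" else []
            return {"tier": tier, "allowed": True, "obligations": obligations}
    # Anything not listed defaults to the lowest tier in this sketch.
    return {"tier": "minimal", "allowed": True, "obligations": []}

print(assess("critical infrastructure"))
```

The point of the sketch is the structure of the regime: prohibition at the top tier, conditional permission with stacked obligations in the high-risk tier, and progressively lighter treatment below.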

Regulating the use of GAI in the security and defense domain is not limited to enacting new rules. It also encompasses the implementation of governance structures, codes of conduct, arbitration procedures, contractual compliance mechanisms and oversight systems. Integrating these elements requires the active participation of the citizenry in order to ensure the appropriate use of this technology in a highly sensitive area.[4]

These questions can be approached from the historical perspective of safety and efficiency or from a philosophical one. Individuals, societies, nations, cultures and religions will need to establish the limits, if any, to the authority of GAI in the security domain. Likewise, it will be necessary to decide whether this technology will be allowed to act as an intermediary between humans and reality.

In the near future, humanity will be forced to choose between preserving the traditional primacy of human inquiry, even at the cost of ceding leadership in the generation of new knowledge to GAI, or accepting the limits of the biological intellect and forging a renewed alliance with GAI in the realm of advanced thinking. Society will therefore have to decide whether to define its own goals and use GAI as an instrument to achieve them, or to delegate the formulation of those goals to the technology itself. Most urgent of all, humanity must give human dignity a modern and sustainable definition, one capable of providing a philosophical framework to guide future decisions in this area.

In the context of the advancement of GAI and its present and future capabilities, Henry Kissinger, a scholar of geopolitics and global conflict, argues that approaches to defense, security and diplomacy must be strategically rethought. GAI is emerging as a determining factor in international relations, characterized by its immunity to fear and bias. This technology introduces a new possibility of objectivity in strategic decision-making, which can benefit belligerent and pacifist actors alike. However, preserving the subjectivity inherent in human wisdom is essential to ensure the responsible use of force in conflict situations. In any scenario, GAI will reveal both the best and the worst expressions of humanity.[5]

The Era of GAI and Cybersecurity

GAI is a technology designed to perform tasks that traditionally require human intelligence, and it is rapidly becoming a reality. Machine learning, the process by which this technology acquires knowledge and develops skills in periods considerably shorter than those of human learning, has been steadily expanding across applications, standing out in areas such as security and defense.

Today, machine learning with deep neural networks has produced insights and innovations that long eluded human thinkers, generating text, images and videos that appear to have been created by humans. GAI, driven by new algorithms and increasingly abundant and affordable computing power, is becoming ubiquitous. One example of the technological competition between China and the United States is the stir caused by Chinese tech companies developing their own AI-powered chatbots. Two Chinese GAI models, DeepSeek-V3 and DeepSeek-R1, have been promoted on the strength of their technological capabilities, compared with those of US firms such as OpenAI and Meta (Hoskins and Rahman-Jones 2025). An investigation has since been opened, however, into the alleged theft of technology from US companies by DeepSeek.[6]

Humanity, in collaboration with GAI, has developed an innovative and extremely powerful mechanism for exploring and organizing reality, one that is, in many respects, inscrutable to human understanding. GAI accesses reality differently than humans do. If the feats performed by this technology become benchmarks, it will be able to reach aspects of reality in a unique way. Its operation portends a breakthrough toward the essence of things, a progress that philosophers, theologians and scientists have pursued with partial success for centuries. As with all technology, however, GAI is defined as much by its practical effectiveness as by its capabilities and promise.

The advance of GAI may be inevitable, but its fate is not sealed. Its advent is both historically and philosophically significant. Attempts to limit the development of this technology, should they materialize, would allow properly trained future generations to deal with the implications of their own inventiveness and keep the technology under control, ensuring a better balance between innovation and responsibility. Non-human forms of logic have been developed, endowed with a scope and acuity beyond human capabilities. Despite this, GAI has so far proven complex and inconsistent: in some cases it achieves superhuman levels of performance, while in others it commits basic errors with absurd results. With the possibility of intangible software taking on social functions traditionally reserved for humans, the question arises as to how the evolution of GAI will influence human perception, cognition and interaction.

As humankind approaches the limits of its cognitive capacity, it has turned to the use of computers to enhance thinking and overcome these restrictions. These tools have shaped a digital environment distinct from the physical realm in which human activity traditionally took place. As dependence on digital technology grows, a new era is dawning in which the rational mind is no longer solely responsible for discovering, understanding and cataloging the elements that make up the world and its reality.

Digitalization has transformed every level of human organization and knowledge. Electronic devices, such as computers and telephones, provide unprecedented access to information, and companies have become data collectors in the digital environment, leveraging search engines and user-generated information. This gives corporations power and influence beyond that of many sovereign states. Wary of ceding power to the private sector, governments have begun to exploit this arena themselves, applying fewer regulations and restrictions. They have also designated cyberspace as a strategic domain in which they must innovate to outperform their competitors, one of the great challenges for the public-private partnerships essential to the proper and secure use of cyberspace.

Society at large has little understanding of what has happened, and is happening, in cyberspace. This is explained, in part, by the speed of events and the abundance of information. Despite its achievements, the digitalization process has led to less contextualization and conceptualization. The new digital generations, at least for now, do not consider it necessary to formulate the concepts that, throughout history, have compensated for the limitations of collective memory. Search engines now employ GAI to answer queries, and in this process humans delegate relevant aspects of their thinking to the technology. The information obtained, however, is not self-explanatory: it depends on context and, to be useful, or at least meaningful, must be interpreted through the lens of culture and history.

The advantage of contextualizing information is that it initiates a process that transforms information into understanding. In turn, this facilitates the formation of convictions that translate into wisdom. However, the Internet provides information from millions of sources and users, which deprives individuals of the solitude and time needed for sustained reflection, an element that historically has been crucial for the development of convictions. As solitude and time diminish, both the strength of convictions and fidelity to them weaken. These elements, combined with wisdom, facilitate access to and exploration of new horizons.

The digital world, and GAI in particular, shows little patience for wisdom; its values are shaped by approval rather than introspection. This reality contradicts the Enlightenment idea that reason is the primary component of consciousness. Furthermore, the historical constraints imposed by distance, time and language have become obsolete, as the digital realm offers an inherent connectivity of remarkable reach.

With the expansion of information, computer programs have been used to classify, refine and evaluate data through patterns, guiding the responses to queries directed at GAI, which completes sentences in text messages, identifies the book or establishment sought, and suggests articles and entertainment based on observed behavioral patterns. As GAI is woven into everyday life, the traditional role of the human mind in shaping, organizing and evaluating decisions and actions is transformed.

Security, Defense and Technology in the New World Order

Throughout history, security and defense have been fundamental pillars for the survival of any organized society. While culture has evolved in its values and politics in its interests and aspirations, the need for self-defense, whether autonomously or through alliances, has remained unchanged.

In every era, technological advances have strengthened security by improving threat surveillance, enhancing capabilities and projecting influence across borders. In times of war, these advances have been instrumental in achieving maximum operational readiness. A prime example was the beginning of the modern era, when the introduction of firearms, naval guns and instrument navigation changed the shape of conflict. In this context, Carl von Clausewitz emphasized that force is equipped with the inventions of art and science to counter the opposing force.

During the 20th century, the capabilities, purposes and strategies of nations were calibrated, at least in theory, toward a balance of power. Despite this, the calibration of strategic means and ends spiraled out of control. The technologies used to ensure security and defense multiplied and acquired greater destructive potential, while the methods used to achieve defined objectives became more elusive.

Today, the irruption of cyber capabilities and GAI has added extraordinary levels of complexity to these calculations. The utility of cyber systems lies largely in their opacity and deniability, operating at times on the ambiguous frontiers of cyberspace: disinformation, intelligence gathering, sabotage and traditional conflict without a recognized doctrine. Each advance, however, has brought with it new vulnerabilities.

The use of GAI risks further complicating the conundrums of modern strategy, beyond human intentionality and even human understanding. The convergence of nuclear weapons with GAI represents a significant danger, as it amplifies conventional, atomic and digital capabilities, making it harder to predict and maintain security relationships between rivals and to contain conflicts. In addition, defensive functions must operate at several levels at once, which makes their implementation indispensable.

The solution to this complexity does not lie in eliminating or disarming existing technological capabilities. Nuclear, cyber and GAI technologies now play an inescapable role in global strategy. If certain countries choose to slow the advance of these tools and reject the implications of their capabilities, the result will not be a more peaceful world but an even more unbalanced one, in which the most formidable strategic capabilities are developed and applied with less respect for the principles of responsibility, democracy and international balance. In this context, both the national interest and the moral imperative recommend that the US not give an inch in these areas – nuclear, cyber and GAI – but strive to consolidate them and continue to lead globally.

At this point in time, prior to the convergence of GAI and quantum computing, humanity faces a security dilemma of an existential nature. If China and the US compete head-to-head to build and dominate a single, perfect and unquestionably dominant GAI, what would the consequences be? According to Henry Kissinger, Craig Mundie and Eric Schmidt, authors of the book Genesis: Artificial Intelligence, Hope, and the Human Spirit,[7] several scenarios are possible:

  1. Humanity will lose control in an existential race between multiple actors immersed in a security dilemma.
  2. Humanity will suffer the exercise of supreme hegemony by a victor not subject to the checks and balances traditionally required to guarantee a minimum of security to others.
  3. There will be no single supreme GAI, but various manifestations of superior GAI at the global level.
  4. The companies that develop and supply GAI will be able to accumulate social, economic, military and political power of a totalitarian nature.
  5. Private corporations could combine their influence in defense matters with their commercial interests, producing conflicts of interest with government institutions.
  6. GAI could reach its greatest relevance, and express itself in a more widespread and lasting way, within non-national structures, such as religious ones.
  7. Uncontrolled access to and open-source use of GAI could enhance criminals' capabilities to perpetrate crimes both in cyberspace and in the physical world.

Conclusions

The progress of GAI is often measured by comparing it to a human being's ability to perform specific tasks, with research focusing on superhuman performance in activities such as language translation or driving. These assessments, however, overlook the fact that the most powerful force lies in coordinating groups of individuals toward shared goals. In this sense, organizations such as companies, armed forces and bureaucracies must harness GAI and the processing of large volumes of data around specific objectives, and it is essential to understand this technology in order to shape it, given that its evolution is constantly transforming the world.

GAI must be used to bring out the best in humanity: to open new avenues for creativity and cooperation and to strengthen the most valuable aspects of human life and relationships, as the ultimate complement to human endeavor, always under democratically defined parameters subject to public debate. Although regulation alone is not enough, it represents an essential first step, one that demands bold measures and a deep understanding of what is at stake in the coming years, in a world where containment seems unattainable and where survival and well-being must coexist with the benefits of GAI.

The lack of control does not oblige us to abandon reason or give up our commitment to action in the real world. Human dignity allows us to recognize the synergy between human beings and GAI, and to accept the need for a posture of faith in facing the challenges ahead, as science advances and reveals ever greater mysteries. In this context, Stuart Russell, an AI expert, points out in his work Human Compatible[8] that it is necessary to develop a new relationship between humans and machines[9] that will make it possible to successfully manage the transformations of the coming decades.[10]

Endnotes

  1. Ajay Agrawal, Joshua Gans, and Avi Goldfarb, “ChatGPT and How AI Disrupts Industries”, Harvard Business Review, 2022, accessed February 27, 2025, https://hbr.org/2022/12/chatgpt-and-how-ai-disrupts-industries.
  2. Carl von Clausewitz, On War, edited and translated by Michael Howard and Peter Paret (Princeton, NJ: Princeton University Press, 1989), 75.
  3. Reid Hoffman and Greg Beato, Superagency: What Could Possibly Go Right with Our AI Future (Authors Equity, 2025).
  4. Peter Hoskins and Imran Rahman-Jones, “Nvidia shares sink as Chinese AI app Spooks Markets,” BBC, accessed February 26, 2025, https://www.bbc.com/news/articles/c0qw7z2v1pgo.
  5. Pedro Huichalaf, “Irruption of Chinese AI models: Technological Cold War”, Huichalaf.cl, accessed February 26, 2025, https://www.huichalaf.cl/irrupcion-de-modelos-de-ia-chinos-guerra-fria-tecnologica
  6. Henry A. Kissinger, Craig Mundie, and Eric Schmidt, Genesis: Artificial Intelligence, Hope, and The Human Spirit (New York: Little, Brown and Company, 2024).
  7. José David Rodriguez, “Petro aseguró que hay ‘feudalismo’ cibernético con las nubes digitales”, Infobae, accessed February 26, 2025, https://www.infobae.com/colombia/2025/01/30/petro-aseguro-que-hay-feudalismo-cibernetico-con-las-nubes-digitales/
  8. Stuart Russell, Human Compatible: AI and the Problem of Control (Allen Lane, 2019).
  9. Mustafa Suleyman and Michael Bhaskar, The Coming Wave: Technology, Power, and The Twenty-First Century’s Greatest Dilemma (Crown, 2023).
  10. Fareed Zakaria, Age of Revolutions: Progress and Backlash from 1600 to the Present (New York: W.W. Norton & Company, 2024).

The ideas contained in this analysis are the sole responsibility of the author, without necessarily reflecting the thoughts of the CEEEP or the Peruvian Army.
