Reprinted from Military Digest WeChat public account
ChatGPT is an AI-driven text interaction tool launched by the American company OpenAI; it marks a significant breakthrough in generative AI and natural language processing.
The “Autonomy” Feature of Generative AI
With the explosive popularity of ChatGPT, the generative AI technology behind it has gradually entered the public eye. On April 11, 2023, the Cyberspace Administration of China released the “Regulations on the Management of Generative AI Services (Draft for Comments)”, defining generative AI as “a technology that generates content such as text, images, sound, video, and code based on algorithms, models, and rules.”
Compared with traditional artificial intelligence, generative AI models represented by ChatGPT possess the feature of "autonomy." ChatGPT's "autonomy" is reflected in its ability to "self-program": OpenAI, the developer of ChatGPT, had already demonstrated mature coding capabilities in its models as early as July 2021, well before the release of GPT-3.5. "Self-programming" is a milestone in the history of computing; it signals that generative AI is breaking free of the "shackles of code" and moving toward full autonomy. The "autonomy" of generative AI models is also evident in well-known applications such as AI painting and intelligent text generation: AI-generated images have reached photographic levels of realism in light and shadow, depth of field, and color contrast.
ChatGPT has developed mature coding capabilities
In essence, generative AI may already possess a rudimentary form of human-like consciousness. For example, GPT-4, released in March 2023, can not only recognize the content of images but also reason about the elements within them in ways consistent with human subjective judgment. Shown an image of a child holding a balloon and asked what would happen if the string were cut, GPT-4 can accurately respond, "The balloon will fly away." Scholars have pointed out that artificial intelligence is not a discrete technology like a fighter jet or a locomotive, but a general enabling technology akin to electricity, the computer, or the internal combustion engine. That characterization may have been premature for early AI built on line-by-line code execution, but it fits generative AI, whose "autonomy" sets it apart.
The Possibility and Risks of Generative AI in Warfare
The Possibility of Generative AI in Warfare
As a "general artificial intelligence technology," generative AI has broad application prospects in warfare.
First, relying on its powerful data-integration and information-synthesis capabilities, generative AI is expected to become a scientific decision-support tool, or even an independent decision-making system.
Second, the unique advantages of generative AI in text, image, and audio-visual processing will empower the entire process of intelligence collection, collation, and analysis. Through data training, for instance, generative AI can discover potentially valuable fragments of information scattered across vast amounts of raw data, assess their value through logical analysis, and ultimately integrate them into standardized intelligence texts. As early as April 2017, the U.S. Department of Defense formed an Algorithmic Warfare Cross-Functional Team to study how to rapidly and autonomously identify valuable intelligence in the millions of hours of video captured by drones such as the ScanEagle and the MQ-9 Reaper.
Third, the "open interface" model and platform attributes of generative AI open the way for traditional or automated weapons to become autonomous and intelligent. The ongoing U.S. Loyal Wingman program, for example, aims to raise the intelligence level of traditional crewed aircraft by teaming them with advanced drones that scout ahead for and escort the crewed platforms.
Fourth, generative AI enhances human-machine collaborative combat, significantly improving combat efficiency and reducing casualties. In December 2015, for example, the Russian military deployed a combat group composed primarily of unmanned platforms to support Syrian government forces in taking Hill 754.5, which they had long besieged without success: one "Andromeda-D" automated command system, six "Platform-M" tracked combat robots, four "Dark Speech" wheeled combat robots, and a self-propelled artillery group. The hill was captured in just over 20 minutes, with 77 enemy soldiers killed, creating a new model of human-machine hybrid combat.
Fifth, the strategic value of generative AI in emerging combat domains should not be underestimated. Its "self-programming" ability, for instance, makes swarm and saturation network attacks possible. Generative AI can also use deepfake technology to produce and disseminate false text, images, and audio-visual content, enabling cognitive-domain operations. In addition, generative AI plays a distinctive role in equipment and logistics support, training exercises, and battlefield medical rescue.
The Russian military captured Hill 754.5 in just over 20 minutes
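To make the intelligence-processing workflow described in the second point above more concrete, the following is a minimal, purely illustrative sketch: the data fields, keywords, scoring rule, and threshold are all hypothetical and do not describe any real system; the final step merely stands in for where a generative model might draft a standardized intelligence text.

```python
# Purely illustrative "filter -> assess -> summarize" intelligence triage sketch;
# all fields, keywords, and thresholds below are hypothetical.
from dataclasses import dataclass


@dataclass
class RawReport:
    source: str          # e.g. drone video annotation, intercepted message
    text: str            # free-form content extracted from the raw data
    reliability: float   # 0.0-1.0 estimate of source reliability


KEYWORDS = {"launcher", "convoy", "command post"}  # hypothetical indicators


def score(report: RawReport) -> float:
    """Crude relevance score: keyword hits weighted by source reliability."""
    hits = sum(1 for kw in KEYWORDS if kw in report.text.lower())
    return hits * report.reliability


def triage(reports: list[RawReport], threshold: float = 0.5) -> list[RawReport]:
    """Keep only reports whose score clears the threshold, highest first."""
    kept = [r for r in reports if score(r) >= threshold]
    return sorted(kept, key=score, reverse=True)


def summarize(reports: list[RawReport]) -> str:
    """Stand-in for the generative step that would draft a standard intel text."""
    lines = [f"- [{r.source}] {r.text}" for r in reports]
    return "INTELLIGENCE SUMMARY (draft)\n" + "\n".join(lines)


if __name__ == "__main__":
    raw = [
        RawReport("UAV-feed", "Possible missile launcher near bridge", 0.8),
        RawReport("open-source", "Weather expected to clear tomorrow", 0.9),
    ]
    print(summarize(triage(raw)))
```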
The Risks of Generative AI in Warfare
Generative AI's participation in warfare is all-encompassing, three-dimensional, and multi-faceted, and its deep-learning and self-programming features allow that participation to evolve at an astonishing speed, potentially developing into a "terrifying fusion" that could alter the course of warfare and even affect human civilization.
First, the unconstrained content generation of generative AI easily leads to the proliferation of false information, thickening the fog of war. Recruiting potential supporters for terrorist activities through social media platforms such as Twitter, Facebook, YouTube, and Instagram has already become an important avenue for organizations like the Islamic State to expand their influence, and generative AI will further amplify terrorist propaganda, leading to a flood of terrorist information.
Second, the self-programming of generative AI lowers the threshold for cyber attacks, threatening national cyber sovereignty. Hackers and other malicious actors can exploit ChatGPT's language-generation capabilities to carry out a range of attacks, such as generating malware and phishing emails or conducting credential-stuffing attacks.
Third, while generative AI frees intelligence personnel from the tedious "intelligence front line," it can easily generate false intelligence and lead to erroneous decisions. In 1983, for instance, the Soviet Oko missile early-warning system issued an attack warning indicating that five intercontinental ballistic missiles were flying from the United States toward the Soviet Union. Had the system's preset procedure been followed, the Soviet Union would have launched a nuclear counterattack. Fortunately, Stanislav Petrov, the duty officer at the early-warning command center, judged the alert to be a system malfunction, ultimately averting the risk of all-out nuclear war.
Fourth, the involvement of generative AI in warfare decision-making systems may lead to the dehumanization of lethal decisions, fundamentally challenging humanity's dominance over warfare.
Based on this, studying the mechanisms and legal issues of generative AI’s role in warfare and proposing practical regulatory suggestions is of urgent real-world significance.
The Legal Issues of Generative AI in Warfare
Legal Status: The Nature of Generative AI's Role in Warfare
The question of the nature, or legal status, of artificial intelligence in warfare involves two positions: the "combatant subject theory" and the "technical tool theory." The combatant subject theory holds that artificial intelligence is a legitimate combatant with autonomous decision-making and execution capabilities; if it violates the law, it is assigned a "personality" similar to that of a combatant in order to determine its qualification as a criminal subject, its subjective fault, and its criminal liability. The New York Times once published a dialogue with ChatGPT in which ChatGPT stated that it is not human but desires to become human. In 2017, Saudi Arabia granted citizenship to the robot Sophia at the Future Investment Initiative conference in Riyadh, setting the precedent for a robot obtaining citizenship of a country. It must be noted, however, that military practice differs from ordinary social practice, and granting generative AI independent combatant status raises numerous unresolved issues. First, generative AI lacks the rationality and emotions unique to humans, making it difficult to "personalize"; second, granting generative AI combatant status may create accountability dilemmas and a "war responsibility vacuum"; third, the combatant subject theory struggles to account for human-machine collaborative combat methods in joint operations.
The robot Sophia obtaining Saudi Arabian citizenship
The technical tool theory holds that artificial intelligence is ultimately a tool by which humans pursue war objectives; when generative AI is applied in warfare, it can be treated as a new method or means of combat. The technical tool theory has a certain validity for traditional "autonomous weapons" built on line-by-line code. Traditional autonomous weapons, such as intelligent defensive weapons, unmanned aerial vehicles, and robotic sentries, never really break out of the OODA (Observe-Orient-Decide-Act) loop: their "autonomy" depends on the scope of authority granted by their code, and their selection of targets likewise comes from presets in that code, as the sketch below illustrates. The most significant distinction between intelligent weapons built on generative AI and such autonomous weapons is precisely "autonomy": through information integration and self-programming they break the "shackles of code" and achieve substantive autonomy as combat weapons. Under extensive information training, generative autonomous weapon systems develop, in complex battlefield environments, a set of combat thinking and paradigms independent of their source code. In short, simply categorizing generative AI as a technology or tool is problematic.
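The following is a minimal, hypothetical sketch of the code-bounded OODA loop described above; the class names, track fields, target catalogue, and engagement rule are invented for illustration and are not drawn from any real weapon system. The point is structural: the loop can never exceed the authority its code enumerates, whereas a self-programming generative system would not be bound in this way.

```python
# Hypothetical sketch of a code-bounded OODA (Observe-Orient-Decide-Act) loop;
# all names and rules here are invented for illustration only.
from dataclasses import dataclass

ALLOWED_TARGET_TYPES = {"incoming_missile", "hostile_drone"}  # preset by developers


@dataclass
class Track:
    track_id: int
    classified_type: str
    inside_engagement_zone: bool


def observe(sensor_tracks: list[Track]) -> list[Track]:
    """Observe: take in the raw tracks supplied by the sensor suite."""
    return sensor_tracks


def orient(tracks: list[Track]) -> list[Track]:
    """Orient: keep only tracks matching the preset target catalogue."""
    return [t for t in tracks if t.classified_type in ALLOWED_TARGET_TYPES]


def decide(candidates: list[Track]) -> Track | None:
    """Decide: pick a target only if the hard-coded engagement rule is met."""
    for target in candidates:
        if target.inside_engagement_zone:  # rule fixed at design time
            return target
    return None  # otherwise no engagement; the code grants no wider authority


def act(target: Track | None) -> str:
    """Act: report the engagement decision (stands in for firing an effector)."""
    return f"engage track {target.track_id}" if target else "hold fire"


if __name__ == "__main__":
    tracks = [
        Track(1, "civilian_aircraft", True),
        Track(2, "hostile_drone", True),
    ]
    # The loop cannot rewrite ALLOWED_TARGET_TYPES or its own engagement rule.
    print(act(decide(orient(observe(tracks)))))  # -> "engage track 2"
```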
Legal Principles: The Challenges of Generative AI to the Principles of the Law of War
The core of the law of war is to reconcile the contradiction between "military necessity" and "humanitarian requirements" in armed conflict, to mitigate the disasters of war, and to avoid unnecessary suffering. The involvement in warfare of artificial intelligence, especially generative AI with "human-like consciousness," will fundamentally alter modes of warfare and rules of engagement, with disruptive effects on the principles of distinction and proportionality and other foundational principles of the law of war.
Generative AI's critical dependence on databases and massive training makes the principle of distinction difficult to apply. The development of both human intelligence and artificial intelligence is essentially an evolutionary process of continuously adapting to and processing complex cognitive activity, and only when the flow of information reaches a sufficient scale can thinking ability advance. On the one hand, generative AI is prone to cognitive and targeting deviations caused by data blockages and barriers. Data is the lifeblood of generative AI: for it to accurately distinguish an enemy missile launcher from a civilian passenger car on the battlefield, the launcher's basic shape, color, and other parameters must be fed into the system for extensive training. In reality, the parameters of a country's weaponry are military secrets, and battlefield camouflage makes such distinctions harder still. On the other hand, the gap between training environments and actual combat environments can render generative AI ineffective, or cause it to lose control, when applied in warfare. Self-driving cars may simulate millions of kilometers of driving before entering service, yet all manner of unexpected incidents still occur on real roads. Generative AI cannot obtain a training environment consistent with battlefield conditions, and the complex, changeable battlefield, together with enemy "deception tactics," compounds the risk that it will lose control.
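As a purely illustrative sketch of the data dependence just described (the features, threshold, and labels are hypothetical and do not describe any real recognition model), consider a toy classifier whose notion of "missile launcher" is only as good as the parameters available at training time; battlefield camouflage pushes the real target outside that distribution and the distinction silently fails.

```python
# Toy illustration of why the principle of distinction hinges on training data;
# all features, thresholds, and labels are hypothetical.
from dataclasses import dataclass


@dataclass
class VehicleTrack:
    length_m: float
    width_m: float
    tubes_visible: bool  # launch tubes are easily hidden under camouflage netting


def classify(track: VehicleTrack) -> str:
    """Rule distilled from whatever launcher parameters were available in training."""
    if track.length_m > 10.0 and track.tubes_visible:
        return "military: missile launcher"
    return "civilian vehicle"


# A camouflaged launcher with covered tubes falls outside the training
# assumptions and is misread as civilian.
camouflaged_launcher = VehicleTrack(length_m=12.0, width_m=3.2, tubes_visible=False)
print(classify(camouflaged_launcher))  # -> "civilian vehicle"
```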
Generative AI can make only "purely rational" judgments about what best serves military objectives; it lacks the capacity for value judgment, which makes it difficult for it to observe the principle of proportionality. Most current artificial intelligence systems are preset with moral guidelines and basic ethics, but such static, enumerative presets often fail in the face of complex value judgments. For example, the law of war prohibits attacks on works and installations containing dangerous forces (such as dams and nuclear power plants) even when they are military objectives, because such attacks may release those dangerous forces and cause large-scale humanitarian disasters. At the same time, the law of war specifies special circumstances in which such targets may be attacked, namely "if the object is used for purposes other than its normal function and is providing regular, significant, and direct support to military operations, and if such an attack is the only feasible way to terminate that support." The core issue in this supplementary provision is value judgment, which generative AI struggles to capture precisely in mechanical programming language, potentially leading to indiscriminate attacks that disregard the principle of proportionality and produce horrific humanitarian disasters.
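A hypothetical sketch of why static, enumerative presets fall short here (the rule table and predicate names below are invented, not drawn from any deployed system): the prohibition itself can be enumerated, but the exception turns on open-ended value judgments that cannot be reduced to a lookup.

```python
# Hypothetical sketch: the prohibition on attacking works containing dangerous
# forces is easy to enumerate, but the exception requires value judgment.
PROTECTED_OBJECTS = {"dam", "dyke", "nuclear power station"}  # enumerable rule


def attack_prohibited(object_type: str) -> bool:
    """Static rule: do not attack works and installations containing dangerous forces."""
    return object_type in PROTECTED_OBJECTS


def exception_applies(used_outside_normal_function: bool,
                      support_is_regular_significant_direct: bool,
                      attack_is_only_feasible_way: bool) -> bool:
    """The exception's inputs are themselves value judgments that cannot be
    derived mechanically from sensor data: what counts as 'significant' support,
    or the 'only feasible way' to end it?"""
    return (used_outside_normal_function
            and support_is_regular_significant_direct
            and attack_is_only_feasible_way)


# The Boolean structure is trivial; the hard part is filling in the Booleans.
print(attack_prohibited("dam"))              # True: enumerable
print(exception_applies(True, True, False))  # inputs require human judgment
```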
Legal Responsibility: The Responsibility Issues of Generative AI in Warfare
The responsibility question around generative AI first arose in the field of copyright. Can text and paintings generated by ChatGPT through algorithms be regarded as its own "works"? Does ChatGPT's scraping of underlying data constitute infringement? How should the responsibility of operators or users be apportioned across the whole process? According to a report submitted to the United Nations Security Council, in March 2020 a Kargu-2 military drone produced by Turkey's STM company, operating on the battlefield in Libya, tracked and attacked retreating fighters of the "Libyan National Army" without relying on an operator, in what has been described as the first instance of an autonomous weapon killing a human in autonomous mode. Such cases show the urgency of examining the legal responsibilities that generative AI may trigger.
The Kargu-2 military drone
The "self-programming" capability of generative AI may produce a "responsibility vacuum" for war crimes. Some scholars have suggested that only a rational artificial intelligence that satisfies the condition of will, can distinguish itself from others, and is capable of self-reflection may have the potential to bear responsibility. Generative AI displays typical "personalized" characteristics; the latest iteration, GPT-4, for example, can understand memes and nuanced images and even accurately identify the humor in them as a human would. Such cases suggest that we might be able to hold generative AI itself accountable for war crimes. Yet we must ask what legal or practical significance there is in punishing a machine or an operating system. In military practice, granting generative AI legal personhood may allow it to be used as a tool for evading war responsibility, ultimately producing a "responsibility vacuum" in which no one is held accountable.
The "autonomous execution rights" of generative AI will disrupt the causal chain of war responsibility. Its "self-programming" capability makes "self-execution" possible. As regards developers, the "platform" character of generative AI means their conduct does not constitute a crime unless illegal content was written into the original program files. Holding weapon users accountable has some justification for automatic or autonomous weapons; but with generative intelligent weapons, the user's control over the weapon is greatly diminished, and the causal relationship between the user and the unlawful consequences the weapon produces changes qualitatively. Attacks by generative intelligent weapons are not necessarily a natural extension of the user's actions.
Conclusion
The enormous potential of generative AI across industries has spurred explosive growth in the sector. Shortly after OpenAI launched ChatGPT, Google introduced its conversational AI service Bard, Baidu launched the Wenxin Yiyan chatbot, and iFlytek released its Spark large model. This rapid development has also triggered a global wave of caution about AI. In late March 2023, more than a thousand technology leaders and researchers jointly published an open letter calling for a pause on giant AI experiments, arguing that "AI systems with human-competitive intelligence can pose profound risks to society and humanity." They recommended that all AI labs immediately pause the training of systems more powerful than GPT-4, and called on AI developers and policymakers to work together to dramatically accelerate the development of robust AI governance systems.
Revolutions in military technology tend to produce larger and broader changes in warfare, and such drastic, leapfrogging change will upend existing forms of war. What we now see of generative AI's role in warfare is only the tip of the iceberg. Generative AI has broken through the "shackles of code" and may even possess "human-like consciousness," yet its underlying logic remains code without emotional warmth or value judgment. Its capacity for near-costless replication and continuous evolution has fed fears that "carbon-based life will become a stepping stone for silicon-based life." Bringing generative AI's role in warfare under comprehensive control, including legal control, is therefore an urgent practical requirement for meaningful human control.