AIGC Use in Military Industry Must Focus on Compliance

Wei Yuejiang

Currently, AI (artificial intelligence) technology is still at an early stage of development, and AI products do not possess the cognitive abilities of the human brain. Where an AI product surpasses human capability in some respect, it is because skilled people have trained it to exceed ordinary human knowledge and skill in that narrow area, and such cases remain isolated. Viewed from the overall development and application of AI, the technology has not freed itself from human control and dependence; it is not an independent entity. Nevertheless, the technology for AI-generated content (AIGC) is showing a comprehensive and explosive development trend.

At present, AI-generated content technology, also known as generative AI, refers to models and related techniques that produce new text, audio, video, charts, or images by searching, scraping, extracting, synthesizing, summarizing, and applying fixed templates to existing data drawn from databases on the internet.

In late November 2022, the U.S. AI research company OpenAI launched the ChatGPT chatbot, which not only represented a breakthrough for the next generation of chatbots but also promised significant changes for the information industry. In February 2024, OpenAI launched the video generation model Sora, which again had a tremendous impact on the media industry and is bound to disrupt the traditional film and television industry. Meanwhile, the promotion and application of these large AI models have gradually exposed problems such as academic fraud, technological abuse, public opinion safety, and copyright disputes. Disputes over AI-related corporate infringement have already arisen both in China and abroad, and the relevant authorities have found them difficult to adjudicate.

In February 2024, the Guangzhou Internet Court issued a ruling in a case in which a generative AI service infringed another party's copyright, dubbed by the media "the world's first AIGC platform infringement case." The ruling confirmed that the AI text-to-image platform involved had infringed the plaintiff's rights of reproduction and adaptation, and it clarified the obligations that generative AI service providers must fulfill when offering such services.

The plaintiff in the case was a company; the defendant was an AI company. The plaintiff discovered that the defendant's website offered AI dialogue and AI painting functions, and that the Ultraman images it generated were substantially similar to the Ultraman images for which the plaintiff held exclusive authorization and independent enforcement rights, so the plaintiff filed suit.

Media reports stated that the plaintiff initially sued over a dispute of liability for online infringement rather than copyright infringement, but on February 1 it submitted an "Application for Change of Litigation Request," converting the case into a copyright infringement dispute. The court found that the Ultraman works at issue enjoyed wide recognition and could be accessed, viewed, and downloaded on major video websites such as iQIYI; absent specific and obvious evidence to the contrary from the defendant, it was plausible that the defendant had had access to the Ultraman works involved.

As the first infringement case arising from AIGC-generated works, it reminds us that applying AIGC requires more than supplementing and improving the traditional copyright system (for example, defining whether AI-generated products can qualify as works, and determining whether responsibility for copyright disputes over AI-generated products rests with the manufacturer, the user, or the service provider); it is also essential to strengthen intellectual property protection in AIGC applications and to ensure reasonable use.

Why did the AIGC in this case infringe? After reading several reports on the case, the root cause comes down to the content of the data used to train the AIGC. AIGC is a thing, not a person. Although it does not eat, it must be trained and taught by humans, much as a kindergarten teacher educates preschool children; only through training on targeted data products can it automatically generate corresponding products. In other words, if you train AIGC on internet data products, it will generate products that are similar to, or only slightly altered from, content found on the internet. And although the internet is open and shared, that does not mean its content is free to use (some works declare that they may not be reproduced without permission; others may be reproduced or downloaded only after an agreement is signed). Moreover, the authenticity of internet content is often hard to judge, mixing compliant reproductions with non-compliant alterations, adaptations, and rewrites. Using AIGC-generated products therefore inevitably leads to copyright disputes or questions of content authenticity.
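This point can be made concrete. Below is a minimal, hypothetical sketch in Python of one way a service provider might screen generated output against a registry of protected works before release. The registry contents, the similarity measure (a simple sequence match), and the threshold are all assumptions for illustration, not the method used by any party in the case.

```python
# Illustrative sketch only: screen AIGC output against a registry of
# protected works and hold anything "substantially similar" for legal review.
# Registry entries and the 0.80 threshold are hypothetical.

from difflib import SequenceMatcher

# Hypothetical registry of license-protected reference texts.
PROTECTED_WORKS = {
    "work-001": "Ultraman is a giant hero of light who defends Earth from monsters.",
    "work-002": "An original character description owned by another rights holder.",
}

SIMILARITY_THRESHOLD = 0.80  # assumed cut-off; a real system would tune this


def flag_similar_outputs(generated: str) -> list[tuple[str, float]]:
    """Return (work_id, similarity) pairs whose similarity exceeds the threshold."""
    flags = []
    for work_id, reference in PROTECTED_WORKS.items():
        ratio = SequenceMatcher(None, generated, reference).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            flags.append((work_id, ratio))
    return flags


if __name__ == "__main__":
    output = "Ultraman is a giant hero of light who defends Earth from monsters."
    for work_id, score in flag_similar_outputs(output):
        print(f"HOLD FOR REVIEW: output resembles {work_id} (similarity {score:.2f})")
```

Real platforms would need far more robust measures (image similarity, embedding comparison, human review), but the principle is the same: what the model was fed determines what it must be checked against.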

Given the particular nature of the military industry, AIGC in the military field should "eat military rations" rather than "mixed feed." The urgent task now is for the various AIGC large models launched in the military sector to meet regulatory requirements for product registration and online filing (covering the model type, the algorithms involved, data transmission, and so on). Most important of all, the training data for AIGC must be compliant and legal: the model should consume proprietary or purchased data products and avoid unauthorized use. For now, the most practical path runs through in-house expertise. Companies producing AI large models for the military field should establish their own military expert think tanks and train AIGC on data products created by their own military experts (employees of the company) or on data products licensed through agreements with third parties; this is the best strategy for ensuring safety, reliability, and high trustworthiness.
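To illustrate the "military rations, not mixed feed" idea, here is a minimal sketch, assuming each training document carries provenance and licensing metadata. The field names, the allow-list, and the workflow are hypothetical; the point is only to show how a training corpus could be restricted to self-produced or contractually licensed data.

```python
# Illustrative sketch: admit only documents with verified provenance and
# licensing into the training corpus. Field names and sources are hypothetical.

from dataclasses import dataclass

ALLOWED_SOURCES = {"in_house_experts", "licensed_third_party"}  # "military rations"


@dataclass
class TrainingDoc:
    doc_id: str
    source: str        # who produced the data
    license_ok: bool   # covered by an authorization agreement?
    text: str


def build_compliant_corpus(docs: list[TrainingDoc]) -> list[TrainingDoc]:
    """Keep only documents with verified provenance and licensing; log the rest."""
    corpus, rejected = [], []
    for doc in docs:
        if doc.source in ALLOWED_SOURCES and doc.license_ok:
            corpus.append(doc)
        else:
            rejected.append(doc.doc_id)
    if rejected:
        print(f"excluded {len(rejected)} unlicensed document(s): {rejected}")
    return corpus


if __name__ == "__main__":
    docs = [
        TrainingDoc("d1", "in_house_experts", True, "expert-written doctrine note"),
        TrainingDoc("d2", "web_scrape", False, "text scraped from the open internet"),
    ]
    corpus = build_compliant_corpus(docs)
    print(f"training on {len(corpus)} compliant document(s)")
```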

In addition, when military enterprises purchase and use AIGC, they should first ask whether the data the AIGC is "eating" is compliant and legal, and what will happen if copyright disputes arise during use when it is not. Media reports noted that in the case above, the defendant argued in court that it had made no subjective or objective unauthorized use of the plaintiff's protected works to train a large model or to generate substantially similar images; it claimed that the website's AI painting function was implemented through a third-party service provider and had nothing to do with the defendant, submitting as evidence an "Order Agreement" with that provider along with relevant screenshots.

On the details of the case and its reasoning, the Guangzhou Internet Court noted that the "Interim Measures for the Management of Generative Artificial Intelligence Services," which took effect on August 15, 2023, is China's first special regulation on generative AI research and services. Under Article 22, Item 2 of the Measures, and on the basis of the defendant's own statements, the defendant qualifies as a generative AI service provider. The court therefore rejected the defense that the AI painting service was provided by a third-party service provider and that the defendant bore no responsibility.

2024 is a crucial year for the development and application of AI, and also a significant year for its risks and challenges: AI may reshape employment structures, strain law and social ethics, infringe on personal privacy, and challenge the norms of international relations, with profound effects on government administration, economic security, social stability, and even global governance. The militarization of AI has drawn widespread attention from the international community.

The launch of the large language model ChatGPT showed that such models can help military planners better understand the operational environment and reason about time, space, and forces. In August 2023, the Pentagon formally established a generative AI task force named "Lima" to play a key role in analyzing and integrating generative AI tools, including large language models, across the Department of Defense. The task force operates under the department's Chief Digital and Artificial Intelligence Office and is responsible for assessing, synchronizing research and development on, and applying generative AI capabilities throughout the department, ensuring that it stays at the cutting edge of AI while safeguarding national security. In November 2023, the Department of Defense released its "2023 Data, Analytics, and Artificial Intelligence Adoption Strategy," drafted by the Chief Digital and Artificial Intelligence Office, to ensure that U.S. combat personnel retain a decision-making advantage on the battlefield in the years ahead.

In January 2024, OpenAI publicly stated that it was revising its blanket prohibition on using its large language models for military or war-related applications, softening the wording of its ban on military use of its AI technology. The move heightened concerns about accelerating, deepening cooperation between American tech giants and the military. Around the same time, the Department of Defense launched its first digital exercise focused on the risks of generative AI systems, using hands-on drills to identify unknown threats in large language models, including chatbots, which are trained on diverse text data and return the most probable outputs or decisions for a given deployment context. The first phase began in January 2024 and ended in February; a second phase is to be announced. A volunteer team from Scale AI, a commercial AI company working with the Department of Defense, adapted an exercise hosted by the U.S. Marine Corps School of Advanced Warfighting and built an experimental large language model for military planning named "Hermes." The Pentagon has established a dedicated agency to analyze how generative AI can be used in the defense sector. OpenAI is reportedly collaborating with the Pentagon on software development, including cybersecurity-related projects. Although OpenAI has ruled out directly developing weapons, its new policy may allow it to supply AI software to the Department of Defense to help analysts interpret data or write code. As the Russia-Ukraine conflict has shown, the boundary between data processing and warfare may not be as clear as OpenAI hopes: Ukraine has developed and imported big data analysis software that lets its artillery crews receive rapid notifications of Russian targets in their area, significantly speeding up their rate of fire.

Military enterprises are the vanguard of AI technology development and application in China's military field. As a saying often heard on CCTV Channel 1's daily news broadcast puts it, "Protecting intellectual property is protecting innovation." The future AI arms race will be fought in the military sector, and one of its battlegrounds will be the competition over intellectual property, which ultimately comes down to competition over original products. In this era of explosive digital technology, we must value original products, protect original products, and demand original products more than ever before.

(Author: Wei Yuejiang, Military Commentator.)

(When forwarding this article from the Shanghai Military-Civil Integration Development Research Association public account, please indicate the source.)
