
Author Introduction

Tai Long Zou, Special Researcher at the Postdoctoral Research Station of Education at Guangzhou University, Associate Professor and Master’s Supervisor at the College of Teacher Education, Hubei Minzu University
Qin Jing Xiong (Corresponding Author), Special Researcher at the Postdoctoral Research Station of Education at Guangzhou University, Email: [email protected]
Note: This article is an extended abstract of the original text.


1. Academic Misconduct Risks Induced by Generative AI

Amid the rapid development and widespread adoption of related technologies, the powerful capabilities of large language models such as ChatGPT are increasingly applied to key stages of academic work, including literature review, hypothesis generation, discussion, and paper writing. Because of the technology's complexity, researchers find it difficult to judge the authenticity and credibility of AI-generated academic texts, which significantly increases the risk of academic misconduct.
(1) Algorithmic Ambiguity and Plagiarism Detection
From a technical standpoint, generative AI rests on large-scale pre-trained deep learning models that generate text in response to context and user interaction. In academic use, however, this seemingly personalized mode of content generation conceals a risk of plagiarism. Users who intend to copy existing academic work may employ generative AI to rewrite it. Moreover, although the training corpus of generative AI is vast and diverse, the model neither cites nor indicates the original sources of the data it learns from and reuses, which can lead to unintentional plagiarism.

(2) Task Orientation Leading to Fabrication for Conformity
Large language models are task-oriented: through interaction and dialogue they simulate the thinking and behavior of human language users in order to understand users' needs and intentions and carry out specific natural language processing tasks. In some cases, however, generative AI may produce incorrect outputs, intentionally or unintentionally, to cater to user demands. Unintentional errors stem mainly from confusion and bias in the training data of large language models. Intentional errors arise because task orientation can lead the AI to fabricate responses that appear coherent but are in fact nonsensical, containing errors of fact, data, and statement. If researchers uncritically incorporate such outputs into academic texts, they inevitably commit "fabrication", a form of academic misconduct.

(3) Controversies Over Improper Authorship Due to Human-AI Interaction
On the traditional view, only natural persons can be authors holding copyright, and AI-generated content cannot constitute a work in the legal sense. Some scholars, however, argue that AI-generated works can be deemed to possess original value. If such outputs are used in academic publication, the research text may be treated as the author's original work even though the author lacks a substantial research contribution, creating a risk of improper authorship. Even when the author supplies genuine perspectives and contributions, the authorship of the generated text remains contested, because it is difficult to disentangle the respective contributions of the user and the AI in a collaboratively produced academic work.
(4) Ethical Concerns Induced by Big Data Training
In academic use, generative AI operates on large volumes of research texts and data and requires real-time dialogue with users. User input may be stored and transmitted; if this information is misused or leaked, it can seriously damage researchers' intellectual property rights and personal privacy, and it may even be maliciously altered by hackers, causing AI products to generate misleading research texts and undermining the credibility of the academic field. Moreover, because system training often relies on biased data, it is vulnerable to manipulation and distortion, raising ethical concerns such as social stereotyping, unfair discrimination, and exclusionary normative effects.

2. Governance Logic of Academic Use of Generative AI


The various forms of academic misconduct induced by generative AI pose a significant challenge to academic work, but we should pursue proactive governance rather than simply avoid the risks through a blanket "ban on use".
(1) Governance Substance: Promoting Innovation in Academic Knowledge Rather Than Replication
Realizing artificial intelligence means converting the rational thought of the human brain into computation, which, however complex, must be completed in a finite number of steps. The value and significance of academic research, by contrast, lie in continually breaking through the limits of human cognition, approaching the infinite, reflecting on wholeness, and exploring uncertainty. The "originality" currently attributed to generative AI is in fact achieved through techniques and programs of combination and association. True creativity cannot be reduced to free combination and arrangement; it lies in the ability to pose new questions, reframe old ones, break existing patterns of thought, and re-establish rules and methods. We must therefore treat the academic use of generative AI with caution, ensuring that AI enhances rather than replaces research capability in academic work.
(2) Governance Principles: Ensuring Human Subjectivity and Controllability of Tools
The starting point of AI governance should be to treat AI as a tool and as an object of regulation, weighing its beneficial and harmful applications, rather than allowing machines to decide and act autonomously in situations humans do not understand or cannot control. For academic use, the basic governance principle for generative AI should uphold human values and dignity, insist on human subjectivity, and ensure the subordinate status and controllability of the tool. Governance should focus on transforming the unknown states in current AI computation into known, controllable, ordered states, increasing the transparency of the technology's academic use and improving its interpretability.
(3) Governance Forms: Stimulating the Inherent Governance Effectiveness of Technology
In traditional systems for governing academic misconduct, the objects and means of governance are separate. In the era of digital intelligence, however, the means of governance are themselves deeply shaped by technology, so empowering governance with technology becomes a key aspect of academic governance. This also means that while AI brings new misconduct risks to the academic field, it creates new opportunities for governance. The formulation, adjudication, and enforcement of relevant regulations and systems cannot do without AI's technical assistance, which raises the level and efficiency of governance, realizes the inherent governance effectiveness of the technology, and thereby balances the efficiency and safety of generative AI in academic use.

3. Governance Strategies for Academic Misconduct in the Era of Generative AI


As generative AI penetrates academic activity through cross-domain, situated applications, academic misconduct has mutated and escalated, making it difficult to rely solely on traditional expert evaluation methods.
(1) Leading the Academic Ecosystem with Human Wisdom
To address the potential academic misconduct brought by generative AI, many academic and publishing institutions have begun adopting AI-based detection tools. Relying on technical detection alone, however, is insufficient, because content generation and content detection rest on the same algorithms and rules, and detection can be evaded simply by selectively altering certain rules. More importantly, although AI can produce high-quality human-language text, this does not mean the text possesses academic value or a genuine understanding of the real world and human life. The intelligent technologies used to strengthen governance of academic misconduct therefore depend on human wisdom that AI cannot render in algorithmic form.
(2) Achieving a Tripartite Approach of Prevention, Constraint, and Accountability
Corresponding to the three stages at which academic misconduct occurs, governance strategies can be divided into three parts: early prevention, mid-course constraint, and after-the-fact accountability, and the three must be coordinated to respond effectively to risk. Prevention is the premise of averting academic misconduct and proceeds mainly through education and training, guiding researchers to understand research norms, master the basics of academic ethics, and recognize the opportunities and challenges the digital intelligence era brings to the research environment. Constraint introduces appropriate supervision mechanisms into researchers' processes of academic research and writing to encourage compliance with academic norms and standards. Accountability and handling form the final link in the governance process: big data correlation analysis can quickly trace responsible parties, and technologies such as blockchain can permanently record researchers' academic misconduct, erecting a psychological barrier against misconduct in researchers' minds.
(3) Conducting Key Governance Based on the Technology Lifecycle
For any emerging technology, development opportunities and potential risks form a pair of principal contradictions running through its lifecycle. The academic use of generative AI is still at a relatively early stage, yet beneath its apparent "uniqueness", the efficiency and pass rate of its generated text far exceed those of humans. This means AI's adverse impact on academic writing is most pronounced in routine educational tasks, so students' use of generative AI to write papers should be a key focus of current governance efforts. In addition, during this stage of rapid development, generative AI technologies objectively require more flexible regulations and systems to gain greater room for development and release more of their potential value. Governance grounded in the professional ethical norms of the academic community is therefore better suited to the current academic field than rigid legal regulation.


[Funding Project] National Social Science Fund Post-Funding Project “Research on the Big Data Empowerment and Realization Path of Moral Education Reform in Universities” (Project Number: 22FKSB038).
[Cited Original Text] Zou Tai Long, Xiong Qin Jing. Risks and Governance of Academic Misconduct in Generative AI Usage [J]. Journal of Jinan University (Social Science Edition), 2024, 34(06): 163-167.








