Exploring AIGC Academic Ethics: Triple Constraints of Government, Publishing, and Universities

In the wave of digitalization, AIGC (Artificial Intelligence Generated Content) technology is a double-edged sword: it brings unprecedented convenience and efficiency while also triggering wide-ranging discussion about academic integrity and ethical responsibility. To make full and compliant use of this technology, we need to look closely at the relevant norms and principles in three key areas: government agencies, the publishing industry, and educational institutions.

Part 1

Government Agencies: Setting Boundaries, Guiding Consensus

On September 7, 2023, UNESCO released the “Guidelines for the Application of Generative Artificial Intelligence in Education and Research”[1], which defines generative artificial intelligence and discusses the controversies it raises and its impact on education, particularly the risk of widening the digital divide. The guidelines set out key steps governments should take to regulate generative artificial intelligence and recommend establishing a policy framework to ensure that it is applied ethically in education and research. They also recommend a minimum age of 13 for the use of AI tools and call for corresponding training for teachers.

In December 2023, the Supervision Division of the Ministry of Science and Technology and the National Natural Science Foundation of China released the “Guidelines for Responsible Research Conduct (2023)”[2] and the “Research Integrity Standards Handbook”[3], respectively, clarifying norms for the use of AIGC in research topic selection and implementation, data management, authorship, and other activities, and emphasizing that AI tools should be used cautiously in review and evaluation activities. An excerpt from the “Guidelines for Responsible Research Conduct (2023)” follows:

I. Research Topics and Implementation:

(1) Researchers

2. Materials submitted for research projects should be authentic, accurate, and objective. The same or similar research content must not be submitted repeatedly, and individuals must not be listed as team members without their consent. Plagiarizing, buying, selling, or ghostwriting submission materials is prohibited, as is using generative artificial intelligence to directly generate submission materials.

7. Researchers must strictly comply with relevant regulations on safety, confidentiality, use of funds, sharing of resources and data, and intellectual property, and should use generative artificial intelligence reasonably and in accordance with regulations when carrying out research.

II. Data Management:

(1) Researchers

13. Researchers should follow relevant laws, regulations, and academic norms and use generative artificial intelligence reasonably when processing text, data, or academic images, guarding against the risks of fabrication and data tampering.

III. Literature Citation:

4. Content generated by generative artificial intelligence, especially key content involving facts and opinions, should be clearly labeled when used, and its generation process should be explained, to ensure authenticity and respect for others’ intellectual property. Content already labeled as AI-generated should generally not be cited as original literature; where such citation is necessary, an explanation should be given.

7. Literature that has not been consulted or is unrelated to the research content should not be included in the reference list; this includes inappropriate self-citation, reciprocal citation arrangements with others, and citing unrelated literature at the request of reviewers or editors. Unverified references generated by generative artificial intelligence should not be used directly.

IV. Authorship:

7. Generative artificial intelligence should not be listed as a co-author of research outcomes. The main methods and details of using generative artificial intelligence should be disclosed in related sections such as research methods or appendices.

V. Publication of Results:

(3) Academic Publishing Units

3. Authors should be required to declare whether generative artificial intelligence was used, to specify the software name, version, and time of use, and to specifically label AI-assisted content involving facts or opinions.

VI. Peer Review:

(1) Reviewers

7. When using generative artificial intelligence in review activities, reviewers should obtain prior consent from the organizers of the review and take measures to prevent leakage of the materials under review; if information is leaked, necessary remedial measures should be taken promptly.

Part 2

Publishing Industry: Clarifying Policies, Ensuring Originality

On September 20, 2023, the China Institute of Scientific and Technical Information, together with Elsevier, Springer Nature, and Wiley, released the “Guidelines on the Boundaries of AIGC Use in Academic Publishing”[4], which delineates the boundaries of AIGC use in academic publishing and aims to build consensus on norms for AI use among the publishing industry, the scientific community, and science and technology administration departments. The guidelines emphasize that although AIGC can assist with data collection, statistical analysis, chart creation, and text editing, researchers must verify and edit its outputs to ensure originality and accuracy and must disclose such use appropriately; AIGC itself cannot be listed as an author of a paper.

Major publishers (Cambridge University Press[5], Elsevier[6], Emerald[7], IEEE[8], Sage[9], Science[10], Springer Nature[11], Taylor & Francis[12], Wiley[13], etc.) follow the guidance of the Committee on Publication Ethics (COPE)[14], which does not allow AIGC tools to be listed as authors of papers because AI cannot assume legal responsibility or accountability for the work. Publishers require authors to disclose the use of AIGC in the methods section or acknowledgments of the paper and to take responsibility for the objectivity and accuracy of AIGC-generated content.

Although the publishers’ policies are strict, they also provide room for the reasonable use of AIGC. For example, Elsevier, Emerald, IEEE, and Wiley state that AI can be used as a proofreading tool to help improve the linguistic quality of articles but cannot replace the authors’ critical work.

Moreover, the detailed rules vary across publishers. Elsevier, for instance, specifies that AI tools must not be used to alter or introduce specific features within an image, although brightness or contrast adjustments are acceptable as long as they do not obscure information in the original image, whereas Emerald, Science, Springer Nature, and others explicitly prohibit the use of AI tools to generate or modify images and other multimedia materials.

Therefore, researchers must carefully read and comply with the relevant regulations of publishers before publishing papers to ensure that academic publication is not adversely affected by neglecting these rules.

Part 3

Universities: Cautious Attitude, Encouraging Reasonable Use

Some universities abroad initially opposed the use of AIGC in the classroom, but over time, as AI tools have become widespread, their attitudes have shifted toward supporting the use of AIGC under certain conditions, such as proper citation, data protection, guarding against AI hallucinations, and compliance with the academic integrity policies of the institution or the instructor.

Notable institutions such as Harvard University[15], the University of California, Los Angeles[16], Princeton University[17], Boston University[18], Australia’s Group of Eight[19], and the UK’s Russell Group universities, including Oxford and Cambridge[20], have issued guidelines to help faculty and students use AIGC correctly.

Harvard University has released guidelines for the use of AIGC tools.

In China, universities such as Shanghai Jiao Tong University[21] and ShanghaiTech University[22] have likewise issued guidelines that provide clear guidance for their faculty and students.

Conclusion

To safeguard the originality, fairness, and legality of academic research, governments, publishers, and educational institutions are jointly constructing a multi-dimensional regulatory framework. As AIGC (Artificial Intelligence Generated Content) technology continues to advance, the corresponding norms and guidelines are continually updated to keep pace with new developments. As members of the academic community, we have a responsibility to keep learning, understanding, and abiding by these norms. Only then can we make full use of the convenience and innovation that AI brings while jointly promoting the healthy development of academic publishing and upholding academic integrity and ethical responsibility.

This is a preliminary exploration of AIGC academic ethics, and we look forward to further exchanges. Please feel free to leave comments and share your insights and suggestions.

Notice

The library will soon launch an AI-themed LibGuide, which will compile related materials. Please stay tuned.

【Special thanks to Moonshot AI for the valuable assistance provided by its Kimi Intelligent Assistant during the editing of this article (generation date: 2024/4/24). The images in this article were generated by Microsoft Designer using the prompt “Describe the relationship between AI and academic publishing” (generation date: 2024/4/25).】

References

  1. UNESCO. “Guidelines for the Application of Generative Artificial Intelligence in Education and Research” [EB/OL]. [2024-04-19]. https://www.unesco.org/zh/articles/jiaokewenzuzhigeguoxujinkuaiguifanshengchengshirengongzhinengdexiaoyuanyingyong.

  2. Ministry of Science and Technology of the People’s Republic of China. “Guidelines for Responsible Research Conduct (2023)” [EB/OL]. [2024-04-19]. https://www.most.gov.cn/kjbgz/202312/t20231221_189240.html.

  3. National Natural Science Foundation of China. “Research Integrity Standards Handbook” [EB/OL]. https://www.nsfc.gov.cn/publish/portal0/tab442/info91294.htm.

  4. China Institute of Scientific and Technical Information. “Guidelines on the Boundaries of AIGC Use in Academic Publishing” [EB/OL]. [2024-04-19]. https://www.istic.ac.cn/html/1/227/243/245/1701698014446298352.html.

  5. Cambridge University Press. “Authorship and contributorship” [EB/OL]. [2024-04-19]. https://www.cambridge.org/core/services/authors/publishing-ethics/research-publishing-ethics-guidelines-for-journals/authorship-and-contributorship#ai-contributions-to-research-content.

  6. Elsevier. “The use of generative AI and AI-assisted technologies in writing for Elsevier” [EB/OL]. [2024-04-19]. https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier.

  7. Emerald. “Publishing ethics” [EB/OL]. [2024-04-19]. https://www.emeraldgrouppublishing.com/publish-with-us/ethics-integrity/research-publishing-ethics#authorship.

  8. IEEE. “Submission and Peer Review Policies” [EB/OL]. [2024-04-19]. https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/submission-and-peer-review-policies/#ai-generated-text.

  9. Sage. “Sage Editorial Policies” [EB/OL]. [2024-04-19]. https://au.sagepub.com/en-gb/oce/chatgpt-and-generative-ai.

  10. Science. “Change to policy on the use of generative AI and large language models” [EB/OL]. [2024-04-19]. https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models#:~:text=AI-generated%20images%20and%20other%20multimedia%20are%20not%20permitted,in%20manuscripts%20specifically%20about%20AI%20and%2For%20machine%20learning.

  11. Springer Nature. “Editorial Policies” [EB/OL]. [2024-04-19]. https://www.springernature.com/gp/policies/editorial-policies.

  12. Taylor & Francis. “Publishing ethics and research integrity” [EB/OL]. [2024-04-19]. https://taylorandfrancis.com/about/corporate-responsibility/publishing-ethics-and-research-integrity/.

  13. Wiley. “Best Practice Guidelines on Research Integrity and Publishing Ethics” [EB/OL]. [2024-04-19]. https://authorservices.wiley.com/ethics-guidelines/index.html#5.

  14. COPE. “Authorship and AI tools” [EB/OL]. [2024-04-19]. https://publicationethics.org/cope-position-statements/ai-author.

  15. Harvard University Office of the Provost. “Guidelines for Using ChatGPT and other Generative AI tools at Harvard” [EB/OL]. [2024-04-19]. https://provost.harvard.edu/guidelines-using-chatgpt-and-other-generative-ai-tools-harvard.

  16. UCLA Center for the Advancement of Teaching. “Guidance for the Use of Generative AI” [EB/OL]. [2024-04-19]. https://teaching.ucla.edu/resources/ai_guidance/#toggle-id-5.

  17. Princeton University. “Guidance on AI/ChatGPT” [EB/OL]. [2024-04-19]. https://mcgraw.princeton.edu/guidance-aichatgpt.

  18. Boston University. “Quick references for teaching with AI” [EB/OL]. [2024-04-19]. https://www.bu.edu/ctl/ctl_resource/quick-references-for-teaching-with-ai/#generative-ai-information-at-boston-university.

  19. Group of Eight Australia. “Group of Eight principles on the use of generative artificial intelligence” [EB/OL]. [2024-04-19]. https://go8.edu.au/group-of-eight-principles-on-the-use-of-generative-artificial-intelligence.

  20. Russell Group. “New principles on use of AI in education” [EB/OL]. [2024-04-19]. https://russellgroup.ac.uk/news/new-principles-on-use-of-ai-in-education/.

  21. Shanghai Jiao Tong University. “Shanghai Jiao Tong University releases guidelines for the use of generative artificial intelligence for teachers, establishing a co-construction and sharing mechanism for AI applications in teaching” [EB/OL]. [2024-04-19]. https://news.sjtu.edu.cn/jdyw/20231215/192059.html.

  22. ShanghaiTech University. “Guidelines for the use of generative artificial intelligence” [EB/OL]. [2024-04-19]. https://ai.shanghaitech.edu.cn/2024/0327/c14346a1093334/page.htm.

Copywriting: Liu Mengyun
Formatting: Mu Qingdian
Proofreading: Han Lifeng, Wang Yuan
Editing: Zhang Lan
Review: Jiang Yunzong

Reprinted from the Tsinghua University Library. Please contact us for removal in case of any infringement.
