Guidelines for Responsible Use of Generative AI in Research

In recent years, the wide availability of data, leaps in computing power, and advances in machine learning have driven remarkable progress in artificial intelligence (AI). In particular, foundation models trained on vast amounts of unlabelled data have given rise to so-called general-purpose AI. These models can perform diverse tasks, including creating new content such as text, code, and images, and for this we refer to them as “generative AI.” They typically generate content from user instructions (prompts), and the quality is often high enough that the output is difficult to distinguish from human-created content.

The rapid and widespread adoption of generative AI has attracted extensive attention and prompted broad policy and institutional responses. The European Union is at the global forefront with its AI Act, and other international governance efforts are underway, including the G7-led Hiroshima Process and the Bletchley Declaration signed at the AI Safety Summit.

1. Scientific Research is One of the Fields Most Affected by Generative AI

Generative AI opens vast possibilities across industries, but it also carries risks, such as the mass production of misinformation and other unethical uses with significant societal impact. In scientific research especially, AI’s potential is enormous: it can accelerate scientific discovery and speed up research verification processes. At the same time, misuse of the technology can lead to research misconduct and the spread of misinformation.

2. Why Do We Need This Guide?

To ensure that the benefits of AI tools are fully realized, various institutions such as universities, research institutions, funding agencies, and publishers have released guidelines on the appropriate use of AI. However, the proliferation of these guidelines has created complexity, making it difficult to determine which guidance to follow in a given situation.

Therefore, the European Research Area Forum has decided to develop a set of guidelines for funding organizations, research institutions, and researchers in the public and private research ecosystems regarding the use of generative AI in research. These guidelines aim to prevent abuse and ensure that generative AI plays a positive role in improving research practices.

3. Core Principles of the Guidelines

These guidelines are based on existing relevant frameworks, such as the European Code of Conduct for Research Integrity and the work and guidelines on trustworthy AI developed by high-level experts. They emphasize the reliability, transparency, fairness, thoroughness, and integrity of research quality, as well as the principles of respecting colleagues, research participants, and society.

4. Guidelines for the Responsible Use of Generative AI in Scientific Research

(1) Recommendations for Researchers

1. Remain ultimately responsible for research outcomes.

a) Researchers are responsible for the integrity of content generated or supported by AI tools.

b) Researchers should maintain a critical attitude when using results produced by generative AI and be aware of the limitations of these tools, such as bias, hallucinations, and inaccuracies.

c) AI systems are neither authors nor co-authors. Authorship implies agency and responsibility, which lies with human researchers.

d) Researchers should not use generative AI to fabricate, alter, or manipulate original research data, or otherwise introduce fabricated material into the scientific process.

2. Use generative AI transparently.

a) For transparency, researchers should disclose which generative AI tools were used substantially in their research process. Mention of a tool could include its name, version, and date of use, as well as how it was used and its impact on the research. If relevant, researchers should also make the inputs (prompts) and outputs available, in line with open science principles.

b) Researchers should take into account the stochastic nature of generative AI tools: the same input may yield different outputs. They should strive for reproducibility and robustness of results and conclusions, and should disclose or discuss the limitations of the generative AI tools used, including potential biases in generated content and possible mitigation measures.
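The stochasticity the guidelines refer to can be illustrated with a toy sampling routine, not tied to any specific AI tool: with a nonzero sampling temperature the same input can yield different outputs across runs, while greedy decoding (or a fixed random seed) makes results repeatable. This is a generic sketch for illustration only; real generative AI systems expose such controls in tool-specific ways.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick one token index from raw scores (logits).
    temperature == 0 means greedy decoding: always take the argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling (numerically stabilized).
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.5, 0.5]  # toy next-token scores

# With temperature > 0 and no fixed seed, repeated runs can differ:
# outputs are not reproducible.
rng = random.Random()
varied = [sample_token(logits, 1.0, rng) for _ in range(10)]

# Greedy decoding (temperature 0) is deterministic: always the top token.
assert sample_token(logits, 0, rng) == 0

# Fixing the seed makes stochastic sampling repeatable.
a = [sample_token(logits, 1.0, random.Random(42)) for _ in range(5)]
b = [sample_token(logits, 1.0, random.Random(42)) for _ in range(5)]
assert a == b
```

Documenting such settings (temperature, seed, model version) is one concrete way to support the reproducibility the guidelines call for.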

3. Be particularly cautious about privacy, confidentiality, and intellectual property issues when sharing sensitive or protected information with AI tools.

a) Researchers should be aware that the inputs they provide (text, data, prompts, images, etc.) may be used for other purposes, such as training AI models. They should therefore protect unpublished or sensitive work (their own or others’) and avoid uploading it to online AI systems unless they can ensure the data will not be reused.

b) Researchers should not provide third parties’ personal data to online generative AI systems unless the data subjects (the individuals concerned) have given their consent, and they should ensure compliance with EU data protection rules.

c) Researchers should understand the technical and ethical implications regarding privacy, confidentiality, and intellectual property. For example, they should check the tools’ privacy options, who operates them (public or private institutions, companies, etc.), where they are run, and the implications for any uploaded information. This may range from closed environments and hosting on third-party infrastructure with guaranteed privacy to openly accessible internet platforms.

4. When using generative AI, respect applicable national, EU, and international legislation, as in regular research activities. In particular, the outputs of generative AI can be especially sensitive in relation to intellectual property and personal data protection.

a) Researchers should be aware of the potential for plagiarism (of text, code, images, etc.) when using outputs from generative AI. They should respect others’ copyrights and cite their work appropriately. Outputs of generative AI (such as large language models) may be based on others’ results and may require proper attribution and citation.

b) Outputs generated by generative AI may contain personal data. If this occurs, researchers are responsible for handling any personal data in a responsible manner and complying with EU data protection regulations.

5. Keep learning how to use generative AI tools properly, through training and other means, in order to gain maximum benefit from them. Generative AI tools are evolving rapidly, and new ways of using them are constantly emerging. Researchers should stay up to date on best practices and share them with colleagues and other stakeholders.

6. Avoid substantial use of generative AI tools in sensitive activities that could affect other researchers or organizations (e.g., peer review, evaluation of research proposals).

a) Avoiding generative AI tools in these activities eliminates the potential risks of unfair treatment or evaluation that may arise from the tools’ limitations (such as hallucinations and biases).

b) Moreover, avoiding such use protects researchers’ unpublished original work from being exposed or incorporated into AI models.

(2) Recommendations for Research Institutions

1. Promote, guide, and support the responsible use of generative AI in research activities.

a) Research institutions should provide and/or promote training on the use of generative AI, particularly (but not limited to) training on verifying outputs, maintaining privacy, addressing biases, and protecting intellectual property.

b) Research institutions should provide support and guidance to ensure compliance with ethical and legal requirements (EU data protection regulations, intellectual property protection, etc.).

2. Actively monitor the development and use of generative AI systems within the organization.

a) Research institutions should stay attentive to research activities and processes that use generative AI in order to better support its future use. This includes providing further guidance on its use, identifying training needs and the support that would be most beneficial, helping to anticipate and prevent potential misuse and abuse of AI tools, and publishing and sharing what is learned with the scientific community.

b) Research institutions should analyze the limitations of technologies and tools and provide feedback and suggestions to their researchers.

3. Reference these generative AI guidelines or incorporate them into general research guidelines on research practices and ethics.

a) Research institutions should use these guidelines as a basis for discussions on the use of generative AI and related policies, openly soliciting feedback from researchers and stakeholders.

b) Research institutions should adopt these guidelines as much as possible. If necessary, specific additional recommendations and/or exceptions can be supplemented and should be published to enhance transparency.

4. Where possible and necessary, implement locally hosted or cloud-based generative AI tools that they manage themselves. This allows staff to feed their scientific data into a tool that ensures data protection and confidentiality. Institutions should ensure that these systems, especially those connected to the internet, meet appropriate cybersecurity standards.

(3) Recommendations for Funding Organizations

Funding organizations operate in different contexts, with different missions and regulations, and a single set of guidelines may not fit all of them. The following are effective practices that organizations can implement in the way best suited to their specific circumstances and goals.

1. Promote and support the responsible use of generative AI in research.

a) Funding organizations should design their funding instruments to be open to, and supportive of, the responsible and ethical use of generative AI in research activities.

b) Funding organizations should require funded research and recipients to comply with existing national, EU, and international legislation (as applicable) and beneficial practices for using generative AI.

c) Funding organizations should encourage researchers and research institutions to use generative AI ethically and responsibly, including compliance with legal and research standards.

2. Review the use of generative AI in their internal processes. They should ensure that AI is used transparently and responsibly, thereby leading by example.

a) Following the accountability principle that emphasizes responsibility and human oversight, funding organizations remain fully accountable for the use of generative AI in their activities.

b) Funding organizations should use generative AI transparently, particularly in activities related to assessment and evaluation, without compromising the confidentiality of content and fairness of processes.

c) When selecting generative AI tools, funding organizations should carefully consider whether the tool meets standards for quality, transparency, integrity, data protection, confidentiality, and respect for intellectual property.

3. Require research project applicants to maintain transparency regarding the use of generative AI for reporting purposes.

a) Research project applicants should declare any substantial use of generative AI tools in preparing their applications.

b) Research project applicants should provide information on the role of generative AI in proposed and conducted research activities.

4. Pay attention to and actively participate in the rapidly evolving field of generative AI. Funding organizations should promote and fund training and educational programs to use AI in scientific research ethically and responsibly.

Conclusion

The EU’s “Guidelines for the Responsible Use of Generative AI in Research” provide a comprehensive and in-depth set of principles aimed at ensuring that generative AI plays a positive role in research while its potential risks and abuses are avoided. Notably, the guidelines will be continuously updated to keep pace with changes in technology and the policy environment. All stakeholders are encouraged to actively join the discussion and promote the responsible use of generative AI in research.

Disclaimer: This article is reprinted from Yuan Strategy; the original author is Jigu. The content represents the personal views of the original author, and this account’s translation/reprint is solely to share and convey different viewpoints. If there are any objections, please contact us.

About the Institute

The International Institute for Technological Economics (IITE) was established in November 1985 and is a non-profit research institution affiliated with the Development Research Center of the State Council. Its main function is to research major policy, strategic, and forward-looking issues in China’s economic and technological social development, track and analyze the trends of global technological and economic development, and provide decision-making consulting services for the central government and relevant ministries. The “Global Technology Map” is the official WeChat account of the International Institute for Technological Economics, dedicated to conveying cutting-edge technological information and insights into technological innovation to the public.
