On September 7, UNESCO issued the “Guidance for Generative AI in Education and Research” (hereinafter referred to as the “Guidelines”), the world’s first global guidance document on generative AI in education, aimed at better integrating generative AI into education.
The following is a partial compilation of the “Guidelines.”

UNESCO pointed out that by January 2023 the number of active users of ChatGPT had exceeded 100 million, yet by July only one country had issued regulations on generative AI (China’s “Interim Measures for the Management of Generative AI Services”).
Comparing this with China’s “Interim Measures for the Management of Generative AI Services,” issued in July, we find that both documents stress the importance of labeling AI-generated content, require education practitioners to apply ethical principles correctly, and call for additional human review to prevent the spread of large volumes of erroneous AI-generated information. This is an important signal for the education sector.
In addition, the “Guidelines” provide a more comprehensive analysis of the risks currently posed by generative AI, conveying recommendations to practitioners in the education sector.
Regarding the exploration of AI implementation, the “Guidelines” also provide multiple directions for the correct use of AI in the field of education, such as using AI assistants to facilitate teaching; utilizing AI to generate one-on-one training plans for students in language, computer science, arts, and coding learning; assisting learners with hearing or visual impairments; and using AI dialogue for psychological and emotional counseling.
The “Guidelines” also recommend limiting the use of AI tools to individuals aged 13 and above.
UNESCO’s Director-General, Audrey Azoulay, pointed out, “Generative AI could be a tremendous opportunity for human development, but it could also cause harm and bias. Without public participation and the necessary safeguards and regulations from governments, it cannot be integrated into education. This guideline from UNESCO will help decision-makers and educators navigate the potential of AI to meet the primary interests of learners.”
What are the key points of the “Guidelines”?
1. Debates surrounding generative AI and its impact on education
Before presenting normative guidance, the “Guidelines” summarize a series of controversies and risks currently posed by generative AI (hereinafter referred to as Gen AI) that users often overlook.
Among these, the most significant impact on the education sector is that the large volume of biased information generated by AI muddies the information sources available to learners. Without regulation, the knowledge future students acquire is likely to be inaccurate, one-sided information processed by AI.
The risks mentioned in the “Guidelines” mainly include the following aspects:
- Worsening Digital Poverty
The “Guidelines” point out that generative AI relies not only on iterative innovations in AI architecture and training methods but also on vast amounts of data and powerful computing capability. These resources are mostly available only to the largest international technology companies and a few economies (primarily the US, China, and European countries). As access to data becomes increasingly important for national economic development and individual digital opportunity, those who cannot access or afford sufficient data are left in a state of “data poverty.”
In other words, AI can promote equity but can also exacerbate inequity, depending on how users choose to utilize it.
Researchers, educators, and learners should critically assess the value orientations, cultural standards, and social customs embedded in Gen AI training models. Policymakers should recognize, and act to address, the inequalities exacerbated by the widening gap in who trains and controls these models.
- Gen AI Outpacing National Regulatory Adaptation
Meanwhile, many companies that have begun using Gen AI have found it increasingly difficult to keep their systems secure. And although the AI industry itself calls for regulation, legislation governing the creation and use of all forms of AI consistently lags behind the rapid pace of development.
Although Gen AI can enhance human capability on certain tasks, oversight of the companies promoting it is limited. This raises regulatory issues, especially around the acquisition and use of domestic data, including data about local institutions and individuals as well as data generated on national territory, which requires appropriate legislation.
- Use of Content Without Consent
As mentioned earlier, the large amounts of data required by Gen AI models are often scraped from the internet, usually without any owner’s permission. Many Gen AI image systems and some Gen AI code systems have consequently been accused of infringing intellectual property rights.
Researchers, educators, and learners should also be aware that images or code created using Gen AI may infringe on others’ intellectual property rights, and the images, sounds, or code they create and share online may be utilized by other Gen AIs.
- Inaccurate Sources Due to AI-Generated Content
Since GPT training data is typically scraped from the internet, which often contains discriminatory and otherwise unacceptable language, and since strict regulation and effective oversight mechanisms are lacking, biased material generated by Gen AI is spreading across the internet, and learners are acquiring erroneous knowledge from it.
This is particularly important for the education sector, as materials generated by generative AI may appear quite accurate and convincing, but they often contain errors and biased viewpoints. This poses a significant risk for young learners, who lack solid prior knowledge of the subjects discussed.
This also poses a recursive risk for future GPT models, which will be trained on internet text created by earlier GPT models, inheriting their biases and errors.
Imagine if institutions used AI to generate courses, but due to a lack of review, erroneous knowledge was disseminated, which would have serious consequences for students. This sends an important signal to educators that the reliability of AI-generated knowledge could pose long-term issues.
- Lack of Understanding of the Real World
Text GPT models are sometimes dismissively called “stochastic parrots” because, while they can generate seemingly convincing text, that text often contains errors. A GPT model assembles output from statistical patterns in its training data without understanding its meaning, just as a parrot can mimic sounds without understanding what it is saying.
There is a disconnect between the models Gen AI uses and generates and the real world, which may lead teachers and students to place a degree of trust in the output that is unwarranted. This poses serious risks for the future of education. Generative AI is not grounded in observations of the real world or in scientific argument, and it does not necessarily align with human or societal values.
In addition to the controversies inherent in all generative AI, Gen AI can also be used to modify or manipulate existing images or videos to create indistinguishable fake images or videos. Gen AI is making the creation of these “deep fakes” and so-called “fake news” increasingly easy.
Currently, only China, EU countries, and the US have adjusted copyright law to address the impact of generative AI. For instance, the US Copyright Office has ruled that outputs from Gen AI systems (such as ChatGPT) are not protected by US copyright law, stating that “copyright can only protect products of human creativity” (US Copyright Office, 2023). In the EU, the proposed EU AI Act requires all tool developers to disclose the copyrighted materials used in building their systems (European Commission, 2021). China, through its regulations on generative AI issued in July 2023, requires that Gen AI outputs be labeled as AI-generated content and recognizes them only as digitally synthesized outputs.
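The labeling requirement described above can be illustrated with a minimal sketch: attaching an explicit, machine-readable “AI-generated” disclosure to model output before it is published. The function name and metadata fields here are purely illustrative assumptions, not drawn from any regulation or real platform API.

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> str:
    """Wrap generated text in a record carrying an explicit AI-generation disclosure."""
    record = {
        "content": text,
        "ai_generated": True,  # explicit disclosure flag required by labeling rules
        "model": model_name,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, ensure_ascii=False)

# Downstream consumers can then check the flag before displaying the content.
labeled = label_ai_output("Sample lesson summary.", "example-model")
assert json.loads(labeled)["ai_generated"] is True
```

In practice, disclosure might instead be an on-screen watermark or embedded metadata; the point is that the AI-generated status travels with the content rather than being left to the reader to infer.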
2. Promoting Creative Use of Generative AI in Education and Research
When ChatGPT was first launched, educators around the world expressed concerns about its potential to generate papers and how it could facilitate cheating among students.
Meanwhile, the internet is flooded with suggestions for using Gen AI in education and research. These include using it to inspire new ideas, generate multi-perspective examples, develop lesson plans and presentations, summarize existing materials, and stimulate image creation. Despite new ideas emerging almost daily on the internet, researchers and educators are still exploring the exact implications of Gen AI for teaching, learning, and research.
In particular, many of the proposed uses may not have been properly considered in terms of ethical principles, while others are driven by the technological potential of Gen AI rather than the needs of researchers, educators, or learners. This section outlines ways to creatively use Gen AI in education.
Educational and research institutions should develop, implement, and validate appropriate strategies and ethical frameworks to guide the responsible and ethical use of Gen AI systems and applications in teaching, learning, and research. This can be achieved through the following four strategies:
Institutional Implementation of Ethical Principles: Ensure that researchers, educators, and learners use Gen AI tools responsibly and ethically, and rigorously assess the accuracy and validity of outputs.
Guidance and Training: Provide guidance and training on Gen AI tools to researchers, educators, and learners, so that they understand ethical issues such as bias in data labeling and algorithms, and comply with appropriate regulations on data privacy and intellectual property rights.
Establish Gen AI Prompt Engineering Capabilities: In addition to subject-specific knowledge, researchers and educators need expertise in engineering effective prompts and in rigorously assessing the outputs Gen AI generates. Given the complexity of the challenges Gen AI poses, researchers and educators must receive high-quality training and support to achieve this.
Detection of Gen AI-Based Plagiarism in Written Assignments: Gen AI may allow students to present text they did not write as their own work, a new form of “plagiarism.” Gen AI vendors are required to watermark their outputs as “AI-generated,” and tools are being developed to identify AI-generated material, but there is little evidence that either measure is effective. The near-term institutional strategy is to maintain academic integrity and strengthen accountability through rigorous human review. The long-term strategy is for institutions and educators to redesign written assignments so that they no longer assess tasks that Gen AI tools perform better than human learners, and instead address what humans can do that generative and other AI tools cannot, such as applying human values like empathy and creativity to complex real-world challenges.
3. Exploring Practical Applications of Generative AI in Educational Settings
The “Guidelines” provide examples of how co-designed uses of Gen AI can inform research practice, assist teaching, provide tutoring for self-paced acquisition of basic skills, facilitate higher-order thinking, and support learners with special needs.
- Generative Artificial Intelligence for Research
Gen AI models have demonstrated their potential to expand the scope of research outlines and enrich data exploration and literature reviews. While broader use cases may emerge, new research is needed to define the research questions and expected outcomes for which they are effective and accurate, ensuring that human understanding of the real world gained through research is not compromised by the use of AI tools.
- Generative AI to Facilitate Teaching
The use of Gen AI platforms and the design of education-specific Gen AI tools should aim to deepen teachers’ understanding of their subjects and of teaching methods, including co-designing lesson plans, course packages, or entire courses with AI. Pre-trained conversational teaching assistants, or “teaching-assistant twins,” could also be utilized.
Some educational institutions have tested such models, built on data from experienced teachers and librarians; these may carry unknown potential as well as ethical risks. Their practical application and further iterations still need to be carefully reviewed through the framework suggested in these Guidelines and protected through human oversight.
- Generative AI as a 1:1 Coach for Self-Paced Acquisition of Basic Skills
While higher-order thinking and creativity have drawn increasing attention in defining learning outcomes, basic skills remain essential to children’s psychological development and growing capability. These basic skills include listening, pronunciation, and writing in a native or foreign language, as well as basic computational skills, arts, and coding. “Drill and practice” should not be dismissed as an outdated teaching method; rather, it can be reactivated and upgraded with Gen AI technologies to support learners’ self-paced practice of basic skills. Gen AI tools have the potential to serve as 1:1 coaches for such self-paced practice.
- Support for Learners with Special Needs through Generative AI
Theoretically, Gen AI models have the potential to assist learners with hearing or visual impairments. Emerging practices include AI-generated subtitles or captions for learners who are deaf or hard of hearing, and AI-generated audio descriptions for visually impaired learners. Gen AI models can also convert text to speech and vice versa, enabling individuals with visual, auditory, or speech impairments to access content, ask questions, and communicate with their peers.
However, this functionality has not been widely utilized. According to a survey conducted by UNESCO in 2023 regarding government use of AI in education, only four countries (China, Jordan, Malaysia, and Qatar) reported that their government agencies had verified and recommended AI-assisted tools to support inclusive access for learners with disabilities (UNESCO, 2023).
Finally, it has been suggested that Gen AI systems could conduct dialogue-based diagnostics to identify psychological or social-emotional issues and learning difficulties. However, there is little evidence that this approach is effective or safe, and any diagnosis would need to be interpreted by trained professionals.
4. UNESCO Regulations: Age Limit for Using AI Tools Set at 13 Years
Most Gen AI applications are primarily designed for adult users. These applications often pose significant risks to children, including exposure to inappropriate content and potential manipulation. Given these risks, and considering the considerable uncertainty surrounding the iteration of Gen AI applications, it is strongly recommended to impose age restrictions on Gen AI technologies to protect the rights and well-being of children.
Currently, ChatGPT’s terms of use require users to be at least 13 years old, and users under 18 must obtain parental or legal guardian consent to use the service.
Even before the widespread use of social media and the arrival of user-friendly, powerful Gen AI applications such as ChatGPT, regulations stipulated that social media providers should not serve children under 13 without parental consent. Many commentators argue that this threshold is too low and advocate legislation raising the age to 16. The EU’s GDPR (2016) stipulates that users must be at least 16 years old to use social media services without parental consent.
The emergence of various Gen AI chatbots requires countries to carefully consider and publicly deliberate the appropriate age threshold for independent dialogue with Gen AI platforms. The minimum age should be set at 13. Countries also need to decide whether self-reported age remains an appropriate means of verifying age.
Countries will need to mandate that Gen AI service providers take responsibility for age verification, and that parents or guardians take responsibility for supervising minors’ independent conversations.
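The age thresholds discussed above can be summarized as a simple decision rule: under 13, no access; 13 to 17, access only with parental or guardian consent; 18 and over, unrestricted access. The sketch below is purely illustrative; real age verification is a policy and identity problem, not a single function, and the function and return values here are assumptions for demonstration.

```python
def access_decision(age: int, has_guardian_consent: bool = False) -> str:
    """Apply the age-gating rule: deny under 13, require consent for 13-17, allow 18+."""
    if age < 13:
        return "deny"  # below the minimum age recommended for Gen AI tools
    if age < 18:
        # minors may use the service only with parental or guardian consent
        return "allow" if has_guardian_consent else "require_consent"
    return "allow"

assert access_decision(12) == "deny"
assert access_decision(15) == "require_consent"
assert access_decision(15, has_guardian_consent=True) == "allow"
assert access_decision(20) == "allow"
```

Note that such a rule is only as reliable as the age claim feeding it, which is why the Guidelines question whether self-reported age remains an appropriate means of verification.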
From a human-centered perspective, AI tools should be designed to extend or enhance human intellectual capabilities and social skills, not to undermine, conflict with, or replace them. There has long been an expectation that AI tools will become a well-integrated part of everyday toolsets.
To make AI a trusted part of human-machine collaboration at the individual, institutional, and systemic levels, the human-centered approach proposed by UNESCO in 2021 regarding AI ethics will be further refined and implemented according to the specific characteristics of emerging technologies (such as Gen AI). Only in this way can we ensure that Gen AI becomes a trustworthy tool for researchers, educators, and learners.
While Gen AI should be used in education and research, we all need to recognize that it may also change the established systems and foundations of these fields. Any transformation of education and research triggered by Gen AI should be subject to strict review and guided by a human-centered approach. Only then can we ensure that the potential of Gen AI, and of all other categories of technology used more broadly in education, enhances humanity’s ability to build an inclusive digital future for all.