International Innovative Applications of Generative Artificial Intelligence in Education: An Overview of the “Generative AI Empowering Learners Conference” by the Global Research Alliance for AI Learning and Education
Wang Zhijun, Teng Zhiqiang, Su Chenyu
[Abstract] Generative Artificial Intelligence (AI), a general-purpose technology akin to “electricity”, is sparking a wave of intelligent reform on a global scale. How to innovate education and teaching on the basis of generative AI is a challenge for educational researchers and practitioners and a hot topic around the world. Drawing on the core content of the Generative AI Empowering Learners Conference organized by the Global Research Alliance for AI Learning and Education (GRAILE), this research constructs a generative AI education application innovation framework and systematically presents the international landscape of generative AI innovation at three levels: concept, practice, and consensus. At the conceptual level, humans are encouraged to embrace generative AI as a partner in knowledge co-creation, foster a culture of generative AI innovation, and achieve theoretical innovation and cognitive iteration across multiple aspects, from institutional systems to educational philosophies and teaching and learning methods. The SPARK model, the generative AI literacy model, and the generativism digital education framework provide theoretical support for the application of generative AI in education in terms of organizational guarantee, literacy cultivation, and practical guidelines, respectively. At the practical level, innovative practices should keep personalized and humanized learning experiences at their core and focus on innovation and application promotion around practical problems in educational settings. Six typical international practice cases are presented: (1) generative AI-supported collaborative learning across networked and physical spaces; (2) large language model-driven intelligent tutoring systems; (3) generative AI-enhanced immersive platforms and peer support promoting brain health; (4) large language model-supported automatic generation of test questions, distractors, and feedback; (5) intelligent learning path planning and personalized learning support; and (6) affinity group practices advancing the application of generative AI in higher education. Six major international consensuses were formed at the conference: (1) recognizing the specificity of innovative applications of generative AI in education; (2) treating trust as a prerequisite for moving from negative rejection to positive application; (3) upholding the leading role of humans in coping with cognitive stagnation and dehumanization; (4) paying attention to data leakage and security and ethical risks in applications; (5) bridging the new digital divide and structural inequalities; and (6) establishing new standards for AI development and application. This research on generative AI education application innovation can provide a reference for the theory and practice of generative AI educational applications in China.
[Keywords] Generative Artificial Intelligence; Education Innovation; ChatGPT; SPARK Model; AI Literacy; Digital Education; Personalized Learning Experience
1. Introduction
Large language models, represented by ChatGPT, have crossed the boundaries of computer science with their powerful natural language processing capabilities, sparking a wave of intelligent reform worldwide. Microsoft CEO Satya Nadella has remarked that, for knowledge workers, generative AI is the equivalent of the Industrial Revolution (World Economic Forum, 2023). Andrew Ng, a leading AI scholar at Stanford University, regards AI as a “new electricity” for the future (Stanford Online, 2023): just as the invention of electricity over a hundred years ago transformed one industry after another, AI will bring about changes of similar magnitude. How to carry out theoretical and practical innovation in education in the face of generative AI is a proposition of our time. Many researchers in China have conducted extensive theoretical discussions on the educational applications of generative AI, covering educational application models and strategies (Wu Di, et al., 2023; Lu Yu, et al., 2023; Jiao Jianli, 2023; Zheng Yanlin, et al., 2023), the impact on education (Yu Nanping, et al., 2023; Wu Nanzhong, et al., 2023), the transformation of learning paradigms (Zhu Zhiting, et al., 2023), and the design of educational prompts (Zhao Xiaowei, et al., 2024). Exploratory studies of ChatGPT educational applications have also appeared (Wang Zhuo, et al., 2023; Chen Kaiquan, et al., 2023). In summary, innovation in generative AI educational applications is a practical undertaking, and systematic theoretical and practical innovation in this area is still relatively lacking in China. There is an urgent need to promote systematic innovation in educational practice on the basis of a thorough understanding of international practical results.
2. Generative AI Education Application Innovation Framework
The Global Research Alliance for AI Learning and Education (GRAILE) is an academic organization that systematically promotes innovation in AI educational applications. Led by George Siemens, it aims to bring AI research into learning and educational practice, support educational leaders in planning and deploying AI solutions, and provide spaces for AI learning and AI literacy enhancement, thereby empowering teachers and educational organizations. To sustain attention to the field from theory to practice, the organization holds themed activities on AI educational applications every month and convenes international conferences from time to time to promote communication and cooperation in education. In October 2023, 93 leading researchers and practitioners from 42 universities, international organizations, research institutions, and enterprises in 17 countries participated in the “Generative AI Empowering Learners Conference”, engaging in three days of in-depth discussion on practical innovation in generative AI educational applications (GRAILE, 2023). The conference comprised three keynote speeches and 21 thematic forums. By sorting out the core content of this conference and tracking related outcomes, this research constructs a generative AI education application innovation framework and systematically presents the conference’s core achievements at three levels: concept, practice, and consensus.

Figure 1 Generative AI Education Application Innovation Framework
(1) Concept Level. Innovation at the conceptual level serves as an important guide for practical innovation. Compared with traditional teaching and learning tools, generative AI possesses autonomous learning and innovation capabilities. Humans should transcend an instrumental view of technology and treat generative AI as a partner in knowledge co-creation. Guided by this idea of human-machine collaborative knowledge co-creation, it is essential to reconstruct the relationships among educational subjects, transform educational concepts, reshape intelligent teaching and learning methods, actively pursue theoretical and institutional innovation, emphasize the cultivation of generative AI literacy, and embrace a culture of generative AI innovation. The keynote speeches and parallel forums at the conference discussed the AI technology adoption model in education, the generative AI literacy framework, and the generativism digital education framework. These frameworks provide systematic theoretical support for the application of generative AI in education in terms of organizational guarantee, literacy cultivation, and practical guidelines.
(2) Practical Level. Although generative AI has only recently entered the public eye, numerous innovative educational practices have emerged internationally. With ChatGPT, educational practitioners, learning designers, educational technology experts, and learners can design customized learning tools on their own, lowering the barrier for non-professionals to develop such tools. In particular, the APIs provided by large language models make it straightforward to build educational application tools such as Khanmigo (Khan Academy, 2023), LaMPost (Goodman, et al., 2022), Metaphorian (Kim, et al., 2023), Spellburst (Angert, et al., 2023), and MetaGPT (Hong, et al., 2023); a minimal API sketch follows this three-level overview. Meanwhile, as shown in Figure 1, the conference systematically presented a number of typical educational practice cases. Building on the characteristics of generative AI, these practices address diverse educational scenarios, including subject teaching, interdisciplinary integration, collaborative teaching, collaborative learning, and adaptive instruction; instructional design tasks such as learning path planning and personalized learning support services; and assessment applications such as automatic test question generation, automatic essay scoring, automatic feedback generation, and conversational evaluation. These practices center on personalized and humanized learning experiences, adhere to a student-centered approach to innovation, and continuously strengthen learners’ personalized, self-regulated, and collaborative learning, promoting the development of higher-order thinking and the cultivation of generative AI literacy.
(3) Consensus Level. The innovation of generative AI educational applications faces a series of fundamental issues and challenges, and resolving them is an important guarantee for the continuous emergence and large-scale application of innovation. In its “Guidance for Generative AI in Education and Research”, the United Nations Educational, Scientific and Cultural Organization (UNESCO) identified eight application risks of generative AI: exacerbating the digital divide, outpacing policy regulation, infringing intellectual property rights, lacking transparency in operational mechanisms, producing unreliable content, lacking understanding of the real world, limiting the expression of diverse viewpoints, and generating false content (UNESCO, 2023). The threats generative AI poses to the core educational values of equality and inclusion, learner agency and values, linguistic and cultural diversity, and the plurality of knowledge construction are direct and profound (Miao Fengchun, 2024). In educational practice, it is crucial to address these issues and emphasize the leading role of humans so as to create a sound ecological environment that promotes innovation and its practical promotion and ensures healthy and orderly development. In the face of these problems, the conference reached consensus on six aspects.
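As an illustration of the API-based tool-building mentioned at the practical level above, the following is a minimal sketch of a small learner-facing helper. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and function name are illustrative assumptions rather than drawn from any of the tools named above.

```python
# A minimal sketch of a custom learning tool built on an LLM API.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def explain_for_learner(term: str, grade_level: str = "middle school") -> str:
    """Ask the model for a short, level-appropriate explanation with one example."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": f"You are a tutor. Explain concepts to a {grade_level} student "
                        "in three sentences or fewer, then give one concrete example."},
            {"role": "user", "content": f"Explain the concept: {term}"},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(explain_for_learner("photosynthesis"))
```

Even a short wrapper of this kind can be repackaged by non-professionals into flash-card generators, explanation helpers, or question banks, which is what lowers the development barrier described above.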
In the context of the digital transformation of education, the innovation of generative AI educational applications is not merely the application of a technological tool but a systematic reform project that must proceed from the conceptual level to the practical level. In practice, a systematic perspective is required: conducting theoretical innovation and institutional design at the conceptual level; carrying out innovative practices from multiple dimensions and perspectives, grounded in practical scenarios and problem-solving in the education sector; and reaching consensus on a series of new issues so as to build a sound ecological environment and promote the continuous emergence and large-scale application of innovative practices. To fully capture the outcomes of international innovation in generative AI educational applications, this research presents them in three parts: conceptual innovation, typical practices, and conference consensus.
3. Conceptual Innovation
(1) Organizational Guarantee: AI Technology Adoption Model in Education
Innovation in AI educational applications requires coordination among multiple stakeholders. Abelardo Pardo of the University of Adelaide in Australia pointed out that such innovation involves two layers of stakeholders: an innovation layer and an operational layer (GRAILE AI, 2023a), as shown in Figure 2. The innovation layer includes academic researchers, teachers, AI experts, IT experts, and pioneers, who are responsible for trying new technologies and ideas and thinking about how to apply them in practice. The operational layer includes managers and leaders in educational institutions, who are more concerned with how to respond to the challenges of generative AI over the next five years and how to solve practical problems in institutional operations. The two learning analytics adoption models associated with complexity leadership (Dawson, et al., 2018) apply equally to educational institutions adopting AI technology, but both have limitations. In the top-down model, operational-level leaders deploy strategically to achieve specific goals; because only the leadership deeply understands the strategic deployment, others understand it poorly, resulting in limited support and even stagnation. In the bottom-up model, pioneers or other groups within the organization initiate innovation before the operational layer recognizes the value of the new technologies; with limited resources and little interaction or mutual understanding among departments within the same institution, their actions lack higher-level goals, direction, and strategy. In this model, although groups such as academic researchers, teachers, and pioneers are highly engaged, their limited understanding of other parts of the educational ecosystem constrains the impact of the technology.

Figure 2 Stakeholders and Their Relationships
To avoid the limitations of both models and to promote AI educational application innovation efficiently and with broader support, the organizational structure of the operational and innovation layers needs to shift from a hierarchy to an interrelated network, integrating their respective advantages and building an ecosystem that fosters AI educational application innovation. Accordingly, Abelardo Pardo proposed the AI technology adoption model in education (the SPARK model), as shown in Figure 3. The left half of the model consists of the stakeholders responsible for innovation and operational functions, including but not limited to leaders, managers, IT experts, pioneers, academic developers, and students. This community is networked, interconnected, and able to communicate immediately, and it can jointly carry out systematic planning and knowledge transformation for technology adoption. The right half of the model comprises systematic planning, innovative practice, and knowledge transformation and scaling. Systematic planning covers algorithms, predictive models, research reports, and actions that meet demands, gradually embedded into the existing operational system. The core of innovative practice is to identify problems, deploy algorithms, and conduct research: identifying problems requires the institution to focus on a specific issue; deploying algorithms specifies the algorithms or technologies used to solve that problem, with iteration being crucial; and conducting research means studying these pilot projects and the attempts to apply the technologies. Knowledge transformation and scaling represent the vision of expanding innovative technologies from pilot projects to the entire institution, which is key to their widespread adoption.

Figure 3 AI Technology Adoption Model
From a systemic perspective, the SPARK model reveals the organizational restructuring and effective collaborative relationships among stakeholders needed to achieve innovation and scaling of AI educational applications. It not only helps in grasping the fundamental issues that hinder the systematic emergence and promotion of new technologies in educational innovation but also provides theoretical guidance for the emergence and promotion of generative AI educational applications. According to the “2022 Global AI Innovation Index Report” released by the Chinese Academy of Sciences (Chinese Academy of Sciences, 2023), China has made significant achievements in AI development, remaining in the second tier globally on the AI innovation index for nearly three years, with progress in talent, education, and patent output, although the level of basic resource construction still needs improvement. Meanwhile, the “2023 AI Index Report” released by Stanford University (Stanford University, 2023), which tracks 127 countries, found that China’s optimism about AI technology ranks first in the world: 78% of surveyed Chinese respondents believe that the benefits of AI products and services outweigh the drawbacks, compared with only 35% of respondents in the United States. The report reflects the Chinese public’s positive embrace of AI, providing strong grassroots support and favorable external conditions for educational innovation. At the same time, the advancement of the national education digitalization strategy provides top-level institutional support for educational innovation. To promote the emergence of generative AI educational innovation and its large-scale, in-depth application, China can strengthen reforms of management systems and organizational institutions based on the SPARK model, forming a networked community of interest to carry out effective collaborative innovation.
(2) Literacy Cultivation: Generative AI Literacy Model
Although generative AI tools such as ChatGPT and Sora have attracted widespread attention and demonstrated enormous potential in education, educators often view them as potential threats and hold mixed attitudes toward them. Mastering AI technology will become one of the most important job skills in the future, which requires both teachers and students to be proficient in using generative AI to support learning and work. At the same time, generative AI may trigger a Matthew effect among students: those who can use ChatGPT become stronger, while those who cannot may fall relatively behind. For instance, ChatGPT’s strong writing and communication capabilities require individuals to understand how to formulate prompts, critically evaluate the output, and integrate it into their own tasks; yet excessive reliance on AI at an early stage may erode basic skills. This raises several questions: at which stage and in what manner should students acquire these skills? How do we measure students’ proficiency in using generative AI? How do we determine their ability to engage in effective dialogue with AI and extract the most useful information? Moreover, the gap between the skills of graduates and the needs of enterprises prompts us to rethink educational methods, operational models, and structures, and to adjust evaluation methods to meet the new literacy requirements demanded by society. UNESCO pointed out in its guidance that attention should be paid to the occupational changes brought about by generative AI and that the educational system should be adjusted accordingly to meet the new demands for talent in the market and future society (UNESCO, 2023).
To prevent generative AI technology from harming students and to harness its support for personal development, writing, learning, and cognition, we must focus on cultivating students’ AI literacy. Mark Warschauer’s team proposed a generative AI literacy model comprising five levels: understanding, access, prompting, verification, and integration, as shown in Figure 4 (GRAILE AI, 2023b). (1) Understanding means knowing the functional characteristics, shortcomings, and biases of AI tools. (2) Access means teaching students how to use generative AI tools. (3) Prompting means teaching students how to use prompts to obtain usable content. (4) Verification means teaching students to judge the truth and sources of answers, helping them recognize the biases introduced by generative AI tools. (5) Integration means teaching students how to incorporate generated content into their tasks after checking it for accuracy and bias.

Figure 4 Generative AI Literacy Model
The generative AI literacy model is essentially a model of higher-order thinking built on generative AI. As in Bloom’s taxonomy of cognitive objectives, the higher the level, the greater the demands on learners and the more critical it is for future development. In this model, understanding and access are foundational; prompt design is a key capability, requiring learners to have strong human-machine collaborative thinking, human-machine interaction abilities, and creative thinking; and verification and integration involve cultivating learners’ critical thinking, problem-solving abilities, and creativity. The model is not only a generative AI literacy model but also represents a new way of thinking and knowing in the era of generative AI. It indicates that educational applications of generative AI must emphasize the cultivation of students’ higher-order thinking, enabling them to master generative AI through thinking skills, achieve cognitive development, and become learners who meet the new requirements of the digital intelligence era.
(3) Practical Guidelines: Generativism Digital Education Framework
A deep understanding of the role of generative AI in educational teaching and the specific organization of learning activities is the foundation of educational reform. The international generativism digital education framework serves as a systematic practical guideline for such educational reforms, as shown in Figure 5 (Pratschke, 2023). It reconstructs learners’ interactions in communities under the context of generative AI, categorizing the roles of generative AI in the classroom into three types: (1) collaborative AI based on social presence: learners collaborate with teachers and other learners, with generative AI acting as a collaborator, such as a virtual learning partner, role-player, learning motivator, and collaborative supporter; (2) analytical AI based on cognitive presence: intelligent agents provide specific viewpoints on topics, acting as learning partners, opponents, or coaches, with generative AI as an analyzer, supporting learners in situational creation, data interaction, critical thinking, and creative thinking; (3) facilitating AI based on teaching presence: intelligent technologies act as learning mentors, accompanying and supporting learners in courses, with generative AI as a facilitator, helping learners generate feedback, create content, navigate learning paths, and engage in Socratic questioning.

Figure 5 Generativism Digital Education Framework
In terms of organizing learning activities, this framework integrates the six learning activity types of the ABC learning design framework (Perovic, 2015) to form six generative AI-empowered learning activity designs (Pratschke, 2023), as shown in Table 1.
Table 1 Generative AI Empowered ABC Learning Activity Design
Algorithm-supported intelligent adaptive community learning is a new form of online learning in the AI era (Wang Zhijun, et al., 2023). Generative AI is particularly suited to collaborative learning, to building learning communities, and to acquiring knowledge and developing abilities through active learning. The framework does not emphasize using technology to enhance current teaching methods; rather, it applies technology to strengthen learners’ learning interactions and build more personalized learning experiences. The generativism digital education framework offers networked, collaborative, community-oriented, and student-centered approaches to organizing learning experiences in the digital age. It not only explains the roles and functions of generative AI from multiple dimensions grounded in learners’ learning experiences but also provides in-depth guidance for the design of learning activities, serving as an important theoretical framework for guiding innovation in generative AI-supported teaching and the development of related teaching and learning tools.
4. Typical Practices
(1) Generative AI-Supported Collaborative Learning across Networked and Physical Spaces
Collaborative learning across networked and physical spaces, as a new form of teaching organization, effectively integrates physical classrooms with online environments through intelligent technologies, allowing offline and online students to learn the same content synchronously in the same class. It emphasizes the intimacy and immediacy of interactions, promoting active learning. Research on this topic has been conducted at the University of Montreal in Canada, the Singapore University of Technology and Design, and the Hong Kong University of Science and Technology. Among them, the Hong Kong University of Science and Technology is exploring how to let students in Hong Kong and Guangzhou learn the same content in the same class at almost the same cost, based on intelligent technologies, while ensuring that remote and local students receive the same personalized learning experience. Since interaction is a key factor affecting the quality of online education, this form of collaborative learning allows students to interact across campuses through intelligent agents, virtual reality, and other technologies, using AI agents and related tools to engage with students and address specific issues. The Hong Kong University of Science and Technology has developed a classroom chatbot based on ChatGPT, and the University of Montreal has developed a tool called Mental Reality to enhance learning interaction and experience. In addition, this form of learning uses learning analytics and AI algorithms to analyze learning and interaction behaviors across different environments, bridging online and offline spaces to analyze the interaction between classroom and online students. AI technology assists students and teachers in managing, monitoring, and analyzing learning, pinpointing learning paths and difficulties, providing real-time feedback and personalized resources, and offering learning reports to teachers. Teachers can then shift their focus to overseeing the overall learning process, making full use of AI tools to serve teaching and providing students with more timely and effective feedback. Researchers are currently exploring the impact of AI on learning contexts and knowledge construction and considering how to better leverage the advantages of online learners.
(2) Large-Scale Application of Large Language Model-Driven Intelligent Tutoring Systems
Before the emergence of large language models, intelligent tutoring systems had not been widely applied in actual teaching, mainly because the accuracy and immediacy of the content they provided were inadequate for large-scale use. Large language models compensate for the deficiencies of intelligent tutoring systems in natural language understanding, content generation, and adaptability, providing support for personalized learning. Intelligent tutoring systems supported by generative AI, such as QuizBot, let students first explore the knowledge and then receive immediate feedback on errors rather than being told the correct answers directly, greatly enhancing their practicality (Ruan, et al., 2019). Professor Hu Xiangen of the University of Memphis has applied GPT-4 in intelligent tutoring systems and developed applications for teaching use. Such applications not only provide real-time feedback based on the specific content of learner interactions but also adapt to learners’ needs, playing different roles such as mentor or learning partner and stimulating learners through the Socratic method (Interconnected Intelligent Education Center, 2023). However, merely using an intelligent tutoring system to generate content may make classroom dialogue smoother without necessarily improving students’ learning; teaching modes aligned with intelligent tutoring systems must be explored to foster deeper classroom interactions. It is therefore necessary to combine design, theory, and practice to reconstruct classrooms and provide effective learning experiences: designs that genuinely promote learning, instructional designs for intelligent tutoring systems, theoretically sound uses that accord with the laws of teaching and learning, and the faithful implementation of those designs.
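To make the Socratic, role-playing style of tutoring described above concrete, the sketch below shows one way such a dialogue loop could be wired up. It is a minimal illustration rather than the University of Memphis system; it assumes the OpenAI Python SDK, and the model name and system prompt are illustrative.

```python
# A minimal sketch of a Socratic tutoring loop: the model is instructed never to
# give the final answer, only to diagnose the misconception and ask a guiding
# question. Assumes the OpenAI Python SDK; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

SOCRATIC_SYSTEM_PROMPT = (
    "You are a patient tutor. Never state the final answer directly. "
    "When the student is wrong, point to the specific misconception and "
    "ask one guiding question that helps them take the next step themselves."
)

def tutor_turn(history: list[dict], student_message: str) -> str:
    """Append the student's message, get the tutor's Socratic reply, keep the history."""
    history.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    dialogue: list[dict] = []
    print(tutor_turn(dialogue, "I think 3/4 + 1/2 = 4/6, is that right?"))
```

Keeping the full dialogue history in the message list is what allows the model to give feedback grounded in the specific content of earlier learner turns, as the paragraph above describes.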
Although large language models were not originally developed for teaching, they can help teachers achieve precise teaching assessment by constructing learner models. Intelligent tutoring systems can build cognitive models of learners from their dialogues with the system, better representing learning processes and outcomes. In the past, it was difficult to quantify learners’ higher-order thinking and competencies such as critical thinking and problem solving, but conversational analysis based on intelligent tutoring systems can help teachers establish learner competency models that reflect progress in higher-order thinking. Because it draws on authentic dialogue, this kind of conversational assessment provides a more realistic evaluation than ordinary examinations.
(3) Generative AI Enhanced Immersive Platforms and Peer Support Promoting Brain Health
Brain health concerns not only cognitive impairments and deficits but also making full use of the brain’s capacity for growth and development in specific environments. To tap this potential, the Brain Health Center at the University of Texas is using AI technology to build learning platforms better aligned with cognitive development. The platform aims to help learners gain personalized learning experiences, develop better habits of brain use, and receive ongoing peer support in immersive virtual learning environments. For the brain, learning is an act of repeated retrieval. Traditional learning methods often fail to stimulate active participation, whereas video games can promote positive behaviors and responses that align with human development. Using AI technology to improve traditional teaching mechanisms from the perspective of brain health can therefore help learners establish the intrinsic motivation needed for self-driven learning, stimulate personal agency and positive attitudes, and enhance innovative and creative abilities, making the learning process more enjoyable. This has become a new direction for innovative practice in AI education.
Currently, members of the Brain Health Center at the University of Texas are actively trying to integrate AI technology into the learning platform of the brain health program. In immersive learning environments, AI technology sets up evaluation mechanisms, generating learners’ brain health indices and presenting content that aligns with learners’ cognitive development characteristics to optimize their personalized learning experiences. Additionally, the learning platform employs natural language processing and rapid updates to engage and retain learners, ensuring they can learn more immersively and effectively within a gamified platform. Furthermore, the platform can push specific types of content for each learning goal to help learners develop healthy brain usage habits.
(4) Large Language Model-Supported Automatic Generation of Test Questions, Distractors, and Feedback
Large language models possess strong natural language and contextual understanding capabilities, and the emergence of pre-trained neural network models such as Transformers provides technical support for the automatic generation of test questions, distractors, and feedback. Automatic question generation can scale up the item development process, producing large numbers of high-quality test questions efficiently and cost-effectively. Traditional automatic question generation has been very limited in scope, focusing mainly on subjects that are relatively easy to model, such as mathematics; the multimodal capabilities of large language models support the generation of image-based questions, greatly broadening its applicability. For instance, they can help teachers automatically generate reading comprehension questions and automatically score students’ responses. At present, distractors for multiple-choice questions are usually designed by teachers on the basis of teaching experience, but not all of them are strongly relevant; because learners master knowledge to different degrees, the appropriate set of distractors may differ across learners, and the same distractor may not suit every student. The University of Massachusetts is therefore attempting to use AI technology to automatically generate a set of high-quality distractors for each multiple-choice question and to select personalized distractors according to students’ learning progress, achieving distractor and feedback generation that adapts to individual learning paces. The team trains the model with examples of distractors and feedback and uses exact-match, partial-match, and proportional-match metrics to evaluate how closely generated distractors agree with human-coded distractors. For feedback generation, they give the model the questions, distractors, and feedback, ask it to provide correct answers on the basis of the feedback, and then compare the resulting accuracy with the accuracy the model achieves without feedback, thereby quantifying the value of the feedback. In the future, they plan to continue exploring effective metrics for evaluating distractors and feedback and methods for automating their assessment so as to identify the precise error embedded in each distractor; to generate distractors one by one in a controllable manner and collect teachers’ preferences to refine the evaluation metrics; and to construct student models based on large language models to achieve precise, personalized teaching.
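The following is a minimal sketch of the two ideas in this paragraph: prompting a large language model with a few worked examples to generate distractors, and scoring the generated set against human-authored distractors with simple match-style metrics. It is not the University of Massachusetts pipeline; the OpenAI SDK call, the prompt, and the metric definitions are assumptions for illustration.

```python
# A minimal sketch of few-shot distractor generation plus match-based evaluation
# against human-coded distractors (illustrative, not the actual research pipeline).
# Assumes the OpenAI Python SDK; model name, prompt, and metrics are placeholders.
from openai import OpenAI

client = OpenAI()

FEW_SHOT = (
    "Question: 2/3 of 12 is?  Answer: 8\n"
    "Plausible distractors: 6 (halved instead), 18 (multiplied by 3/2), 4 (took 1/3)\n"
)

def generate_distractors(question: str, answer: str, n: int = 3) -> list[str]:
    """Ask the model for n plausible distractors, one per line."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content":
                   f"{FEW_SHOT}\nQuestion: {question}  Answer: {answer}\n"
                   f"Give {n} plausible distractors, one per line, no explanations."}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()][:n]

def exact_match_rate(generated: list[str], human: list[str]) -> float:
    """Share of human-coded distractors reproduced verbatim (case-insensitive)."""
    human_set = {h.lower() for h in human}
    return sum(g.lower() in human_set for g in generated) / max(len(human), 1)

def partial_match_rate(generated: list[str], human: list[str]) -> float:
    """Share of human-coded distractors that share token overlap with a generated one."""
    def overlap(a: str, b: str) -> bool:
        return bool(set(a.lower().split()) & set(b.lower().split()))
    return sum(any(overlap(g, h) for g in generated) for h in human) / max(len(human), 1)
```

The same structure extends naturally to the feedback-utility check described above: answer the item once with the feedback in the prompt, once without, and compare the two accuracies.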
In addition, current approaches to automatic feedback generation with large language models fall into three types: (1) log-based feedback generation, in which the model identifies errors or misunderstandings in students’ writing or assignments; (2) feedback generated from multimodal student work, such as audio, images, or written texts; (3) feedback calibrated to students’ proficiency or language level, supporting deeper learning. These three types can help teachers generate complex feedback in real time, including process-oriented and summative feedback and feedback on higher-order thinking. Such feedback helps teachers conduct discussions, pose questions, and guide classroom dialogue, and it helps learners engage in self-regulated learning through new feedback strategies. It can also help learners visualize problem analysis, concretize abstract issues, create new learning resources and cases, transfer and apply knowledge, navigate the learning process, strengthen self-driven learning, and develop metacognitive abilities.
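As an illustration of type (3) above, the sketch below calibrates feedback wording to a learner's proficiency level simply by adjusting the prompt. It assumes the OpenAI Python SDK; the level labels, prompt wording, and model name are illustrative assumptions, not a documented system.

```python
# A minimal sketch of proficiency-calibrated feedback generation (type 3 above).
# Assumes the OpenAI Python SDK; level labels, prompt, and model name are illustrative.
from openai import OpenAI

client = OpenAI()

LEVEL_STYLE = {
    "beginner": "short sentences, everyday vocabulary, one concrete hint",
    "intermediate": "point to the underlying concept and suggest a next step",
    "advanced": "probe reasoning with a counterexample and ask for justification",
}

def calibrated_feedback(student_answer: str, reference: str, level: str) -> str:
    """Generate formative feedback on an answer, phrased for the given proficiency level."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "system",
                   "content": f"Give formative feedback using {LEVEL_STYLE[level]}. "
                              "Do not reveal the reference answer verbatim."},
                  {"role": "user",
                   "content": f"Reference answer: {reference}\nStudent answer: {student_answer}"}],
    )
    return response.choices[0].message.content
```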
(5) Generative AI-Supported Intelligent Learning Path Planning and Personalized Learning Support
AI technology supports intelligent learning path planning and thereby personalized learning. (1) Intelligent path planning: formulating learning plans is an effective way to improve learning efficiency, and different learners will devise different plans to reach the same goal. Blueprint Education Technology Company in the United States is using AI technology to help learners create learning plans and to dynamically assess their learning activities on the basis of their performance and past data from similar learners, planning learning paths that assist self-regulated learning. Its design concept resembles Google Maps: it helps learners schedule study time according to their preferences, improves learning efficiency and output, updates learning progress in real time, reduces cognitive load by hiding irrelevant information, and dynamically plans the next zone of development based on learners’ existing mastery so that practice is well targeted. (2) Personalized learning support: Southern Methodist University uses item response theory for parameter estimation and item difficulty identification to determine learners’ abilities; combining item metadata and learning activity metadata, it uses machine learning to identify learners’ zones of proximal development and the exercises they most need, stimulating learning motivation. Its practice shows that generative AI can efficiently help students raise and answer questions through continuous dialogue, combine real-world topics of interest to students with mathematical problems, and continuously refine the questions posed through prompting. Its limitations include bias in generated character images and insufficient contextual plausibility of the mathematical problems. In summary, generative AI can provide significant assistance at the initial, from-scratch stage, but challenges remain in meeting deeper mathematical requirements.
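As a concrete illustration of the item response theory step mentioned here (not Southern Methodist University's actual implementation), the sketch below estimates a learner's ability under the standard two-parameter logistic (2PL) model by grid search and then flags items whose predicted success probability falls in a moderate band, a simple proxy for exercises near the zone of proximal development. The item parameters and the 0.5 to 0.7 band are assumptions.

```python
# A minimal 2PL IRT sketch: P(correct) = 1 / (1 + exp(-a * (theta - b))).
# Ability theta is estimated by maximum likelihood over a grid, then items whose
# predicted success probability sits in a moderate band are recommended.
# Item parameters and the 0.5-0.7 band are illustrative assumptions.
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response (discrimination a, difficulty b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses: list[tuple[int, float, float]]) -> float:
    """Grid-search MLE of theta from (correct?, a, b) triples."""
    best_theta, best_ll = 0.0, float("-inf")
    for step in range(-40, 41):          # theta grid from -4.0 to 4.0
        theta = step / 10.0
        ll = 0.0
        for correct, a, b in responses:
            p = p_correct(theta, a, b)
            ll += math.log(p if correct else 1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

def recommend_items(theta: float, item_bank: dict[str, tuple[float, float]]) -> list[str]:
    """Return items whose predicted success probability lies in a moderate band (0.5-0.7)."""
    return [item for item, (a, b) in item_bank.items() if 0.5 <= p_correct(theta, a, b) <= 0.7]

# Example: three answered items (correct?, a, b) and a small bank to choose from.
history = [(1, 1.2, -0.5), (1, 1.0, 0.0), (0, 1.5, 1.0)]
theta_hat = estimate_ability(history)
bank = {"fractions-7": (1.1, 0.2), "fractions-8": (1.3, 1.8), "ratios-2": (0.9, 0.5)}
print(theta_hat, recommend_items(theta_hat, bank))
```

In a real system the recommended items would then be handed to the generative model for contextualization around topics the student cares about, as described above.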
(6) Affinity Group Practices Advancing the Application of Generative AI in Higher Education
Dewey emphasized, “If today’s education and teachers do not live in the future, then future students will live in the past.” Higher education institutions should actively respond to the challenges of the generative AI era and adapt to change. The primary practical issue is how to organize and motivate higher education teachers to apply generative AI for educational innovation, and the affinity group practice at Southern Methodist University is worth referencing. To address the challenges of the times, Paige Ware, Deputy Director of the Academic Affairs Office, convened faculty and staff interested in generative AI to form an affinity group. The group holds ongoing seminars to explore the feasibility of implementing generative AI in the school, facilitates multi-level dialogue among faculty, department heads, and administrative assistants, and encourages all faculty and students to use generative AI to meet teaching needs, with the goal of making the school a demonstration institution for generative AI educational applications. To sustain teachers’ participation, the school set four themes when the affinity group was launched: (1) classroom teaching applications of generative AI; (2) research on the application and misuse of AI technologies; (3) discussion of the principles and technologies behind ChatGPT; (4) discussion of relevant policies and legal frameworks. The faculty affinity group keeps pace with the frontier, explores actively, and sends a weekly email titled “Generative AI on the Horizon” to all faculty and staff. To keep faculty engaged, Paige collects and organizes ideas, resources, links, activities, and seminars into the email, making it a medium through which faculty propose ideas and join discussions. By fostering a culture of curiosity, the group binds together units, departments, and disciplines across the school, avoiding a situation in which only a few tech-savvy individuals advance the work. The group also collaborates with the IT office and the library to hold a “Data Week” conference and data-related seminars, encouraging discussion of research data and data management issues. It forms research communities of students and teachers around topics of interest and integrates all discussions and tools into the school’s Canvas learning management system for easy access by teachers. For novice teachers, the group has developed an introductory toolkit with tools and technical documentation for using generative AI, allowing teachers to integrate the technology into their familiar daily work interfaces and alleviating anxiety about AI. The group also promotes digital syllabi across all courses through Canvas, so that teachers can easily use generative AI to modify content in any digital syllabus and choose among various options. To further standardize the application of generative AI, the group will establish a council responsible for collecting evidence and addressing violations.
5. Conference Consensus
Generative AI, a technology with far-reaching potential for the digitalization of education, not only enables innovation in teaching but also challenges the boundaries of education. It can empower teachers, learners, and non-professional educational developers, helping educators establish a learning analytics approach that moves from “clicks” to “construction” through demand analysis in the era of AI and big data (Knight, et al., 2017). Everyone should enhance their generative AI literacy to cope with the many challenges involved in innovating generative AI educational applications. The conference also reached the following points of consensus.
(1) Recognizing the Specificity of Educational Application Innovation
Education is characterized by specificity and complexity, and successful cases of AI educational application are shaped by many factors and therefore difficult to replicate. Developing AI educational applications is not easy: education involves numerous variables, and context matters greatly. Because the same set of experiences and models cannot simply be copied, the key to success lies in organically integrating generative AI into an educational institution’s own system. Institutions should incorporate these technologies into education and management in meaningful ways that fit their particular backgrounds and characteristics, actively embrace innovation, and continuously advance the innovation process. Designing and operating generative AI models usually involves high costs, and achieving profitability requires a very large user base; yet the education sector is segmented into many markets, making it unlikely that a single platform will attract billions of users. This creates a tension between high production costs and the size of the potential target user base. In this regard, Andrew Ng has proposed low-code and no-code tools, which reduce programming difficulty by integrating ready-made components, helping more users participate in the design and application of AI models.
At the same time, the training data for ChatGPT comes primarily from Western countries, which aligns its cultural assumptions and values with Western models. It is therefore necessary to use local training data to support local innovation in AI educational applications. In rural areas, AI is seen as a potential remedy for the shortage of teaching resources; however, teaching involves not only cognition and knowledge transmission but also the establishment of empathetic interpersonal relationships, so cultural and linguistic backgrounds must be taken seriously. To integrate generative AI effectively into specific linguistic environments, the shortage of language corpora in the digital space must be addressed, lest linguistic and cultural homogenization arise within learner groups or communities and undermine educational equity. Generative AI is rooted in specific cultural, historical, and political contexts, and applying it in different educational contexts may produce systemic injustices (Eubanks, 2018). When designing learning technologies, we must therefore ensure cultural visibility and consider the future direction of generative AI, the identity of its designers, its target audience, and how it will shape our cognition, relationships, and ways of being. Education is a complex, multi-dimensional context involving many roles; we must attend not only to students’ needs but also to teachers’ needs, enhancing teachers’ digital skills and AI literacy so that they can better understand these technologies and integrate them into teaching.
(2) Trust as a Prerequisite for Transitioning from Negative Rejection to Positive Application
The unstoppable wave of AI innovation calls for a positive attitude toward introducing AI technology into education. Change in higher education has been slow, and rapid action is needed to keep up with the iteration of generative AI tools; at present, the impact of AI technology on teaching methods remains limited. Trust is a major barrier to the application of AI in education: teachers fear their jobs will be replaced, and students are skeptical of AI outputs, which hinders classroom adoption. For instance, faculty at the Hong Kong University of Science and Technology appeared to lack creativity when first exploring collaborative learning across networked and physical spaces. Wu Nanzhong and others (2023) analyzed the pros and cons of applying generative AI in education. They point out that when attention falls on alienated use, risks such as uncontrolled educational quality, disordered operations, ethical imbalance, cognitive superficiality, and degraded innovation emerge, and generative AI is treated as a “technological specter” to be resisted openly or covertly. When attention instead falls on its strong contextualization, re-integration, highlighting of individual differences, and inherent critical spirit, its potential to reconstruct learners’ spaces, rebuild content, reshape abilities, readjust processes, and redesign evaluation comes into view, offering “solution tools” for breaking single knowledge sources, challenging standardized fields, opening up closed teaching, and overcoming the external performance evaluations that alienate education. Therefore, to ensure that generated content fits current learning contexts, we need to consider the role of generative AI in the learning process and its contribution to teaching objectives. Putting students first is the common goal of educators, who must promptly adjust their teaching methods to new technologies, teach students the principles behind these large language models, help students use the tools effectively, discourage uses of technology that harm learning, and cultivate students’ AI ethics and technological literacy.
(3) Uphold the Leading Role of Humans in Coping with Cognitive Stagnation and Dehumanization
The application of AI technology can give learners more autonomy and more personalized learning experiences. However, learners may lack critical thinking when using generative AI tools and rely on them to obtain answers directly. Such behavior harms the development of critical thinking, problem solving, creativity, and empathy, potentially leading to cognitive stagnation. Moreover, AI generally lacks human emotion and subjectivity and thus exhibits a degree of dehumanization. Whether the use of generative AI leads to cognitive stagnation and a less humanized learning experience depends on whether learners maintain the leading role of humans throughout their interactions with it. We cannot hand over important cognitive tasks entirely to AI agents: for example, while large language models excel at automatic classification and feedback, the role of humans in more complex writing and grading tasks remains irreplaceable, and AI should be integrated into a human-led teaching process with teaching and learning redesigned accordingly. Zhao Xiaowei and others (2024) describe the interaction between learners and generative AI as follows: learners form cognitive needs around a topic and develop problem awareness, posing questions to generative AI in search of responses; they then exercise discernment, making decisions about the data the model outputs; by updating question formats and adding content, they strategically select and extract specific data and construct an understanding of the object of inquiry; finally, they apply value awareness to grasp the new utility of that object and how it is recognized in the real world. Throughout this process, human awareness permeates the interaction, and each kind of awareness plays a crucial role in the steps that follow. The design, development, and deployment of AI for learning must therefore adhere to human-centered principles, ensuring that learners always play the leading role and that their cognitive development is promoted, so as to address the dehumanization challenges posed by AI.
(4) Pay Attention to Data Leakage and Security and Ethical Risks in Applications
The use of generative AI carries risks of sensitive data leakage and of security and ethics violations. Teachers who use ChatGPT to analyze student data may inadvertently upload sensitive information; filtering measures must therefore be built into products to keep student data separate from the training data of large language models and prevent leakage. Educational leaders and managers also face ethical and privacy issues when adopting AI: on the one hand, AI helps institutions gain deeper insight into students’ learning; on the other, mishandling student data can have serious consequences. Four limitations of using teaching data bear on security and ethical risk: insufficient predictability, hallucination, difficulty distinguishing truth from falsehood, and confusion of symbolic mechanisms. Moreover, the copyright and ownership questions raised by generative AI remain unresolved, posing challenges for the generation and control of educational content. We should therefore treat AI educational ethics as an important competence, empowering responsible choices at both individual and collective levels, developing individuals’ capabilities in AI educational ethics, and establishing community norms and management systems that promote the sharing of educational data.
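One concrete form such a filtering measure could take is redacting identifiers before any student text is sent to an external model. The sketch below is an assumption-laden illustration: the regex patterns and the placeholder send_to_llm callable are hypothetical, and a real deployment would need institution-specific review and far stronger de-identification.

```python
# A minimal sketch of redacting obvious student identifiers before text leaves
# the institution for an external LLM API. Patterns and the placeholder
# send_to_llm() are illustrative only; this is not a complete safeguard.
import re

REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),          # email addresses
    (re.compile(r"\b\d{8,12}\b"), "[STUDENT_ID]"),                # long numeric IDs
    (re.compile(r"\b\d{3}[- ]?\d{4}[- ]?\d{4}\b"), "[PHONE]"),    # phone-like numbers
]

def redact(text: str) -> str:
    """Replace identifier-like substrings with neutral placeholders."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def analyze_student_work(raw_text: str, send_to_llm) -> str:
    """Redact first, then hand the cleaned text to whatever LLM client is in use."""
    return send_to_llm(redact(raw_text))
```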
(5) Bridge the New Digital Divide and Structural Inequalities
The development of AI technology has opened new frontiers in the “great power game”, playing a decisive role in the reconstruction of international power structures (Yu Nanping, et al., 2023). The popularization and application of the new generation of AI technology will create high-tech barriers, exacerbating global educational inequities and further amplifying the development imbalances between countries and regions. In underdeveloped areas, the application of AI in education faces limitations in physical conditions and technological levels, intensifying global educational inequalities. Although AI technology has been introduced in primary and secondary education, schools generally lack adequate equipment; teachers’ technological reserves and training are insufficient; and there is inconsistency between curricula and software modules, resulting in poor effectiveness of AI educational applications. Moreover, merely providing basic equipment and technological access does not solve the problem; educational technology experts need to rethink how to design education using AI technology, genuinely understanding the backgrounds of learners and teachers, analyzing the application value of AI technology in educational contexts, and determining how to intervene in technology use so that underdeveloped areas can also enjoy the benefits of technological development. AI technology does not effectively address structural inequalities in education; long-term collaborative design partnerships are an effective way to eliminate such inequalities and promote collective liberation (Noble, 2018). We need to consider how to establish long-term mutually beneficial partnerships with people from different cultural backgrounds, institutional norms, and professional knowledge. Additionally, government departments need to fully consider providing technological support to teachers, allowing them to focus on the design of AI educational applications. Educators should systematically promote the innovation of AI educational applications from four aspects: infrastructure construction, the use of AI teaching agents, concerns regarding AI technology, and digital training.
(6) Establish New Standards for AI Development and Application
The high cost of using generative AI technologies prevents many frontline teachers and related personnel from developing teaching applications with them. Unresolved questions, such as whether learner data can be shared across different educational frameworks and groups and whether it can be used to train local large language models, also hinder the promotion of generative AI in education. Formulating standards for AI technologies and applications is an important strategy for ensuring that AI and large language models operate rationally and efficiently. At the conference, Richard Tong, chief architect at Carnegie Learning, noted that the IEEE AI Standards Committee he leads is developing more than 40 standards at the policy and technical levels. At the policy level, these mainly cover the safety and privacy of AI and human data, the ethical relationship between education and AI applications, and methods of accountability in AI governance to prevent potential harm, such as data governance and trustworthy computing. At the technical level, they include accelerating unified application standards for AI technology, ensuring the accessibility of large language models, and providing a fair competitive environment for small development agencies and financially constrained students.
[References]
Chen Kaiquan, Hu Xiaosong, Han Xiaoli, et al., 2023. Mechanisms, Scenarios, Challenges, and Countermeasures of Conversational General AI Educational Applications [J]. Journal of Distance Education (3): 21-41.
Interconnected Intelligent Education Center, 2023. Academic Seminar|Hu Xiangen: Becoming Friends with LLMs—Leveraging Their Help in Teaching and Learning [EB/OL]. [2023-11-04]. https://ccie-cef.bnu.edu.cn/news/5278a0d87d7847b0bf5787d87c126019.htm.
Jiao Jianli, 2023. ChatGPT: A Friend or Foe in School Education? [J]. Modern Educational Technology (4): 5-15.
Lu Yu, Yu Jinglei, Chen Penghe, et al., 2023. Educational Applications and Prospects of Generative AI—Taking the ChatGPT System as an Example [J]. China Distance Education (4): 24-31+51.
Miao Fengchun, 2024. Basic Controversies and Countermeasures Regarding Generative AI and Its Educational Applications [J]. Open Education Research (1): 4-15.
Wang Zhijun, Wu Zhijian, 2023. New Forms of Online Learning in the AI Era—Algorithm-Supported Intelligent Adaptive Community Learning [J]. Journal of Distance Education (5): 49-55.
Wang Zhuo, Ma Yangzhen, Yang Xianmin, et al., 2023. The Impact of ChatGPT-like Reading Platforms on Graduate Students’ Academic Reading Abilities [J]. Open Education Research (6): 60-68.
Wu Di, Li Huan, Chen Xu, 2023. Analyzing the Impact of General Large Models of Artificial Intelligence in Educational Applications [J]. Open Education Research (2): 19-25+45.
Wu Nanzhong, Chen Xianzhang, Feng Yong, 2023. From “Disorder” to “Order”: The Shift and Generative Mechanism of Generative AI Educational Applications [J]. Journal of Distance Education (6): 42-51.
Yu Nanping, Zhang Yiran, 2023. The Impact of ChatGPT/Generative AI on Education: New Frontiers in Major Power Games [J]. Journal of East China Normal University (Education Science Edition) (7): 15-25.
Zhao Xiaowei, Dai Ling, Shen Shusheng, et al., 2024. Designing Educational Prompts to Promote High-Awareness Learning [J]. Open Education Research (1): 44-54.
Zheng Yanlin, Ren Weiwu, 2023. Path Selection for ChatGPT Teaching Applications from the Perspective of Practice [J]. Modern Distance Education (2): 3-10.
Chinese Academy of Sciences, 2023. Release of the “2022 Global AI Innovation Index Report” [EB/OL]. [2023-07-18]. https://www.istic.ac.cn/html/1/284/338/1506840089869938181.html.
Zhu Zhiting, Dai Ling, Hu Jiao, 2023. High-Awareness Generative Learning: Innovation of Learning Paradigms Empowered by AIGC Technology [J]. Research on Educational Technology (6): 5-14.
ANGERT T, SUZARA M, HAN J, et al., 2023. Spellburst: A Node-Based Interface for Exploratory Creative Coding with Natural Language Prompts [C]//Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. New York: Association for Computing Machinery: 1-22.
DAWSON S, POQUET O, COLVIN C, et al., 2018. Rethinking Learning Analytics Adoption through Complexity Leadership Theory [C]//The 8th International Conference on Learning Analytics and Knowledge. New York: Association for Computing Machinery: 236-244.
EUBANKS V, 2018. Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor [M]. New York: St. Martin’s Press.
GOODMAN S M, BUEHLER E, CLARY P, et al., 2022. LaMPost: Design and Evaluation of an AI-Assisted Email Writing Prototype for Adults with Dyslexia [C]//Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. New York: Association for Computing Machinery: 1-18.
GRAILE, 2023. Empowering Learners in AI 2023 [EB/OL]. [2023-10-23]. https://whova.com/portal/webapp/empow3_202310/Agenda/3431283.
GRAILE AI, 2023a. Conference Opening and Keynote: Embracing a Culture of Innovation in the Era of AI [EB/OL]. [2023-10-23]. https://www.youtube.com/watch?v=j3jJOridTfY&t=341s.
GRAILE AI, 2023b. Day 2 Intro & Keynote: From Print to Digital to AI: Preparing Our Students for the New Literacy Era [EB/OL]. [2023-10-23]. https://www.youtube.com/watch?v=C-CQqcsZvgA.
KHAN ACADEMY, 2023. NEW! Khan Academy’s AI Tutor, Khanmigo-In Depth Demo [EB/OL]. [2023-03-15]. https://www.youtube.com/watch?v=rnIgnS8Susg.
KNIGHT S, SHUM S B, 2017. Theory and Learning Analytics [J]. Handbook of Learning Analytics, 1: 17-22.
HONG S, ZHENG X, CHEN J, et al., 2023. MetaGPT: Meta Programming for a Multi-Agent Collaborative Framework [J]. arXiv preprint arXiv:2308.00352.
KIM J, SUH S, CHILTON L B, et al., 2023. Metaphorian: Leveraging Large Language Models to Support Extended Metaphor Creation for Science Writing [C]//Proceedings of the 2023 ACM Designing Interactive Systems Conference. New York: Association for Computing Machinery: 115-135.
PEROVIC N, 2015. ABC Curriculum Design 2015 Summary [EB/OL]. [2015-12-02]. https://blogs.ucl.ac.uk/digitaleducation/2015/12/02/abc-curriculum-design-2015-summary/.
NOBLE S U, 2018. Algorithms of Oppression [M]. New York: NYU Press.
PRATSCHKE B M, 2023. Generativism: The New Hybrid [EB/OL]. [2023-09-21]. https://arxiv.org/abs/2309.12468.
RUAN S, JIANG L, XU J, et al., 2019. Quizbot: A Dialogue-Based Adaptive Learning System for Factual Knowledge [C]//Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery: 1-13.
STANFORD ONLINE, 2023. Andrew Ng: Opportunities in AI [EB/OL]. [2023-08-23]. https://www.youtube.com/watch?v=5p248yoa3oE.
STANFORD UNIVERSITY, 2023. The AI Index Report: Measuring Trends in Artificial Intelligence [EB/OL]. [2023-04-23]. https://aiindex.stanford.edu/report.
UNESCO, 2023. Guidance for Generative AI in Education and Research [EB/OL]. [2023-10-01]. https://unesdoc.unesco.org/ark:/48223/pf0000386693.
WORLD ECONOMIC FORUM, 2023. A Conversation with Satya Nadella, CEO of Microsoft [DB/OL]. [2023-01-09]. https://www.youtube.com/watch?v=TSLcA66QgMY.