10. Emerging governance of generative AI in education

Quentin Vidal
Stéphan Vincent-Lancrin
Hyunkyeong Yun

As digital technology and the use of smart data transform countries’ education systems, the integration of artificial intelligence (AI) tools in education has emerged as a pivotal focal point in reshaping instructional practices (OECD, 2021[1]). The emergence of generative AI has made the power of artificial intelligence visible to all and led to unprecedented debates about AI in the classroom. Generative AI is a subset of artificial intelligence that spans diverse capabilities, from the generation of text to that of images, music, and video. It autonomously generates new content from prompts, potentially challenging conventional teaching and assessment practices, notably homework, educational assignments, and exams (Pons, 2023[2]). This transformative technology has the capacity to democratise autonomous learning experiences, but it also challenges traditional skill acquisition. As such, the initial debates about AI in education were framed in terms of “cheating” and students not doing their assignments themselves, with the attendant risk of a loss of learning. AI capabilities are improving at a faster pace than ever before. When integrated with other technological advancements, generative AI tools could make chatbots a greater part of the learning experience and could possibly redefine teaching and learning.

The rapid evolution of generative AI, apparent to the broader public in the successive versions of ChatGPT since December 2022, or with the sudden visibility of tools such as Lensa-AI or Dall-E, suggests an ongoing improvement in AI capabilities compared to human capacities (OECD, 2023[3]). Education systems are faced with the new challenge to harness generative AI’s potential while navigating challenges such as algorithmic bias, cheating, plagiarism, skills attrition, and concerns related to privacy, data security, intellectual property infringements, and sometimes even sustainability. Although this is an emerging domain for which there is little experience, policy makers are starting to consider guiding and sometimes even regulating artificial intelligence.

This chapter delves into the guidance and regulatory approaches adopted by 18 OECD countries and jurisdictions in governing, encouraging, or restricting the use of generative AI tools in education. As of 2024, national and central governments have mainly published non-binding guidance.

In the absence of central regulation, decisions made at the school level by teachers and school leaders significantly influence whether and how generative AI is integrated into the schooling context. Outside of schools, though, generative AI is just another digital service, and in practice anyone with an Internet connection can access and use such tools (albeit with some limitations without a paid subscription).

Countries and jurisdictions may seek to play a role in the uptake of this new technology in education, exploring the balance between improving learning outcomes for all, fostering technological advancements, and safeguarding ethical, privacy, and equity considerations. The potential of generative AI in education has prompted a critical examination, but its implications are still in the first stages of exploration. Ensuring its use is aligned with educational objectives while mitigating risks associated with privacy, security and algorithmic biases appears as a challenge for governments. Moreover, understanding the multifaceted nature of generative AI, which extends beyond conventional text generation to a spectrum of creative outputs, is essential.

This chapter is organised as follows. First, it provides an overview of 18 OECD countries and jurisdictions’ regulation and guidance on generative AI in education. Then, it examines how, regardless of existing or forthcoming guidance, schools, teachers, and students use generative AI in practice, in various educational contexts. The chapter then reports on countries and jurisdictions’ policy priorities in the governance of generative AI in education. It concludes with a discussion on the benefits that countries and jurisdictions could reap from supporting an effective use of generative AI in education, outlining a set of policy recommendations in that direction.

As of early 2024, none of the 18 countries and jurisdictions for which we have comparative information has issued a specific regulation on the use of generative AI in education (see Figure 10.1 and Table 10.1). Two countries, France and Korea, have proposed a regulation that awaits approval before implementation – noting that Korea’s will be part of a broader “Artificial Intelligence Education Promotion Act”, inclusive of but not limited to generative AI. Instead of, or while awaiting, regulation, nine countries and jurisdictions (half of the respondents) have published non-binding guidance on the use of generative AI in education.

For instance, Japan issued Temporary Guidelines for Use of Generative AI in Primary and Secondary Education in 2023. The document contains guidance for schools and teachers on the general approach to take to make an appropriate use of generative AI in education and points to possible topics to be aware of when using it, such as protecting personal information, privacy, and copyright (see Box 10.1). Seven countries are drafting new or updated guidance on this topic.

In nine countries and jurisdictions (half of respondents), schools are responsible for setting their own rules and providing guidance on the use of generative AI by their students, as long as they comply with broader national/central rules on data protection (see OECD, 2023[4]). This is for instance the case in Nordic countries (Finland, Iceland and Sweden), as well as in Slovakia, where the central government reports that local stakeholders are invited to integrate generative AI as they see fit in their schooling context.

Letting schools or lower levels of government enact their own rules on AI in education does not prevent central governments from providing guidance, though. In Czechia, the National Pedagogical Institute has released recommendations for school principals, teachers, students, and parents on how to use generative AI safely and effectively, based on instructions from the Ministry of Education, Youth and Sports. In addition, schools have their own sets of rules as agreed with their school boards, comprised of parent representatives, student representatives, teachers, and school leaders. In the near future, the Czech ministry plans to share guidance on AI that will include the design of a model curriculum integrating AI into project and research activities. Similarly, in the Flemish Community of Belgium, schools choose in full autonomy the learning tools they use. The community’s Knowledge Centre nonetheless provides specific guidelines for schools on using these types of tools responsibly, as outlined in the Digisprong action plan.

Regardless of regulation and guidance, all responding countries and jurisdictions noted that, in practice, generative AI is already used in schools (Table 10.2). Whether because of the specific guidance published on the topic, because of broader regulatory frameworks on the use of data and digital tools and resources in education, or because of devolved responsibilities in the governance of these new areas (down to teachers themselves), generative AI practices in the field may vary from one classroom to another and from one context to another.

In some cases, countries’ national/central guidance on generative AI aims to steer the use of such tools in schools. In four of the 18 countries and jurisdictions for which we have comparative information, guidance recommends that only approved generative AI tools be used; in seven countries and jurisdictions, it specifies that only students above a certain age should use them. In fact, the latter restriction often corresponds to the tool’s terms of use. For instance, OpenAI indicates that children under the age of 13 should not use ChatGPT, and that children under the age of 18 need the approval of their parents or guardians. As such, in their guidance, Japan and the Flemish Community of Belgium simply ask schools to abide by each tool’s terms of service.

As of 2024, the central/national guidance of five countries and jurisdictions explicitly encourages students to use generative AI as part of their schooling activities: Austria, Czechia, France, Korea, and the Flemish Community of Belgium. These five, along with Japan and Latvia, also explicitly encourage teachers to use it. Although use is encouraged by the central government, Korean provinces will deliver their own guidelines on the use of generative AI in education.

Conversely, several countries and jurisdictions – sometimes the same as the ones encouraging the use – also wish to limit the use of generative AI in schools, to the extent possible. In practice, though, there are only a few situations in which the use of generative AI is effectively prohibited by countries. Exams are one example, as illustrated by guidance in England (United Kingdom), Latvia and Luxembourg. Students taking high-stakes exams generally do not have access to the Internet anyway. Only one country, Sweden, further forbids the use of generative AI for homework. Enforcing this will require schools to be equipped with appropriate detection software, which is still rarely available.

Japan’s 2023 Temporary Guidelines for Use of Generative AI in Primary and Secondary Education provide schools and teachers with broad guidelines on the use of generative AI for written homework assignments. Teachers must advise students to refrain from using generative AI for graded tasks, reminding them about the concerns related to submitting AI-generated work instead of one’s own original work. Moreover, teachers are encouraged to check students’ work, verifying for instance whether it is based on the student’s personal experience or whether they understood the content learned. For non-graded assignments, Japan’s guidelines specify that generative AI may be used if students verify their sources and acknowledge the use of a generative AI tool: the name of the generative AI tool, as well as the prompts used, should be referenced.

In more than two thirds of the countries and jurisdictions (12 out of 17), teachers are encouraged, by guidance or in practice, to use generative AI in their classrooms. Specifically, seven countries provide training opportunities for teachers on the topic as part of their national/central guidance. The remaining five countries report that training is available in practice, possibly at lower levels of government. In Latvia, the ministry has developed a set of use cases in which generative AI can be used to assist teachers’ work. Teachers can watch these examples on YouTube, via the government’s channel. Similarly, the Swedish National Agency for Education offers open webinars on the use of generative AI for teachers.

Beyond questions about access and use of generative AI in education, countries and jurisdictions were asked about the key issues that were addressed in their education policy discussions. Seventeen of them expressed their policy priorities on the topic (Figure 10.2).

First, all responding countries and jurisdictions highlighted data protection and privacy issues as one of their top three priorities (13) or as an important policy concern (4) regarding the use of generative AI in education (see Table 10.3). This is by far the policy area governments prioritise the most.

Second, countries and jurisdictions also prioritise the technical accuracy and reliability of generative AI, the transparency and explainability of algorithms, addressing bias to ensure fairness, and the cultural and linguistic relevance of its outputs. With the exception of New Zealand, only non-English-speaking countries prioritised the latter area.

A third body of policy priorities for countries and jurisdictions concerns possible skill attrition among students where generative AI is used. Whether as a top or secondary priority, nine countries have included this in their ongoing policy debates. In Sweden for instance, officials reported that identifying new skills and competences that students would need in the next 10 years was a crucial point in their ongoing discussions on AI in education.

Finally, concerns over the protection of intellectual property were also raised as a priority policy area, albeit to a lesser extent than the aspects mentioned above (seven countries and jurisdictions). They rank just above concerns over the cost of generative AI tools, raised only in Hungary, and above other concerns such as equitable access to and use of those digital tools in schools and at home, which threatens to amplify existing inequalities among students, as expressed in Czechia.

As of 2024, countries and jurisdictions do not formally regulate the use of generative AI in education. Instead, some of them have issued non-binding guidance specific to the use of those tools in teaching, learning and assessment practices. They also sometimes leave it to lower levels of government, schools, and teachers themselves to decide whether and how to integrate generative AI into the schooling context, provided that the use complies with broader regulation on digital technology in education.

Moving forward, countries and jurisdictions may leverage different approaches to work on their guidance regarding generative AI in education. They may develop new guidance or update previous documents based on the lessons of actual uses within their countries, keeping an open mind as regards the multiple and diverse benefits that generative AI tools of all sorts (e.g., text, image, music, video generation) may bring to transform and improve education.

Provide guidance and keep regulatory frameworks adaptive. Countries should develop and disseminate clear guidelines for the use of generative AI in education. These guidelines should highlight and showcase the potential of generative AI to improve teaching and learning practices, while addressing issues such as algorithmic bias, privacy, and data security. If regulatory frameworks are adopted, they should be adaptive and forward-looking, capable of accommodating the evolving landscape of generative AI. Instead of rigid restrictions, countries should adopt frameworks that provide guidance and oversight, allowing for innovation while safeguarding against potential risks and ensuring accountability. The “opportunities, guidelines and guardrails for effective and equitable use of AI in education” presented in Chapter 16 provide countries with some guiding principles that apply to generative AI as well.

Promote dedicated teacher training programmes and cultivate stakeholders’ digital literacy. Governments should encourage the integration of generative AI examples in teacher training programmes to enhance teachers’ digital literacy with generative AI tools. They could also propose dedicated programmes covering not only the technical aspects but also the pedagogical and ethical considerations associated with the integration of AI in the educational environment. They could for example show how generative AI could be used to strengthen students’ creativity or to develop their critical thinking. They should also highlight practices that should be discouraged, for example the use of generative AI to grade or provide feedback on students’ work.

Encourage research and collaboration and monitor impact. Countries should encourage research on the uses of generative AI in the teaching and learning process. Establishing partnerships between education authorities, AI developers, and researchers could contribute to a better understanding of the benefits and challenges of the technology.

Facilitate the sharing of best practices and foster international collaboration. Nationally, governments could facilitate platforms for the exchange of information and best practices on the use of generative AI among educational institutions. This could also be done internationally.


[5] Department for Education (2023), Generative artificial intelligence (AI) in education, https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education.

[3] OECD (2023), Is Education Losing the Race with Technology?: AI’s Progress in Maths and Reading, Educational Research and Innovation, OECD Publishing, Paris, https://doi.org/10.1787/73105f99-en.

[4] OECD (2023), OECD Digital Education Outlook 2023: Towards an Effective Digital Education Ecosystem, OECD Publishing, Paris, https://doi.org/10.1787/c74f03de-en.

[1] OECD (2021), OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots, OECD Publishing, Paris, https://doi.org/10.1787/589b283f-en.

[6] Oregon Department of Education (2023), Generative Artificial Intelligence (AI) in K-12 Classrooms, https://www.oregon.gov/ode/educator-resources/teachingcontent/Documents/ODE_Generative_Artificial_Intelligence_(AI)_in_K-12_Classrooms_2023.pdf.

[2] Pons, A. (2023), Generative AI in the classroom: From hype to reality?, https://one.oecd.org/document/EDU/EDPC(2023)11/en/pdf.


© OECD 2023
