5. AI policies and initiatives

Artificial intelligence (AI) policies and initiatives are gaining momentum among governments, companies, technical organisations, civil society and trade unions. Intergovernmental initiatives on AI are also emerging. This chapter collects AI policies, initiatives and strategies from different stakeholders at both national and international levels around the world. It finds that, in general, national government initiatives focus on using AI to improve productivity and competitiveness, with action plans to strengthen: i) factor conditions such as AI research capability; ii) demand conditions; iii) related and supporting industries; iv) firm strategy, structure and competition; as well as v) domestic governance and co-ordination. International initiatives include the OECD Recommendation of the Council on Artificial Intelligence, which represents the first intergovernmental policy guidelines for AI and identifies principles and policy priorities for the responsible stewardship of trustworthy AI.


Artificial intelligence for economic competitiveness: Strategies and action plans

Artificial intelligence (AI) is an increasing priority on the policy agendas of governmental institutions, at both national and international levels. Many national government initiatives to date focus on using AI for productivity and competitiveness. The priorities outlined in national AI action plans can be categorised into five main themes, some of which match Porter’s economic competitiveness framework. These priorities are: i) factor conditions such as AI research capability, including skills; ii) demand conditions; iii) related and supporting industries; iv) firm strategy, structure and competition; and v) attention to domestic governance and co-ordination (Box 5.1). In addition, policy consideration of AI issues such as transparency, human rights and ethics is growing.

Among OECD countries and partner economies, Canada, the People’s Republic of China (hereafter “China”), France, Germany, India, Sweden, the United Kingdom and the United States have targeted AI strategies. Some countries like Denmark, Japan and Korea include AI-related actions within broader plans. Many other countries – including Australia, Estonia, Finland, Israel, Italy and Spain – are developing a strategy. All strategies aim to increase the number of AI researchers and skilled graduates; to strengthen national AI research capacity; and to translate AI research into public- and private-sector applications. In considering the economic, social, ethical, policy and legal implications of AI advances, the national initiatives reflect differences in national cultures, legal systems, country size and level of AI adoption, although policy implementation is at an early stage. This chapter also examines recent developments in regulations and policies related to AI; however, it does not analyse or assess the realisation of the aims and goals of national initiatives, or the success of different approaches.

AI is also being discussed at international venues such as the Group of Seven (G7), Group of Twenty (G20), OECD, European Union and the United Nations. The European Commission emphasises AI-driven efficiency and flexibility, interaction and co-operation, productivity, competitiveness and growth, and the quality of citizens’ lives. Following the G7 ICT Ministers’ Meeting in Japan in April 2016, the G7 ICT and Industry Ministers’ Meeting in Turin, Italy in September 2017 shared a vision of “human-centred” AI. Ministers agreed to encourage international co-operation and multi-stakeholder dialogue on AI and to advance understanding of AI, supported by the OECD. The G20 also continues to pay attention to AI, particularly with a focus on AI proposed by Japan under its upcoming G20 presidency in 2019 (G20, 2018[1]).

Principles for AI in society

Several stakeholder groups are actively engaged in discussions on how to steer AI development and deployment to serve all of society. For example, the Institute of Electrical and Electronics Engineers (IEEE) launched its Global Initiative on Ethics of Autonomous and Intelligent Systems in April 2016. It published version 2 of its Ethically Aligned Design principles in December 2017; the final version was planned for early 2019. The Partnership on Artificial Intelligence to Benefit People and Society, launched in September 2016 with a set of tenets, has begun work to develop principles for specific issues such as safety. The Asilomar AI Principles are a set of principles on research, ethics and values for the safe and socially beneficial development of AI in the near and longer term. The AI Initiative brings together experts, practitioners and citizens globally to build common understanding of concepts such as AI explainability.

Table 5.1. Selection of sets of guidelines for AI developed by stakeholders (non-exhaustive)




ACM (2017), “2018 ACM Code of Ethics and Professional Conduct: Draft 3”, Association for Computing Machinery Committee on Professional Ethics, https://ethics.acm.org/2018-code-draft-3/

USACM (2017), “Statement on Algorithmic Transparency and Accountability”, Association for Computing Machinery US Public Policy Council, www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf


Amodei, D. et al. (2016), “Concrete Problems in AI Safety”, 25 July, https://arxiv.org/pdf/1606.06565.pdf


FLI (2017), “Asilomar AI Principles”, Future of Life Institute, https://futureoflife.org/ai-principles/


COMEST (2017), “Report of COMEST on Robotics Ethics”, World Commission on the Ethics of Scientific Knowledge and Technology, http://unesdoc.unesco.org/images/0025/002539/253952E.pdf


Economou, N. (2017), “A ‘principled’ artificial intelligence could improve justice”, 3 October, ABA Journal, www.abajournal.com/legalrebels/article/a_principled_artificial_intelligence_could_improve_justice


EGE (2018), “Statement on Artificial Intelligence, Robotics and Autonomous Systems”, European Group on Ethics in Science and New Technologies, http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf


EPSRC (2010), “Principles of Robotics”, Engineering and Physical Sciences Research Council, https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/


FATML (2016), “Principles for Accountable Algorithms and a Social Impact Statement for Algorithms”, Fairness, Accountability, and Transparency in Machine Learning, www.fatml.org/resources/principles-for-accountable-algorithms


FPF (2018), “Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models”, The Future of Privacy Forum, https://fpf.org/wp-content/uploads/2018/06/Beyond-Explainability.pdf


Google (2018), “AI at Google: Our Principles”, https://www.blog.google/technology/ai/ai-principles/


IEEE (2017), Global Initiative on Ethics of Autonomous and Intelligent Systems, “Ethically Aligned Design Version 2”, Institute of Electrical and Electronics Engineers, http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf


Intel (2017), “AI - The Public Policy Opportunity”, https://blogs.intel.com/policy/files/2017/10/Intel-Artificial-Intelligence-Public-Policy-White-Paper-2017.pdf


ITI (2017), “AI Policy Principles”, Information Technology Industry Council, www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf


JSAI (2017), “The Japanese Society for Artificial Intelligence Ethical Guidelines”, The Japanese Society for Artificial Intelligence, http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf


MIC (2017), “Draft AI R&D Guidelines for International Discussions”, Japanese Ministry of Internal Affairs and Communication, www.soumu.go.jp/main_content/000507517.pdf


MIC (2018), “Draft AI Utilization Principles”, Japanese Ministry of Internal Affairs and Communication, www.soumu.go.jp/main_content/000581310.pdf


UoM (2017), “The Montreal Declaration for a Responsible Development of Artificial Intelligence”, University of Montreal, www.montrealdeclaration-responsibleai.com/


Nadella, S. (2016), “The Partnership of the Future”, 28 June, Slate, www.slate.com/articles/technology/future_tense/2016/06/microsoft_ceo_satya_nadella_humans_and_a_i_can_work_together_to_solve_society.html


PAI (2016), “TENETS”, Partnership on AI, www.partnershiponai.org/tenets/


Polonski, V. (2018), “The Hard Problem of AI Ethics - Three Guidelines for Building Morality Into Machines”, 28 February, Forum Network on Digitalisation and Trust, www.oecd-forum.org/users/80891-dr-vyacheslav-polonski/posts/30743-the-hard-problem-of-ai-ethics-three-guidelines-for-building-morality-into-machines


Taddeo, M. and L. Floridi (2018), “How AI can be a force for good”, Science, 24 August, Vol. 361/6404, pp. 751-752, http://science.sciencemag.org/content/361/6404/751


UGAI (2018), “Universal Guidelines on Artificial Intelligence”, The Public Voice Coalition, https://thepublicvoice.org/ai-universal-guidelines/


Next Generation Artificial Intelligence Research Center (2017), “The Tokyo Statement – Co-operation for Beneficial AI”, www.ai.u-tokyo.ac.jp/tokyo-statement.html


Twomey, P. (2018), “Toward a G20 Framework for Artificial Intelligence in the Workplace”, CIGI Papers, No. 178, Centre for International Governance Innovation, www.cigionline.org/sites/default/files/documents/Paper%20No.178.pdf


UNI Global Union (2017), “Top 10 Principles for Ethical Artificial Intelligence”, www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf

Several initiatives have developed valuable sets of principles to guide AI development (Table 5.1), many of which focus on the technical communities that conduct research and development (R&D) of AI systems. Many of these principles were developed in multi-stakeholder processes. However, they can be categorised broadly into five communities: technical, private sector, government, academic and labour. The technical community includes the Future of Life Institute; the IEEE; the Japanese Society for Artificial Intelligence; Fairness, Accountability and Transparency in Machine Learning; and the Association for Computing Machinery. Examples of a private-sector focus include the Partnership on AI; the Information Technology Industry Council; and Satya Nadella, chief executive officer of Microsoft. The government focus includes the Japanese Ministry of Internal Affairs and Communications; the World Commission on the Ethics of Scientific Knowledge and Technology; and the Engineering and Physical Sciences Research Council. Examples of the academic focus include the Université de Montréal and Nicolas Economou, the chief executive officer of H5 and special advisor on the AI Initiative of the Future Society at Harvard Kennedy School. UNI Global Union is an example of the labour community.

Common themes emerge from these initiatives. The guidelines developed by different stakeholders address human values and rights; non-discrimination; awareness and control; access to data; privacy and control; safety and security; skills; transparency and explainability; accountability and responsibility; whole-of-society dialogue; and measurement.

In May 2018, the OECD’s Committee on Digital Economy Policy established the AI Group of Experts at the OECD (AIGO), with the aim of scoping principles for public policy and international co-operation that would foster trust in and adoption of AI (OECD, 2019[2]). This work informed the development of the OECD Recommendation of the Council on Artificial Intelligence (OECD, 2019[3]), to which 42 national governments adhered on 22 May 2019.

National initiatives

Overview of AI national policy responses

Many countries have announced national AI strategies and policy initiatives, which commonly aim to ensure a leadership position in AI. The strategies and initiatives set objectives and targets that require concerted action by all stakeholders. Governments’ role is often as a convener and facilitator. Box 5.1 provides elements often seen in policies and measures for fostering national competitiveness in AI. In addition, some countries have created, or assigned responsibility to, a specific public entity for AI and data ethics issues.

Box 5.1. How do countries seek to develop competitive advantage in AI?

Porter identified four determinants of national competitive advantage in a specific industry: i) factor conditions; ii) demand conditions; iii) related and supporting industries; and iv) firm strategy, structure and competition. Porter acknowledged that companies are the actors that create competitive advantage in industries. However, he emphasised the key role of governments in supporting and enabling the four determinants in national industrial development processes.

  • Factor conditions: This determinant depends on geography, availability of skilled labour, level of education and research capabilities. Countries are strengthening AI research capability through different measures that include: i) creating AI research institutes; ii) creating new AI-related graduate and doctoral degrees at universities, and adjusting existing degrees to include AI courses, e.g. in scientific disciplines; and iii) attracting domestic and foreign talent e.g. by increasing visas for AI experts.

  • Demand conditions: Several countries identify strategic sectors for AI development, notably transportation, healthcare and public services. They are putting in place measures to encourage domestic consumer demand for AI services in these specific industries. In public services, some governments are ensuring that AI systems meet certain standards, e.g. of accuracy or robustness, through public procurement policies.

  • Related and supporting industries: AI competitiveness requires access to digital infrastructures and services, data, computing power and broadband connectivity. A number of countries are planning AI-focused technology clusters and support structures for small and medium-sized enterprises (SMEs).

  • Firm strategy, structure and competition: Some of the approaches that countries are taking to foster private investment and competition in AI include: i) preparing AI development roadmaps for fostering private investment; ii) encouraging international AI companies to invest domestically, e.g. by opening AI laboratories; and iii) experimenting with policy approaches such as regulatory sandboxes for AI applications to encourage firms to innovate.

In addition, for the effective implementation of national AI initiatives, many countries are considering appropriate governance mechanisms to ensure a co-ordinated, whole-of-government approach. For example, France has established an AI co-ordination function within the Prime Minister’s Office to implement the French AI strategy.

Source: Porter (1990[4]), “The competitive advantage of nations”, https://hbr.org/1990/03/the-competitive-advantage-of-nations.

Argentina

In July 2019, the Argentine government planned to release a ten-year National AI Strategy. This followed an assessment phase in 2018 by the Ministry of Science, Technology and Productive Innovation as part of the Argentina Digital Agenda 2030 and Argentina’s Innovative Plan 2030. Thematic priorities for the National AI Strategy include: talent and education, data, R&D and innovation, supercomputing infrastructure, actions to facilitate job transitions, and facilitating public-private co-operation on data use. Thematic priorities also include public services and manufacturing (as target sectors for AI development). The strategy’s cross-cutting themes are: i) investment, ethics and regulation; ii) communication and awareness building; and iii) international co-operation.

The strategy involves seven ministries and envisions the development of a national AI Innovation Hub to implement projects in each thematic group. Each thematic priority will have an expert steering group charged with defining goals and metrics to measure progress.

Australia

The Australian government planned over AUD 28 million (USD 21 million) in its 2018/19 budget to build capability in, and support the responsible development of, AI in Australia. This budget will finance the following projects: i) Co-operative Research Centre projects with a specific focus on AI (USD 18 million); ii) AI-focused PhD scholarships (USD 1 million); iii) development of online resources to teach AI in schools (USD 1.1 million); iv) development of an AI technology roadmap exploring AI’s impacts on industries, workforce opportunities and challenges and implications for education and training (USD 250 000); v) development of an AI ethics framework using a case study approach (USD 367 000); and vi) development of an AI standards roadmap (USD 72 000 with matching funds from industry).

The Department of Industry, Innovation and Science is also undertaking AI-related projects. They include tasking the Australian Council of Learned Academies with exploring the opportunities, risks and consequences for Australia of broad uptake of AI over the next decade. The Australian Human Rights Commission also launched a major project in July 2018 on the relationship between human rights and technology. It includes an issues paper and international conference; a final report was planned for 2019/20.

Brazil

Brazil’s digital transformation strategy E-Digital of March 2018 harmonises and co-ordinates different governmental initiatives on digital issues to advance the Sustainable Development Goals in Brazil. On AI specifically, E-Digital includes action “to evaluate potential economic and social impact of (…) artificial intelligence and big data, and to propose policies that mitigate their negative effects and maximise positive results” (Brazil, 2018[5]). E-Digital also prioritises the allocation of resources towards AI research, development and innovation (RD&I), and capacity building. Brazil plans to launch a specific AI strategy in 2019. It is actively involved in international discussions on AI technical standardisation and policy.

From 2014 to early 2019, the Ministry of Science, Technology, Innovation and Communication provided incentives and financial support to 16 different AI projects and 59 AI start-ups. In addition, 39 initiatives use AI in e-government at the federal level. These initiatives aim, notably, to improve administrative and assessment procedures, e.g. in social services, citizen services and job advertising. A new institute for AI research – the Artificial Intelligence Advanced Institute – was set up in 2019. It promotes partnerships between universities and companies on joint AI RD&I projects. Specifically, it targets fields such as agriculture, smart cities, digital governance, infrastructure, environment, natural resources, and security and defence.

Canada

Canada is seeking to position itself as an AI leader notably with the Pan-Canadian AI Strategy launched in March 2017 (CIFAR, 2017[6]). The strategy is led by the non-profit Canadian Institute for Advanced Research and backed with government funding of CAD 125 million (USD 100 million). Over five years, the funds will support programmes to expand Canada’s human capital, support AI research in Canada and translate AI research into public- and private-sector applications. The goals of the Pan-Canadian AI Strategy are:  

  1. Increase the number of AI researchers and skilled graduates in Canada.

  2. Establish interconnected nodes of scientific excellence in Canada’s three major AI institutes: in Edmonton (Alberta Machine Intelligence Institute), Montreal (Montreal Institute for Learning Algorithms) and Toronto (Vector Institute for Artificial Intelligence).

  3. Develop a global programme on AI in Society and global thought leadership on the economic, social, ethical, policy and legal implications of advances in AI.

  4. Support a national research community on AI.

The federal government, through the National Research Council Canada (NRC), plans research investments totalling CAD 50 million (USD 40 million) over a seven-year period to apply AI to key programme themes, including data analytics, AI for Design, cyber security, Canadian indigenous languages, support for federal superclusters, collaboration centres with Canadian universities and strategic partnerships with international partners.

In addition to this federal funding, the Quebec government is allocating CAD 100 million (USD 80 million) to the AI community in Montreal; Ontario is providing CAD 50 million (USD 40 million) to the Vector Institute for Artificial Intelligence. In 2016, the Canada First Research Excellence Fund allocated CAD 93.6 million (USD 75 million) to three universities for cutting-edge research in deep learning: the Université de Montréal, Polytechnique Montréal and HEC Montréal. Facebook and other dynamic private companies like ElementAI are active in Canada.

The Quebec government plans to create a world observatory on the social impacts of AI and digital technologies (Fonds de recherche du Québec, 2018[7]). A workshop in March 2018 began to consider the observatory’s mandate and potential model, governance mode, funding and international co-operation, as well as sectors and issues of focus. The Quebec government has allocated CAD 5 million (USD 3.7 million) to help implement the observatory.

Canada is also working with partners internationally to advance AI initiatives. For example, the governments of Canada and France announced in July 2018 that they would work together to establish a new International Panel on AI. This Panel’s mission will be to support and guide the responsible adoption of AI that is human-centred and grounded in human rights, inclusion, diversity, innovation and economic growth.

China

In May 2016, the Chinese government published a three-year national AI plan formulated jointly by the National Development and Reform Commission, the Ministry of Science and Technology, the Ministry of Industry and Information Technology and the Cyberspace Administration of China. AI was included in the Internet Plus initiative, established in 2015 as a national strategy to spur economic growth driven by innovative, Internet-related technologies in the period 2016-18 (Jing and Dai, 2018[8]). The plan focuses on: i) enhancing AI hardware capacity; ii) strong platform ecosystems; iii) AI applications in important socio-economic areas; and iv) AI’s impact on society. In it, the Chinese government envisioned creating a USD 15 billion market for the Chinese AI industry by 2018 through R&D (China, 2016[9]).

In mid-2017, China’s State Council released the Guideline on Next Generation AI Development Plan, which provides China’s long-term perspective on AI, with industrial goals for each period: i) AI-driven economic growth in China by 2020; ii) major breakthroughs in basic theories and progress in building an intelligent society by 2025; and iii) establishing China as a global AI innovation centre by 2030, with an AI industry of RMB 1 trillion (USD 150 billion) (China, 2017[10]). The plan’s implementation seems to be advancing throughout government, and China has been developing leadership in AI with state support and private-company dynamism. China’s State Council set objectives for “new-generation information technology” as a strategic industry targeted to account for 15% of gross domestic product by 2020.

In its 13th Five-Year Plan timeframe (2016-20), China aims to transform itself into a science and technology leader, with 16 “Science and Technology Innovation 2030 Megaprojects”, including “AI 2.0”. The plan has provided impetus for action in the public sector (Kania, 2018[11]). It asks companies to accelerate AI hardware and software R&D, including in AI-based vision, voice and biometric recognition, human-machine interfaces and smart controls.

On 18 January 2018, China established a national AI standardisation group and a national AI expert advisory group. At the same time, the National Standardisation Management Committee Second Ministry of Industry released a white paper on AI standardisation. The paper was supported by the China Electronic Standardisation Institute (a division in the Ministry of Industry and Information Technology) (China, 2018[12]).

Private Chinese companies’ attention to AI predates the more recent government support. Chinese companies such as Baidu, Alibaba and Tencent have made significant efforts and investments in AI. Chinese industry has focused on applications and data integration, while the central government focuses on basic algorithms, open data and conceptual work. City governments focus on the use of applications and open data at a municipal level.

Czech Republic

The Czech government commissioned a study on AI implementation in 2018 to develop strategic goals and support negotiations at European and international levels. A team of academic experts from the Technology Centre of the Academy of Sciences of the Czech Republic, the Czech Technical University in Prague and the Czech Academy of Sciences’ Institute of State and Law submitted a report called Analysis of the Development Potential of Artificial Intelligence in the Czech Republic (OGCR, 2018[13]). The report maps: i) the current state of AI implementation in the Czech Republic; ii) the potential impact of AI on the Czech labour market; and iii) ethical, legal and regulatory aspects of AI development in the country.

Denmark

Denmark published the Strategy for Denmark’s Digital Growth in January 2018. It aims for Denmark to become a digital frontrunner, with all Danes benefiting from the transformation. The strategy introduces initiatives to seize growth opportunities from AI, big data and Internet of Things (IoT) technologies. Its strategic focus includes: i) creating a digital hub for public-private partnerships; ii) assisting SMEs with data-driven business development and digitalisation; iii) engaging educational institutions in a Technology Pact to foster technical and digital skills; iv) strengthening cyber security in companies; and v) developing agile regulation to facilitate new business models and experimentation. The Danish government has committed DKK 1 billion (USD 160 million) until 2025 to implement the strategy: DKK 75 million (USD 12 million) for 2018, and DKK 125 million (USD 20 million) annually over 2019-25. The largest part of this budget will be allocated to skills development initiatives, followed by the creation of the digital hub and support for SMEs (Denmark, 2018[14]).

Estonia

Estonia is planning the next step of its e-governance system, powered by AI to save costs and improve efficiency. It is also experimenting with AI in e-healthcare and situational awareness. Estonia aims to improve lives and cities and to support human values. On the enforcement side, Estonia targets the core values of ethics, liability, integrity and accountability, rather than focusing on the rapidly evolving technology, and is building an enforcement system based on blockchain to mitigate integrity and accountability risks. A pilot project was planned for 2018.

With StreetLEGAL, self-driving cars have been tested on Estonian public roads since March 2017. Estonia is also the first government to discuss giving legal status to AI. This would entail giving representative rights, and responsibilities, to algorithms to buy and sell services on their owners’ behalf. In 2016, the Estonian government created a task force, together with the Ministry of Economic Affairs and Communications and the Government Office, to look into the problem of accountability in machine-learning algorithms and the need for AI legislation (Kaevats, 2017[15]; Kaevats, 25 September 2017[16]).

Finland

Finland aims to use AI to develop a safe and democratic society, to provide the best public services in the world, and to bring new prosperity, growth and productivity to citizens. The AI strategy Finland’s Age of Artificial Intelligence, published in October 2017, is a roadmap for the country to leverage its educated population, advanced digitalisation and public-sector data resources. At the same time, the strategy foresees building international links in research and investment, and encouraging private investment. Finland hopes to double its national economic growth by 2035 thanks to AI. Eight key actions for AI-enabled growth, productivity and well-being are: i) enhancing companies’ competitiveness; ii) using data in all sectors; iii) speeding up and simplifying AI adoption; iv) ensuring top-level expertise; v) making bold decisions and investments; vi) building the world’s best public services; vii) establishing new co-operation models; and viii) making Finland a trendsetter in the age of AI. The report highlights using AI to improve public services. For example, the Finnish Immigration Service uses the national customer-service robot network called Aurora to provide multilingual communication (Finland, 2017[17]).

In February 2018, the government also created a funding entity for AI research and commercial projects. The entity will allocate EUR 200 million (USD 235 million) in grants and incentives to the private sector, including SMEs. Finland reports some 250 companies working on AI development. This includes work involving professionals and patients in Finnish healthcare industry organisations and the healthcare system, in connection with the associated in-depth reforms (Sivonen, 2017[18]). The role of the Finnish state-funded Technical Research Centre of Finland and the Finnish Funding Agency for Technology and Innovation will also be expanded.

France

French President Emmanuel Macron announced France’s AI strategy on 29 March 2018. It allocates EUR 1.5 billion of public funding to AI by 2022 to help France become an AI research and innovation leader. The measures are largely based on recommendations in the report developed by a member of parliament, Cédric Villani (Villani, 2018[19]). The strategy calls for investing in public research and education, building world-class research hubs linked to industry through public-private partnerships, and attracting elite foreign AI researchers as well as French AI researchers working abroad. To develop the AI ecosystem in France, the strategy’s approach is to “upgrade” existing industries. Starting from applications in health, environment, transport and defence, it aims to help use AI to renew existing industries. It proposes to prioritise access to data by creating “data commons” between private and public actors; adapting copyright law to facilitate data mining; and opening public-sector data, such as health data, to industry partners.

The strategy also outlines initial plans for dealing with AI-induced disruptions, taking a firm stance on data transfers out of Europe (Thompson, 2018[20]). It would create a central data agency with a team of about 30 advisory experts on AI applications across government. The ethical and philosophical boundaries articulated in the strategy include algorithmic transparency as a core principle: algorithms developed by the French government or with public funding, for example, will reportedly be open. Respect for privacy and other human rights will be “by design”. The strategy also provides for developing vocational training in professions threatened by AI. It calls for policy experimentation in the labour market and for dialogue on how to share AI-generated value added across the value chain. A French report on AI and work was also released in late March 2018 (Benhamou and Janin, 2018[21]).

Germany

The German federal government launched its AI strategy in December 2018 (Germany, 2018[22]). Germany aims to become a leading centre for AI by pursuing the speedy and comprehensive transfer of research findings into applications, with “AI made in Germany” becoming a strong export and a globally recognised quality mark. Measures to achieve this goal include new research centres, enhanced Franco-German research co-operation, funding for cluster development and support for SMEs. The strategy also addresses infrastructure requirements, enhanced access to data, skills development, security to prevent misuse, and ethical dimensions.

In June 2017, the federal Ministry of Transport and Digital Infrastructure developed ethical guidelines for self-driving cars. The guidelines, developed by the Ethics Commission of the ministry, stipulate 15 rules for programmed decisions embedded in self-driving cars. The commission considered ethical questions in depth, including whose life to prioritise (known as the “trolley problem”). The guidelines provide that self-driving cars should be programmed to consider all human lives as equal. If a choice between people is unavoidable, self-driving cars should choose to hit whichever person would be hurt less, regardless of age, race or gender. The commission also made clear that no obligation should be imposed on individuals to sacrifice themselves for others (Germany, 2017[23]).
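The commission’s equal-treatment rule lends itself to a brief illustration. The sketch below is purely hypothetical (the class, names and harm values are invented; it is not actual vehicle software): the decision may minimise expected harm, but personal attributes such as age or gender, even if present in the data, must never enter the choice.

```python
# Hypothetical sketch of the Ethics Commission's equal-treatment rule:
# pick the option with the least expected harm, while personal
# attributes (age, gender) are deliberately excluded from the decision.
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    expected_harm: float  # estimated injury severity, 0 (none) to 1 (severe)
    age: int = 0          # may exist in the data but MUST NOT be used
    gender: str = ""      # may exist in the data but MUST NOT be used

def choose(options: list[Option]) -> Option:
    # The key depends only on expected_harm; age and gender never enter.
    return min(options, key=lambda o: o.expected_harm)

options = [
    Option("swerve left", expected_harm=0.7, age=30),
    Option("brake straight", expected_harm=0.4, age=75),
]
print(choose(options).label)  # -> brake straight
```

The point of the sketch is structural: equal treatment is enforced by keeping protected attributes out of the decision function entirely, not by assigning them weights.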


Hungary created an AI Coalition in October 2018 as a partnership between state agencies, leading IT businesses and universities. The coalition is formulating an AI strategy to establish Hungary as an AI innovator and researching the social and economic impacts of AI. It includes some 70 academic research centres, businesses and state agencies and is viewed as a forum for cross-sector co-operation in AI R&D. The Budapest University of Technology and Economics and Eötvös Loránd University are part of the consortium investing EUR 20 million (USD 23.5 million) in AI4EU, a project to develop an AI-on-demand platform in Europe.


India published its AI strategy in June 2018. The strategy provides recommendations for India to become a leading nation in AI by empowering human capability and ensuring social and inclusive growth. The report named its inclusive approach #AIFORALL. The strategy identifies the following strategic focus areas for AI applications: healthcare, agriculture, education, smart cities and transportation. Low research capability and the lack of data ecosystems in India are identified as challenges to realising the full potential of AI. The strategy makes several recommendations. India should create two-tiered research institutes (for both basic and applied research). It needs to set up learning platforms for the current workforce. The country should also create targeted data sets and incubation hubs for start-ups. Finally, it should establish a regulatory framework for data protection and cyber security (India, 2018[24]).


Italy published a white paper, “Artificial Intelligence at the Service of Citizens”, in March 2018, developed by a task force of the Agency for Digital Italy. It focused on how the public administration can leverage AI technologies to serve people and businesses, and to increase public-service efficiency and user satisfaction. The report identified challenges to implementing AI in public services related to ethics, technology, data availability and impact measurement. It included recommendations on issues such as promoting a national platform for labelled data, algorithms and learning models; developing skills; and creating a National Competence Centre and a Trans-disciplinary Centre on AI. The report also called for guidelines and processes to increase levels of control and to facilitate data sharing among European countries on cyberattacks on and by AI systems (Italy, 2018[25]).


The Japanese Cabinet Office established a Strategic Council for AI Technology in April 2016 to promote AI technology R&D and business applications. The council published an Artificial Intelligence Technology Strategy in March 2017 that identified critical issues. These included the need to increase investment, facilitate use and access to data, and increase the numbers of AI researchers and engineers. The strategy also identified strategic areas in which AI could bring significant benefits: productivity; health, medical care and welfare; mobility; and information security (Japan, 2017[26]).

Japan’s Integrated Innovation Strategy, published by the Cabinet Office in June 2018, has a set of AI policy actions (Japan, 2018[27]). The strategy included convening multi-stakeholder discussions on ethical, legal and societal issues of AI. This resulted in the Cabinet Office publishing Social Principles for Human-centric AI in April 2019 (Japan, 2019[28]).

At the G7 ICT Ministerial meeting in Takamatsu in April 2016, Japan proposed the formulation of shared principles for AI R&D. A group of experts, the Conference toward AI Network Society, developed Draft AI R&D Guidelines for International Discussions, which were published by the Japanese Ministry of Internal Affairs and Communications in July 2017. These guidelines seek primarily to balance the benefits and risks of AI networks, while ensuring technological neutrality and avoiding excessive burden on developers. The guidelines consist of nine principles that researchers and developers of AI systems should consider (Japan, 2017[29]). Table 5.2 summarises the guidelines. The Conference published the Draft AI Utilization Principles in July 2018 as an outcome of the discussion (Japan, 2018[30]).

Table 5.2. R&D Principles provided in the AI R&D Guidelines

  • I. Principle of collaboration: developers should pay attention to the interconnectivity and interoperability of AI systems.

  • II. Principle of transparency: developers should pay attention to the verifiability of the inputs/outputs of AI systems and the explainability of their decisions.

  • III. Principle of controllability: developers should pay attention to the controllability of AI systems.

  • IV. Principle of safety: developers should ensure that AI systems do not harm the life, body or property of users or third parties through actuators or other devices.

  • V. Principle of security: developers should pay attention to the security of AI systems.

  • VI. Principle of privacy: developers should take into consideration that AI systems will not infringe the privacy of users or third parties.

  • VII. Principle of ethics: developers should respect human dignity and individual autonomy in the R&D of AI systems.

  • VIII. Principle of user assistance: developers should take into consideration that AI systems will support users and make it possible to give them opportunities for choice in appropriate manners.

  • IX. Principle of accountability: developers should make efforts to fulfil their accountability to stakeholders, including users of AI systems.

Source: Japan (2017[29]), Draft AI R&D Guidelines for International Discussions, www.soumu.go.jp/main_content/000507517.pdf.


The Korean government published the Intelligent Information Industry Development Strategy in March 2016. It announced public investment of KRW 1 trillion (USD 940 million) by 2020 in the field of AI and related information technologies such as the Internet of Things (IoT) and cloud computing. The strategy aimed to create a new intelligent information industry ecosystem and to encourage KRW 2.5 trillion (USD 2.3 billion) of private investment by 2020. Under the strategy, the government has three goals. First, it plans to launch AI development flagship projects, for example in the areas of language, visual, spatial and emotional intelligence technology. Second, it seeks to strengthen AI-related workforce skills. Third, it will promote access to and use of data by government, companies and research institutes (Korea, 2016[31]).

In December 2016, the Korean government published the Mid- to Long-Term Master Plan in Preparation for the Intelligence Information Society. The plan contains national policies to respond to the changes and challenges of the 4th Industrial Revolution. To achieve its vision of a “human-centric intelligent society”, it aims to establish the foundations for world-class intelligent IT. Such IT could be applied across industries and be used to upgrade social policies. To implement the plan, the government is creating large-scale test beds to help develop new services and products, including better public services (Korea, 2016[32]).

In May 2018, the Korean government released a national plan to invest KRW 2.2 trillion (USD 2 billion) by 2022 to strengthen its AI R&D capability. The plan would create six AI research institutes; develop AI talent through 4 500 AI scholarships and short-term intensive training courses; and accelerate development of AI chips (Peng, 2018[33]).


In Mexico, the National Council of Science and Technology created an Artificial Intelligence Research Centre in 2004, which leads the development of intelligent systems.

In June 2018, a white paper entitled “Towards an AI Strategy in Mexico: Harnessing the AI Revolution” was published.1 The report finds that Mexico ranks 22nd out of 35 OECD countries in Oxford Insights’ “AI Readiness Index”, whose composite score is derived by averaging nine metrics ranging from digital skills to government innovation. Mexico scores well for its open data policies and digital infrastructure, but poorly in areas such as technical skills, digitalisation and public-sector innovation. The report recommends policy actions to further develop and deploy AI in Mexico. These recommendations span five areas: government and public services; R&D; capacity, skills and education; data and digital infrastructure; and ethics and regulation (Mexico, 2018[34]).
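A composite score of this kind is simply the unweighted mean of the individual metric scores. The sketch below is illustrative only: the metric names and values are invented, not Oxford Insights’ actual data.

```python
# Hypothetical composite "readiness" score: the unweighted mean of nine
# metric scores (all names and values invented for illustration).
metrics = {
    "open data": 8.0, "digital infrastructure": 7.5, "data protection": 6.0,
    "technical skills": 4.0, "digitalisation": 4.5, "public sector innovation": 4.0,
    "government innovation": 5.0, "startup ecosystem": 5.5, "research capacity": 6.0,
}
composite = sum(metrics.values()) / len(metrics)
print(round(composite, 2))  # -> 5.61
```

A country can thus score well on some metrics and poorly on others while landing mid-table overall, which is the pattern the report describes for Mexico.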

The Netherlands

In 2018, through adoption of the National Digitalisation Strategy, the government of the Netherlands made two commitments. First, it will leverage the social and economic opportunities of digitalisation. Second, it will strengthen enabling conditions, including skills, data policy, trust and resilience, fundamental rights and ethics (e.g. the influence of algorithms on autonomy and equal treatment) and AI-focused research and innovation. In October 2018, “AI for the Netherlands” (AINED), a Dutch coalition of industry and academia, published goals and actions for a national AI plan. These include promoting access to AI talent and skills, as well as to high-value public data; facilitating AI-driven business development; and promoting large-scale use of AI in government. They also envision creating socio-economic and ethical frameworks for AI, encouraging public-private co-operation in key sectors and value chains, and establishing the Netherlands as a world-class AI research centre. The Dutch government planned to finalise a whole-of-government strategic action plan for AI before mid-2019, taking into account the AINED report, the EU co-ordinated plan and the discussions of the European Commission (EC) High-Level Expert Group on AI (AI HLEG).


Norway has taken AI initiatives as part of its Digital Agenda for Norway and its Long-term Plan for Research and Higher Education. These initiatives include:

  • Creation of several AI labs, such as the Norwegian Open AI-lab at the Norwegian University of Science and Technology. The Open AI-lab is sponsored by several companies. It focuses on energy, maritime industries, aquaculture, telecommunications, digital banking, health and biomedicine, areas where Norway has a strong international position.

  • A skills-reform programme called Learning for Life with a proposed 2019 budget of NOK 130 million (USD 16 million) to help the workforce develop or upgrade skills in AI, healthcare and other areas.

  • An open data strategy by which government agencies must make their data available in machine-readable formats through application programming interfaces (APIs) and register the available datasets in a common catalogue.

  • A platform to develop guidelines and ethical principles governing the use of AI.

  • Regulatory reform to allow testing of self-driving vehicles on the road, including test-driving without a driver in the vehicle.

Russian Federation

The Russian government developed its digital strategy entitled Digital Economy of the Russian Federation in July 2017. The strategy prioritises leveraging AI development, including favourable legal conditions to facilitate R&D activities. It also seeks to provide incentives for state companies to participate in the nationwide research communities (competence centres). In addition, it promotes activities to develop national standards for AI technologies (Russia, 2017[35]). Prior to the digital strategy, the government invested in various AI projects and created instruments to develop public-private partnerships. The Russian AI Association is encouraging co-operation between academia and companies to facilitate technology transfer to companies.

Saudi Arabia

Saudi Arabia announced its Vision 2030 in 2016. It sets out an economic reform plan to stimulate new industries and diversify the economy, facilitate public-private business models and, ultimately, reduce the country’s dependence on oil revenues. Vision 2030 views digital transformation as a key means to develop the economy by leveraging data, AI and industrial automation. Priority sectors, including for the launch of innovation centres, are healthcare, government services, sustainable energy and water, manufacturing, and mobility and transportation. The government is drafting a national AI strategy that aims to build an innovative and ethical AI ecosystem in Saudi Arabia by 2030.

Saudi Arabia is developing an enabling ecosystem for AI that includes high-speed broadband and 5G deployment, access to data and security. It is also encouraging early adoption of AI concepts and solutions through several smart city projects to catalyse new solutions. These build on the momentum of NEOM, a smart city megaproject launched in 2017 with planned investment of SAR 1.8 trillion (USD 500 billion). Saudi Arabia is also actively involved in global discussions on AI governance frameworks.


Singapore’s Infocomm Media Development Authority published the Digital Economy Framework for Action in May 2018. It maps out a framework for action to transform Singapore into a leading digital economy, identifying AI as a frontier technology to drive the country’s digital transformation (Singapore, 2018[36]).

The Personal Data Protection Commission published a model AI governance framework in January 2019 to promote responsible adoption of AI in Singapore. The model framework provides practical guidance to convert ethical principles into implementable practices. It builds on a discussion paper and national discussions by a Regulators’ Roundtable, a community of practice comprising sector regulators and public agencies. Organisations can adopt the model framework voluntarily. It also serves as a basis to develop sector-specific AI governance frameworks.

A multi-stakeholder Advisory Council on the Ethical Use of AI and Data was formed in June 2018. It advises Singapore’s government on ethical, legal, regulatory and policy issues arising from the commercial deployment of AI. To that end, it aims to support industry adoption of AI and the accountable and responsible rollout of AI products and services.

To develop Singapore into a leading knowledge centre with international expertise in AI policy and regulations, the country set up a five-year Research Programme on the Governance of AI and Data Use in September 2018. It is headquartered at the Centre for AI & Data Governance of the Singapore Management University School of Law. The Centre focuses on industry-relevant research on AI as it relates to society and business.


In May 2018, the Swedish government published a report called Artificial Intelligence in Swedish Business and Society. The report, which aims to strengthen AI research and innovation in Sweden, outlines six strategic priorities: i) industrial development, including manufacturing; ii) travel and transportation; iii) sustainable and smart cities; iv) healthcare; v) financial services; and vi) security, including police and customs. The report highlights the need to reach a critical mass in research, education and innovation. It also calls for co-operation on investment in research and infrastructure, education, regulatory development and labour mobility (Vinnova, 2018[37]).


The Scientific and Technological Research Council of Turkey – the leading agency for the management and funding of research in Turkey – has funded numerous AI R&D projects. It plans to open a multilateral call for AI projects in the context of the EUREKA intergovernmental network for innovation. The Turkish Ministry of Science and Technology has developed a national digital roadmap in the context of Turkey’s Industrial Digital Transformation Platform. Part of this roadmap focuses on technological advancement in emerging digital technologies such as AI.

United Kingdom

The UK Digital Strategy published in March 2017 recognises AI as key to growing the United Kingdom’s digital economy (UK, 2017[38]). The strategy allocates GBP 17.3 million (USD 22.3 million) in funding for UK universities to develop AI and robotics technologies. The government has increased investment in AI R&D by GBP 4.7 billion (USD 6.6 billion) over four years, partly through its Industrial Strategy Challenge Fund.

In October 2017, the government published an industry-led review of the United Kingdom’s AI industry. The report identified the United Kingdom as an international centre of AI expertise, in part as a result of pioneering computer scientists such as Alan Turing. The UK government estimated that AI could add GBP 579.7 billion (USD 814 billion) to the domestic economy. AI tools used in the United Kingdom include a personal health guide (Your.MD), a chatbot developed for bank customers and a platform that helps children learn and teachers provide personalised education programmes. The report provided 18 recommendations, such as improving access to data and data sharing by developing Data Trusts, and improving the AI skills supply through industry-sponsored Masters in AI. Other suggested priorities included maximising AI research by co-ordinating demand for computing capacity for AI research among relevant institutions; supporting uptake of AI through a UK AI Council; and developing a framework to improve the transparency and accountability of AI-driven decisions (Hall and Pesenti, 2017[39]).

The UK government published its industrial strategy in November 2017. The strategy identifies AI as one of four “Grand Challenges” to place the United Kingdom at the forefront of industries of the future and ensure it takes advantage of major global changes (UK, 2017[40]). In April 2018, the United Kingdom published the AI Sector Deal: a GBP 950 million (USD 1.2 billion) investment package that builds on the United Kingdom’s strengths and seeks to maintain a world-class ecosystem. It has three principal areas of focus: skills and talent; the drive for adoption; and data and infrastructure (UK, 2018[41]).

The government established an Office for Artificial Intelligence (OAI) focused on implementing the Sector Deal and driving adoption more widely. It also established a Centre for Data Ethics and Innovation, which focuses on strengthening the governance landscape to enable innovation while ensuring public confidence. The Centre is to supply government with independent, expert advice on measures needed for safe, ethical and ground-breaking innovation in data-driven and AI-based technologies. To that end, it planned to launch a pilot data trust by the end of 2019. An AI Council that draws on industry expertise works closely with the OAI (UK, 2018[42]).

United States

On 11 February 2019, President Trump signed Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence, launching the American AI Initiative. The Initiative directs actions in five key areas: i) invest in AI R&D; ii) unleash AI resources; iii) set guidance for AI regulation and technical standards; iv) build the AI workforce; and v) engage internationally in support of American AI research and innovation and to open markets for American AI industries.

The Initiative is the culmination of a series of Administration actions to accelerate American leadership in AI. The White House hosted the first Summit on AI for American industry in May 2018, bringing together industry stakeholders, academics and government leaders. Participants discussed the importance of removing barriers to AI innovation in the United States and of promoting AI R&D collaboration among American allies. They also raised the need to promote awareness of AI so that the public can better understand how these technologies work and how they can benefit daily life. In the same month, the White House published a fact sheet called Artificial Intelligence for the American People (EOP, 2018[43]), which listed AI-related policies and measures of the current administration. These include increased public funding for AI R&D and regulatory reform to facilitate the development and use of drones and driverless cars. They also prioritise education in science, technology, engineering and mathematics (STEM), with a focus on computer science, as well as enhanced sharing of federal data for AI research and applications.

The president’s FY 2019 and FY 2020 R&D budgets designated AI and machine learning as key priorities. Specific areas included basic AI research at the National Science Foundation and applied R&D at the Department of Transportation. Research priorities also include advanced health analytics at the National Institutes of Health and AI computing infrastructure at the Department of Energy. Overall, the federal government’s investment in unclassified R&D for AI and related technologies has grown by over 40% since 2015.

In September 2018, the Select Committee on AI of the National Science and Technology Council began updating the National Artificial Intelligence Research and Development Strategic Plan. Since the plan’s publication in 2016, the underlying technology, use cases and commercial implementation of AI have advanced rapidly. The Select Committee is seeking public input on how to improve the plan, including from those directly performing AI R&D and those affected by it. 

The Administration has also prioritised training the future American workforce. President Trump signed an Executive Order establishing industry-recognised apprenticeships and creating a cabinet-level Task Force on Apprenticeship Expansion. In keeping with the policies outlined in the fact sheet, a Presidential Memorandum also prioritised high-quality STEM education, with a particular focus on computer science education. It committed USD 200 million in grant funds that were matched by a private industry commitment of USD 300 million.

The US Congress launched the bipartisan Artificial Intelligence Caucus in May 2017, which is co-chaired by congressmen John K. Delaney and Pete Olson (US, 2017[44]). The caucus brings together experts from academia, government and the private sector to discuss the implications of AI technologies. The Congress is considering legislation to establish both a federal AI advisory committee and federal safety standards for self-driving vehicles.

Intergovernmental initiatives

G7 and G20

At the April 2016 G7 ICT Ministerial Meeting of Takamatsu (Japan), the Japanese Minister of Internal Affairs and Communications presented and discussed a set of AI R&D Principles (G7, 2016[45]).

The G7 ICT and Industry Ministerial held in Turin in September 2017 under the Italian presidency issued a Ministerial Declaration in which G7 countries acknowledged the tremendous potential benefits of AI for society and the economy, and agreed on a human-centred approach to AI (G7, 2017[46]).

Under Canada’s 2018 G7 Presidency, G7 Innovation Ministers convened in Montreal in March 2018. They expressed a vision of human-centred AI focused on the interconnected relationship between supporting economic growth from AI innovation, increasing trust in and adoption of AI, and promoting inclusivity in AI development and deployment. G7 members agreed to act in related areas, including the following:

  • Invest in basic and early-stage applied R&D to produce AI innovations, and support entrepreneurship in AI and labour force readiness for automation.

  • Continue to encourage research, including on solving societal challenges, advancing economic growth and examining the ethical considerations of AI, as well as broader issues such as those related to automated decision making.

  • Support public awareness efforts to communicate actual and potential benefits, and broader implications, of AI.

  • Continue to advance appropriate technical, ethical and technologically neutral approaches.

  • Support the free flow of information through the sharing of best practices and use cases on the provision of open, interoperable and safe access to government data for AI.

  • Disseminate this G7 statement globally to promote AI development and collaboration in the international arena (G7, 2018[47]).

In Charlevoix in June 2018, the G7 released a communiqué promoting human-centred AI and commercial adoption of AI, and agreed to continue advancing appropriate technical, ethical and technologically neutral approaches.

G7 Innovation Ministers decided to convene a multi-stakeholder conference on AI hosted by Canada in December 2018. It planned to discuss how to harness the positive transformational potential of AI to promote inclusive and sustainable economic growth. France was also expected to propose AI-related initiatives as part of its G7 Presidency in 2019.

The G20 is also turning its attention to AI, particularly through discussions proposed by Japan under its 2019 G20 presidency (G20, 2018[1]). The 2018 G20 Digital Economy Ministerial Meeting in Salta notably encouraged “countries to enable individuals and businesses to benefit from digitalisation and emerging technologies”, such as 5G, IoT and AI. It encouraged Japan, in its role as president during 2019, to continue the G20 work accomplished in 2018, with AI among other priorities.


OECD Principles for Trust in and Adoption of AI

In May 2018, the OECD’s Committee on Digital Economy Policy established an Expert Group on Artificial Intelligence in Society. It was created to scope principles for public policy and international co-operation that would foster trust in and adoption of AI. Ultimately, these principles became the basis for the OECD Recommendation of the Council on Artificial Intelligence (OECD, 2019[3]), to which forty countries adhered on 22 May 2019. In the same spirit, the 2018 Ministerial Council Meeting Chair urged “the OECD to pursue multi-stakeholder discussions on the possible development of principles that should underpin the development and ethical application of artificial intelligence in the service of people”.

The group consisted of more than 50 experts from different sectors and disciplines. These included governments, business, the technical community, labour and civil society, as well as the European Commission and UNESCO. It held four meetings: two at the OECD in Paris, on 24-25 September and 12 November 2018; one at the Massachusetts Institute of Technology (MIT) in Cambridge on 16-17 January 2019; and a final meeting in Dubai, on 8-9 February 2019, on the margins of the World Government Summit. The group identified principles for the responsible stewardship of trustworthy AI relevant for all stakeholders. These principles included respect for human rights, fairness, transparency and explainability, robustness and safety, and accountability. The group also proposed specific recommendations for national policies to implement the principles. This work informed the development of the OECD Recommendation of the Council on Artificial Intelligence in the first half of 2019 (OECD, 2019[3]).

OECD AI Policy Observatory

The OECD planned to launch an AI Policy Observatory in 2019 to examine current and prospective developments in AI and their policy implications. The aim was to help implement the aforementioned AI principles by working with a wide spectrum of external stakeholders, including governments, industry, academia, technical experts and the general public. The Observatory was expected to be a multidisciplinary, evidence-based and multi-stakeholder centre for policy-relevant evidence collection, debate and guidance for governments. At the same time, it would provide external partners with a single window onto policy-relevant AI activities and findings from across the OECD.

European Commission and other European institutions

In April 2018, the European Commission issued a Communication on Artificial Intelligence for Europe, outlining three main priorities: first, boosting the European Union’s technological and industrial capacity and AI uptake across the economy; second, preparing for the socio-economic changes brought about by AI; and third, ensuring an appropriate ethical and legal framework. The Commission presented a co-ordinated plan on the development of AI in Europe in December 2018. It aims primarily to maximise the impact of investments and collectively define the way forward. The plan was expected to run until 2027 and contains some 70 individual measures in the following areas:

  • Strategic actions and co-ordination: encouraging member states to set up national AI strategies outlining investment levels and implementation measures.

  • Maximising investments through partnerships: fostering investment in strategic AI research and innovation through AI public-private partnerships and a Leaders’ Group, as well as a specific fund to support AI start-ups and innovative SMEs.

  • From the lab to the market: strengthening research excellence centres and Digital Innovation Hubs, and establishing testing facilities and possibly regulatory sandboxes.

  • Skills and lifelong learning: promoting talent, skills and lifelong learning.

  • Data: calling for the creation of a Common European Data Space to facilitate access to data of public interest and industrial data platforms for AI, including health data.

  • Ethics by design and regulatory framework: emphasising the need for ethical AI and for a regulatory framework fit for purpose (including safety and liability dimensions). The ethical framework will build on the AI Ethics Guidelines developed by the independent AI HLEG. The EC also commits to anchoring “ethics by design” through its procurement policy.

  • AI for the Public Sector: outlining measures for AI for the public sector such as joint procurement and translations.

  • International co-operation: underlining the importance of international outreach and of anchoring AI in development policy and announcing a world ministerial meeting in 2019.

As part of its AI Strategy, the Commission also established the AI HLEG in June 2018. The AI HLEG, which comprises representatives from academia, civil society and industry, was given two tasks. First, it was to draft AI Ethics Guidelines providing guidance to developers, deployers and users to ensure “trustworthy AI”. Second, it was to prepare AI policy and investment recommendations (“Recommendations”) for the European Commission and member states on mid- to long-term AI-related developments to advance Europe’s global competitiveness. In parallel, the Commission set up a multi-stakeholder forum, the European AI Alliance, to encourage broad discussions on AI policy in Europe. Anyone can contribute through the platform to the work of the AI HLEG and inform EU policy making.

The AI HLEG published the first draft of its Ethical Guidelines for comments in December 2018. The draft guidelines constitute a framework to achieve trustworthy AI grounded in EU fundamental rights. To be trustworthy, AI should be lawful, ethical and socio-technically robust. The guidelines set out a set of ethical principles for AI. They also identify key requirements to ensure trustworthy AI, and methods to implement these requirements. Finally, they contain a non-exhaustive assessment list to help translate each requirement into practical questions that can help stakeholders put the principles into action. At the time of writing, the AI HLEG was revising the guidelines in view of comments, for official presentation to the EC on 9 April 2019. The EC was to explain the next steps for the guidelines and for a global ethical framework for AI. The second deliverable of the AI HLEG, the Recommendations, was due by the summer of 2019.

In 2017, the Council of Europe (CoE)’s Parliamentary Assembly published a Recommendation on technological convergence, AI and human rights. The Recommendation urged the Committee of Ministers to instruct CoE bodies to consider how emerging technologies such as AI challenge human rights. It also called for guidelines on issues such as transparency, accountability and profiling. In February 2019, the CoE’s Committee of Ministers adopted a declaration on the manipulative capabilities of algorithmic processes. The declaration recognised “dangers for democratic societies” that arise from the capacity of “machine-learning tools to influence emotions and thoughts” and encouraged member states to address this threat. In the same month, the CoE held a high-level conference called “Governing the Game Changer – Impacts of Artificial Intelligence Development on Human Rights, Democracy and the Rule of Law”.

In addition, the CoE’s European Commission for the Efficiency of Justice adopted the first European Ethical Charter on the use of AI in judicial systems in December 2018. It set out five principles to guide development of AI tools in European judiciaries. In 2019, the Committee on Legal Affairs and Human Rights decided to create a subcommittee on AI and human rights.

In May 2017, the European Economic and Social Committee (EESC) adopted an opinion on the societal impact of AI. The opinion called on EU stakeholders to ensure that AI development, deployment and use work for society and social well-being. The EESC said humans should keep control over when and how AI is used in daily life, and identified 12 areas where AI raises societal concerns. These areas include ethics, safety, transparency, privacy, standards, labour, education, access, laws and regulations, governance and democracy, as well as warfare and superintelligence. The opinion called for pan-European standards for AI ethics, adapted labour strategies and a European AI infrastructure with open-source learning environments (Muller, 2017[48]). An EESC temporary study group on AI has been set up to look at these issues.

Nordic-Baltic region

In May 2018, ministers from Nordic and Baltic countries signed a joint declaration, “AI in the Nordic-Baltic Region”. The region comprises Denmark, Estonia, Finland, the Faroe Islands, Iceland, Latvia, Lithuania, Norway, Sweden and the Åland Islands. Together, the countries agreed to reinforce their co-operation on AI, while maintaining their position as Europe’s leading region in the area of digital development (Nordic, 2018[49]). The declaration identified seven focus areas to develop and promote the use of AI to better serve people:

  • Improve opportunities for skills development so that more authorities, companies and organisations use AI.

  • Enhance access to data so that AI can be used to provide better services to citizens and businesses in the region.

  • Develop ethical and transparent guidelines, standards, principles and values to guide when and how AI applications should be used.

  • Base infrastructure, hardware, software and data, all of which are central to the use of AI, on standards that enable interoperability, privacy, security, trust, good usability and portability.

  • Ensure that AI gets a prominent place in the European discussion and in the implementation of initiatives within the framework of the Digital Single Market.

  • Avoid unnecessary regulation in an area that is under rapid development.

  • Use the Nordic Council of Ministers to facilitate collaboration in relevant policy areas.

United Nations

In September 2017, the United Nations Interregional Crime and Justice Research Institute signed the Host Country Agreement to open a Centre on Artificial Intelligence and Robotics within the UN system in The Hague, The Netherlands.2

The International Telecommunication Union worked with more than 25 other UN agencies to host the “AI for Good” Global Summit. It also partnered with organisations such as the XPRIZE Foundation and the Association for Computing Machinery. Following a first summit in June 2017, the International Telecommunication Union held a second one in Geneva in May 2018.3

UNESCO has launched a global dialogue on the ethics of AI due to its complexity and impact on society and humanity. It held a public roundtable with experts in September 2018, as well as a global conference entitled “Principles for AI: Towards a Humanistic Approach? – AI with Human Values for Sustainable Development” in March 2019. Together, they aimed to raise awareness and promote reflection on the opportunities and challenges posed by AI and related technologies. In November 2019, UNESCO’s 40th General Conference was to consider development of a recommendation on AI in 2020-21, if approved by UNESCO’s Executive Board in April 2019.

International Organization for Standardization

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created a joint technical committee, ISO/IEC JTC 1, in 1987. They tasked the committee with developing information technology standards for business and consumer applications. In October 2017, subcommittee 42 (SC 42) was set up under JTC 1 to develop AI standards. SC 42 provides guidance to ISO and IEC committees developing AI applications. Its activities include providing a common framework and vocabulary, identifying computational approaches and architectures of AI systems, and evaluating the threats and risks associated with them (Price, 2018[50]).

Private stakeholder initiatives

Non-government stakeholders have formed numerous partnerships and initiatives to discuss AI issues. While many of these initiatives are multi-stakeholder in nature, this section primarily describes those based in the technical community, the private sector, labour or academia. This list is not exhaustive.

Technical community and academia

The IEEE launched its Global Initiative on Ethics of Autonomous and Intelligent Systems in April 2016. The initiative aims to advance public discussion on the implementation of AI technologies and to define priority values and ethics. The IEEE published version 2 of its Ethically Aligned Design principles in December 2017, inviting comments from the public. It planned to publish the final version of the design guidelines in 2019 (Table 5.3) (IEEE, 2017[51]). Together with the MIT Media Lab, the IEEE launched a Council for Extended Intelligence in June 2018. It seeks to foster responsible creation of intelligent systems, reclaim control over personal data and create metrics of economic prosperity other than gross domestic product (Pretz, 2018[52]).

Table 5.3. General Principles contained in the IEEE’s Ethically Aligned Design (version 2)



Human rights: Ensure autonomous and intelligent systems (AISs) do not infringe on internationally recognised human rights

Prioritising well-being: Prioritise metrics of well-being in the design and use of AISs, because traditional metrics of prosperity do not take into account the full effect of AIS technologies on human well-being

Accountability: Ensure that designers and operators of AISs are responsible and accountable

Transparency: Ensure AISs operate in a transparent manner

AIS technology misuse and awareness of it: Minimise the risks of misuse of AIS technology

Source: IEEE (2017[51]), Ethically Aligned Design (Version 2), http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf.

The Asilomar AI Principles are 23 principles for the safe and socially beneficial development of AI in the near and longer term that resulted from the Future of Life Institute’s conference of January 2017. The Asilomar conference extracted core principles from discussions, reflections and documents produced by the IEEE, academia and non-profit organisations.

The issues are grouped into three areas. Research issues call for research funding for beneficial AI, covering difficult questions in computer science and in economics, law and social studies; a constructive “science-policy link”; and a technical research culture of co-operation, trust and transparency. Ethics and values call for AI systems to be designed and operated in ways that are safe and secure, transparent and accountable, and protective of individuals’ liberty, privacy, human dignity, rights and cultural diversity, with broad empowerment and shared benefits. Longer-term issues call for avoiding strong assumptions about the upper limits of future AI capabilities and for planning carefully for the possible development of artificial general intelligence (AGI) (FLI, 2017[53]). Table 5.4 provides the list of Asilomar AI Principles.

The non-profit AI research company OpenAI was founded in late 2015. It employs 60 full-time researchers with the mission to “build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible”.4

The AI Initiative was created by the Future Society in 2015 to help shape the global AI policy framework. It hosts an online platform for multidisciplinary civic discussion and debate. This platform aims to help understand the dynamics, benefits and risks of AI technology so as to inform policy recommendations.5

There are also numerous academic initiatives in all OECD countries and many partner economies. The MIT Internet Policy Research Initiative, for example, is helping bridge the gap between technical and policy communities. The Berkman Klein Center at Harvard University launched the Ethics and Governance of Artificial Intelligence Initiative in 2017. For its part, the MIT Media Lab is focusing on algorithms and justice, autonomous vehicles and the transparency and explainability of AI.

Table 5.4. Asilomar AI Principles (titles of the principles, excerpt)

Research issues: Research goal; Research funding; Science-policy link; Research culture; Race avoidance

Ethics and values: Safety; Failure transparency; Judicial transparency; Responsibility; Value alignment; Human values; Personal privacy; Liberty and privacy; Shared benefit; Shared prosperity; Human control

Longer-term issues: Capability caution; Importance; Risks; Recursive self-improvement; Common good

Source: FLI (2017[53]), Asilomar AI Principles, https://futureoflife.org/ai-principles/.

Private-sector initiatives

In September 2016, Amazon, DeepMind/Google, Facebook, IBM and Microsoft launched the Partnership on Artificial Intelligence to Benefit People and Society (PAI). It aims to study and formulate best practices on AI technologies, advance the public’s understanding of AI and serve as an open platform for discussion and engagement about AI and its influences on people and society. Since its creation, PAI has become a multidisciplinary stakeholder community with more than 80 members. They range from for-profit technology companies to representatives of civil society, academic and research institutions, and start-ups.

The Information Technology Industry Council (ITI) is a business association of technology companies based in Washington, DC with more than 60 members. ITI published AI Policy Principles in October 2017 (Table 5.5). The principles identified industry’s responsibility in certain areas, and called for government support of AI research and for public-private partnerships (ITI, 2017[54]). Companies are taking action individually too.

Table 5.5. ITI AI Policy Principles

Responsibility (promoting responsible development and use): Responsible design and deployment; Safety and controllability; Robust and representative data; Interpretability; Liability of AI systems due to autonomy

Opportunity for governments (investing in and enabling the AI ecosystem): Investment in AI research and development; Flexible regulatory approach; Promoting innovation and the security of the Internet; Cybersecurity and privacy; Global standards and best practices

Opportunity for public-private partnerships (promoting lifespan education and diversity): Democratising access and creating equality of opportunity; Science, technology, engineering and mathematics education; Workforce; Public-private partnership

Source: ITI (2017[54]), AI Policy Principles, https://www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf.

Civil society

The Public Voice coalition, established by the Electronic Privacy Information Center, published Universal Guidelines on Artificial Intelligence (UGAI) in October 2018 (The Public Voice, 2018[55]). The guidelines call attention to the growing challenges of intelligent computational systems and propose concrete recommendations to improve and inform their design. At its core, the UGAI promotes transparency and accountability of AI systems and seeks to ensure that people retain control over the systems they create.6 The 12 UGAI principles address various rights, obligations and prohibitions. They comprise the rights to transparency and to human determination; obligations concerning identification, fairness, assessment and accountability, accuracy, reliability and validity, data quality, public safety, cybersecurity and termination; and prohibitions on secret profiling and unitary scoring.

Labour organisations

UNI Global Union represents more than 20 million workers from over 150 countries in skills and services sectors. A future that empowers workers and provides decent work is a key UNI Global Union priority. It has identified ten key principles for Ethical AI. These aim to ensure that collective agreements, global framework agreements and multinational alliances involving unions, shop stewards and global alliances respect workers’ rights (Table 5.6) (Colclough, 2018[56]).

Table 5.6. Top 10 Principles for Ethical Artificial Intelligence (UNI Global Union)

1. AI systems must be transparent:

Workers should have the right to demand transparency in the decisions and outcomes of AI systems, as well as their underlying algorithms. They must also be consulted on AI systems implementation, development and deployment.

2. AI systems must be equipped with an ethical black box:

The ethical black box should not only contain relevant data to ensure system transparency and accountability, but also clear data and information on the ethical considerations built into the system.

3. AI must serve people and planet:

Codes of ethics for the development, application and use of AI are needed so that, throughout their entire operational process, AI systems remain compatible with, and advance, the principles of human dignity, integrity, freedom, privacy, and cultural and gender diversity, as well as fundamental human rights.

4. Adopt a human-in-command approach:

The development of AI must be responsible, safe and useful, where machines maintain the legal status of tools, and legal persons retain control over, and responsibility for, these machines at all times.

5. Ensure a genderless, unbiased AI:

In the design and maintenance of AI and artificial systems, it is vital that the system be controlled for negative or harmful human bias, and that any bias, whether of gender, race, sexual orientation or age, be identified and not propagated by the system.

6. Share the benefits of AI systems:

The economic prosperity created by AI should be distributed broadly and equally, to benefit all of humanity. Global as well as national policies aimed at bridging the economic, technological and social digital divide are therefore necessary.

7. Secure a just transition and ensure support for fundamental freedoms and rights:

As AI systems develop and augmented realities are formed, workers and work tasks will be displaced. It is vital that policies are put in place that ensure a just transition to the digital reality, including specific governmental measures to help displaced workers find new employment.

8. Establish global governance mechanisms:

Establish multi-stakeholder Decent Work and Ethical AI governance bodies on global and regional levels. The bodies should include AI designers, manufacturers, owners, developers, researchers, employers, lawyers, civil society organisations and trade unions.

9. Ban the attribution of responsibility to robots:

Robots should be designed and operated as far as is practicable to comply with existing laws, and fundamental rights and freedoms, including privacy.

10. Ban AI arms race:

Lethal autonomous weapons, including cyber warfare, should be banned. UNI Global Union calls for a global convention on ethical AI that will help address, and work to prevent, the unintended negative consequences of AI while accentuating its benefits to workers and society. We underline that humans and corporations are the responsible agents.

Source: Colclough (2018[56]), “Ethical Artificial Intelligence – 10 Essential Ingredients”, https://www.oecd-forum.org/channels/722-digitalisation/posts/29527-10-principles-for-ethical-artificial-intelligence.


[21] Benhamou, S. and L. Janin (2018), Intelligence artificielle et travail, France Stratégie, http://www.strategie.gouv.fr/publications/intelligence-artificielle-travail.

[5] Brazil (2018), Brazilian digital transformation strategy “E-digital”, Ministério da Ciência, Tecnologia, Inovações e Comunicações, http://www.mctic.gov.br/mctic/export/sites/institucional/sessaoPublica/arquivos/digitalstrategy.pdf.

[12] China (2018), AI Standardisation White Paper, Government of China, translated into English by Jeffrey Ding, Researcher in the Future of Humanity’s Governance of AI Program, https://baijia.baidu.com/s?id=1589996219403096393.

[10] China (2017), Guideline on Next Generation AI Development Plan, Government of China, State Council, http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.

[9] China (2016), Three-Year Action Plan for Promoting Development of a New Generation Artificial Intelligence Industry (2018-2020), Chinese Ministry of Industry and Information Technology, http://www.miit.gov.cn/n1146290/n1146392/c4808445/content.html.

[6] CIFAR (2017), Pan-Canadian Artificial Intelligence Strategy, CIFAR, https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy.

[56] Colclough, C. (2018), “Ethical Artificial Intelligence – 10 Essential Ingredients”, A.Ideas Series, No. 24, The Forum Network, OECD, Paris, https://www.oecd-forum.org/channels/722-digitalisation/posts/29527-10-principles-for-ethical-artificial-intelligence.

[14] Denmark (2018), Strategy for Denmark’s Digital Growth, Government of Denmark, https://em.dk/english/publications/2018/strategy-for-denmarks-digital-growth.

[43] EOP (2018), Artificial Intelligence for the American People, Executive Office of the President, Government of the United States, https://www.whitehouse.gov/briefings-statements/artificial-intelligence-american-people/.

[17] Finland (2017), Finland’s Age of Artificial Intelligence - Turning Finland into a Leader in the Application of AI, webpage, Finnish Ministry of Economic Affairs and Employment, https://tem.fi/en/artificial-intelligence-programme.

[53] FLI (2017), Asilomar AI Principles, Future of Life Institute (FLI), https://futureoflife.org/ai-principles/.

[7] Fonds de recherche du Québec (2018), “Québec lays the groundwork for a world observatory on the social impacts of artificial intelligence and digital technologies”, News Release, 29 March, https://www.newswire.ca/news-releases/quebec-lays-the-groundwork-for-a-world-observatory-on-the-social-impacts-of-artificial-intelligence-and-digital-technologies-678316673.html.

[1] G20 (2018), Ministerial Declaration – G20 Digital Economy, G20 Digital Economy Ministerial Meeting, 24 August, Salta, Argentina, https://www.g20.org/sites/default/files/documentos_producidos/digital_economy_-_ministerial_declaration_0.pdf.

[47] G7 (2018), Chairs’ Summary: G7 Ministerial Meeting on Preparing for Jobs of the Future, https://g7.gc.ca/en/g7-presidency/themes/preparing-jobs-future/.

[46] G7 (2017), Artificial Intelligence (Annex 2), http://www.g7italy.it/sites/default/files/documents/ANNEX2-Artificial_Intelligence_0.pdf.

[45] G7 (2016), Proposal of Discussion toward Formulation of AI R&D Guideline, Japanese Ministry of Internal Affairs and Communications, http://www.soumu.go.jp/joho_kokusai/g7ict/english/index.html.

[22] Germany (2018), Artificial Intelligence Strategy, Federal Government of Germany, https://www.ki-strategie-deutschland.de/home.html.

[23] Germany (2017), Automated and Connected Driving, BMVI Ethics Commission, https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile.

[39] Hall, W. and J. Pesenti (2017), Growing the Artificial Intelligence Industry in the UK, independent review, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/652097/Growing_the_artificial_intelligence_industry_in_the_UK.pdf.

[51] IEEE (2017), Ethically Aligned Design (Version 2), IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf.

[24] India (2018), “National Strategy for Artificial Intelligence #AI for All”, Discussion Paper, NITI Aayog, http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf.

[25] Italy (2018), Artificial Intelligence at the Service of the Citizen, Agency for Digital Italy, https://libro-bianco-ia.readthedocs.io/en/latest/.

[54] ITI (2017), AI Policy Principles, Information Technology Industry Council, https://www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf.

[28] Japan (2019), Social Principles for Human-Centric AI, Japan Cabinet Office, April, https://www8.cao.go.jp/cstp/stmain/aisocialprinciples.pdf.

[30] Japan (2018), Draft AI Utilization Principles, Ministry of Internal Affairs and Communications, Japan, http://www.soumu.go.jp/main_content/000581310.pdf.

[27] Japan (2018), Integrated Innovation Strategy, Japan Cabinet Office, June, https://www8.cao.go.jp/cstp/english/doc/integrated_main.pdf.

[26] Japan (2017), Artificial Intelligence Technology Strategy, Strategic Council for AI Technology, http://www.nedo.go.jp/content/100865202.pdf.

[29] Japan (2017), Draft AI R&D Guidelines for International Discussions, Ministry of Internal Affairs and Communications, Japan, http://www.soumu.go.jp/main_content/000507517.pdf.

[8] Jing, M. and S. Dai (2018), Here’s what China is doing to boost its artificial intelligence capabilities, 10 May, https://www.scmp.com/tech/science-research/article/2145568/can-trumps-ai-summit-match-chinas-ambitious-strategic-plan.

[15] Kaevats, M. (2017), Estonia’s Ideas on Legalising AI, presentation at the "AI: Intelligent Machines, Smart Policies” conference, Paris, 26-27 October, https://prezi.com/yabrlekhmcj4/oecd-6-7min-paris/.

[16] Kaevats, M. (2017), “Estonia considers a ‘kratt law’ to legalise artificial intelligence (AI)”, E-residency blog, 25 September, https://medium.com/e-residency-blog/estonia-starts-public-discussion-legalising-ai-166cb8e34596.

[11] Kania, E. (2018), “China’s AI agenda advances”, The Diplomat, 14 February, https://thediplomat.com/2018/02/chinas-ai-agenda-advances/.

[32] Korea (2016), Mid- to Long-term Master Plan in Preparation for the Intelligent Information Society, Government of Korea, Interdepartmental Exercise, http://english.msip.go.kr/cms/english/pl/policies2/__icsFiles/afieldfile/2017/07/20/Master%20Plan%20for%20the%20intelligent%20information%20society.pdf.

[31] Korea (2016), “MSIP announces development strategy for the intelligence information industry”, Science, Technology & ICT Newsletter, Ministry of Science and ICT, Government of Korea, No. 16, https://english.msit.go.kr/english/msipContents/contentsView.do?cateId=msse44&artId=1296203.

[34] Mexico (2018), Towards an AI strategy in Mexico: Harnessing the AI Revolution, British Embassy Mexico City, Oxford Insights, C minds, http://go.wizeline.com/rs/571-SRN-279/images/Towards-an-AI-strategy-in-Mexico.pdf.

[48] Muller, C. (2017), Opinion on the Societal Impact of AI, European Economic and Social Committee, Brussels, https://www.eesc.europa.eu/en/our-work/opinions-information-reports/opinions/artificial-intelligence.

[49] Nordic (2018), AI in the Nordic-Baltic Region, Nordic Council of Ministers, https://www.regeringen.se/49a602/globalassets/regeringen/dokument/naringsdepartementet/20180514_nmr_deklaration-slutlig-webb.pdf.

[3] OECD (2019), Recommendation of the Council on Artificial Intelligence, OECD, Paris.

[2] OECD (2019), Scoping Principles to Foster Trust in and Adoption of AI – Proposal by the Expert Group on Artificial Intelligence at the OECD (AIGO), OECD, Paris, http://oe.cd/ai.

[13] OGCR (2018), Analysis of the Development Potential of Artificial Intelligence in the Czech Republic, Office of the Government of the Czech Republic, https://www.vlada.cz/assets/evropske-zalezitosti/aktualne/AI-Summary-Report.pdf.

[33] Peng, T. (2018), “South Korea aims high on AI, pumps $2 billion into R&D”, Medium, 16 May, https://medium.com/syncedreview/south-korea-aims-high-on-ai-pumps-2-billion-into-r-d-de8e5c0c8ac5.

[4] Porter, M. (1990), “The competitive advantage of nations”, Harvard Business Review, March-April, https://hbr.org/1990/03/the-competitive-advantage-of-nations.

[52] Pretz, K. (2018), “IEEE Standards Association and MIT Media Lab form council on extended intelligence”, IEEE Spectrum, http://theinstitute.ieee.org/resources/ieee-news/ieee-standards-association-and-mit-media-lab-form-council-on-extended-intelligence.

[50] Price, A. (2018), “First international standards committee for entire AI ecosystem”, IE e-tech, Issue 03, https://iecetech.org/Technical-Committees/2018-03/First-International-Standards-committee-for-entire-AI-ecosystem.

[35] Russia (2017), Digital Economy of the Russian Federation, Government of the Russian Federation, http://pravo.gov.ru.

[36] Singapore (2018), Digital Economy Framework for Action, Infocomm Media Development Authority, https://www.imda.gov.sg/-/media/imda/files/sg-digital/sgd-framework-for-action.pdf?la=en.

[18] Sivonen, P. (2017), Ambitious Development Program Enabling Rapid Growth of AI and Platform Economy in Finland, presentation at the "AI Intelligent Machines, Smart Policies" conference, Paris, 26-27 October, http://www.oecd.org/going-digital/ai-intelligent-machines-smart-policies/conference-agenda/ai-intelligent-machines-smart-policies-sivonen.pdf.

[55] The Public Voice (2018), Universal Guidelines for Artificial Intelligence, The Public Voice Coalition, October, https://thepublicvoice.org/ai-universal-guidelines/memo/.

[20] Thompson, N. (2018), “Emmanuel Macron talks to WIRED about France’s AI strategy”, WIRED, 31 March, https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy.

[41] UK (2018), AI Sector Deal, Department for Business, Energy & Industrial Strategy and Department for Digital, Culture, Media & Sport, Government of the United Kingdom, https://www.gov.uk/government/publications/artificial-intelligence-sector-deal.

[42] UK (2018), Centre for Data Ethics and Innovation Consultation, Department for Digital, Culture, Media & Sport, Government of the United Kingdom, https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation/centre-for-data-ethics-and-innovation-consultation.

[40] UK (2017), Industrial Strategy: Building a Britain Fit for the Future, Government of the United Kingdom, https://www.gov.uk/government/publications/industrial-strategy-building-a-britain-fit-for-the-future.

[38] UK (2017), UK Digital Strategy, Government of the United Kingdom, https://www.gov.uk/government/publications/uk-digital-strategy/uk-digital-strategy.

[44] US (2017), “Delaney launches bipartisan artificial intelligence (AI) caucus for 115th Congress”, Congressional Artificial Intelligence Caucus, News Release, 24 May, https://artificialintelligencecaucus-olson.house.gov/media-center/press-releases/delaney-launches-ai-caucus.

[19] Villani, C. (2018), For a Meaningful Artificial Intelligence - Towards a French and European Strategy, AI for Humanity, https://www.aiforhumanity.fr/ (accessed on 15 December 2018).

[37] Vinnova (2018), Artificial Intelligence in Swedish Business and Society, Vinnova, 28 October, https://www.vinnova.se/contentassets/29cd313d690e4be3a8d861ad05a4ee48/vr_18_09.pdf.


← 1. The report was commissioned by the British Embassy in Mexico, funded by the United Kingdom’s Prosperity Fund and developed by Oxford Insights and C Minds, with the collaboration of the Mexican Government and input from experts across Mexico.

← 2. See www.unicri.it/news/article/2017-09-07_Establishment_of_the_UNICRI.

← 3. See https://www.itu.int/en/ITU-T/AI/.

← 4. See https://openai.com/about/#mission.

← 5. See http://ai-initiative.org/ai-consultation/.

← 6. For more information, see https://thepublicvoice.org/ai-universal-guidelines/memo/.
