Chapter 2. Artificial intelligence and the technologies of the Next Production Revolution

Nolan Alistair

Mastering the technologies of the Next Production Revolution requires effective policy in wide-ranging fields, including digital infrastructure, skills and intellectual property rights. This chapter examines a selection of policy initiatives that aim to enable this transformation process and ensure it benefits society. Developing and adopting new production technologies is essential to raising living standards and countering declining labour productivity growth in many OECD countries. Digital technologies can increase productivity in many ways. Artificial intelligence (AI) could spur the development of entirely new industries. And technologies enabled by digital advances, such as biotechnology, 3D printing and new materials, promise important economic and social benefits. This chapter has two parts. The first covers individual technologies, their applications in production and their specific policy implications. These technologies are: AI, blockchain, 3D printing, industrial biotechnology, new materials and nanotechnology. The second part of the chapter addresses two cross-cutting policy issues relevant to future production: access to and awareness of high-performance computing, and public support for research (with a focus on public research for advanced computing and AI).



Developing and adopting new production technologies is essential to raising living standards and countering the declining labour productivity growth in many OECD countries over recent decades. Rapid population ageing – the dependency ratio in OECD countries is set to double over the next 35 years – makes raising labour productivity more urgent. Digital technologies can increase productivity in many ways. For example, they can reduce machine downtime, as intelligent systems predict maintenance needs. They can also perform work more quickly, precisely and consistently, as increasingly autonomous, interactive and inexpensive robots are deployed. New production technologies will also benefit the natural environment in several new ways. For example, nanotechnology is helping to develop materials that cool themselves to below ambient temperature without consuming energy.1

This chapter examines a selection of policies aiming to enable the Next Production Revolution. With the exceptions of artificial intelligence (AI) and blockchain, it describes only briefly some of the many transformational uses of digital technology in production, as these developments are reviewed in (among other publications) OECD (2017, 2018a). Instead, the chapter emphasises policy initiatives and policy research findings that have arisen recently, or were not addressed in OECD (2017).

This chapter has two parts. The first covers individual technologies and their specific policy implications, namely AI and blockchain in production, 3D printing, industrial biotechnology, new materials and nanotechnology. The second addresses just two of the many cross-cutting policy issues relevant to future production, namely: access to and awareness of high-performance computing (HPC), and public support for research. Particular attention is given to public research related to computing and AI, as well as the institutional mechanisms needed to enhance the impact of public research.

Production technologies: Recent developments and policy implications

AI in production

The Oxford English Dictionary defines artificial intelligence as “the theory and development of computer systems able to perform tasks normally requiring human intelligence”. Expert systems – a form of AI drawing on pre-programmed expert knowledge – have been used in industrial processes for close to four decades (Zweben and Fox, 1994). However, with the development of deep learning using artificial neural networks2 – the main source of recent progress in the field – AI can be applied to most industrial activities, from optimising multi-machine systems to enhancing industrial research (Box 2.1). Furthermore, the use of AI in production will be spurred by automated machine learning processes that can help businesses, scientists and other users employ the technology more readily. Currently, with respect to AI that uses deep learning techniques and artificial neural networks, the greatest commercial potential for advanced manufacturing is expected to exist in supply chains, logistics and process optimisation (McKinsey Global Institute, 2018). Some survey evidence also suggests that the transportation and logistics, automotive and technology sectors lead in terms of the share of early AI-adopting firms (Boston Consulting Group, 2018).

Box 2.1. Recent applications of AI in production

A sample of recent uses of AI in production illustrates the breadth of the industries and processes involved:

  • In pharmaceuticals, AI is set to become the “primary drug-discovery tool” by 2027, according to Leo Barella, Global Head of Enterprise Architecture at AstraZeneca. AI in preclinical stages of drug discovery has many applications, from compound identification, to managing genomic data, analysing drug safety data and enhancing in-silico modelling (AI Intelligent Automation Network, 2018).

  • In aerospace, Airbus deployed AI to identify patterns in production problems when building its new A350 aircraft. A worker might encounter a difficulty that has not been seen before, but the AI, analysing a mass of contextual information, might recognise a similar problem from other shifts or processes. Because the AI immediately recommends how to solve production problems, the time required to address disruptions has been cut by one-third (Ransbotham et al., 2017).

  • In semiconductors, an AI system can now assemble circuitry for computer chips, atom by atom (Chen, 2018), and AI-based machine-vision instruments can identify defects in manufactured products – such as electronic components – at scales that are invisible to the unaided eye.

  • In the oil industry, General Electric’s camera-carrying robots inspect the interior of oil pipelines, looking for microscopic fissures. If laid side by side, this imagery would cover 1 000 square kilometres every year. AI inspects this photographic landscape and alerts human operators when it detects potential faults (Champain, 2018).

  • In mining, AI is being used to explore for mineral deposits, optimise the use of explosives at the mine face (taking into consideration the cost of milling larger chunks of unexploded material later on), and operate autonomous drills, ore sorters, loaders and haulage trucks. In July 2017, BHP switched to completely autonomous trucks at a mine in Western Australia (Walker, 2017).

  • In construction, generative software uses AI to explore every permutation of a design blueprint, suggesting optimal building shapes and layouts, including the routing of plumbing and electrical wiring, and linking scheduling information to each building component.

  • AI is exploring decades of experimental data to radically shorten the time needed to discover new industrial materials, sometimes from years to days (Chen, 2017).

  • AI is also enabling robots to take plain-speech instructions from human operators, including commands not foreseen in the robot’s original programming (Dorfman, 2018).

  • Finally, AI is making otherwise unmanageable volumes of Internet of things (IoT) data actionable. For example, General Electric operates a virtual factory, permanently connected to data from machines, to simulate and improve even highly optimised production processes. Used for predictive maintenance, AI can process combined audio, video and sensor data, and even text on maintenance history, to greatly surpass the performance of traditional maintenance practices.

Beyond its direct uses in production, the use of AI in logistics is enabling real-time fleet management, while significantly reducing fuel consumption and other costs. AI can also lower energy consumption in data centres (Sverdlik, 2018). In addition, AI can assist digital security: for example, the software firm Pivotal has created an AI system that recognises when text is likely to be part of a password, helping to avoid accidental online dissemination of passwords. Meanwhile, Lex Machina is blending AI and data analytics to radically alter patent litigation (Harbert, 2013). Many social-bot start-ups also automate tasks, such as meeting scheduling, business-data and information retrieval, and expense management (Birdly). Finally, AI is being combined with other technologies – such as augmented and virtual reality – to enhance workforce training and cognitive assistance (Box 2.2).

Box 2.2. In my view: AI and digitalisation for workforce training and assistance

Globalisation has increased the demand for customisation, with small product runs requiring agile supply chains. The adaptability demanded of workers is increasing, and established training methods are no longer sufficient. Digitalisation and AI could revolutionise how workers are trained, both on and off the job. Digitalisation itself has drastically lowered the investment in hardware necessary for on-the-job training, as powerful computers allow accurate interactive simulation of complex production processes. For example, human-in-the-loop simulation using virtual-reality headsets has lowered the hardware costs of digital training systems from thousands of dollars to a few hundred. The cost of augmented-reality systems and multimodal interfaces will also continue to decrease, while their performance in factory conditions continues to improve.

The key challenge to reaping the full benefits of digitally delivered training and assistance systems lies in the training material itself. Training courses require specialist knowledge, often from heterogeneous sources, and adaptation to context (worker experience, culture, existing skills, time available, characteristics of the manufacturing operation where training is required, etc.). Today, training material is largely developed manually, which is costly and time-consuming. AI has begun to provide solutions to this challenge. Chatbots and similar systems are now able to interact with workers using natural language, providing answers and context-specific help that often draw on multiple databases.

More significantly still, connected AI is set to tap into collective experience to improve training and cognitive assistance. Shared training databases can contain data on the cumulative experience of many workers undergoing training, as well as their subsequent performance, their responses in unexpected situations and other variables. If training systems are scaled up to serve communities of thousands of users, they will be enormously useful.

Over time, a major effect of AI on production could be the creation of new industries

Beyond such applications, a main effect of AI on future production could be the creation of entirely new industries, based on scientific breakthroughs enabled by AI, much as the discovery of DNA structure in the 1950s led to a revolution in industrial biotechnology and the creation of vast economic value (the global market for recombinant DNA technology has been estimated at around USD 500 billion).3 Approximately 40 years separated the elucidation of DNA structure and the emergence of a major biotech industry, and around 100 years passed between the scientific revolution in quantum physics and the recent birth of quantum computing (Box 2.5). Such observations underscore the importance of basic research and the importance of long time horizons in some aspects of research policy.

AI: specific policies

Several types of policy affect the development and diffusion of AI. These include: regulations governing data privacy (because of the critical importance of training data for AI systems); liability rules (which particularly affect diffusion); research support (Section 3.2); intellectual property rules; and systems for skills. Other policies are most relevant to the (still uncertain) consequences of AI. These could include: competition policy; economic and social policies that mitigate inequality; policies for education and training; measures that affect public perceptions of AI; and policies related to digital security. Well-designed policies for AI are likely to have high returns, because AI can be widely applied and accelerate innovation (Cockburn et al., 2018). Some of the policies concerned – such as those affecting skills – are relevant to any important new technology. This section focuses on policies most specifically affecting AI in production, namely: policies that affect the availability of training data, measures to address hardware constraints, and the design of regulations that do not unnecessarily hinder innovation.

Training data are critical

Wissner-Gross (2016) reviews the timing of the most publicised AI advances over the past 30 years and notes that the average length of time between significant data creation and major AI performance breakthroughs has been much shorter than the average time between algorithmic progress and the same AI breakthroughs. Among many examples, Wissner-Gross cites the performance of Google’s GoogLeNet software, which achieved near-human level object classification in 2014, using a variant of an algorithm developed 25 years earlier. But the software was trained on ImageNet, a huge corpus of labelled images and object categories that had become available just four years earlier.4

Many tools that firms employ to manage and use AI exist as free software in open source (i.e. their source code is public and modifiable). These include software libraries such as TensorFlow and Keras, and tools that facilitate coding such as GitHub, text editors like Atom and Nano, and development environments like Anaconda and RStudio. Machine learning-as-a-service platforms also exist, such as Michelangelo, Uber’s internal system that helps teams build, deploy and operate machine-learning solutions. The challenges in using AI in production relate to its application in specific systems and the creation of high-quality training data.

Without large volumes of training data, many AI models are inaccurate. A deep-learning supervised algorithm may need 5 000 labelled examples per item and up to 10 million labelled examples to match human performance (Goodfellow, Bengio and Courville, 2016). The highest-value uses of AI often combine diverse data types, such as audio, text and video. In many uses, training data must be refreshed monthly or even daily (McKinsey Global Institute, 2018). Consequently, companies with large data resources and internal AI expertise, such as Google and Alibaba, have an advantage in deploying AI. Furthermore, many industrial applications are still somewhat new and bespoke, limiting data availability. By contrast, sectors such as finance and marketing have used AI for a longer time (Faggella, 2018).
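The dependence on labelled data described above can be made concrete with a toy experiment. The sketch below is purely illustrative (synthetic data, a simple nearest-centroid classifier, and example counts chosen for speed); none of it comes from the chapter's cited sources:

```python
# Illustrative only: a toy supervised classifier whose quality depends on the
# volume of labelled training data. Two classes are drawn from noisy Gaussian
# clusters; the "model" is just the estimated centre of each class.
import math
import random

random.seed(42)

CENTRES = {0: (0.0, 0.0), 1: (2.0, 2.0)}  # true (unknown) class centres
NOISE = 1.5                               # noise on each feature

def sample(label, n):
    cx, cy = CENTRES[label]
    return [(random.gauss(cx, NOISE), random.gauss(cy, NOISE)) for _ in range(n)]

def train(examples_per_class):
    """Estimate each class centre from labelled examples."""
    return {label: (sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts))
            for label in CENTRES
            for pts in [sample(label, examples_per_class)]}

def accuracy(model, n_test=1000):
    correct = 0
    for label in CENTRES:
        for point in sample(label, n_test // 2):
            pred = min(model, key=lambda l: math.dist(point, model[l]))
            correct += (pred == label)
    return correct / n_test

small = accuracy(train(5))     # 5 labelled examples per class
large = accuracy(train(5000))  # 5 000 labelled examples per class
print(f"5 examples/class: {small:.3f}   5 000 examples/class: {large:.3f}")
```

With thousands of labelled examples the estimated centres converge on the true ones, while with a handful they can sit far off, degrading predictions – a small-scale analogue of why data-rich firms hold an advantage in deploying AI.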

In the future, research advances may make AI systems less data-hungry. For instance, AI may learn from fewer examples, or generate robust training data (Simonite, 2016). In December 2017, the computer program AlphaZero famously achieved a world-beating level of performance in chess by playing against itself, using just the rules of the game, without recourse to external data. In rules-based games such as chess and Go, however, high performance can be achieved based on simulated data. For the time being, external training data must be cultivated for real-world applications.

Governments can take steps to help develop and share training data

Many firms hold valuable data which they do not use effectively, whether for lack of in-house skills and knowledge, a corporate data strategy or data infrastructure, or for other reasons. This can be the case even in firms with enormous financial resources. For example, by some accounts, less than 1% of the data generated on oil rigs are used (The Economist, 2017). However, many AI start-ups, and other businesses using AI, could create value from data they cannot easily access. To help address this mismatch, governments can act as catalysts and honest brokers for data partnerships. Among other measures, they could work with relevant stakeholders to develop voluntary model agreements for trusted data sharing. For example, the US Department of Transportation has prepared the draft “Guiding Principles on Data Exchanges to Accelerate Safe Deployment of Automated Vehicles”. The Digital Catapult in the United Kingdom also plans to publish model agreements for start-ups entering into data-sharing agreements (DSAs).

Government agencies can also co-ordinate and steward DSAs for AI purposes

DSAs operate between firms, and between firms and public research institutions. Co-ordination could be helpful in cases where all data holders would benefit from data sharing, but individual data holders are reluctant to share data unilaterally, or are unaware of potential data-sharing opportunities. For example, a total of 359 offshore oil rigs were operational in the North Sea and the Gulf of Mexico as of January 2018. AI-based prediction of potentially costly accidents on oil rigs would be improved if this statistically small number of data holders were to share their data (in fact, the Norwegian Oil and Gas Association has asked all members to have a data-sharing strategy in place by the end of 2018).

The Digital Catapult’s Pit Stop open-innovation activity (which complements the Catapult’s model DSAs mentioned earlier) is an example of co-ordination aiming to foster DSAs. Pit Stop brings together large businesses, academic researchers and start-ups in collaborative problem-solving challenges around data and digital technologies. Also in the United Kingdom, the Turing Institute operates the Data Study Group, to which major private and public-sector organisations bring data-science problems for analysis: Institute researchers are thereby able to work on real-world problems using industry datasets, while businesses have their problems solved and learn about the value of their data. In a model that promotes data sharing without DSAs, Japan has developed the Industrial Value Chain Initiative, a collaborative cloud-based platform/repository where member firms share data to help implement digital applications.

Governments can promote open-data initiatives and ensure that public data are disclosed in machine-readable formats for AI purposes

Open-data initiatives exist in many countries, covering diverse public administrative and research data (Chapter 6). To facilitate AI applications, disclosed public data should be machine-readable. A further measure to encourage AI could consist in ensuring that copyright laws allow data and text mining, provided this does not lead to substitution of the original works or unreasonably prejudice the legitimate interests of the copyright owners. Governments can also promote the use of digital data exchanges5 that share public and private data for the public good.

Technology itself may offer novel solutions to use data better for AI purposes

Sharing data can require overcoming a number of institutional barriers. Data holders in large organisations can face considerable internal bureaucracy before receiving permission to release data. Even with a DSA, data holders worry that data might not be used according to the terms of an agreement, or that client data will be shared accidentally. In addition, some datasets may be too big to share in practical ways: for instance, the data in 100 human genomes could consume 30 terabytes (30 million megabytes). Uncertainty over the provenance of counterpart data can also hinder data sharing or purchase. Ocean Protocol,6 an open-source protocol built by the non-profit Ocean Protocol Foundation, is pioneering a system linking blockchain and AI, to address such concerns and incentivise secure data exchange. By combining blockchain and AI, data holders can obtain the benefits of data collaboration, with full control and verifiable audit. Under one use case, data are not shared or copied. Instead, algorithms go to the data for training purposes, with all work on the data recorded in the distributed ledger. Ocean Protocol is currently building a reference open-source marketplace for data, which users can adapt to their own needs to trade data services securely. Governments should be alert to the possibilities of using such technology in public open-data initiatives.
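The "algorithms go to the data" idea described above can be sketched in a few lines. This is not Ocean Protocol's actual design, which involves a distributed ledger and token incentives; the class and method names below are illustrative assumptions only:

```python
# A minimal sketch of the "compute-to-data" pattern: the algorithm travels to
# the data holder, only an aggregate result leaves, and every access is
# appended to a hash-chained audit log that later parties can verify.
import hashlib
import json

class DataHolder:
    def __init__(self, records):
        self._records = records   # raw data never leaves this object
        self.audit_log = []       # hash-chained record of every access

    def _append_audit(self, entry):
        prev = self.audit_log[-1]["hash"] if self.audit_log else "genesis"
        payload = json.dumps({"prev": prev, "entry": entry}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.audit_log.append({"prev": prev, "entry": entry, "hash": digest})

    def run(self, requester, aggregate_fn):
        """Run an approved aggregate function on the data; return only the result."""
        result = aggregate_fn(self._records)
        self._append_audit({"requester": requester})
        return result

# A rig operator holds sensor readings; an AI firm needs only a summary statistic.
holder = DataHolder([71.2, 69.8, 74.1, 70.5])
mean_reading = holder.run("ai-startup", lambda rs: sum(rs) / len(rs))
```

The design point is that the requester obtains the statistical value of the data (here, a mean) without ever holding a copy, while the audit log gives the data holder a tamper-evident account of who computed what.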

Governments can also help resolve hardware constraints for AI applications

As AI projects move from concept to commercial application, specialised and expensive cloud-computing and graphics-processing unit (GPU) resources are often needed. Trends in AI experiments show extraordinary growth in the computational power required. According to one estimate, the largest recent experiment, AlphaGo Zero, required 300 000 times the computing power needed for the largest experiment just six years before (OpenAI, 2018). Indeed, AlphaGo Zero’s achievements in chess and Go involved computing power estimated to exceed that of the world’s ten most powerful supercomputers combined (Digital Catapult, 2018).
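A back-of-the-envelope calculation shows how steep this trend is. Taking the chapter's rounded figures (a 300 000-fold increase over roughly six years), the implied doubling time of compute demand works out to about four months:

```python
# Implied doubling time of AI compute demand, from the chapter's rounded
# figures: a 300 000-fold increase over roughly six years (72 months).
import math

growth_factor = 300_000
months = 6 * 12
doublings = math.log2(growth_factor)  # number of doublings in the period
doubling_time = months / doublings    # months per doubling
print(f"{doublings:.1f} doublings -> one doubling every {doubling_time:.1f} months")
```

A resource requirement that doubles every few months quickly outruns what most start-ups can buy outright, which is the rationale for the access programmes discussed below.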

An AI entrepreneur might have the knowledge and financial resources to develop a proof-of-concept for a business, but lack the necessary hardware-related expertise and hardware resources to build a viable AI company. To help address such issues, Digital Catapult runs the Machine Intelligence Garage programme, which works with industry partners – such as GPU manufacturer NVIDIA, intelligence processing unit (IPU) producer Graphcore, and cloud providers Amazon Web Services and Google Cloud Platform – to give early-stage AI businesses access to computing power and technical expertise.

Care is needed to avoid regulating AI in ways that unnecessarily dampen innovation

Algorithmic transparency, explainability and accountability are among the key concerns in discussions on AI regulation (OECD, 2018b). While this chapter does not examine these questions, a few overarching observations are relevant. First, economy-wide regulation of AI may not be optimal at this time: the technology is still young, and many of its impacts are still unclear (Chapter 10). While international experience with the regulation of AI is still limited, there are grounds for thinking that regulation should specifically cover identified harms arising in particular sectors and applications, and be addressed by the agencies already responsible for regulating the relevant sectors. A broad trade-off exists between the accuracy of algorithms and their scrutability. This trade-off highlights the risk that universal regulation of transparency and explainability would dampen innovation. New and Castro (2017) argue that an overall approach emphasising algorithmic accountability might best protect society’s needs, while also encouraging innovation. The impacts of any adopted regulation, whatever its form, should be closely monitored. Finally, regulatory reviews should be frequent, because AI technology is changing rapidly.7

Blockchain in production

Blockchain – a distributed ledger technology – has many potential applications in production (Box 2.3). Blockchain is still an immature technology, and many applications are only at the proof-of-concept stage. The future evolution of blockchain involves various unknowns, for example with respect to standards for interoperability across systems. However, similar to the “software as a service” model, “blockchain as a service” is already provided by companies such as Microsoft, SAP, Oracle, Hewlett-Packard, Amazon and IBM. Furthermore, consortia such as Hyperledger and the Enterprise Ethereum Alliance are developing open-source distributed ledger technologies in several industries (European Commission, 2018).

Adopting blockchain in production creates several challenges: blockchain involves fundamental changes in business processes, particularly with regard to agreements and engagement among many actors in a supply chain. When many computers are involved, transaction speeds may also be slower than some alternative processes, at least with current technology (fast protocols operating on top of blockchain are under development). Blockchains are most appropriate when disintermediation, security, proof of source and establishing a chain of custody are priorities (Vujinovic, 2018). A further challenge relates to the fact that much blockchain development remains atomised: the scalability of any single blockchain-based platform – be it in supply chains or financial services – will depend on whether it is interoperable with other platforms (Hardjono et al., 2018).
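The core property that makes a distributed ledger useful for proof of source and chain of custody – immutability – can be illustrated with a minimal hash chain. Real blockchains add consensus across many nodes; this single-machine sketch, with invented example data, only shows why retroactive edits are detectable:

```python
# A minimal hash-chained ledger: each block stores the hash of its
# predecessor, so altering any earlier block breaks every later link.
import hashlib
import json

def block_hash(block):
    payload = json.dumps({"prev": block["prev"], "data": block["data"]},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, data):
    prev = chain[-1]["hash"] if chain else "genesis"
    block = {"prev": prev, "data": data}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "genesis"
        if block["prev"] != expected_prev or block["hash"] != block_hash(block):
            return False
    return True

chain = []
append(chain, {"part": "bracket-0001", "station": "assembly"})
append(chain, {"part": "bracket-0001", "station": "inspection"})
assert verify(chain)                      # untampered chain checks out

chain[0]["data"]["station"] = "tampered"  # a retroactive edit...
assert not verify(chain)                  # ...is immediately detectable
```

It is this tamper evidence, replicated across independent parties, that underpins the supply-chain applications in Box 2.3.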

Blockchain: Possible policies

Regulatory sandboxes are designed to help governments better understand a new technology and its regulatory implications, while at the same time giving industry an opportunity to test new technology and business models in a live environment (Chapter 10). Evaluations of the impacts of regulatory sandboxes are sparse (Financial Conduct Authority (2017) is an exception8). Blockchain regulatory sandboxes mostly focus on Fintech, and are being developed in countries as diverse as Australia, Canada, Indonesia, Japan, Malaysia, Switzerland, Thailand and the United Kingdom (European Commission, 2018). Subject to proper impact assessment of such schemes, and to careful design of selection processes that avoid benefitting some companies at the expense of others, the scope of sandboxes could be broadened to encompass blockchain applications in industry and other non-financial sectors.

By using blockchain in the public sector, governments could also raise awareness of blockchain’s potential, when it improves on existing technologies. Technical issues also need to be resolved, such as how to trust the data placed on the blockchain. Trustworthy data may need to be certified in some way. Blockchain may also raise concerns about competition policy, as some large corporations begin to mobilise through consortia to establish blockchain standards, e.g. for supply-chain management.

Box 2.3. Blockchain: Potential applications in production

By providing a decentralised, consensus-based, immutable record of transactions, blockchain could transform important aspects of production when combined with other technologies. For example:

  • A main application of blockchain is tracking and tracing in supply chains. One consequence could be a reduction in counterfeiting: in the motor-vehicle industry alone, firms lose tens of billions of dollars a year to counterfeit parts (Williams, 2013).

  • Blockchain could replace elements of enterprise resource-planning systems. The Swedish software company IFS has demonstrated how blockchain can be integrated with enterprise resource-planning systems in the aviation industry. Commercial aircraft have millions of parts. Each part must be tracked, and a record kept of all maintenance work. Blockchain could resolve current failures in such tracking (Mearian, 2017).

  • Blockchain is being tested as a medium permitting end-to-end encryption of the entire process of designing, transmitting and printing 3D computer-aided design (CAD) files, with each printed part embodying a unique digital identity and memory (European Commission, 2018). If successful, this technology could incentivise innovation using 3D printing, protect intellectual property and help address counterfeiting.

  • By storing the digital identity of every manufactured part, blockchain could provide proof of compliance with warranties, licences and standards in production, installation and maintenance (European Commission, 2018).

  • Blockchain could induce more efficient utilisation of industrial assets. For example, a trusted record of the usage history for each machine and piece of equipment would facilitate developing a secondary market for such assets.

  • Blockchain could authenticate machine-based data exchanges, implement associated micro-payments and help monetise the IoT. In addition, recording machine-to-machine exchanges of valuable information could lead to “data collateralisation”, giving lenders the security to finance supply chains and helping smaller suppliers overcome working-capital shortages (Mearian, 2017) (Chapter 13). By providing verifiably accurate data across the production and distribution processes, blockchain could also enhance predictive analytics.

Blockchain could further automate supply chains through the digital execution of “smart contracts”, which rely on pre-agreed obligations being verified automatically. Maersk, for example, is working with IBM to test a blockchain-based approach for all documents used in bulk shipping. Combined with ongoing developments in the IoT, such smart contracts could eventually lead to full transactional autonomy for many machines (Vujinovic, 2018).
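The smart-contract logic described above – pre-agreed obligations verified automatically, with settlement following as a matter of course – can be sketched in plain Python. Real smart contracts execute on a blockchain platform and draw events from trusted oracles; the names and figures below are illustrative assumptions:

```python
# A plain-Python simulation of a "smart contract": payment is released
# automatically once all pre-agreed obligations have been observed.

class ShipmentContract:
    def __init__(self, buyer, seller, price, required_events):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.required_events = set(required_events)  # pre-agreed obligations
        self.observed_events = set()
        self.paid = False

    def record_event(self, event):
        """Events would come from trusted IoT sensors or shipping documents."""
        self.observed_events.add(event)
        self._settle()

    def _settle(self):
        # All obligations verified: release payment without human intervention.
        if not self.paid and self.required_events <= self.observed_events:
            self.paid = True

contract = ShipmentContract("importer", "carrier", 12_000,
                            ["cargo-loaded", "customs-cleared", "delivered"])
contract.record_event("cargo-loaded")
contract.record_event("customs-cleared")
assert not contract.paid        # obligations not yet complete
contract.record_event("delivered")
assert contract.paid            # payment released automatically
```

Coupled with IoT devices reporting the events directly, this control flow is what could eventually give machines the "full transactional autonomy" noted above.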

3D printing

3D printing is expanding rapidly, thanks to falling printer and materials prices, higher-quality printed objects and innovation in methods. Recent innovations include 3D printing with novel materials, such as glass, biological cells and even liquids (maintained as structures using nanoparticles); robot-arm printheads that allow printing objects larger than the printer itself (opening the way for automated construction); touchless manipulation of print particles with ultrasound (allowing printing electronic components sensitive to static electricity); and hybrid 3D printers, combining additive manufacturing with computer-controlled machining and milling. Research is also advancing on so-called 4D printing, in which printed materials are programmed to change shape after printing.

Most 3D printing is used to make prototypes, models and tools. Currently, 3D printing is not cost-competitive at volume with traditional mass-production technologies, such as plastic injection moulding. Wider use of 3D printing depends on how the technology evolves in terms of the print time, cost, quality, size and choice of materials (OECD, 2017). The costs of switching from traditional mass-production technologies to 3D printing are expected to decline in the coming years as production volumes grow, although it is difficult to predict precisely how fast 3D printing will diffuse. Furthermore, the cost of switching is not the same across all industries and applications.

3D printing: Specific policies

OECD (2017) examined policy options to enhance 3D printing’s effects on environmental sustainability. One priority is to encourage low-energy printing processes (e.g. using chemical processes rather than melting material, and automatic switching to low-power states when printers are idle). Another priority is to use and develop low-impact materials with useful end-of-life characteristics (such as compostable biomaterials). Policy mechanisms to achieve these priorities include:

  • targeting grants or investments to commercialise research in these directions

  • creating a voluntary certification system to label 3D printers with different grades of sustainability across multiple characteristics, which could also be linked to preferential purchasing programmes by governments and other large institutions.

Ensuring legal clarity around intellectual property rights, for 3D printing of spare parts for products that are no longer manufactured, could also be environmentally beneficial. For example, a washing machine that is no longer in production may be thrown away because a single part is broken; a CAD file for the required part could keep the machine in operation. However, most CAD files are proprietary. One solution would be to grant third parties the right to print replacement parts for such products, with royalties paid to the original product manufacturers as needed.

Governments can help develop the knowledge needed for 3D printing at the production frontier

Bonnin-Roca et al. (2016) describe another possible policy area. They observe that metals-based additive manufacturing (MAM) has many potential uses in commercial aviation. However, MAM is a relatively immature technology – the fabrication processes at the technological frontier have not yet been standardised – and aviation requires high safety standards. The aviation sector – and the commercialisation of MAM technology – would benefit if the mechanical properties of printed parts of any shape, using any given feedstock on any given MAM machine, could be accurately and consistently predicted. Government could help develop the necessary knowledge. Specifically, the public sector could support the basic science, particularly by funding and stewarding curated databases on materials’ properties and by brokering data-sharing agreements (DSAs) among users of MAM technology, government laboratories and academia; support the development of independent manufacturing and testing standards; and help quantify the advantages of adopting the new technology by creating a platform documenting early users’ experiences.

Bonnin-Roca et al. (2016) suggest such policies for the United States, which leads globally in installed industrial 3D manufacturing systems and aerospace production. However, the same ideas could apply to other countries and industries. These ideas also illustrate how policy opportunities can arise from a specific understanding of emerging technologies and their potential uses. Indeed, governments should strive to develop expertise on emerging technologies in relevant public structures, which will also help anticipate hard-to-foresee needs for technology regulation.

Industrial biotechnology and the bioeconomy

As part of the bioeconomy, industrial biotechnology involves the production of goods from renewable biomass – i.e. wood, food crops, non-food crops or even domestic waste – instead of finite fossil-based reserves. The tools and achievements of industrial biotechnology have advanced considerably (OECD, 2018c). For example, several decades of research in biology have yielded gene-editing technologies and synthetic biology (which aims to design and engineer biologically based parts, devices and systems, and to redesign existing natural biological systems). When combined with other scientific and technological advances – for instance in materials science and robotics – the tools are in place to begin a bio-based production revolution. Bio-based batteries, artificial photosynthesis and micro-organisms that produce biofuels are just some examples of recent advances in biotechnology. Notwithstanding these advances, the largest positive medium-term environmental impacts of industrial biotechnology hinge on the development of advanced biorefineries, which transform sustainable biomass into marketable products (food, animal feed, materials, chemicals) and energy (fuel, power, heat) (OECD, 2017).

Industrial biotechnology and the bioeconomy: Specific policies

Strategies to expand biorefining must address the sustainability of the biomass used. Governments should urgently support efforts to develop standard definitions of sustainability (as regards feedstocks), tools for measuring sustainability, and international agreements on the indicators required to drive data collection and measurement. Furthermore, environmental performance standards are essential: regulators often impose sustainability criteria for bio-based products, most of which are not currently cost-competitive with petrochemicals.

Demonstrator biorefineries operate between pilot and commercial scales, and are critical to answering technical and economic questions about production before costly investments are made at full scale. However, biorefineries and demonstrator facilities are high-risk investments, and some aspects of the technologies are not fully proven. Additional study is also required of the economics of large bio-production facilities. Financing through public-private partnerships is needed to de-risk private investments and demonstrate governments’ commitment to long-term, coherent policies on energy and industrial production.

Public initiatives for bio-based fuels have existed for decades, but little policy support has been extended to producing bio-based chemicals, which could substantially reduce greenhouse gas emissions and preserve non-renewable resources (OECD, 2018c).

With respect to regulations, governments should focus on boosting the use of instruments – particularly standards – to reduce barriers to trade in bio-based products; addressing regulatory hurdles that hinder investment; and establishing a level playing field between bio-based products and biofuels. Better waste regulation could also boost the bioeconomy. For example, governments could promote less proscriptive and more flexible waste regulations, allowing the use of agricultural and forestry residues and domestic waste in biorefineries.

Governments could also lead in supporting the bioeconomy and industrial biotechnology through public procurement. Bio-based materials are not always amenable to public procurement, as they sometimes form only part of a product (e.g. a bio-based screen on a mobile phone), but public purchasing of biofuels (e.g. for public vehicle fleets) is easier (OECD, 2017).

New materials

Advances in scientific instrumentation, such as atomic-force microscopes, and developments in computational simulations have allowed scientists to study materials in more detail than ever before. Today, materials with entirely novel properties are emerging: solids with densities comparable to the density of air; super-strong lightweight composites; materials that remember their shape, repair themselves or assemble themselves into components; and materials that respond to light and sound, are all now realities (The Economist, 2015).

The era of trial and error in material development is also coming to an end. Powerful computer modelling and simulation of materials’ structure and properties can indicate how they might be used in products. Desired properties, such as conductivity and corrosion resistance, can be intentionally built into new materials. Better computation is leading to faster development of new and improved materials, more rapid insertion of existing materials into new products, and the ability to improve existing processes and products. In the near future, engineers will not just design products, but will also design the materials from which the products are made (Teresko, 2008). Furthermore, large companies will increasingly compete in terms of materials development. For example, a manufacturer of automotive engines with a superior design could enjoy longer-term competitive advantage if it also owned the material from which the engine is built.

New materials: Specific policies

No single company or organisation will be able to own the entire array of technologies associated with a materials-innovation ecosystem. Accordingly, a public-private investment model is warranted, particularly to build cyber-physical infrastructure and train the future workforce (Chapter 6 in OECD, 2017).

New materials will raise new policy issues and give renewed emphasis to longstanding policy concerns. For example, new digital-security risks could arise: in the medium term, a computationally assisted materials “pipeline” built on computer simulations could be vulnerable to hacking. Progress in new materials also requires effective policy in already important areas, often related to the science-industry interface. For example, well-designed policies are needed for open data and open science (e.g. for sharing simulations of materials’ structures, or sharing experimental data in return for access to modelling tools).

Policy co-ordination is needed across the materials-innovation infrastructure at the national and international levels. Major efforts are under way in professional societies to develop a materials-information infrastructure – such as databases of materials’ behaviour, digital representations of materials’ microstructures and predicted structure-property relations, and associated data standards – to provide decision support to materials-discovery processes (Robinson and McMahon, 2016). International policy co-ordination is necessary to harmonise and combine elements of cyber-physical infrastructure across a range of European, North American and Asian investments and capabilities, as it is too costly (and unnecessary) to replicate resources that can be accessed through web services. A culture of data sharing – particularly pre-competitive data – is required (Chapter 6 in OECD, 2017).


Nanotechnology

Closely related to new materials, nanotechnology involves the ability to work with phenomena and processes occurring at a scale of 1 to 100 nanometres (a standard sheet of paper is about 100 000 nanometres thick). Control of materials on the nanoscale – working with their smallest functional units – is a general-purpose technology with applications across production (Chapter 4 in OECD, 2017). Advanced nanomaterials are increasingly used in manufacturing high-tech products, e.g. to polish optical components. Recent innovations include nano-enabled artificial tissue, biomimetic solar cells and lab-on-a-chip diagnostics.

Nanotechnology: Specific policies

Sophisticated and expensive tools are needed for research in nanotechnology. State-of-the-art equipment costs several million euros and often requires bespoke buildings. It is almost impossible to gather an all-encompassing nanotechnology research and development (R&D) infrastructure in a single institute, or even a single region. Consequently, nanotechnology requires interinstitutional and/or international collaboration to reach its full potential. Publicly funded R&D programmes should allow involvement of academia and industry from other countries, and enable targeted collaborations between the most suitable partners. The Global Collaboration initiative under the European Union’s Horizon 2020 programme is one example of this approach.

Support is also needed for innovation and commercialisation in small companies. Nanotechnology R&D is mostly conducted by larger companies, thanks to their critical mass of R&D and production; their ability to acquire and operate expensive instrumentation; and their ability to access and use external knowledge. Policy makers could improve the access to equipment of small and medium-sized enterprises (SMEs) by: 1) increasing the size of SME research grants; 2) subsidising or waiving service fees; and/or 3) providing SMEs with vouchers for equipment use.

Regulatory uncertainties regarding risk assessment and approval of nanotechnology-enabled products must also be addressed, ideally through international collaboration. These uncertainties severely hamper the commercialisation of nano-technological innovation. Products awaiting market entry are sometimes shelved for years before a regulatory decision is taken. This has sometimes led to promising nanotechnology start-ups failing, and to large companies terminating R&D projects and innovative products. Policies should support the development of transparent and timely guidelines for assessing the risk of nanotechnology-enabled products, while also striving for international harmonisation in guidelines and enforcement. In addition, more needs to be done to properly treat nanotechnology-enabled products in the waste stream.

Selected cross-cutting policy issues

Developing a productive base that masters the technologies of the Next Production Revolution involves diverse policy challenges, from implementing the types of technology-specific policies discussed above, in Section 2, to developing cross-cutting policies relevant to all of these technologies. Figure 2.1 depicts the types and scope of the policies involved. Cross-cutting policies must address issues as diverse as designing microeconomic framework conditions that promote technology diffusion; building fibre-optic cable networks to carry 5G; increasing trust in cloud computing; and designing education and training systems that respond efficiently to changing skills needs. OECD (2017) examines many of these issues in detail. This section covers only two cross-cutting policy issues: improving access to and awareness of high-performance computing (HPC); and ensuring public support for R&D. It includes subjects, such as the race to achieve quantum computing and possible public research agendas for AI, that were not addressed in OECD (2017).

Figure 2.1. An overview of policies affecting advanced production

Improve access to HPC

HPC – which involves computing performance far beyond that of general-purpose computers – is increasingly important to firms in industries ranging from construction to pharmaceuticals, the automotive sector and aerospace. Airbus, for instance, owns 3 of the 500 fastest supercomputers in the world. Two-thirds of US-based companies that use HPC say that “increasing performance of computational models is a matter of competitive survival” (US Council on Competitiveness, 2014). The applications of HPC in manufacturing are also expanding beyond design and simulation, to include real-time control of complex production processes. Among European companies, the financial rates of return for HPC use are reportedly extremely high (European Commission, 2016). A 2016 review observed that “[m]aking HPC accessible to all manufacturers in a country can be a tremendous differentiator, and no nation has cracked the puzzle yet” (Ezell and Atkinson, 2016).

As Industry 4.0 becomes more widespread, demand for HPC will rise. But like other digital technologies, the use of HPC in manufacturing falls short of potential. According to one estimate, 8% of US manufacturers with fewer than 100 employees use HPC, yet one-half of manufacturing SMEs could potentially use HPC for prototyping, testing and design (Ezell and Atkinson, 2016). Public HPC initiatives often focus on the computation needs of “big science”. Greater outreach to industry, especially SMEs, is frequently needed. Box 2.4 sets out some possible ways forward, several of which are described in European Commission (2016).

Box 2.4. Getting supercomputing to industry: Possible policy actions
  • raise awareness of industrial-use cases, quantifying their costs and benefits

  • develop a one-stop source of HPC services and advice for SMEs and other industrial users

  • provide low-cost or free experimental use of HPC for SMEs for a limited period, to demonstrate its technical and commercial implications

  • establish online software libraries/clearing houses to help disseminate innovative HPC software to a wider industrial base

  • incentivise HPC centres with long industrial experience, such as the Hartree Centre in the United Kingdom or Teratec in France, to advise centres with less experience

  • modify eligibility criteria for HPC projects, which typically focus on peer reviews of scientific excellence, to include commercial-impact criteria

  • engage academia and industry in co-designing new hardware and software, similarly to European projects such as Mont-Blanc9

  • include HPC in university science and engineering curricula

  • explore opportunities to co-ordinate demand for commercially provided computing capacity.

Public support for R&D

The technologies discussed in this chapter ultimately emerge from science. Microelectronics, synthetic biology, new materials and nanotechnology, among others, have arisen from advances in scientific knowledge and instrumentation. Publicly financed research in universities and public research institutions has often been critical to AI. Furthermore, because the complexity of many emerging production technologies exceeds even the largest firms’ research capacities, public-private research partnerships are essential. Hence, the declining public support for research in some major economies is a concern (Chapter 8).

Public R&D and commercialisation efforts have many possible targets, from advancing the use of data analytics and digital technologies in metabolic engineering, to developing bio-friendly feedstocks for 3D printers. One interesting possibility is shaping research agendas to alleviate shortages of economically critical materials (as proposed by the Ames Laboratory’s Critical Materials Institute in the United States).

An overarching research challenge relates to computation itself

The processing speeds, memory capacities, sensor density and accuracy of many digital devices are linked to Moore’s Law. However, atomic-level phenomena and rising costs constrain further shrinkage of transistors on integrated circuits. Many experts believe a limit to miniaturisation will soon be reached. At the same time (as noted earlier), the computing power needed for the largest AI experiments is doubling every 3.5 months (OpenAI, 2018). By one estimate, this trend can be sustained for at most three-and-a-half to ten years, even assuming public R&D commitments on a scale similar to the Apollo or Manhattan projects (Carey, 2018). Much, therefore, depends on achieving superior computing performance (including in terms of energy requirements). Many hope that significant advances in computing will stem from research breakthroughs in optical computing (using photons instead of electrons), biological computing (using DNA to store data and calculate) and/or quantum computing (Box 2.5).
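
The arithmetic behind this constraint is worth making explicit. The following is an illustrative sketch only, using the OpenAI (2018) doubling period quoted above; the function name is ours, not from any of the cited sources:

```python
# Illustrative arithmetic only (not from the chapter): compound growth
# implied by the OpenAI (2018) estimate that compute used in the largest
# AI training runs doubles roughly every 3.5 months.

DOUBLING_PERIOD_MONTHS = 3.5

def compute_multiplier(months: float) -> float:
    """Factor by which training compute grows over `months` on this trend."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

print(round(compute_multiplier(12), 1))   # one year: ~10.8x
print(f"{compute_multiplier(60):.2e}")    # five years: ~1.45e+05x
```

Five years of the same trend would thus imply roughly a 145 000-fold increase in compute per experiment, which is why Carey (2018) concludes the trend cannot continue for long, even with Apollo-scale public funding.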

Box 2.5. A new computing regime: The race for quantum computing

Quantum computers function by exploiting the laws of subatomic physics. A conventional transistor flips between on and off, representing 1s and 0s. However, a quantum computer uses quantum bits (qubits), which can be in a state of 0, 1 or any probabilistic combination of both 0 and 1 (for instance, 0 with 20% and 1 with 80% probability), while also interacting with other qubits through so-called quantum entanglement (which Einstein termed “spooky action at a distance”).
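
The probabilistic state described above can be illustrated with a short classical simulation. This is a sketch for intuition only: the 20%/80% split comes from the example in the text, and a classical program like this cannot, of course, reproduce the entangled many-qubit behaviour that gives quantum computers their power.

```python
import math
import random

# A qubit's measurement probabilities are the squared magnitudes of its
# amplitudes. These amplitudes are chosen so that |a0|^2 = 0.2 and
# |a1|^2 = 0.8, as in the example in the text.
a0, a1 = math.sqrt(0.2), math.sqrt(0.8)
assert abs(a0**2 + a1**2 - 1.0) < 1e-9  # a valid quantum state is normalised

def measure() -> int:
    """Simulate collapsing the superposition to a classical 0 or 1."""
    return 0 if random.random() < a0**2 else 1

# Repeated measurements recover the stated probabilities.
samples = [measure() for _ in range(100_000)]
print(sum(samples) / len(samples))  # ≈ 0.8
```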

Fully developed quantum computers, featuring many qubits, could revolutionise certain types of computing. Many of the problems best addressed by quantum computers, such as complex optimisation and vast simulation, have major economic implications. For example, at the 2018 CogX Conference, Dr Julie Love, Microsoft’s director of quantum computing, described how simulating all the chemical properties of the main molecule involved in fixing nitrogen – nitrogenase – would take today’s supercomputers billions of years, yet this simulation could be performed in hours with quantum technology. The results of such a simulation would directly inform the challenge of raising global agricultural productivity and limiting today’s reliance on the highly energy-intensive production of nitrogen-based fertiliser. Rigetti Computing has also demonstrated that quantum computers can train machine-learning algorithms to a higher accuracy, using less data than with conventional computing (Zeng, 2018).

Until recently, quantum technology has mostly been a theoretical possibility, but Google, IBM and others are beginning to trial practical applications with a small number of qubits (Gambetta et al., 2017). For example, IBM Quantum Experience10 offers free online quantum computing. In 2017, Biogen worked with Accenture and quantum software company 1QBit on a quantum-enabled application to accelerate drug discovery. In 2017, Volkswagen piloted traffic-optimisation experiments using quantum computing (Castellanos, 2017). However, no quantum device currently approaches the performance of conventional computers.

A need for more – and possibly different – research on AI

Public research funding has been key to progress in AI since the origin of the field. The National Research Council (1999) shows that while the concept of AI originated in the private sector – in close collaboration with academia – its growth largely results from many decades of public investments. Global centres of AI research excellence (e.g. at Stanford, Carnegie Mellon and MIT) arose because of public support, often linked to US Department of Defense funding. However, recent successes in AI have propelled growth in private-sector R&D for AI. For example, earnings reports indicate that Google, Amazon, Apple, Facebook and Microsoft spent a combined USD 60 billion on R&D in 2017, including an important share on AI. By comparison, total US Federal Government R&D for non-defence industrial production and technology amounted to around USD 760 million in 2017.11

While many in business, government and among the public believe AI stands at an inflection point, some experts emphasise the scale and difficulties of the outstanding research challenges. Some AI research breakthroughs could be particularly important for society, the economy and public policy. However, corporate and public research goals might not fully align: Jordan (2018) notes that much AI research is not directly relevant to the major challenges of building safe intelligent infrastructures, such as medical or transport systems. He observes that unlike human-imitative AI, such critical systems must have the ability to deal with “distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries” (Jordan, 2018).

Other outstanding research challenges relevant to public policy relate to making AI explainable; making AI systems robust (image-recognition systems can easily be misled, for instance); determining how much prior knowledge will be needed for AI to perform difficult tasks (Marcus, 2018); bringing abstract and higher-order reasoning, and “common sense”, into AI systems; inferring and representing causality; and developing computationally tractable representations of uncertainty (Jordan, 2018). No reliable basis exists for judging when – or whether – research breakthroughs will occur. Indeed, past predictions of timelines in the development of AI have been extremely inaccurate.

Research and education need to be multi-disciplinary

Interdisciplinary research is essential to advancing production. Materials research involves disciplines such as traditional materials science and engineering, as well as physics, chemistry, chemical engineering, bio-engineering, applied mathematics, computer science and mechanical engineering. Environments supporting interdisciplinary research include institutes (e.g. Interdisciplinary Research Collaborations in the United Kingdom);12 networks (e.g. the eNNab Excellence Network NanoBio Technology in Germany, which supports biomedical nanotechnology);13 and individual institutions (e.g. Harvard’s Wyss Institute for Biologically Inspired Engineering).14

Research and industry can often be linked more effectively

Government-funded research institutions and programmes should have the freedom to assemble the right combinations of partners and facilities to solve scale-up and interdisciplinarity challenges. Investments are often essential in applied research centres and pilot production facilities, to take innovations from the laboratory to production. Demonstration facilities – such as test beds, pilot lines and factory demonstrators – which provide dedicated research environments, with the right mix of enabling technologies and operating technicians, are also necessary. Some manufacturing R&D challenges may need the expertise not only of manufacturing engineers and industrial researchers, but also of designers, equipment suppliers, shop-floor technicians and users (Chapter 10 in OECD, 2017).

Beyond traditional metrics – such as numbers of publications and patents – more effective research institutions and programmes in advanced production may also need new evaluation indicators. These new indicators could assess such criteria as successful pilot-line and test-bed demonstrations; technician and engineer training; membership in consortia; incorporation of SMEs in supply chains; and the role of research in attracting foreign direct investment.

Public-private partnerships can help research commercialisation

Financing business scale-up is a widespread concern, largely because many venture-capital firms prefer to invest in software, biotech and media start-ups rather than advanced manufacturing firms, which often work with costlier and riskier technologies (in the United States, only around 5% of venture funding in 2015 targeted the industrial/energy sector) (Singer and Bonvillian, 2017). Partnerships between universities, industry and government can help provide start-ups with the know-how, equipment and initial funding to test and scale new technologies, so that investments are more likely to attract venture funding. Singer and Bonvillian (2017) describe several such collaborations. For example, Cyclotron Road, supported by the US Department of Energy’s Lawrence Berkeley Lab, provides energy start-ups with equipment, technology and know-how for advanced prototyping, demonstration, testing and production design. Cooperative Research and Development Agreements – which are struck between a government agency and a private company or university – have also been valuable in providing frameworks for intellectual property rights in such collaborations.


Mastering the technologies of the Next Production Revolution requires effective policy in wide-ranging fields, including digital infrastructure, skills and intellectual property rights. Typically, these diverse policy fields are not closely connected in government structures and processes. Governments must also adopt long-term time horizons, for instance, in pursuing research agendas with possible long-term payoffs. As this chapter has illustrated, public institutions must possess specific understanding of many fast-evolving technologies. One leading authority argues that converging developments in several technologies are about to yield a “Cambrian explosion” in robot diversity and use (Pratt, 2015). Adopting Industry 4.0 poses challenges for firms, particularly small ones. It also challenges governments’ ability to act with foresight and technical knowledge across multiple policy domains.


AI Intelligent Automation Network (2018), AI 2020: The Global State of Intelligent Enterprise,

Almudever, C.G. et al. (2017), "The engineering challenges in quantum computing", conference paper presented at Design, Automation & Test in Europe Conference & Exhibition (DATE), 27-31 March 2017, Lausanne, pp. 836-845, 10.23919/DATE.2017.7927104.

Azhar, A. (2018), “Exponential View: Dept. of Quantum Computing”, The Exponential View, 15 July,

Bonnin-Roca, J. et al. (2016), “Policy Needed for Additive Manufacturing”, Nature Materials, Vol. 15, pp. 815-818, Nature Publishing Group, United Kingdom,

Boston Consulting Group (2018), “AI in the Factory of the Future: The Ghost in the Machine”, The Boston Consulting Group, Boston, MA,

Carey, R. (2018), “Interpreting AI Compute Trends”, 10 July, blog post, AI Impacts,

Castellanos, S. (2017), “Volkswagen Pilots Quantum Computing Experiments”, The Wall Street Journal, New York, 8 May 2018,

Champain, V. (2018), “Comment l’intelligence artificielle augmentée va changer l’industrie”, La Tribune, Paris, 27 March,

Chen, S. (2018), “Scientists Are Using AI to Painstakingly Assemble Single Atoms”, Science, American Association for the Advancement of Science, Washington, DC, 23 May,

Chen, S. (2017), “The AI Company That Helps Boeing Cook New Metals for Jets”, Science, American Association for the Advancement of Science, Washington, DC, 12 June,

Cockburn, I., R. Henderson and S. Stern (2018), “The Impact of Artificial Intelligence on Innovation”, NBER Working Paper No. 24449, issued March 2018,

Digital Catapult (2018), “Machines for Machine Intelligence: Providing the tools and expertise to turn potential into reality”, Machine Intelligence Garage, Research Report 2018, London,

Dorfman, P. (2018), “3 Advances Changing the Future of Artificial Intelligence in Manufacturing”, Autodesk Newsletter, 3 January 2018,

European Commission (2018), “#Blockchain4EU: Blockchain for Industrial Transformations”, Publications Office of the European Union, Luxembourg,

European Commission (2016), “Implementation of the Action Plan for the European High-Performance Computing Strategy”, Commission Staff Working Document SWD(2016)106, European Commission, Brussels,

Ezell, S.J. and R.D. Atkinson (2016), “The Vital Importance of High-Performance Computing to US Competitiveness”, Information Technology and Innovation Foundation, Washington DC,

Faggella, D. (2018), “Industrial AI Applications – How Time Series and Sensor Data Improve Processes”, Techemergence, San Francisco, 31 May,

Financial Conduct Authority (2017), “Regulatory sandbox lessons learned report”, London,

Gambetta, J.M., J.M. Chow and M. Teffen (2017), “Building logical qubits in a superconducting quantum computing system”, npj Quantum Information, Vol. 3, article No. 2, Nature Publishing Group and University of New South Wales, London and Sydney,

Giles, M. (2018a), “Google wants to make programming quantum computers easier”, MIT Technology Review, Massachusetts Institute of Technology, Cambridge, MA, 18 July,

Giles, M. (2018b), “The world’s first quantum software superstore – or so it hopes – is here”, MIT Technology Review, Massachusetts Institute of Technology, Cambridge, MA, 17 May,

Goodfellow, I., Y. Bengio and A. Courville (2016), Deep Learning, MIT Press, Massachusetts Institute of Technology, Cambridge, MA.

Harbert, T. (2013), “Supercharging Patent Lawyers with AI: How Silicon Valley's Lex Machina is blending AI and data analytics to radically alter patent litigation”, IEEE Spectrum, IEEE, New York, 30 October,

Hardjano, T., A. Lipton and A.S. Pentland (2018), “Towards a Design Philosophy for Interoperable Blockchain Systems”, Massachusetts Institute of Technology, Cambridge, MA, 7 July,

House of Lords (2018), “AI in the UK: ready, willing and able?”, Select Committee on Artificial Intelligence – Report of Session 2017-19, Authority of the House of Lords, London,

Jordan, M. (2018), “Artificial Intelligence — The Revolution Hasn’t Happened Yet”, Medium, A Medium Corporation, San Francisco,

Knight, W. (2018), “Microsoft is Creating an Oracle for Catching Biased AI Algorithms”, MIT Technology Review, Massachusetts Institute of Technology, Cambridge, MA, 25 May,

Letzer, R. (2018), “Chinese Researchers Achieve Stunning Quantum Entanglement Record”, Scientific American, Springer Nature, 17 July,

Marcus, G. (2018), “Innateness, AlphaZero, and Artificial Intelligence”, Cornell University, Ithaca, NY,

McKinsey Global Institute (2018), “Notes from the AI frontier: Insights from hundreds of use cases”, discussion paper, McKinsey & Company, New York, April,

Mearian, L. (2017), “Blockchain integration turns ERP into a collaboration platform”, Computerworld, IDG, Framingham, MA, 9 June,

National Research Council (1999), Funding a Revolution: Government Support for Computing Research, The National Academies Press, Washington, DC,

New, J. and D. Castro (2018), “How Policymakers can Foster Algorithmic Accountability”, Information Technology and Innovation Foundation, Washington DC,

OECD (2018a), “Going Digital in a Multilateral World, Interim Report of the OECD Going Digital Project, Meeting of the OECD Council at Ministerial Level”, Paris, 30-31 May 2018, OECD, Paris,

OECD (2018b), “AI: Intelligent machines, smart policies: Conference summary”, OECD Digital Economy Papers, No. 270, OECD Publishing, Paris,

OECD (2018c), Meeting Policy Challenges for a Sustainable Bioeconomy, OECD Publishing, Paris,

OECD (2017), The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris,

OpenAI (2018), “AI and Compute”, OpenAI blog, San Francisco, 16 May,

Pratt, G.A. (2015), “Is a Cambrian Explosion Coming for Robotics?”, Journal of Economic Perspectives, Volume 29/3, AEA Publications, Pittsburgh, DOI: 10.1257/jep.29.3.51.

Ransbotham, S et al. (2017), “Reshaping Business with Artificial Intelligence: Closing the Gap Between Ambition and Action”, MIT Sloan Management Review, Massachusetts Institute of Technology, Cambridge, MA,

Robinson, L. and K. McMahon (2016), “TMS launches materials data infrastructure study,” JOM, Vol. 68/8, Springer US, New York,

Simonite, T. (2016), “Algorithms that learn with less data could expand AI’s power”, MIT Technology Review, May 24th, Boston,

Singer, P.L. and W.B. Bonvillian (2017), “’Innovation Orchards’: Helping tech startups scale”, Information Technology and Innovation Foundation, Washington DC,

Sverdlik, Y. (2018), “Google is Switching to a Self-Driving Data Center Management System”, Data Center Knowledge, August 2nd,

Teresko, J. (2008), “Designing the next materials revolution”, IndustryWeek, Informa, Cleveland, 8 October,

The Economist (2017), “Oil struggles to enter the digital age”, The Economist, London, 6 April,

The Economist (2015), “Material difference”, Technology Quarterly, The Economist, London, 12 May,

U.S. Council on Competitiveness (2014), “The Exascale Effect: the Benefits of Supercomputing for US Industry”, U.S. Council on Competitiveness, Washington, DC,

Vujinovic, M. (2018), “Manufacturing and Blockchain: Prime Time Has Yet to Come”, CoinDesk, New York, 24 May,

Walker, J. (2017), “AI in Mining: Mineral Exploration, Autonomous Drills, and More”, Techemergence, San Francisco, 3 December,

Williams, M. (2013), “Counterfeit parts are costing the industry billions”, Automotive Logistics, 1 January, Ultima Media, London,

Wissner-Gross, A. (2016), “Datasets Over Algorithms”,, Edge Foundation, Seattle,

Zeng, W. (2018), “Forest 1.3: Upgraded developer tools, improved stability, and faster execution”, Rigetti Computing blog, Berkeley,

Zweben, M and M.S. Fox (1994), Intelligent Scheduling, Morgan Kaufmann Publishers, San Francisco.


← 1. See Aaswath Raman’s 2018 TED talk, “How we can turn the cold of outer space into a renewable resource”.

← 2. Deep learning with artificial neural networks is a technique within the broader field of machine learning that seeks to emulate how human beings acquire certain types of knowledge. The word “deep” refers to the numerous layers of data processing. The term “artificial neural network” refers to hardware and/or software modelled on the functioning of neurons in a human brain.

← 3. AI will of course have many economic and social impacts. In relation to labour markets alone, intense debates exist on AI’s possible effects on labour displacement, income distribution, skills demand and occupational change. However, these and other considerations are not a focus of this chapter.

← 4. At its peak, ImageNet reportedly employed close to 50 000 people in 167 countries, who sorted around 14 million images (House of Lords, 2018).

← 5. e.g.

← 6.

← 7. Microsoft, for instance, is developing a dashboard capable of scrutinising an AI system and automatically identifying signs of potential bias (Knight, 2018).

← 8. Note, however, that this assessment covers only the first year of a scheme in the United Kingdom.

← 9.

← 10.

← 11. OECD Main Science and Technology Indicators Database.

← 12.

← 13.

← 14.
