1. An overview of key developments and policies

Alistair Nolan

Directorate for Science, Technology and Innovation, OECD

Chapter 1 summarises the main themes and policy lessons examined in the rest of the report. It provides background to the broader policy concerns facing OECD countries. It also introduces topics not considered elsewhere in the report, particularly in connection with artificial intelligence in science; using digital technology to deliver skills in science, technology, engineering and mathematics; possible targets for public research; and blockchain in science. The chapter also discusses potential uses of digital technology for policy making and implementation, mainly linked to various forms of collective intelligence. These essentially untapped opportunities – such as self-organised systems for funding allocation, and prediction markets – might have significant benefits for science, technology and innovation. They invite further study and, possibly, pilot testing.


The statistical data for Israel are supplied by and under the responsibility of the relevant Israeli authorities. The use of such data by the OECD is without prejudice to the status of the Golan Heights, East Jerusalem and Israeli settlements in the West Bank under the terms of international law.

Introduction

In 2015, in their joint declaration (OECD, 2015), ministers from OECD countries and partner economies, at the OECD Ministerial Meeting in Daejeon (Korea), recognised that digital technologies are revolutionising science, technology and innovation (STI). The ministers asked the OECD to monitor this transformation.

During 2017 and 2018, the OECD’s “Going Digital” project comprehensively examined digital technology’s economic and social impacts (OECD, 2019). The resulting report, Going Digital: Shaping Policies, Improving Lives, presents a strategy for policy making in the digital age. Complementing that report, this publication examines digitalisation’s effects on STI and the associated consequences for policy. It draws mainly on work performed under the aegis of the OECD’s Committee for Scientific and Technological Policy.

Apart from this overview, the publication has six other chapters:

Chapter 2 (“How are science, technology and innovation going digital? The statistical evidence”) presents recent statistical evidence of key developments in the digitalisation of STI. It also reviews current and future measurement priorities.

Chapter 3 (“Digital technology, the changing practice of science and implications for policy”) focuses on digitalisation and open science, and the associated policy consequences.

Chapter 4 (“Digital innovation: Cross-sectoral examples and policy implications”) explores the many ways that digital technology is affecting innovation in firms, and the priorities for innovation policy in the digital age.

Chapter 5 (“Artificial intelligence, digital technology and advanced production”) discusses digital technology in advanced manufacturing.

Chapter 6 (“Digitalisation in the bioeconomy: Convergence for the bio-based industries”) explains the fast-evolving applications of digital technology in bio-based science and industry, and the priorities for government action.

Chapter 7 (“The digitalisation of science and innovation policy”) reviews developments in digital information systems that support policy for STI, what these systems could look like in future, and what policy makers should do to maximise their potential.

Why does digitalisation matter?

The importance of digitalisation in STI is hard to overstate. Today, it is usual to view the future of STI through the lens of digitalisation’s projected impacts. Carlos Moedas, the EU Commissioner for Research, Science and Innovation, recently announced that the Ninth EU Framework Programme for Research and Innovation will focus on digitalisation, beginning in 2021 (Zubașcu, 2017). Digitalisation also makes the present moment unique in the history of technology. As the technology commentator Kevin Kelly observed, “This is the first and only time the planet will get wired up into a global network” (Kelly, 2013). Furthermore, digitalisation’s impacts are just beginning. Around a century passed before the full effects of earlier technology revolutions, linked to steam and electricity, became clear. By those standards, the digital revolution has generations to go.

Digitalisation is ubiquitous in STI in part because its effects are both microscopic and macroscopic. At the microscopic level, for example, researchers recently stored 200 megabytes (MB) of high-definition video and books in deoxyribonucleic acid (DNA) (see Chapter 6). At the macroscopic level, new digital technology means that a standard 10-pound satellite can capture better images of any point on Earth than a 900-pound satellite 20 years ago (Metz, 2019).

If anything, this publication illustrates that digitalisation’s effects are deeper than most media reports reflect. Areas of research not traditionally associated with digitalisation, and on which advanced economies depend, from materials science to biology, are increasingly digital in character. At the same time, digital technology is changing the processes of science and enlarging its scope.

In STI, the pace of change brought by digitalisation is also striking. In all likelihood, no one foresaw in 2007 that ten years later more than a million people would be working in companies labelling and annotating data and images for machine-learning systems (Hutson, 2017). A decade ago, few anticipated how far artificial intelligence (AI) would progress in generating scientific hypotheses, scanning scientific literature and orchestrating experiments performed by robots. Similarly, until recently, only a few devotees understood distributed ledger technologies (DLTs), much less the possibility of combining AI and DLTs such that each amplifies the other (Corea, 2017).

Digitalisation is also facilitating convergence among technologies, a hallmark of innovation. There are several reasons for this convergence. Digital technologies can be combined – more easily than many other technologies – because of the shared numerical basis of different digital devices. Moreover, as it progresses, science can represent more of the natural world in the form of digital information. For example, as Chapter 5 shows, materials science is advancing in a transformational way because of the growing ability to observe, represent in computer models and then simulate the properties of a material’s microstructure.

Convergence between the digital and biological worlds also reflects the relatively new understanding that life itself is informational and algorithmic (Valiant, 2013). Miniaturisation, which digital technology propels, likewise facilitates convergence. For instance, millimetre-sized computers could become common in the next decade (Biles, 28 September 2018). Such devices are likely to converge with medical technologies, for example in monitoring disease processes from inside the body.

Recent achievements in STI enabled by digital technologies are extremely diverse, reflecting the technology’s general-purpose character. In 2014, for example, Japan introduced the first trillion-frame-a-second camera, which gives scientists new ways to explore complex ultrafast phenomena. Supercomputers partition the globe into tens of thousands of digital units to simulate local weather, improving the accuracy of weather prediction. Indeed, a seven-day weather forecast in 2018 is as accurate as a two-day forecast 50 years ago (Fischer, 2018). The firm Lex Machina blends AI and data analytics to assist patent litigation (Harbert, 2013). Using digital tools, and in a break from previous norms, consumers now innovate in significant ways in many industries. Furthermore, digitalisation is making science more collaborative and networked. In 2015, for instance, researchers working on the Large Hadron Collider published a paper with a record-breaking 5 154 authors.

The broader context in which science, technology and innovation are digitalising

The digitalisation of STI is directly relevant to many important short- and long-term policy challenges. Over recent decades, for example, labour productivity growth has declined in many OECD countries. Developing and adopting efficiency-enhancing digital production technologies, along with organisational changes, are necessary to counter this decline. Rapid population ageing means that raising labour productivity is ever more urgent; the dependency ratio in OECD countries is set to double over the next 35 years. Digital technology contributes to productivity in part by making the mixing and recombining of ideas easier, which facilitates innovation. Some evidence even suggests that innovation increasingly occurs by combining existing ideas rather than by forming new ones (Youn et al., 2015).

Demographic change is likely to exert long-term downward pressure on discretionary public spending in OECD countries. Relative to national incomes, this pressure could entail static or even reduced levels of public support for science and innovation (OECD, 2018a). A protracted period of slow growth could have a similar effect. Such scenarios raise the question of whether, and by how much, digital technology could increase the efficiency of policy.

A related and worrying possibility is that the productivity of science might be falling. Some scholars claim that science is becoming less productive. They argue, variously, that the low-hanging fruits of knowledge have now been picked, that experiments are becoming more costly, and that science must increasingly be done across complex boundaries between a growing number of disciplines.

Scientists are also flooded with data and information. The average scientist reads about 250 papers a year, but more than 26 million peer-reviewed papers exist in biomedical science alone.1 In addition, the overall quality of scientific output may be declining. Freedman (2015) estimated that around USD 28 billion per year is wasted on unreproducible preclinical research in the United States alone.

Not everyone agrees that research productivity is faltering (Worstall, 2016). However, any slowdown would have serious implications for growth. Increased funding would be needed to maintain discovery at previous levels and to seed the innovations and productivity improvements necessary to cope with demographic change and public spending constraints. Any boost to research productivity spurred by digital technology, from open science to the wider use of AI, could be of structural importance.

If deployed effectively, digitalisation could also help accelerate science and technology’s ability to resolve global challenges. Environmental challenges include a warming atmosphere, loss of biodiversity, depleted topsoil and water scarcity. Health challenges include threats of disease – from multidrug-resistant bacteria to new pandemics. Demographic challenges include the consequences of ageing populations and the pressing need to treat neurodegenerative diseases. Breakthroughs in science and technology are necessary to address such challenges, and to do so cost-effectively.

While this report describes many ways in which digitalisation can strengthen STI, it also examines policy challenges created by digital technology. For example, owing to digitalisation, technology choice may be becoming more complex, even for large firms. One eminent venture capitalist recently wrote:

“Many of my friends at big companies tell me that ‘what is 5G?’ floats around a lot of corporate headquarters almost as much as ‘what is machine learning?’” (Evans, 2019).

Digitalisation might also widen capability gaps in science across countries, owing to the uneven distribution of complementary assets such as computational resources, human capital and data access. In addition, the complex digital systems that underpin vital infrastructures, from transport networks to financial markets, might become more difficult to manage safely. Issues such as how to cope with so-called “predatory” online science journals (see Chapter 3), and how to keep personal research data anonymous, illustrate that new (and useful) applications of digital technology can generate new policy concerns.

Digitalisation also creates the need for new thinking about institutions and norms, both public and private. For example, in the public sector, governments in a number of countries are considering whether commissions for AI and robotics might be necessary. Similarly, in the private sector, as AI voice assistants become increasingly lifelike, firms must decide if customers should have the right to know that they are talking with machines (Ransbotham, 21 May 2018). The rapid pace of developments in digital technology may also require that regulatory processes become more anticipatory.

Digitalisation also raises other more far-reaching challenges, which this report does not tackle. What, for instance, should policy makers do about corrosive social and psychological effects that stem from the seepage of digital technology into much of everyday life?

Measuring the digitalisation of science and innovation

Chapter 2 provides a statistical context for the rest of the publication. It addresses measurement challenges and reports statistics on some key trends in the digitalisation of science and innovation. To that end, it draws principally on work under the OECD’s Working Party of National Experts on Science and Technology Indicators.

The chapter examines four broad dimensions of the digital transformation of science: i) adoption of facilitating digital practices and tools; ii) access to digitised scientific outputs, especially publications, data and computer code; iii) use and further development of advanced digital procedures to make research more data-driven; and iv) communication of scientists’ work and how this influences the way scientists are rewarded.

Overall, while digital activity in science is pervasive, there is considerable room to better exploit the potential of digital technology, particularly advanced tools. Findings in this chapter include the following:

  • Digital technology facilitates sharing of scientific knowledge. However, OECD analysis reveals that 60% to 80% of content published in 2016 was, one year later, still only available to readers via subscription or payment of a fee.

  • Less than half of respondents in all science fields deliver data or code to a journal or publisher to support their publications.

  • One-third of research and development (R&D) performed and funded by companies in the United States is software-related. OECD research suggests that for companies using advanced digital technologies, the odds of reporting innovations are doubled. A positive relationship also exists between the development of these technologies and innovation, especially product innovation.

  • From 2006 to 2016, the annual volume of AI-related publications grew by 150%, compared to 50% for indexed scientific publications overall. The People’s Republic of China (hereafter “China”) is now the largest producer of AI-related science, in terms of publications. The country is also fast improving the quality of its scientific output in this area.

  • Public funding of science relating to AI is growing significantly, with a spate of recent policy and funding announcements. However, comparisons across countries are difficult because AI does not fit into pre-established taxonomies of R&D funding. Indeed, available data systems are ill equipped to address queries about subject areas supported by publicly funded research. Addressing this shortcoming is an OECD priority (through the “Fundstat” pilot project). The OECD has also begun to map trends in research funding for AI using institutional case studies, as Chapter 2 illustrates with two examples from the United States.

  • At both doctorate and master’s levels, many more men than women graduate in information and communication technology (ICT). ICT doctorate holders are especially likely to have been born abroad, exposing this population to policies that change residential or nationality requirements. Holders of doctorates in ICT are also more mobile across jobs than doctorate holders in other fields. For example, in the United States, 30% of ICT doctorate holders changed jobs in the last year, compared to 15% on average across other fields.

  • Data from the OECD International Survey of Scientific Authors show that younger scientists are more likely to engage in all dimensions of digital behaviour.

Digitalisation, science and science policy

Chapter 3 shows that digitalisation is bringing change to all parts of science, from agenda setting to experimentation, knowledge sharing and public engagement. Digital technology is facilitating a new paradigm of open science, a term referring to efforts to make scientific processes more open and inclusive. Open science has three main pillars: open access (OA) to scientific publications and information; enhanced access to research data; and broader engagement with stakeholders. Together, the three pillars could increase the efficiency and effectiveness of science and speed the translation of research findings into innovation and socio-economic benefits. However, transitioning to open science requires the management of policy tensions associated with each pillar.

In his book Imagined Worlds, the physicist Freeman Dyson observed that there have been seven concept-driven revolutions in science during the past 500 years (Dyson, 1998). These revolutions are associated with the names of Copernicus, Newton, Darwin, Maxwell, Freud, Einstein and Heisenberg. During roughly the same period there were around 20 tool-driven revolutions, from the telescope in astronomy to X-ray diffraction in biology. Today, ICT is an evolving tool creating revolutionary change in science.

Many of the processes and outputs of science also improve digital technology. For example, the Laser Interferometer Gravitational-Wave Observatory, which detected cosmic gravitational waves, yielded new algorithms for detecting small signals in noisy data. And physicists designing the Large Hadron Collider federated computing systems at hundreds of sites to analyse petabytes of data, further developing grid computing.

Accessing scientific information

Emerging OA publishing models and pre-print servers, mega-journals, institutional repositories and online information aggregators are simplifying access to scientific information. However, the new era brings challenges compared to traditional specialised journals that published scientific research after peer review. It is less clear how editorial and peer review processes will work and how the academic record will be maintained and updated over time. There is considerable concern about the number of “predatory” online journals that charge authors for publication but carry out little or no quality control. It is important to identify predatory journals publicly and revise any funding mandates or other incentives that inadvertently encourage publication in such journals.

Digital tools can support the publication of scientific papers in several ways. Stimulated by a growing global scientific community, and by academic pressure to publish, the volume of scientific papers is vast and growing. ICT can help organise, share and analyse this growing volume of scientific information. At the same time, online open lab notebooks such as Jupyter provide access to primary experimental data and other information. Researchers are also employing AI to scrutinise suspicious scientific research and identify falsified data (Sankaran, 2018). Such tools depend on the broad adoption of standards and unique digital identifiers, which policy can facilitate.

Many science funders mandate OA publication, but academic careers, and in some cases institutional funding, are largely determined by publishing in high-impact, pay-for-access journals. Incentives and changes to evaluation systems need to match funders’ mandates in order to transition faster to OA publication. A stronger focus on article-based metrics rather than journal impact factors is one way forward. New indicators and measures will also be required to incentivise data sharing.

A tiered publication process might emerge to address the challenges of using digital tools. Sharing and commenting on scientific information could occur earlier, with only some findings eventually published in journals. Some fields of research are testing open post-publication peer review, whereby the wider scientific community can discuss a manuscript. Such a process has strengths: transparent public discussion among peers gives incentives for sound argumentation, for instance. But it could also have weaknesses if, for example, the process were captured by reviewers making false or erroneous comments. However, with proper safeguards, post-publication peer review could bolster the quality and rigour of the scientific record.

Enhancing access to research data

Policy responses are needed to enhance access to research data. The OECD first advocated for greater access to data from publicly funded research in 2006. Since then, tools to enable greater access have improved, and guidelines and principles have been widely adopted. Nevertheless, as the following points illustrate, obstacles still limit access to scientific data:

  • The costs of data management are increasing, straining research budgets. Science funders should treat data repositories as part of research infrastructure (which itself requires clear business models).

  • A lack of policy coherence and trust between communities hinders data sharing across borders. The sharing of public research data requires common legal and ethical frameworks. Through such fora as the Research Data Alliance, funders should co-ordinate support for data infrastructure. New standards and processes, such as safe havens for work on sensitive data, could also strengthen trust, as might new technology such as blockchain.

  • Science must adapt its governance and review mechanisms to account for changing privacy and ethical concerns. For example, to use human subject data in research requires informed consent and anonymisation. However, anonymising personal data from any given source might be impossible if new ICTs can link it to other personal data used in research. Transparent, accountable, expert and suitably empowered governance mechanisms, such as institutional review boards and/or research ethics committees, should oversee research conducted with new forms of personal data.

  • Strategic planning and co-operation are required to build and provide access to cyber-infrastructure internationally. Global bodies such as the aforementioned Research Data Alliance can help develop community standards, technical solutions and networks of experts.

  • The skills needed to gather, curate and analyse data are scarce. New career structures and professions – such as “data stewards” – need to be developed for data management and analysis.

Broadening engagement with science

Engagement with a broader spectrum of stakeholders could make scientific research more relevant. Digitalisation is opening science to a variety of societal actors, including patient groups, non-governmental organisations, industry, policy makers and others. Such opening aims to improve the quality and relevance of science and its translation into practice. Societal engagement can enhance the entire research process, from agenda setting to co-production of research and dissemination of scientific information. Perhaps the most critical area of enlarged engagement is in setting priorities for research. If well designed, a more inclusive process of agenda setting could make research more relevant and might even generate entirely new research questions.

Recent years have seen the expansion of “citizen science”, whereby scientific research is conducted or supported through ICT-enabled open collaborative projects. ICT is helping science elicit input from the networked public to label, generate and classify raw data, and draw links between data sets. ICT is also creating opportunities for the networked public to take part in novel forms of discovery. For instance, by playing a video game – Eyewire – over 265 000 people have helped neuroscientists develop thousands of uniquely detailed neuronal maps, colour-coding over 10 million cell sections and generating data on neuron function (Princeton University, 2018). Whether, and how best, to expand citizen science requires answers to a number of questions. These include how to break complex research projects into parallel subtasks that do not depend on understanding the entire project. Crowdfunding of science is also emerging. It appears to provide opportunities for small-scale but meaningful funding for young scholars with risky research projects.

Digital technology could benefit science by levering collective input in other ways. For example, recent research suggests that digital technology could help draw on the collective insight of the entire scientific community to improve allocation of public research funds (Box 1.1).

Artificial intelligence for science

AI might increase productivity in science at a time when – as discussed earlier – some evidence suggests research productivity may be falling (Bloom et al., 2017). AI is being used in all phases of the scientific process, from automated extraction of information in scientific literature, to experimentation (the pharmaceutical industry commonly uses automated high-throughput platforms for drug design), large-scale data collection, and optimised experimental design. AI has predicted the behaviour of chaotic systems to distant time horizons, tackled complex computational problems in genetics, improved the quality of astronomical imaging, and helped discover the rules of chemical synthesis (Musib et al., 2017). Today, AI is regularly the subject of papers published in the most prestigious scientific journals.

Recent drivers of AI in science

AI in various forms has assisted research for some time. In the 1960s, the AI program DENDRAL helped identify chemical structures. In the 1970s, an AI known as Automated Mathematician helped perform mathematical proofs. Several key developments explain the recent rise of AI and machine learning (ML). These include vast improvements in computing hardware and AI software, much greater data availability and scientists’ access to open-source AI code (King and Roberts, 2018).

Box 1.1. Collective intelligence to help allocate science funding

Bollen et al. (2014) and Bollen (2018) examine a new class of Self-Organized Funding Allocation (SOFA) systems to address issues associated with peer review. Peer review is the dominant approach to assessing the scientific value of proposals for research funding. However, critique of peer review is mounting. A major concern is the opportunity cost of scientists’ time. For example, one study in Australia found that 400 years of researchers’ time was spent preparing unfunded grant proposals for support from a single health research fund (Herbert, Barnett and Graves, 2013). Peer review has other drawbacks, too. The expertise in review panels is not interchangeable: many successful grant applications would be rejected if panel membership changed randomly (Graves, Barnett and Clarke, 2011). Some studies also show that peer review is less favourable to minorities, women and unconventional ideas.

To lower administrative overheads and improve funding allocation, Bollen et al. (2014) propose a SOFA system that would work like this: funding agencies would give all qualified scientists an unconditional and equal baseline amount of money each year. Scientists would then distribute a fixed percentage of their funding to peers who they think would make best use of the money. Every year, all scientists would therefore receive a fixed grant from their funding agency and an amount passed on by peers. Scientists could log on to their funding agency’s website and simply select the names of scientists to whom they wish to donate, and indicate the amount for each.

As funding circulates between scientists, it would come to reflect the funding preferences of the entire scientific community, not small review panels. Widely esteemed scientists, who also distribute a fixed share of the money they receive, would end up with greater influence on how funding is allocated overall. At the same time, because all scientists receive an unconditional yearly grant, they would have greater stability and autonomy for discovery. Funding levels would adjust as the collective perception of scientific merit and priorities evolve. Scientists would also have incentives to share research because if colleagues were positively impressed, more funding could follow. In addition, funding people rather than projects might provide scientists with more freedom to explore new research paths.

Individual distributions would be anonymous (to avoid personal influence) and subject to conflict of interest restrictions. For example, scientists might be prohibited from donating to themselves, advisees, colleagues at their own institution, etc. By tuning distribution parameters, funding agencies and governments could still target research in ways that promote policy goals, such as funding under-represented communities. Existing funding systems could also link to a SOFA to complement peer review and maintain societal accountability.

Using millions of Web of Science records, simulation of a SOFA yielded a distribution of funding similar to that produced by grant review, but without a single proposal being prepared (Bollen et al., 2014). SOFAs merit further study and pilot testing. In 2018, the Dutch Parliament mandated the Netherlands Organisation for Scientific Research to explore a pilot study.
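The redistribution dynamic described in the box can be sketched in a few lines of code. The simulation below is a toy illustration, not the model used by Bollen et al. (2014): peer preferences are modelled as fixed random "esteem" weights, whereas the published simulations derived them from citation behaviour in Web of Science records. All parameter values are illustrative.

```python
import random

def simulate_sofa(n=50, base=100.0, share=0.5, years=30, seed=1):
    """Toy simulation of a Self-Organized Funding Allocation (SOFA) system.

    Every year each scientist receives an equal, unconditional baseline
    grant and redistributes a fixed fraction of the previous year's income
    to peers. Donations here are proportional to fixed random "esteem"
    weights, standing in for real funding preferences.
    """
    rng = random.Random(seed)
    esteem = [rng.random() for _ in range(n)]  # how favoured each peer is
    income = [base] * n                        # year 0: baseline grant only
    for _ in range(years):
        received = [base] * n                  # unconditional yearly grant
        for donor in range(n):
            pot = share * income[donor]        # fraction passed on to peers
            # Donations proportional to esteem; self-donation is barred.
            weights = [0.0 if j == donor else esteem[j] for j in range(n)]
            total = sum(weights)
            for j, w in enumerate(weights):
                received[j] += pot * w / total
        income = received
    return income
```

Run over enough years, funding converges towards the community's collective preferences: highly esteemed scientists end up with larger budgets, yet everyone retains at least the unconditional baseline grant.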

AI can also combine with robot systems to perform scientific research

Laboratory-automation systems can exploit techniques from AI to execute cycles of scientific experimentation. For instance, one system uses AI to analyse molecular models and predict which will have desirable properties. A robot then tests the predictions by physically combining chemical samples and analysing the results. These results become inputs to continue improving the system’s predictions (Knight, 2018). AI-enabled automation in science, especially in disciplines that require intensive experimentation, such as molecular biology and chemical engineering, has several potential benefits (King and Roberts, 2018):

  • Faster discovery. Automated systems can generate and test thousands of hypotheses in parallel.

  • Cheaper experimentation. AI systems can select more cost-effective experiments.

  • Improved knowledge/data sharing and scientific reproducibility. Robots can automatically record experimental procedures and results, along with the associated metadata, at no additional cost (such recording can add up to 15% to the total cost of experimentation when performed by humans).
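The predict-test-feedback cycle described above can be sketched as a short simulation. Everything in this sketch is illustrative: the "robot assay" is a hidden lookup table standing in for a physical experiment, and the "AI" is a simple score table rather than a trained model.

```python
import random

def closed_loop_screen(n_candidates=100, n_cycles=20, seed=0):
    """Toy sketch of an AI-plus-robot experimentation loop.

    An AI component scores candidate compounds and picks the most
    promising untested one; a simulated robot assay measures it; the
    measurement is fed back so the scores improve over time.
    """
    rng = random.Random(seed)
    # Ground truth known only to the simulated robot assay.
    true_activity = [rng.random() for _ in range(n_candidates)]
    scores = [0.5] * n_candidates        # uninformative prior belief
    untested = set(range(n_candidates))
    log = []
    for _ in range(n_cycles):
        # 1. AI step: choose the untested candidate with the highest score.
        candidate = max(untested, key=lambda c: scores[c])
        # 2. Robot step: run the (simulated) physical experiment.
        measured = true_activity[candidate]
        untested.discard(candidate)
        # 3. Feedback step: record the result to inform the next choice.
        scores[candidate] = measured
        log.append((candidate, measured))
    best_candidate, best_activity = max(log, key=lambda r: r[1])
    return best_candidate, best_activity, log
```

Because the loop logs every procedure and result as it runs, it also illustrates the reproducibility benefit noted in the last bullet: the complete experimental record is a by-product of automation rather than an extra cost.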

Challenges still exist in using AI and ML in science. Scientific models developed by ML are not always explainable. This is partly because ML poses general challenges of interpretability. It is also because laws that underlie an AI/ML-derived model might depend on knowledge that scientists do not yet possess. Furthermore, some scientific laws might be so complex that, if discovered by an AI/ML system, experts would still struggle to understand them (Butler et al., 2018).

As AI plays a greater role in science, certain policies will grow in importance. These include policies that affect access to high-performance computing (HPC) (the computational resources essential to some leading-edge fields of research, including in AI, can be extremely expensive), skills (discussed later in this chapter), and access to data (such as standardisation for machine readability of scientific datasets). Policies on access to data not only matter for training AI systems, and for the scope of scientific problems on which AI can operate, they also matter for reproducibility. Without access to underlying data, the validity of conclusions arrived at by complex algorithms – some of which may already have a “black box” character – will be open to question. AI in science also raises new and so far unanswered questions: for instance, Should machines be included in academic citations? Will intellectual property (IP) systems need adjustment in a world in which machines can invent?

Digitalisation and innovation in firms

Digitalisation is also shaping innovation throughout the economy, generating new digital products and services and enhancing traditional ones with digital features. Chapter 4 shows that four trends characterise innovation in the digital age: data are a key innovation input; digital technologies enable services innovation; innovation cycles are speeding up; and, digital technology is making innovation more collaborative. The following paragraphs describe these four trends.

Innovation processes increasingly rely on data. Firms use data to explore product and service development and to gain insight into market trends; to understand the behaviour of competitors; to optimise development, production and distribution processes; and to tailor products and services to specific or fluctuating demand.

More diverse and voluminous types of data have driven the development of new business models. Such models include peer-to-peer accommodation (e.g. Airbnb), on-demand mobility services (e.g. Uber), platforms to search, compare and book accommodation and transportation options, digitalised invoice discounting (e.g. Due Course) and digital co-operatives (the latter described in Scholz and Schneider, 2019). All these new business models are enabled by the availability of real-time data, often in large volumes, and the capacity to analyse them.

Digital technologies also facilitate services innovation. Examples include new digitally enabled services, such as predictive maintenance services using the Internet of Things (IoT) and web-based business services. Manufacturers increasingly offer services enabled by digital technology to complement the goods they produce, and service providers increasingly invest in digital technology to improve their activities. Large retailers, for instance, invest intensively in the IoT to improve inventory management.

Digital innovations such as generative design software and three-dimensional (3D) printing speed innovation cycles by accelerating product design, prototyping and testing. ICTs also enable the market launch of product beta versions that can be updated to incorporate consumer feedback. For example, GE Appliances’ FastWorks system involves consumers early in the development of new products such as refrigerators.

Digital technology is also making innovation ecosystems more open and diverse. Firms increasingly interact with research institutions and other firms for three reasons. First, they gain access and exposure to complementary expertise and skills. Second, collaboration helps share the costs and risks of uncertain investments in digital innovation. Third, reduced costs of communication allow greater interaction, regardless of location. One example of a collaboration using digital technology is the SmartDeviceLink Consortium, an open-source platform for smartphone app development for vehicles created by Ford and Toyota.

Does innovation policy need to be adapted for the digital age?

Innovation increasingly involves the creation of digital products and processes. Consequently, policies for innovation need to align with generic features of digital technology. In this connection, Chapter 4 proposes overarching considerations for policy design. These considerations include access to data for innovation; providing suitably designed support and incentives for innovation and entrepreneurship; ensuring that innovation ecosystems support competition; and supporting collaboration for innovation. The following paragraphs further describe these considerations.

Ensuring access to data for innovation

To favour competition and innovation, data access policies should aim to ensure the broadest possible access to data and knowledge (incentivising sharing and reuse). At the same time, they must respect constraints regarding data privacy, ethics, intellectual property rights (IPRs), and economic costs and benefits (i.e. incentives to produce data). To foster data-driven innovation, some governments provide access to data generated by public services, such as urban transportation. Policy can also facilitate the emergence of markets for data.

Restricting cross-border data flows could be harmful. Manufacturing, for instance, creates more data than any other sector of the economy, and cross-border data flows are set to grow faster than world trade itself (Chapter 5). Research suggests that restricting such flows, or making them more expensive, for instance by obliging companies to process customer data locally, can raise firms’ costs and increase the complexity of doing business. This is especially the case for small and medium-sized enterprises (SMEs).

As businesses innovate with data, new policy issues are likely to arise. One such issue is whether firms should have legal data portability rights. Companies such as Siemens and GE are vying for leadership in online platforms for the IoT. Such platforms will become repositories of important business data. If companies had portability rights for non-personal data, competition among platforms could grow, and switching costs for firms could fall. Another incipient policy issue concerns the treatment of non-personal sensor data. Individual machines can contain multiple components made by different manufacturers, each with sensors that capture, compute and transmit data. This raises legal issues. For example, which legal entities should have rights to own machine-generated data and under what conditions? Who owns rights to data if a business becomes insolvent? More broadly, are provisions needed to protect data transmitted in value chains – say, between contractors and sub-contractors – from sale to or use by third parties?

Providing the right support and incentives for innovation and entrepreneurship

Governments need to be flexible and alert to change as innovation agendas evolve quickly. One approach to achieving policy responsiveness is the deployment and monitoring of small policy experiments, after which policies might be scaled up or down. In a context of rapid change, application procedures for innovation support instruments also need to be streamlined. For example, the Pass French Tech programme offers fast-growing start-ups simplified and rapid access to services (e.g. in financing, innovation and business development).

Policies should also address services innovation. Relevant measures might include projects to develop entirely new services using digital technologies, such as the Smart and Digital Services Initiative in Austria. Other potential measures include policies to help manufacturing SMEs develop new services related to their products (e.g. service design vouchers for manufacturing SMEs in the Netherlands).

Ensuring that innovation ecosystems support competition

Markets in which digital innovation is important are subject to rapid innovation (a source of competition) and scale economies (a source of persistent concentration). Competition authorities and innovation policy makers should work together to ensure the contestability of these markets. They should also address the role of data as a source of market power.

Supporting collaboration for innovation

Digital technology permits new ways for firms and institutions to collaborate for innovation. These new mechanisms include crowdsourcing, open challenges and so-called living labs. The latter typically involve concurrent research and innovation processes within a public-private-people partnership. New research and innovation centres, often public-private partnerships, help multidisciplinary teams of public researchers and businesses work together to address technology challenges. Such centres often have innovative organisational structures. Examples include Data61 in Australia and Smart Industry Fieldlabs in the Netherlands.

Digitalisation and the next production revolution

Digital technologies are at the heart of advanced production (Chapter 5). The widely used term “Industry 4.0” refers to a new paradigm in which all stages of manufacturing are controlled and/or connected by digital technology. These stages range from product design, fabrication and assembly to process control, supply-chain integration, industrial research and product use. Industry 4.0 technologies can raise productivity in many ways, from reducing machine downtime when intelligent systems predict maintenance needs, to performing work faster, more precisely and consistently with increasingly autonomous, interactive and inexpensive robots. The digital production technologies in question are evolving rapidly. For instance, recent innovations permit 3D printing with novel materials such as glass, printing strands of DNA, and even, most recently, printing on gels using light (OECD, 2017; Castelvecchi, 2019).

AI in production

With the advent of deep learning using artificial neural networks – the main source of recent progress in AI – AI is finding applications in most industrial activities. Such uses range from optimising multi-machine systems to enhancing industrial research. Beyond production, AI is also supporting functions such as logistics, data and information retrieval, and expense management.

Several types of policy affect the development and diffusion of AI in industry. These include policies for education and training; access to expertise and advice; research support; digital security; and liability rules (which particularly affect diffusion). In addition, while AI entrepreneurs might have the knowledge and financial resources to develop a proof-of-concept for a business, they sometimes lack the hardware and hardware expertise to build an AI company. As Chapter 5 describes, governments can help resolve such constraints.

Without large volumes of training data, many AI/ML models are inaccurate. Often, training data must be refreshed monthly or even daily. Data can also be scarce because many industrial applications are new or bespoke. Research may find ways to make AI/ML systems less data-hungry (and in some cases artificially created data can be helpful). For now, however, training data must be cultivated for most real-world applications. But many industrial companies do not have the in-house capabilities to exploit the value in their data, and are understandably reluctant to let others access their data. As Chapter 5 describes, some public programmes exist to bridge the gap between company data and external analytic expertise. In addition, to help develop and share training data, governments can work with stakeholders to develop voluntary model agreements and programmes for trusted data sharing. More generally, governments can promote open-data initiatives and data trusts, and ensure that public data exist in machine-readable formats. While such actions are not usually aimed at industry, they can be helpful to industrial firms in incidental ways (for instance in research, or in demand forecasting that draws on economic data).
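
The point about artificially created data can be made concrete with a toy sketch (all values are hypothetical): scarce real sensor readings are supplemented with jittered synthetic copies to enlarge a training set.

```python
import random

def augment(readings, copies=3, noise=0.05, seed=0):
    """Return the original readings plus noisy synthetic variants.
    The jitter range and copy count are illustrative assumptions."""
    rng = random.Random(seed)
    synthetic = [
        value * (1 + rng.uniform(-noise, noise))
        for value in readings
        for _ in range(copies)
    ]
    return readings + synthetic

real = [20.1, 19.8, 20.5]      # e.g. temperature readings from one machine
training_set = augment(real)
print(len(training_set))       # 3 real + 9 synthetic = 12
```

Real industrial practice would use domain-appropriate augmentation (or simulation), but the principle is the same: synthetic variants stretch limited data further.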

New materials and nanotechnology

Advances in scientific instrumentation, such as atomic-force microscopes, and progress in computational simulation, have allowed scientists to study materials in more detail than ever before. Powerful computer modelling can help build desired properties such as corrosion resistance into new materials. It can also indicate how to use materials in products.

Professional societies are working hard to develop a materials-information infrastructure to support materials discovery. This includes databases of materials’ behaviour, digital representations of materials’ microstructures and predicted structure-property relations, and associated data standards. Policy co-ordination at national and international levels could enhance efficiency and avoid duplicating such infrastructures.

Closely related to new materials, nanotechnology involves the ability to work with phenomena and processes occurring at a scale of 1 to 100 nanometres (billionths of a metre). The sophistication, expense and specialisation of tools needed for research in nanotechnology – some research buildings must even be purpose-built – make inter-institutional collaboration desirable. Publicly funded R&D programmes on nanotechnology could also allow collaboration with academia and industry from other countries. The Global Collaboration initiative under the European Union’s Horizon 2020 programme is an example of this approach.

Developing digital skills

Digitalisation raises demand for digital skills. For example, rapid improvements in AI systems have led to an overall scarcity of AI skills. Occupations like “industrial data scientist” and “bioinformatics scientist” are recent, reflecting a rate of technological change that is generating skills shortages. A dearth of data specialists is impeding the use of data analytics in business. Some countries also have too few teachers of computer programming (Stoet, 2016). A shortage of cybersecurity experts has led at least one university to recruit students to protect itself against hackers (Winick, 2018). Furthermore, the general-purpose nature of digital technology means that skills required to be a good scientist are also increasingly attractive in industry, adding to competition for talent (Somers, 2018).

Rising demand for digital skills has implications for income distribution and economic productivity. In terms of income distribution, for instance, lack of ICT skills in low-skilled adult populations in semi-skilled occupations places this demographic group at high risk of losing jobs to automation. In terms of productivity, the ability of education and training systems to respond to changing skills demand affects the pace of technology adoption.

Education and training systems must draw on information from all social partners

Skills forecasting is prone to error. Just a few years ago, few could have foreseen that smartphones would so quickly disrupt, and in some cases end, a wide variety of products and industries, from notebook computers and personal organisers to niche industries making musical metronomes and hand-held magnifying glasses (functions now available through mobile applications). Because foresight is inherently uncertain, education and training systems should draw on information about skills needs from businesses, trade unions, educational institutions and learners. Students, parents and employers also need access to data with which to judge the performance of educational institutions. In turn, resources in educational and training systems must flow efficiently to courses and institutions that best match skills demand. Institutions that play such roles include Sweden’s job security councils and SkillsFuture Singapore.

New courses and curricula may be needed

New courses and curricula may be needed to keep pace with rapid changes brought on by digitalisation. Advances in digital technology may require entirely new fields of tuition, such as dedicated programmes for the autonomous vehicle industry. Existing curricula may also need to change. For example, software engineers are effectively becoming social engineers. Society might benefit if they were to learn civics and philosophy, subjects rarely taught in science, technology, engineering and mathematics programmes (Susskind, 2018).

In many countries, schools do not teach logic, and universities rarely teach logic outside of specialised courses. As a result, too few students learn the fundamental role of logic in AI. Many schools barely teach data analysis (King and Roberts, 2018). Various parts of this report also emphasise the need for greater multidisciplinary education. For instance, the bioeconomy increasingly requires degree programmes that combine biology, engineering and programming (Chapter 6). In addition, in many countries, male students far outnumber female students in some subjects, including AI. One recent survey of 23 countries found that, on average, 88% of AI researchers were male (Mantha and Hudson, 2018).

Lifelong learning must be an integral part of work

In a context of significant technological change, lifelong learning must be an integral part of work. Achieving this demands greater collaboration between government and social partners to develop and/or fund appropriate programmes. Strong and widespread literacy, numeracy and problem-solving skills are critical, because these foundation skills provide the basis for subsequent acquisition of technical skills, whatever they may be in the future. Working with social partners, governments can help develop entirely new training programmes, such as conversion courses in AI for those already in work, and ensure effective systems of certification. Beyond technical know-how, workforce education can help impart other important skills, such as the ability to work well in teams and in complex social contexts, to be creative and exercise autonomy.

Many countries have far-reaching programmes to develop digital technology skills. Using online tuition, Finland aims to teach every citizen the basics of AI. All Finnish students in initial education learn coding. Estonia is using public-private partnerships to teach coding and robotics. And the United Kingdom’s government recently committed up to GBP 115 million (EUR 134 million) for 1 000 students to complete doctoral degrees in AI. Digital technology is also creating novel ways to deliver skills (Box 1.2).

Box 1.2. Using digital technology to deliver skills

Digital technologies are beginning to facilitate skills development in new ways. In 2014, for example, Professor Ashok Goel and graduate students at the Georgia Institute of Technology created an AI teaching assistant – Jill Watson – to respond to online student questions. For months, students were unaware that the responses were non-human. iTalk2Learn, a European Union project, aims to develop an open-source intelligent platform for mathematics tutoring in primary schools. Researchers at Stanford University are developing systems to train crowdworkers using machine-curated material generated by other crowdworkers. In France, on an experimental basis, haptic technology – which allows a remote sense of touch – has shortened the time required to train surgeons, and promises many other applications.

Augmented reality (AR) uses computer vision to overlay objects in the user’s field of view with data and annotations (such as service manual instructions). Tesla has applied for a patent for an “Augmented Reality Application for Manufacturing”, built into safety glasses. With AR, skills such as those needed to repair breakdowns in complex machine environments will effectively become downloadable.

Virtual reality (VR) environments could improve the speed and retention of learning, as early industrial deployments suggest. Using VR, Bell Helicopter reports reducing a typical six-year aircraft design process to six months. Furthermore, Walmart has put 17 000 VR headsets in its US stores for training. VR could also permit safe and low-cost “learning by doing” for beginners in fields where real practice is otherwise too dangerous or expensive.

The declining cost of VR and AR, and the integration of AR into mobile devices, should lower barriers to public participation in education, training and research. Elon Musk, for example, promises a high-definition VR live-stream of a future SpaceX moon mission (Craig, 2018).

Facilitating the diffusion of digital technologies and tools

Most countries, regions and companies are primarily technology users, rather than technology producers. For them, technology diffusion and adoption should be priorities. Technology diffusion helps raise labour productivity growth and may also lower inequality in wage growth rates. Policy makers tend to acknowledge the importance of technology diffusion, but to underemphasise it in the overall allocation of resources.

Even in the most advanced economies, diffusion of technology can be slow or partial. For example, a survey of 4 500 German businesses in 2015 found that only 4% had implemented digitalised and networked production processes or had plans to do so (Chapter 5). One recent study examined 60 manufacturers in the United States with annual turnovers of between USD 500 million and USD 10 billion. The study found that just 5% had mapped where AI opportunities lie within their company and were developing a strategy for sourcing the data AI requires, while 56% had no plans to do so (Atkinson and Ezell, 2019).

New digital technologies may make diffusion more difficult

Certain features of new digital technologies could hinder diffusion. As technology becomes more complex, potential users must often sift through burgeoning amounts of information on rapidly changing technologies and knowledge requirements. Once the technology is chosen, deployment can pose difficulties as well. Even the initial step of collecting sensor data can be daunting. A typical industrial plant, for example, might contain machinery of many vintages from different manufacturers. This machinery may have control and automation systems from different vendors, all operating with different communication standards. To deploy AI, firms must often invest in costly information technology upgrades to merge data from disparate record-keeping systems (consumer and supply-chain transactions are often separate, for instance). Firms also have unique challenges – from proprietary data types to specific compliance requirements. These conditions may require further research and customisation (Agrafioti, 2018). Difficulties in determining the rate of return on some AI investments may also hinder adoption. Furthermore, to understand how an AI system works, staff may need to take time away from other critical tasks (Bergstein, 2019). In addition, the expertise required for all of the above is scarce.
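
The record-merging hurdle described above can be made concrete with a small, entirely hypothetical sketch: two in-house systems (here, consumer transactions and supply-chain records) name and format their order identifiers differently, so records must be normalised onto a common key before the combined data can feed an AI system.

```python
# Hypothetical data from two separate record-keeping systems.
consumer_system = [
    {"OrderRef": "A-001", "customer": "acme", "amount": 120.0},
    {"OrderRef": "A-002", "customer": "globex", "amount": 75.5},
]
supply_chain_system = [
    {"order_id": "a-001", "warehouse": "DE-1", "shipped": True},
    {"order_id": "a-002", "warehouse": "FR-2", "shipped": False},
]

def normalise_key(raw):
    """Map both systems' identifiers onto one canonical form."""
    return raw.strip().upper()

def merge_records(consumer, supply):
    """Join the two systems' records on the normalised order identifier."""
    supply_by_key = {normalise_key(r["order_id"]): r for r in supply}
    merged = []
    for record in consumer:
        combined = dict(record)
        combined.update(supply_by_key.get(normalise_key(record["OrderRef"]), {}))
        merged.append(combined)
    return merged

merged = merge_records(consumer_system, supply_chain_system)
print(merged[0]["warehouse"])  # prints DE-1
```

In practice each extra vendor, machine vintage or communication standard multiplies the normalisation work, which is one reason the upgrades mentioned above are costly.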

Institutions for diffusion can be effective, if well designed

As Chapters 4 and 5 discuss, various micro-economic and institutional settings facilitate diffusion. These range from supportive conditions for new-firm entry and growth, to economic and regulatory frameworks for efficient resource allocation. In addition to enabling framework conditions, effective institutions for technology diffusion also matter. Institutions for diffusion range from applied technology centres, such as the Fraunhofer Institutes in Germany, to open technology mechanisms, such as the BioBricks Registry of Standard Biological Parts.

New diffusion initiatives are emerging, often involving partnership-based approaches. An example is the US National Network for Manufacturing Innovation (NNMI). The NNMI uses private non-profit organisations as the hub of a network of company and university organisations to develop standards and prototypes in areas such as 3D printing and digital manufacturing. Some initiatives aim to facilitate the testing of new digital technology applications, such as by creating test beds, regulatory sandboxes, and state-of-the-art facilities as well as providing expertise. As Chapter 4 describes, the Industry Platform 4 FVG, in the Italian region of Friuli Venezia Giulia, is an example of an institution that offers access to testing equipment, prototyping tools and demonstration labs.

To strengthen science, and the interface between science and industry, governments should also support platform technologies. These could include biofoundries, distributed R&D networks, data curation and digital/genetic data storage. This is a public role because the associated investment risks are too high for the private sector. Moreover, for the private sector such investments may not provide a clear route to market.

Technology diffusion institutions need realistic goals and time horizons

Effective diffusion is more likely under two conditions. First, technology diffusion institutions must be empowered and resourced to take longer-term perspectives. Second, evaluation metrics must emphasise longer-run capability development rather than incremental outcomes and revenue generation. Introducing new ways to diffuse technology also takes experimentation. Yet many governments want quick and riskless results (Shapira and Youtie, 2017).

Diffusion in SMEs involves particular challenges. In Europe, for example, as Chapter 5 describes, 36% of surveyed companies with 50 to 249 employees use industrial robots compared to 74% of companies with 1 000 or more employees. Such discrepant patterns of technology use reflect, among other reasons, the more limited availability of digital skills in SMEs. For instance, only around 15% of European SMEs employ ICT specialists compared to 75% of large firms (Box 1.3). As Chapter 4 discusses, traditional instruments to foster technology adoption by SMEs – such as innovation vouchers and training – have been redesigned to meet specific challenges of the digital age, and often use digital tools themselves (for example, the SME 4.0 Competence Centres in Germany).

Box 1.3. Diffusing digital technology to SMEs: Some key considerations

Various measures can help diffuse digital technology to SMEs, including:

  • Systematising key information for SMEs. For example, Germany’s Industry 4.0 initiative has documented over 300 use cases of digital industrial technologies, along with contacts to experts.

  • Providing information on the expected return on investments in new technologies, as well as information on essential complementary organisational and process changes.

  • Providing signposts to reliable sources of SME-specific expertise, because the skills to absorb information are scarce in many SMEs. For example, as part of its “SMEs Go Digital Programme”, Singapore’s TechDepot provides a list of pre-approved digital technology and service solutions suited to SMEs. And Tooling U-SME, an American non-profit organisation owned by the Society of Manufacturing Engineers, provides online industrial manufacturing training and apprenticeship programmes.

  • Providing facilities where SMEs can test varieties and novel combinations of equipment to help de-risk prospective investments.

Committing to public sector research

The technologies discussed in this publication have arisen because of advances in scientific knowledge and instrumentation. Publicly financed basic research has often been critical. For decades, for example, public funding supported progress in AI, including during periods of unproductive research. Today, AI attracts huge private investment. In this context, a recent hiatus – and in certain cases decline – in government support for research in some major economies is a concern (Figure 1.1).

Figure 1.1. Trends in total R&D performance, OECD countries and selected economies, 1995-2015
As a percentage of GDP

Note: R&D = research and development; GDP = gross domestic product.

Source: OECD (2017), OECD Science, Technology and Industry Scoreboard 2017: The Digital Transformation.


Multidisciplinary research

Various chapters in this publication stress the importance of multidisciplinary research. The importance of understanding the interplay between disciplines reflects the need to address complex and cross-cutting problems, the fact that new disciplines are born as knowledge expands, and the increased complexity of scientific equipment. It also reflects the frequent need to bring together different digital technologies. For example, developing the potential of haptic technologies – not least for uses in education and training – requires the combination of electrical engineering (communications, networking), computer science (AI, data science) and mechanical engineering (kinaesthetic robots) (Dohler, 2017).

Policies on hiring, promotion and tenure, and funding systems that privilege traditional disciplines, may impede interdisciplinary research. Scientists need to know that working at the interface between disciplines will not jeopardise opportunities for tenure. Institutions that demonstrably support multidisciplinary research can provide useful lessons. Such cases include the United Kingdom’s Interdisciplinary Research Collaborations, networks in Germany to support biomedical nanotechnology, and individual institutions such as Harvard’s Wyss Institute for Biologically Inspired Engineering.

Public-private research partnerships

The complexity of some emerging digitally based technologies exceeds the research capacities of even the largest individual firms. This necessitates a spectrum of public-private research partnerships. For example, materials science relies on computational modelling, enormous databases of materials’ properties and expensive research tools. It is almost impossible to assemble an all-encompassing materials science R&D infrastructure within any single company or institute.

Many possible targets exist for government R&D and commercialisation efforts to continue progress in the digital revolution. These range from quantum computing to new mathematics for big data. Box 1.4 presents a small selection of these ideas.

Box 1.4. Public research goals relevant to the digital transformation of STI

Responding to the end of Moore’s Law. In many digital devices, processing speeds, memory capacity, sensor density and accuracy, and even numbers of pixels are linked to Moore’s Law (the law asserts that the number of transistors on a microchip doubles about every two years). Atomic-level phenomena now limit the shrinkage of transistors on integrated circuits. Some experts believe a new computing paradigm is needed. The current computing paradigm is based on von Neumann’s design of the electronic computer. This architecture involves a channel for instructions that pass through one or more central processing units (CPUs) that retrieve data, compute and store results. This architecture, in which CPUs are a bottleneck, has not changed since 1948 (Damer, 2018). Hopes for significant advances in computing rest on research breakthroughs in optical computing (using photons instead of electrons), biological computing (storing data in and calculating using segments of DNA) and quantum computing.
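
The doubling rule mentioned above can be expressed as a one-line projection. The sketch below is purely arithmetic, with illustrative inputs rather than historical data: ten two-year doublings multiply a transistor count by 2^10 = 1 024.

```python
def moores_law_projection(start_count, start_year, end_year, period=2):
    """Project a transistor count assuming one doubling every `period` years."""
    doublings = (end_year - start_year) / period
    return start_count * 2 ** doublings

# Ten doublings over 20 years: the count grows 1 024-fold.
growth = moores_law_projection(1, 2000, 2020)
print(growth)  # prints 1024.0
```

The same compounding is why even a modest slowdown in the doubling period – from atomic-scale limits on transistor shrinkage – accumulates into a large shortfall over a decade or two.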

Advancing the development of quantum computing, communication and information. Until recently, quantum technology was largely a theoretical possibility, but Google, IBM and others are now trialling practical applications. In 2017, Biogen worked with Accenture and quantum software company 1QBit on a quantum-enabled application to accelerate drug discovery. Quantum technologies, if successful, could revolutionise certain types of computing. This would have strategic consequences for secure communication. Quantum computing still involves major research and technical challenges. For example, most of today’s quantum devices require operating temperatures near absolute zero and depend on the development of new materials. Quantum computing, communication and information is becoming a priority for a number of governments. China plans to open a National Laboratory for Quantum Information Sciences in 2020, with USD 10 billion of investment.

Creating more capable AI. Brooks (15 July 2018) observes that AI does not yet possess the object recognition of a two-year-old, the language understanding of a four-year-old, the manual dexterity of a six-year-old or the social understanding of an eight-year-old. While businesses far outspend governments on R&D for AI, much of this R&D focuses on application rather than breakthroughs in knowledge. Furthermore, Jordan (2018) observes that much research on human-like AI is not directly relevant to the major challenges involved in building safe intelligent infrastructures such as in medical or transport systems. Unlike human-imitative AI, such critical systems must have the ability to cope with “cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries.”

Many research challenges are important for public policy. These range from the explainability of AI, to the robustness of AI systems (image-recognition systems can easily be misled), to how much a priori knowledge AI might need to perform difficult tasks. Jordan (2018) also describes a number of major open research questions in classical human-imitative AI research. These include the need to bring meaning and reasoning into systems that perform natural language processing and the need to infer and represent causality.

Developing technology- and sector-specific capabilities in government

Understanding major technologies is particularly important when these evolve quickly. For instance, one leading authority argues that converging developments in several technologies are about to yield a “Cambrian explosion” in robot diversity and use (Pratt, 2015). Without governments fully understanding technologies and sectors, strategic opportunities to benefit from digital technologies might be lost.

Chapter 5 describes an example of this challenge. Technical and sector experts in the United States understand that a strategic opportunity exists to use metals-based 3D printing in commercial aviation. However, as an immature technology, metals-based 3D printing does not meet the stringent tolerances and high reliability needed in aviation. Targeted policy could change this, with measures ranging from funding and curating databases on materials’ properties, to brokering essential data-sharing agreements (DSAs) across government laboratories, academia and users of metals-based 3D printing. Perceiving and successfully acting on such opportunities require technical and sectoral expertise.

Regulation, when used, also needs deep technology and industry-specific understanding. The effects of regulation on innovation can be complex, of uncertain duration and ambiguous, making them difficult to predict. Calls to regulate AI highlight the need for expertise in government, so that any regulation of this fast-evolving technology does more good than harm. Developments in fast-changing technologies such as AI may also require that regulatory processes become more anticipatory and innovative. As Chapter 4 describes, three policy domains require a sectoral approach for designing new initiatives: data access policies, given the diversity of data types in different sectors; digital technology adoption and diffusion policies; and policies supporting the development of sectoral applications of digital technologies.

Technical expertise in government will help avoid unrealistic expectations about new technologies, especially those emerging from science (such as quantum computing). New discoveries and technologies often attract hyperbole. As recently as six years ago, for example, massive open online courses (MOOCs) were widely held to represent a democratising transformation in postsecondary education. However, recent research shows that less than 12% of MOOC students return for a second year, and most students come from affluent families in rich countries (Reich and Ruipérez-Valiente, 2019).

Similarly, many hailed Bitcoin as the democratisation of money. Indeed, a 2013 article in WIRED called Bitcoin “the great equalizer” (Hernandez, 2013). However, by 2017 just 1 000 users owned 40% of Bitcoin (Kharif, 2017). Public discussion of AI also involves wildly varying accounts of its likely impacts. AI-related hyperbole may even have particular psychological roots: experiments show that subjects unconsciously anthropomorphise AI and robots (Fussell et al., 2008).

Effective sectoral support requires, as a first step, mechanisms to strengthen policy intelligence. As Chapter 4 discusses, these mechanisms include roadmaps or sectoral plans prepared with industry and social partners. One example is the Sector Competitiveness Plans developed by Industry Growth Centres in Australia. Developing a shared vision for the future, with industry and social partners, is also useful.

Ensuring access to complementary infrastructures

Certain types of infrastructure are essential complements to digital technology. These include HPC, cloud computing and fibre-optic connectivity. HPC is increasingly important for firms in industries ranging from construction and pharmaceuticals to the automotive sector and aerospace. In manufacturing, the use of HPC is going beyond applications such as design and simulation to encompass real-time control of complex production processes. However, like other digital technologies, manufacturing’s use of HPC falls short of potential. A number of possible ways forward exist. SMEs could receive low-cost, or free, limited experimental use of HPC, while online software libraries/clearing houses could help disseminate innovative HPC software to a wider industrial base.

Industry 4.0 requires increased data sharing across production sites and company boundaries (Chapter 5). For example, BMW aims to know the real-time status of production equipment at every company that produces key components for its vehicles. Increasingly, machine data and data analytics, and even monitoring and control systems, will operate in the cloud. The cloud will also allow independent AI projects to start small, and scale up or down as required. Indeed, Fei-Fei Li, formerly chief AI scientist at Google Cloud, argues that cloud computing will democratise AI.3 Cloud computing will also increasingly help data sharing and analysis in science: Amazon Web Services, for instance, participates in the 1 000 Genomes Project, helping researchers to access and analyse vast amounts of cloud-based genetic data. However, cloud use varies greatly between small and large firms, and across countries. For example, only 20% of Austrian manufacturers used cloud computing in 2016. By comparison, in Finland, the country with the highest incidence of cloud use in manufacturing in the OECD, the rate was 69% (OECD, 2018b).

Broadband networks – especially fibre-optic connectivity – are also essential to Industry 4.0. Policy priorities here include overhauling laws governing the speed and coverage of communication services. Policies to promote competition and private investment, as well as independent and evidence-based regulation, have also helped extend coverage. In addition, new technology could expand services in underserved areas. A case in point is the delivery of broadband through “White Spaces”, the gaps in radio spectrum between digital terrestrial television channels.

Improving digital security

Among other issues, digital technology is creating wholly new sources of risk. For example, as Chapter 5 observes, with respect to new materials, a novel risk could arise because, in a medium-term future, materials development processes based on computer simulations could be hackable. Chapter 6 notes that bio-production relies heavily on data, IP and research, all of which need protection from cyber-attack. Companies in the bioeconomy are elevating cybersecurity to a strategic imperative, but at a pace that lags behind their desire to adopt digital technologies. Enhancing trust in digital services is also critical to data sharing and, in some countries, uptake of cloud services.

While challenging to measure, digital security incidents appear to be increasing in sophistication, frequency and impact. New digital security solutions are emerging, such as homomorphic encryption, through which data remains encrypted even when being computed on in the cloud. The technological race between hackers and their targets is nevertheless unrelenting. Government awareness-raising initiatives are important. SMEs, in particular, need to introduce or improve their digital security risk management practices.
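The homomorphic property mentioned above can be illustrated with a toy version of the Paillier cryptosystem, which is additively homomorphic. The parameters below are tiny primes chosen purely for illustration; real deployments use moduli thousands of bits long, and cloud services typically rely on more advanced (fully homomorphic) schemes:

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic encryption.
# Tiny primes for illustration only -- never use parameters this small.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Multiplying two ciphertexts adds the underlying plaintexts:
# the "cloud" computes on data it cannot read.
c1, c2 = encrypt(5), encrypt(7)
assert decrypt((c1 * c2) % n2) == 12
```

The point relevant to digital security policy is that the party holding c1 and c2 can compute an encrypted sum without ever learning the values 5 and 7.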

Chapter 6 suggests that governments could encourage timely sharing of information on digital security threats. Public sector actors could also run cyber-attack simulations and share the lessons learned. Voluntary standards, regulations, industry programmes and information-sharing networks could draw attention to digital security enhancements. In addition, in public-private research partnerships, individual facilities could be encouraged to develop and validate methods for staff or external service providers to strengthen digital security. OECD (2019) includes detailed recommendations on digital security. These focus on managing rather than eliminating digital security risk – among individuals, firms and governments – because some degree of risk is inevitable.

Examining intellectual property systems in light of digitalisation

New digital technologies are raising new challenges for IP systems. 3D printing, for example, might create complications in connection with patent eligibility. For instance, if 3D-printed human tissue improves upon natural human tissue, it may be eligible for patenting, even though natural human tissue is not. Ensuring legal clarity around IPRs is also important for 3D printing of spare parts (when printed by anyone other than the original equipment manufacturer).

More fundamentally, a world in which machines can invent could require new patenting frameworks. For example, AI systems that automatically – and unpredictably – learn from many publicly available sources of information could complicate the task of identifying deliberate infringements of patent laws. In another example, a licensor might hold IP rights on an AI system and license this. The licensee might run the AI system using data on which it too has IP rights (as certain jurisdictions permit protection of data ownership). This might lead to an improvement in the AI system. A conflict might thereby arise with respect to ownership of the improved AI. Current IP law is also silent on the issue of whether AI can itself acquire IP rights.

All the chapters in this report address different types of standards. For instance, Chapter 5 shows that Industry 4.0 currently involves more than 100 standards initiatives. Chapter 6 likewise explains that in the bioeconomy, standards for product and process interoperability directly affect issues of IP.

Countries and firms that play primary roles in setting international standards can enjoy advantages if new standards align with their own national standards and/or features of their productive base. The public sector’s role should be to encourage industry, including firms of different sizes, to participate at early stages in international (and in some cases national) standards setting. Dedicated support could aim to include under-represented groups of firms in standards development processes. Relevant public agencies should also pursue standards development in the research system.

Optimising digital systems to strengthen science and innovation policies

Chapter 7 examines digital science and innovation policy (DSIP) systems. DSIP systems use digital procedures and infrastructures to help formulate and deliver science and innovation policy. They are used to monitor policy interventions, develop new STI indicators, assess funding gaps, strengthen technology foresight, and identify leading experts and organisations. Data are mainly sourced from funding agencies (e.g. databases of grant awards), R&D-performing organisations, proprietary bibliometric and patent databases, and the web.

There are various types of DSIP systems. Databases of public funders are one type, of which Belgium’s Flanders Research Information Space (FRIS) is an example. The FRIS portal, launched in 2011, aims to accelerate innovation, support science and innovation policy making, share information on publicly funded research with citizens, and reduce the administrative burden of research reporting.

A second type of DSIP infrastructure is a Current Research Information System. Through the Estonian Research Information System (ETIS), for example, Estonian higher education institutions (HEIs) manage research information and showcase research. Public funders use ETIS to evaluate and process grant applications. National research assessments and evaluations also draw on ETIS.

A third type of DSIP infrastructure is what might be termed an “intelligent system”. For example, to examine the socio-economic impacts of research, Japan’s SciREX Policymaking Intelligent Assistance System (SPIAS) uses big data and semantic technologies (which aim to extract meaning from data). These technologies help to process data on Japan’s research outputs and impacts, funding, R&D-performing organisations and research projects.

Chapter 7 discusses three main challenges facing DSIP systems: ensuring the interoperability of diverse data sets; preventing misuses of DSIP systems in research assessments; and managing the roles of non-government actors, particularly the private sector, in developing and operating parts of DSIP systems. The following subsections briefly describe these three themes.

Ensuring interoperability in DSIP systems

DSIP systems pull data from multiple sources, linking them to gain policy insights that are otherwise impossible to achieve. But linking data is highly problematic, chiefly on account of different data standards. Recent years have seen attempts to establish international standards and vocabularies to improve data sharing and interoperability in science and research management. These include unique, persistent and pervasive identifiers, which assign a standardised code unique to each research entity, persistent over time and pervasive across datasets. Many DSIP infrastructures have adopted such standards to link data from universities, funding bodies and publication databases, thereby relating research inputs to research outputs.
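The role of unique, persistent identifiers can be sketched in a few lines. In this hypothetical example, the same researcher appears under different name strings in a grants database and a publications database, but a shared ORCID-style identifier (the values below are illustrative) allows the records to be linked reliably:

```python
# Hypothetical records: the name strings differ, the identifier does not.
grants = [
    {"orcid": "0000-0002-1825-0097", "name": "A. Smith", "grant": "G-42"},
    {"orcid": "0000-0001-5109-3700", "name": "B. Jones", "grant": "G-43"},
]
publications = [
    {"orcid": "0000-0002-1825-0097", "name": "Alice Smith", "doi": "10.1234/x"},
]

# Join research inputs (grants) to outputs (publications) on the
# identifier, not on the ambiguous name string.
linked = [
    (g["grant"], p["doi"])
    for g in grants
    for p in publications
    if g["orcid"] == p["orcid"]
]
assert linked == [("G-42", "10.1234/x")]
```

A join on names would miss the match (“A. Smith” versus “Alice Smith”), which is precisely the interoperability problem DSIP systems face at scale.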

Using DSIP systems in research assessment

Many metrics aim to quantify scientific quality, impact and prestige. More than half of the DSIP systems identified in OECD work play a role in research assessment. The growing digital footprint of academic and research activities suggests that, in future, most relevant dimensions of research activity might be represented digitally. In this connection, the altmetrics movement promotes metrics generated from social media as a type of evidence of research impact that is broader and timelier than academic citations. However, as with traditional metrics, questions remain over the extent to which altmetrics afford valid signals of research impact.

The roles of the business sector in DSIP

Non-government actors are emerging as a main force in DSIP systems. The large academic publishers Elsevier and Holtzbrinck Publishing Group, together with the analytics firm Clarivate Analytics, are particularly active in developing products and services into platforms that mimic fully fledged DSIP systems. Multinational corporations such as Alphabet and Microsoft, and national technology companies such as Baidu (China) and Naver (Korea), have also designed platforms to search academic outputs. In the future, these platforms could become key elements in national DSIP systems.

Harnessing these private sector developments in public DSIP systems has many potential benefits. Solutions can be implemented quickly and at an agreed cost, sparing the public sector the need to develop in-house skills beforehand. Private companies can promote interoperability through their standards and products, which can expand the scope and scale of data used in a DSIP system. However, outsourcing data management activities to the private sector may bring risks. These could include loss of control over the future development of DSIP systems, discriminatory access to data and even the emergence of private platforms that become dominant because of hard-to-contest network effects.

The outlook for DSIP systems

Governments need to shape DSIP ecosystems to fit their needs. This will require interagency co-ordination, sharing of resources (such as standard digital identifiers) and coherent policy frameworks for data sharing and reuse in the public sector. Since several government ministries and agencies formulate science and innovation policy, DSIP systems should involve co-design, co-creation and co-governance. In a desirable future, DSIP infrastructures will provide multiple actors in STI with up-to-date linked microdata. Policy frameworks will have resolved privacy and security concerns, and national and international co-operation on metadata standards will have addressed interoperability issues.

Digitalisation in science and innovation: Possible “dark sides”

The thrust of this report is that digitalisation offers many positive opportunities for STI, so long as complementary policies receive proper attention. This subsection considers the possibility of unwelcome outcomes from digitalisation in STI. These include widening capability gaps across countries and subnational regions, negative effects on science processes, excessive complexity in machine ecosystems, and risks that are diffuse, hard to foresee and primarily social. Evidence on the likelihood or scale of these undesirable outcomes is scant. A conclusion from this subsection, therefore, is the need for greater awareness and further study. Public concerns about automation, jobs and inequality, where the literature is vast, are not discussed.

Distributional effects and digitalisation of STI

Aspects of digitalisation could widen gaps in STI capability and income across countries and regions. Three possibilities are considered here:

Centralisation effects in science. Science is increasingly data-intensive (Hey, Tansley and Tolle, 2009). Developed countries have a comparative advantage in capital-intensive scientific tools that generate data. It is an open question whether these conditions might affect the broad geography of scientific activity. In one scenario, with suitable data access, developing-country researchers might be able to do science without making the sorts of capital investments made by developed countries. In another, researchers in developed countries might strengthen their existing advantages in leading-edge science. As a narrower but possibly related issue, laboratory automation is now essential to many areas of science and technology, but is expensive and difficult to use. Consequently, laboratory automation is most economical in large central sites, and companies and universities are increasingly concentrating their laboratory automation. The most advanced example of this trend is cloud automation in biological science. Biological samples are sent to a single site and scientists design their experiments using application programming interfaces (King and Roberts, 2018). The effect of such cloud-based possibilities on the overall dispersion or concentration of scientific work is unclear.

Effects on subnational geographies. The digital economy may exacerbate geographic disparities in income, as it amplifies the economic and social effects of initial skills endowments (Moretti, 2012). In many OECD countries, income convergence across subnational regions has either halted, or reversed, in recent decades (Ganong and Shoag, 2015). Among remedial policies, investments in skills and technology are most important, because investments in infrastructure and transport, while often beneficial, have diminishing returns (Filippetti and Peyrache, 2013).

Effects from supercomputing. Today, some supercomputers are designed specifically for AI. Previously, supercomputers were used mostly for modelling, such as in climate and nuclear science. Many tech companies are orienting towards supercomputing (Knight, 2017). Worldwide, however, only 27 countries possess a supercomputer listed among the top 500 most powerful. China, notably, has made major strides in building supercomputers with domestically produced components. China also boasts large numbers of supercomputers, along with abundant data to train AI algorithms. Might capabilities across countries diverge because of increasing synergy between supercomputing and AI? Will the value of owning/building increasingly powerful supercomputers change relative to using cloud-based computing?

Complex systems and unmanageable machine ecologies

Governments need improved understanding of complex systems (Nesse, 2014). As a wide array of critical systems becomes more complex, mediated and interlinked by code, the risk and consequences of vulnerabilities could increase. As code controls a growing number of connected systems, errors can cascade, with effects that become more extensive than in the past. For instance, owing to software faults, the United States recently experienced the first national – rather than local – 911 outages (Somers, 2017). Critical ICT systems might behave in unpredictable and even emergent ways, and the ability to anticipate failures in technology could diminish (Arbesman, 2016). A widely publicised case was the unexpected interaction of algorithms that contributed to the “Flash Crash” of May 2010, when more than USD 1 trillion in value was lost from global stock markets in minutes. However, many more examples exist of software errors that caused system failures. In 1996, for instance, the European Space Agency’s Ariane 5 rocket exploded on launch owing to a software glitch.4

AI and other measures will help to automate and improve software verification. Nevertheless, as the physicist Max Tegmark observes, “the very task of verification will get more difficult as software moves into robots and new environments, and as traditional pre-programmed software gets replaced by AI systems that keep learning, thereby changing their behaviour…” (Tegmark, 2017).

An inbuilt feature of technology is that it deepens complexity: systems accumulate parts over time, and more connections develop between those parts. Technologies that become more complex can end up depending on antiquated legacy systems. This is especially so for code. For example, in the lead-up to 1 January 2000, amid Y2K concerns, the US Federal Aviation Administration examined computers used for air traffic control. One type of machine required fixing, an IBM 3083 that had been installed in the 1980s. However, only two persons at IBM knew the machine’s software, and both had retired (Arbesman, 2016).

Negative impacts on science from digitalisation

This chapter has already described a number of challenges that digitalisation raises for science – from coping with predatory online science journals to keeping personal research data anonymous. Chapter 2 – on measurement – reports that a sizeable number of scientists think digitalisation will have at least some negative impacts on science. These potential impacts include the growth of hypothesis-free research in data-driven science, and divides in research between those who possess advanced digital competences and those who do not. Digitalisation could also encourage a celebrity culture in science, lead to premature diffusion of research findings and expose individuals to pressure groups. Other concerns are the use of readily available but inappropriate indicators for monitoring and incentivising research, and the potential concentration of workflows and data in the hands of a few companies providing digital tools.

Another potentially problematic issue is the misapplication of AI in science and society. The design and use of effective AI systems require expertise, which is scarce. Moreover, stricter requirements on performance, robustness, predictability and safety will increase the need for expertise. This is especially true for deep learning techniques that are now central to AI research and applications.

With expertise bottlenecks and, sometimes, unrealistic expectations about what AI can achieve, non-experts are increasingly deploying AI. Such systems often suffer from deficiencies in performance, robustness, predictability and safety – properties that even AI experts can struggle to ensure (Hoos, 2018). Hoos and others propose building a next generation of AI systems, known as Automated AI, as one way to alleviate the AI complexity problem. This could help develop and deploy accurate and reliable AI without the need for deep and highly specialised AI expertise. Automated AI builds on work on automated algorithm design and automated ML, which is developing rapidly in academia and industry (Hoos, 2012).
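The core idea behind automated ML can be sketched as a search over configurations that would otherwise be hand-tuned by an expert. The objective function below is a stand-in for a real validation score, and the parameter names are invented for illustration:

```python
import random

# Random search over a configuration space -- the simplest form of the
# automated algorithm configuration that underlies automated ML.
def validation_error(cfg):
    # Hypothetical response surface: error is minimised at lr=0.1, depth=4.
    return (cfg["lr"] - 0.1) ** 2 + (cfg["depth"] - 4) ** 2

random.seed(0)
best_cfg, best_err = None, float("inf")
for _ in range(200):  # evaluate 200 random configurations
    cfg = {"lr": random.uniform(0.001, 1.0), "depth": random.randint(1, 10)}
    err = validation_error(cfg)
    if err < best_err:
        best_cfg, best_err = cfg, err

# The search finds a good configuration (here, one with depth 4)
# without any expert hand-tuning.
assert best_err < 1.0
```

Production AutoML systems replace the random sampler with smarter strategies (e.g. Bayesian optimisation) and the toy objective with real model training, but the division of labour is the same: the machine, not the specialist, explores the design space.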

Wider risks linked to digital technology

Like all technology, digital technologies can help and harm. AI, for instance, can increase digital security by predicting where threats originate, but it can decrease digital security by adding intelligence to malware. Synthetic biology can help cure disease, but it can also make pathogens more virulent. Some risks of digital technology reflect complex interactions with social systems and as such may be impossible to foresee.

Today, one risk is the fragmentation of public discourse by social media. The future might also see a loss of trust in accredited information owing to high-fidelity audio and video fakes. In addition, the diminished economic viability of journalism and literary writing, a development attributed to digital technology, could have unwanted social and political effects (de León, 2019).

Harari (2018) even suggests the future of computing could shape the future of democracy. Autocracy, he notes, has generally failed in advanced economies, partly because information processing could not be centralised sufficiently. Decentralised information processing gives democracies an efficiency advantage. However, if AI comes to encompass ever more of the digital economy, it may have a centralising tendency. AI will also become more effective as data are concentrated. Harari (2018) suggests that finding ways to keep distributed data processing more efficient than centralised data processing could ultimately help safeguard democracy.

Policy makers can take additional steps to mitigate emerging risks brought on by the dual-use nature of technology. Past episodes in the history of science might provide useful lessons. The case of Paul Berg, the Nobel laureate who helped create recombinant DNA, is one example. Aware of the ramifications of his discovery, Berg convened the Asilomar Conference. This led to a moratorium on the most dangerous experiments until the science improved.

Policy makers can mitigate technological risk in several ways. They can earmark part of research budgets to study the broader implications of science. Engaging the public in debate, while avoiding hyperbole about technology, is useful. In addition, they can ensure that science advice is trustworthy. Investments in research and innovations that reduce risk (such as in cyber-security) might also help.

The untapped potential of digital technology for STI policy

This section explores new ideas for how digital technology might support policy for science and innovation. Earlier, Box 1.1 described new thinking on collective intelligence and the allocation of public research funds. Other examples considered here are prediction markets, various applications of blockchain, and using social media to increase exposure to innovation in a selective way. Some of these ideas have yet to receive significant attention, and few governments have experimented with the opportunities available.

Prediction markets for STI policy

Prediction markets, which involve trading bets on whether some specific outcome will occur, could inform STI policy. Prediction markets have outperformed the judgement of experts in forecasting outcomes in fields as diverse as sporting tournaments and political elections. They aggregate decentralised private information, which is captured in the changing price of the next bet on the outcome in question (in a similar way to a futures market). Prediction markets incentivise participants to find or generate new information (from which profit could derive). Recent experiments (see Dreber et al. [2019], Munafo et al. [2015], Dreber et al. [2015] and Almenberg, Kittlitz and Pfeiffer [2009]) show that prediction markets might accomplish the following:

  • Predict the results of otherwise expensive research evaluations (e.g. of HEIs).

  • Quickly and inexpensively identify research findings that are unlikely to replicate.

  • Help optimally allocate limited resources for replications.

  • Help institutions assess whether strategic actions to improve research quality are achieving their goals.

  • Test scientific hypotheses.

  • Help understand specific scientific processes. For instance, a research project could be examined alongside a history of the project’s market prices, to show when hypotheses had strengthened or weakened (Dreber et al., 2015).

Specialised digital platforms make it easier to implement prediction markets. On the Augur platform, for example, with an initial commitment of less than a dollar, anyone can ask a question and create a market based on a predicted outcome. Using prediction markets in STI appears more constrained by tradition than by technical infeasibility.
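The price mechanism can be illustrated with Hanson’s logarithmic market scoring rule (LMSR), a standard design for prediction markets, though not necessarily the one used on any particular platform. The liquidity parameter below is an arbitrary illustrative choice:

```python
import math

b = 10.0  # liquidity parameter: higher values mean prices move more slowly

def cost(q):
    # LMSR cost function over outstanding shares q, one entry per outcome
    return b * math.log(sum(math.exp(x / b) for x in q))

def prices(q):
    # Current prices are the market's implied probabilities (they sum to 1)
    z = sum(math.exp(x / b) for x in q)
    return [math.exp(x / b) / z for x in q]

q = [0.0, 0.0]  # shares for "finding replicates" / "does not replicate"
assert abs(prices(q)[0] - 0.5) < 1e-9  # no trades yet: 50/50

# A trader who believes the finding will replicate buys 5 shares of
# outcome 0, paying the difference in the cost function.
trade_cost = cost([q[0] + 5.0, q[1]]) - cost(q)
q[0] += 5.0
assert trade_cost > 0
assert prices(q)[0] > 0.5  # the market probability of replication rises
```

Because every trade moves the price, the running price series is exactly the kind of record that Dreber et al. (2015) suggest could show when hypotheses strengthened or weakened.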

Prediction using human-machine combinations

Human intelligence (of individuals or crowds) and machine intelligence could be combined for prediction and research. For instance, researchers at Stanford University and Unanimous AI, a California-based company, connected small groups of radiologists over the Internet using AI, and tested their ability to diagnose chest X-rays. Radiologists and algorithms together were more accurate than the unaided group. They were even more accurate than individual radiologists, and 22% more accurate than state-of-the-art AI alone (Rosenberg et al., 2018).
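The statistical intuition behind such combinations (though not the specific swarm method used in the study) can be shown with a toy simulation: each diagnostician is modelled as a noisy estimator of a true probability, and averaging several independent estimates reduces the error. All numbers are illustrative:

```python
import random

random.seed(1)
truth = 0.7  # hypothetical true probability that an X-ray shows disease

def noisy_estimate():
    # One "diagnostician": unbiased but noisy, clamped to [0, 1]
    return min(1.0, max(0.0, random.gauss(truth, 0.15)))

individual_err = combined_err = 0.0
trials = 2000
for _ in range(trials):
    estimates = [noisy_estimate() for _ in range(5)]
    individual_err += abs(estimates[0] - truth)       # one estimator alone
    combined_err += abs(sum(estimates) / 5 - truth)   # average of five

# Averaging independent estimates shrinks the mean error markedly.
assert combined_err < individual_err
```

Averaging five independent, unbiased estimates cuts the standard error by roughly the square root of five; human-machine systems add machine predictions to the pool and, in the cited study, weight and aggregate them more adaptively.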

Accurate foresight is particularly elusive when technological change is radical. One complication is that it often takes considerable time before the main applications of radical innovations emerge. After Gutenberg, for example, it took nearly a century of technical and conceptual improvements to arrive at the modern book (Somers, 2018).

Indeed, even the most knowledgeable experts frequently misjudge technological timelines. In the digital sphere, one example of such misjudgement is the 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence, a seminal event in the history of AI. The proposal stated that a significant advance in AI could be made “…if a carefully selected group of scientists work on it for a summer”. Whether using prediction markets, a human-machine approach or other methods, harnessing collective intelligence might strengthen policy foresight.

Blockchain for science, technology and innovation

One leading commentator has described blockchain as follows: “blockchain technology facilitates peer-to-peer transactions without any intermediary such as a bank or governing body…the blockchain validates and keeps a permanent public record of all transactions. This means that personal information is private and secure, while all activity is transparent and incorruptible – reconciled by mass collaboration and stored in code on a digital ledger” (Tapscott, 2015). As Chapter 5 discusses, while blockchain applications in production are still incipient, companies such as Microsoft, IBM and others now offer commercial blockchain services. Proposals to use blockchain in STI are flourishing (Box 1.5).
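The core mechanism Tapscott describes – a tamper-evident, append-only record – can be sketched in a few lines: each block stores the hash of its predecessor, so altering any past entry invalidates everything that follows. This is a bare sketch; real blockchains add consensus protocols, networking and economic incentives:

```python
import hashlib
import json

def block_hash(record, prev_hash):
    # Deterministic hash of a block's contents
    payload = json.dumps({"record": record, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(record, prev_hash):
    return {"record": record, "prev_hash": prev_hash,
            "hash": block_hash(record, prev_hash)}

def chain_is_valid(chain):
    for i, blk in enumerate(chain):
        if blk["hash"] != block_hash(blk["record"], blk["prev_hash"]):
            return False  # block contents no longer match its hash
        if i > 0 and blk["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("dataset v1 deposited", chain[-1]["hash"]))
assert chain_is_valid(chain)

chain[1]["record"] = "dataset v2 deposited"  # attempt to rewrite history
assert not chain_is_valid(chain)             # tampering is detectable
```

The "permanent public record" in Tapscott's description is this chaining: a would-be forger must recompute every subsequent hash, which the rest of the network can detect and reject.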

Box 1.5. Possible applications of blockchain in science and innovation

Recent proposals for how blockchain might benefit STI include the following:

Establishing a cryptocurrency for science. Using a cryptocurrency, publishers of scientific works could receive micro-payments as content is consumed. A science cryptocurrency could also facilitate a system of rewards for sometimes under-rewarded activities such as statistical support, exchange of lab equipment, data hosting and curation, and peer review (van Rossum, 2018). Science Matters – an open access publishing platform – will soon implement a crowdsourced peer-review process using the Ethereum blockchain. Ideally, researchers and publishers will quickly see metrics that can help expedite publication. Furthermore, for their time, reviewers will also receive cryptocurrency linked to the platform (Heaven, 2019).

Storing and sharing research data. Databases that encompass large parts of the research ecosystem are technically possible. However, the need for centralised management and ownership complicates their implementation. Data security and ease of access are just some of the concerns. In principle, the blockchain could make scalable, safe and decentralised data stores more practical. It could also enhance the reproducibility of science by automatically tracking and recording work such as statistical analysis, while reducing the risk of data fraud. In addition, metrics could be developed for activities that are not well recognised, such as data development, because they could be clearly attributed (van Rossum, 2018).

Enabling data use. Data sharing can be difficult for several reasons, including institutional and technical issues, as well as regulations. Institutional obstacles include bureaucratic processes that hinder permission to share data. Even when a data-sharing agreement (DSA) is reached, data holders still worry about inappropriate use of their data, or about accidental sharing of client data. Furthermore, on a technical level, some datasets are just too big to share easily. For instance, 100 human genomes could consume 30 000 000 MB (around 30 terabytes). Uncertainty about the provenance of data can also hinder data sharing or purchase. In addition, regulators might increasingly require that AI systems demonstrate auditable data use. In this environment, efforts are underway to link blockchain and AI in a system that gives data holders the benefits of data collaboration, but with full control and verifiable audit. Ocean Protocol, an open-source not-for-profit foundation, is pioneering such a system. Under one use case, data are neither shared nor copied. Instead, algorithms go to the data for training purposes, with all work on the data recorded in a distributed ledger (Chhabra, 2018).
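The pattern of algorithms travelling to the data can be caricatured in a few lines. The sketch below is purely illustrative – the names (DataHolder, run_job) are invented for this example and are not Ocean Protocol's actual API, and a plain Python list stands in for the distributed ledger. The point is the division of custody: the consumer submits code, receives only an aggregate result, and every job leaves an auditable record.

```python
import hashlib
import json

class DataHolder:
    """Holds a private dataset; runs submitted algorithms locally and
    logs each job, so the raw data never leave the holder's custody."""

    def __init__(self, dataset):
        self._dataset = dataset      # never returned to callers
        self.audit_log = []          # stand-in for a distributed ledger

    def run_job(self, algorithm_name, algorithm):
        result = algorithm(self._dataset)       # compute goes to the data
        digest = hashlib.sha256(
            json.dumps(result, sort_keys=True).encode("utf-8")).hexdigest()
        self.audit_log.append(
            {"algorithm": algorithm_name, "result_hash": digest})
        return result                           # only the aggregate leaves

holder = DataHolder([{"age": 34}, {"age": 51}, {"age": 29}])

# A consumer submits an algorithm; it gets back a summary, not the raw rows.
mean_age = holder.run_job(
    "mean_age", lambda rows: sum(r["age"] for r in rows) / len(rows))
# mean_age is 38.0; holder.audit_log now records one completed job.
```

In a real deployment, the audit log would be written to a tamper-evident ledger shared among participants, and access control would be enforced cryptographically rather than by convention.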

Making ownership of creative material transparent. Commercial services now offer secure attribution of ownership of creative works by providing a blockchain-verified cryptographic ID (Stankovic, 2018). Launched in 2018, Artifacts is a platform for publishing any material that researchers consider worth sharing. This ranges from data sets to single observations, hypotheses and negative research results, all logged to the blockchain. Artifacts aims to disseminate more scientific information, securely and in citable ways, more quickly than occurs with peer-reviewed written articles (Heaven, 2019).

Broadening access to supercomputing. Golem aims to create a global supercomputer, accessible to anyone, using processing power from idle computers and data centres around the world. Users would rent processing time from each other, and rely on blockchain to track computations and payments, and to keep data secure (Golem, n.d.).

Technical and policy challenges such as interoperability must be resolved before blockchain in STI can be widely deployed. Without consensus on the protocols used in blockchain and other distributed ledger technologies (DLTs), use will be limited. One effort towards consensus, the Hyperledger project (hosted by the Linux Foundation, with IBM among its main contributors), seeks an interoperable architecture for DLTs. Technical limits also exist on the volume of transactions that blockchain networks can process. However, the scalability challenge is less severe for so-called permissioned blockchain applications – where participation in the network is controlled. Permissioned blockchain networks are the most likely in STI, because they will generally be used to help a particular professional community achieve some policy-relevant outcome. Mechanisms to ensure the veracity of information in a blockchain registry are lacking (although efforts are underway to establish the veracity of the identity of those feeding information into the blockchain).5 Agreement is also lacking on how to terminate a so-called smart contract – a contract that executes itself, enabled by blockchain – and how to treat smart contracts that contain errors or illegal instructions. The tamper-proof design of the blockchain could also be problematic if the system prevents corrections, even when necessary (Stankovic, 2018).

Using social media to spread innovation

People’s propensity to innovate involves an element of imitation. Research shows that children who grow up in areas with more inventors are more likely to become inventors. Greater exposure to innovation among minorities and children from low-income families might increase the prevalence of innovation. Among other measures, social media could provide a channel for targeted interventions (Bell et al., 2019).

Conclusion

Scientific progress cannot be taken for granted. There are many areas of science – fundamental to human well-being – where knowledge is still surprisingly limited. For example, the process by which E. coli (a bacterium) consumes sugar for energy is one of the most basic biological functions, and also important for industry. But how the process operates has not been fully established, even though research on the subject was first published over 70 years ago. Uncertainty also exists on many critical questions in climate science. To name a few: what is the tipping point for a reversal of the circulation of cold and warm oceanic waters? When could changes become irreversible (e.g. melting of the West Antarctic or Greenland ice sheets)? What is the quantitative role of plants and microbes in the carbon cycle?

Progress in STI is also necessary because, despite striking advances in technology, the pace of innovation is insufficient in some crucial fields. For instance, today’s leading energy generation technologies were mostly developed or demonstrated over a century ago. The combustion turbine was invented in 1791, the fuel cell in 1842, the hydro-electric turbine in 1878 and the solar photo-voltaic cell in 1883. Even the first nuclear power plant began operating over 60 years ago. The performance of all these technologies has, of course, improved. But truly disruptive breakthroughs have not occurred (Webber et al., 2013). Indeed, some high-profile commentators from academia and industry have gone further, claiming (debatably) that a more general innovation plateau has been reached.

Furthermore, efficient and effective policies for STI are ever more important in countries where rapid population ageing is likely to constrain discretionary public spending over the long run.

For these and other reasons examined in this publication, utilising the full potential of digital technology in STI is important.


Agrafioti, F. (2018), “How to set up an AI R&D lab”, Harvard Business Review, 20 November,

Almenberg, J., K. Kittlitz and T. Pfeiffer (2009), “An experiment on prediction markets in science”, PLOS One, 30 December, PLOS, San Francisco,

Arbesman, S. (2016), Overcomplicated: Technology at the Limits of Comprehension, Penguin Random House, New York.

Atkinson, R.D. and S. Ezell (2019), “The Manufacturing Evolution: How AI will Transform Manufacturing and the Workforce of the Future”, Information Technology and Innovation Foundation, Washington DC,

Bell, A.M. et al. (2019), “Who becomes an inventor in America? The importance of exposure to innovation”, NBER Working Paper No. 24062, issued in November 2017, revised in January 2019, National Bureau of Economic Research, Cambridge, Massachusetts,

Bergstein, B. (2019), “This is why AI has yet to reshape most businesses”, MIT Technology Review, 13 February, Massachusetts Institute of Technology, Cambridge,

Biles, S. (28 December 2018), “Tiny computers could transform our lives”, Scientific American Observations blog,

Bloom, N. et al. (2017), “Are ideas getting harder to find?”, NBER Working Paper No. 23782, September, National Bureau of Economic Research, Cambridge, Massachusetts,

Bollen, J. (2018), “Who would you share your funding with?”, Nature, Vol. 560, Nature Research, Springer, p. 143,

Bollen, J. et al. (2014), “From funding agencies to scientific agency: Collective allocation of science funding as an alternative to peer review”, EMBO Reports, Vol. 15/1, Wiley Online Library, pp. 131-133,

Brooks, R. (15 July 2018), “Steps towards superintelligence IV: Things to work on now”, Robots, AI and other stuff blog,

Butler, K.T. et al. (2018), “Machine learning for molecular and materials science”, Nature, Vol. 559/7715, Nature Research, Springer, pp. 547-555,

Castelvecchi, D. (2019), “Forget everything you know about 3-D printing – the ‘replicator’ is here”, Nature, Vol. 566, Nature Research, Springer, p. 17,

Chhabra, C.S. (2018), “New data and the AI economy”, presentation at the workshop on digital technology for science and innovation, Oslo, 5-6 November,

Corea, F. (2017), “The convergence of AI and blockchain: What’s the deal?”, Medium, 1 December,

Craig, E. (2018), “An overview of the possibilities and challenges in delivering STEM education using AR and VR”, presentation at the workshop on digital technology for science and innovation, Oslo, 5-6 November,

Damer, B. (2018), “The origin of life and the engine of emergence”, presentation at the workshop on digital technology for science and innovation, Oslo, 5-6 November,

de León, C. (2019), “Does it pay to be a writer?”, The New York Times, 5 January,

Dohler, M. (2017), “Global reach: Will the tactile Internet globalize your skill set?”, ComSoc Technology News, 23 January,

Dreber, A. et al. (2019), “Using prediction markets to predict the outcomes in DARPA’s next generation social science program”, 11 February,

Dreber, A. et al. (2015), “Using prediction markets to estimate the reproducibility of scientific research”, Proceedings of the National Academy of Sciences, 15 December, Vol. 112/50, United States National Academy of Sciences, Washington, DC, pp. 15343-15347,

Dyson, F.J. (1998), Imagined Worlds, Harvard University Press, Cambridge, Massachusetts.

Evans, B. (2019), “5G: If you build it, we will fit it”, 16 January,

Filippetti, A. and A. Peyrache (2013), “Labour productivity and technology gap in European regions: A conditional frontier approach”, Regional Studies, Vol. 49/4, 14 June, Regional Studies Association, Brighton, pp. 532-554,

Fischer, E. (2018), “How far we’ve come: 70 years of weather technology advances”, CBS Boston, 22 May,

Freedman, L.P., I.M. Cockburn and T.S. Simcoe (2015), “The economics of reproducibility in preclinical research”, PLOS Biology, 9 June, PLOS, San Francisco,

Fussell, S.R. et al. (2008), “How people anthropomorphize robots”, presentation at the international conference on human robot interaction, 12-15 March, Amsterdam, https://doi.org/10.1145/1349822.1349842.

Ganong, P. and D. Shoag (2015), “Why has regional income convergence in the US declined?”,

Golem (n.d.), Golem website, (accessed 20 June 2019).

Graves, N., A.G. Barnett and P. Clarke (2011), “Funding grant proposals for scientific research: Retrospective analysis of scores by members of grant review panel”, The BMJ, Vol. 343, d4797, BMJ, London,

Harari, Y. (2018), “Why technology favors tyranny”, The Atlantic, October,

Harbert, T. (2013), “Supercharging patent lawyers with AI: How Silicon Valley’s Lex Machina is blending AI and data analytics to radically alter patent litigation”, IEEE Spectrum, 30 October, Institute of Electrical and Electronics Engineers, New York,

Heaven, D. (2019), “Bitcoin for the biological literature”, Nature, Vol. 566, Nature Research, Springer, pp. 141-142,

Herbert, D.L., A.G. Barnett and N. Graves (2013), “Australia’s grant system wastes time”, Nature, Vol. 495, 21 March, Nature Research, Springer, p. 314,

Hernandez, D. (2013), “Homeless, Unemployed and Surviving on Bitcoins”, WIRED, 20 September,

Hey, T., S. Tansley and K. Tolle (2009), “The fourth paradigm: Data-intensive scientific discovery”, Microsoft Research, Redmond, United States.

Hicks, M. (2017), Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, MIT Press, Cambridge, United States.

Hoos, H.H. (2018), “Democratisation of AI by automating the creation of AI systems”, presentation at digital technology for science and innovation workshop, Oslo, 5-6 November.

Hoos, H.H. (2012), “Programming by optimization”, Communications of the ACM, Vol. 55/2, Association for Computing Machinery, New York, pp. 70-80.

Hutson, M. (2017), “The future of AI depends on a huge workforce of human teachers”, Bloomberg Businessweek, 7 September,

Jordan, M. (2018), “Artificial intelligence – The revolution hasn’t happened yet”, Medium, 19 April,

Kelly, K. (2013), “The post-productive economy”, The Technium,

Kharif, O. (2017), “The bitcoin whales: 1,000 people who own 40 percent of the market”, Bloomberg Businessweek, 8 December,

King, R.D. and S. Roberts (2018), “Artificial intelligence and machine learning in science”, in OECD Science, Technology and Innovation Outlook 2018, OECD Publishing, Paris,

Knight, W. (2018), “A robot scientist will dream up new materials to advance computing and fight pollution”, MIT Technology Review, 7 November, Massachusetts Institute of Technology, Cambridge,

Knight, W. (2017), “Google reveals a powerful new AI chip and computer,” MIT Technology Review, 17 May, Massachusetts Institute of Technology, Cambridge,

Mantha, Y. and S. Hudson (2018), “Estimating the gender ratio of AI researchers around the world”, Medium, 17 August,

Metz, C. (2019), “‘Businesses will not be able to hide’: Spy satellites will give edge from above”, 24 January, The New York Times,

Moretti, E. (2012), The New Geography of Jobs, Houghton, Mifflin, Harcourt Publishing, New York.

Munafo, M.R. et al. (2015), “Using prediction markets to forecast research evaluations”, Royal Society Open Science, Vol. 2/10, Royal Society, London,

Musib, M. et al. (2017), “Artificial intelligence in research”, Science, Vol. 357/6346, 7 July, American Association for the Advancement of Science, Washington, DC, pp. 28-30,

Nesse, R. (2014), “The Fragility of Complex Systems”, in Brockman, J. (ed.), What Should We Be Worried About?, Harper Perennial, New York, London, Toronto, Sydney, New Delhi, Auckland.

OECD (2019), Going Digital: Shaping Policies, Improving Lives, OECD Publishing, Paris,

OECD (2018a), OECD Science, Technology and Innovation Outlook 2018: Adapting to Technological and Societal Disruption, OECD Publishing, Paris,

OECD (2018b), OECD Reviews of Innovation Policy: Austria 2018, OECD Reviews of Innovation Policy, OECD Publishing, Paris,

OECD (2017), OECD Science, Technology and Industry Scoreboard 2017: The Digital Transformation, OECD Publishing, Paris,

OECD (2015), “Daejeon Declaration on Science, Technology, and Innovation Policies for the Global and Digital Age”, webpage, (accessed 20 June 2019).

Pratt, G.A. (2015), “Is a Cambrian explosion coming for robotics?”, Journal of Economic Perspectives, Vol. 29/3, Summer, American Economic Association, Pittsburgh, pp. 51-60.

Princeton University (2018), “Researchers crowdsource brain mapping with gamers, discover six new neuron types”, Medical Express, 17 May,

Ransbotham, S. (21 May 2018), “Using AI to create humanlike computers is a shortsighted goal”, MIT Sloan Management Review blog,

Reich, J. and J.A. Ruipérez-Valiente (2019), “The MOOC Pivot: What happened to disruptive transformation of education?”, Science, 11 January, Vol. 363/6423, American Association for the Advancement of Science, Washington, DC, pp. 130-131,

Rosenberg, L. et al. (2018), “Artificial swarm intelligence employed to amplify diagnostic accuracy in radiology”, presentation at the annual information technology, electronics and mobile communication conference, Vancouver, 2-4 November.

Sankaran, V. (2018), “Meet the people busting scientists who fake images in research papers”, The Next Web, 6 November,

Scholz, T. and N. Schneider (2019), Ours to Hack and to Own: The Rise of Platform Cooperativism, A New Vision for the Future of Work and a Fairer Internet, OR Books, New York and London.

Shapira, P. and J. Youtie (2017), “The next production revolution and institutions for technology diffusion”, in The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris,

Somers, J. (2018), “The scientific paper is obsolete: Here’s what’s next”, The Atlantic, 5 April,

Somers, J. (2017), “The coming software apocalypse”, The Atlantic, 26 September,

Stankovic, M. (2018), “Using blockchain to facilitate innovation in the creative economy”, presentation at the digital technology for science and innovation workshop, Oslo, 5-6 November.

Stoet, G. (2016), “Maths anxiety is creating a shortage of young scientists … here’s a solution”, The Conversation, 1 June,

Susskind, J. (2018), Future Politics, Oxford University Press, United Kingdom.

Tapscott, D. (2015), “Blockchain revolution: How the technology behind Bitcoin is changing money, business and the world”, Don Tapscott, (accessed 20 June 2019).

Tegmark, M. (2017), Life 3.0: Being Human in the Age of Artificial Intelligence, First Vintage Books, New York.

Valiant, L. (2013), Probably Approximately Correct, Basic Books, New York.

van Rossum, J. (2018), “The blockchain and its potential for science and academic publishing”, presentation at the digital technology for science and innovation workshop, Oslo, 5-6 November.

Webber, M.E., R.D. Duncan and M.S. Gonzalez (2013), “Four technologies and a conundrum: The glacial pace of energy innovation”, Issues in Science and Technology, Vol. xxix/2, Winter, National Academy of Sciences, National Academy of Engineering, Institute of Medicine, University of Texas at Dallas,

Winick, E. (2018), “A cyber-skills shortage means students are being recruited to fight off hackers”, MIT Technology Review, 18 October, Massachusetts Institute of Technology, Cambridge,

Worstall, T. (2016), “Are ideas getting harder to find? Not really, no, unless you measure by ideas already found”, Forbes, 20 December,

Youn, H. et al. (2015), “Invention as a combinatorial process: Evidence from US patents”, Journal of the Royal Society Interface, 6 May, Royal Society, London,

Zubașcu, F. (2017), “‘Digital revolution’ will underpin next EU research programme, says Commissioner”, 11 December, (accessed 20 June 2019).


← 1. Last year, more than 1.2 million new papers were published in the biomedical sciences alone, bringing the total number of peer-reviewed biomedical papers to over 26 million. However, the average scientist reads only about 250 papers a year.

← 2. In this connection, one recent study argued that gender stereotyping was instrumental in the United Kingdom losing its globally pre-eminent position in computing after World War II (Hicks, 2017).

← 3. Professor Li’s full remarks at the 2017 Global StartupGrind Conference can be found here:

← 4. Tegmark (2017) provides many similar examples.

← 5. See, for example, Authenteq, which uses DLTs to provide digital identity verification.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2020

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at