Are ideas getting harder to find? A short review of the evidence

M. Clancy
Institute for Progress
United States

Some believe that technological progress between the 1970s and 2020s was significantly slower than during the preceding 50 years. In recent work, various well-known economists have drawn on evidence from a broad range of fields – computer chips, agriculture, health, national productivity statistics and firm-level data – to argue that ideas are becoming harder to find. This essay reviews the flurry of work generated in response to those claims. It finds, despite the intensity of the debate, considerable consensus about what the data show. Measured in a particular way, a constant supply of research effort does not lead to a constant proportional increase in various proxies for technological capabilities (e.g. doubling the number of transistors on an integrated circuit every 2 years, or doubling US corn yields every 20 years). Instead, a constant proportional increase in metrics of interest has tended to require an increasing supply of research effort.

Bloom et al. (2020) provoked much discussion. At the heart of the dispute is whether their choice of metric for measuring changes in the productivity of research is appropriate. Was the focus on such metrics relevant for assessing whether ideas really are getting “harder to find”? Some argue that so long as a constant supply of research effort – e.g. the same number of researchers working on the same problem area year after year – leads to a constant absolute increase in some technology metric, then ideas are not getting harder to find.

To illustrate the disagreement, suppose US corn yields are used as the technology metric – an example Bloom et al. (2020) does use, as will be discussed shortly. Between 1980 and 2008, US agricultural research effort remained roughly constant and annual corn yields increased by a fairly consistent 1.5 bushels per acre per year. Since yields rose from roughly 100 bushels per acre in 1980 to roughly 150 bushels per acre by 2008, a consistent annual increase of 1.5 bushels per acre implies the growth rate slowed from 1.5% per year to 1.0% per year (see the author’s essay on agricultural productivity in this book).

Bloom et al. (2020) would argue this is evidence that ideas are getting harder to find: constant research effort led to declining growth rates in the technology metric. However, others disagree since constant research effort still led to constant absolute increases in crop yields. The actual calculations are not quite so simple, but this example still illustrates one point of disagreement about how to interpret the data.
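The arithmetic behind this disagreement can be made explicit. A minimal sketch, using the rounded figures above:

```python
# Constant absolute progress in US corn yields (rounded figures from the text).
start_yield = 100.0   # bushels per acre in 1980
annual_gain = 1.5     # bushels per acre per year, roughly constant

# The same absolute gain implies a shrinking proportional growth rate.
growth_1980 = annual_gain / start_yield   # 1.5 / 100 = 1.5% per year
growth_2008 = annual_gain / 150.0         # 1.5 / 150 = 1.0% per year

print(f"{growth_1980:.1%} per year in 1980, {growth_2008:.1%} per year by 2008")
```

Whether this pattern counts as ideas "getting harder to find" is exactly what the two camps dispute: the absolute gain is constant, but the proportional gain declines.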

For their part, the reason why Bloom and his co-authors frame the question in the way they do derives from long-standing models of economic growth. These models show that constant exponential growth in gross domestic product (GDP) per capita (i.e. the same percentage growth every year) can be achieved with constant exponential growth in technology (defined and measured in different ways, depending on the theoretical framework – Acemoglu, 2009).

Bloom et al. (2020) make a point often missed by readers unfamiliar with the literature on economic growth. Namely, they do not actually assume that a constant (i.e. unchanging) supply of researchers must deliver constant exponential increases in technological capability. Their paper instead uses a subtly different input measure: the effective number of researchers.

The effective number of researchers is computed by dividing some measure of total research and development (R&D) spending by the wage rate for scientists in the relevant country or industry. This is the number of scientists who could be employed if R&D was spent exclusively on the hiring of scientists. It is not an actual headcount of the number of scientists because R&D spending is also spent on research equipment, materials and other non-labour inputs.
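The construction can be made concrete by reproducing the worked example from note 1 below (the figures are the footnote’s illustrative ones, not real data):

```python
# Effective number of researchers: R&D spending divided by the scientist wage.
def effective_researchers(rd_spending, scientist_wage):
    return rd_spending / scientist_wage

# 1950: USD 20 million of R&D at a USD 20,000 inflation-adjusted salary.
# 2000: USD 80 million of R&D at a USD 80,000 inflation-adjusted salary.
eff_1950 = effective_researchers(20_000_000, 20_000)  # 1 000
eff_2000 = effective_researchers(80_000_000, 80_000)  # 1 000

# If only half of R&D spending goes to labour, the actual headcount is lower,
# and the remaining spending buys non-labour inputs (equipment, materials).
actual_1950 = 0.5 * 20_000_000 / 20_000   # 500 scientists
actual_2000 = 0.5 * 80_000_000 / 80_000   # 500 scientists

print(eff_1950, eff_2000, actual_1950, actual_2000)
```

The effective measure (1 000 in both years) exceeds the headcount (500) precisely because it folds in the growing non-labour resources each scientist can draw on.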

More precisely, according to Bloom and co-authors, if ideas are not getting harder to find, then a constant effective number of researchers should be able to generate exponential increases in technological capability.1 That said, it is not necessary to adopt Bloom’s framing of the question, and many do not. The next section, then, looks at what some of the data reveal.

To make any headway, some measure of “ideas” is needed. Many papers track the evolution of metrics related to specific technologies and compare them to measures of the inputs to innovation. This section looks at several potential metrics.

Bloom et al. (2020) look at Moore’s law, which holds that the number of transistors that can fit on an integrated circuit doubles approximately every two years. While this doubling has held constant over the last half century, Bloom et al. (2020) point out that research effort devoted to achieving these doublings has grown nearly twentyfold.

Kressel’s essay in this book largely agrees that doubling the number of transistors on an integrated circuit has gotten harder. He points out that few expect this to continue at the same pace. However, he disagrees on the relevance of this metric as a measure of technological progress. While shrinking transistors is one way to enhance the performance of integrated circuits, it is not the only way. For example, much research effort is dedicated to improving energy efficiency in integrated circuits. Kressel points to advances on a range of parameters, besides transistor density, to argue that overall progress remains robust.

Bloom et al. (2020) also show that growth in the yields of a variety of agricultural crops has only been sustained by a substantial expansion of agricultural R&D effort. In another essay in this book, Clancy reviews complementary evidence that supports the general thrust of this conclusion. More theoretically grounded (but empirically challenging) measures of technological progress in US agriculture indicate a slowdown in the rate of progress that extends beyond yields. This evidence suggests the slowdown cannot be easily pinned on confounding factors such as a changing climate or pest burdens.

When studying health, Bloom et al. (2020) diverge from their earlier metrics, choosing to measure research effort by the number of scientific articles or clinical trials. They also measure improvements in health outcomes by the number of years of life saved. They again document that a constant supply of research effort results in steadily lower incremental improvements in a selection of key health outcomes. Scannell, in this report, provides a variety of complementary types of evidence. He shows, for instance, that the number of new molecular entities discovered per US dollar has fallen dramatically, as has the financial return on investment in health R&D in general.

Others argue the focus on individual technologies misses the point. Guzey and Rischel (2021), for example, argue that technological progress is mostly about the creation of entirely new technological categories, not just improvement of existing technologies. It may well be that for any given technology, improvements get harder to eke out. However, this is more than counterbalanced by the creation of new technologies. Rather than breeding faster horses, we invent automobiles. Or more broadly, we invent telegrams, telephones and the Internet so that we do not need to travel at all to communicate. If the creation of new technologies is the main way technological progress occurs, then metrics of this progress that help capture the creation of these entirely new technologies should be considered.

Bloom et al. (2020) look at a few alternative metrics that might help answer the concern that technological progress in part involves the generation of entirely new technologies. For example, health outcomes (years of life saved) derive in part from successive waves of new forms of technology: antibiotics, chemotherapy, gene therapy, mRNA vaccines, etc. Each of these medical interventions may have diminishing returns to R&D. However, medical R&D overall could remain just as productive by successively hopping from mature to new and emerging technologies. But here, too, Bloom et al. (2020) show it takes increasing research effort to save a year of life. In this case, effort is measured by the number of clinical trials or biomedical articles.

An even broader measure of the fruits of R&D can be computed for private sector companies. After all, they invest in R&D presumably because they believe it will help their firms become more profitable. Profit is agnostic as to the underlying technology; if firms find it more profitable to invent new categories of technology, then they will seek to do that.

Profit itself can be challenging to measure properly. Therefore, Bloom et al. (2020) examine a host of related metrics: sales, number of employees, sales per employee and market capitalisation. They find here, too, that on average it takes more and more R&D effort by firms to sustain constant proportional growth in any one of these profit proxies.

Boeing and Hünermund, in this volume, examine the same metrics for the People’s Republic of China and Germany. They confirm that the Bloom et al. (2020) findings are not limited to the United States; it takes more R&D effort for firms to proportionally increase proxies for profit in these two countries as well.

That said, a lot more than R&D affects profit, which presents a conceptual challenge. A firm entering a larger market, for example, might have higher profits, even though its underlying technologies are the same.

An alternative measure of broad technological progress is total factor productivity. The idea here is that, rather than trying to directly observe and measure technology, one should measure economic outputs (for example, total GDP) and inputs (for example, labour and capital), and then examine how the ability to produce more outputs from the same or fewer inputs changes over time. Economists assume that one driver of an economy or firm being able to squeeze more outputs from each unit of input is technological progress, which can include the sequential creation of new technological categories.

Bloom et al. (2020) do use data on total factor productivity for the US economy, going back to the 1930s, and find that more and more R&D effort is required to keep total factor productivity growing at a constant exponential rate. Miyagawa and Ishikawa (2019) perform a similar exercise over a much shorter time frame (1996-2015) but over a wider set of countries and industries. Again, R&D productivity, framed in the way Bloom et al. (2020) prefer, has generally declined, although with some exceptions.

Total factor productivity has its own problems as a measure. Like profit, it can change for reasons that have little connection to technological progress. Vollrath (2019), for example, decomposes the decline of the growth rate of total factor productivity in the United States since 2000 into several categories with little or no relation to technological progress. These include rising consumer spending on services and declining geographic mobility of the workforce. Likewise, total factor productivity is a statistical estimate of how well inputs can be translated into outputs. It thus requires good data on both inputs and outputs. Since most firms use a complicated mix of inputs and often produce a complicated mix of outputs, measurement is challenging.

Another way to tackle the question of research productivity is to look more directly at “ideas” themselves – or at least something closer to ideas than the technologies derived from them. In this connection, another line of work looks at scientific research specifically rather than technologies. Several metrics are noted below.

A seemingly natural place to start is with trends in the production of scientific publications. While the number of scientific publications produced each year has grown rapidly over the last century, it turns out this is mostly driven by an equally rapid increase in the number of authors (Wang and Barabási, 2021). The number of papers produced per author was remarkably stable over the 20th century and has begun to increase slightly since. However, this does not necessarily imply there has been no slowdown in scientific research productivity, as papers vary substantially in their contribution to knowledge. It could be that papers today contribute less than in the past. Indeed, several studies suggest this is the case.

Cauwels and Sornette (2020) propose to count exceptional ideas by counting exceptional people, since the discoverers of new ideas tend to be recognised and celebrated. Using the Krebs Encyclopedia of Scientific Principles and Asimov’s Chronology of Science and Discovery, they assemble a count of exceptional scientists in physics and the life sciences going back to 1750. They then look to see if this number has been rising or falling, especially as a share of the total population. They hypothesise that ideas are becoming harder to find if a bigger population is failing to discover new ideas (and thereby generate celebrated scientists). They find the number of exceptional scientists as a share of the population increased from 1750 to roughly 1950 but has been falling since.

There are two concerns with their analysis. First, it may take time for the importance of new ideas to be recognised. Their sources end in 2008 so they will miss any ideas developed before that date but only recognised for their importance afterwards. This creates the appearance of a decline in the number of scientists towards the end of the sample (Cauwels and Sornette attempt to statistically correct for this bias). Second, science is increasingly a team endeavour (Wuchty, Jones and Uzzi, 2007), rather than an individual pursuit. This complicates efforts to assign individuals to ideas.

Rather than counting exceptional people, it may also be possible to identify trends in the production of exceptional ideas and discoveries. For example, one could examine trends in characteristics of the Nobel Prize for physics, chemistry and medicine. At least in theory, the Nobel Prize recognises the most important discoveries in the respective fields, and has been awarded for long enough to observe long-run trends.

One simple measure of scientific progress is the share of awards that go to discoveries described in papers published in the preceding 20 years. Across all fields, this has fallen from an average of nearly 90% prior to the 1970s to closer to 50% today (calculations based on Li et al., 2019). Alternatively, Collison and Nielsen (2018) survey scientists and ask them to select the more important discovery from pairs of randomly selected Nobel Prizes. Since the 1940s, there is no clear evidence that scientists prefer discoveries made in more recent decades.

Nobel Prizes have their own idiosyncrasies that make them less than ideal for measuring the rate of progress. Another potential measurement approach, then, is to look at features of scientific publications, such as their citations. Citations also provide a potentially informative window into how the scientific community receives new scientific work. Diverse citation-based metrics suggest a slowdown in scientific progress. Chu and Evans (2021) document that as fields have grown larger, the turnover among top-cited papers has slowed. They argue this slowdown shows the scientific canon is increasingly ossified.

Another indicator focuses on research papers and the share of academic citations that go to recent work. Larivière et al. (2007) and Cui, Wu and Evans (2022) collectively show a steady decline since the 1960s in the share of citations to recent papers (those published in the preceding five or ten years). Patents also increasingly cite older scientific work (Park et al., 2022). One possible explanation for these trends is that more recent work is proving less useful to today’s scientists and inventors compared to earlier work. Finally, Park et al. (2022) show that a citation-based measure of a paper’s level of disruption also indicates the typical paper has become steadily less disruptive over time.

Alas, citations can also be biased by a range of factors and likely measure scientific impact with (possibly substantial) noise. Milojević (see her essay in this book) provides an alternative measure of the rate of progress in science. She counts the number of unique phrases in paper titles as a way of gauging the number of distinct research concepts a field is investigating. In every year, for each field, she samples the same number of phrases from paper titles. A decline in the number of unique phrases is taken to indicate that more papers are working on the same ideas, which may indicate that progress on any one idea has gotten harder.

For this book, Milojević applied the method to the entire Web of Science database covering 1900 to 2020. It encompasses the literature for science as a whole, as well as individual research fields. She finds the number of unique phrases in constant samples of articles has grown (at variable rates) since 1900. Since 2005, however, this trend has begun to reverse. This 2005-20 period seems to be the longest stretch in history during which the number of ideas explored by science has declined.
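The sampling logic behind this measure can be sketched as follows. This is an illustrative reimplementation, not Milojević’s actual code: the phrase extraction here naively splits titles into single lowercase words, whereas the real method extracts multi-word phrases from the Web of Science corpus.

```python
import random

def unique_phrase_counts(titles_by_year, sample_size, seed=0):
    """For each year, draw a constant-size sample of title phrases and
    count how many distinct phrases appear. A shrinking count over time
    suggests more papers are clustering on the same research concepts.

    Phrase extraction is a naive stand-in (single lowercase words)."""
    rng = random.Random(seed)
    counts = {}
    for year, titles in sorted(titles_by_year.items()):
        phrases = [word.lower() for title in titles for word in title.split()]
        if len(phrases) < sample_size:
            continue  # not enough material for a constant-size sample
        sample = rng.sample(phrases, sample_size)
        counts[year] = len(set(sample))
    return counts

# Toy usage: a "field" whose titles reuse the same few concepts.
toy = {2004: ["gene therapy trial outcomes"] * 30,
       2005: ["gene therapy trial outcomes", "mRNA vaccine design"] * 15}
print(unique_phrase_counts(toy, sample_size=20))
```

Holding the sample size constant across years is what makes the counts comparable: without it, fields producing more papers would mechanically show more unique phrases.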

To sum up, across a range of approaches and measures, there are few exceptions to the general finding that constant proportional progress in technology requires ever-larger investments of research effort. In several of the (uncommon) exceptions to this rule, the exceptional pattern held in the past but has been absent for at least a decade.

Does this mean ideas really are literally getting harder to find? Not necessarily, for a few reasons. To begin with, none of the papers referenced here has tried to directly measure “ideas” themselves, only proxies. Indeed, it is not clear how one would begin to count actual ideas in the first place.

It does at least seem clear that exponential growth in most things requires increasing effort directed at improvement. Why this might be the case is also an important question. It may simply be a constraint imposed on research by the way nature works – insights, discoveries and applications yielding a proportional increase simply get harder and harder to achieve.

Conversely, no such constraint on research may exist and the decline in research productivity could be driven by changes in the institutions supporting research. Several possible factors have been suggested on this front. Arora et al. (2019), for example, look at the retreat of the private sector from basic science. Cui, Wu and Evans (2022) study the impact of ageing on the scientific labour force. Meanwhile, Bhattacharya and Packalen (2020) blame the increasing importance of citations as a measure of scientists’ output. However, this is not an exhaustive list of posited explanations.

If, for whatever reason, research productivity is falling, has technology stagnated since the 1970s? Again, not necessarily: declining R&D productivity need not slow technological progress, because it has been at least partially counterbalanced by increased R&D effort. If ideas are getting harder to find, society also seems to be trying harder to find them.


Acemoglu, D. (2009), Introduction to Economic Growth, Princeton University Press.

Arora, A. et al. (2019), “The changing structure of American innovation: Some cautionary remarks for economic growth”, in Innovation Policy and the Economy, Lerner, J. and S. Stern (eds.), Vol. 20, University of Chicago Press.

Bhattacharya, J. and M. Packalen (2020), “Stagnation and scientific incentives”, Working Paper, No. 26752, National Bureau of Economic Research, Cambridge, MA.

Bloom, N. et al. (2020), “Are ideas getting harder to find?”, American Economic Review, Vol. 110/4, pp. 1104-1144.

Cauwels, P. and D. Sornette (2020), “Are ‘flow of ideas’ and ‘research productivity’ in secular decline?”, Research Paper, No. 20-90, Swiss Finance Institute, Zurich.

Chu, J.S.G. and J.A. Evans (2021), “Slowed canonical progress in large fields of science”, PNAS, Vol. 118/41, p. e2021636118.

Collison, P. and M. Nielsen (2018), “Science is getting less bang for its buck”, The Atlantic, 16 November.

Cowen, T. (2011), The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better, Dutton, New York.

Cui, H., L. Wu and J.A. Evans (2022), “Aging scientists and slowed advance”, arXiv, 2202.04044.

Gordon, R. (2017), The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War, Princeton University Press.

Guzey, A. and E. Rischel (2021), “Issues with Bloom et al.’s ‘Are ideas getting harder to find?’ and why total factor productivity should never be used as a measure of innovation”, webpage (accessed 28 November 2022).

Larivière, V. et al. (2007), “Long-term patterns in the aging of the scientific literature, 1900–2004”, in Proceedings of ISSI 2007, Torres-Salinas, D. and H.F. Moed (eds.).

Li, J. et al. (2019), “A dataset of publication records for Nobel Laureates”, Scientific Data, Vol. 6/33.

Miyagawa, T. and T. Ishikawa (2019), “On the decline of R&D efficiency”, Discussion Paper, No. 19052, Research Institute of Economy, Trade and Industry, Tokyo.

Park, M. et al. (2022), “The decline of disruptive science and technology”, arXiv, 2106.11184.

Vollrath, D. (2019), Fully Grown: Why a Stagnant Economy is a Sign of Success, Chicago University Press.

Wang, D. and A. Barabási (2021), The Science of Science, Cambridge University Press.

Wuchty, S., B.F. Jones and B. Uzzi (2007), “The increasing dominance of teams in production of knowledge”, Science, Vol. 316/5827, pp. 1036-1039.


← 1. Being precise about these concepts matters. Based on economic theories, measuring research effort in terms of the effective number of researchers is meant to recognise that a larger economy can do more things than a smaller one. To illustrate the intuition, suppose (inflation-adjusted) annual salaries for US scientists in 1950 are USD 20 000 and USD 80 000 in 2000. If R&D spending in a given industry is USD 20 million in 1950 and USD 80 million in 2000, then in each year the effective number of scientists is 1 000, which is obtained by dividing R&D spending by annual salaries. However, suppose only half of R&D spending is actually spent on labour in each year. Then the number of research scientists hired by this industry is 500 in both 1950 and 2000. Suppose the other half of R&D spending is spent on non-labour research inputs. Even though the effective and actual number of scientists is the same in 1950 and 2000, USD 10 million is spent in 1950 on non-labour research inputs, and USD 40 million is spent on non-labour inputs in 2000. In other words, the scientists working in the year 2000 have access to many more non-labour research resources than the ones working in 1950. Measuring research in terms of the number of effective researchers (1 000 in each year, in this example) is meant to reflect that economic growth should allow scientists to bring more resources to bear on problems over time. The key point of Bloom et al. (2020) is to document that the increased research productivity of scientists over time, as they work in an increasingly advanced economy, is not sufficient to maintain a proportional growth rate in various technology metrics.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2023

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at