Eroom’s Law and the decline in the productivity of biopharmaceutical R&D

J.W. Scannell
University of Edinburgh
United Kingdom

There is a historical case for describing biomedical innovation from around 1940 to 1970 as a “golden age”, which followed the maturation of medicinal chemistry and the application of physiological science to people. Levels of innovation have since fallen for several reasons. Arguably of greatest importance is the progressive accumulation of an excellent and inexpensive pharmacopoeia of generic drugs. When drugs’ patents expire, they become much cheaper but no less effective. The ever-expanding catalogue of cheap generic drugs progressively raises the evidential, regulatory and competitive bar for new drugs in the same therapy area, eroding incentives for research and development (R&D). Such therapy areas hold meagre returns for investment in “new ideas”, even if the ideas themselves have not become harder to find.

The catalogue of generic medicines, now over 90% of prescriptions in the United States, has therefore squeezed R&D investment towards diseases where R&D has been less successful over the last hundred or so years – diseases that may be pharmacologically intractable and/or hard to model effectively in the laboratory (e.g. advanced Alzheimer’s, some metastatic solid cancers, etc.). Again, it is relatively easy to propose therapeutic “ideas” for these diseases. However, the lack of predictive laboratory models and/or the inherent pharmacological intractability creates a very low “innovation yield” from the human trials required to identify the small subset of ideas that are any good.

The fact of the decline in innovative efficiency in the drug industry is relatively uncontroversial. Steward and Wibberley (1980) asked in Nature, the leading science journal: “Drug innovation: What’s slowing it down?” Two years later in the same journal, Weatherall (1982) speculated on “an end to the search for new drugs”. By 1997, rapid progress against AIDS was celebrated as a return to a golden age of innovation that ran through the middle third of the 20th century (Richard and Wurtman, 1997; Le Fanu, 1999). There has been a distinct uptick in some research productivity measures since 2010, whose causes are considered later, but this is modest compared with the prior fall.

The causes of the decline are more obscure than the decline itself. The literature describes a great many possible causes, but they have not been sufficiently prioritised. Widely touted productivity “fixes” have generally failed to change the downward trend. There has also been a notable failure to explain the large divergence in the efficiency trends of R&D inputs and outputs. DNA sequencing, genomics, high-throughput screening, computer-aided drug design, X-ray crystallography and computational chemistry, among other advances, were created and widely adopted, and/or became orders of magnitude cheaper between 1950 and 2010. The efficiency gains resemble, and in the case of DNA sequencing exceed, the performance gains of computer chips described by the now-famous Moore’s Law. In contrast, the number of new drugs approved by the US Food and Drug Administration (FDA) per billion US dollars of inflation-adjusted industrial R&D investment fell roughly a hundredfold over the same period (Figure 1A). The term “Eroom’s Law” (Eroom is “Moore” backwards) was coined to draw attention to the contrasting trends in input efficiency relative to output efficiency (Scannell et al., 2012).
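The scale of Eroom’s Law can be made concrete with some back-of-the-envelope arithmetic. Assuming a smooth exponential decline consistent with the two endpoints above (a roughly hundredfold fall between 1950 and 2010), the implied efficiency loss per year, and the implied halving time, are:

```python
import math

# Back-of-the-envelope sketch, assuming a constant exponential decline:
# new drugs per billion inflation-adjusted US dollars of R&D fell
# roughly 100-fold between 1950 and 2010.
fold_decline = 100.0
years = 2010 - 1950

# Annual decline rate consistent with those two endpoints.
annual_decline_rate = 1 - fold_decline ** (-1 / years)

# Time taken for R&D efficiency to halve under the same assumption.
halving_time = years * math.log(2) / math.log(fold_decline)

print(f"Implied annual decline in R&D efficiency: {annual_decline_rate:.1%}")
print(f"Implied halving time: {halving_time:.1f} years")
```

The arithmetic implies efficiency fell by roughly 7-8% a year, halving about every nine years – consistent with the halving time reported by Scannell et al. (2012).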

This essay reviews some of the data that point to a decline in the productivity of biopharmaceutical R&D. It summarises some of the causes. It then draws attention to two important papers (Bender and Cortés-Ciriano, 2021a, 2021b) and some technical blog posts (Lowe, 16 December 2021, 9 December 2021, 8 November 2021, 23 July 2021, 30 November 2020, 25 September 2019). These provide a realistic assessment of the impact of artificial intelligence (AI) on drug discovery, which is likely to be modest in the near term. The essay finishes with some comments on financial incentives for biopharmaceutical innovation. At present, private sector investment in novel chemistry may be over-incentivised. Conversely, investment in scientific tools that help decide whether the novel chemistry is likely to benefit sick people is likely under-incentivised.

Before looking at the trends, it is worth reflecting on the practical challenges of measuring biopharmaceutical R&D productivity.

The first challenge is deciding what to measure. Any productivity measure divides an output by an input, and there is a wide choice of both. Output choices could concentrate, for example, on any of the following: drug industry profits, the number of new drugs approved, the number of new patients treated by those new drugs, the number of healthy life-years gained by those patients, etc. Input choices could include the amount of money spent on R&D each year, the quantity of labour involved, etc. Productivity measures could also consider either all or parts of the R&D process (e.g. the academic work, the medicinal chemistry or antibody creation, experimental efficacy testing or the clinical trials).

As the next challenge, some of the most appealing output measures, such as the number of healthy life-years gained, are not practically measurable with any degree of precision. New drugs are adopted into changing health systems and their real-world use is optimised over years or even decades. Drugs, diagnosis, surgery and other aspects of patient-management co-evolve.

There are practical problems with the data needed to measure temporal trends. These can be a Frankenstein’s monster, crudely stitched together from datasets that have changed over time. An FDA drug approval today is not the same as in the 1960s, but many productivity measures treat it as if it were.

There is the problem of extreme lumpiness, or skew, in the financial and therapeutic value of new drugs. The mRNA-based COVID vaccines have allowed billions of people to return to normal life, will generate hundreds of billions of US dollars of revenue, and have transformed future vaccine innovation. But most new drugs offer marginal clinical gains and generate little revenue. Averages and trends calculated from such skewed data can be misleading.

There are also the problems of survivor bias and mean reversion. Drug R&D has the characteristics of a lottery. Companies tend to attract scrutiny, and to enter analytic datasets, after an R&D success – a lottery win. However, performance then tends to revert to the industry average. This means that the measured productivity of samples of “obvious” companies will tend to decline over time, even if industry-wide productivity does not.

Having said all that, a diverse range of metrics shows a declining R&D productivity trend. Several measures are shown in Figure 1. Panel A shows the number of new drugs – defined as “new molecular entities” and “new biologics” – approved by the world’s major drug regulator, the FDA, per billion US dollars of inflation-adjusted R&D in the drug and biotechnology industries. This measure fell roughly a hundredfold between 1950 and 2010 (Scannell et al., 2012). The vertical axis is logarithmic. In other words, a real-terms US dollar of R&D spending in 1950 made a contribution to generating new drugs that was around 100 times greater than a real-terms US dollar in 2010. The downward trend broke around 2010, with a modest uptick (Ringel et al., 2020). The uptick in drug approvals is, however, associated with a decline in the number of eligible patients per new drug. This is due, in part, to greater focus on rare diseases (Ringel et al., 2020). Note that the absolute number of drug approvals has not declined. Rather, R&D spending per drug has grown. Approvals bounced around at 20-30 per year until around 2010 and have roughly doubled since then.

Figures 1B and 1C are financial productivity measures. Investors are typically interested in the unit of profit, per unit of capital employed, per unit of time (Damodaran, 2007). Return on equity (ROE) captures this idea (Figure 1B), and can be easily computed from public accounting data. ROE is annual net profit divided by the average annual equity balance. Equity is a measure of the quantity of the owners’ capital tied up in the business. Equity tends to increase the more the owners invest in long-term assets (such as factories and intellectual property), and the less they take out of the business (in terms of dividends or stock buybacks). Financial analysis of R&D-intense industries, such as the drug industry, should treat spending on multi-year R&D programmes in the same way as long-term capital expenditure is treated in other industries. In this way, R&D investment shows up in the equity balance.1 With this treatment, the drug industry’s ROE becomes a proxy for financial returns on its R&D investment. This is because its profits are largely R&D-dependent and because R&D investment dominates the equity balance. Taking this approach, ROE for publicly traded US drug and biotech firms has fallen since around 2000. Biopharma ROE is now roughly comparable to that of other industries.
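The R&D capitalisation described above can be sketched as follows. This is a simplified version of the standard treatment (see note 1), with an assumed ten-year straight-line amortisation and illustrative figures – neither the numbers nor the helper function come from the essay itself:

```python
def adjusted_roe(net_income, equity, rnd_history, amort_years=10):
    """Approximate ROE with R&D treated as long-term capital investment.

    rnd_history: annual R&D spend, oldest first, most recent last.
    Past R&D is capitalised as a "research asset" and amortised
    straight-line over amort_years (an illustrative simplification).
    """
    recent = rnd_history[-amort_years:]
    # Unamortised R&D still "on the books": each year's spend retains a
    # pro-rata share of its value depending on its age.
    research_asset = sum(
        spend * (i + 1) / amort_years for i, spend in enumerate(recent)
    )
    # This year's amortisation charge on past R&D.
    amortisation = sum(recent) / amort_years
    # Add back the year's R&D expense; subtract the amortisation charge.
    adj_income = net_income + recent[-1] - amortisation
    adj_equity = equity + research_asset
    return adj_income / adj_equity

# Hypothetical firm: net income 10, reported equity 50,
# a flat R&D spend of 5 per year for the past decade.
print(adjusted_roe(net_income=10, equity=50, rnd_history=[5] * 10))
```

With a flat R&D history the income adjustment nets to zero, but the research asset still inflates the equity balance, so the adjusted ROE comes out below the naive net-income-over-equity figure – which is why unadjusted comparisons overstate the apparent performance of R&D-intense sectors.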

Various authors have published an alternative financial measure called the “internal rate of return” (IRR) on R&D investment, also derived from public accounting data. Think of IRR as the aggregate interest rate earned on a series of cashflows, in this case the series of profits from an initial series of R&D investments. For a sample of large US drug and biotech firms, the IRR paints a similar picture to the ROE (Figure 1B) (SSR Health, 2014; Scannell et al., 2015).
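The IRR is the discount rate at which the net present value of the cashflow series is zero. A minimal sketch, with purely hypothetical cashflows (three years of R&D outlays followed by six years of profits):

```python
def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection.

    cashflows: year-by-year net cashflows, starting with the (negative)
    R&D outlays and followed by the profits they eventually generate.
    Assumes a single sign change, so one root exists in [lo, hi].
    """
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    while hi - lo > tol:
        mid = (lo + hi) / 2
        # NPV falls as the rate rises for investment-shaped cashflows.
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: 3 years of R&D spending, then a profit stream.
flows = [-100, -100, -100, 60, 60, 60, 60, 60, 60]
print(f"IRR = {irr(flows):.1%}")
```

If the aggregate interest rate earned on such series falls over time – as the published analyses suggest – R&D investment earns progressively less per dollar committed.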

A third financial measure, less transparent but closer to the reality of individual project-level R&D investment decisions, is an IRR calculated from companies’ own internal project data rather than their published accounts (Grabowski and Vernon, 1990) (Figure 1C). This measure matches project-level R&D spending to the profits that those projects yield (making suitable allocations for the cost of failed projects, etc.). Recent time series (Deloitte, 2021, 2019) show a decline in this R&D productivity measure (Figure 1C).

How did the drug and biotech industries do so well financially for so long (Figure 1B) despite a large fall in innovative efficiency (Figure 1A)? The short answer is that profit growth offset rising R&D costs.2 However, profit growth could not keep pace with R&D cost growth indefinitely (Figure 1D), and this has depressed financial returns on R&D investment since around 2000. In the early 1960s, the industry’s net income was roughly twice its spending on R&D. Today, for the industry as a whole, aggregate R&D spending is higher than net income. Ironically, the “golden age” of pharmaceutical innovation occurred when R&D investment in the pharmaceutical sector was much less intense than today (Figure 1D).

Other published analyses also suggest a decline in the productivity of R&D using a variety of measures. These include SSR Health LLC (2014), Barker and Scannell (2015) and Bloom et al. (2020).

Good explanations of the productivity decline should be able to account for the large scale of the productivity change and its progressive nature. Two broad classes of mutually non-exclusive explanation are key (Scannell et al., 2012):

1. The exhaustion of opportunities for pharmaceutical innovation. These opportunities might include as-yet-untreated diseases, unexploited biological mechanisms or unexplored regions of chemical space.

2. The gradual abandonment of more productive methods of R&D in favour of less productive ones (Horrobin, 2003). For example, many patients were treated as “experimental material” during the 1950s and 1960s. This may have been extremely productive but would cause horror today (Le Fanu, 1999).

These classes of explanation are causally linked. The depletion of certain opportunities has forced the abandonment of more productive R&D methods and the adoption of less productive ones. Consider a therapy area where R&D is particularly successful. The patents on the successful drugs eventually expire, and generic versions become available at a fraction of the price of new branded drugs. Generics often settle at around 10% of the price of the branded versions they replace. Since the 1980s, health systems have increasingly used these cheap generics before using more expensive, newer, patent-protected drugs. The ever-growing catalogue of cheap generic drugs progressively raises the competitive bar for new drug candidates in the same therapy area and so deters R&D investment. In 1994, around one-third of US prescriptions were for generic drugs. Today, over 90% of US prescriptions are for generics. Antibiotics, nearly all discovered and launched before 1970, are one obvious example. Antidepressants, nearly all discovered and launched by the early 1990s, are another. The improving generic pharmacopoeia is, of course, good for health systems. However, it pushes R&D towards diseases that have proven less tractable over the last 80 years – diseases for which the established R&D methods have proven less productive.

The second of the two main proposed explanations for declining R&D productivity relates to the progressive abandonment of more productive R&D methods (Scannell and Bosley, 2016; Scannell et al., 2022). The important factor here is the adequacy of the models used to test both new therapeutic hypotheses and drug candidates. These include animal, in vitro, computational and AI models, and even certain kinds of experimental medicine in human subjects. These are the tools used to evaluate new drugs and therapeutic mechanisms to decide if they are likely to work in patients.

Decision theory suggests that the ability to detect effective therapeutic candidates is extremely sensitive to a model’s “predictive validity”. Models have high predictive validity if they rank a set of therapeutic candidates in a way that matches the ranking generated if one could test all the candidates in patients. Furthermore, predictive validity is generally more important than simply being able to test tens or even hundreds of times as many drug candidates. In other words, quality beats quantity.
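The sensitivity to predictive validity can be illustrated with a toy simulation. The model and numbers below are illustrative assumptions, not taken from the decision-theory papers cited: each candidate has an unobserved true quality, a screening model produces a score correlated with that quality at level rho (its predictive validity), and the top-scoring candidate is progressed.

```python
import math
import random

def expected_best_pick(rho, n_candidates, trials=500, seed=0):
    """Average true quality of the top-scoring candidate when a screening
    model's score correlates with true quality at level rho.

    Toy model: true quality and model noise are standard normal;
    score = rho * quality + sqrt(1 - rho**2) * noise.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        best_score, best_quality = -float("inf"), 0.0
        for _ in range(n_candidates):
            q = rng.gauss(0, 1)
            s = rho * q + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
            if s > best_score:
                best_score, best_quality = s, q
        total += best_quality
    return total / trials

# A high-validity model screening 100 candidates versus a low-validity
# model screening ten times as many.
print(expected_best_pick(rho=0.9, n_candidates=100))
print(expected_best_pick(rho=0.3, n_candidates=1000))
```

In runs of this sketch, the high-validity model screening 100 candidates reliably selects a better candidate than the low-validity model screening ten times as many – echoing the “quality beats quantity” result of Scannell and Bosley (2016).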

The history of drug R&D suggests that screening and disease models with high predictive validity (e.g. animal models of bacterial infection, animal models of hypertension, etc.) correctly identified drugs that worked well in people. When their patents expired, the drugs became the generics that undermine economic incentives for further R&D in the disease area. This rendered the best models commercially redundant (Scannell and Bosley, 2016; Shih, Zhang and Aronov, 2018; Scannell et al., 2022). This leaves the diseases for which the models are widely acknowledged to lack predictive validity (e.g. advanced solid cancers, Alzheimer’s, etc.).

Ironically, bad screening and disease models often remain in academic and commercial use for decades (Horvath et al., 2016; Scannell et al., 2022). There are several possible reasons for this. First, there may be nothing obviously better. Second, there may be strong tradition and availability biases (Veening-Griffioen et al., 2021). Third, they may not identify the useful drugs that would lead to their redundancy (Scannell and Bosley, 2016). Furthermore, as argued later, the private sector incentives for developing better screening and disease models are relatively weak. A progressive decline in the predictive validity of the stock of industrially relevant screening and disease models offsets the large gains in brute force efficiency.

The post-2010 uptick in drug approvals is also explicable within the framework of screening and disease models’ predictive validity. Modern genetic methods have made it easier to match pathological mechanisms with the patients who share the pathology. For genetically identifiable groups of patients, often with rare diseases that have a simple genetic basis, one can create or identify screening and disease models with relatively high predictive validity (Ringel et al., 2020). Modern genetic methods have been much less useful in creating good models of common diseases where single gene errors play a less important role in human pathology (Joyner and Paneth, 2019).

Researchers are clearly not exhausting some other important resources or opportunities. For example, the number of different chemical compounds that could be produced is, for practical purposes, infinitely large. There also appear to be plenty of as-yet-unexploited drug targets and therapeutic mechanisms (Finan et al., 2017; Rodgers et al., 2018; Shih et al., 2018).

AI technologies will help in drug R&D. They will be important in certain niches, particularly with respect to drug chemistry. However, their overall impact on industry-level productivity will likely be modest in the near term. The areas with the most progress in using AI are rarely relevant to the rate-limiting steps in drug R&D. Meanwhile, the biggest gaps in the ability to raise R&D productivity tend to be less amenable to AI solutions. For a longer and more technical discussion of these points, see Bender and Cortés-Ciriano (2021a, 2021b) and a series of blog posts by Lowe (16 December 2021, 9 December 2021, 8 November 2021, 23 July 2021, 30 November 2020, 25 September 2019).

Much of what is called AI, in fact, falls within the broader field of statistical pattern recognition (and increasingly, pattern generation). However, any statistical pattern recognition technology is likely to perform poorly without sufficient training data that closely resemble the real-world problems to which the technology will be applied. This is the case with many of the as-yet-poorly treated diseases that still have commercial value for the drug industry. By AI standards, the data are lousy (Bender and Cortés-Ciriano, 2021a, 2021b). It may be, for example, that most of the published biomedical literature is false, irrelevant or both (Horrobin, 2003, 2001; Ioannidis, 2005; Prinz, Schlange and Asadullah, 2011; Begley and Ellis, 2012). It is unreasonable to expect screening and disease models that struggle to identify good therapeutic mechanisms and good drugs to suddenly generate the reliable and unbiased data required to train accurate AI algorithms. Generating better biological data will help take advantage of AI (and a wide range of older pattern recognition methods). However, that is costly and takes time.

The general enthusiasm for AI also means that a wide set of activities now fall under the AI banner. This includes many disciplines that have been important in drug R&D for decades, such as chemoinformatics, bioinformatics, computational chemistry, structural biology and biostatistics (Bender and Cortés-Ciriano, 2021a, 2021b). For example, the “protein folding problem” of structural biology – where there has been both fanfare and real progress in applying AI – is not new. It reflects “a seventy-year symbiotic relationship between molecular biology and computer science” (Singh, 2020). Similar long-standing relationships exist between computer science and genomics, virtual drug screening, etc. These long-standing relationships overlap in time with a huge decline in R&D productivity (Figure 1A).

There is a degree of consensus that the lack of valid screening and disease models is a major constraint on drug discovery. Researchers also generally agree that novel chemistry is the most appropriable and investible form of biopharmaceutical innovation because it can be protected by strong patents.

Chemistry has become easier over time, in part because of the contribution of computational and data analytic methods. It can be difficult, however, for the private sector to appropriate much of the potential economic value by investing in better screening and disease models (Scannell and Bosley, 2016; Billette de Villemeur and Versaevel, 2019). The models behave as what economists call “public goods” with substantial knowledge spillovers to those who have not invested in them. Once the mechanism identified by the new model is publicly proven in early trials in human patients, for example, the information becomes freely available to competitors. Competitor firms can then exploit the mechanism without investing in the novel models that led to its discovery. This situation skews private sector investment. Cancer drug candidates, for example, have among the highest clinical failure rates of any major therapy area (Shih, Zhang and Aronov, 2018; Wong, Siah and Lo, 2019). Nonetheless, as of 2018, around 1 500 compounds were in human trials (Moser and Verdin, 2018). They were progressed on the basis of in vitro and animal-based cancer models that nearly everyone involved believes to be inadequate. In relative terms, investing in chemical “roulette” appears to be too profitable. Meanwhile, investing in the screening and disease models that might improve the odds of the game is not profitable enough.


References

Barker, R.W. and J.W. Scannell (2015), “The life sciences translational challenge: The European perspective”, Therapeutic Innovation & Regulatory Science, Vol. 49/3, pp. 415-424.

Begley, C.G. and L.M. Ellis (2012), “Raise standards for preclinical cancer research”, Nature, Vol. 483/7391, pp. 531-533.

Bender, A. and I. Cortés-Ciriano (2021a), “Artificial intelligence in drug discovery: What is realistic, what are illusions? Part 1: Ways to make an impact, and why we are not there yet”, Drug Discovery Today, Vol. 26/2, pp. 511-524.

Bender, A. and I. Cortés-Ciriano (2021b), “Artificial intelligence in drug discovery: What is realistic, what are illusions? Part 2: A discussion of chemical and biological data”, Drug Discovery Today, Vol. 26/4, pp. 1040-1052.

Billette de Villemeur, E. and B. Versaevel (2019), “One lab, two firms, many possibilities: On R&D outsourcing in the biopharmaceutical industry”, Journal of Health Economics, Vol. 65, pp. 260-283.

Bloom, N. et al. (2020), “Are ideas getting harder to find?”, American Economic Review, Vol. 110/4, pp. 1104-1144.

Damodaran, A. (2020), “Data: History and Sharing”, webpage (accessed 4 May 2020).

Damodaran, A. (2007), “Return on Capital (ROC), Return on Invested Capital (ROIC) and Return on Equity (ROE): Measurement and Implications”, SSRN, 1105499.

Deloitte (2021), Seeds of Change: Measuring the Return on Pharmaceutical Innovation 2021, Deloitte Centre for Health Solutions, London.

Deloitte (2019), “Pharma R&D return on investment falls to lowest level in a decade”, Press Release, Deloitte, London, 18 December.

Finan, C. et al. (2017), “The druggable genome and support for target identification and validation in drug development”, Science Translational Medicine, Vol. 9/383.

Goncharov, I., J.C. Mahlich and B.B. Yurtoglu (2018), “Accounting profitability and the political process: The case of R&D accounting in the pharmaceutical industry”, SSRN, 2531467.

Goncharov, I., J.C. Mahlich and B.B. Yurtoglu (2014), “R&D investments, intangible capital and profitability in the pharmaceutical industry”, Value in Health, Vol. 17/7, p. A419.

Grabowski, H. and J. Vernon (1990), “A new look at the returns and risks to pharmaceutical R&D”, Management Science, Vol. 36/7, pp. 804-821.

Horrobin, D.F. (2003), “Modern biomedical research: An internally self-consistent universe with little contact with medical reality?”, Nature Reviews Drug Discovery, Vol. 2/2, pp. 151-154.

Horrobin, D.F. (2001), “Realism in drug discovery – could Cassandra be right?”, Nature Biotechnology, Vol. 19/12, pp. 1099-1100.

Horvath, P. et al. (2016), “Screening out irrelevant cell-based models of disease”, Nature Reviews Drug Discovery, Vol. 15/11, pp. 751-769.

Ioannidis, J.P.A. (2005), “Why most published research findings are false”, PLOS Medicine, Vol. 2/8, p. e124.

Joyner, M.J. and N. Paneth (2019), “Promises, promises, and precision medicine”, The Journal of Clinical Investigation, Vol. 129/3, pp. 946-948.

Le Fanu, J. (1999), The Rise and Fall of Modern Medicine, Little Brown, Boston.

Lowe, D. (16 December 2021), “AI improvements in chemical calculations”, In the Pipeline blog.

Lowe, D. (9 December 2021), “Another AI drug announcement”, In the Pipeline blog.

Lowe, D. (8 November 2021), “AI-generated clinical candidates, so far”, In the Pipeline blog.

Lowe, D. (23 July 2021), “More protein folding progress – what’s it mean?”, In the Pipeline blog.

Lowe, D. (30 November 2020), “Protein folding, 2020”, In the Pipeline blog.

Lowe, D. (25 September 2019), “What’s crucial and what isn’t”, In the Pipeline blog.

Moser, J. and P. Verdin (2018), “Burgeoning oncology pipeline raises questions about sustainability”, Nature Reviews Drug Discovery, Vol. 17/10, pp. 698-699.

Prinz, F., T. Schlange and K. Asadullah (2011), “Believe it or not: How much can we rely on published data on potential drug targets?”, Nature Reviews Drug Discovery, Vol. 10/9, p. 712.

Richard, J. and M.D. Wurtman (1997), “What went right: Why is HIV a treatable infection?”, Nature Medicine, Vol. 3/7, pp. 714-717.

Ringel, M.S. et al. (2020), “Breaking Eroom’s law”, Nature Reviews Drug Discovery, preprint.

Rodgers, G. et al. (2018), “Glimmers in illuminating the druggable genome”, Nature Reviews Drug Discovery, Vol. 17/5, pp. 301-302.

Scannell, J. et al. (2022), “Predictive validity in drug discovery: What it is, why it matters and how to improve it”, Nature Reviews Drug Discovery, Vol. 21/12.

Scannell, J.W. and J. Bosley (2016), “When quality beats quantity: Decision theory, drug discovery, and the reproducibility crisis”, PLOS ONE, Vol. 11/2, p. e0147215.

Scannell, J.W., S. Hinds and R. Evans (2015), “Financial returns on R&D: Looking back at history, looking forward to adaptive licensing”, Reviews on Recent Clinical Trials, Vol. 10/1, pp. 2-43.

Scannell, J. et al. (2012), “Diagnosing the decline in pharmaceutical R&D efficiency”, Nature Reviews Drug Discovery, Vol. 11/3, pp. 191-200.

Shih, H.-P., X. Zhang and A.M. Aronov (2018), “Drug discovery effectiveness from the standpoint of therapeutic mechanisms and indications”, Nature Reviews Drug Discovery, Vol. 17/1, pp. 19-33.

Singh, J. (2020), “The history of the protein folding problem: A seventy year symbiotic relationship between molecular biology and computer science”, 12 June, Medium.

SSR Health LLC (2014), “Biopharmaceuticals R&D productivity: Metrics, benchmarks and rankings for the 22 largest (by R&D spending) US-listed firms”, SSR Health LLC.

Steward, F. and G. Wibberley (1980), “Drug innovation: What’s slowing it down?”, Nature, Vol. 284/5752, pp. 118-120.

Veening-Griffioen, D. et al. (2021), “Tradition, not science, is the basis of animal model selection in translational and applied research”, ALTEX - Alternatives to Animal Experimentation.

Weatherall, M. (1982), “An end to the search for new drugs?”, Nature, Vol. 296/5856, pp. 387-390.

Wong, C.H., K.W. Siah and A.W. Lo (2019), “Estimation of clinical trial success rates and related parameters”, Biostatistics, Vol. 20/2, pp. 273-286.


1. This graph has not been published elsewhere, although the analysis is easy to replicate and the required accounting data are widely available. Damodaran routinely puts similar analyses and datasets in the public domain (Damodaran, 2020). An important technical point here, often neglected in the public policy debate on drug industry profits and productivity, is that one needs to adjust both the profits and the equity balance for R&D capitalisation if one wants to compare ROE in R&D-intense industries, such as the drug and biotechnology industries, with that of other industrial sectors (Damodaran, 2020, 2007; Goncharov, Mahlich and Yurtoglu, 2018, 2014). Without the adjustment, one overstates the financial performance of R&D-intensive sectors.

2. In recent decades, expensive drug classes (e.g. cancer drugs and rare disease drugs) have grown as a share of drugs launched. Furthermore, newly launched drugs in these expensive classes generally launch at higher prices than similar drugs launched in prior years. This phenomenon – “mix inflation” – largely superseded like-for-like price inflation (i.e. the same drug getting more expensive in real terms each year). At an aggregate industry level, like-for-like price inflation has become less important. Payers have become much more effective at engendering price competition between branded drugs that are therapeutic substitutes. Like-for-like price inflation itself superseded prescription volume growth as the driver of revenue growth for branded drugs. Patent expiry and generic substitution have been a major drag on branded drug volumes in the United States since the mid-1980s. Expensive branded drugs are now less than 10% of US prescriptions. Note that these pricing comments apply to drug prices net of rebates and discounts and are also US-focused.

© OECD 2023