
The waste in futile oncology drug development

John Hickman, PhD

Co-authors: Valerie Jentzsch, Leeza Osipenko, PhD, Jack Scannell, PhD

 

Leeza Osipenko: Today we have a special event: we're presenting work done in-house at Consilium. I would like to give you a very brief background of how we arrived at this project. It all started in the summer of 2021, when Valerie Jentzsch joined Consilium as an intern from the LSE. We wanted to look at targeted therapies and the economics around them, and eventually, after her dissertation, this evolved into the very interesting work that John Hickman will be presenting today. But this is a product of the efforts of the four of us: Valerie started it with her dissertation; Jack Scannell is an expert in drug R&D processes from both the financial and the biology side; and John Hickman worked in drug R&D all his career. So great minds came together to put this project forward, and we're very excited to present this work to you. We will be submitting it to the journal next week, and we have the emails of everyone who joined the seminar, so whenever the publication comes out we'll be happy to share it with you. But for now, enjoy the show.

John Hickman: This is going to be a joint presentation between Valerie, Jack, Leeza, and myself. You can see the title there. I want to start with some background, and here it is. A few points. I think you all know that mortality from cancer worldwide is projected to rise in the next decade, and that in Europe it will almost equal that in the United States. Because of this medical need to do something about the disease, 40% of pharma projects are now in oncology, and that interest from pharma started around the year 2000; we might come back to that. In a recent paper that I'll show in a moment, it was estimated that clinical trials in oncology are the most expensive of all therapeutic areas. That's one particular study suggesting they're certainly expensive. Then there's another set of studies, and I'll illustrate one of them: there's around a 90% attrition rate in oncology drug development. That's a failure to take a drug to registration, and that figure is somewhat debatable, and complex as well. What are the causes of attrition? Quite a large part is actually a failure of the candidate drugs to be efficacious in clinical trials. If you look at that situation, then many drugs being tried are going to be inactive, and they can expose patients to toxicity, so it's very important that drugs that go into clinical trial have a reasonable chance of success. In 2018 there was a paper published in Nature Reviews Drug Discovery — I don't know what the current figure is, but at that time there were 1,405 oncology drugs in clinical development. That's a huge number. Many companies compete to find drugs for the same target: that article, which I'll refer to later, shows that many companies will be trying to make yet more PD-1 inhibitors or RAS inhibitors. This is the me-too problem, and many fail. Then the last point that we'll discuss is that drugs enter clinical trial based on so-called positive preclinical data, but fail in the clinic largely due to inefficacy. So there's a problem we want to address about the quality of preclinical data and the decisions made in industry to take candidates into humans. I think Jack will address this as we go through the presentation, because Jack and I and others recently had a publication addressing it.

Okay, so that's the background. This slide shows a couple of totally unreadable papers, but I had to include them. In 2019 a paper published in Biostatistics, I think, suggested the worst attrition rate — a success rate of only 3.4% — was in oncology. A bit surprising when you think of how badly CNS, the central nervous system, is also doing. But looking across a number of publications — another paper in Science, another in Nature Reviews Drug Discovery — one might say it's around a 90% failure rate: only about one in ten succeeds. As I mentioned before, how much does it cost to take things through clinical trials to registration? This is a quote from this paper: therapeutic-area-specific estimates were highest for anti-cancer drugs, between $944 million and $4.54 billion to bring one drug to market. So oncology is very expensive, and quite clearly — I don't need to spell it out — if there's a huge attrition rate, then there's likely to be a loss of quite a lot of money and resources.

So the questions that started this were: how many patients are involved in these failed trials, given this high attrition rate? What might be the factors responsible for the failures — there are probably going to be many, but we'll focus on one. Could we estimate the expenses of oncology clinical trials which fail to show drug activity? And what can be learned? We did a case study, because I remember — I was around doing drug discovery in the early 2000s — and we chose to look at inhibitors of the insulin-like growth factor-1 receptor (IGF1R), and here's a picture. This is a receptor that drives a pretty canonical pathway: proliferation, cell survival, and it may also play a role in motility. In 1997 Bert Vogelstein's group published a paper in Science, at a time when gene expression was all the vogue, showing that in gastrointestinal tumours there was overexpression of this receptor. Then in 2004 a very nice review by Michael Pollak brought together all the data suggesting this signalling pathway was important in cancer, based on both laboratory studies and studies of pathology, looking at the expression of the receptor and at some of the circulating ligands, which include insulin-like growth factor-1 and insulin itself. So a very good case was put for making inhibitors against this receptor, and I'll come back to why that was a little later; I'll amplify it. So I'll pass over to Valerie and let her tell you what she did.


Valerie Jentzsch: As Leeza mentioned, this work was started during my time at Consilium last summer. While I was there, we essentially created this database of IGF1R inhibitor clinical trials. We first searched some general public databases on IGF1R and some other key terms as well, like targeted therapy, molecular IGF1 therapies and so on, and through that we identified 16 drugs that had been used in trials targeting IGF1R. Then, with those specific drug names and the general search term IGF1R, we searched ClinicalTrials.gov from January 2000 through to July 2021 to identify the clinical trials that had been conducted on those specific drugs. We then went through those trials, removed all the ones that weren't specifically in oncology, and came to a total of 183 trials. We also included information such as the drug name, its type, the number of patients enrolled in the trial, the company leading the trial, the phase of the trial and its current status, to be able to do some analysis on those aspects as well.
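For illustration only, the filtering step Valerie describes might look something like the sketch below. It assumes a CSV export from ClinicalTrials.gov; the file name, the column names, and the three drug names shown (a subset of the 16) are stand-ins, not the project's actual code or data.

```python
import pandas as pd

# Three of the sixteen IGF1R drugs, for illustration; the full list came
# from the initial database search described above.
IGF1R_DRUGS = ["figitumumab", "ganitumab", "dalotuzumab"]

trials = pd.read_csv("igf1r_trials_export.csv")  # hypothetical export file

# Keep trials that test one of the identified IGF1R drugs...
has_drug = (trials["Interventions"].fillna("").str.lower()
            .apply(lambda s: any(d in s for d in IGF1R_DRUGS)))

# ...and drop trials that are not in oncology (IGF1R inhibitors were
# also tried in other diseases).
is_oncology = trials["Conditions"].fillna("").str.contains(
    "cancer|tumor|carcinoma|sarcoma|lymphoma|myeloma", case=False)

oncology_trials = trials[has_drug & is_oncology]
print(len(oncology_trials), "oncology trials,",
      int(oncology_trials["Enrollment"].sum()), "patients enrolled")
```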

John Hickman: You can see on the table here that it's a mixture of antibodies, small molecules, and a rather rare antisense cell therapy. So a broad range of agents from 16 companies, and, as Valerie said, a total of 183 clinical trials with over 12,000 patients. Not all of them got the drugs, of course — some would be in the control arm — but that's a lot of patients. So do you want to say something about your spreadsheet? You did mention this, but just take us through it.

Valerie Jentzsch: Sure. As mentioned, we searched ClinicalTrials.gov, so all the way on the left of the spreadsheet you can see the specific NCT number, and then we have the title of the trial and the specific drug code. We also looked at whether the trial was single or combination, so whether the IGF1R drug was given on its own or in combination with another drug. We also looked into the lead agency — as John mentioned, we identified 16 main trial leads, companies that were leading research in the area — and we identified whether the trial lead was industry or academic; we'll talk about that later with regard to the costs for these trials. Then we also looked at which cancer indication the drugs were tested in, and, as you can see, the phase of the trial and the trial status. We'll discuss costs a little more later, but whether the trial was academic or industry-led, the phase of the trial, and the status of the trial all play a role when we try to estimate the costs for each of them.

John Hickman: So we've got these huge spreadsheets that give us all this detail on these 183 trials. Here is a representation Valerie put together of the phases of the clinical trials. You can see the first one, performed in 2003, was a phase 1, obviously, and you can see the progress here. The dark bars at the top of this graph are phase 3 — that is where you are really looking for an effect on a tumour; in phases 1 and 2 you'll be looking at toxicity and some indication of activity. Phase 3 is where you actually believe there is activity, and you'll look at a number of patients across a variety of pathologies, as Valerie said. If you look at this, you can see there weren't a lot of phase 3s. There were a lot of phase 2s, and they didn't go any further, which suggests that activity was not seen in phase 2. I should say that in phase 1 there was very little toxicity of the agents by themselves. I won't go into it, but in combinations — and there were lots of combinations — they were toxic, and provided no benefit. What you can also see very clearly is that after 2009 — this is a nice bell-shaped curve — interest was lost in taking this any further, with a little bit of revitalization in 2014, but there was a quite rapid decline after all these phase 2 trials. So none of the 16 drugs exhibited clinical activity against a wide variety of tumours — I haven't shown that data, but this is away from hematopoietic malignancies, across a wide variety of solid tumours: breast, colon, lung — in 183 trials. None were registered for use in oncology. And then in 2013 one of the great proponents, an expert and a fabulous scientist, Renato Baserga, wrote this review — that's 2013, so he was probably writing it in 2012 — 'The decline and fall of the IGF-I receptor'. So one of the strong proponents and experts on IGF1 receptor biochemistry and biology accepted that this was not going to work. So: 183 trials, for nothing. The other thing we wanted to do, thanks to Jack, was to work out the expense involved — and that word 'expense' is something I keep tripping over; Jack will say something about it. The great thing Jack did for us was to contact John Moser at Evaluate, and Jack will tell you the rest of the story. You'll see on the right-hand side of this slide some numbers, which are dollars.

Jack Scannell: So estimating R&D costs is a real rat's nest, to be honest, and it's a rat's nest for three reasons. The first is that it's politically contentious: some of the drug pricing debate hangs on, in my view, not very good arguments about R&D costs, but nonetheless this means that some people like to imagine R&D is very cheap, or that it's incredibly expensive, depending on whether they're trying to push drug prices up or down. Then there are two other, slightly more data-analytic points. The first is that there is actually no real standard way of defining R&D expenses, and a lot of it has to do with things you either include or don't include: do you include the cost of failure? Do you include the time cost of money? Those things have a very, very big effect — they can change your estimates by a factor of 2 or 3 or 4. And then it also depends a lot on the sample of programs you look at. Some drugs are approved with exposure in a few tens of patients; some are approved with exposure in tens of thousands. If you pick expensive programs, you'll get a very different answer than if you pick cheap programs. But what we managed to do is get some data that is about as good as it gets, from an outfit called Evaluate. This is a commercial database — you normally have to pay to get the data — and what they do is take advantage of the fact that small and mid-sized drug companies often disclose R&D expenses at the product level in their filings to the Securities and Exchange Commission in the US. Public companies have to file detailed accounts, and for small companies those will sometimes have product-level information. From that you can build benchmarks: companies will tell you how much they spent on phase 2 for a particular product in a particular year, and if you know what therapy area that product was in, and you go to ClinicalTrials.gov and see how many patients were involved, you can generate decent benchmarks. Evaluate generate their cost data on that basis, and if you then apply those cost benchmarks to the data that Valerie painstakingly extracted from ClinicalTrials.gov, you can get a reasonable idea of the R&D expenses that would have been associated with these programs.

John Hickman: So this will take a bit of explaining. I'll leave you both to do it

Valerie Jentzsch: So we have this total of 183 trials, and we separated those out. We had 129 for which Evaluate provided information — where, as Jack just explained, Evaluate was able to cost the programs. Then we identified a further 55 where Evaluate did not have the data in their system to estimate the costs, and we did that through our own methodology. What we did was take all of the 129 trials where Evaluate provided data and the total number of patients in all of those trials, per phase, and divide one by the other to obtain the cost per patient, per phase, of a clinical trial. We then took that figure and the patient numbers we identified from ClinicalTrials.gov and multiplied them to get an estimate of the trial cost for the 55 additional trials. For the 11 industry trials you see in the second column, that was the methodology we used. In the third column, when we get to academic trials — Jack can probably talk about this as well — we took the assumption that academic trials are less costly than industry trials, so we made an additional modification to the per-patient, per-phase numbers by a factor of 0.5 and a factor of 0.2, to get a range of what these academic trials could potentially cost, and then did the same thing as before: we took the per-patient, per-phase cost and multiplied it by the number of patients in each trial. These are the numbers we got.
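A minimal sketch of that costing methodology, with made-up numbers standing in for the real Evaluate and ClinicalTrials.gov data:

```python
import pandas as pd

# Illustrative stand-ins for the Evaluate-costed trials (not the real data).
costed = pd.DataFrame({
    "phase":    ["1", "2", "2", "3"],
    "patients": [30, 120, 90, 450],
    "cost_usd": [6e6, 30e6, 25e6, 150e6],
})

# Step 1: per-patient, per-phase benchmark from the costed trials.
per_patient = (costed.groupby("phase")["cost_usd"].sum()
               / costed.groupby("phase")["patients"].sum())

# Trials Evaluate could not cost, to be estimated from the benchmark.
uncosted = pd.DataFrame({
    "phase":    ["2", "2", "3"],
    "patients": [60, 80, 300],
    "lead":     ["industry", "academic", "academic"],
})

# Step 2: benchmark x enrollment, scaling academic-led trials by 0.5 or
# 0.2 to bracket the assumption that academic trials run cheaper.
for factor in (0.5, 0.2):
    scale = uncosted["lead"].map({"industry": 1.0, "academic": factor})
    est = per_patient.loc[uncosted["phase"]].to_numpy() * uncosted["patients"] * scale
    print(f"academic factor {factor}: estimated total ${est.sum() / 1e6:.0f}M")
```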

Jack Scannell: If we've got some finance geeks in the audience, they may be interested in this; if you're not a finance geek, you'll probably find it horribly boring. The costs here map roughly onto expenses that would show up in the income statement of drug and biotech companies. So they're accounting costs, roughly — not finance costs, where you would include a cost-of-capital number. If you remember, John introduced a very good paper earlier on when he said cancer drug approvals cost anywhere between $944 million and $4.54 billion; a lot of that range is explained by whether or not you add the time cost of money. If we added the time cost of money and inflation to these figures, you could easily take them from $2 billion to about $6 billion or $6.5 billion in 2022 dollars. So I just want to make sure I don't give the impression of spurious precision here. These numbers are about as good as one can estimate from the outside, but the precise number still depends a lot on how you define it.

John Hickman: Yeah, just a point I'd like to make: if you go over to the left of the table, looking at the expenses Evaluate found for the trials, these are not adjusted for inflation. The numbers come from the range of years you saw — and actually there was yet another trial in 2020 — so we're talking about 2003 to 2020. These aren't adjusted for inflation, and, as Jack has said, we could probably add quite a lot more to these figures. Maybe, Jack, another word about this factor here for academic trials?

Jack Scannell: So there's a fairly strong consensus amongst experts I know — and experts that several of us know — that academic trials are often cheaper than industry-sponsored trials. There's a whole bunch of reasons for that, one of which is that you don't necessarily need to submit the results from an academic trial to a suspicious drug regulator who will scrutinize them in great detail. But I certainly haven't come across good data on quite what that factor is, and the experts I know disagree about it depending on their personal experience. So we have indicated quite a wide range of possibilities there.

 

John Hickman: These are arbitrary numbers we thought were reasonable: say, half the cost, or a fifth of the cost. But the totals on the right-hand side are the important thing. We estimate these trials cost between $1.9 billion at the lower range and $2.4 billion, not adjusted for inflation. So why did all of these agents fail? Well, around 2013-14, going on to 2015, I think half a dozen — six or seven — papers came out from many of the experts in the IGF1 receptor field trying to analyse what had happened, and they didn't say anything astonishing. With hindsight we know that for all of the inhibitors of growth and survival signalling there is redundancy between receptors — with the EGF receptor and so forth — and this compensatory signalling can, in the case of the insulin-like growth factor pathway, actually take over the signalling. The big thing they went on about in all of these papers was the lack of appropriate biomarkers, and actually I think that's quite disputable, because I've read back over that literature, and a lot of the companies published on biomarkers and presumably used those biomarkers in the trials. There was a recent paper on attrition in oncology saying again that the lack of biomarkers was an important factor in drug failure, and I agree with that — I agree with all these factors. But the strange thing is that these six or seven papers failed to mention something I think is quite important, and the question is: what data propelled these drugs into trial? When management was taking the decision to go into humans, what was that decision based on? Largely, it was based on preclinical studies and data — particularly, I presume, though it's a presumption, coming from experiments in vivo. I found — and it may not be fully exhaustive — 35 publications reporting data from xenografts, with 62 experiments, and half of these inhibited tumour growth by only 50% or less. I think it's highly questionable that, on the basis of this data, which was later published, it was judged sufficient to take these drugs into clinical trial. This was perceived to be positive data, and the clinical trials showed there was no effect on human tumours, and yet all the preclinical data was spoken of as consistent and positive — one of the reviews said there was very good positive data. I think this is the major problem, not only in this field of IGF1 receptor antagonists but elsewhere as well: the quality of the data from these preclinical models, and the decisions being made on it, are highly questionable. So I'm going to turn over to Jack, and before he speaks, I think you ought to read his quotation from this paper that just came out in Nature Reviews Drug Discovery. Take a moment to read this nice quotation about the importance of models.

“Thus, the thing that nearly everyone already believes is important is more important than nearly everyone already believes” Jack Scannell, PhD

 

Jack Scannell: I'll give you a bit of background on this paper first, before talking more specifically. There's a very ugly contrast at the heart of modern pharmaceutical R&D, and it is this: all of the technologies that we think make drug R&D better — DNA sequencing, looking at protein structures, our ability to make novel chemicals, our ability to make clever computers that test those chemicals in silico against the protein structures, our ability to make transgenic animals — all of those things have, since 1950, got hundreds, thousands, millions, or billions of times cheaper and better. But it costs the drug industry, if you include the cost of failure and account for inflation, 100 times more to bring a drug to market now than it did in 1950. So there's this great, powerful productivity headwind. If you look at the decision-making side of drug R&D and you regard screening and disease models as tools that allow you to make good or bad decisions, and you try to formalize that thinking in some decision-theoretic models, what you find is that very small changes in the quality of screening and disease models can have a huge impact on one's ability to detect drugs that work in people. And this perhaps partly explains the long-term productivity decline, because what's happened over the years is that for the diseases for which we have good models — for example many anti-infectives, stomach ulcer drugs, anti-hypertensives — those models find effective drugs that work in people, and consequently the models are retired, because those areas become commercially boring: we have lots of generic drugs. The drug industry is then left with the diseases where the models routinely give us the wrong answer, because those are the diseases which remain untreated and therefore remain commercially interesting. So it's left with diseases like advanced solid cancers and Alzheimer's. A point we make in this paper is that screening and disease models, which in principle could be evaluated, are often under-evaluated. We don't necessarily think rigorously about the extent to which they recapitulate the biology of the human clinical state. We don't necessarily think enough about the tests and endpoints we use in those models, and how they map onto the tests and endpoints we would use in a human trial. I think the mouse xenograft example is a good one here — or rather a bad one. One typically looks at tumour response in xenografts; in humans we are actually interested in overall survival. One measures mouse xenografts after 35 days; in people we're really interested in how they do over a much longer period. Another important test and endpoint which needs to map from animal models to people is the performance threshold one needs to see in the animal model to believe there will be a useful therapeutic effect in people; it seems clear, at least from John's analysis of the xenograft data here, that the efficacy hurdle applied in these models was insufficiently high. John can maybe say a bit more if he wants — I don't want to go into great detail, particularly as the paper is there for all to read — but both John and I think there's been a particular problem with oncology models over the years, partly related to the economic incentives that people doing R&D face, and if there are questions about that we'll be happy to take them.
But I think in this particular case there are particular problems around oncology models, and that's a major contributor to the relatively low rates of success we see in oncology trials.
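As a toy illustration of the decision-theoretic point Jack sketches — not the actual model from the paper — consider candidates ranked by a screen whose readout is only imperfectly correlated with true clinical value. The simulation below, with made-up parameters, shows how quickly the chance that the top-ranked candidate genuinely works falls as the screen's predictive validity drops:

```python
import numpy as np

rng = np.random.default_rng(42)
n_candidates, n_runs = 1_000, 5_000
works_threshold = 2.33  # true quality in the top ~1% counts as "works in people"

for rho in (0.99, 0.9, 0.7, 0.5):  # predictive validity of the screen
    hits = 0
    for _ in range(n_runs):
        true_quality = rng.standard_normal(n_candidates)
        # Screen readout: correlated with true quality, plus independent noise.
        readout = (rho * true_quality
                   + np.sqrt(1 - rho**2) * rng.standard_normal(n_candidates))
        # Progress the candidate the screen ranks best; did it really work?
        if true_quality[np.argmax(readout)] > works_threshold:
            hits += 1
    print(f"rho = {rho:.2f}: top-ranked candidate works in {hits / n_runs:.0%} of runs")
```

Even a modest erosion of the screen's correlation with clinical truth collapses the hit rate, which is the sense in which small changes in model quality can swamp brute-force increases in the number of candidates screened.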

 

John Hickman: I would just comment that the failure to see any clinical activity of these 16 drugs suggests that the models are invalid — that's one way of looking at it: they've been invalidated by the clinical data. But the other thing Jack mentioned is, I think, the interpretation of what they did show in those models, and the threshold for activity, being questionable. So one of the last questions we want to ask is: why were 16 IGF1R inhibitors put into clinical trials? There have been a number of papers asking why companies pile into the same targets, and I mentioned John Moser's paper as well, which highlights — if I remember rightly — some 47 agents in non-Hodgkin lymphoma at around the same time. I think we've got to be a little bit fair here. Around 2000, two important drugs of a new class emerged. One of these was Herceptin, an antibody to a growth factor receptor like the IGF1 receptor, and this was dragged with great difficulty through Genentech, who didn't like the idea of going into cancer therapy — but Herceptin was a pretty big success. We could discuss that as well, but what it launched is what's cited in this review from 2003: the anticipation of "a postgenomic wave of sophisticated smart drugs to fundamentally change the treatment of all cancers". There was terrific optimism at that time that the drugs that would emerge against other targets would all be like Gleevec against BCR-ABL, and Herceptin.

And we're trying to analyse what decisions were being made in industry. I like this quotation from Borup et al. 2006: behaviour around expectations in science is not only based on rational risk-return, but is also influenced by expectations and perceptions of other people's behaviour. And I have some inside information that for one of these small-molecule inhibitors we've been talking about, management actually knew the preclinical data wasn't very good — but it was a biotech, they needed to fill the pipeline, and they also knew that everybody else was doing it, the Roches and the Novartises, so they had to do it as well. So there was a kind of herd instinct here, and 16 of them failed.

So this is a summary, and then we'll have some lessons learned. Sixteen inhibitors were entered into clinical trial based on preclinical data — preclinical data that I think was pretty questionable, and that's just what was published; obviously we don't have access to all the data that was there in industry. There were 183 trials and 12,000 patients, and none showed clinical activity. We estimate the expenses at around $2 billion. And I have to let Jack say something about the next point: current estimates suggest annual R&D expenses in the order of $50 to $60 billion per year on failed oncology R&D.

 

Jack Scannell: So this is what in the old days I'd have called a back-of-the-cigarette-packet number — it's what physicists, who are cleverer and have a better way of describing it, call a Fermi estimate. But roughly: oncology is probably around 45% of clinical trial expenses in the drug industry currently; roughly 70% of expenses are associated with failed projects; and the drug industry spends roughly $200 billion a year on R&D. If you combine figures like that, you get to this estimate of run-rate R&D expenses on failed oncology R&D in the $50 to $60 billion a year range.
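Spelled out, the Fermi estimate is just the product of the three rough figures quoted in the talk:

```python
# Jack's Fermi estimate, spelled out. All three inputs are the rough
# figures quoted in the talk, not precise data.
total_rd_spend = 200e9  # global drug industry R&D spend per year, roughly
oncology_share = 0.45   # oncology's share of clinical trial expenses, roughly
failure_share  = 0.70   # share of expenses tied to failed projects, roughly

failed_oncology_rd = total_rd_spend * oncology_share * failure_share
print(f"~${failed_oncology_rd / 1e9:.0f} billion per year")
# ~$63 billion, consistent with the quoted $50-60 billion range once you
# allow for the roughness of the inputs.
```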

John Hickman: So those are big numbers. The human and financial resources involved in drug discovery, and the low impact of drug therapy on survival — this is my opinion — suggest resources might be better focused elsewhere to reduce cancer mortality. I won't go into that here, but we'll hear from Nathan Cherny in a couple of weeks' time about his review of FDA approvals in oncology since 2017, and the numbers are quite sobering about how effective these drugs actually are.

So here are the lessons learned. As Jack said, there are technical challenges in trying to estimate the cost of drug development and the expense of clinical trials; we've done our best. We think the validity of preclinical models in representing the complexity and heterogeneity of cancers, and the data they provide, should be questioned — I showed you the paper that Jack led, published in Nature Reviews Drug Discovery. Better target validation and improved models could save the loss of billions of dollars — $50 to $60 billion a year being lost on these trials. There's also over-investment in some drug targets and under-investment in others. And, something Jack is always saying, there's a lack of rigorous analysis of major translational failures.

 

Jack Scannell: Yeah, can I just emphasize that a bit. Suppose we have our numbers roughly right for the IGF1 receptor inhibitors: roughly $2 billion spent on 16 drugs, none of which worked. Two billion dollars is twice the annual budget of the UK's Medical Research Council. So this is a lot of money in biomedical research terms, and it's quite interesting that you can have a failure of that magnitude, and what happens is six review papers get published, all of which kind of shrug their shoulders and say, well, you know, there were no biomarkers. I'm being slightly unfair, but this was not a rigorous post-mortem from which the industry and others could learn lessons so that it doesn't happen again.

John Hickman: And then my personal opinion: looking at these huge numbers, I feel that resources might be better targeted away from therapeutic approaches in the quest to reduce mortality from cancer. That's a debatable point. So thank you for your attention, and we're open to questions.

 

Leeza Osipenko: I did want to make a few small comments. One is that one drug was approved for IGF1R, but not in cancer — so the target worked elsewhere, and there weren't many trials in that area, and it's a very expensive drug. It would be very interesting — because when Valerie started her research back in the summer of 2021, we didn't know which target we would pick — and there's a lot of work that needs to be done to look at, let's say, the most successful targets in cancer where drugs have been launched — ALK, KRAS, EGFR — to see what the costs were there, what the volume of trials was, and how successfully industry and academia fared with targeted therapies against them. Because — John, correct me if I'm wrong — I think the ideal of precision medicine and targeted therapies is still very much high on the agenda; we have these results, but I don't think we're turning away from it. And as far as we know, in terms of the background work we've done, this is the first research that looks specifically at a target from this perspective, trying to estimate costs across the board, across companies. So I think there's a lot more to discover in this direction, to help shape policy in oncology and beyond when it comes to targeted therapies. So I think we can close the slides and get to questions. David, let's start with you first.

 

David Colquhoun: Yes. It seems to me that the record in preclinical cancer research for reproducibility is pretty lousy, isn't it? And that's perhaps not surprising when you consider that even if you give a candidate a 50% chance of being active, that corresponds to a false positive risk of around 30% — and that's in a well-powered, perfectly designed experiment. Of course you probably should not reckon on 50%, but more like 10%, and that would correspond to a false positive risk of 76%, according to my argument; other arguments give similar answers. So perhaps more rigour in the preclinical research should be demanded. The trouble is, p = 0.05 gives you a publication, and that's a great problem. But the other side of the coin is: how do you prevent missing a Herceptin? I guess people will have looked fairly carefully at what was different about the development of Herceptin compared with all these failed ones, but I don't know enough about it to say. It would be sad if that had been missed.
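David's figures can be reproduced with a small simulation, assuming his usual worked example (two independent groups of n = 16, true effect size d = 1, giving power of roughly 0.78, with "positive" meaning landing just at p ≈ 0.05); the exact band around 0.05 is our choice for this sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def false_positive_risk(prior_real, n_sims=200_000, n=16, d=1.0):
    """Among experiments landing just at p ~ 0.05, the fraction testing a true null."""
    real = rng.random(n_sims) < prior_real        # which experiments have a real effect
    a = rng.normal(0.0, 1.0, (n_sims, n))         # control group samples
    b = rng.normal(np.where(real, d, 0.0)[:, None], 1.0, (n_sims, n))
    p = stats.ttest_ind(b, a, axis=1).pvalue
    just_significant = (p > 0.045) & (p < 0.055)  # experiments that just "reach" p = 0.05
    return np.mean(~real[just_significant])

print(false_positive_risk(0.5))  # ~0.26: roughly the ~30% quoted for a 50% prior
print(false_positive_risk(0.1))  # ~0.76: the 76% quoted for a 10% prior
```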

 

Jack Scannell: I'll say a couple of things. Firstly, David, in the work I've done on model validity, some of your publications on false discovery rates have been incredibly useful and influential, so first, thanks for that. It seems to me that one can classify model failure into three broad categories. First, there's what you might call statistical and experimental hygiene: was the sample size big enough to avoid reasonably high false discovery rates, was it blinded, and so on. My view is that although people don't always follow best practice, at least a lot of people know what good practice looks like in that domain. Then there are two other, broader classes of model failure. One is that you simply fail to recapitulate the biology of the pathological state you think you're modelling. In oncology, I think it's clear that the models in widespread industrial use don't recapitulate tumour division kinetics. The models were good at finding drugs that kill rapidly dividing cells; only a relatively small number of cancers have very rapidly dividing cells, and we actually have quite good drugs for those cancers, because those cancers look like the models. The tumour models also rarely recapitulate the genetic complexity of advanced cancer, and again, interestingly, we have quite good drugs that target genetically simple tumours, like Gleevec. So in a sense what's happened, I think inadvertently, is that a number of cancer disease model technologies were implemented widely, and we've ended up discovering drugs that are very good for the cancers that look like the models — we kind of got it the wrong way around. That's the biological recapitulation problem. And then there's another way models can give you the wrong answer, and that's by using the wrong tests and endpoints. The tumour models in mice are typically run for 35 days and you look at tumour response; arguably that may not be very relevant to human disease. I can think of another very good example of this mismatch — a lovely example of failed translation in stroke. There's a drug called tirilazad that was positive in maybe 20-something animal studies, perhaps more. It went into human trials and didn't work, in a large number of human trials, and it turns out that if you look at the delay between inducing an ischemic event and treatment in the animal studies that were positive, the median delay was 10 minutes, if I recall correctly; in the human trials, the median delay was 5 hours. So although the pathophysiology of stroke in the animals arguably translated, the way the tests and endpoints were implemented didn't translate to the human condition. So I think statistics and false discovery rates are one category, and arguably the easiest to address. My view is that the other two come down to insufficient critical scrutiny: the lack of good tools for evaluating the degree to which the biology is recapitulated, and a failure to think formally about whether you're using a set of tests and endpoints that map from the model to the human clinical state.

 

Ian Tannock: One positive thing is they didn't actually approve any of these drugs. One of the problems with cancer drugs is that there is a plethora of examples out there where the drugs are almost totally ineffective and yet are approved for human use. Bevacizumab has for many years been one of the leading money-earning drugs, but it gives very minimal benefit — some would argue no benefit. Alpelisib has recently been approved: it doesn't improve survival, it makes quality of life worse, and it was approved on a biased secondary endpoint. So if you're a company developing an IGF1R inhibitor, you actually don't have to show that it is very effective. All you have to show is that on some — often surrogate — endpoint you might just get an effect with a p-value less than 0.05. We don't set up trials in cancer so that you've got to show a certain effect size; any effect size on survival seems to be enough for registration agencies like the FDA and the EMA to approve them. So the reason companies were probably jumping on the bandwagon is that minimal effects in animals let them think they might get minimal effects in humans, and that would be enough. It will be very interesting to hear Nathan Cherny in a couple of weeks' time, because the number of drugs we have available for cancer that actually meet criteria of clinical benefit — it's arbitrary where you set them, but say at least a three-month improvement in median survival, or an improvement in quality of life — is no more than about a third of them. So people are chasing profit but not really chasing effective drugs, and that encourages more research on drugs that have at most marginal effects in early models.

Leeza Osipenko: There's a comment in the chat from Lydia, who had to leave — you can read it — where she argues for looking at neoadjuvant research trials to improve care in oncology: another very practical way forward, and a very good comment. And a question from Jean: what about the human cost to trial participants? We've done the quantification, and you're absolutely right — there are a lot of externalities which are difficult to put into empirical measures, and this cost might be incalculable for a given family, for a given individual. So this is another huge issue with doing bad trials: enrolling patients into studies which are not well designed, or doomed to fail from the start.

John Hickman: Just a comment — I did say that, given alone, none of the 16 inhibitors really had big issues with toxicity. What happened was that when they were given in combination, then yes, indeed, those patients were getting toxicity from combinations of drugs which were ineffective. So there's a moral question there as well.

Leeza Osipenko: And there's always an opportunity cost, these patients could have been a) in different trials or b) on other standard approved therapies, so there are a lot of implications from the patient point of view. 

Andrew Dillon: As you were talking, it raised the question in my mind of how decisions get taken in drug companies to progress treatments at early stages in the evolution of the entity being researched, and who's involved in making those decisions. How is it that something gets progressed from an animal model into human clinical trials under circumstances in which, when we look at it now, it was clearly not the right thing to do — clearly going to be a waste of resources, never, or very unlikely, going to produce something effective in humans? I wonder what consistency there is inside life sciences companies in the way those decisions are taken, and who's involved in taking them. The other thought I've got is the extent to which a piece of work like this will be of interest to investors and shareholders, because $60 billion — or $2 billion in this particular field, but $60 billion right across oncology — is an enormous sum of money. An enormous sum that could be invested in something else, something more effective, or not invested at all; a substantial amount of money that isn't going to produce any shareholder value.

 

John Hickman: Well, what I would say is that I was surprised that in the six or seven papers that analysed the failure, the question of the quality of the preclinical data was not discussed. It's as if it's embedded: if the data from xenografts, and the in vitro data, are there, then one doesn't question them, whatever their quality — as long as they look fairly positive, you can go ahead. And I agree — I think Jack said the same thing — there has to be a lot more scrutiny. If you look at these particular tests: the mice had 200-milligram tumours, tiny little pieces of tumour, and they were treated immediately, and treated constantly for four weeks. That doesn't represent the clinical situation. And most of the tests used implanted cell lines, so the tumours were monoclonal; that doesn't represent the problem you're seeing in the clinic either. But who's looking at that? Are there experts in industry being really careful with their analysis of this sort of data? I know more people who are defending xenografts — particularly from companies that are selling them — and people saying they've made more PDXs, patient-derived xenografts, and publishing papers in high-impact journals, and these things are not providing drugs which work, as we will hear from Nathan Cherny in a couple of weeks' time. So there's a big problem.

 

Jack Scannell: So, you asked about investors. I spent a lot of the last 20 years working in drug and biotech investment, and also trying to look at how industry thinks about some of these questions, and I think there's a widespread recognition that the economics of R&D are largely driven by decision quality. There are two broad sets of decisions, at least around the technical success of a project. One is: are you starting with the right chemical universe — i.e., in the set of candidates you're going to be optimizing or testing, is it likely that some of them will be useful in the therapy area of interest? And the second is: how good are we — what are the false positive rates and true positive rates of our ability to distinguish those candidates and progress them? I think there are some companies who are pretty overt that they've at least tried to move to that kind of model. There's a term used in the industry: are people moving from progression-seeking to truth-seeking behaviour? Two companies both say they do it, and from the outside it looks like they at least try. AstraZeneca had a bit of an R&D turnaround, and there's a company called Vertex. Vertex arguably thought very clearly about problem choice: the kinds of R&D problems they thought would be tractable given the decision tools one could bring to bear. But what that means is Vertex doesn't work on Alzheimer's, doesn't work on cancer — it works on other things. If I talk to companies who work in cancer, I think an economic problem becomes apparent: it's hard to appropriate economic value from investment in better screening and disease models, because much of the value you create will leak to your competitors. If you invest in better screening and disease models and you show that mechanism X is really important in a range of cancers, other people will find that out when you publish your phase 1 or phase 2 data. Novel chemistry, on the other hand, is appropriable. I've had strange conversations with well-funded biotech firms where I've said, why don't you spend a bit of money on trying to get some better cancer models — to which the reply is: no, we can't make any money from that; what we're going to do is invest lots of money in novel chemistry and use the standard models. Effectively, it's a bit like saying the financial returns from playing low-probability chemical roulette, in models that everyone knows are poor, are better than the financial returns from investing in better models that would benefit everyone. So I think there are some economic problems around the financial incentives to improve screening and disease models.

John Hickman: I want to use this opportunity to talk about the limits of my scepticism about this area, and it is this: if we got very good models of genetically heterogeneous cancers, with complex microenvironments maintained, I think we'd find — particularly with small molecules; we haven't addressed immuno-oncology here — that these test beds would show that small-molecule therapies are not going to work. They're going to give a four-month increase in survival at best, and that's exactly what we're going to hear from Nathan Cherny. So we can work on improving models, but I feel that in the end we may come to a situation where a very good model shows that this is not the strategy to reduce cancer mortality.

Lawrence: I think you've identified actionable issues, and so one of the questions is going to be: what action needs to be taken to address them? Summarizing what I think I'm hearing, or at least what's been touched upon: there's a misalignment of incentives around these clinical trials. If you have a tremendous misalignment of incentives in favour of the trial, then a small amount of data will precipitate a trial, a poor model will precipitate a trial, a pathological consensus will precipitate a trial — everything moves towards the trial. The question is, how do we address this misalignment of incentives? Do we do it by oversight, or by modifying the incentives? What are your thoughts about that?

John Hickman: I don't know. I don't know if Andrew is still on the webinar. I think the regulatory authorities should be looking very carefully at how decisions are made based on these preclinical models. Given the rate of attrition in oncology — if you do the analysis and set the positive results in preclinical models against the failures in the clinic — then I think there have to be questions about that transition from the preclinical stage to the decision to go into humans, and I don't know if that's being regulated sufficiently. Who's the regulator that needs to look at that?

 

Andrew Dillon: I'm not sure there is a specific regulator. You could argue that the drug regulators themselves — the people who ultimately provide the ticket to play in various markets — ought to recalibrate their assessments of efficacy. I don't know enough about that aspect of the regulation process to know what they would need to go through in order to do that. The trouble is that the pressure exerted by the prevailing risk appetites of the countries in which they operate — determined to some extent by political pressure and the expectations of patient communities — makes it quite difficult for regulators to raise the bar in the way that would be necessary to filter out treatments that are really going to have marginal value, and on which research could have been terminated at a much earlier stage, as we've just been discussing. Ultimately you could say health systems shouldn't be buying them: that health systems themselves, or organizations like NICE and others that in effect act on behalf of health systems, should determine where there is sufficient incremental therapeutic benefit to make it worth the health system paying — or take a different, tougher view about what incremental benefit is worth paying for. I'd agree with that. But again, those decision frameworks are heavily influenced by the expectations of patient communities about access to treatments, even those that might have a marginal benefit, and by the pressure that places on those who have responsibility for allocating funds within health systems. It did seem to me, though, that there's an interesting coincidence of interest between those who put up the money to invest in treatments in the first place and those who have responsibility for paying for them when they finally emerge onto the market. Exposing the kind of data and information that emerges from studies like this is really important: putting it in front of regulators, in front of funders, but also in front of those who are effectively providing the resources that enable life sciences companies to do research — and getting all of those entities to ask what decision frameworks are being applied at these early stages, who is making those decisions, and what accountability there is for those decisions, within companies and outside: shareholders, and ultimately the health systems who are paying, and patients.

Leeza Osipenko: I will now take a question from Francois, and in the meantime there's an interesting question in the chat for Jack and John to consider and we will take that next. 

Francois Maignen: I must say I have a slightly different view on the preclinical studies. There has been a clear shift over the years in preclinical models: we have moved from very simple models, with very simple pharmacology, to much more complicated pharmacological pathways, and I think the main purpose of pharmacological models now is not really to predict the efficacy of a product but to maximize the safety of the first pharmacological human studies — basically to try to understand the basic toxicities, the developmental toxicities, of medicines, and to prevent horror stories like TGN1412, which happened some years ago in the UK. So personally I'm not challenging the fact that a lot of products entered clinical trials. What I'm challenging is why companies proceeded from phase 1 to phase 2 studies when the phase 1 studies did not really show very clear efficacy. For me, the key step is not really from preclinical to clinical; the limiting step is from the early pharmacological, first-in-man studies to the later stages of clinical development. We need to keep in mind that it's obviously a deadly competitive market, and in these phases companies will not talk to each other; these early pharmacological studies will not be published or exchanged between companies. So, to cut a long story short, I don't entirely buy the idea of bad preclinical models — it's well established, traditional and natural that safety is now their main purpose for companies and regulators. I'm challenging the conduct of the early pharmacological studies and the decision to proceed from early phase 1, or first-in-man studies, to later stages — the fact that the results of these early pharmacological studies are not challenged, and were not published or put in the public domain. I don't know whether you have any views on that, Jack.

John Hickman: Well, I can just say that some of the companies did publish their studies — and those probably show the best studies they had — and even then I think they're quite questionable. You raised a point about looking for toxicity, but looking for toxicity doesn't involve a model of cancer; that's a completely different question. So I think we've got to come back to the value of the models — and to something I didn't really mention, but which was there in the conclusions: not just validating the models but validating the target. That was in the AstraZeneca paper about how they were going to improve their productivity. To be quite fair about the IGF1R antagonists, that was an era when a lot of the tools to look at the validity of targets were not really available. But I have to say again: there was a rush of 16 companies in there, based on the success of Herceptin, and whether or not they had tried to validate their target, I think they would have gone ahead anyway, as Ian suggested.

Leeza Osipenko: Jack, you can also reflect on the comments in the chat.

Jack Scannell: First of all — and this may be a rather tangential answer to your question, Francois — I don't really like the term 'model', although it's very commonly used, because I think when people hear the word model they confuse the specific rat or computer system or test tube in which you're doing the testing with the wider decision process. The term we used in our recent Nature Reviews paper, which John wrote with me and a number of other authors, was 'decision tool', which is a slightly more integrated view: it's the system and the set of rules you use to rank therapeutic candidates and decide which ones you progress and which you don't. And within that framework, there's a practical real-world distinction between preclinical models and phase 1, but there's no real logical distinction: in every case, at each step of the process, one is basically trying to maximize a true positive rate and minimize a false positive rate. I think the credence people give preclinical models varies a lot by therapy area. It may well be true in oncology that people don't give them much credence, partly because phase 1 in oncology can be relatively informative these days; but it varies by therapy area. Again, going back to the Vertex example: effectively they pick the therapeutic problems where one can make the best decisions based on the models that can be built, which means they tend to pick, for example, rare genetic disorders, where you can get patient-derived tissue in which to test drugs. That would be an example where people do take the preclinical models very seriously; but it varies from case to case. Leeza, you mentioned human organoids. I have actually done a bit of work with an organoid company, which I was very impressed with, but again, a lot of the debate about models is tech-centric when it should be decision-centric. We use screening and disease models because we hope they have high predictive validity: that their output when testing therapeutic candidates will correlate with the scores you would get if you tested those same candidates in people. So my view about organ chips, or micro-physiological systems, is that we're going to find they're valid for some things but not others, and it's still probably the case that many people promoting organs-on-chips find it difficult to fund the work one would like them to do to show that they're valid for a particular application. The experience I have is with a company that did a huge amount of work to evaluate a drug-induced liver injury model, and they did a bunch of hugely sensible things: do the liver cells produce the right amount of urea? Do they produce the right amount of albumin? Does it look like liver down the microscope? Do the cells express the genes that normal, healthy liver expresses? And then: does the model correctly predict the rank ordering of toxicity of a large set of drugs? That's how you should evaluate a model. But most of the time, when we have a disease model, we don't do that.
We simply notice that it resembles the disease in a couple of ways, and then we assert, retrospectively, that it is a model of that disease. We don't give disease models the same kind of rigorous prospective testing against a pre-specified set of criteria that we give many other tools we use in science — which is surprising, given how much downstream failure costs.
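The last validation step Jack describes — checking that the model reproduces a known clinical rank ordering — reduces to a rank correlation. A sketch, with made-up drug names and scores:

```python
from scipy import stats

# Hypothetical reference set: five drugs with a known clinical ranking of
# liver-injury severity (5 = most severe) and the model's toxicity readout.
clinical_severity = [5, 4, 3, 2, 1]
model_readout     = [0.90, 0.70, 0.75, 0.30, 0.10]

rho, pval = stats.spearmanr(clinical_severity, model_readout)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
# A rho near 1 on a pre-specified reference set supports prospective
# validity for this application; a rho near 0 is a red flag, however
# "liver-like" the system looks down the microscope.
```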

 

Leeza Osipenko: So we have two more questions before we wrap up. Ian, and then Cathy.

 

Ian Hart: I've learned some things, and I've had prejudices confirmed. The fact that tumour xenografts are not really very good models for predicting what human responses are going to be comes as no surprise. What I am staggered by are the sums of money involved, and I find it very hard to believe that the drug companies are not aware of this amount of money. My question would be: why did drug companies invest so much money in this? And the answer, of course, is that their moral responsibility is not to the patient; their moral responsibility is to the shareholders, and they're just setting out to make a profit. Now, when you think that if you had a 100% efficacious drug that would cure cancer tomorrow, the increase in average life expectancy amongst those people would be about two years — that's not really a very good return, is it? So the drug companies are just in it for profit, and that is their responsibility. But it does seem to me that if they were to put that money into something prophylactic, into preventing cancer — maybe Peloton machines, so that people exercise more — that would be a much better way of dealing with the problem. So my question really is: why do the drug companies tolerate this huge outpouring of money, which must be obvious to them from their balance sheets? And why do they persist in looking for anti-oncological agents? Why don't they just shift to a completely different target? Is there a target out there which actually gives a similar profit?

 

Jack Scannell: As the ex-investment analyst, maybe I'll have a go at this. I do think much — possibly not all, but much — of the aggregate behaviour of drug companies can be explained in terms of profit maximization. But they do also compete with each other, and I'm actually at the moment in the process of trying to start a little biotech; we're trying to discover drugs, and I can tell you we don't want to discover lousy drugs. Because we're competing with other companies, we would really like to discover really good drugs. So I don't have any sympathy at all with the idea that, with 1,400 cancer compounds in the pipeline, the companies don't really care whether they're good or not; the problem is that they're doing their best, and this is what we've got. So why has so much gone into oncology? Oncology at the moment has, to me, the economics of a gold rush. Cancer drugs — for reasons of historic accident, effective lobbying, and the confluence of interests of healthcare providers and drug companies — have almost infinite pricing power in the US. I won't bore you with the reasons, but this means you can charge a huge amount of money for a cancer drug in the US, and this has sucked a huge amount of capital into oncology R&D. And if you look at the financial returns of drug companies, they're not great, and part of the reason is that you've got a huge amount of R&D competition. It's a little bit like a gold rush: the prospect of a very lucrative market sucks in lots of R&D capital; there's no price competition, but there's lots of R&D competition, and that actually depresses the returns of the drug companies. Then you could look at other areas, like antibiotics, where there are not many people playing the game, because arguably the financial returns on antimicrobial R&D look very poor. So my view is that drug companies will approximately profit-maximize, or try to, and it's up to health systems to try to get the incentives right — easier said than done, given the political circumstances in the US and elsewhere. But they're investing in oncology because there's a lot of money there, despite the low success rates — largely to do with the very high pricing power.

Leeza Osipenko: Thank you. Thank you very much. So, Cathy — last question from you.

Cathy Tralau-Stewart: I think this has been a great discussion, and it reflects my experience in the drug discovery and development industry, primarily in big pharma, where decisions are made not actually on the science but on strategy, the money, the fact that everyone else is doing it, as we've said. But I think we need to understand that this is an ecosystem problem. Trying to move things from academia into the next stage, trying to get investment for new projects — investors insist on having data from models which we know are not predictive. This is a real problem, and I think we really do have to focus on improving the models, because what most people in this webinar want is drugs that actually help people, that are successful — that's what it's about. Unfortunately, the money, the lobbying, the needs of the pharmaceutical companies are driving all this. So how do we change things? That's the big question, and I think Andrew pointed to it a bit. Maybe the regulators can have an input, because actually a lot of regulators, as Jack said, don't worry too much about efficacy models and data; they just want things to be safe, and that's the problem. So we're not doing the efficacy right, we're not doing the decision-making right — I think the people in this webinar recognize that. But how can we change it? The drive here is about money. And I'd love anyone's ideas on that, because I think we're doing this wrong: we're sinking lots of money with no success for patients.

Leeza Osipenko: Perhaps we need to set up a regulator for common sense that would check the things we're talking about. Any final comments?

Jack Scannell: Yeah, just a quick one on this question of how we change behaviours inside companies and amongst investors — how do we change their expectations of what health systems are going to pay for? It's very difficult, because a global drug company has got 120 potential customers. Some are extremely powerful, many are not, but there isn't a single means of bringing those disparate customers together to send one powerful signal back to drug companies about what, in the end, they're prepared to pay for; that makes it very difficult for individual countries' health systems to do it alone. But as I said a few minutes ago, if we can somehow get a virtuous combination of interests and ambitions between the health systems who pay and the investors who put the money up in the first place, then there's the potential for a kind of pincer movement on how drug companies make their early-stage decisions — and hopefully for reducing some of this waste.
