
Pathological Consensus

Lawrence Lynn, MD

 

Lawrence Lynn: I'm interested in talking about something I discovered was a problem back in the late 1990s. We were doing a clinical trial at Ohio State - I was funding the trial - and we were looking at the patterns of sleep apnea. We were running a blinded trial against standard polysomnography, and in the end it looked like we had a lot of false positives with the technology we were using. I then looked at the patterns and realized that those were the patterns of sleep apnea. I decided to look back at the origin of the gold standard they were using and found out it had simply been guessed in the 1970s. I thought that was a revelation. I thought people would be very excited about it, but what I discovered was that no one cared: they were happy to use the guess. They were able to use the guess, incorporate it into their clinical trials, and they had in it a consensus measurement of the disease that they were all happy with. There was really nothing anyone could do about it. I later recognized that this problem had emerged in the 1980s and was a major problem across the board that has persisted for decades. I have coined the term "pathological consensus" to describe this phenomenon, which is simply an amplification of the pathological science described by Langmuir.

Pathological science, as Langmuir described it in 1953, was an oddity: an undetected measurement error in a laboratory that caused people to think they had discovered something through their research, but the research would not be reproducible. He described a group of different "discoveries" that turned out not to be true. What happened with pathological science is that it was expanded and applied worldwide, and we're going to show exactly how that happened - the extent of the waste is profound. Pathological science is where the methodology has an unrecognized measurement error upstream, but the scientific method looks pristine: the deck looks perfect, but there is a gigantic gaping hole in the hull. Someone coming along and looking at the boat would think everything was perfect, and certainly a Cochrane Review would come back perfect, but there's an underlying pathology that's going to make the boat sink. Dr. Langmuir described these as people who are perfectly honest and enthusiastic, but who fool themselves. The first principle is that you must not fool yourself, because you are the easiest person to fool. What we're talking about is consensus groups fooling themselves. Researchers fooled themselves at the origin of pathological consensus in the 1980s, with the emergence of threshold decision-making by Pauker et al., published in the New England Journal of Medicine around 1980 - the idea that you could come up with thresholds and define a disease. A group of doctors will get together, use the Delphi method or some other pseudo-scientific method of consensus, and develop a set of criteria for a syndrome. These criteria are really measurements; they are, in a sense, replacing measurement with guesses.
They developed worldwide standardization of the apical error - in a sense doing what was done in the laboratories Langmuir describes, for things like polywater, but propagating the error worldwide and standardizing the research on it. Pathological consensus is a type of pathological science in which the apical error in measurement is standardized by consensus. So we've gone from the laboratory, where somebody diligently working makes an error and thinks they have a great discovery but cannot reproduce it, to a standardized error - which is far more powerful. The way it works is that an erroneous, commonly guessed set of threshold, nonspecific laboratory and vital-sign measurements is promulgated as a consensus measurement of an adverse condition. This often captures a set of different diseases, causing non-reproducibility of the research. Examples are the apnea-hypopnea index, which I discovered was wrong in the 1990s; SIRS, which was guessed in 1992; multiple derivatives of SIRS, guessed each decade; and then Sepsis-3, another guessed set of thresholds, in 2016. It is hard to believe that this is the way science is being done, but the magnitude of the waste is beyond the pale: there has not been a single positive, reproducible sepsis trial since the origin 30 years ago. I'm going to talk a little about sleep apnea, where the cycle has completed itself as it relates to pathological consensus. I have described this phenomenon as a "synthetic syndrome": a variable set of different diseases with similar initial clinical presentation but diverse pathophysiology and morbidity, combined by pathological consensus.
You come up with a set of thresholds that define a syndrome you think is uniform, but it actually comprises a whole group of diseases. We saw this happen with covid, when they thought covid pneumonia was ARDS. The latest version of the ARDS criteria was made in 2012; they apparently thought those criteria were clairvoyant and included covid pneumonia, even though it didn't exist in 2012 - that's how severe pathological consensus is. Of course, that resulted in significant delay in the optimization of treatment, because they tried to apply the old therapy for ARDS, which didn't work for covid. How does this work? If we make up a syndrome - a group of criteria built on nonspecific laboratory values and vital signs - it will capture a set of diseases, each with a potentially different average treatment effect. The percentage of each disease will differ from one randomized controlled trial to the next, and therefore the randomized controlled trials are not reproducible. This is something they just can't figure out. It's strange that they don't understand this, but we have really helped expose it. So let's look first at ARDS, a severe pulmonary condition. The ARDS criteria are quite broad and nonspecific, so they capture a whole group of diseases - for instance, pulmonary dysfunction after pancreatitis, or associated with trauma, or pneumonia. All of those diseases fall within the scope of ARDS, and they try to standardize treatment for ARDS. In one randomized controlled trial, a particular disease dominated; in another, there was a lot less of that disease and a different one dominated. If those diseases have different average treatment effects, you're not going to get the same result from your randomized controlled trials. This is of course what happened later, when severe covid pneumonia dominated and they just included it in ARDS. Why?
Well, because in their world they don't understand that their guessed thresholds are just guesses: if something falls within the scope of the guess, it is included. Covid pneumonia didn't even exist when they made up the thresholds, and it didn't occur to them that this was not science. So, let's look at the full cyclic history of pathological consensus: the emergence of synthetic syndromes, produced by these apical guessed measurements - like sleep apnea under the apnea-hypopnea index, like severe ARDS - caused non-reproducibility of the randomized controlled trials.
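The mechanism just described - one set of enrollment criteria capturing several diseases with different average treatment effects, so that trial results swing with the case mix - can be shown with a toy simulation. Everything below is invented for illustration (the effect sizes, event rates, and two-disease split are assumptions, not clinical data):

```python
import random

def run_trial(p_disease_a, n=2000, effect_a=0.15, effect_b=-0.05,
              base_risk=0.30, seed=None):
    """Simulate one RCT on a 'synthetic syndrome' made of two diseases.

    All patients meet the same enrollment criteria, but disease A responds
    to the treatment (absolute risk reduction effect_a) while disease B is
    slightly harmed (effect_b). Returns the observed absolute risk
    reduction in the death rate.
    """
    rng = random.Random(seed)
    deaths = {"treat": 0, "control": 0}
    for i in range(n):
        arm = "treat" if i % 2 == 0 else "control"
        is_disease_a = rng.random() < p_disease_a
        risk = base_risk
        if arm == "treat":
            risk -= effect_a if is_disease_a else effect_b
        if rng.random() < risk:
            deaths[arm] += 1
    return (deaths["control"] - deaths["treat"]) / (n / 2)

# Identical criteria, identical treatment - only the case mix differs:
print(run_trial(p_disease_a=0.8, seed=1))  # mostly disease A: looks beneficial
print(run_trial(p_disease_a=0.1, seed=1))  # mostly disease B: near zero or harmful
```

The point of the sketch is that neither trial is "wrong"; the enrollment criteria simply measure different mixtures each time, so the trials cannot reproduce each other.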

But what they would do is just guess another group. They would go back - not understanding that this process was just propagating pathological science - and every decade they would guess a new set. They'd have a big meeting, maybe use the Delphi method or something, and guess a new set of criteria. And of course, non-reproducible randomized controlled trials would occur again. They just keep recycling. So let's look at sleep apnea, because it's the one I discovered first and it's a prototypical pathological consensus. Back in the 1970s, a couple of researchers in a laboratory in California published a paper describing a measurement for sleep apnea. That measurement - which was guessed - became standardized, and was modified in the 1980s into a simple sum of 10-second apneas and 10-second hypopneas. It became the standardized measurement used for randomized controlled trials. It was found to be highly variable, so they had a meeting in Chicago and came up with criteria they named the Chicago criteria - it is common to name these guessed criteria after the city they were guessed in. The research remained non-reproducible, and, as expected, 35 years later the AHRQ found no credibility in the apnea-hypopnea index. What they identified was that insufficient evidence exists to assess the validity of the apnea-hypopnea index as a surrogate - in short, it's not valid. This ought to be stunning, but nobody cares. They didn't care when I first told them, and they don't care now; they don't like the result. Now the sleep apnea specialists are "dumbfounded by a critique of the gold-standard treatment." Well, they can't be dumbfounded - I've told them for decades that it would not work; as a matter of fact, I told them exactly what was going to happen.
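The measurement described above is, computationally, almost nothing: a count of qualifying 10-second events divided by hours of sleep. A minimal sketch makes the threshold behavior he is criticizing concrete (the 10-second cutoff and the AHI ≥ 5 "mild" boundary are the published conventions; the event list is invented):

```python
def ahi(event_durations_s, hours_of_sleep, min_event_s=10.0):
    """Apnea-hypopnea index: qualifying respiratory events per hour of sleep."""
    qualifying = [d for d in event_durations_s if d >= min_event_s]
    return len(qualifying) / hours_of_sleep

# 40 twelve-second events over an 8-hour study give an AHI of 5.0, the
# conventional lower bound for "mild" sleep apnea - while 100 additional
# near-threshold 9.5-second events contribute nothing at all. That cliff
# at 10 seconds is exactly the kind of guessed cutoff at issue.
events = [12.0] * 40 + [9.5] * 100
print(ahi(events, hours_of_sleep=8))  # 5.0
```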
I said: if you use a standard measure that you guessed, and it's not a good measurement of the disease, you will not be able to determine that your disease is morbid. It may well be morbid - I think it is morbid; I'm a pulmonologist who takes care of sleep apnea all the time - but you won't be able to determine its morbidity from your research, because your research will be fake. The hallmark of pathological science is that its advocates don't care: overwhelming evidence of failure doesn't convince them. It's very similar to chiropractic or other cult-like practices or quackery, because it is, in a sense, a little bit of quackery. They don't realize it's quackery; they are actually convinced it works. But polywater was quackery too, and the people who believed in polywater and had measurements for it thought it was real. They weren't trying to fool anybody; they were just fooling themselves. The problem is that if we incorporate a system and adopt it, we become part of the problem - we become part of the resistance to change - and that's an important point. So with pathological consensus, even severe dissent is ignored. My dissent, even though I laid it out very clearly, really couldn't get published back in those days. There were no open-access journals; they would publish only what they believed to be true, and arguments against the apnea-hypopnea index were not very publishable in the early days. But by 2013 they were writing that it "does not appear to be a proper measurement," and in 2016 that if it "remains the holy grail of sleep and respiratory medicine, the science will certainly not advance," which in turn will retard the clinical and public health response to the disease. That was 2016 - about 16 years after I said it was going to happen. They're starting to realize it, but that certainly doesn't help. So this paper, which came out in 2021 I believe, is wishful thinking.
There is only the rise of pathological consensus; it is protected socially from falling, and it will not fall. One of the thought leaders here wrote a paper saying it's clear that the index is inadequate, or insufficient, to define the disease - but that really doesn't matter. They'll still use it. In fact, in their paper they suggest that maybe we just need to state which measurements we use to define the apnea-hypopnea index. They don't give up on it; they don't recall it; it just continues. That is what it requires: pathological consensus requires a formal recall. It was mandated, it was required for research, and now that we recognize it is incorrect, it has to be recalled. Patients and subjects should not be asked to participate in trials using incorrect measurements just because the doctors want to keep using them - because it's expedient to keep using them. Future waste must be prevented. We have to have a formal recall, like an automobile recall; we cannot just let this die slowly over the next decade. So: we have decades of wasted research, and a formal recall is the only answer. Pathological consensus cannot be allowed to just fade away. And no one argues that what I'm saying is wrong. No one argues. They just won't engage; they won't talk about it; they won't say anything, because they know it's true.
The pathological consensus was formally promulgated, so it has to be recalled with the same vigor. So, what do we do? We know the research doesn't work - SIRS has already been abandoned, the apnea-hypopnea index is a joke, Sepsis-3 doesn't work - and we can't build on unreliable results. It's been going on for decades; do we repeat it? Do we go back and look at all the studies that were done with these criteria, recognize that none of them produced any value, and calculate the magnitude of the waste? That is something we could do. Given the social forces of expedience, we just can't tolerate it anymore; we inform the public about what can be done. This is actually a form of embedded quackery - I know I'm using that term rather boldly here, but it is. So, in summary: pathological science develops in a laboratory, where we have an apical error in measurement; pathological consensus is when you take that apical error and standardize it. You are trying to standardize randomized controlled trials, so you set up the criteria for the trial, and those criteria are erroneous - so, basically, you standardize the error. And that's what happened. Pathological consensus lacks rigorous measurement of the disease, so it generates non-reproducible results, and these are not useful for randomized controlled trials or for inflexible treatment protocols. So it must be recalled. What's the next step, once it is embedded in the social fabric of critical care and sleep medicine? No matter what we do, there's probably little we can do. The people who engage in this have taught it all their lives. I taught it myself back until 1990 - I taught the apnea-hypopnea index. I was trained in polysomnography, and I thought it was all based on science. We didn't have the internet then; I got a librarian to go back and look at the history, and she and I found that, basically, it was all made up. So it has to be recalled. But who can recall it? Who can really do anything about it?
You know, I've been trying to get something done for 30 years, and there's nothing, really, that can be done. I appreciate this group, but what can we really do? The question of who is responsible for policing the science should be answered "all of us," but if it's all of us, it's none of us - somebody has to try to do it. Somebody has to try to figure out how to solve this problem. I remember we applied for a grant to look at the patterns of sepsis, trying to really understand what sepsis is - because sepsis is one of those synthetic syndromes, comprised of a whole host of infections. In our request for funding we said that sepsis really doesn't have a good, agreed definition, and that we have to determine its trajectories and relational time patterns to understand all the different components. And the reviewer snapped back at us: "We know what sepsis is, and it's Sepsis-3." Well, Sepsis-3 itself rests on criteria that were made up in 1996, but that is the state of the present science of disease. And these were mostly men - it's a tragedy of the past that we didn't have women making these determinations. These are all products of pet ideas from the past, but obviously this is what we have.
We have a big challenge if we're going to actually really try to solve this. And I'm not sure exactly how to do it, but that's one of the reasons I'm here talking to this group.

Leeza Osipenko: I realize that sepsis has a very complex definition. I was just thinking about many other conditions where we use completely subjective scales - take psychiatry, or other behavioral and neurological conditions. How much pathological consensus might be there, and who should investigate it?

Jack Scannell: The general problem you identify is a very broad one. My background is in drug and biotech investment, and I've spent a lot of time looking at clinical trials that haven't worked; often it seems the disease - the actual pathophysiological entity - is not clearly defined. But one does see improvement in some areas: the application of genetics in oncology, for example, seems in some cases to identify more coherent pathophysiological entities. So my question is: can you think of any examples where real progress has been made - where people have dumped terrible old disease classifications, operational definitions, and measurements, and substituted much better ones?

Lawrence Lynn: The field of oncology that you identified is an area where we used to consider conditions together - adenocarcinoma of the lung was one classification, and now we see it separated - so there has been progress there. But those were conditions where we had pathology, and we were already relatively precise within the capabilities we had. The conditions I'm describing here arose differently. In the 1970s, Tom Petty came up with the idea of respiratory distress syndrome. They called it "adult respiratory distress syndrome" because of its perceived similarity at the time to respiratory distress syndrome of the neonate. Petty asserted that this was a single entity, and then they came up with criteria for the syndrome. Unfortunately, over time this syndrome came to capture a group of diseases. Petty even suggested adding pneumonia to it - we thought he was just playing around with those kinds of ideas: that you would combine, say, pulmonary dysfunction due to pancreatitis and post-trauma with pneumonia, and bring all those together even though, from a pathophysiological perspective, they are not in any way related. But that actually prevailed, and it evolved to the point where, when covid pneumonia developed, it was called ARDS, and all the treatment for ARDS was applied to covid as if it were evidence-based for covid, because covid met the criteria that were guessed in Berlin in 2012 - the so-called Berlin criteria. And of course there was a lot of morbidity associated with trying to treat covid pneumonia as ARDS - a lot of mistakes made and a lot of delay in treatment. So this is a deeper issue. It is a rather severe mistake to think, for instance, that you can guess a measurement for sleep apnea and then use that measurement for 30 or 40 years.
It's a severe mistake to think you can guess criteria for sepsis - which is what was done in 1989 for the first sepsis trial. When Roger Bone guessed those criteria, I thought he was just playing around; I didn't think he was proposing criteria, and I'm not even sure he thought he was. But in 1992 they standardized his criteria - which include things as nonspecific as a white count of 12 - and formed this "disease" out of them, capturing things like toxic shock due to beta strep...
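For concreteness, the 1992 consensus SIRS criteria he refers to can be written down in a few lines. The thresholds below follow the published 1992 ACCP/SCCM definitions (the PaCO2 and band-form alternatives are omitted for brevity), and the example patient is invented purely to show how nonspecific the definition is:

```python
def sirs_criteria_met(temp_c, heart_rate, resp_rate, wbc_k):
    """Return how many of the four 1992 SIRS criteria a patient meets.

    Meeting two or more classified a patient as having SIRS under the
    1992 ACCP/SCCM consensus definitions.
    """
    met = 0
    met += temp_c > 38.0 or temp_c < 36.0   # fever or hypothermia
    met += heart_rate > 90                   # tachycardia
    met += resp_rate > 20                    # tachypnea
    met += wbc_k > 12.0 or wbc_k < 4.0       # leukocytosis or leukopenia
    return met

# A febrile, anxious patient with a normal white count meets 3 of 4
# criteria - "SIRS" - with no evidence of infection at all.
print(sirs_criteria_met(temp_c=38.3, heart_rate=95, resp_rate=22, wbc_k=9.0))  # 3
```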

 

Jack Scannell: Do you think there's something unusual about medicine here? Take psychiatry, for example. There is certainly a sense, I think, that, amongst other things, regulation tends to set in stone diagnostic criteria that may not be very helpful. Once the regulator starts approving drugs for a particular syndrome, and starts using certain measuring devices that identify that syndrome, it is hard for people to move away from it, because there would be no regulatory precedent. So I wonder whether you think this is something unusual - whether it is common to all science but we only see it in medicine because we only look at medicine - or whether there is actually a bunch of things, like the popularity of meta-analysis, or regulation, or clinical guidelines, or the ethics of clinical trials, that mean it is particularly stabilized and ossified in medicine in a way that it isn't in other scientific disciplines.

 

Lawrence Lynn: There is a combination of driving forces: the desire to increase enrollment in randomized controlled trials, and the whole machinery of doing research, pushes us to combine things so we can have large trials. All of those are driving forces for this. And a lot of it is that everyone has been taught these things; it's very difficult to extract that. I give some deference to situations, for instance in the mental health field, where they really don't have the ability to do the measurements. In our situation - in the critical care field, in the field of sleep apnea - the measurements are feasible; you can do them, but they just don't. They've standardized on the past, and they just won't recall it.

Leeza Osipenko: I keep hearing this discussion and thinking about what can be done to resolve it. Medical education is extremely demanding; people are losing sleep, and all they manage to do is learn volumes of information rather than question it and produce scientific inquiry. So is there a chance to influence younger generations so that, instead of just taking notes and passing exams, they ask questions about where things are coming from?

Lawrence Lynn: This is a process of simplification of complexity. Take something like the evolution of a severe condition associated with infection. The conditions do look similar, but they have a lot of differences: an infection from beta strep - toxic shock - is different from an infection from a perforated bowel, and there are different manifestations. But you can combine them, and I think that helps from the standpoint of quality, and from the education side - being able to bring things together, create objects that contain sets, and learn about the sets. From the standpoint of improving quality, and from a billing standpoint, here on this side of the pond anyway, those sets, or syndromes, are useful; I think they're useful from that perspective. The problem is when we fool ourselves into thinking they're real and that you can do randomized controlled trials with them - that's just not going to work. And people have a tendency to run randomized controlled trials on whatever they use clinically. The other problem is when you think you can rigidly protocolize something like a syndrome that was made up by some people.
And then you apply that rigid protocol, a new disease like covid comes along, and you end up with significant morbidity because it's the wrong treatment for covid.
So that's the danger of it. But I agree with you: it's an efficient way - that's how we think. We think in objects, and objects capture different components. When we think of a door, we're thinking of all the components of a door. I think that is the way we have to teach, but we have to teach that this is just a way of thinking, and it's not rigid. When I tell you about sepsis, I haven't told you about a disease; I've told you about a set that we capture and think globally about, but it's not something that a scientist in the laboratory should take to be a real entity that can be studied with a randomized controlled trial.

John Hickman: You very nicely presented the development of this pathological consensus from the 1950s all the way to now. It made me think a little bit that it almost seems like religiosity. Certainly here in the US you've seen extremes being amplified - big extremes - and that's even true in the UK. I wonder if you could talk a little bit about where we are right now. Is it better, in your view, or worse, or hasn't it changed at all? And if it is better or worse, what are the drivers? You talked about open access - certainly things have changed. So I'd like to get your perspective: are we doing a better job, or are we getting even worse?

Lawrence Lynn: The answer is, we're doing a better job. There is a move toward phenotypes - we say there are phenotypes of sepsis - and trying to separate out the phenotypes has some value. A recent article just came out, sort of a capitulation article about sepsis, suggesting that we have to move toward identifying pathophysiologic treatable traits. So at least there is recognition of what's happening. The problem is that there isn't resignation to the truth, and you can't really make progress without resignation to the truth. If you say there are phenotypes of sepsis without acknowledging that sepsis is not a factual object, you actually wind up still capturing the same set with your original criteria and then separating it into phenotypes. This is not productive; all it does is continue the same failed methodology for another decade. That's what I'm concerned about. And I'm guilty of it too: at first I thought maybe the phenotype approach was the way to go, but I don't think it is. I think it's better to think about these as separate diseases and study them as separate diseases; the phenotype approach keeps the same problem. So yes, that answers your question.

David: When I zoom out, it seems to me that very little is known in medicine - it's very much in its early stages. Medical research has only been going on in a serious way for less than 100 years, so it's not surprising, and no one is to blame, because it's very complicated. But people tend to overclaim what's known all the time. The late John Diamond, a good journalist who died of cancer, wrote a tirade against cancer quackery which was very effective, and he put part of the blame for quackery on regular medicine, because people have been misled into thinking there's a magic bullet for every condition, and if they can't get it, they get indignant and run to the quacks.
I think overclaiming what medicine can do has done a good deal of harm, including encouraging quackery. I have become very skeptical about most meta-analyses, for example, because it is a question of garbage in, garbage out. The NICE reviews, and the Cochrane Reviews perhaps worse, always tend to come up with "it works, or may well work" on the basis of terrible evidence. Perhaps meta-analysis is the poor man's substitute for doing some research. These are just comments, I'm afraid, but I'd be glad to have your opinion.

Lawrence Lynn: If you look at Cochrane, for instance, and its approach - they have, at least from what I've read, a kind of six-point review for looking at a trial. I think they do some good work examining the statistics of a trial, whether the sample size was adequate, and all that sort of thing.
But they don't go deeper into whether the measurements were correct. Cochrane would never look at the apnea-hypopnea index, for instance, and say, "well, that's a variable measurement, and you really can't get reproducible randomized controlled trials from it - because we haven't for 30 years." But that's probably what Cochrane needs to do.
So you can't just look at the deck; you have to look at the hull, and you have to look at the past. The way I found out that the apnea-hypopnea index was made up was by going back into the archives with a librarian and finding out where it came from. If you go back to the origin of many of the things we believe in medicine, they come from non-science; they aren't science, and yet we've standardized them and we think they're science. We've built an entire industry, an entire discipline, on them. It's a house of cards, and we ignore the non-reproducibility. I understand your point to be that we think we know more than we do, and I think you're right. We have produced these objects that really contain a variety of different conditions, and we think we know something about them, but we are fooling ourselves. So I agree with that.

David: Entire classes of drugs, I think, if we go back - I mean things like expectorants or cough suppressants - there's nothing that can suppress a cough in general while still allowing us to breathe, at least, and these classifications were generated many years ago on an unfounded basis, I suspect.

Leeza Osipenko: I think that's a really good comment, especially about the lack of reproducibility. It has become very apparent how much we are struggling with reproducibility in the biological sciences - there are lots of examples where industry, volunteer groups, or researchers tried to reproduce basic science and ran into a lot of disappointment. Trying to reproduce actual clinical trials is a much more difficult undertaking. So it is good to hear from what you are saying, Lawrence, that perhaps going back to the library for some conditions would be the way forward to try to establish the baseline. I'm still trying to find solutions rather than lamenting the problem. But Rob, why don't you tell us what you think, or ask a question.

Rob: I'm the non-medical participant here, but to Jack's point, I really think there's a much broader implication of what you're demonstrating here - I think it applies to any scientific study, any kind of trial. It goes much beyond just the randomized controlled medical trials you're talking about; there could very well be a pretty good hint of this throughout almost any scientific research, not just medical trials such as these.

Lawrence Lynn: That's true. As I indicated, people have a tendency to think in objects, and we simplify - we have a tendency to oversimplify, and we work toward simplification. There's always a process of dumbing things down, and we go too far with it. Think of the threshold-science issues, Rob, that we've talked about in critical care monitoring, where you use a threshold warning tool instead of the actual time-series pattern of the evolution of the disease - the relational evolution of the disease - like an alarm at an oxygen saturation of 90%, which can give you either a false sense of security or alarm fatigue. Those are the kinds of things. So the problem is broad, but I think we should focus on specific areas, say sepsis, and use sleep apnea as the prototypical failure. We have to eventually come to some kind of solution. What bothers me about the whole science industry - the randomized-controlled-trial industry, a multibillion-dollar industry - is that they are reluctant to do anything about any problem. There is no authority to go to. I could contact the people running these randomized controlled trials, meet up with them at a conference, and none of them would argue with me; they would all just run away. They don't want to talk about this kind of thing; they don't want to deal with it. They want to sit and talk about the things they all believe, inside their box, and they have a lot of discussions within that box. It's not that much different from a geocentric model. But I think if you walked up to a geocentric scientist back in the day and started talking to them about the fact that maybe geocentric science has some issues,
I don't think they would just run away the way these people run away. There's nothing you can do about it. If they run away and won't engage, you can write papers. I wrote a paper about SIRS, and they did abandon SIRS in 2015; I wrote a paper in 2012 showing it had had no potential value for 20 years. They abandoned it, but then they just substituted another set of criteria, which had been guessed in 2019. So I don't know what you can do. If this group can figure out something that can actually be done, that's what's important, because I've already experienced 20, 25 years of barriers, of inability to actually make a difference. You have all this knowledge about where the problems are, no one will say you're wrong, but you can't do anything about it because no one will do anything. They just continue to do the same thing because it results in, basically, clicks and career advancement. On this side of the Atlantic it's very scary to oppose these kinds of things if you're trying to advance your career. I frankly don't need their grants, so I can do it, but a lot of people need their grants and simply can't. So I'm hoping there's someone in this group who can figure out a way for us to actually make a difference, because otherwise we're just a group of people talking about a problem that everybody kind of knows exists but nobody will do anything about.

Leeza Osipenko: What is the current situation with sleep apnea? If, as you say, it is acknowledged that it was a mistake to begin with, do you think there's now a chance for better science, or not really? Have you started seeing changes?

Lawrence Lynn: I showed the paper on the rise and fall of the AHI, and the AHRQ report came out right after that, showing that the AHI didn't work for sleep apnea. Then a consensus paper came out from the group of thought leaders who had used the AHI for decades. Some of them are just in-the-box advocates for the AHI; they're going to die believing in it. Some of them show signs that they want to add something else. If you read the consensus paper, it reads exactly like that: some of the people who wrote parts of it completely recognize that it's largely valueless, and some of the people who wrote other parts are still arguing, well, we can use it, we just have to say what components we're using, whether we're using a thermistor, what kind of equipment, and that sort of thing. What I don't see is an actual resignation: they admit it doesn't work, but they don't say what they're going to do instead. Still, that is some progress; they have actually admitted it didn't work. When this capitulation paper came out, I thought it was going to be propaganda, and I read it.
And at the bottom of it they actually say that it's not going to work going forward, that they have to do something else. So maybe we'll get somewhere with the AHI; it's possible. But I don't see them coming up with an alternative, and the paper still has a lot of propaganda in it, a lot of arguments that the AHI is still of value, and that sort of thing.
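For readers outside sleep medicine: the AHI discussed here is, by definition, the number of apnea and hypopnea events per hour of sleep, i.e. a single averaged number. A minimal sketch (the variable names and the synthetic nights below are hypothetical illustrations, not data from the paper) shows why such an average discards the temporal pattern Lynn keeps pointing to:

```python
def ahi(event_times_hours, total_sleep_hours):
    """Apnea-Hypopnea Index: respiratory events per hour of sleep.
    A single average, so the timing of the events is discarded."""
    return len(event_times_hours) / total_sleep_hours

# Two very different synthetic nights collapse to the same index:
spread_out = [h + 0.5 for h in range(8)]   # one event in each of 8 hours
clustered = [0.1 * k for k in range(8)]    # all eight events in hour one
# ahi(spread_out, 8) and ahi(clustered, 8) are both 1.0
```

An evenly spread night and a night with one dense cluster of events produce identical indices, which is one way to see why a metric defined this way cannot distinguish patterns of the disease.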

Leeza Osipenko: Yeah, we touch upon this in several of the discussions we have here: we very much reward success, however society currently defines it, with innovation, with potential profits, with potential headlines. And headlines are made about promises of cures, about something very exciting. When you start telling people, "hold on, you've been wrong for 30 years," or "hold on, we've been doing science in this particular disease area incorrectly all this time," that should maybe make more headlines than the potential discovery of something that worked in rats and is supposed to cure cancer tomorrow. But that's not how the human mind works, it's not how the media works, and it's not how money-making works. Even if, for example, we introduced a prize tomorrow for discoveries like the one you're describing with sleep apnea, shattering the establishment and saying, "hold on, this is not true, this is how it should be done," it goes against human nature, and I think that's why we're so stuck, not just in the field you described but in many areas we're touching on concerning the integrity of science.
People are not good at admitting they've done something wrong; they're not good at thinking, "hold on, I'm about to retire and everything I've been doing is just not on the right path." Do you think this might be the blocking point, or are there also technical issues standing in the way of resolving this conflict?

Lawrence Lynn: I think it's a social constraint. We're fully capable of doing a much better job of measuring disease, especially with sleep apnea, and of coming up with something better. So yes, I think it's largely a social constraint.

Jack Scannell: So these are just some random anecdotes, really. But I can think of a few areas where progress has been made, or might be made. For example, my view is that where enormous amounts of money can be made from selling expensive new drugs, drug companies are very motivated to improve disease taxonomy; in oncology, for example, disease taxonomy has improved. I don't know enough about many other areas to know how much that's the case, but you may see a similar sort of molecular slicing and dicing and subcategorization elsewhere. There are also a number of new technologies available, some genomic, some other, where in some diseases you can just do large observational work: high-throughput shotgun proteomics to look at the proteins expressed in people's blood. If you do that prospectively, what you find is that diseases you previously thought were one disease actually turn out to be three or four. So there are certain technical solutions, at least in some therapy areas. And in psychiatry there are NIH-funded efforts to try to improve the classification of the underlying pathophysiology, recognizing that depression may not actually be a coherent entity but that there are underlying things that may be more coherent, for example anhedonia. But it seems to me that those things happen where it's in people's interests to make them happen, and I wonder whether you have the biggest problems with old, unhelpful syndromes precisely in the therapy areas where people can't make money by challenging the established syndromes. I know that's a question rather than a statement, but I just wonder if that's true. In apnea, for example, people aren't selling apnea drugs; the treatment is actually quite cheap devices, so it's in no one's interest to spend a huge amount of money working out whether the classifications are correct.

Lawrence Lynn: In sepsis this goes a little beyond that. In the sepsis area, for instance, no one really understood that the syndromes were not real. So they ran all kinds of drug trials over time, and they were not successful. Eventually, companies just abandoned that market and stopped doing research in that field.
From that perspective, there was also a whole variety of technology companies that grew up in the sleep apnea environment trying to simplify the diagnosis of sleep apnea, but they were held back by the complexity of the AHI and by having to match a highly variable result in a trial. So the syndromes have a way of protecting themselves, because they stop the development of good treatment. There should be some sort of book or document showing that this is fundamental to stopping the progress of science.
And so once, and I think you're right, once inroads are actually made, and people recognize, for instance in the oncology area, that not everything is adenocarcinoma and there are tracks to follow, then they see that there's money to be made and they proceed. But until you recognize that the syndromes are not real, so that a gate can open up in the field, you can't get past the medical reviewers who are the gatekeepers saying: no, you have to use the apnea-hypopnea index, or you have to use the Sepsis-3 definition. There is a variety of gatekeepers: the funding gatekeeper who will make you use these criteria, the publishing gatekeeper. All of them are holding the line on these syndromes and stopping progress, so I agree.
David: I realized how very complicated apparently simple conditions are. Congenital myasthenic syndrome, which is caused by relatively simple inherited mutations in the muscle nicotinic receptor, turned out to involve maybe 40 different mutations, which perhaps explains why the symptoms vary so widely. It also means the chances of getting tailored drugs for the different variants are next to nothing, because one mutation was restricted to a single family, essentially, and that makes personalized medicine almost impossible as far as I can see. And that's a simple condition; when you're talking about depression or anxiety, God knows where you start. It's the same with cystic fibrosis: I think there were 1,500 different mutations in the relevant protein last time I checked.

Lawrence Lynn: The massive complexity is a spectrum, but we have to get past the basic, profound oversimplification into some sort of area where we can actually make progress, and there has to be some means to do that, whether that's informing the public that all of this money is being wasted on this research. And there has to be somebody in the public who cares about that. I think the sleep apnea world is what I call the complete cycle of pathological consensus. It's complete because it started decades ago and has reached a point of complete futility. It could be presented as evidence of the futility of 30 years of research, of how much funding went into it and all the work everybody did, all based on a guess by a couple of guys in California in the 1970s. That would be a great story to show why science goes the way it does, why we waste so much money. Hopefully somebody here will write that book, but we still have to figure out how to solve this problem.
