
How do research integrity and open science hang together?

 

Lex Bouter: My topic today is how research integrity and open science hang together. Research integrity and open science happen in largely separate communities, and my point today is that these communities should come together, at least to some extent. What I mainly want to bring to your attention is that some of the open science practices can really help to tackle problems with research integrity, and to prevent them.

What I have on the menu are a few words on the core concepts. Then I will try to list the most important current problems we are trying to solve with open science practices. Then I come back to the hanging together of the two and say a little bit more about open methods and open data, because I believe that these two elements of open science are the most important for research integrity. And then I'll dwell a little bit on how we can improve matters by using open science practices and other measures to improve research integrity, building that up from the drivers of the problems we have discussed before. So, this is the plan for this lecture.

To start with, research integrity is about the behavior of researchers, individually or collectively, and behavior only in the sense that it bears on truth and trust: on the validity of, and the trust in, research findings, and also in researchers as a class or as individuals. That behavior can go in two directions: it can promote research integrity, and it can hamper research integrity, and I'll show you some examples of that later. Now, trust needs to be deserved, and you deserve it by being trustworthy. How can you be trustworthy? Mainly by being transparent, I believe. Transparency means that you're open: everyone can check what you promised to do and can redo what you laid out as your plan. In research, that means that your research plans, your research methods, your data analysis plan, and also your data need to be out in the open. That makes you vulnerable, but people can trust you because there is the possibility to check what you did. A number of these open science practices — you could argue the tools for many of them — enable accountability: we can check what people do in research. That is a good thing, and that is the way you deserve trust.

We did a national survey in our country a few years ago on research integrity, and Gowri Gopalakrishna was the postdoc who did the work and has presented it in a workshop here. What we did is send out a questionnaire to many people in research — in fact, we tried to reach everyone in academic research — and we asked them about questionable research practices. These are behaviors most people think you should not engage in. We had 11 of these behaviors; they came from earlier studies. And, by the way, there have been about 43 surveys on these research integrity issues already. Ours was a large one, and a rather late one, but we are not alone, and in two systematic reviews you can see that our values are on the high end of the range, but completely within the range of other studies. These questionable research practices could be scored on a scale of one to seven — one is never, seven is always — and this concerned the behavior in the last three years. Prevalence is defined as the upper end of the scale: 5, 6, and 7 together. When you look at the top 5 of these 11 questionable research practices, you see that selective reporting — not publishing negative results — happens for 17.5% of the people. This is self-admitted; we should remember that it's a survey, so this might be under-reporting. But still, this is what people said, and we made sure, and made them believe, that their identity was protected. Insufficient mentioning of study limitations and flaws happens for 17% of the people. Insufficient supervision and mentoring of your junior co-workers — who can be students or postdocs — which is a very important task in academic research, happens for 15%, self-admitted again. Then, giving insufficient attention to the equipment, skills, and expertise you use, again 15%, and another 15% admitted to inadequate note-keeping of the research process.

When you take all 11 together, you get a prevalence of more than 50%, meaning that more than half of our respondents declared to engage frequently in at least one of these 11 QRPs. So this is not rare; this is quite common. We also looked at the bigger breaches of research integrity, fabrication and falsification — self-admitted again, more than 4% for each: making up data or results, and manipulation of research material, data, or results. That means that in a department of, for instance, 25 people, on average at least one is a self-admitted fabricator, and maybe two when you count both. It won't be your department, of course; it will be the neighbouring department, with double the numbers. But still, this is also not rare, and let's say there is some room for improvement here.
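
As an aside, the arithmetic behind these figures is simple. Here is a minimal sketch in Python of how per-item prevalence and the "at least one QRP" figure could be computed; the data and column names are hypothetical toy values, not the survey's actual analysis code:

```python
# Hypothetical illustration: 11 QRP items scored 1-7; "prevalent" = score of 5, 6, or 7.
import pandas as pd

# Toy responses: rows are respondents, columns qrp_1 .. qrp_11 (values 1-7).
df = pd.DataFrame({f"qrp_{i}": [1, 5, 3, 7, 2, 6] for i in range(1, 12)})

top_of_scale = df >= 5                               # True where a behavior is reported frequently
per_item = top_of_scale.mean() * 100                 # prevalence (%) for each QRP item
any_qrp = top_of_scale.any(axis=1).mean() * 100      # % of respondents admitting at least one frequent QRP

print(per_item.round(1))
print(f"At least one of the 11 QRPs: {any_qrp:.1f}%")
```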

Recently, we've seen the rise of paper mills. These are companies, factories, websites that promise you an authorship on a paper if you pay for it. You don't need to write the paper; they just give you an authorship. People can, of course, fabricate papers themselves, but this can also be done by software in paper mills, and ChatGPT is quite popular in paper mills now — it's really good for their business. Sometimes it's plagiarizing other papers, or cutting and pasting from other places, or making versions of other papers by replacing words with synonyms so that they cannot be detected by plagiarism detectors. These companies sell authorships, as I said, so you have fake authors on fake papers, and they also arrange fake reviews of these papers, and these companies also supply fake conferences and fake guest editors for the supplements where the output of these so-called conferences is discussed. So, detecting the whole thing is a major challenge. Work has been done on software, and some private persons work as detectives almost 24 hours a day to detect this awful stuff.

The final problem, which we all know about by now, is the replication crisis. It happens to be the case that when you redo a study, even when you do it in exactly the same way, you don't always get the same answer. You cannot expect that in a hundred percent of the cases, maybe, but still, this is a bit worrying, and we call that the replication crisis. It started with articles in Nature and Science in 2012 and 2015. The royal societies of the Netherlands and of the US wrote learned reports about it with many interesting details, and recently a scoping review was preprinted — quite a large study, summarizing 177 replication studies — showing that only about half were successful in replicating the initial result. So it seems to be a 50-50 issue whether a positive result will also be positive on replication. We can say a lot about it, but it is a problem, and it is connected to a few of the questionable research practices I alluded to before, selective reporting first and foremost.

And here you see what I have said so far. I discussed fabrication and falsification a little bit, and we alluded to the questionable research practices. I also said that some of them are drivers — that is the red arrow — of the replication crisis, of research not being so replicable. And these three together, I hope you will agree with me, bear on what we want in science: the validity and trustworthiness with which I started my story. I also already said that transparency is key to getting validity and trustworthiness, and that open science can help you a lot to get transparency. So this is how these things hang together quite closely. Specifically, open methods and open data can work as responsible research practices, as opposed to questionable research practices. And it can be shown that they help prevent the replication crisis, can help prevent questionable research practices or detect them when they are there, and they can also help detect fabrication, falsification, and plagiarism — and maybe even prevent them a little bit by being a threat in the air, saying that we will catch you when you behave poorly. So the green arrows are what we want, to pull down all these awful things in the red pathway. This is the way these things hang together, in my opinion, and that is the core story of my presentation.

Now, most of you will know that there are a lot of open science practices; it seems to be quite an expanding universe, and it goes all over the place. It has mostly to do with transparency, but not always with research integrity. In this talk I focus mainly, on the one hand, on preregistration and open protocols, and on the other hand, a little bit on open data. I might say a little bit about open peer review later, and I will ignore the rest of this wonderful collection of interesting, important, and quite innovative things. So open science is a lot more than what I talk about today.

When you say open methods, it can mean at least three different things. One is registration — that's what we say in biomedicine — or preregistration — that's what they say in the social sciences — of the essential features of the study design: the basic questions, the core elements of the methods, the outcomes, and so on and so forth. We do that already in a lot of clinical trials; I should know, having worked on these for 30 years. It is mandatory, at least for drugs and medical devices, in some countries, and it goes in the right direction: a lot of clinical trials are indeed registered before the data collection starts. That is the idea: parked somewhere in cyberspace with a timestamp on it, where you can later see what they wanted to do, and then check whether they indeed did it exactly the way they planned. Another version, slightly better and more complete, is the publication of your full study protocol, also including the data analysis plan. That can either be a real publication in a digital journal or even a paper journal, or a preprint, for that matter. It's there again, out in the open; you can see when it was posted, and you can see whether that happened before the data collection began. So you can check everything. An even more important and interesting thing is the registered report, and that is what my next slide will be about.

The essential traits of registration or preregistration are that it is prospective — you need to do it before the start of the data collection — and that it is public. The latter is nice to have but does not need to be there from the start, because the registration can also be embargoed for an amount of time, and almost all these portals offer that possibility, although it's better for transparency and for accountability when it is public, at least after a few months. And then, of course, it is not a straitjacket: you can make amendments. But these amendments to your registration or study protocol are also timestamped, and you can see later whether they might be data-driven to an extent that you're no longer going to believe the study — although amendments as such are understandable.

This is what I promised about registered reports. You might know the format already. The idea is simple and quite brilliant: it cuts the process of publication in two. First you get an idea for your study, you get the grant, you get permission from the ethics committee, and you're ready to go. Normally, you would then start data collection, but not this time. You write at least the first half of your paper — the introduction and the methods section — and that you send to a journal. The journal says, hey, do major revisions, and at the end of the day, hopefully, they accept it, and then it is accepted for publication before you start collecting your data. That means that the editor and the reviewers were not distracted by your results. They cannot say, well, this is spectacular, we should publish it. No, they look at whether the study is important and should be done — relevant to the subject, which you can read in the introduction — and whether it will be done well, which you can read in the methods section. The beauty of the thing is that it is a killer of publication bias. There is a quite convincing non-randomized study of this in the social sciences: of 71 registered reports, 45% were positive and 55% were negative, while in the matched control studies — matched on similar topics, similar journals, similar types of research, similar designs, and so on and so forth — more than 95% were positive. That is the usual picture; you see it everywhere, in journals in many subdisciplines: we only publish what we like, which is positive results. But when you remove that, in the format of registered reports, you kill the publication bias, and you get this 55%. And there's a bonus: the methodology of registered reports is better. This is another article, and it shows on all these indicators that the registered reports are better. We're not sure why that is, but it might be important that here the study can still be changed when we do peer review. Normally in peer review, many remarks on the methods of a study are made, but it's quite useless, because the study has been done already; it cannot be improved anymore. You can only write it up a little more beautifully, but you cannot do it differently anymore. Now you can decide to take a suggestion of a reviewer on board and improve your study, and that might explain part of what you see here.
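
To make the size of that publication-bias effect concrete, here is a minimal, purely illustrative sketch of how such a non-randomized comparison could be summarized. The counts are hypothetical, chosen only to roughly match the percentages mentioned in the talk, and scipy is assumed to be available:

```python
# Illustrative comparison of positive-result rates in registered reports vs. standard reports.
from scipy.stats import fisher_exact

rr_positive, rr_total = 32, 71        # registered reports: ~45% positive (assumed counts)
std_positive, std_total = 136, 142    # matched standard reports: ~96% positive (assumed counts)

table = [
    [rr_positive, rr_total - rr_positive],
    [std_positive, std_total - std_positive],
]
odds_ratio, p_value = fisher_exact(table)  # 2x2 test of the difference in proportions

print(f"Registered reports positive: {rr_positive / rr_total:.0%}")
print(f"Standard reports positive:   {std_positive / std_total:.0%}")
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.2g}")
```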

I can be quicker on open data. Most people have heard about FAIR open data repositories; FAIR means findable, accessible, interoperable, and reusable. It's not easy — we are only discovering how to do it well, and it's different in different fields; the challenges are different — but the idea is beautiful: the data should be there in a usable way, so that other people can check what we did and reuse our data for other purposes. And, by the way, when they reuse it for other purposes, they of course need to preregister and say what they're going to do before they start looking at your data.
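
To illustrate what this can mean in practice, here is a minimal, hypothetical sketch of the kind of machine-readable metadata that helps a deposited dataset be findable, accessible, interoperable, and reusable. The field names are illustrative only, not a formal standard; real repositories use schemas such as DataCite or Dublin Core:

```python
# Hypothetical minimal metadata record for a deposited dataset (illustrative field names).
dataset_record = {
    "title": "Example survey dataset",                    # Findable: descriptive title
    "identifier": "doi:10.1234/example.5678",              # Findable: persistent identifier (made-up DOI)
    "access": "open, via repository landing page",         # Accessible: clear route to the data
    "format": "CSV with an accompanying data dictionary",  # Interoperable: standard, documented format
    "license": "CC BY 4.0",                                 # Reusable: explicit reuse terms
    "provenance": "collected 2021; cleaning steps in README",  # Reusable: how the data came about
}

for field, value in dataset_record.items():
    print(f"{field:12} {value}")
```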

This slide combines the last two slides in some way. It shows a hierarchy of what to believe; the slide is by Gilad Feldman, and it's inspired by Chris Chambers. In every textbook of epidemiology and evidence-based medicine you see pyramids of credibility of research designs, and this is somewhat similar, but now openness and the idea of the registered report are included. At the bottom is the normal stuff — don't believe it, it basically says. When you have exploratory open science, which is not preregistered but where at least you make your data and materials available, that's a bit better. When you preregister and make your data available, that is confirmatory open science, which is a lot better. A registered report is better still, for the reasons I outlined before, and at the top they say that what we need are meta-analyses of registered reports. Usually you have the systematic review and the meta-analysis on top of the pyramid; here they make it the meta-analysis of registered reports, and they say, nicely, that it should of course be preregistered as well and have open data. It makes a lot of sense. I've never seen one so far — a preregistered meta-analysis of registered reports — but it will happen when registered reports get flying, which they don't do yet, because only a little more than three hundred journals have adopted them. But I believe it's a major innovation, and it deserves a flourishing future.

There's a cut in the story here. I now move on to what drives all these behaviors that have a good or a bad influence on research integrity, and I believe there are three categories: individual factors, institutional factors, and systemic factors. The virtuousness of the individual is important, but he or she is not working in splendid isolation; we work in social environments, so the research climate in the lab or in the research group is important as well. And then you have the incentives: it's nice when there are good ones, and it's awful when there are perverse ones. I'll give a few examples of these three categories, but before that I go back to our national survey on research integrity, where we call these explanatory factors. We also looked at these drivers of research integrity a little bit, by using scales that have been used and validated before, and linking them to the answers on the QRPs, on fabrication and falsification, and on something I haven't mentioned before, the responsible research practices. Let's not get carried away: it's self-reported, it's cross-sectional, it's a survey, so let's not have dreams about causality. But still, we wanted to see whether we could detect associations, and the arrows mean meaningful and statistically significant. The light green is what we want; the red is what we don't want. People who believe that reviewers could detect their wrongdoings in papers — which, by the way, is hardly the case — reported less fabrication and falsification; so having that threat in the air apparently works. People who subscribe to research integrity norms also say that they engage more in responsible research practices and less in falsification and fabrication. So far for the scales. Then there is supervision: a supervisor who helps you to cut corners, to get a lot of citations and publications and grants — and there are supervisors who do it that way — is associated with more questionable research practices, at least as an association. And supervisors who help people to do the right thing, in the sense of responsible research practices, open science modalities, thinking before doing, being kind and helpful to your colleagues, and so on and so forth — people with a supervisor of that style also report fewer QRPs and more responsible research practices. Publication pressure is quite an issue, and it's there; it can be measured quite well, we believe. When people feel a lot of publication pressure, they engage more in questionable research practices and less in responsible research practices — which might be because those take a lot of time, and when you feel publication pressure, you also feel time pressure.

We have codes of conduct on research integrity that individuals adhere to, or not. Basically, it all comes from Robert Merton and his beautiful 1942 article; these later became known as the Mertonian norms, and they're still recognizable. The first he calls communism: it's about sharing — research is not your private property; you need to share it with the community, otherwise you get nowhere. The second, universalism, says it's not about you and what you consider to be true; it's about something larger than the subject, and we try to be as objective and intersubjective as possible. The third, disinterestedness, is about conflicts of interest in modern terms: you should do science to discover how things work and to help improve the world, not for your own benefit. And the one I like most is organized scepticism, which is close to transparency and to opening up everything, because that makes you vulnerable and open to criticism, which is great and which drives progress in academia.

Well, let's move on to culture. You could say that institutions need help there. There is a great European consortium I was involved in, on standard operating procedures for research integrity; these are guidelines on how to do it well as an institution, and we have them for funders as well. There are, for instance, guidelines for education, and they're quite practical. They have been piloted and tested and produced in the right way, with co-creation, surveys, workshops, and interviews. And we have them for students, for senior researchers, for the support staff in research — very important — and also for getting people talking about research integrity many times a year, and not just once in a one-day course. There are, to be honest, 131 of these standard operating procedures. Below all the slides are my references to websites and papers.

Let's also not forget that research is a human endeavour. Especially young researchers — early career researchers — and the people who are their supervisors and mentors need to help each other in the right way. And it often goes like in this cartoon: the supervisor focuses on content only, while the early career researcher has something quite different on her or his mind. That's the reason that mentoring and supervision are so important, we believe. The consortium has some guidelines about it as well, and we developed a course that we consider to be quite nice — although only a pilot study of it is available — called Superb Supervision, and there we try to combine doing the right thing for the people you supervise with teaching them the open science modalities that can help to improve research integrity. It's now being taken up at other Dutch universities, and there are similar initiatives in the Netherlands and abroad. The issue is, of course, that we may need a license to supervise, or at least a few good courses, because in academia supervision is one of the most difficult things you have to do. But it's also one of the most wonderful things you have to do.

Let's move on to the incentives. The incentives often come in the form of assessment — of researchers, of grant proposals, or assessment for vacancies, promotion, tenure, and awards. And still, although we are trying to change it, we tend to focus on the number of publications — not so much on their content — and on the number of citations. These incentives work amazingly well: when you do this, you get more publications and more citations. But there are some undesirable side effects of focusing on quantity and not on quality. You get more plagiarism, and you get more duplicate publication. You get 'salami slicing', meaning looking for the smallest publishable unit. You get a lot of gift authorship, because when I put you on my paper, you can put me on yours, and we both double our number of publications. And it feeds the popularity of paper mills and predatory open access journals, because it's so easy to get publications there, and those can again help your career. So we need to reform that, and we are already doing so, but it's not easy. In this narrative review, Noemie summarizes what is out there in the debate, and also what the promising developments are to do assessment better: to look at whether someone is good at open science practices, publishes their datasets, helps others who want to reuse those datasets, is a good supervisor, a good teacher, a great peer reviewer. Many people believe that you should get career points for all these things as well, because that shows what is important — it's not only the number of citations and publications that counts.

In summary, we need research integrity interventions, but we need interventions that work, that are evidence-based, and that is where meta-research, or meta-science, comes in. We need to study and document the effectiveness before we start implementation, and when we do so, we need to measure several outcomes. The primary outcomes are what really matter: the incidence of FFP — fabrication, falsification, and plagiarism — questionable research practices, responsible research practices, and research quality, which is not always easy to measure. Then you have the intermediate outcomes — attitude, knowledge, and skills. That is what all the teachers measure in educational research, and the problem is that these are, most people believe, only weakly connected to the real outcomes. And then you can also measure process: whether people like what you're doing, whether they do what they say or say what they do, and whether they believe that it is useful. The outcome measurement also needs to develop further; we need better instruments and better scales. And maybe in the long run we also need what we have already seen in clinical research for decades: core outcome sets, a set of outcomes that we promise each other to all measure when we do intervention research on research integrity interventions, because that helps later systematic reviews greatly.

This is from Brian Nosek. Once you have an evidence-based intervention, you should first make it possible, then make it easy, then make it normative, then make it rewarding, and finally make it required. That is the strategy; we should go up that pyramid as well.

I won't talk about open applications, open funding procedures, and open peer review. I'm a great believer in all three of these as well, and I believe that there, too, transparency can help to boost validity and trustworthiness. But my time is more or less up, and I'd like to end by telling you that in June next year the World Conference on Research Integrity will be organized; all the topics of this presentation, and many more, will be on the menu there. Please have a look at the website and see whether it's interesting to submit an abstract. The call for abstracts will go out in one or two weeks, so you have time enough to get your act together for this conference. And if you want more, go to the website of the foundation behind these conferences. We are on Twitter as well, and we run a video channel with many great talks — for instance, from the symposium we had a few weeks ago in preparation for the World Conference.

    

Leeza Osipenko: While you were talking, I realized that when I was doing my PhD, I had not even heard the combination of the words 'research integrity'. It never came up in any step of my process — and I did my PhD in the US. Has anyone done a survey, or are there any data, on how many PhD programs now have research integrity courses or training requirements as part of the curriculum? And the next question: it looks like the Netherlands is doing an incredible job — between you and Professor Larkins and many other leaders — trying to bring research integrity to the top of the priorities. Amazing efforts. But how do these efforts play out if bigger players — let's say, countries that produce a lot of publications — are not on board?

Lex Bouter: I believe that in many countries — and the Netherlands is one of them — it's getting quite normal to have a research integrity course for PhD students. However, it's not always mandatory. It's typically a one- or two-, or maybe three-day course, and it's a stand-alone thing, and it might not be effective when the rest of the environment doesn't know about it. I've taught a lot of these courses, and typically what PhD students say afterwards is: thank you, that was really interesting, that was new — and now please go on and tell my boss and my supervisor. So we need to have courses beyond that scope as well. It's about culture, it's about mentality; it's more than a course — but the course is not a bad idea, of course. And PhD students don't always like the idea, especially when it's mandatory to go to these courses, but once they're there, they usually recognize that it's not about the crooks out there; it's about their own tendency and temptation to cut corners. It's about the depth and energy in their research, and it's about helping them to do better.

Second question: well, there is a lot going on, not only in the Netherlands but in Europe, thanks to the granting schemes of the European Commission, which focus a lot on research ethics, research integrity, and open science. Well, your country was not so keen and left the EU, in that sense, but still many people from the UK participate in these programs, which is great. And what are we doing for the rest of the world? Well, that's what we're trying to do with the World Conferences: we travel from continent to continent — the last times we had Hong Kong and Cape Town — and that also makes a difference, in essence. We focus very much on engagement of people from other communities at the World Conferences. There is the African research integrity network, which started there; it's really effective and active. There is a South American and Central American research integrity network that came out of one of these conferences. And when you do open access stuff — open videos, open materials, open courses, and so on — people can use it, so there's a lot of great material made in Europe used elsewhere. And now it's catching up elsewhere: great stuff is being made in Japan, in Malaysia, and in Indonesia nowadays that we can use in Europe, because it's better than the stuff we use. So there is a kind of movement going on, but it's still a bubble, of course; it's still for the insiders, and our big challenge is still to reach out to the other people. For a few more years we are maybe still in the stage of the early adopters; it's still catching on, and there's a lot of work to be done.

Leeza Osipenko: What I have noticed is that among young researchers — PhD students, or those who are just starting their careers — there is a lot of embracing of these ideas; they are really pioneering this work, trying to preregister reports and publications, and there's a lot of enthusiasm for it. And for young people, it's common to be idealistic, up to a certain point, when the career kicks in. So my question to you is: for these people to keep these ideals and to do this work, they need support from their supervisors, they need support from their institutions, they need support from the system. A lot of what you described is this kind of support from the system, where they can find support: there are conferences, there are all these movements. But what happens to students or young researchers who see their supervisors not sharing these ideals — actually doing things which might not be right — and for whom speaking up may mean completely risking their future careers and everything? Do you have any solutions for these people who want to do right but might not be in an environment which enables them to do so?

Lex Bouter: Of course, there are all kinds of tensions and dynamics and difficulties, as I have alluded to already. It is so important to get the senior people aboard as well, and the leadership of institutions. And it's happening, but it's happening slowly, and there are still pockets of strong resistance — and when you're a PhD student in one of these pockets, it's problematic. What you can say, cynically, is: pick your supervisor wisely. But that's not always an option, of course. There are great supervisors, and there are awful supervisors who spoil even brilliant PhD students; that still happens. That's the reason there needs to be a bit more openness, and mentoring from people outside the direct relationship. We have forbidden it in our country — and that happens in other countries as well — that a PhD student has only one supervisor; you need to have at least two, or three, or four, and some institutions have an additional mentor in another department, just to arrange for safety. But these young people are so wonderful; they are the motor of change, and they are the future. Another scenario is that we only need to wait a few years, and then all these awful people will have retired, and the wonderful young people will be in charge. That might be a little too long for the young people, so you should empower them as well as you can.

One more thing: in the consortium project I alluded to, we had many young people as well, PhDs and postdocs, and what we encouraged them to do is to speak up. In the end, they wrote a beautiful paper — recently preprinted, and I believe already accepted for publication — on what we, as the senior leadership of such a consortium, can do to make their lives better and easier, and to help them learn more. These are things like: listen to them; put them at the highest level of leadership of the consortium as well, because they have a different point of view; give them specific tasks they can do well; make them responsible for those tasks; and help and mentor them in learning on the job. There's so much we can do — but I'm getting carried away a little bit in idealistic mode. It is, of course, true that when you're a PhD student and you see your supervisor doing awful stuff, it takes courage to speak up, and some people don't speak up, and I can understand that. It's awful, but I can understand it. Other people speak up after they have got their PhD degree, which is understandable as well. And yet other people are so courageous, like the three students who felt strong enough to be collective whistle-blowers — and that happens quite often, that people team up to do the difficult thing. That's the reason it's awful to be a single, solitary PhD student alone in such an environment; you need to have a bunch of them, because these people help each other enormously and feel stronger together. And let's not forget that most universities in many countries now have whistle-blower arrangements. They don't always work, and it can still be bad for your career. But you need to help people who are damaged by being a whistle-blower, and be understanding of people who consider becoming a whistle-blower but decide not to, because of the awful side effects it can have. That's the world we live in, and the cost for these people is awful, but we're moving in the right direction, I believe.

Leeza Osipenko: While you were speaking, I had a thought. The reason I asked you that question is because of an anecdotal story of an undergraduate student, who might not even be brave enough to be a whistle-blower the way a PhD student could, and team up with someone. And the thought I had — maybe it already exists — is: is there an organization, a 1-800 number, that someone can call to get support, to get advice, to explain the situation, to speak anonymously and safely, to see what they can do and what the implications might be? Because not many students might even understand that, and it doesn't have to be students; it can be absolutely any individual. It just seems that it would help to have this place you can go to if it's not in your university — because even that is a high risk, because somebody might call the head of the department and say, hold on, what's going on in your department there. So it depends on the cultural setting, it depends on the national setting — it might not happen in one country, but it might happen in another — and the career is still at risk.

   

Lex Bouter: You are completely right. It is risky to talk to people about your suspicions — when you are still in doubt about them, and even when you are more certain, it's still risky for you. Many universities in my country have systems of confidential counsellors for this, and they are confidential, so they function quite well. And let's not forget that people can pick their own mentors, and I recommend that people pick their own mentors. Students may have someone in the family, or know someone, who also works in academia, completely unrelated; you can talk to these people. It happens to me as well: because I'm visible in research integrity, I get many weird emails, but also very nice emails from people whom I can help with a short telephone conversation, or whatever. And I know from many colleagues that they fulfil similar roles. I'm not sure whether another helpline would help, but it might be an idea that can be on the table as well. Still, it doesn't completely remove the risks, and it is indeed true that mostly people need help to understand what is happening; when they understand, the second decision is whether they want to blow the whistle, if what they are seeing is completely wrong. But often they think they see something wrong which is not the case, or not as clearly the case as they initially thought, and that is the kind of help you can offer them as well.

Kamela Krleza-Jeric: We've been studying that for years now, and I could recently see, in one analysis we did with our observatory, that it is improving. We are not 100% happy, but data sharing is better than it was 10 or 20 years ago. So my question to you is: have you seen any improvement in research integrity? And the other question is a lot more complex. I've noticed there is a conflict of interest — you mentioned the change of culture — there is a conflict of interest at the university level: they would like to have their graduate students complete their PhDs, so sometimes they help them a little bit. I don't know — have you seen that? Because in your survey, unfortunately, you did not survey institutions; you surveyed individuals. It would be interesting to see how much universities — especially in medicine, the university teaching hospitals — need to have PhDs. So I don't know how to deal with that conflict of interest easily.

Lex Bouter: To start with the last one: for institutions there are perverse incentives as well. When you pay per PhD thesis, you get more PhD theses — that has been shown in many countries; that is what you get. And that means that attention is focused on that, and maybe corner-cutting gets involved. In the preparation of the last World Conference on Research Integrity, in Cape Town, I was in South Africa several times, and there I discovered that the ministry funds the universities per publication — that's almost the only parameter. Surprise: what do you get? A lot of publications, salami slicing, duplicate publication, plagiarism, and co-authors from other universities, because then the money comes in for two, or three, or four, or five universities. So perverse incentives are there at the level of institutions as well. Then to your earlier question about open data: it might be improving, and I think it is. But recently it has been shown again that people who have a so-called open data statement in their paper often do not deliver when the data are actually requested, and I've done analyses of open data myself in the past and discovered that the data sets were completely unusable; I believe that is improving nowadays. And your last question, whether I see improvement on research integrity indicators: I don't know. The awareness is growing, and that means that, initially, you see more. In Nature a few months ago, Ivan Oransky of Retraction Watch showed a rising graph of retractions, and he said: this is a sign of improvement, we're getting more detections — and we probably don't yet have one third, or even one fifth, of what we need to get as retractions. So when you see more, it's not a measure of deterioration; you'll see more people being caught on research integrity issues, and I believe we should interpret that as an improvement as well, although in the long run it should go down again.

Question from the chat: Where do you see the main responsibility — with the funders, the research institutes, the individual researchers, the publishers, or the individual journals? And how do you think they can be pulled together to make a real difference? Journals and publishers, overall, are not proving to be willing or fast enough to correct the record effectively. How hopeful can we be, and what would be the incentives for publishers and journals to act more responsibly?

   

Lex Bouter: She is completely right: there are many stakeholders that have a responsibility to improve research integrity and to make the quality of research better — yes, the researchers, institutions, publishers, and funders. Funders are especially interesting because they don't need to be popular; they can basically do whatever they want, and we comply because we want their money. They should not misuse that power, but they can change things. Open data got flying when funders started saying: hey, listen, you can get your grant, but we need open data. Then it happened. For publishers and journals it's more difficult, because they are in a market competition game — although, on the other hand, the only thing they're really selling is quality assurance, and they had better do that well, and they had better start doing it better. From that perspective, they can make change happen as well, and they are, to some extent: many journals have adopted the registered report format, many journals mandate open data statements, and now they are learning that they need to ask for more, because it doesn't really happen when you only have a statement. They also work on the detection side. STM, the publishers' association, for instance, has a research integrity hub, which is great software helping editors and reviewers to detect nonsense in manuscripts: fake pictures, statistical tests that are impossible, plagiarism, citations to retracted papers, use of ChatGPT — that's in there now as well — and so on and so forth. We are all in it together. The saying used to be: well, we have one bad apple, and that is an individual, so let's sanction the bad apple, and then science is great again. We now know this is not the case; it's a more systemic disease we're having, and there are no magic bullets. The cure needs to come from many angles, and all these parties have a role to play.

   
