
Clinical Research Methodology Day 2023 | Clinical Trials | Mr Vikas Khanduja


Summary

This on-demand teaching session is geared toward medical professionals and explains the importance of research and clinical trials. It covers why we do research, which study designs are relevant and how outcomes are assessed, an example of clinical research, how to plan a trial, and the hierarchy of evidence. The session also discusses the three important ingredients for successful research, the five pillars of evidence-based medicine, potential biases to consider, and the differences between explanatory and pragmatic trials. It will help professionals understand the importance of clinical research and aid evidence-based decision making.

Generated by MedBot

Description

Orthopaedic Research Collaborative East Anglia (ORCA) is bringing you the 4th annual Clinical Research Methodology Day! This is a trainee-led East of England Orthopaedics (EoEOrtho) event focused on disseminating research methodology and projects regionally and nationally.

Follow our social media platforms!

www.twitter.com/eoeortho

www.twitter.com/orcapaedics

www.twitter.com/CambridgeOrtho1

www.twitter.com/NorwichOrtho

www.eoeortho.co.uk

orcapaedics@gmail.com

Learning objectives

  1. Explain the purpose of clinical trials in medicine.

  2. Discuss the different types of study designs used to assess outcomes.

  3. Describe the importance of asking the right question in research.

  4. Describe the five steps of evidence-based medicine.

  5. Explain the benefits and drawbacks of randomized controlled trials.

Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Okay. So first up, I want to talk a bit about clinical trials and how we actually get there. The outline of this lecture is: why research, what are the different types of study design you can think of, how do you assess outcomes — because that seems to be the basis of all clinical trials — then an example, and then how you plan a trial.

In medicine and in surgery, we really need to make sure we have a critical review of medical research, which is essential in promoting evidence-based decision making and practice. If you don't have that, you won't be able to advise your patients or help them make decisions appropriately, so it's essential that you do. It's our responsibility to be astute appraisers of the current evidence, and with the way medicine and surgery are progressing, if you're not on top of the evidence you're going to be left behind in treating your patients.

Good research always commences with developing a question that is relevant to a particular area, and it will always come from a large-volume clinical practice, because that's where you'll actually find the problems. If you're not in a large-volume clinical practice, you're not seeing the problems and you're not going to generate good clinical questions that need answering.

Now, these are the three important ingredients. You may think you know it all, but research is best accomplished in teams, because your USP is only going to be in clinical medicine or surgery, or in methodology; you need a lot of other people to make sure the research is good and relevant. The most appropriate research design is then selected to match the primary research question, you define your study population and the most appropriate outcome measures and variables, and you can start thinking of the clinical problems you face in clinic and how you would actually answer them. [Brief interruption while screen sharing was sorted out.]

So for good research, I think the two most important things are these: you need to ask the right question, and to ask the right question you need a high-volume clinical practice. Interestingly, we're doing a paper now looking at all the research the NIHR has funded over the last ten years, and a lot of the questions are actually irrelevant questions; the amount of money that has gone into getting the outcome "there is no difference between the two interventions" is huge. So it boils down to that: ask the right question, the question from which patients are actually going to benefit. The next important bit is to choose the right methodology to answer that question; otherwise there is, unfortunately, a lot of research waste in this country. And then evidence-based orthopaedics — we keep talking about evidence-based orthopaedics.
And we say evidence-based orthopaedics is about research evidence, but what we forget is that it's about research evidence in the right clinical setting with the patient's preferences. Take the DRAFFT trial, which compared K-wiring and plating for distal radius fractures. That may be very relevant research evidence for patients in this country, but take the same research to Myanmar or to India, or in fact to me as an orthopaedic surgeon with the same fracture: would I want it wired and plated when I want to get back to operating within two weeks? So the clinical setting is very important for that kind of question, and so are the patient's preferences. It is not just research evidence; those two bits are usually forgotten, and if you have a high-volume clinical practice you'll be thinking of them as well, because the clinical setting and what exactly the patient wants are relevant.

Then there are the five 'A's of evidence-based medicine. You formulate the right question — you Ask; you search for the best evidence — you Acquire; you assess the quality of the evidence — you Appraise the literature; you use the best available evidence — you Apply it; and finally you combine the evidence with patient and provider preferences — you Act. Those are the five steps of evidence-based medicine and that's how you proceed.

Then people talk about the hierarchy of evidence. You start with expert opinion, you build on to case series, going on to case-control studies — but why is it hierarchical this way? Why are randomised controlled trials and meta-analyses at the top of the pyramid? The only reason they're at the top is that as you go up you decrease the risk of bias, and as soon as you decrease the risk of bias your research becomes more relevant and practical. These are all the biases you need to be thinking of when you're planning a trial: selection bias, when you think of your inclusion and exclusion criteria; recall bias; detection bias; performance bias; attrition bias, in terms of a systematic difference between the individuals who drop out of the study and those who remain in it; and finally expertise bias as well. [Pause while the slides were fixed.]

While we're doing this — how many of you are ST1 to ST3? Just raise your hands. That's brilliant. And how many of you are actively involved in doing a higher degree? Raise your hands. You two, what are you doing? MPhil, MChir, PhD — okay. And what are you doing yourself, ma'am? MPhil — perfect. How many of you are thinking of doing a higher degree? I like that. Aggie, you can't put your hand up, you're in it now; you're not thinking of it, you're forced to do it. Okay, so this should be good.

So the next thing you think about for a trial is timing: whether it's a prospective study, where you're looking at time going forward, or a retrospective study, which is designed to assess outcomes where the exposure has already happened.
In a retrospective study you're collecting the data after the event, and a lot of studies now are longitudinal studies, which are probably more relevant: you're assessing things as time goes by and collecting the data as you go. Painful, but they make for good studies, especially in the hip — Perthes' disease, for example; those are longitudinal studies.

So we'll start with the most basic design, the case series. That's probably the first one most of you, as early registrars, will be told to write: your consultant says, "I've got a series of a hundred of these — why don't you go and look them up and let's write them up." That's a case series, which is level four evidence. There is no comparative arm, and it can be prospective or retrospective. The advantages are that it's easy to perform and requires few resources, but the biggest disadvantage is that all the biases you can think of are in there: selection bias, recall bias, surgeon bias, institution bias. Because of all those biases you cannot really derive an estimate of treatment effect from a case series. It is certainly useful for evaluating a novel surgical technique and for assessing feasibility — that's where it comes in. If you're planning a big randomised controlled trial, doing a feasibility study first is genuinely valuable, and your case series provides the baseline data to inform the sample size you need for your trial. A well-designed case series will have a study protocol, which can be published, clear inclusion and exclusion criteria, prospective data collection, high follow-up and clinically relevant outcome measures. That's where your trial will actually start, and I'll come to an example of it as well.

Then you go on to case-control studies. You've got two groups here, the cases and the controls; you analyse them retrospectively and compare them for exposure to risk factors, looking at patient characteristics and also the treatment options, and you measure the strength of association between the risk factors and the outcome. That comes through as an odds ratio, which I'm sure will be covered in more detail in the stats talk later. The advantages of case-control studies are that they're useful for rarer outcomes — that's what you want to pick up — and they're simple to conduct and relatively low cost. The disadvantages, again, are lots of biases: performance bias, recall bias and selection bias. Because of that, case-control studies sit low on the evidence hierarchy.
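To make the odds ratio concrete, here is a minimal illustrative sketch (not from the talk) of how the strength of association in a case-control study can be computed from a 2x2 table; the counts and the Woolf-style confidence interval below are a made-up worked example, not data from any real study.

```python
import math

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio and approximate 95% CI from a 2x2 case-control table."""
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    # Woolf's method: log(OR) +/- 1.96 * SE, where SE = sqrt(sum of 1/cell)
    se = math.sqrt(1 / exposed_cases + 1 / unexposed_cases
                   + 1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: 40/100 cases exposed to the risk factor vs 20/100 controls
print(odds_ratio(40, 60, 20, 80))   # OR ≈ 2.67, 95% CI roughly 1.4 to 5.0
```

Because the confidence interval excludes 1, this invented example would suggest an association between the exposure and the outcome, subject to all the biases just described.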
Then you move on to prospective cohort studies. You have two groups again, the exposed and the unexposed, and you follow them up prospectively for the outcomes of interest — lung cancer and smoking is the classic example people talk about — looking at who gets the disease. The advantage is that the design is resistant to recall bias, because whether you get the disease or not is what you're checking, and the timeline to progression is evident. You can match the groups for known confounding variables and you can standardise eligibility within the groups. The disadvantages are that it is definitely resource intensive and that it gives a weaker estimate of the treatment effect than a randomised controlled trial — I'll come to that in a second. And again, the amount of bias in there — selection bias, detection bias and performance bias — is high.

Having had a look at all of those, you come to randomised controlled trials, which at the moment represent the highest quality of evidence; above them sits the meta-analysis of randomised controlled trials, where you're pooling the RCTs together. You're getting a population of eligible patients, identified prospectively, with clear inclusion and exclusion criteria. There are two basic types of trial: the explanatory trial, which tests efficacy under very strict criteria, and the pragmatic trial, which tests effectiveness under less strict criteria — what happens in the hospital, pragmatically, is what you're actually checking.

Now, randomisation. This is a big word and you need to understand why it's important: it basically mitigates selection bias. That's what it's all about, and that's why randomised controlled trials are the best source of evidence. It balances the groups and the confounders and it isolates the treatment effect. You can randomise by patient, by surgeon, by cluster, by region, by hospital, and the more rigorously you randomise, the better it gets.

If you want to prevent selection bias right at the beginning of the study, you need concealment. What does that mean? The individual identifying an eligible patient is unaware of which treatment arm the patient will be allocated to — that's concealment. You are avoiding preferential enrolment and allocation of patients with favourable prognostic characteristics: "He's a young guy with all the favourable factors for hip arthroscopy, so I'm going to put him in the hip arthroscopy arm; he's an old guy with negative predictive factors, so I'll keep him out of that arm because he'd have a poor outcome." If you want to make sure you're not biased in selecting those patients — because you've got a feel for who they are and where they fit — you conceal, and that's what concealment is all about. The best concealment is done centrally, off-site, away from the recruiting centre; there are now computer programs that select which arm of the trial the patient goes into, with variable block sizes as well.

The next bit you want to prevent — yes, ma'am? [Question from the audience:] "Do you still have a selection bias problem if you know what a hospital is doing, like selecting patients for…?" That's a very good question. In your selection of the hospital you need to make sure that both types of patient are available and that the surgeon has clinical equipoise for treating both — say, an older patient and a younger patient. If there is variability in clinical equipoise, or there is no clinical equipoise, then you would not select that hospital for that purpose. That's how you prevent that from happening.
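As a rough sketch of what such centrally held randomisation programs do — this is an illustration only, not the software used in any actual trial — permuted-block randomisation with variable block sizes might look like the following; the arm labels, block sizes and seed are arbitrary placeholders.

```python
import random

def permuted_block_schedule(n_patients, arms=("A", "B"), block_sizes=(2, 4, 6), seed=2023):
    """Generate an allocation schedule using randomly chosen, variable block sizes.

    Each block contains the arms in equal proportion, shuffled, so group sizes stay
    balanced while the sequence remains unpredictable to the recruiting clinician.
    """
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_patients:
        block_size = rng.choice([b for b in block_sizes if b % len(arms) == 0])
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_patients]

# Illustrative use: the schedule is generated and held centrally (allocation concealment);
# the local team only learns an allocation after the patient has been enrolled.
print(permuted_block_schedule(12))
```

The point of the variable block sizes is that even someone who has guessed the blocking scheme cannot reliably predict the next allocation.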
The next bias you need to prevent in RCTs is detection and performance bias. There may be a surgeon who is very good at doing one procedure and does a lot of them, versus a surgeon who has done only fifty and doesn't do them so well — in surgical trials that seems to be a big problem, and that's performance bias. So blinding is a good thing: you keep one or more individuals unaware of the treatment allocation. You can blind the patient, the treating clinician (i.e. the surgeon) and other clinicians, and you can blind the data collectors, the outcome assessors, the data analysts and even the writing of the manuscript itself. The more you blind, the more powerful and robust the study becomes, because you're preventing those biases.

Continuing on blinding: the feasibility of who can be blinded obviously depends very much on the intervention. For surgical interventions the best way to blind is with the use of sham surgery — the arthroscopic knee surgery paper in the NEJM had a sham group. But getting ethical approval is hard, and there is a whole debate about sham surgery; we were thinking about the same for arthroscopic surgery of the hip, and it was extremely difficult to get through just making incisions and not actually doing the operation — there's a whole consenting and ethical piece that surrounds it. For pharmacological interventions you can blind with the use of placebos.

Then comes the final bit, which is the assessment. You will have continuous variables and dichotomous variables, depending on what you choose as your outcome measures. For example, blood loss, time to fracture healing, surgical time and range of motion are continuous, whereas complications such as non-union and malunion are dichotomous. Now, if you want to apply for an NIHR grant, they want your primary outcome measure to be a patient-reported outcome measure, and those are what are essentially in use. It could be a generic one, measuring the general health status of the patient, or a very disease-specific one, enquiring about specific aspects of the disease in a more comprehensive way — and that's what they want.

The outcome measure you choose should have gone through all of this: it should be reliable, meaning it has good intra- and inter-observer reliability; it should be valid, i.e. face validity, content validity and construct validity; and it should be responsive — the ability of the tool to reflect changes both high and low, which is where the floor and ceiling effects come in. So how do you test the validity of an outcome measure? Face validity: how realistic are the questions in there? Construct validity: how effective are the measured variables at differentiating levels of the construct? Then content validity and concurrent validity. Those are the four you're thinking of, together with the floor and ceiling effects of the instrument and how responsive the questions are to the problems you want answered — we'll come back to that. In the young adult hip world, an example of the generic type is the EQ-5D, and of the disease-specific type the iHOT-12: twelve questions specific to young adults with hip pathology. That's what we use.
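As an illustrative aside (not from the talk), floor and ceiling effects can be checked simply as the proportion of respondents scoring at the minimum or maximum of the instrument; a commonly used rule of thumb flags a problem when more than roughly 15% of respondents sit at either extreme. The scores below are invented purely for illustration.

```python
def floor_ceiling(scores, min_score=0, max_score=100, threshold=0.15):
    """Fraction of respondents at the floor and ceiling of a PROM,
    flagged against a rule-of-thumb threshold (~15%)."""
    n = len(scores)
    floor = sum(s == min_score for s in scores) / n
    ceiling = sum(s == max_score for s in scores) / n
    return {"floor": floor, "ceiling": ceiling,
            "floor_effect": floor > threshold, "ceiling_effect": ceiling > threshold}

# Hypothetical scores on a 0-100 scale, made up for illustration
scores = [12, 35, 47, 58, 100, 100, 100, 63, 71, 100, 88, 100]
print(floor_ceiling(scores))   # ceiling ≈ 0.42, so the tool could not show further improvement
```

A large ceiling effect like this would mean the instrument cannot detect improvement in the best-scoring patients, which is exactly the responsiveness problem described above.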
The last bit, before going on to the example, is what type of trial it is. There are two types: the non-inferiority trial, where you're looking at the treatment effect in comparison with the control population and you want to show it is not inferior; and the superiority trial, where you want your treatment effect to be definitely better than that of the control population or comparator intervention. And this, in a nutshell, is what we've spoken about: going across from case series to cohort studies to case-control studies and then to randomised controlled trials.
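To make the distinction between non-inferiority and superiority concrete, here is a minimal sketch (with made-up numbers, not from any trial discussed here) of how the same confidence interval for a treatment difference is read against a pre-specified margin in each design.

```python
def interpret_trial(ci_lower, ci_upper, non_inferiority_margin):
    """Read a 95% CI for (new treatment - control) on a scale where higher is better."""
    return {
        # Superiority: the whole CI must lie above zero
        "superior": ci_lower > 0,
        # Non-inferiority: the whole CI must lie above minus the pre-specified margin
        "non_inferior": ci_lower > -non_inferiority_margin,
    }

# Hypothetical example: difference of 2 points (95% CI -1 to 5) with a 3-point margin
print(interpret_trial(-1.0, 5.0, non_inferiority_margin=3.0))
# -> not superior (the CI crosses zero) but non-inferior (the CI stays above -3)
```

The same estimate can therefore support a non-inferiority claim while failing a superiority claim, which is why the trial type has to be declared before the data are analysed.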
So now, putting all this together, I just want to take you through an example. This was an important trial in the young adult hip world, where we looked at patients with femoroacetabular impingement and no arthritis: if you randomise them to physiotherapy versus surgical intervention, i.e. arthroscopy, who does better? This was a surgical trial, and the outcome was that hip arthroscopy was more effective than non-operative treatment or physiotherapy for femoroacetabular impingement with concomitant Tönnis grade 0 and 1 osteoarthritis. That's what came out of it — the FASHIoN trial. Oxford was the lead centre and we contributed to it as well.

The next question was: what is the best intervention for patients with femoroacetabular impingement who have Tönnis grade 2 or grade 3 arthritis? Is it surgery, is it going to be a joint replacement, or is it going to be non-operative management? That was the question, because that's what I see in my clinic all the time: patients who are young, say 35, sitting on the borderline at Tönnis 2 to 3, and you don't know how best to help them. Should I be doing an arthroscopy for these patients, should I be giving them a joint replacement, or should I not do anything? There isn't a clear answer, and that's why I said you really need a high-volume clinical practice to pick up the right question, because that's the population sitting in my clinic all the time. So the question comes from there, and then you start thinking of PICO — Steve will talk about his trial as well — so you think about the participants, the intervention, the comparator and what outcome measure you want to use. That's what we've talked about.

We published a protocol on the outcomes of hip arthroscopy in patients with femoroacetabular impingement and concomitant grade 2 osteoarthritis — that's the select patient population we're talking about — and we did a systematic review to see what exactly was available. It was very interesting: surgeons in North America are offering these patients arthroscopy left, right and centre, while surgeons in Europe are very conservative and are already thinking of a joint replacement rather than offering arthroscopic intervention, because we think it will fail too quickly. So opinion was divided, and this was the paper that came out: there was inconclusive and in fact contradictory evidence for outcomes after hip arthroscopy in patients with FAI and osteoarthritis of grade 2 or greater. That was the first systematic review. The next one looked at the non-operative management of these patients — physiotherapy only, occupational therapy only, chiropractic care, any non-operative management or injections — and there was no existing evidence in this scoping review on outcomes for patients with FAI and Tönnis grade 2 changes in the hip. So that was the next paper. What we got from all of this was that the evidence that existed was inconclusive and contradictory, and that there was no evidence at all for non-operative management. We then did a Delphi study with 30 experts from around the world to find out what they would do for this patient cohort, and again there was no clear answer. So all these studies tell us that this is an area of clinical equipoise — hip surgeons do not know how to treat these patients — and it took us two systematic reviews and an international Delphi study to establish that.

The next bit, obviously, is planning the randomised controlled trial, which is happening now. It's a three parallel-arm trial — non-operative treatment, hip preservation surgery, and joint replacement surgery. It's a registry-based trial, the first of its kind, embedded in the non-arthroplasty hip registry, with over 110 surgeons contributing. It's a randomised controlled trial with double-blinded allocation to treatment and single-blinded assessment of outcomes. It's a superiority trial: we want to see whether one treatment is superior to the others we offer or not. And it's an adaptive design, so we can actually change the sample size. I won't bore you with the details, but this is how we calculate the sample size, with an expected loss to follow-up of about 15%; I'm sure this will be covered in more depth later, but we've got about 250 patients to recruit for this trial.
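The talk doesn't give the actual assumptions behind the figure of roughly 250 patients, but as a generic sketch, a two-arm sample size for a continuous primary outcome is often approximated as below and then inflated for the expected 15% loss to follow-up; the effect size, standard deviation and power used here are placeholders, not the trial's real parameters.

```python
import math
from statistics import NormalDist

def two_arm_sample_size(delta, sd, alpha=0.05, power=0.90, dropout=0.15):
    """Approximate patients per arm for a continuous outcome, inflated for dropout.

    delta: minimal clinically important difference; sd: outcome standard deviation.
    Uses the usual normal-approximation formula n = 2 * (z_{1-a/2} + z_{power})^2 * (sd/delta)^2.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n_per_arm = 2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2
    return math.ceil(n_per_arm / (1 - dropout))   # inflate so enough patients complete follow-up

# Placeholder numbers only: an 8-point difference on a 0-100 PROM with SD 20
print(two_arm_sample_size(delta=8, sd=20))   # ≈ 155 per arm before any adaptive changes
```

An adaptive design, as mentioned above, allows this target to be revised at a planned interim look rather than being fixed for the life of the trial.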
Then the outcomes and endpoints. The primary outcome — as the NIHR wants primary outcomes to be PROMs — will be the iHOT-12, and the secondary outcomes will be change in range of motion, rates of revision surgery and re-operation, complications, radiographic progression of disease from stage 2 to stage 3, and conversion to hip replacement.

So what I've tried to do is give you the basis for how you actually construct a trial and what all these terms mean, so that when you're reading a paper you can figure out what exactly the authors are trying to do to reduce bias, by blinding and by concealment, plus an example of how you work up clinical equipoise and how we designed a randomised controlled trial. This is a PhD in itself, and as you can see it is not a single person's work — so many people are involved. You are just the clinician and the trial methodologist; that's your expertise. You need a statistician, health economists, data managers, trial coordinators, PPI — that's patient and public involvement — admin personnel, patients and epidemiologists to make all this happen. If you think you can do it alone, it's definitely not going to happen: meaningful research needs big teams, and that's how it will happen. So if there's one message I can give you, it's start forming your teams, or get into a team, now. This video unfortunately doesn't work, but it's a fantastic one where, in 1.82 seconds, they change all the tyres and the car is back out again. That's just crazy. Okay, that's it. Thank you very much. Hope you have a good day.

[Q&A] All right, I'm just going to check we're still recording. One thing that was skipped over briefly: patient and public involvement is so important now in the design of outcomes — how many questions patients are asked, and whether the measure really captures how they're doing — so you try to make sure you're choosing the right outcome measure and not asking too many questions, because patients may only answer the first few. [Partly inaudible.] In the young adult hip population that matters, to the extent that we've now got a patient on the board as the PPI representative. The iHOT-12 makes it easier because there are only twelve questions, so the tool is good; but even despite that, compliance on the registry is only about 60%, and by embedding the trial within the registry we'll have a trial coordinator ringing patients up — that will be the difference. Cool. Thank you very much. Thank you.