Summary

This on-demand teaching session is relevant to medical professionals and provides an essential knowledge base for the critical analysis of randomized controlled trials. It begins with a Mentimeter quiz to gauge the audience's knowledge level and continues with a systematic demonstration of the areas of appraisal for randomized controlled trials. Participants should expect to learn about the importance of study design and randomization, as well as practical topics such as the washout period, half-life, and crossover RCTs. Attendees will work through an example study and assess the effectiveness of its results, focusing on what they mean for the population at large.

Generated by MedBot

Description

You will come across RCTs throughout your time as a medical student and in your clinical career, and you will need to draw conclusions from them. To do this effectively, you must be able to tell whether a study is robust enough to support sound conclusions. This webinar will teach you how to do so.

Learning Objectives:

  1. List the sections of an RCT
  2. Understand how to critically appraise the different sections of an RCT

Learning objectives


  1. Explain the importance of inclusion and exclusion criteria in randomized controlled trials.
  2. Identify the two types of randomized controlled trials.
  3. Analyze the implications of a washout period in a crossover randomized controlled trial.
  4. Describe how randomization ensures balanced characteristics in each trial group.
  5. Identify essential factors to take into consideration for the critical analysis of a randomized controlled trial.
Generated by MedBot

Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

All right, good evening, everyone. We'll get started in a second, but thank you all for joining today. We're going to be talking about the critical analysis of randomized controlled trials, and just as a disclaimer to start off, this is not going to be me teaching you the different parts and pieces of a randomized controlled trial, because I'm going to assume a lot of you know the basics anyway. It's going to be about seeing why the different parts are relevant, how to actually analyze each bit, and how to decide whether it's good or bad and whether you should trust the study. So, we've talked about the learning objectives. The whole point of this is that anyone in healthcare is going to see randomized controlled trials everywhere; that is how drugs get onto the market and get approved. So we must know whether a randomized controlled trial is actually robust enough to give us the evidence we need to decide whether a certain intervention should be used for a certain condition. Just another reminder: if you have any questions, post them in the chat and there will be a Q&A section at the end where I'll go through them all, and if you're having trouble with anything, put that in the chat too. As I like to do, and some of you have been to one of my webinars before because I can see some familiar names in the audience, we'll start off with a little Mentimeter quiz to get an idea of how much knowledge everyone has on this topic. I'm just going to share that screen and put the code in the chat as well. The screen is up here, you should be able to see it, and I'll put the link in the chat too. There you go, the link's in the chat. I'll give it a couple of minutes for everyone to join in. I promise it's nothing too strenuous; this is going to be much less stressful than the last webinar I did on the analysis of lab-based research, and much more enjoyable and relevant to everyone, I think. A couple more minutes and we'll get started. All right, we've got a good number now, so I'm going to hit start. You have about 25 seconds on this one, I think, to read the question and the answer options. Yes, I was generous. Don't worry about getting things wrong right now, because these are all things we will discuss over the course of the webinar. But no, crossover trials are not actually less reliable than parallel RCTs. Next one; take your time, there are a lot of words. Not the easiest one I gave you to start off with, that was a bit mean of me, sorry about that. We'll talk about p-values during the webinar anyway, but no, a p-value doesn't tell you about clinical significance. It tells you how statistically significant the effect was, and that doesn't always mean your intervention is going to be effective clinically. I'm going to stop sharing this now and go back to the actual presentation. There's a question asking what a washout period is, and we will be discussing that in the webinar, so don't worry, we will get there. Let me just share the other screen once again. OK, so I've tried to be systematic about this: I've split the webinar into three parts, three areas of appraisal for a randomized controlled trial.
That starts off with study design, then how the authors have actually analyzed the outcomes, and then a few other things I couldn't come up with a better name for, so we're going to call them miscellaneous. OK, so study design. Oh, I forgot to mention: I've taken extracts from an actual randomized controlled trial, so you will see little snippets at the bottom of the slides and it's going to look like a lot of text. Don't read all of that; just focus on what I'm saying and read the bullet points above if you must, because you don't need to read all the text at the bottom. I'm just going to explain the highlighted bits. So, all randomized controlled trials will have some sort of inclusion and exclusion criteria, and what that tells you is, well, it's in the name: who's been included in and who's been excluded from the study. Authors will have different reasons for including and excluding certain people, like contraindications of the drug; you wouldn't want to give an intervention to someone it's contraindicated in. But the more people you exclude, the less generalizable your study becomes, because you're essentially testing your intervention on a smaller sample group. So it's very important to look at these things, because some of the people you exclude might actually make up a large proportion of the population you want to treat. If you're excluding them, will the intervention you're testing actually be useful in a clinical scenario? Its usefulness would be significantly diminished. So let's have a look at that in this example, which we're going to use throughout so you have some continuity: a randomized controlled trial of liraglutide for adolescents with obesity. Now, I'm no expert on obesity or liraglutide, but it is a GLP-1 agonist, so it works through something called the incretin effect, which is implicated in obesity and type 2 diabetes. Let's look at the inclusion criteria first. A history of failing to lose sufficient weight with lifestyle modification is one of the criteria participants had to meet to be included in the study. Well, sociologically, a lot of people with obesity find it difficult even to initiate those lifestyle modifications, so this randomized controlled trial wouldn't even be able to look at people in that part of the population, people who haven't managed to start lifestyle modification, because you had to have a history of trying to improve in order to join the study. Just from that one line, we've already found a bit of the population we can't generalize the results of this study to. Then if you look at the exclusion criteria, they included people with type 2 diabetes in the study, which I think is quite good because the two often go hand in hand, but they excluded people on any antidiabetic treatment other than metformin. I don't know what proportion of people with type 2 diabetes are not on metformin and are on some other sort of diabetic control, but they would all be excluded, and the study results wouldn't apply to them. So that should have you thinking: is this study actually relevant? If you're excluding all these people, who's left? Who are the results actually for?
That's what critical analysis is: not just looking at what they did, but trying to work out why they did it and what it does to their conclusions. They might decide in the end, and we haven't got there yet, that liraglutide is amazing for adolescents with obesity, but they've excluded all these different people, so is it really that good? That's what critical analysis is all about. So, moving on from inclusion and exclusion criteria, let's look at the layout of how studies are run. On the left-hand panel is the classic parallel randomized controlled trial. I'm not going to explain it, but we will discuss some individual bits of it later on. A crossover trial is less common, so I will talk about a few parts of it now, for example the washout period someone asked about. The washout period will make more sense once I explain how a crossover RCT works. Your study group is split into two groups: one gets treatment A, like in the diagram, and one gets treatment B. After a certain amount of time on those treatments, they swap, and there's a period of time in between the swap where they're on no treatment at all, just so the effects of their first treatment wash out of their system and the drug is no longer on board. A bit of analysis you can do here: if the drug you're trying to test has a half-life of five days and your washout period is two days, then by the time your patient group is on the second cycle of treatment they'll still have drug A in their system. So are the results actually representative of drug B? No, they're not, which is why you need an appropriate washout period. Checking that will involve some Googling, depending on the intervention. I don't know the half-life of liraglutide, but if I were trying to extract reliable conclusions from a crossover study of it, I would look it up and check exactly what washout period they used. I hope that answers the earlier question in the chat, by the way; otherwise we can discuss it further in the Q&A section. Another problem with crossover randomized controlled trials is that they tend to be a lot longer, because you have fewer people, and people tend to drop out of these studies because it's just hard to stick to the study protocol, especially if it's long, and depending on the intervention. There was a big crossover trial of the DASH diet and the effect of salt intake on blood pressure, I think, and that study lasted over a year, with hundreds of people who were given very specifically prepared diets by chefs and could only eat that. I don't know how happy you'd be eating low-salt food for three months in a row with no option to have your own food, so I think you can appreciate why people might not follow these things perfectly, or might drop out of the study, because it's difficult even if you're getting paid.
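To put rough numbers on the washout-period point from a moment ago, here is a minimal Python sketch assuming simple first-order elimination; the five-day half-life and two-day washout are the made-up figures from the example, not real pharmacokinetic values for liraglutide or any other drug.

```python
# Fraction of a drug still in the system after a washout period,
# assuming simple first-order (exponential) elimination.
def fraction_remaining(half_life_days: float, washout_days: float) -> float:
    return 0.5 ** (washout_days / half_life_days)

# Made-up figures from the example: half-life 5 days, washout only 2 days.
print(f"{fraction_remaining(5, 2):.0%} of drug A is still on board")   # 76%

# The usual rule of thumb is a washout of around five half-lives,
# which leaves only a few percent of the first drug in the system.
print(f"{fraction_remaining(5, 25):.0%} left after five half-lives")   # 3%
```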
So, moving on from design: randomization. Randomization, in simple terms, means that every participant has an equal chance, an equal probability, of being assigned to the treatment group or the placebo group. It's done so that you get balanced characteristics in both the treatment and the placebo groups, so that you can say the result at the end was actually due to your intervention and not due to a difference in the baseline characteristics of each group. Let's look at this example. If you can read that little table, you'll see that this study has accounted for a lot of important factors that might play a role in obesity: BMI, waist-to-hip ratio, glycated hemoglobin, fasting blood glucose, blood pressure, cholesterol, triglycerides, even quality of life. They've produced a treatment group and a placebo group with balanced characteristics for all of those things, and to me that looks like most of the important confounders that might affect the results, or that might influence obesity, have been accounted for, and accounted for well, because the numbers look very similar in each group, liraglutide and placebo. But some studies might miss important characteristics. For example, say I did this study but didn't account for blood pressure as well. There's a possibility my liraglutide group ends up with completely normal blood pressures while the average blood pressure in my placebo group is something like 160 systolic. That difference between the two groups might make the results look odd, or make them unrealistic. So it's important to go through the baseline table that every study should provide to see whether they've missed anything relevant to the condition they're trying to treat. That's really how you appraise randomization: by going through that table and seeing whether they've missed a characteristic they should have balanced for. OK, moving on from randomization: blinding. Again, I'm sure a lot of you understand what blinding is, and I'm not going to go into what single and double blind are; it's just to do with whether the participants are blinded, or the assessors, or both. It prevents something called assessment bias, because the people running the trial obviously want to see their drug or intervention work, and if they know the patient in front of them is on the treatment, they might be a bit more favorable when measuring, for example, BMI or waist-to-hip ratio in this study's case. They might give them better lifestyle advice, or tell them to eat certain foods, just to make it look like their drug is doing a bit better than the placebo. That's why it's essential that blinding is carried out; at best it should be double-blind, and you can even have triple-blind, where the people analyzing the data are also blinded. Essentially, the more blinding you do, the better, to prevent that assessment bias. That was quite a quick one; you can see that this study was a double-blind trial.
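Going back to the baseline characteristics table for a moment: one way to go beyond eyeballing whether "the numbers look similar" is the standardized mean difference, which the talk doesn't mention but which is a common balance check. The group means and standard deviations below are invented for illustration, not taken from the trial.

```python
# Standardized mean difference (SMD) between two trial arms for one baseline
# characteristic; absolute values below roughly 0.1 are usually read as "balanced".
def standardized_mean_difference(mean_a, sd_a, mean_b, sd_b):
    pooled_sd = ((sd_a ** 2 + sd_b ** 2) / 2) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Made-up baseline BMI summaries for a treatment arm and a placebo arm.
smd_bmi = standardized_mean_difference(35.3, 5.1, 35.8, 5.4)
print(f"SMD for baseline BMI: {smd_bmi:.2f}")  # about -0.10, i.e. reasonably balanced
```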
OK, moving on to the actual interventions. A trial may have something called a run-in period, which is a period of time where you start the drug at low doses and build up to the full dose, or start with some other form of therapy to get participants ready, and that period usually isn't analyzed in the study. Things to look at when you're thinking about the intervention used in a randomized controlled trial: what dose was used? Is it similar to other studies and to practice? Is it much higher or much lower, and why? Could that have had any effect? If a drug is given at a massive dose, you might want to think about side effects as well: has it caused any, and is it safe at that level? You might even want to go back and look at the phase 1 and phase 2 trials for the drug if it's new. Was enough time provided to see an effect? Obesity is a chronic condition, so any change will accrue over a long period of time, and you want to see how long the study was carried out for. It's also the type of condition that relapses once treatment stops; it's notorious for that. So you want to see how long participants were followed up for, and what sort of results that follow-up period showed when they were not on any treatment. And is the study so long that people won't comply, or might drop out? I was hinting at that earlier; attrition is an important thing in longer studies, and we'll discuss it more later anyway. If we look at our live example, this study had a 12-week run-in period where participants were provided with lifestyle therapy to start off with. There's not much you can analyze there. I do find it interesting that earlier they only included people who had already attempted lifestyle therapy, and then they also had this run-in period with lifestyle therapy, which may just have been to get everyone on the same level in terms of what type of lifestyle therapy they'd received, something like that, possibly. I can't really comment on the dose of liraglutide; I'll leave that for you to Google later. The study was carried out for 56 weeks, so about 13 months, which is quite a long period of time, and you should ideally see a change in BMI, or whatever their outcomes were, over that period. They did indeed have a follow-up period of about six months. Is that enough of a follow-up? It depends; obesity can be a lifelong condition, and it all depends on the results, which we will look at later. It's just the sort of thing you need to be thinking about while reading through and working out whether this is a good drug for obesity, and whether it would have to be a lifelong drug or a short-term treatment. Outcomes, then. You have the primary and secondary outcomes, and the important thing to ask is: do the outcomes actually answer the question? They want to look at obesity, so have they measured obesity markers? It's quite interesting in this study: if you read the red boxes I've highlighted, their primary endpoint, the main focus of the study, was the change from baseline in BMI standard deviation score. Now, I hadn't heard of this before, so I'm not sure if you have, but the BMI standard deviation score is just how different a participant's BMI is from the mean BMI of their population. I didn't know why they'd done this at first, but then I realized they've run this study across five different locations, five different countries, so it's important that they assess the results relative to the population BMI of the country each part of the study was carried out in.
What they're assuming is that the BMI standard deviation score is very high at the start, because you've got a lot of obese participants compared to the population mean BMI, which they're assuming is normal, and they would then expect to see it come down closer to that mean. What this doesn't account for is whether the population mean BMI is actually healthy. For example, and this is a made-up example, the mean BMI could be something like 55; your BMI could get closer to that, so your standard deviation score would decrease, but that doesn't mean you're not obese and it doesn't mean you're healthier. So it's important to think about these things and about what the results are actually showing: do they actually answer my question about whether this drug is good for obesity? In this case, luckily, their secondary outcomes were very detailed. They looked at the change in the actual BMI value itself, body weight, waist-to-hip ratio, and glucose metabolism, all factors which measure the burden of obesity in a patient. And, very importantly, and I really do look for this in studies, they looked at quality of life as a secondary outcome, because not everything is about treating the obesity itself; it's also about how the participants live afterwards. I definitely look for that in all randomized controlled trials, and it's something you can critique them on: you can say, fine, it improved BMI, but did that actually change the person's quality of life? Will it actually be relevant in reality? That's how I would approach something like this. You can also look at how often these things were measured and whether they were self-reported. Self-reported BMIs are notorious for being biased, because people report them how they want to report them; I wouldn't want to report a very high or very low BMI. People in the study have signed a waiver, so they might be more inclined to be honest, but it's still best to treat self-reported values with caution.
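As a rough sketch of the BMI standard deviation score described earlier: it is essentially a z-score against a reference population, and the caveat about an unhealthy reference mean falls straight out of the formula. Real pediatric BMI SDS uses age- and sex-specific reference curves, which this deliberately strips away, and every number below is invented.

```python
# BMI standard deviation score (SDS), treated here as a simple z-score:
# how many reference-population SDs a participant's BMI sits from the mean.
def bmi_sds(bmi: float, reference_mean: float, reference_sd: float) -> float:
    return (bmi - reference_mean) / reference_sd

# Invented numbers: a participant's BMI falling from 35 to 32 over the trial,
# against a reference population with mean BMI 22 and SD 4.
print(bmi_sds(35, 22, 4))  # 3.25 at baseline
print(bmi_sds(32, 22, 4))  # 2.50 at end of treatment

# The caveat from the talk: if the reference mean itself were very high
# (the made-up mean BMI of 55), the SDS could look "better" even though
# a BMI of 35 is still obese.
print(bmi_sds(35, 55, 4))  # -5.0, i.e. "below average" for that reference
```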
So, next up, we're coming to outcome analysis. I can see there are five questions in the chat, so I'm just going to have a look at those first, because we've got quite a few piling up. OK, there are a few questions on parallel RCTs and clinical significance, which we'll talk about later, and a bit about my audio: my voice is breaking up. If anyone else is having this problem, please let me know. Unfortunately, MedAll is telling me my network is at 100%, so I don't think there's much I can do about it at the moment. I'm just going to carry on, and hopefully it's recorded perfectly, so if you are having difficulties right now you can always rewatch the video later on MedAll or YouTube. OK, perfect, so we'll move on to outcome analysis now. Don't read the words on the slide; we're going to discuss it first. There are two primary types of outcome analysis: something called per-protocol analysis and something called intention-to-treat analysis. Per-protocol analysis is an analysis of only the participants who stuck perfectly to the protocol. They didn't have an extra snack on the side; they took their drug or placebo every day, at the right time, at the right dose, and didn't forget a single one. Only those participants are included in a per-protocol analysis; everyone else is just excluded. I'll explain intention to treat first and then we'll go into the pros and cons. Someone has asked about the run-in period; I'll come back to that later. In an intention-to-treat analysis, every participant is included, regardless of whether they took the drug, whether they didn't, whether they slipped up, or anything else. Studies have been done on this, and an intention-to-treat analysis is actually much more realistic to what happens in a clinical or GP setting, because not everyone is going to take your drug at the perfect time, on the perfect day, at the perfect dose, and not everyone listens to every single word their GP has to say. So an intention-to-treat analysis, although it will underestimate the effect because you're including people who haven't taken the treatment perfectly, will be a lot more realistic. The per-protocol analysis, on the other hand, will tell you whether the drug itself works, but it's not as realistic in a clinical setting. Currently the gold-standard method is intention-to-treat analysis, as was done in this study, as you can see highlighted there. So if you see only a per-protocol analysis in a randomized controlled trial, that's a big red flag; per protocol should only ever be a secondary analysis, and intention to treat should always be the first thing they do.
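Here is a minimal sketch of the intention-to-treat versus per-protocol distinction on a toy simulated trial; the adherence rate, effect sizes, and all other numbers are made up, and this is not the liraglutide trial's actual analysis.

```python
import random

random.seed(0)

# Toy trial: weight change (kg) on treatment vs placebo. Adherent participants
# on treatment benefit; non-adherent ones end up looking like placebo.
participants = []
for i in range(200):
    arm = "treatment" if i % 2 == 0 else "placebo"
    adherent = random.random() < 0.8              # made-up 80% adherence
    if arm == "treatment" and adherent:
        change = random.gauss(-5.0, 2.0)          # benefit only if actually taken
    else:
        change = random.gauss(-1.0, 2.0)
    participants.append((arm, adherent, change))

def mean_change(rows):
    values = [change for _, _, change in rows]
    return sum(values) / len(values)

for arm in ("treatment", "placebo"):
    itt = [p for p in participants if p[0] == arm]            # everyone randomized
    pp = [p for p in participants if p[0] == arm and p[1]]    # protocol-adherent only
    print(arm, f"ITT mean change {mean_change(itt):+.1f} kg,",
          f"per-protocol {mean_change(pp):+.1f} kg")

# The ITT estimate for the treatment arm sits closer to zero than the
# per-protocol one: more conservative, but closer to real-world use.
```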
OK, moving on to the actual data. The graphs in a randomized controlled trial are usually quite standardized. They all show the number of participants, so they tell you about dropouts, and again we'll come to that later, but they also show you p-values and the effects of the intervention. Ironically, they haven't actually given the p-values on this one, but they have plotted the changes. These are the graphs from this study for the change in BMI standard deviation score and the change in BMI itself, and you can clearly see that over the 56 weeks liraglutide significantly drops both. But you can also see that after the treatment period, in the 26 weeks of follow-up, both the BMI standard deviation score and the BMI itself rose right back up, at a faster rate than in the placebo group. It almost looks like, if you carried the lines on, they would meet and everything would be the same again. So you can see from this that if the drug is used, it will have to be used as a long-term intervention rather than a short-term one, because these people seem to be relapsing and their BMIs seem to be increasing once again. It gives you an idea of what the reality of the situation in clinic would look like for the GP. OK, we'll pause again here and I'll look at some of the questions. Explain the run-in period again: I will do that at the end. And why is intention-to-treat analysis better than per protocol? I'll go through that again in the question section; it's not information you need for the next bits of the webinar. So, we start with long-term effects. We've already touched on this: after they stop the drug, the obesity starts to creep back in, and that is a conclusion in itself that you can draw from the study and an analysis you can do of the drug itself, but you also want to understand the long-term effects of the drug more generally. This was the whole issue with the COVID vaccine: no one knew its long-term effects, and even now I would say it hasn't been that long a period of time. Most studies won't capture the true long-term effects of a drug, which is why you have phase 4 trials, which look at the drug after it's been introduced to the market. So you should always keep an eye out for those. You could also look at the Yellow Card scheme from the UK government to see the side effects of drugs reported after they've been introduced, to get a more realistic idea of the long-term effects. You could do that for liraglutide, for example; I'm sure it's used for other things right now, and you could look it up and see whether any other long-term effects have been reported through that system. That helps fill the gaps. And was the trial too short to draw conclusions from? In this case it doesn't look like it, because you can clearly see a large difference occur in quite a short amount of time, so I think the treatment duration was good enough to find the difference, and the follow-up period was also good enough to look at that relapse. I would say the follow-up period could perhaps have been a bit longer, just looking at the data, because, like I said, the lines seem to be coming back together, and it would have been interesting to see whether they actually did meet or whether liraglutide had a more permanent effect in the long term. Again, that's the kind of thing it's important to notice in the data from a randomized controlled trial and to draw conclusions from. Hopefully you can use these slides as a kind of checklist to remind yourself, because you can't remember all of this at once; it comes with practice. So, attrition bias. Attrition is participant dropout, and why does that create a bias? Let's say we have a sample of 100 people, and 25 of them have severe hypertension while the other 75 don't. Let's say we've done this exact study: we've given half of them liraglutide and half of them placebo, and, for some reason, all the patients with hypertension dropped out. You've completely changed the characteristics of the population, because now your study results are no longer applicable to patients with hypertension. I hope you can see where I'm going with this and why attrition is quite bad, especially if a massive number of people drop out of the study: it changes the baseline characteristics, and if that change leads to a difference between the two groups, the intervention and the placebo, then your results are no longer valid, because you don't know whether they're due to your intervention or due to the change in the characteristics of the groups. In this case, I calculated only about a 16% dropout, which isn't too bad. OK, that's a really good question someone has just put in the chat, and I will talk about it in the Q&A section; good spot. So yes, 16% dropout in this case is not too bad; anything around 25% and above and I would start getting suspicious. You also want to think about why they dropped out. Was it due to side effects, the duration of the trial, a difficult protocol, like I mentioned earlier with the salt study? Actually, I think I'm answering that chat question here: studies should tell you why the participants dropped out, and if they don't, that's another red flag, because it's almost as if they're trying to hide something and not be completely transparent about it.
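A quick sketch of the dropout arithmetic and the rough rule of thumb just mentioned; the enrollment and completion counts are invented to reproduce a figure of about 16%, and are not the trial's actual numbers.

```python
# Attrition (dropout) rate, with the speaker's rough rule of thumb:
# start getting suspicious somewhere around 25% and above.
def attrition_rate(randomized: int, completed: int) -> float:
    return (randomized - completed) / randomized

# Made-up counts chosen to give roughly the 16% figure quoted in the talk.
rate = attrition_rate(125, 105)
print(f"Dropout: {rate:.0%}")   # 16%
if rate >= 0.25:
    print("High attrition: check who dropped out, from which arm, and why.")
else:
    print("Attrition looks tolerable, but still check the reasons given.")
```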
So that's something you can definitely comment on or pick up on in a randomized controlled trial. Then, are the results reliable if many people dropped out? Will the drug actually be effective in real life if so many people are dropping out or not taking it? If 50% of your patients drop out and don't take the drug, what's the point of even introducing it to the market? So, moving on from attrition: funding bias, one of the simpler ones. Pharmaceutical companies love to see their drugs work, so you always want to look at who funded the study and what sort of involvement they had in the actual experiments. In this case, let's read through it: the trial was designed and overseen by representatives from Novo Nordisk. Normally you'd want an external group of people to do everything to do with the study; all the sponsor should do is pay for it and perhaps have people overseeing it. Ideally, sponsors shouldn't be involved in data collection, data analysis, or writing of the manuscript. In this case it's quite interesting: the sponsor, Novo Nordisk, actually performed the data analysis, which I would say is a bit of a red flag. I don't know how much I'd trust that, because the analysis should be completely independent of them; they obviously want their drug to work, and you don't want them biasing the analysis in any way to make it look like the drug is working. But then they also say later on that all the authors had full access to the data, supervised the data analysis, and interpreted the data, which softens the blow, I would say, and makes it a bit more legitimate. Moving on: how generalizable is this? You should already have an idea of how generalizable it is from what we talked about earlier: the inclusion and exclusion criteria, the attrition, and what type of analysis was carried out all combine to tell you about generalizability. Where was the study carried out? In this one, I told you it was carried out in five different countries, which makes it applicable to a wider group of participants. How long was the intervention, and can it even feasibly be delivered in a real-life setting? And are the results even clinically significant? For example, you can have a tiny change; let's say the mean BMI on liraglutide went from 35 to 33. If you have a large enough number of people, that small change can come out as statistically significant. But will a change in BMI from 35 to 33 actually improve someone's life, actually make a difference to their clinical condition? Probably not. So statistical significance is not always the endgame; you also have to look at the actual results and judge for yourself whether they represent clinical significance. And once again, one study can never be generalized to a whole population. Many different studies have to be carried out, in different locations, with slight tweaks to the method, and then all those studies get combined in a meta-analysis to give a much bigger-picture idea; more about that in our next webinar next week.
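To make the statistical-versus-clinical-significance point concrete, here is a small simulation in which a fixed, modest difference in mean BMI becomes "statistically significant" purely because the sample grows, while the difference itself never changes. The means, standard deviation, and sample sizes are invented; it uses SciPy's independent-samples t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented scenario: true mean BMI 35 on placebo vs 34.5 on treatment
# (a difference of questionable clinical importance on its own), SD 5.
for n_per_arm in (20, 200, 2000, 20000):
    placebo = rng.normal(35.0, 5.0, n_per_arm)
    treatment = rng.normal(34.5, 5.0, n_per_arm)
    p_value = stats.ttest_ind(treatment, placebo).pvalue
    print(f"n per arm = {n_per_arm:>5}: "
          f"difference = {treatment.mean() - placebo.mean():+.2f}, "
          f"p = {p_value:.2g}")

# The difference stays around half a BMI point throughout; only the p-value
# shrinks, which is why a small p on its own says nothing about clinical relevance.
```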
So, moving on from this: the Q&A is after the summary, so that's when I'll be answering your questions. Some of you who came to my previous webinar on the critical analysis of lab-based research will think the summary slide looks exactly the same as that one, and it pretty much is, because you want to go through a randomized controlled trial in the same way: go systematically through all the parts, through the study design, through the outcome analysis, and through those miscellaneous sections. If you don't understand something, Google it; I had to Google the BMI standard deviation score because I had no idea what it was. And always be cynical. Ask yourself what else could have affected these results. Could these results not be real, and why? What other confounding variables could there be to explain them? That's how you start to build really good critical analysis skills. And again, no research is perfect, so you will find flaws; it's just a matter of deciding whether those flaws are bad enough that you don't trust the study, or whether they're minor. For example, a minor flaw might be that they've excluded a group of people who make up a standard part of the population; that might be a small thing, because it reduces the generalizability of the results but doesn't actually affect the conclusions. Whereas a real red flag might be, let's think, that they've only done a per-protocol analysis and haven't even attempted an intention-to-treat analysis, or that the funder did everything: the funder collected the data, did the data analysis and statistics, all of it. Those are big red flags which would make you trust the study much less. So it's all about that judgment. OK, I'm going to answer some of the questions now, so if you have any more, please feel free to put them in the chat. Let's go back to the top and start here. What is the difference between clinical significance and statistical significance? I think I covered that towards the end of the webinar: statistical significance can occur even with a tiny difference, but that tiny difference might not be clinically relevant, like the BMI example I gave. Next: why might a researcher choose a crossover RCT over a parallel RCT? A crossover RCT needs fewer people, because each participant receives both the placebo and the treatment, one after the other, with the washout period in between, so you only need about half the number of people. Another reason is that, because every patient takes both the placebo and the treatment, their data can be compared within the same person, which means the variability in the data is much lower; you have very little biological variability because each person acts as their own control, so it's much more powerful in that way. But it also means participants have to stay in the study for longer, which might make them more prone to dropping out. If I'm not answering these questions well, please do say so in the chat and I can try to reword.
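A small simulation of the crossover point just made: when each participant acts as their own control, the between-person variability cancels out of the comparison, so the same true effect is estimated far more precisely from the same number of people. All parameters here are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_effect, n_sims = 40, -2.0, 2000        # all made-up parameters

parallel_estimates, crossover_estimates = [], []
for _ in range(n_sims):
    # Large between-person spread, small within-person measurement noise.
    person = lambda: rng.normal(0.0, 8.0, n)
    noise = lambda: rng.normal(0.0, 1.0, n)

    # Parallel design: different people in each arm.
    parallel_estimates.append(
        (person() + true_effect + noise()).mean() - (person() + noise()).mean())

    # Crossover design: the same people measured on treatment and on placebo,
    # so each person's own baseline cancels out of the difference.
    baseline = person()
    crossover_estimates.append(
        ((baseline + true_effect + noise()) - (baseline + noise())).mean())

print(f"spread of parallel estimates:  {np.std(parallel_estimates):.2f}")
print(f"spread of crossover estimates: {np.std(crossover_estimates):.2f}")
# Same number of people, same true effect (-2), but the crossover estimates
# cluster far more tightly around it.
```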
The next one was: please explain the run-in period again. The run-in period is a length of time before the actual study starts where you're assessing the participants, or giving them a pre-intervention, something like that. Here it was the lifestyle modification: before they even started the liraglutide and placebo, they took the group of participants and gave them some lifestyle therapy, just to get everyone on the same page, because in this case they'd included people who had previously had lifestyle therapy without knowing what kind of lifestyle therapy it was. So this gets everyone onto the same platform. That's essentially what the run-in period is for: to get everyone on the same level, comfortable with what they need to do and with the routine of the trial. Last two questions. Why is intention-to-treat analysis better than per-protocol analysis? Because intention to treat includes everyone, even if they haven't taken the drug every day or at every given time point, and it's better because it's much more reflective of reality. Not everyone is going to take their drug at the right time every day; they might miss a dose for a day, or two, or three. An intention-to-treat analysis accounts for all of that, which gives you a much more realistic idea of how effective the drug will be in a patient population, whereas the per-protocol analysis only looks at people who took their drug every day at the right time, which is just not reality, so it tends to overestimate the effect of the drug. And we've talked about whether you'd expect the study to tell you why people dropped out. How do you tell if something is clinically significant? I guess that comes from experience and reading around the topic, or talking to clinicians, depending on how big the difference is; I'm not quite sure on that one, and I would take it on a case-by-case basis and read up. That's also why I like to look at the quality-of-life score, because that gives you an idea of clinical significance. If the BMI, for example, changed a lot but the quality of life didn't really change, then what was the point of giving the treatment? It might have helped their BMI, but it clearly didn't change their quality of life, so something else is going on there. Next one: if we consider only intention to treat instead of per protocol, how can you confirm whether the drug will work properly for the targeted population? It depends on your targeted population. If you do a per-protocol analysis and a lot of people have dropped out of your study, you're not actually analyzing your targeted population; you're analyzing the new population left after the dropouts, if that makes sense. You've lost a massive chunk of your target population, so even the per-protocol analysis doesn't really tell you about the effectiveness of the drug for the target population, which is why intention to treat is better. As I mentioned earlier, if you don't have many dropouts, then the per-protocol analysis will tell you whether the drug itself is good, but it doesn't tell you how well it will work in people, because people don't always take their drug at the right time, on the right day, or every day, like the doctor tells them to. I'm sure some of you have skipped a dose of painkillers or an antibiotic one day, something like that. That's what the intention-to-treat analysis accounts for and the per-protocol analysis doesn't.
So yes, a per-protocol analysis tells you whether drug A treats condition A, but it doesn't tell you that if you give drug A to a patient, they will take it and their condition will improve; it's that intermediate patient factor that isn't accounted for by a per-protocol analysis. I hope that was helpful. We have plenty of time, so do put in more questions if you'd like. In the meantime: do crossover RCTs have blinding as well? Yes, they do, and they absolutely should, because again people will behave differently if they know they're on a placebo; they might just not take it, or they might do other things differently. Does the intervention determine whether an intention-to-treat or per-protocol analysis is performed, for example with contraceptive pills? From my experience, no. An intention-to-treat analysis should always be carried out, and ideally you'd have both in the randomized controlled trial so you're not losing any information, but regardless of the intervention there should be an intention-to-treat analysis. If you'd like to explain that question a bit further, I can go into more detail; for example, what type of analysis do you think should be done in a randomized controlled trial of contraceptive pills, and why? Then we can talk about it in more detail. What are the key things to look out for when looking at how a study measures its intervention or outcome? That was on one of my slides: it's whether the outcome actually represents what they want to answer. In this case they measured BMI, BMI standard deviation score, and waist-to-hip ratio, which answers their question of whether liraglutide treats obesity. But say they wanted to see whether liraglutide treats hypertension and they didn't measure blood pressure; then they're not actually answering the question they're asking. So it's just about checking whether the outcomes answer the question being asked. In the meantime, while more people type their questions, we're going to move on to the post-webinar quiz, just to see how much everyone has learned. I'm just going to set that up quickly and share my screen, and I'll also put the voting link in the chat. Please do join in; this one will be much easier because you now have a great idea of all the topics we've been talking about. And again, it was a lot of material today, so you can always come back and look over it; it should be on YouTube and MedAll in the next couple of days. Let's wait a minute longer and then we'll get started. All right, we seem to be at a stable number, and more people can join in later, so let's start. OK, 25 seconds. Let's have a look. Perfect, four out of four, that's what we like to see. Let's move on to the next one. Oh, a fifth person has joined, excellent, let's begin. All the questions you were asking are the ones I've now given away the answers to. Wow, everyone got both my questions right. Great, love to see it. OK, I'll stop sharing that and share the slides once again, because they have the feedback form that you need to fill in to get your certificate. Alrighty, so you have the feedback QR code there, and I'm going to put the link in the chat as well. If you do have more questions, I will be here for a bit longer, so do post them. I saw the one asking for my email, and I will put that in the chat.
There is a link to the feedback form as well, and do join us for the next one: a critical analysis of, essentially, how to critique a meta-analysis. That's something I think people struggle with quite a lot because it's so technical, and we'll be going through each of the steps and seeing what's good and what's bad about them. As for the email: anyone here, from any of the webinars, can message that address if questions come to mind later. We monitor it quite frequently, so I will get back to you quickly. I'll be here for a little while longer and I'll leave all this information up, so do ask any more questions if you have any, and please do fill in the feedback form; it really helps us. Otherwise, thank you for attending. I hope it helped, I hope you learned something, and I hope you'll join us for our next webinar. Alrighty, it doesn't look like anyone has any more questions at the moment. If you do have some afterwards, my email is there, which you should be able to see after I log off. But again, thank you for joining, do fill in the feedback form, and come see me at the next webinar as well. Have a good evening, everyone.