Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
Okay, why don't we make a start and people will join as they come in. So good evening everyone, and thanks for joining us today for this webinar, which is all to do with the different types of studies you're going to encounter when you read the medical literature. What I'm basically going to do is go over the different types of studies, because there are a lot of them, and we're going to talk a bit about the advantages and disadvantages of the different studies and when you might use one instead of another, which is quite closely related to those advantages and disadvantages. So you might be wondering what the point of learning all of this is. As a healthcare professional, it's really, really important to understand how the different studies are carried out, because when you read the literature, the only way for you to understand it is to know how they work. And knowing some of the pros and cons is also really important for being able to critically analyze the studies, because if you can't do that well, you're going to have to believe every study you read, and you shouldn't believe everything you read on the internet. Lastly, I haven't put this point on the slide, but it's also important for your exams in medical school and as a doctor as well. So, for my favorite part of these sessions, we're going to do a quick Mentimeter quiz before we get stuck into it, really just so that I can check how much you learn by the end of the session. I'm just going to share that Mentimeter screen now, and if you could all join up once the screen is being shared, that would be great. You should be able to see my screen right now, so go to menti.com and use the code. I'll wait a couple of minutes for some people to join in and then we'll get started. If some of you have been to the previous webinars, you know the pattern already, you know what we're going to do. Perfect, we've got a few people joined in, so I'm going to get this started. You have about 15 seconds to read and then choose the answer you think is right; there are no speed points here. Oh, okay, so no one got that one right. Well, it's a good thing you're at this webinar then; we'll talk about why cohort studies don't really have recall bias, and how the best part about case-control studies is that they're super cheap. Well, I say super cheap, they're not quite, but you get what I mean. Okay, next question. Quite an important question, actually. I put that in there because I thought everyone knows about the NICE guidelines, so you'd go for that, which made it a good distractor. Yes, it's the Cochrane Collaboration, and I'll chat a bit about that during the presentation as well. So, last question of this quiz. Good, excellent. So you already have some understanding of risk ratios, which is really helpful. Back to the PowerPoint, let me just open that up. Okay, back to the PowerPoint. I did want to mention that if anyone has any questions during the meeting, please post them in the chat; there will be a Q&A section at the end, so we'll go through them then. We'll first start off by highlighting the hierarchy of studies, just going over the different types of studies and why some give you more robust information than others. I've got this lovely little diagram, which everyone will have access to later on YouTube and MedAll.
So as you go up the triangle, up the pyramid, the quality of evidence the study provides you increases. A meta-analysis really provides the most robust evidence of a treatment working, of an exposure being linked to a disease, of any two variables you're trying to associate together, really. And that's because it combines data from a lot of different studies, and it's one of the designs we're going to deal with in particular. Then below that you have your experimental studies. I'm sure everyone knows about randomized controlled trials, and you know that they're the gold standard for proving causation and for getting drugs on the market, really. They're heavily controlled and very stringently carried out, with placebo groups, randomization and double blinding, which is why they're quite high up in that pyramid. And then you have the observational studies, where you just watch what happens and infer from that. You're never intervening, which is why they tend to give lower quality evidence, because they suffer from more bias, which we'll go through. In particular this time we'll cover cohort studies, case-control studies, randomized controlled trials briefly, and meta-analysis. So now that you're all aware of which studies are better and which are slightly less good, we'll go into the observational studies and work our way from the bottom up. We'll start with the case-control design. I'm sure some of you have heard about this design of study, and you might have actually done one, so feel free to put it in the chat if you already know all of this. I have this lovely diagram here, which I'm very proud of, and it kind of explains visually what a case-control study involves. You start off at the point where it says 'onset of study': you're on your ward, or wherever you are on your computer, and you look through patient databases to find out who has the disease and who doesn't. So you're in the present and you've worked out, okay, this group of people has this disease and this group of people doesn't, and you think of something that might have caused that disease in the past. Then what you do is look back into the past to see whether the cases were exposed to something you think might have influenced their disease pathology, and you do the same for the controls. Then what you can do is calculate something called an odds ratio, and what that works out is the odds of a diseased individual having been exposed compared to the odds of a control individual, without the disease, having been exposed. Now, clearly, if your cases have a lot more exposure than your controls, then you start to think, okay, maybe my exposure is linked to the disease, and that would look like an OR, an odds ratio, greater than one. It just tells you that the odds of your cases having been exposed are greater than those of the controls. An odds ratio of less than one is the opposite of that, which you guys know, since most of you got that answer correct in the Mentimeter quiz. Interestingly, an odds ratio of one means that exposure and disease are unlinked, so there's no real correlation between the two. Then you can do statistical tests to see whether these differences, these odds ratios, are significant, and the main thing everyone's probably heard of is the confidence interval. What a confidence interval tells you is how accurate your odds ratio is.
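To make that arithmetic concrete, here is a minimal sketch in Python with a completely made-up 2x2 table; the numbers are purely illustrative and not from any real study.

    # Hypothetical case-control 2x2 table (made-up numbers):
    #                 exposed   not exposed
    # cases (disease)    40          60
    # controls           20          80
    cases_exposed, cases_unexposed = 40, 60
    controls_exposed, controls_unexposed = 20, 80

    # Odds of exposure among cases and among controls
    odds_cases = cases_exposed / cases_unexposed            # 40/60
    odds_controls = controls_exposed / controls_unexposed   # 20/80

    odds_ratio = odds_cases / odds_controls
    print(f"Odds ratio: {odds_ratio:.2f}")  # about 2.67, i.e. greater than 1

An OR of about 2.67 here just says exposure was more common among the cases than the controls, exactly the 'greater than one' situation described above.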
Because if you think about it, you've only sampled a little bit of the population when you've done your study, so your confidence interval tells you how close that sample odds ratio is likely to be to the true odds ratio of the entire population. And the fun part is that all you have to do is look at it and see whether it overlaps one, because if the confidence interval goes through one, that means there is a possibility that the odds ratio could actually be one when you consider the entire population. So if that happens, it is very likely that the results are not significant. If your odds ratio's confidence interval does not overlap one, it's much more likely to be significant. We're not going to go through how to do the statistical tests, because there are a lot of programs that do that; you're never going to do it manually, and I have no idea how to do them manually. So that's the design, and now we'll go through some pros and cons. Like you saw in the Mentimeter quiz, some of the pros are that it's cheap and quick: you're just sitting at your computer and you work these things out from patient databases, and then you can look through their history to see whether they were exposed to whatever you're looking for, though you might also have to interview them and look backwards. It's also quite useful for rare diseases, because you already have the list of patients who have the disease; you're not waiting for someone to develop a disease that's quite rare. And you can look at multiple exposures, because all you have to do is ask the patient or the volunteer whether they had 10, 20, 50 different exposures, if you so please. Now, you'll see the cons table for this type of study design is quite hefty. You have selection bias, because you're the person sitting there at your computer deciding who to include and who not to include in the study, which creates a little bit of bias; it might just be patients at one hospital, or patients in a certain geographical area, which then might not be representative of the overall population. You can then also have control group selection bias, which I've put separately, because finding your cases is easy, but then you have to look through the database and work out, okay, this person doesn't have the disease, so they can be a control. I'm sure some of you can see the problem there, because you might pick and choose your controls to make your data look better, and please don't do that, it makes research a mess for everyone. But that is a disadvantage of the case-control design. Recall bias is probably one of the most important ones here, because like we said, we're looking into the past, right? If you're looking through a patient's notes or their history, that's fine, there's less risk of recall bias there. But if you have to go and interview patients, then they might remember things from so far back in the past differently from how they actually were. And that creates a problem, because then you don't know whether they were exposed first and then got the disease, or whether they actually got the disease first and were exposed later and they just remember it incorrectly, which is why that's a huge issue in case-control studies. And then confounding variables, like any observational... well, no, actually cohort studies are quite good at this, but case-control studies suffer from only being able to detect correlation and not really causation.
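As a rough illustration of the 'does the interval cross one' check described above, here is a small sketch using the same made-up 2x2 table as before and the standard log-odds-ratio approximation; again, the numbers are illustrative only.

    import math

    a, b = 40, 60   # cases: exposed, unexposed (hypothetical)
    c, d = 20, 80   # controls: exposed, unexposed (hypothetical)

    odds_ratio = (a / b) / (c / d)
    # Approximate standard error of the log odds ratio (Woolf's method)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    log_or = math.log(odds_ratio)

    # 95% confidence interval, back-transformed onto the OR scale
    lower = math.exp(log_or - 1.96 * se_log_or)
    upper = math.exp(log_or + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")

    # The quick visual check from the talk: does the interval cross 1?
    if lower <= 1 <= upper:
        print("CI crosses 1 -> quite possibly not significant")
    else:
        print("CI does not cross 1 -> more likely to be significant")

With these made-up numbers the whole interval sits above one, so the sketch would flag the result as more likely to be significant.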
Or at least, if you read a case-control study, you wouldn't trust it for causation that much, because of confounding variables and all the other negatives we talked about as well. Let me give you an example of a confounding variable which could really mess up the conclusion of your study. Let's say you're looking at how sun exposure affects skin cancer. You do your case-control study, you do everything, and you find your odds ratio to be greater than one. So you work out, okay, sun exposure is leading to increased rates, increased odds I should say, of skin cancer. But it could just be that the people who had sun exposure were using some sort of carcinogenic sunscreen, so it was actually the sunscreen that led to the skin cancer, not the sun exposure. You never truly know what's led to the disease, because you're always looking back at the past. Now, I'm not saying that other studies don't have confounders in them; it's just a lot more apparent in the case-control design. So I hope that made sense to you, and please, if you have any questions, post them in the chat, because this is a little bit mind-bending. You'll see that on most of my slides I have a little reference down there, which you should go and have a look at if you want a bit more detail about the things I've talked about in this webinar. So we'll move on to the cohort design, which is also an observational study, so you're just watching people. But here you start off by finding your healthy people, or your patient group. Some of the people in the group you've identified are going to be exposed to something; you're not going to expose them, they're just naturally going to be exposed because of their lifestyle, or for many other reasons, and then you just watch them and follow up at different times to see whether they've developed the disease or not. So this is more of a prospective study, rather than a retrospective study like the case-control design was. That's why you don't really have much risk of recall bias here, because you're following people up as time passes, which puts it higher up in the hierarchy of studies anyway. And here, from your data, you can calculate what's called a risk ratio: the risk of someone developing the disease if they've been exposed compared with if they've not been exposed. It follows the same sort of rules as the odds ratio: a risk ratio of less than one would suggest the exposure is protective against the disease, greater than one would be the opposite, and a risk ratio of one would mean they're unlinked. And once again, you do the same type of statistics, your 95% confidence interval, with the same rules as before: if it crosses one, your results are likely to be non-significant, because it could be that the true risk ratio is one, which would mean the exposure and disease are unlinked. So, moving on and looking at the pros and cons of cohort designs, you can see there are a lot more pros here. You have no control group selection bias, because you just get one big batch of healthy people and you watch them as they get exposed or stay unexposed; you're not selecting who's going to be exposed or unexposed, so you can't add any bias that's not already there in the study. Then you also don't have recall bias; again, we talked about that.
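To tie the risk ratio arithmetic above to something concrete, here is a tiny sketch with invented cohort numbers, purely illustrative and not from any real study.

    # Hypothetical cohort: 200 exposed people, 30 develop the disease;
    # 300 unexposed people, 15 develop the disease (made-up numbers)
    exposed_total, exposed_cases = 200, 30
    unexposed_total, unexposed_cases = 300, 15

    risk_exposed = exposed_cases / exposed_total         # 0.15
    risk_unexposed = unexposed_cases / unexposed_total   # 0.05

    risk_ratio = risk_exposed / risk_unexposed
    print(f"Risk ratio: {risk_ratio:.1f}")  # 3.0 -> exposed group has three times the risk
    # RR > 1: exposure linked to more disease; RR < 1: protective; RR = 1: unlinked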
You're looking to the future, not the past. You can also assess multiple outcomes, because at each follow-up you can measure multiple things, so you can ask multiple questions. It also provides some information on causation, because you've seen the exposure happen before the disease, so it's a more cause-and-effect way of looking at things. And then finally, you can also calculate incidence, because you can see, in an entire population followed over a certain time, how many people developed the disease, and if you divide that by the total number followed, you can work out the incidence of new cases. But the problems here are that cohort studies are a lot more expensive and time-consuming, because you have to keep following people up, and if the disease is rare, you're going to be following people for years and you're going to need huge numbers, which means you need to compensate people as well. So it's expensive and time-consuming. And there is selection bias in the overall group: similar to the case-control study, you might end up selecting people from one geographical location, one hospital, one nationality, things like that. You could also have a healthy volunteer effect; I'm sure some of you have heard of that before, where the people who are keen to take part in these studies go and take part in them, which might not be representative of the entire population, because these people generally tend to be healthier, take more care of their health, and be more aware of health problems and ways to improve their health. And then quite an important problem in prospective studies is loss to follow-up. You might go to follow someone up but they just don't pick up your call, or they don't live in the same place anymore. It just happens that people fall out of your study, and that can cause issues in terms of your n number, so you might reduce your n number because of follow-up loss. It can also change the characteristics of the group you have: if all the males ended up dropping out of the study, your study is completely changed, you're only looking at females now. So loss to follow-up can reduce your n number and also completely change the characteristics of your study population. Good, I hope that's all not too heavy, not too boring. The next bit I'm hoping a lot of you already know about, so it won't take as long: randomized controlled trials. Everyone's heard of them, and I don't think anyone really likes reading them, but they're the gold standard for testing a drug in industry and in academia, so we must know about them. We won't go through critically analyzing them in this webinar, but there will be an upcoming webinar where I go through a checklist of how to analyze a randomized controlled trial and decide whether it's robust or not. I'm sure everyone knows the methods of how this happens. You have a group of patients and you randomize them; usually you get an external company to do it so that there's no bias, and you have blinding, so the assessors and the patients don't know whether they're going to be given the drug or the placebo. That's really important, because otherwise it can affect your measurements and how you analyze the data. So you randomize, you give your treatment group the intervention, you give your control group the comparison, and you measure whatever outcome you want. Importantly, this is an interventional study.
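As a minimal illustration of the randomization step just described, here is a sketch with hypothetical participant IDs; in a real trial this would be done by an independent party with proper allocation concealment, so treat it only as a sketch of the idea.

    import random

    # Hypothetical list of trial participants
    participants = [f"participant_{i:02d}" for i in range(1, 21)]

    rng = random.Random(42)   # fixed seed only so the sketch is reproducible
    rng.shuffle(participants)

    # Simple 1:1 allocation: first half to the intervention arm, second half to control
    half = len(participants) // 2
    allocation = {p: ("intervention" if i < half else "control")
                  for i, p in enumerate(participants)}

    for person, arm in sorted(allocation.items()):
        print(person, "->", arm)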
So unlike the other designs, which are observational, here you actually intervene and give patients something. And again, you would measure a risk ratio and do the same statistics as before. But there are two main ways of analyzing all the data you get from a randomized controlled trial. We have the intention-to-treat analysis, which looks at absolutely everyone in the study, no matter whether they took the drug or didn't take the drug, whether they followed your guidelines perfectly or didn't follow them at all. For example, if you tell your volunteers not to smoke for the duration of the study but half of them do smoke during the study, an intention-to-treat analysis would still include all of them. A per-protocol analysis, on the other hand, only analyzes the people who stuck perfectly to your study guidelines. What we currently prefer is the intention-to-treat analysis, and that's because it gives you a much more realistic overview of what's going to happen in the population if your drug were to be licensed: not everyone is going to follow the instructions their doctor gives them perfectly. People do all sorts of things, and it's all down to their health beliefs, so it's very important that an intention-to-treat analysis is carried out. Another reason you do it is to avoid excluding too many people. We mentioned earlier, for the cohort study, that you can have loss to follow-up; here, if you start excluding people who didn't do your study perfectly, then again you're losing crucial information. For example, and I'm sorry I keep using males and females, but it's just the simplest example coming to mind right now: let's say all the males decided to start smoking in a study where you weren't meant to smoke at all. If you exclude all of them, are the results of your randomized controlled trial really reliable and generalizable to the population? No, they're not. So that's why we really prefer an intention-to-treat analysis, and if you're reading a study and they've only done a per-protocol analysis, you should be suspicious; that's what I would suggest. So, pros and cons. Pros: they assess causation very stringently, because you intervene, you're not just leaving it up to chance for an exposure to happen; you control dose, you control frequency, you control every little bit of the study. You don't have any recall bias, again because it's prospective, and there are minimal confounders, because you have inclusion and exclusion criteria, so you prevent things that might create problems in your study. And again, you can assess multiple outcomes at once, depending on what you measure at your follow-up times. Cons: expensive and time-consuming; I'm sure you all know they're silly expensive. Then there's also the healthy volunteer effect, like before; it's always the healthiest, fittest people who tend to sign up for these randomized controlled trials, and that makes your study a little less generalizable. Your strict inclusion and exclusion criteria may also affect the generalizability of the study, because if you exclude everyone who smokes, everyone who has hypertension, everyone with diabetes, who are you actually going to use the data to treat? And then attrition bias and loss to follow-up, like we mentioned before; same issues there. Attrition bias is a particular issue.
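Here is a toy sketch of the intention-to-treat versus per-protocol distinction described above, using entirely invented follow-up records; the field names and numbers are only for illustration.

    # Each invented record: the arm the person was randomized to, whether they
    # adhered to the protocol, and whether they had the outcome event.
    records = [
        {"arm": "drug",    "adhered": True,  "event": False},
        {"arm": "drug",    "adhered": False, "event": True},
        {"arm": "drug",    "adhered": True,  "event": False},
        {"arm": "placebo", "adhered": True,  "event": True},
        {"arm": "placebo", "adhered": False, "event": True},
        {"arm": "placebo", "adhered": True,  "event": False},
    ]

    def event_rate(rows, arm):
        arm_rows = [r for r in rows if r["arm"] == arm]
        return sum(r["event"] for r in arm_rows) / len(arm_rows)

    # Intention-to-treat: analyze everyone as randomized, adherent or not
    print("ITT:          drug", event_rate(records, "drug"),
          "vs placebo", event_rate(records, "placebo"))

    # Per-protocol: keep only the people who followed the protocol
    per_protocol = [r for r in records if r["adhered"]]
    print("Per-protocol: drug", event_rate(per_protocol, "drug"),
          "vs placebo", event_rate(per_protocol, "placebo"))

Notice the two analyses can give different impressions from the same trial; the intention-to-treat numbers reflect everyone who was randomized, which is why it's the preferred approach.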
It means people are dropping out of your study, and the reason could be some bad side effects of the drug which the study isn't actually measuring. So if you read a study and you find a lot of attrition, be careful, because the drug might be causing some side effects that you need to do a little bit more reading about. And then funding bias: usually the only people who have money to fund such expensive studies are pharma companies, and there could be some sort of conflict of interest there. But usually the pharma company tends to disclose any of these conflicts of interest, and there are different analyses you can do to make sure there isn't too much of a funding bias in the trial. Okay, sorry, I'm talking a lot, so it's making me cough a little bit. We'll move on now to the final bit, the most beautiful studies, the critical appraisal type of studies. We'll focus on the meta-analysis; we're not going to go over systematic reviews, just because the meta-analysis is more quantitative and gives you more robust evidence for a certain intervention. It's called a meta-analysis because you're analyzing previously analyzed data, so it's quite meta like that. How you'd start off is, first of all, decide what you want to do your meta-analysis on. Say you wanted to look at the effect of paracetamol on hair growth; I'm being quite random here, just bear with me. So you decide your topic like that and create a protocol, and what I mean by create a protocol is that you need to decide what types of studies you're going to include and exclude in your analysis. You might only want to include studies which used the regular dose of paracetamol, or only studies which ran for three weeks, for example. You have to be very strict about that. So you start off by identifying all the studies which have looked at your outcome or issue, and then you narrow it down; you start excluding things at different levels based on your inclusion and exclusion criteria. Then probably one of the most important steps is to assess the quality of the studies, because there might be some really bad studies out there, and you want to know. Like we saw in the Mentimeter questions, the Cochrane Collaboration has a really useful tool for analyzing the quality of randomized controlled trials, or other studies, for a meta-analysis. They also have very useful tools that take you step by step through a meta-analysis if it's your first time, and it looks quite similar to the diagram on our left here. It's a really good resource; really, everyone should be using it when doing a meta-analysis to make sure they're consistent and up to scratch, but some people don't, and looking at the quality of the studies will help you decide what's good and what's not. Then it's all about extracting the data you want from the studies, which is very manual labor, very time-consuming. But then you're able to analyze the data. You combine the effects from the different studies, and there are different ways of doing that, such as Cohen's d and other measures of mean difference, but we're not going to go through that in this webinar. An important test that you should do, though, is called the heterogeneity test, also known as the I-squared test. It tells you how variable the studies you've included in your meta-analysis are, and you can do a statistical test on that to see whether the heterogeneity is significant.
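As a rough sketch of what the I-squared statistic is doing under the hood, here is a minimal calculation on made-up per-study effect sizes; the log risk ratios and variances are hypothetical and chosen only for illustration.

    # Hypothetical per-study log risk ratios and their within-study variances
    effects = [0.25, 0.40, 0.10, 0.55]
    variances = [0.04, 0.03, 0.05, 0.02]

    weights = [1 / v for v in variances]   # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

    # Cochran's Q: weighted sum of squared deviations from the pooled effect
    Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1

    # I-squared: the share of total variation attributable to between-study heterogeneity
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    print(f"Q = {Q:.2f} on {df} df, I-squared = {I2:.0f}%")

Higher I-squared values mean more of the spread between studies is genuine heterogeneity rather than chance, which is exactly the 'how variable are my studies' question raised above.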
And if it is, there is a possibility that your studies are too variable to draw a sensible conclusion from your meta-analysis. But then again, you don't want too little heterogeneity either, because then your results might not be as generalizable as you want, and that's really what the outcome of a meta-analysis should be: to make individual studies more generalizable. Then you look at the effect size in the different studies and combine the effect sizes using different methods, like I mentioned; the Cochrane Collaboration goes into more detail on that, and it depends on the types of studies you've included. Okay, so some pros and cons here. The pros: you combine information from multiple studies, and like we said, first of all that will increase your sample size. Let's say you have 10 studies with 100 people in each; individually, by themselves, they stand quite weakly, but if you do a meta-analysis on them, you've got essentially an n number of 1,000. So if that meta-analysis then shows significant results, you can be much more certain that the results are actually true compared to the individual studies you've included. It also increases the generalizability of single studies. You're never going to find two studies that are exactly the same, because what would be the point of publishing that? So when you do a meta-analysis, you will have some heterogeneity, which is what I said earlier: some is good, too much is bad. That heterogeneity actually allows you to make broader conclusions from the information you've read, so it increases the range, I would say, of whatever you're looking at; it's quite powerful in that way. And it ends up quantitatively summarizing a field: it's very easy to look at the diagram you get from a meta-analysis and just visually understand what's going on in the field at that time point. Then another thing you can do is a sensitivity analysis, which allows you to see whether a certain decision could have influenced the entire result of your meta-analysis, as shown in the sketch after this section. If you think back, we said you have to have inclusion and exclusion criteria, but those are subjective; who decides what to include and what not to? So you can change your inclusion criteria once you've finished your first meta-analysis, do it all over again, and if you see that the results are pretty much the same, you know that that inclusion criterion, for example, didn't really have a massive effect on the study at the end, so it was all right to include it. But if you do find a significant difference, you have to think to yourself, okay, why is this factor so important to the outcome of my meta-analysis? Then you might actually get some hints to do more studies on that. And then subgroup analysis just means you can look at different groups within your meta-analysis and analyze them separately to make more specific conclusions. On to the problems; we've talked about quite a few of them already: subjectivity in the inclusion criteria, and heterogeneity. One we haven't covered is that meta-analyses often do not look at side effects, which was something I also mentioned for randomized controlled trials. Meta-analyses mainly look at things like overall mortality over five years, or total all-cause mortality, things like that; they don't often look at side effects.
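In the spirit of the sensitivity analysis described above, here is a small leave-one-out sketch: re-pool the made-up effects with each study dropped in turn and see whether the overall answer moves much. All numbers are hypothetical, and fixed-effect inverse-variance pooling is used purely for simplicity.

    # Same hypothetical log risk ratios and variances as in the I-squared sketch
    effects = [0.25, 0.40, 0.10, 0.55]
    variances = [0.04, 0.03, 0.05, 0.02]

    def pooled_effect(es, vs):
        # Fixed-effect inverse-variance pooling
        weights = [1 / v for v in vs]
        return sum(w * e for w, e in zip(weights, es)) / sum(weights)

    print(f"All studies:      {pooled_effect(effects, variances):.3f}")
    for i in range(len(effects)):
        es = effects[:i] + effects[i + 1:]
        vs = variances[:i] + variances[i + 1:]
        print(f"Dropping study {i + 1}: {pooled_effect(es, vs):.3f}")

If dropping any single study swings the pooled estimate a lot, that is the kind of decision worth looking into more closely, as the talk suggests.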
So it's good practice, when you're reading a meta-analysis, to look at the included studies as well, to see whether there were any side effects or any issues there. Then there's publication bias, which actually combines with not having enough studies in the field. Publication bias is everywhere: people mainly want to publish studies that showed positive results, so you're losing any negative results that might have been there, which will skew your meta-analysis towards the positive end. But you can look into that using something called a funnel plot, which checks whether studies with negative results were also published among the studies you found and in the overall literature, and if the asymmetry looks significant, you know, okay, maybe I have to be a bit cautious interpreting the results here. And then, if you're looking at quite a niche field, there might just not be enough studies to make a worthwhile meta-analysis. But as it goes, if all goes well, your meta-analysis should have all the pros, all the advantages, and it can be done from home, which is quite nice. If anyone's looking to do a study or to get into this, I would read around the literature, find a topic, and then maybe start with a systematic review, which I haven't gone over, but it's just a qualitative assessment of the studies in a field. Then you can move up to a meta-analysis using the Cochrane Collaboration handbook I've mentioned down there in the references. Excellent. I'm sorry if I've gone too fast over these things, because they are a bit challenging, but if you have any questions at this point, please do post them in the chat. Actually, before you do that, what we'll do is the Mentimeter quiz for afterwards, so I can actually see whether you've taken something away from this, and hopefully you have. Let me just get that ready; in the meantime, please post questions in the chat and I will get to them after this. Okay, my Mentimeter screen should be sharing, same as before. Please go to menti.com and join using the code on my screen. We'll wait a couple of minutes for people to come in. Hopefully you're now much more ready to answer the questions I have in store; I promise they're not too difficult, so please don't be shy. A couple more people... excellent. All right, we have a few people in here, so why don't we get started? First question; good luck, everyone. An easy one there to get us started, nice and easy. Good, I'm glad the majority got that right. It's a meta-analysis, because like we were saying, it combines the data from multiple randomized controlled trials, or multiple cohort or case-control studies, so overall it gives you a much more robust understanding of the field. Okay, we're ready to go. Question two. So we actually said that it's the intention-to-treat analysis that's better, because it includes everyone and gives you a more realistic overview of what will happen in the population if you introduce your drug. A per-protocol analysis looks only at the people who perfectly followed the rules of the trial, which is not as representative as you want it to be, and you diminish your n number as well if you do such an analysis. Okay, next question. I know it's difficult, you've just done this... yes, nice, case-control study. A cohort study usually looks forward.
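Picking up the funnel plot mentioned a little earlier, here is a bare-bones sketch of the idea using matplotlib and a handful of invented studies; the effect sizes and standard errors are purely illustrative.

    import matplotlib.pyplot as plt

    # Hypothetical studies: effect size (log risk ratio) and standard error
    effects = [0.25, 0.40, 0.10, 0.55, 0.30, 0.45]
    std_errors = [0.20, 0.17, 0.22, 0.14, 0.10, 0.25]

    plt.scatter(effects, std_errors)
    plt.gca().invert_yaxis()  # most precise (smallest SE) studies at the top
    plt.axvline(sum(effects) / len(effects), linestyle="--")  # crude average line
    plt.xlabel("Effect size (log risk ratio)")
    plt.ylabel("Standard error")
    plt.title("Funnel plot (hypothetical studies)")
    plt.show()

A roughly symmetric funnel around the central line is reassuring; a gap on one side, typically where the small negative studies should be, is the visual hint of publication bias described above.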
So when you look forward, you don't tend to have as much recall bias, whereas in a case-control study you look into the past, so people remember things differently; I don't even remember what I ate for breakfast. So you have recall bias when you look into the past. Okay, that is it for the quiz, so thank you everyone for taking part in that, I very much appreciate it. We'll go back now, and please post any questions in the chat. I'll give it a couple of minutes just for people to type their questions in, but if there are none, we'll move on shortly. It's quite difficult to learn so much information all at once in the span of 45 minutes, so I recommend you go and find this webinar on YouTube afterwards, where it will be posted, or on MedAll; the slides will be there, you can go through them at your own pace, my explanations will be there as well, and the references, if you ever want to have a look. Okay, just a minute more. Okay, if there are no questions, we'll go to the final slide. I would recommend doing the feedback form now; the QR code is up here, and I'm also going to put it in the chat. That's how you get your certificate, so you don't want to miss out on that. And also sign up for our next webinar on the common methods in lab research, because you'll want to know how to understand lab research, as a lot of the reading you're going to do will actually be lab research. Let me just put the link to the feedback form in the chat. The link to the feedback form is in the chat, so do fill that in; I'll leave this up for a few more minutes before we all log off. I hope you guys enjoyed it, and thank you for joining us once again. I know the feedback form takes a little bit of time. So guys, thank you so much for joining; I'm going to log off now, and please do catch this webinar, I'll upload it as soon as possible on YouTube and MedAll, and you'll have the link and QR codes available there as well. You should hopefully have gotten an email to fill in the form too. Great work everyone, and I will see you at the next webinar in a couple of weeks' time. Bye everyone.