Summary

This medical webinar provides an in-depth look at how to critically appraise the sections of a meta-analysis. It shows medical professionals how to tell whether a meta-analysis is sound, trustworthy and generalizable, so they can draw on the most relevant and applicable data when forming patient treatment plans and healthcare advice. Specifically, it works through study design, outcome analysis and miscellaneous topics, with detailed explanations built around an example paper: a meta-analysis of sodium restriction in patients with heart failure. Time is allocated during the session to answer questions or concerns, and a quiz is included to get participants warmed up.
Generated by MedBot

Description

You will come across meta-analyses throughout your career as a medical student and clinician, and you will need to draw conclusions from them. To do this effectively, you must be able to tell whether a study is robust enough to support sound conclusions. This webinar will teach you how.

Learning Objectives:

  1. Recall the components of a meta-analysis
  2. Understand how to critically appraise the different components of a meta-analysis

Learning objectives

  1. Understand the purpose and importance of meta-analysis within healthcare
  2. Critically appraise the different sections of a meta-analysis, including study design, inclusion/exclusion criteria, bias assessment, and outcome analysis
  3. Understand the components of a meta-analysis methodology, including the search strategy and the Cochrane risk of bias tool
  4. Recognize how to differentiate between a good and a bad meta-analysis
  5. Analyze a real-world case study to practice critical appraisal of meta-analyses
Generated by MedBot

Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

All right, why don't we get started? Once again, thank you for joining me today. I know quite a few people signed up to this webinar, and I'm guessing it's because meta-analyses on the surface seem quite challenging and tricky: there are lots of studies, numbers and analyses involved. But we'll try and go through it simply today, so you can understand and analyze a meta-analysis. At any point, if you have questions, please put them in the chat; I'll go through them at the end and in little breaks throughout the webinar, because this tends to be quite a heavy topic, so we won't rush through it too quickly.

The learning objectives are quite general: how to critically appraise the different sections of a meta-analysis. And why do we care? Because they're the pinnacle of clinical data. They're essentially a combination of different randomized controlled trials, so they give you an even higher level of evidence than randomized controlled trials. That makes them quite important for healthcare professionals to know about and know how to appraise. As in my last webinar, I will not be going through what a meta-analysis is, but rather how to appraise it: tell if it's good, tell if it's bad, and decide whether you should trust it or not.

And as those of you from my previous webinars will know, I love to start things off with a little Mentimeter quiz. So I'm just going to share that quiz; give me one second while I start it up. You should all be able to see my screen now. I'm also going to put the code and the voting link in the chat, so please do join. It's not going to be stressful, I promise; it's just a couple of questions to get us warmed up, and then we can move on to the actual content. I'll give it a couple of minutes for people to join in; it usually takes a bit of time. Thank you, Strawberry. If you need the link again, it's in the chat. All right, in one minute we'll get started.

All right, let's get going. Question number one, no points for speed; you'll have 25 seconds to read the answer options. So let's do it. All right, let's see. Really good. Yeah, the majority wins: it's to understand how generalizable it is. And we're going through a real paper as well in this webinar, so I'll exemplify this a bit further. Then next slide. My five players are ready, let's get started. A hefty one; don't worry, if you don't know it, take an educated guess. OK, we're on the right track. I will explain that a bit further in the outcome analysis section, so don't worry about not understanding it right now. I'll stop sharing that and let's get back to the webinar. Again, if you have questions, just put them through on the chat. I will get to the ANOVA question later; it's not quite to do with meta-analysis, I just put that in as an answer option, but I can explain it quickly should you need.

OK, let's go to the PowerPoint. So the different areas we're going to focus on, exactly the same as my previous webinar, are study design, outcome analysis, and other things I couldn't come up with a name for, so we've called them miscellaneous. Let's start off with study design: that's how the study is built up and how they actually carry out the meta-analysis. Let me just explain this slide first; don't read anything at the bottom. That's the example paper that we're going to go through.
Right now, just listen to me, and if you want to look through everything later, I'll tell you where to focus, or you can also look through it later on YouTube or MedAll. So, what the inclusion and exclusion criteria tell you is which studies have been included in the meta-analysis and which studies have specifically been excluded. The more studies you include in your meta-analysis, depending on your question, the more generalizable it'll be. So if you're looking at your meta-analysis and you notice, huh, they've excluded this study, this study, this study; they've excluded this group of people and any studies that included those groups of people. The more studies you exclude, the less useful your meta-analysis becomes, because the whole point of it is to combine many randomized controlled trials, looking at a similar outcome and a similar intervention, to form a more generalizable result. So the more studies you exclude, the less generalizable the result of the meta-analysis becomes. In summary, you want to include as many studies as possible in your analysis, and you'll see later on how you can check how effective that inclusion and exclusion has been. So for now: the inclusion and exclusion criteria decide how generalizable it'll be.

Looking at this actual study, I've got one to do with sodium restriction in patients with heart failure. If you read the bits I've put into red boxes: all randomized controlled trials were included regardless of their design, in any language, with any length of follow-up and any outcome measure, and the intervention was salt or sodium restriction in patients with or without fluid restriction. Another generalizable area there: patients with heart failure, in inpatient and outpatient settings, any class of heart failure, any ejection fraction. So what that tells me is they've really included a whole range of people in this meta-analysis. They've not focused on a specific severity of heart failure; they've not focused on a specific range of ejection fractions. So this study is actually going to bring together a lot of randomized controlled trials of different types, with different ejection fractions, different classes, inpatient and outpatient, and combine them into one lovely little result, so that sodium restriction can be used more widely as a lifestyle alteration for heart failure. In this case, I think the inclusion and exclusion criteria are very well designed, because they will allow the results to be very generalizable. Sidebar: that might not always be a good thing, but we'll talk about that later. Right now, this study is looking quite good. And just a reminder: critique doesn't always mean looking for the bad things; it's also looking for the good.

OK, let's move on from this. Any questions, just pop them in the chat. Can we see the whole image here? We can, I think. How do they search for studies in a meta-analysis? This is quite a quick section: search everywhere. Look through every database: PubMed, ClinicalTrials.gov, the Cochrane Library; MEDLINE is quite a big one as well. You need to look through every possible database where a study could be published in order to find all the studies, because otherwise you're ignoring studies which might have a relevant impact on your meta-analysis. And if you're excluding studies after you've found them, there had better be good reasons for it.
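To make the screening step concrete, here is a minimal Python sketch of eligibility filtering over retrieved citations. The records, field names and criterion are invented for illustration; real reviews record the exclusion reason for every dropped study, as the transcript stresses.

```python
# Hypothetical citation records; each stands in for one retrieved study.
studies = [
    {"id": "trial-A", "design": "RCT", "population": "HF outpatients"},
    {"id": "trial-B", "design": "cohort", "population": "HF inpatients"},
    {"id": "trial-C", "design": "RCT", "population": "HF, any class"},
]

def is_eligible(study):
    # Inclusion criterion in this sketch: randomized controlled trials only.
    # Anything excluded should have its reason recorded, not be silently dropped.
    return study["design"] == "RCT"

included = [s for s in studies if is_eligible(s)]
excluded = [s for s in studies if not is_eligible(s)]
print(f"included: {len(included)}, excluded: {len(excluded)}")
```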
So, in the example we have here, they've excluded some studies because of duplication and ended up with 2,950 that are non-duplicates. Then they've excluded 2,912 articles after just looking at their abstracts, and they haven't said why they excluded those 2,912 studies, or even given a general idea. And then they've also excluded 15 articles later, after reading the entire paper, and they haven't said why. That's a bit weird to me, because most meta-analyses will tell you that. So right now my suspicion light is switched on. If you see something like this, where they're not explaining every little thing they've done, your suspicion light should turn on too.

That was a quick section, so we'll move on to looking for bias. Randomized controlled trials have bias in themselves, and you should only include really good quality studies in your meta-analysis; otherwise that bias will be transferred into your meta-analysis. There's a tool for this: the Cochrane risk of bias tool. Every meta-analysis should have a diagram like this telling you about the risk of bias in the studies they've included, and overall it should be a majority of green, which suggests a low risk of bias. If there is red, it ideally shouldn't be in very important criteria. Some might say everything is important, but take incomplete outcome data, or attrition bias: studies you've included might have attrition bias but have already accounted for it, so it's not as big a deal as some of the other domains. For example, in this study there's a lot of red regarding blinding of participants and personnel. I would consider that quite an important area, because one of the fundamental aspects of a randomized controlled trial is blinding of the participants and personnel, so you don't get any measurement bias or performance bias, as they've said there. So the massive amount of red is making me think: OK, which study contributed to that redness, and what effect did that study have on the outcome of this meta-analysis? We're actually going to look at that in a second. So be aware of bias, and be aware of the studies introducing that bias into the meta-analysis.

OK, we're going to pause there for a second before we move on to outcome analysis, and I'm going to have a look at the chat. What is an ANOVA test? We'll go through that at the end, because it's not directly relevant to this webinar. How can I get into the live session? I should be the only person allowed into the presenter view; you should be able to view it anyway. If you can't, I would go through the link again and try to join there; and if you still can't join, you'll always have it to look at later on YouTube or MedAll. When searching for studies, does the meta-analysis have to show that they also included unpublished studies? Very good spot. Yes, they should do everything possible to include unpublished studies as well. If that means emailing authors or professors who are deep into a certain field, then they need to do that. They need to show in their methods section that they've done everything possible to find all the studies relevant to their question, because otherwise they haven't really answered it. Do you need to know how the Cochrane risk of bias tool works? It's really just a checklist; there's not much to know. You can open it up on the Cochrane website later on, but it's essentially a checklist asking: does the study have this, does the study have this? There are different categories and you just answer yes, no, or maybe. No is red, yes is green, maybe is yellow, and then it aggregates all of that into a handy little diagram like the one I showed here.
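For readers who want to see the mechanics of that aggregation, here is a minimal Python sketch of how per-study, per-domain judgements roll up into the green/yellow/red summary diagram. The domain names follow the original Cochrane risk of bias tool; the study names and judgements are invented for illustration.

```python
from collections import Counter

DOMAINS = [
    "random sequence generation",
    "allocation concealment",
    "blinding of participants and personnel",
    "blinding of outcome assessment",
    "incomplete outcome data",
    "selective reporting",
]

# "low" renders green, "high" red, "unclear" yellow in the summary figure.
judgements = {
    "trial-A": {"blinding of participants and personnel": "high"},
    "trial-B": {"blinding of participants and personnel": "high",
                "incomplete outcome data": "low"},
}

for domain in DOMAINS:
    # Any domain a study did not report defaults to "unclear".
    counts = Counter(
        judgements[study].get(domain, "unclear") for study in judgements
    )
    print(f"{domain}: {dict(counts)}")
```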
But unless you're doing a meta-analysis yourself, you don't really need to know how to fill it in; you just need to know how to assess it: the different biases and how important each domain is. OK, there are no more questions, so why don't we move on to outcome analysis? This bit might get a little heavy, I'm warning you in advance, but we'll take it slow and easy.

The first thing is: what outcomes have they actually looked at from the randomized controlled trials they've included, and are they clinically relevant? For example, a lot of studies will include things like class of heart failure, and this study did as well. But does that change in class actually impact the patient? Is it actually clinically relevant? Sometimes yes, sometimes no, if the classes are quite close to each other. And are there any other outcomes relevant to answering the study question? A common one you'll see is all-cause mortality, or cardiovascular mortality, or death due to a cardiovascular incident, especially in something like this; those are really important for judging whether sodium restriction is helpful. But another important thing, which I think is sometimes missed out, is looking at the actual change in quality of life, because at the end of the day, even if the heart failure class doesn't change, even if mortality doesn't fall, if the intervention improves quality of life it may still be clinically relevant. So that's an outcome that's quite important to consider, in my opinion. And you will always have a table in your meta-analysis with the PICO in it, telling you exactly what population has been used, what intervention and comparator have been used, and what outcomes have been measured. In this case, just as I was saying: cardiovascular mortality, all-cause mortality, stroke, MI, hospitalization, change in class and quality of life. That's why I quite like this meta-analysis; they've looked at a whole range of outcomes, so it's quite thorough, I'd say.

And then: how have they analyzed these outcomes? This is where we get into the different models, and don't worry, you don't have to know them in much detail; you just need to know two sentences. There are two models: the fixed effects model and the random effects model. The fixed effects model assumes that all the studies you've included share a single underlying answer. Let's say you've taken 100 randomized controlled trials on this exact topic. The fixed effects model assumes that every single one of those 100 trials has, let's say, a risk reduction of 0.8; they all have the same underlying effect size. A random effects model, by contrast, assumes the studies have different underlying effect sizes: one study might be 0.8, one might be 0.9, one might be 0.7, and so on. And the one that should be used in 90% of circumstances is the random effects model, because all studies will have something slightly different about them: maybe they've used a different dose of the intervention, maybe they've followed patients for a longer duration, maybe they've used only inpatients rather than outpatients.
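To make the two models concrete, here is a minimal Python sketch contrasting fixed-effect and random-effects pooling of log odds ratios, using inverse-variance weighting and the DerSimonian-Laird estimate of between-study variance. The effect sizes and standard errors are invented; real software (RevMan, R's metafor) does the same arithmetic with many refinements.

```python
import math

log_or = [-0.22, -0.11, 0.05]   # per-study log odds ratios (hypothetical)
se     = [0.10, 0.15, 0.20]     # per-study standard errors (hypothetical)

# Fixed effect: inverse-variance weights, assuming one true effect size.
w_fixed = [1 / s**2 for s in se]
pooled_fixed = sum(w * y for w, y in zip(w_fixed, log_or)) / sum(w_fixed)

# Random effects (DerSimonian-Laird): estimate the between-study variance
# tau^2 from Cochran's Q and fold it into the weights, allowing each study
# its own true effect size.
q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, log_or))
df = len(log_or) - 1
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)
w_random = [1 / (s**2 + tau2) for s in se]
pooled_random = sum(w * y for w, y in zip(w_random, log_or)) / sum(w_random)

print(f"fixed-effect OR:   {math.exp(pooled_fixed):.3f}")
print(f"random-effects OR: {math.exp(pooled_random):.3f}")
```

Notice that when the studies disagree, the random-effects weights are flatter, so no single precise study dominates the pooled result.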
All studies will have some degree of heterogeneity; that's just a fancy word for differences. That is why a meta-analysis of such studies should use a random effects model: it's very unlikely that all your studies share the same underlying effect, because that would essentially mean every study was done in exactly the same way. On that note, in this meta-analysis they used a random effects model throughout, so that was quite good; I would trust this kind of analysis.

Moving on to heterogeneity analysis. We just mentioned heterogeneity; it basically tells you how different the studies are from each other, and you use something called the I² statistic to assess heterogeneity in the studies you've included in the meta-analysis. If the I² value is too high, it suggests your studies are too different to draw any reasonable conclusions from, so you shouldn't really combine them, because whatever result you end up with isn't really valid. Those differences come from a bunch of things, as I've said there. And again, high-heterogeneity meta-analyses are not great. They will always tell you the heterogeneity for whatever subsection of the meta-analysis you're in; I've highlighted that down there for this subanalysis, which I think was for all-cause mortality. It tells you the heterogeneity I² is 0%, which suggests no heterogeneity, and they give you a p-value for the heterogeneity as well, 0.79, which tells you it's not significant. If it were less than 0.05, you'd think: OK, there's significant heterogeneity here, should I really trust this information? The answer is no, don't trust it. And you can actually see in this overall effect, combining those studies, that sodium restriction didn't have much of an impact on all-cause mortality in patients with heart failure.

OK, so now we know about heterogeneity analysis; let's move on to subgroup analysis. This is when you take all the studies you've included and split them into different groups, to compare different variables that might have influenced the results. In this case, they've compared studies which used fluid restriction as a co-intervention and studies which didn't, and this was for all-cause mortality as well, I believe. It shows that regardless of fluid restriction, the sodium effect was negligible; there was no significant effect of restricting sodium. Other subgroup analyses you can do are male versus female, or inpatient versus outpatient; in this case, as we said, you could do subgroup analyses for different heart failure classes, because they had all that information. But this is the example I chose to show.
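Here is a minimal Python sketch of the heterogeneity statistics just described: Cochran's Q and I², as they are conventionally computed from inverse-variance weights. The effect sizes are the same invented numbers used above.

```python
log_or = [-0.22, -0.11, 0.05]   # hypothetical per-study log odds ratios
se     = [0.10, 0.15, 0.20]     # hypothetical standard errors

w = [1 / s**2 for s in se]
pooled = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, log_or))
df = len(log_or) - 1

# I^2: the share of variability beyond what chance alone would produce;
# it is floored at 0% when Q is smaller than its degrees of freedom.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} (df = {df}), I^2 = {i_squared:.0f}%")
```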
The next thing we're going to move on to is sensitivity analysis, and I want you to be very clear on the distinction: subgroup analysis splits your studies into groups to compare other influencing variables, whereas a sensitivity analysis is essentially a type of exclusion. You exclude a study and see if the rest of the studies produce the same effect as the original analysis. You do that to see whether your decision-making processes had an effect on the results. For example, when you're going through studies and deciding what to include and exclude, you might exclude a certain study but think: oh, that might actually have changed my results quite a lot. In a sensitivity analysis, you can bring back that study and see if it did have an effect on your results. You could also think back to what we saw in this paper: some of the included studies had quite a high risk of bias on the Cochrane tool. So you can think: OK, maybe including those studies wasn't the best idea; let me take them out and see if any of the results change. That would be an example of a sensitivity analysis as well.

In this case, they included six new studies in the sensitivity analysis, because they must have excluded them earlier on, I think because they had some sort of co-intervention; I believe they had diuretics or fluid restriction as a co-intervention. And they wanted to see what effect that had on the results. You can actually see here that including those studies changed the results; look at the little black diamond at the bottom. Now this analysis suggests that people with heart failure should be on a high-sodium diet, because for some reason that seems to improve their all-cause mortality in this sensitivity analysis. With your background knowledge of healthcare, heart failure and cardiovascular disease, you might be thinking: really, high sodium for heart failure patients? That seems a bit odd. And if you look down at the other thing I've highlighted here, the heterogeneity I² is 43% and the p-value is less than 0.05, meaning it's significant. You can also see that the six studies they've included all lie towards the "favors high-sodium diet" side. What that's telling me is, first, that including those studies increased the heterogeneity so much that this group of studies is not really reliably comparable. Second, it tells me that yes, those six studies are what's pushing the odds ratio towards favoring a high-sodium diet. So overall, I would not trust this analysis, simply because the heterogeneity is too high and those studies have a huge skewing effect on the end result. That's the kind of thought process I would go through when doing a critical analysis of this data, and I hope it helps you think of questions to ask yourself while reading a meta-analysis: what have they actually done, and do the results actually make sense?

One important thing they didn't do in this study is what we said earlier: they didn't do a sensitivity analysis excluding those high-bias studies. That's something I would have wanted to see, because now I don't know whether this meta-analysis is biased or not; the included studies may have carried a lot of bias. So that's making me a bit wary of the results here. Another important thing, if you can see my pointer: that study there has the highest number of people included in it, this Paterna 2011 study, and it's probably pushing the results to the right side as well. And Paterna is the same group who did the studies highlighted in red. So that's making me think: OK, could this study have influenced the results very strongly as well? It's such a massive study that it would be very highly weighted; it provides 18.1% of the weight here.
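The transcript describes removing a heavily weighted study and re-pooling to see whether the conclusion moves. Here is a minimal leave-one-out sketch of that idea in Python, with invented data and a fixed-effect pool for brevity; a real analysis would typically re-run the full random-effects model.

```python
import math

# Hypothetical studies: (log odds ratio, standard error).
studies = {
    "trial-A":   (-0.22, 0.10),
    "trial-B":   (-0.11, 0.15),
    "big-trial": (0.40, 0.08),   # a heavily weighted study pulling the result
}

def pool(rows):
    # Inverse-variance pooled log odds ratio over (effect, se) pairs.
    w = [1 / s**2 for _, s in rows]
    return sum(wi * y for wi, (y, _) in zip(w, rows)) / sum(w)

full = pool(list(studies.values()))
print(f"all studies:        OR = {math.exp(full):.2f}")
for name in studies:
    rest = [v for k, v in studies.items() if k != name]
    print(f"without {name:9}: OR = {math.exp(pool(rest)):.2f}")
```

If dropping a single study flips the direction of the pooled odds ratio, that study deserves the close reading the speaker recommends.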
Would excluding that study have had any impact on the overall result? I think it could actually have pushed favor towards a low-sodium diet. So when I notice a study that influences the overall results of a meta-analysis so heavily, I tend to ask: have they done a sensitivity analysis checking whether excluding this study has an effect? And I would also probably read that study itself to make sure it's tip-top, because that one study is influencing the meta-analysis so heavily that if it's flawed, the meta-analysis is flawed. So we went through quite a bit there. You might have to look through it again, maybe on YouTube, to understand it fully, but for now I think that's all right, and if you have any questions, of course, put them in the chat.

Perfect, a good time to put your questions in the chat so I can have a look. Alrighty. What is the underlying effect size? That just means how effective the intervention was in the randomized controlled trial that was carried out; that's what we call the effect size. Does effect size just mean quantity, essentially? Yeah, it's essentially what the results were; that's what I mean when I say effect size. What is a good I²? I would say anything 20 to 25% or below is probably safe to interpret from. As you saw in this one, the I² was 43%, which is really high, so I wouldn't trust it. And the last one: how are heterogeneity and sensitivity related? They're not, directly. A sensitivity analysis is just another analysis of some group of studies that differs from the main group, for example by excluding a study or including a new one, and it will automatically have a heterogeneity that goes with it. The heterogeneity just tells you how much to trust the sensitivity analysis and whether it was appropriate to include or exclude those studies. I hope that answered those questions; if not, do put it in the chat and I'll try again in a different way. For now I don't see anything, so I'm going to carry on, and if there is anything, we can come back to it at the end.

Miscellaneous: publication bias. I'm sure everyone's heard of it. People love to publish positive results; people don't really like to publish negative results. So you often see that studies with small sample sizes, large variances and large amounts of error don't get published. And you need to check for that, because if only positive studies are published, of course your meta-analysis is going to show a positive result, but that might not be the case in reality. So you use something called a funnel plot, named because it looks like a triangular funnel, and you see whether there was any publication bias. If your funnel plot looks like a symmetrical triangle, like here, then there's no publication bias; if it doesn't, then there probably is. You also get a p-value for the funnel test, and you can look at that as well to check for publication bias. In this case there wasn't any, which is great. And they did this funnel plot for all their different outcomes and all their sensitivity analyses, which is really good, very thorough. I was quite impressed when I saw this, because not everyone would take the time to check for publication bias in every little analysis they did. So it's quite well done. Essentially, the plot uses the odds ratio against something related to the sample size, the standard error. Don't worry too much about how to calculate it; just understand what it means and how it affects the results of the analysis.
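Here is a minimal sketch of the funnel plot just described, using matplotlib with invented data: each study is a point, effect estimate on the x-axis and standard error on the y-axis, inverted so the most precise studies sit at the top. Rough symmetry around the pooled effect is what argues against publication bias.

```python
import matplotlib.pyplot as plt

# Hypothetical per-study log odds ratios and standard errors.
log_or = [-0.22, -0.11, 0.05, -0.30, 0.10, -0.05]
se     = [0.10, 0.15, 0.20, 0.25, 0.22, 0.08]

plt.scatter(log_or, se)
plt.gca().invert_yaxis()           # precise (small-SE) studies at the top
plt.axvline(0.0, linestyle="--")   # line of no effect
plt.xlabel("log odds ratio")
plt.ylabel("standard error")
plt.title("Funnel plot (hypothetical data)")
plt.show()
```

An asymmetric plot, with a missing corner of small negative studies, is the visual signature of unpublished negative results.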
Clinical relevance: we talked about that earlier. I said I was initially impressed by outcome measures like quality of life, but you can see here, later in the study, they say the data on the outcomes of change in NYHA class and quality of life were not suitable for meta-analysis. So actually, they've had to drop the two very important things I would look for in a meta-analysis to ensure it's clinically relevant. Maybe sodium restriction didn't change all-cause mortality, as we saw, but maybe it would have improved people's quality of life; you never know. So I would say this study is actually not that clinically relevant right now, because they haven't really looked at quality of life, and I'm not fully convinced yet.

And then other factors. Are there any large driver studies? I'll give you an example: that study earlier, Paterna 2011, which had around 800 people across the intervention and placebo groups. Any study that's heavily influencing the results needs to be checked very thoroughly, because otherwise it could undermine the entire meta-analysis, and you should do a sensitivity analysis to check the impact of those studies: remove the study, redo the analysis, and see if the result changes. We have another one of those studies here, Ezekowitz 2022, with about 800 people in total, so it would have heavily influenced the end result, again pushing it towards the "favors high sodium" side. Removing it would probably have changed the results, I'd say; it provides 60% of the weight, which is a lot. Without that study, the results would be very different, I can guarantee you that. And they didn't do that sensitivity analysis, which is a little odd; they really should have noticed something like that.

OK, I'm going to summarize, then we'll look through the questions in the chat, and then we'll do a little quiz and the feedback form. To summarize: go through these studies systematically and look at strengths and weaknesses. There will always be new things; I find new things every day. Google it: Google has everything. It might be confusing, but if you find it confusing, email the research team at Mind the Bleep and maybe I can do a webinar on it. And be as cynical as possible. Ask yourself: could there be any other explanation for the results? Did they include enough studies? Did they exclude the right studies, and include as many as they possibly could? Did they explain why they excluded some studies? Were there any massive studies influencing the results? What was the heterogeneity like throughout? What did they measure as an outcome? Always think of these things. You can use this webinar as a kind of checklist, to see whether you've checked through everything and whether they've done it well, because I've essentially gone through how I would critically analyze this paper. Use that format when you're starting off, and then, as you practice more, because this is a skill, you'll be able to do it more independently. And again, no study is perfect.
So there will be flaws, but that does not mean the conclusions are invalid. You just have to decide whether the flaws are big enough to make you question the authenticity and reliability of the study. OK, let me quickly look at the chat now. Explain heterogeneity and sensitivity again? No problem, we have time. A sensitivity analysis checks whether something like the impact of a large study influenced the results. Say I have ten studies, but the tenth study carries 60% of the weight in the analysis. Suppose I exclude that tenth study; now I have nine studies and I redo the analysis. That is my sensitivity analysis. That sensitivity analysis will automatically have an associated heterogeneity, and if that heterogeneity is very high, I wouldn't trust the sensitivity analysis. So essentially, heterogeneity measures how different the studies are from each other, and if they're too different, they shouldn't be combined into one result. Every analysis in the meta-analysis will have a heterogeneity with it, whether it's a sensitivity analysis or any other type; every little graph you see in one of these papers will have a heterogeneity number associated with it.

Is it appropriate for a meta-analysis to exclude studies that do have the outcome of interest but carry high heterogeneity? What they should do is include all the studies as a primary analysis, and then, as a sensitivity analysis, exclude the study, show that the heterogeneity has changed, and see whether that impacted the results. So they can exclude it, but really they should include it at the start and exclude it later on. Veronica, I will come back to your question in a second, just because that's a more general thing I can give advice on. So again, Gift: what is the difference between a meta-analysis and a systematic review? A systematic review has no calculations, no numbers; it is a written explanation or summary of the field. A meta-analysis, by contrast, takes the included studies and uses their results to calculate an aggregate result. That's the difference: one has numbers, one doesn't.

OK, Veronica: many of us are interested in learning more about clinical research; what courses and journal clubs would you recommend for further development of our knowledge? Courses I could recommend: if you join Mind the Bleep, we have a lot of information on clinical research there, and we're building up more and more now. I'll also be running little workshops which will hopefully help you learn a bit more about clinical research. For journal clubs in particular, if you're a healthcare professional, I would recommend your department's journal clubs, because I'm not sure there are standalone journal clubs out there that you can join; I don't have much information on that, very sorry. But I would recommend keeping up to date with Mind the Bleep, because they keep producing new research content, looking in your department for journal clubs, and also going to conferences which present research. There's a lot of variety in conferences, and they will also have workshops for you to get involved with and learn a bit more about clinical research. The pinnacle, I would say, is to email some clinical researchers and see if you can chat with them or be part of their research project. People are quite willing.
So that's what I would recommend you do. Let's move on then; what do I have next? OK, we've gone through questions. If you have more, just put them in the chat. Oh, OK: one of my uni courses requires me to critically analyze a systematic review in the near future; do you have any recommendations, or do you plan on doing a webinar related to this? I don't plan on doing a webinar on that, because it's very similar to this one. You want to think about exactly the same things when you're looking at a systematic review: which studies have they included and excluded, have they gone into depth on all the studies, is there anything they're missing? A systematic review is a bit harder, because it doesn't necessarily have to include randomized controlled trials; it's just a summary of a field of research. So the best way to appraise one is to read through that field yourself, read a lot of the papers in it, and then you'll be better placed to see whether the systematic review has summarized the field accurately, whether they've missed anything out, or whether there are other explanations for what they've found.

OK, we're going to move on to the Mentimeter quiz now, so I'm just going to share that page; give me one second, get ready. You should be able to see it now. I'm also going to put the voting link in the chat, so please do join for our last two questions. It'll be really easy now that you're experts in meta-analysis. And then I'll show you how to get your certificates. I'll give it a minute more for a few more people to join; we had five at the start, I'm sure we can get five now. OK, why don't we get started, and our fifth member can join us later on. Again, 25 seconds, no points for speed, let's go. We talked about this a bit towards the end, so hopefully it's quite fresh in your mind. Perfect, yeah, excellent work guys, excellent work. Next up. Oh, OK, I'm just going to get started. Oh, it shouldn't be set to "faster answers get more points"; don't worry about speed, take your time. I must have forgotten to change the time settings, I'm very sorry about that, guys. Yeah, good work everyone, all correct, I'm very happy to see that. Let me get back to the PowerPoint.

What I'm going to share next is the feedback form. Please do fill it in, because that's how you get your certificate and how I can improve these webinars for you. I will also post the feedback form link in the chat. There you go, linked in the chat. I'll be here for the next couple of minutes; if you have any more questions, put them in the chat and I'll answer them. Otherwise, if you're going to go: thank you so much for joining us. I hope that helped a little. I know it was a bit heavy, but if you look through it again, I hope I've explained things in a way that will help you in the future. Yes, you will get the recorded version, indeed. I will probably upload it tonight, so you can look on MedAll as catch-up content, or on the Mind the Bleep YouTube channel, where we have a whole research playlist if you want to look at any of the previous ones as well. First slide? What do you mean by first slide, sorry? Do you want me to go to the first slide? I might give it a couple more minutes and then I can go back to the first slide if you need. Do you mean the title slide? OK, let me quickly get to the title slide here and I'll come back.
That's the title slide. OK, there you go. I'm just going to flick back to the feedback form now; you can have a look at this later on. The recorded version will be out later tonight, hopefully. Alrighty, it's eight o'clock, so I'm going to head off now, guys. Have a great evening, and thank you once again for joining; we'll have more research webinars. Oh, you didn't get the main purpose of meta-analysis? It's to combine the results from many different randomized controlled trials to make them more generalizable. Yeah, we'll be having more research webinars on how to write papers and how to make presentations and posters, so do keep an eye on MedAll and on Facebook for those updates. But otherwise, have a good evening, guys. Oh, sorry, yes indeed, sorry Gift, the ANOVA question. ANOVA stands for analysis of variance. You don't use it in meta-analysis; you use it when you're comparing continuous data between more than two groups. For two groups of normally distributed data, you would use a t-test, the classic test you'll see everywhere. But if you have three groups, or four or five or more, you can't use a t-test any more; you have to use something called an ANOVA. That's what that is. I hope that helped. Hopefully we'll be doing a stats webinar at some point, so stay tuned and I'll explain in a bit more detail there. All right guys, have a good evening; I'll see you at the next webinar.
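For completeness, here is a minimal Python sketch of the t-test versus ANOVA distinction from the closing Q&A, using SciPy. The group data are invented for illustration.

```python
from scipy import stats

# Hypothetical measurements from three groups.
group_a = [5.1, 4.9, 5.3, 5.0]
group_b = [5.8, 6.0, 5.7, 6.1]
group_c = [5.4, 5.5, 5.2, 5.6]

# Exactly two groups: independent-samples t-test.
t, p_t = stats.ttest_ind(group_a, group_b)

# Three or more groups: one-way ANOVA.
f, p_f = stats.f_oneway(group_a, group_b, group_c)

print(f"t-test (2 groups):  p = {p_t:.4f}")
print(f"ANOVA  (3 groups):  p = {p_f:.4f}")
```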