#### Computer generated transcript

Warning!

The following transcript was generated automatically from the content and has not been checked or corrected manually.

OK, welcome everybody. Thank you for joining us for our SRMA teaching series. This is the seventh session of the series, and today we'll be looking at how to conduct a meta-analysis. Delivering the session today is Dr Connor Gillespie, who is an F2 doctor on a specialised foundation programme in clinical neuroscience at Cambridge. He graduated medical school with honours from the University of Liverpool and he has an integrated MPhil degree in surgery and oncology. He's a former chair of the Neurology and Neurosurgery Interest Group and will be starting a neurosurgery Academic Clinical Fellowship in Cambridge in August. He's published extensively in journals and has even produced a book titled Neurology and Neurosurgery: 200 SBAs for Medical Students. Now, just a reminder: please fill in the feedback form to get your certificate of attendance. I'll put it in the chat partway through the session for you. So that's all from me. Thanks, Connor, and I'll hand over to you. Well, thanks, thanks very much. Sorry, everybody, about the lateness; there was a nine-year-old who arrested on the ward, so I had to sort them out. Just quickly, could you guys post in the chat what stage of training you're at? Is that OK? It will just help me tailor the talk, so if you just put what stage you're in, that would be really helpful. I know you're on projects, but I think it would be helpful. OK, so third year. OK, I see. So we've got a couple of people in the clinical years of medicine: years one, three and four. So most people are medical students, and you're already working through a project or an idea, so you've been taken through the steps. That's fine, that's good. OK, great, so we'll start in that case. So that's a variety of years of medical school, really; that's fine. I'll just share my screen. Can you tell me if you can see this?
Can you see the screen? Yeah, I can see that, that's fine. OK, that's good. So we'll just get started in that case. This talk is meta-analysis theory. We'll do two talks: one talk this week on the theory, and then one next week on practical examples, where I'll basically be taking you through a worked example. This is just all the building blocks you need to get there. So we'll cover what a meta-analysis is, why you should do one, when to do one, how to do one, and then some useful resources and basic lessons you can take from it to prepare yourself for next week. Next week we'll go through the practical steps of doing a meta-analysis using some examples. I think people should be aware of why they should follow the advice given in a teaching session, really. So I've been involved in approximately 25 systematic reviews and meta-analyses, around 14 of which are published, as you can see there, and there's another five or so in review. But the 14 are meta-analyses only, not systematic reviews by themselves; if you count systematic reviews by themselves, it's approximately 30. So I've definitely made a lot of mistakes across that number of publications, and hopefully this will be helpful for you guys. What a meta-analysis is, more or less, is just combining results from two or more studies to produce an overall result. A friend of mine, strangely enough, described it as the easiest part of a systematic review; to avoid a pile-on, I've not named who he is. Meta-analysis is something that, on the surface, if you've never done one before, you think is actually very difficult: a really complex statistical method, where it's going to be almost impossible to produce the results. However, this isn't the case. It's pretty straightforward to do. Once you have done one, you could in theory do 30.
And I think that's why I've done quite a lot, because it's actually relatively straightforward once you've done one or two and you are aware of your methodology and the theory behind it. In summary, it's just pooling the study data to produce an overall result: if you pool the studies, you produce one single result as a combination of them. When you should think about doing a meta-analysis, or when the right time is to do one, is probably the most difficult thing. You need to satisfy a few criteria, according to the Cochrane Handbook; I'd recommend using that handbook as your guide on how to do a meta-analysis, as a first port of call. Firstly, you should do a meta-analysis when you have more than one paper on a question. People ask me a lot: Connor, how many studies do I need as a minimum? What if I've only got three studies, or four, or six? Do I need ten studies or more to do an effective meta-analysis? And the reality is no: statisticians will tell you that you really only need two papers to do a meta-analysis. If the papers are OK, then the methodology still stands. But more importantly than the numbers, you really need to have a good clean data set when you're pooling and answering that question. The reason behind this is that anyone could do a meta-analysis, but doing a proper one, using clean and homogeneous data, will give you an answer to the right question rather than a misinterpretation based on differences in each study that contributed to the result. That's kind of the art, really. So you have to be very selective about when you do it. If you have studies with different population sizes, say some studies with 20 patients and some studies with 10,000 or 100,000, then meta-analysis is a great way to combine those results.
Rather than saying, you know, four studies of five people said that this treatment was negative but one study of 2,000 said it's positive, it will help iron that out and give you one overall measurement. It's also good if you've got contrasting studies. If you have studies with different conclusions in your systematic review, let's say four or five that think amlodipine is the best and most effective blood pressure medication and then four or five that say, well, actually it's ACE inhibitors, then it's great to pool them all and give a definitive answer. And it's also good if you just want a single number for something, as in, if you just want to combine everything to give one single result. It's as important to appreciate when not to do a meta-analysis as when to do one. You really shouldn't be doing a meta-analysis just because you want to, because you think it's going to make your paper look better, or because you just want to show off your statistical prowess. It's really not the best thing to do if your data has significant heterogeneity, or if the definitions and populations differ across the studies you've included; we will discuss this a bit later. It's also important not to do it just because you have two studies. You have to have a plan and a target for what the pooled result is going to produce. As in: do you specifically want to know if a medical treatment reduces blood loss, in a quantifiable measure that you can then combine to form one result from your systematic review? Or are you doing a meta-analysis because you think the technique will make your paper better? It's usually pretty clear-cut which one it is. If it's option A, then go ahead and do the meta-analysis. If it's option B, then I wouldn't do it.
A good example of when not to do a meta-analysis would be a question such as: what types of devices are available in medical education? What types of smartphone, virtual reality or 3D devices are available? Because your research question is inherently exploratory, you're not aiming to compare two things or treatments; you're just aiming to describe and evaluate the literature. And if your systematic review is like that, mainly ideas- and concepts-based rather than numbers-based, then I would suggest going against meta-analysis. In fact, your paper will probably come out much stronger if you selectively don't do it and justify why. I would also say not all data is created equal. Just because you have numbers in front of you in the form of papers, if those papers aren't using the same definitions, the same populations and the same outcome definition, it's quite difficult to justify including all of them in your study. Not all data is created equal in that sense, but you'll get more used to it as you progress through your different projects, and some of you may already have that knowledge to hand. This slide is just discussing the levels of evidence. I'm sure you've been over this in past talks. More or less, I just want to use it to say that a meta-analysis or a systematic review is only as good as the data that you analyse. For example, if you only have cohort studies in a systematic review, its highest level of evidence will always just be level three; it's never going to go higher. Hopefully that should deter people from doing a meta-analysis unnecessarily, thinking it's going to change their study and make it the highest form of evidence: it's really based on the literature you actually include. OK, here's a little bit of information regarding the different types of meta-analysis.
As in, the different types of meta-analysis one could perform. Essentially that's driven by the data that you have and the clinical question you're trying to answer. I'll go through all of these in depth using some illustrative examples, but the most common is going to be a meta-analysis of binary variables and groups. There are other types, like meta-analysis of continuous data, for example: does a certain treatment reduce blood loss in millilitres, depending on the group? If your measurement is millilitres or hours or days or weight, that's the one to use. You can meta-analyse single-group data, which I'll explain, and you can also meta-analyse diagnostic accuracy. These are generally the most common ones, and I'd especially advise these as the ones to learn about as a beginner. The most common type is a meta-analysis of binary variables. Essentially, it means you have a primary outcome that is binary: a yes group and a no group. Did the primary outcome occur? The example from a publication that I have compares two treatment types: VPS, being a ventriculoperitoneal shunt, and ETV, being an endoscopic third ventriculostomy. They're both types of neurosurgical procedures, and the outcome is preventing recurrence in hydrocephalus. So my outcome is recurrence, as in: did the patient have a recurrence on their scan requiring surgery, yes or no? It's fairly evident that you can measure this equally between all your studies, and you can meta-analyse this data quite well. Now, the plot on the right-hand side looks complicated; we will dissect what everything means in one of the following slides. But it's just to let you know that really the data you're working with is not that complicated. The table in Excel on the bottom left shows the exact data points that I collected and input to make this chart.
So all I've got is: ETV and the number of cases that recurred; shunt and how many shunt cases recurred; and the study name, the name of the author. I have nothing else. This should really prove to you that if you have the right code for doing this process and following the steps, you don't have to do anything crazy with your data set. If it's lined up right, you only need three or four variables to actually create this full meta-analysis. So hopefully that's one nice myth busted. There's also, still using the binary example (and I'll go through the chart and what it all means in one of the following slides), this forest plot, which is the plot generated to showcase the results of a meta-analysis; that's what these plots are called. Next, for a continuous outcome the variable is a number. As I said before, it's things like hospital stay in days, operation time in minutes, minutes of exercise, volume of blood loss, et cetera. But the most important thing is that the groups are still binary, as in there are two groups. So in this meta-analysis we looked at the effect of tranexamic acid, which is a treatment for bleeding in surgical patients. We had a TXA group and a no-TXA group in randomised controlled trials. And we can more or less see that in all the studies there is a total, a mean volume of blood loss and a standard deviation. This was all just extracted from the papers that reported it, and the analysis will pool them together to give you a result, which I will explain. Another type is meta-analysis of proportional data. This category is slightly different because there are no comparator groups. You're not saying, between this treatment group and this observation group, is there a difference I want to compare? You just have one single group.
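To make the continuous case concrete, here is a minimal sketch (not code from the talk) of the per-study summary a TXA-style meta-analysis of blood loss would start from: a mean difference between the two arms and its standard error, computed from the means, standard deviations and group sizes each paper reports. The function name and all numbers are hypothetical.

```python
import math

def mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Mean difference (treatment minus control) and its standard error,
    computed from the summary statistics each paper reports."""
    md = mean_t - mean_c
    se = math.sqrt(sd_t ** 2 / n_t + sd_c ** 2 / n_c)
    return md, se

# Hypothetical trial: TXA arm lost 250 ml (SD 80, n=40),
# control arm lost 320 ml (SD 90, n=38).
md, se = mean_difference(250, 80, 40, 320, 90, 38)
```

Each study contributes one such pair, and the pooling step then weights and combines them just as it does for odds ratios.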
So let's say I want to find out, in my population, what percentage of patients with a subarachnoid hemorrhage develop low sodium, so develop hyponatremia, during their hospital admission. I'm not comparing them to, say, a non-subarachnoid-hemorrhage group or a general-public group; I'm just focusing on my population. So you can do a meta-analysis of proportions, which will just give you a specific percentage. It won't give you an odds ratio, which I'll talk about; it will give you just a percentage. You can see at the bottom right-hand corner of the screen this just says 37%, and it gives you a confidence interval, which we'll go through. This is the important slide: how to navigate the forest plot and understand what it means. Most people will have been taught this already, but I figure it's important to go through just in case there's anything you might miss. Let's work through it bit by bit. First is the study column. In the first part you have all the studies, which you have to type into your Excel sheet along with your data points. Usually I put the first author and the year of publication; as long as the style is uniform, that's fine. Next are your events columns, the number of events you include for each group. This forest plot is comparing vasospasm events in patients with hyponatremia compared to no hyponatremia, so normal sodium levels. In each study the two groups should be pretty separate, pretty clear, and you should have a total number of patients. So in this one, in the hyponatremia group there were 111 patients and 58 of them developed vasospasm; in the normonatremia group there were 89 patients and 39 developed vasospasm. And if you plot it the right way, you'll see a bunch of squares and boxes and whiskers. This is the odds ratio. Take the Quin study as an example: the odds ratio is 1.4.
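As a sanity check on those numbers, the single-study odds ratio and its confidence interval can be reproduced from the four cell counts alone. This is a minimal sketch, not the speaker's code, using the standard log-odds (Woolf) interval; it recovers the 1.4 (0.80 to 2.46) quoted for the Quin study.

```python
import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Odds ratio of group A vs group B, with a 95% confidence interval
    computed on the log scale (Woolf method). Assumes no zero cells."""
    a, b = events_a, total_a - events_a   # events / non-events in group A
    c, d = events_b, total_b - events_b   # events / non-events in group B
    or_ = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# 58/111 vasospasm events with hyponatremia vs 39/89 with normal sodium.
or_, lo, hi = odds_ratio_ci(58, 111, 39, 89)  # ~1.40 (0.80 to 2.46)
```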
So in this study the hyponatremia group is 1.4 times more likely to develop vasospasm than the normonatremia group, and the confidence interval is shown there: 0.80 to 2.46, represented by the boxes. Essentially all those results will be put together and given an average at the bottom, which is the red diamond. We can see that it's 2.93 here, so it suggests that, on average, patients with hyponatremia are 2.93 times more likely to develop vasospasm than people with normal sodium. The next part, on the right-hand side, is the weight. This refers to the population differences I spoke about in the earlier example. Each study is going to be assigned a weight for what it contributes towards the overall meta-analysis figure, based firstly on the population size and secondly on its deviation from the expected mean. So let's take, for example, the study at the top, Quin 2020. You've got about 300 patients there, so it's given a weight of 8.4% of the total. And let's take another one, Nakagawa 2010, which has about 80 patients; you can see it's assigned 3.7%. So the weight is proportional to the population, and it's also proportional to how the study deviates from the standard. Nakagawa might be weighted slightly less because it deviates from the mean; it might be 4% otherwise. But for now, just appreciate that the studies will be in different places and the analysis will pump out one single result. The model is presented here. I'll talk through the software I used to generate this plot shortly, but the model, which I'll explain, is the model you base the meta-analysis on. In summary, there are two types: fixed effects and random effects. In most cases you'll use random effects.
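What the software does under the hood for that weight column is, in standard practice, inverse-variance weighting: each study's weight is one over the variance of its effect estimate, so bigger studies with smaller standard errors count for more. A minimal fixed-effect sketch with made-up numbers (the function name and inputs are hypothetical; a random-effects model additionally inflates each variance by the between-study variance before weighting):

```python
import math

def fixed_effect_pool(log_ors, ses):
    """Inverse-variance (fixed-effect) pooling of log odds ratios.
    Returns the pooled OR and each study's percentage weight."""
    weights = [1 / se ** 2 for se in ses]
    total = sum(weights)
    pooled_log = sum(w * y for w, y in zip(weights, log_ors)) / total
    percents = [100 * w / total for w in weights]  # the "Weight" column
    return math.exp(pooled_log), percents

# Three hypothetical studies: log odds ratios and their standard errors.
pooled_or, percents = fixed_effect_pool(
    [math.log(1.4), math.log(2.0), math.log(0.9)],
    [0.29, 0.15, 0.45],
)
```

The middle study has the smallest standard error, so it dominates the pooled result, mirroring how Quin 2020 earns 8.4% against Nakagawa's 3.7% on the slide.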
So if you're ever not sure, just pick random effects; it's more justifiable to reviewers. Essentially, random effects means that you're expecting significant heterogeneity within your data set, and I will explain what that means in the following slides. If you essentially have no heterogeneity, you'd use what's called a fixed-effects model. A fixed-effects model essentially assumes that every study is estimating the exact same underlying effect, regardless of population size or the odds ratio itself: it treats all the studies as created equal, all measuring the same thing, and pools them on that basis. Inspecting this plot a bit further, under random effects you can see the heterogeneity is defined by an I² value; here that's 71%. This is generally quite high, and I'll explain what that means later. But essentially heterogeneity means there's quite a variety of results in this data set. They're kind of all over the place: that one's 10, that one's 7, that one's 1 and that one's 0.13. It also suggests that the populations and the definitions used in these studies differ significantly. That's not a problem, because the random-effects model takes this into account, but it's just something for reference: a fixed-effects model would not be appropriate in this case. And finally, you have your overall odds ratio, as I described. Your forest plot will lean to one side, and the line of no effect for an odds ratio is one, not zero. If the pooled estimate is more than one, the exposure increases the odds; if it's less than one, it decreases them. If the diamond, your confidence interval, doesn't cross one, you've got a significant effect. A quick note on the funnel plot: most tools will produce a funnel plot alongside a forest plot.
Your funnel plot is a bit difficult to describe, but it's essentially a measure of something called publication bias. What that is: generally speaking, smaller studies with fewer patients or participants tend to be negative more often, as in they don't lead to positive results. What happens then is they essentially don't get published or presented, but the big ones do. So what you look for is a gap on the bottom left-hand side, which you can kind of see here, because in theory, as your effect size goes up, your standard error should also go up, reflecting the smaller study sizes. I wouldn't get too caught up on that at the moment, because it's not the purpose of the talk, but keep an eye out, because it's something that will be generated alongside your plot and you'll need a basic understanding of what it is. I've touched on it already, but the next part is heterogeneity. These are essentially differences within your studies that could affect your outcome beyond the exact numbers that you see. I like to think of it as two main ways a population can exhibit heterogeneity. The first is in your population itself, for example the age bracket. Let's say you have one study that only includes patients over the age of 80 compared to one study that only includes patients under the age of 50. By default, the over-80 group is going to have higher mortality, and that would affect your odds ratio, but it's not due to any of the treatments you've looked into; it's literally just because the population is older. The other example is your outcome definition. This is very common.
Most studies don't use the same definition for their outcome. Even something like recurrence could be defined in many different ways: some people could define recurrence as appearing that way on a scan, and some could define it as requiring an operation, as in it's so bad that the patient also has clinical symptoms and needs surgery. You can see that if those two definitions are used, you might end up with different numbers even though the underlying data might be the exact same. Essentially, it just results in an imbalance in the study, which could then affect the conclusions that you make. So it's really important to use measures of heterogeneity, such as the I² statistic, and random-effects models to make sure you don't miss that. I think we've discussed this already, and we've talked a little bit about random effects and fixed effects; it's not demonstrated very well here, but my suggestion is to use random effects as much as you can. You'll use it as your baseline, and then, unless you're pretty experienced and you know what you're doing, you'll probably never really need to think about fixed effects. Most studies will have a high degree of heterogeneity. You can see the forest plot in the bottom diagram here actually has an I² statistic of 0%, which means there's almost no heterogeneity in the results, and therefore you could justify using a fixed-effects model. But most of them will look like the one on the top, which has an I² of 90%, so you should be thinking about using random effects in most cases; statistically, that's the method you're least likely to be pulled up on. Another thing you could potentially consider is a sensitivity analysis. This is a good principle to use, and it's something that isn't used very much in big meta-analyses and papers, especially by beginners.
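The I² figure quoted throughout falls out of Cochran's Q, and the same quantities give the DerSimonian-Laird between-study variance that a random-effects model feeds back into its weights. A minimal sketch with hypothetical study numbers; this is the standard textbook formula, not code from the talk:

```python
import math

def heterogeneity(effects, ses):
    """Cochran's Q, the I-squared statistic (%), and the DerSimonian-Laird
    between-study variance tau-squared used by random-effects models."""
    w = [1 / se ** 2 for se in ses]
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    return q, i2, tau2

# Five hypothetical studies: log odds ratios and their standard errors.
q, i2, tau2 = heterogeneity(
    [0.1, 0.9, 1.5, -0.2, 0.6],
    [0.30, 0.25, 0.40, 0.35, 0.20],
)
```

An I² in the range of the 71% and 90% plots shown is conventionally read as substantial heterogeneity, pointing you towards the random-effects model.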
I kind of group this as similar to subgroup analysis; they are two different things, but the principle essentially involves repeating your analysis after changing your inclusion criteria, to account for confounding factors. A good example: let's say you've got 15 studies and 10 of them are at high risk of bias. Sometimes studies at high risk of bias have misleading results. Or let's say you've got a load of randomised controlled trials and a lot of non-randomised studies; sometimes their methodology can influence the outcome and the result that you see. As we said before, not all data is created equal. So you can look into doing a sensitivity analysis, which is where you might, for example, remove all of the non-randomised trials or all of the high-risk-of-bias studies, then repeat the analysis and re-present the forest plot to see if the results change. In the example presented, I've taken the same diagram we used before and removed all the high-risk-of-bias studies, and you can see the odds ratio doesn't really change: it goes from 2.93 to 3.07. So bias isn't significantly confounding this data. So, how do you do a meta-analysis itself? All you need is the data that I've shown you, so it's very simple: you need a software program that can do it, and you need the code, slash the right buttons to press, to generate the plot. That's what we'll showcase for you next week. There are a few different platforms that I recommend you can use. I don't think I've listed them down, but essentially the three platforms I recommend are: RStudio, which is what I use, but it uses coding, so I wouldn't recommend that for a beginner; and the Cochrane tool, which I'd recommend using in the first instance.
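The mechanics of that sensitivity analysis are just "filter, then re-pool". A minimal sketch with invented studies (fixed-effect pooling for brevity; the risk-of-bias labels and all numbers are hypothetical, not the data from the slide):

```python
import math

def pooled_or(studies):
    """Inverse-variance pooled odds ratio from (log_or, se, risk) tuples."""
    w = [1 / se ** 2 for _, se, _ in studies]
    y = [log_or for log_or, _, _ in studies]
    return math.exp(sum(wi * yi for wi, yi in zip(w, y)) / sum(w))

# Hypothetical studies: (log odds ratio, standard error, risk-of-bias label).
studies = [
    (math.log(3.1), 0.30, "low"),
    (math.log(2.6), 0.25, "low"),
    (math.log(3.5), 0.40, "high"),
    (math.log(2.0), 0.35, "high"),
]

full = pooled_or(studies)
# Sensitivity analysis: drop the high-risk studies and re-pool.
low_only = pooled_or([s for s in studies if s[2] == "low"])
```

If the two pooled values land close together, as 2.93 and 3.07 did on the slide, the high-risk studies are not driving the conclusion.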
And there's also one called SPSS, which is button-based and now does meta-analysis, so you can definitely give those a try. If you're first starting out, I recommend the Cochrane tool first of all; I wouldn't suggest going into a coding-based one straight away. But next week we'll go through it and I'll show you all these platforms anyway, so I wouldn't be in a rush to try stuff; you can definitely wait till next week's practical examples. The rest is just more information that I've discussed previously, but I hope that all makes sense, and next week we'll take you through some practical examples. In the meantime, if you have any questions, here's my email, so just send them my way. And I think that's us done. Brilliant, thanks Connor. What I'll do is I'll post the feedback form in the chat, and if everybody could please fill that in to get your final certificate. If you have any questions, pop them in the chat, or email Connor. Connor, do you mind popping your email in the chat so people have it? That would be brilliant, thank you. Right, excellent. And any questions, they can definitely send my way. Fab. OK, brilliant, we'll leave it there then. OK, thanks. Thank you. Thanks. Bye.