
Session 9 - Interpreting results and writing


Summary

This on-demand teaching session, part of the SRMA series, focuses on interpreting results and writing up in medical research. The instructor, Neeraj Kumar, a PhD student in Cardiovascular Science at the University of Leicester, gives attendees an overview of the important points in interpreting results and writing reports. Neeraj gives a step-by-step walkthrough of a research write-up, covering important aspects like reading meta-analysis forest plots, understanding heterogeneity, and critically evaluating results. This session is useful for medical professionals aspiring to conduct independent research and interested in boosting their research skills. A feedback form is provided for claiming a certificate of attendance.

Generated by MedBot

Description

Delve into the NMRA Academy Teaching Series, an enlightening and engaging educational program for those who wish to learn more about how to run systematic reviews and meta-analyses (SRMAs).

This series will be delivered by experts in the field and by the NMRA committee, and we will provide you with all the tools needed to carry out your own SRMA.

Join us for this 10-lecture series:

1. Introduction and refining your research question

2. Writing your protocol and selecting inclusion and exclusion criteria

3. Creating the search strategy

4. Screening

5. Risk of bias assessment

6. Data extraction and synthesis

7. Meta-analysis part 1

8. Meta-analysis part 2

9. Interpreting results and writing your paper

10. Getting ready for submission: referencing and paper formatting

Learning objectives

  1. Understand the difference between single-arm and two-arm meta-analysis and when to use each in the context of medical research.
  2. Interpret the weighting, total, and confidence interval in a meta-analysis to understand how these elements contribute to the pooled result.
  3. Explain the concept of heterogeneity in a meta-analysis, identify how it is measured, and describe its implications for the usefulness and reliability of research findings.
  4. Critically assess the impact of sample size on the results of a meta-analysis, including how bias and statistical power affect the interpretation of the results.
  5. Understand the whole process of taking a research idea from initial concept, through writing the protocol and conducting the meta-analysis, to writing up and submitting the final report.
Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

So, welcome everybody. Thanks again for joining us for our SRMA teaching series. This is the ninth session in the series, and today we'll be looking at how to interpret results and write up. Delivering the session today is Neeraj Kumar. He's delivered sessions a few times now; for those of you who don't know Neeraj, he's a current PhD student in Cardiovascular Science at the University of Leicester and previously a medic at UCL. He's the founder and president of NMRA and has worked on over 50 projects. So just a reminder to please fill in the feedback form to get your certificate of attendance; I'll put the form in the chat part way through the session for you. And yeah, I'll hand over to you, Neeraj. Fantastic, thanks Jacob. So, hi everyone. Jacob's already introduced me amazingly, but to get straight into it: I give this speech at the start of every lecture just for some transparency, and to tell you a bit more about what we do with NMRA. Essentially, I founded NMRA in 2022. It's a nonprofit aimed at giving students, doctors and early-career academics a foundation in research. We want to give you a fair shot at getting into research, and to make sure that you're doing so in a way that helps your development: learning, getting new skills, networking, conducting projects with us. Basically, anything we can do to help flatten the hierarchy and give you an easy route in that works with you and your goals to conduct independent research. Going on to the next slide: we do this in the Academy every week, Thursday 6 p.m. We're pretty much done at this point; this is the ninth session. So if you've missed the earlier sessions, or you want to go back over them, we are basically doing a chronological tour: starting from an initial idea, writing the protocol, going through searching, screening, risk of bias, data extraction and synthesis. We covered meta-analysis over the last few weeks with Connor, and now we're bringing it to full completion: today we'll talk about the research write-up and results, and then next week I'll be back again, talking about submission and getting ready for journals and conferences and all that. So hopefully this is all going to come together quite nicely. If you end up doing a project with our mentorship, you'll be using most of these skills quite hands-on, which will be good. But even if you don't, and you're interested in doing a project with us or something of your own accord, hopefully this will give you an overview of everything you need to know. As usual, any questions, any other projects you'd like to do with us, anything we can do to help: just get in touch and we can figure it out from there. So, without further ado, to get into the teaching. Over the last couple of sessions, Connor has gone through how to do a meta-analysis. He's shown you the first parts and how to create your forest plots, and he went through RevMan and R. So if you've not seen that, go back and check it over; you'll get a good hands-on idea of how to conduct the meta-analysis part of your paper. If you're watching on from here, the idea is that we've got that done. Now we need to fit it into your paper.
We need to know what it means, and we need to be able to write it up, so we can actually get to the end of where we want to go with this. So, to go through this a little bit: this is something you may have seen already. It's a bit of an unusual meta-analysis, but I think it's important that you see a breadth of different things, because you're going to see all sorts in the literature, and if you'd like to do something a bit unique yourselves, that would be good for your knowledge and your ability to answer different research questions. This is what you call a single-arm meta-analysis. Typically, when people think of a meta-analysis, they're used to comparing drug A versus drug B: one arm is all the studies that included drug A, the second arm is all the studies that included drug B, and the comparison is typically some kind of odds ratio, risk ratio or hazard ratio between the two. Here, it's a little bit different. What we're essentially looking at is the prevalence, or likelihood, of an outcome happening, across all the studies. In this case, this is from a paper I did six months ago, looking at survival after cardiac resynchronization therapy (CRT) for inotrope-dependent heart failure. So, just to talk you through some of the basics. When we say events, that is the number of people who exhibit the outcome of overall survival; so events is everyone who survived. The total is, as you can imagine, the total number of patients within that study: the sample size. You don't really need to worry too much about the weighting. The reason the weighting differs between these studies is that we used a random-effects model, which generates a separate weight for each study based on the sample sizes and the different study methods. The size of the red square for each study corresponds to its weight. In this case they're all very similar, roughly 4 to 6% each, so there's not huge variation. But you will get analyses where one study contributes 50 to 60% of the weight, because it single-handedly contributes 50 to 60% of the sample size; it's simply the largest, most significant study in your whole meta-analysis. At that point you have to ask yourself: why is that the case, and how does it impact my results? Because if I take it out, my results might be very different; if I include it, the aggregate result will shift with it. Is there bias in that variation? Can we be certain that this is a good way to conduct the meta-analysis? Do we believe it actually adds value? Those are all questions you need to be thinking about, and the answer will obviously depend on what your study is. But most of the time it's quite beneficial to have big studies in there, because they are the most representative of a generalized population, so they do have certain advantages. The end result, fundamentally, of all this is the bottom row in bold; that's what you really care about. So we've got a total and a confidence interval; that's going to be your main result. We've got a sample size of 334 people included in the analysis, and the key result is here: 0.590, and that's your survival rate.
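For anyone who wants to try this themselves in R, one of the two tools the meta-analysis sessions used, here is a minimal sketch of a single-arm proportion meta-analysis with the meta package. The study names and counts are invented for illustration; they are not the CRT data from the talk.

```r
# Minimal sketch of a single-arm (proportion) meta-analysis using the 'meta'
# package in R. All study names and counts below are invented for illustration.
library(meta)

dat <- data.frame(
  study  = c("Study A", "Study B", "Study C", "Study D"),
  events = c(25, 40, 18, 33),  # the "Events" column: patients who survived
  total  = c(45, 70, 30, 55)   # the "Total" column: each study's sample size
)

# Proportions are logit-transformed (sm = "PLOGIT") and pooled under a
# random-effects model; each study's weight comes from its size and variance.
m <- metaprop(event = events, n = total, studlab = study,
              data = dat, sm = "PLOGIT", method.tau = "REML")

summary(m)  # pooled proportion with 95% CI, plus tau^2, I^2 and the Q test
forest(m)   # forest plot: square size is proportional to each study's weight
```

summary(m) prints exactly the quantities discussed here: the pooled proportion, its confidence interval, and the heterogeneity statistics.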
So for this outcome of overall survival, the pooled result is 0.590: 59.0% of people survive overall, and the confidence interval for the estimate is between 49.0% and 68.6%. That's about 10% plus or minus, which is not great for the precision of your results, but with a sample of 334 that's kind of what you're going to get. As a reader, or as the researcher yourself, the obvious things to ask would be: why does that result have so much variation? And secondly, how does it stack up against other therapies? Because if it turns out that doing nothing gives you a 50% survival anyway, you've got to weigh up that 9% survival benefit and ask: is this therapy any good? These kinds of questions from your analysis results lend themselves quite nicely to thinking clinically about the impact of what you're doing. One other thing to cover, at the bottom of this plot, is heterogeneity. I'll go into it in more depth later, but just to flag it now: you have tau squared and chi squared, which are two different metrics that can be used to measure heterogeneity between studies. Degrees of freedom is just the number of studies minus one. And your I squared is your measure of heterogeneity; here it's 69%, with p below the 0.10 threshold used for heterogeneity tests. So it's a pretty high amount of heterogeneity, and we're fairly confident it's significant. That tells you that these papers are quite different in terms of methodology and the way they're conducted, which probably did impact your results; that's why a random-effects model is probably appropriate. Regardless, you'd be mindful that the studies are different enough to affect your results. To some extent that is unavoidable, but as a researcher you'd be smart to think about it, to know about it ahead of time, and to put measures in place to quantify and document it, which is why we used the random-effects model, for example. Going into more detail: this is from the Cochrane handbook, and they've essentially stated that heterogeneity is unavoidable, because it's impossible not to have diversity between studies; otherwise you'd just have the same data ten times. How you measure it is essentially the I squared score. You don't need to memorize the formula. For some very niche or technical write-ups, for example if you're writing a thesis, you might want to state it, include the formula and explain it, but for a regular paper, don't bother; it's not worth the headache. So the I squared is the most common way we measure this. Cochrane helpfully gives you a rough interpretation of what different levels of heterogeneity might look like, but as you can see, the bands overlap, and the reason they overlap is that you also have to think about what the number means in context. For example, with a really small sample size versus a very big one, the same value of I squared changes in meaning, because clinical and methodological diversity is not the same in two studies with 10 patients as in two studies with 100,000 patients. It's simply that the statistical test has different power, and the impact of bias also changes.
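For reference, since the slide formula isn't reproduced in the transcript: the standard Cochrane definition of I squared, in terms of Cochran's Q statistic and its degrees of freedom, is

```latex
I^2 = \max\!\left(0,\ \frac{Q - \mathrm{df}}{Q}\right) \times 100\%,
\qquad \mathrm{df} = k - 1 \ \text{for } k \ \text{studies}
```

Negative values are truncated to zero, which is why a set of very similar studies simply reports I squared = 0%.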
For example, if you have only 10 patients and, because of bias, three of them had an outcome, that's 30%. But among 100,000 patients, three more or fewer people with that outcome barely makes a dent. So the magnitude of a bias is a lot more significant with smaller numbers, and that's something you have to think about. Most people chunk I squared up into roughly 0 to 30, 30 to 70 (or 75), and then 75 to 100, for low, moderate and high heterogeneity, and some people will use four categories like this. But there is a level of ambiguity in how you interpret the I squared statistic, and I don't think it would be appropriate to go into that at this stage. If you do end up doing a study with us, or have any queries about it down the line, we can talk about it in more depth. At the moment I just want you to be familiar with what it looks like, be aware that it's an unavoidable statistic that you must report in your studies, and have a brief understanding of why it might come about. So we'll leave that there for now. This is another type of meta-analysis, to show you some variety. This one has a mean and SD (standard deviation); we're essentially comparing the QT interval in patients with anorexia versus a control group without anorexia. Because the QT interval is measured as a time, in milliseconds, it's a continuous variable, so it's going to have a mean, and because it has a mean, we also provide a standard deviation for that mean. Essentially, what we're doing here is working out the mean difference between the groups; it tells us how much difference there is between them. Again, we focus on the bold row, and we see that the difference in QT interval is 16.93 milliseconds between the groups, with a 95% confidence interval between 4.5 and 29.3. So quite a big range, but definitely statistically significant, because it does not overlap zero. And if you want to be absolutely certain of that, you check the test for overall effect. Because this is a mean difference, we use something called a Z score. So you report the Z score of 2.68, and you also report the p-value of 0.007, which is less than our threshold of 0.05, so we can be confident there is a difference in the QT interval across these two groups. Once again, we also have heterogeneity: we've got tau squared and chi squared, and we've worked out that the I squared is 93%. So a really massive number, a very significant level, with a p of less than 0.0001. There is definitely statistically significant heterogeneity, which is alarming, but again, you want to know about it beforehand. These are all things you need to understand when you're doing the analysis, because otherwise you won't be able to analyze your data properly. But they're also things your reader will want to know about, so you need to mention them within your manuscript.
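As another hedged sketch, rather than the actual anorexia analysis: a two-arm meta-analysis of a continuous outcome like the QT interval can be run with the meta package's metacont(). All numbers below are invented.

```r
# Minimal sketch of a two-arm meta-analysis of a continuous outcome (QT
# interval in ms) with the 'meta' package. All numbers are invented.
library(meta)

dat <- data.frame(
  study  = c("Study A", "Study B", "Study C"),
  n.e    = c(30, 25, 40),     # anorexia group sizes
  mean.e = c(420, 431, 415),  # mean QT interval, ms
  sd.e   = c(25, 30, 22),
  n.c    = c(32, 27, 41),     # control group sizes
  mean.c = c(405, 410, 402),
  sd.c   = c(20, 24, 21)
)

# sm = "MD" pools the raw mean difference in the outcome's own units (ms);
# sm = "SMD" would pool a standardised, unitless difference instead.
m <- metacont(n.e = n.e, mean.e = mean.e, sd.e = sd.e,
              n.c = n.c, mean.c = mean.c, sd.c = sd.c,
              studlab = study, data = dat, sm = "MD", method.tau = "REML")

summary(m)  # pooled MD with 95% CI, the z statistic and p-value, and I^2
```

The z statistic and p-value for the overall effect printed by summary(m) are the numbers the talk says to quote alongside the mean difference.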
And as I've mentioned, best practice is to quote the Z score and the p-value alongside the mean difference, to show the strength and magnitude of the association. You want to show that there is a clear difference, that the anorexia group do have longer QT intervals, but also how much longer, and what the significance of that is. These are all important things to a reader, so it's good practice to report them. I know it can sometimes feel a bit counterintuitive, because it's already in the figure, but the text of your results needs to reflect it as well. I'm going to show you an example from a protocol, to emphasize that this is stuff you need to think about beforehand and cover within your analysis plan. As you can see here, this is a protocol that I found on the internet, looking at CPR with chest compression. They've said they're going to use RevMan, so they've specified the type of software they'll use. They've specified that they'll look at heterogeneity, and that they will use risk ratios with 95% confidence intervals for dichotomous data; so two groups compared separately, checking the difference between them. They will use the Mantel-Haenszel method, which is just a method for weighting; you can also use inverse variance. The details don't really matter at this stage, but Mantel-Haenszel is kind of the gold standard, the one everyone uses, and inverse variance is fine too. They'll estimate heterogeneity with the I squared: if I squared is under 40, they'll assume it's unimportant. We've seen that in the Cochrane guidance: if it's less than 40, it might not be important, so you don't need to stress about it. They are deviating slightly from what Cochrane says, though, because they're saying that if it's more than 40, they won't grade it as moderate or high; they'll just call it significant and use a random-effects model to account for it anyway. And they've reported this beforehand: they've shown they understand I squared as per Cochrane, they've thought it through, they've pre-specified exactly what they plan to do, and the final manuscript should reflect that. However, things do change. Maybe you find you can't even do a meta-analysis because you don't have the sample size; maybe you thought you'd find ten papers for a meta-analysis and you actually found four, and that can kill your project or force you to change what you're doing. If you end up changing, just report it in your manuscript. As long as you're transparent and keeping on top of what you're reporting, it's never a problem. So, going back to the QT interval example: this is essentially the write-up for that analysis. As you can see, we have reported the mean difference (MD), confidence interval and p-value, and also the heterogeneity. You put down all of these things. We also specify the number of studies and the number of patients within the analysis, because that's quite significant, right?
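Circling back to that protocol for a moment: below is a sketch of what its pre-specified analysis could look like in R, under the same choices (risk ratios with 95% CIs, Mantel-Haenszel weighting, random effects if heterogeneity is important). The trial data are invented, and note that for the random-effects estimate both RevMan and the meta package fall back on inverse-variance weighting.

```r
# Minimal sketch of the protocol's pre-specified analysis: risk ratios with
# 95% CIs for dichotomous data, Mantel-Haenszel pooling. Data are invented.
library(meta)

dat <- data.frame(
  study   = c("Trial 1", "Trial 2", "Trial 3"),
  event.e = c(12, 20, 9),     # events in the intervention arm
  n.e     = c(100, 150, 80),
  event.c = c(18, 31, 15),    # events in the control arm
  n.c     = c(102, 148, 79)
)

# method = "MH" gives Mantel-Haenszel weights for the common-effect estimate;
# method = "Inverse" would use inverse-variance weighting instead.
m <- metabin(event.e = event.e, n.e = n.e, event.c = event.c, n.c = n.c,
             studlab = study, data = dat, sm = "RR", method = "MH")

# Per the protocol's rule of thumb: if the printed I^2 exceeds 40%, treat the
# heterogeneity as important and report the random-effects pooled RR.
summary(m)
```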
As a doctor, would you trust an analysis that has 200 patients, or one that has 200,000? It's going to influence your perception and your interpretation of the result. With more people within the analysis, it may have better data quality and more generalisability across different populations, and even the power of the statistics will improve, so random error is less likely. There are notable reasons that a large sample size and a large number of studies can be advantageous, not least because it shows you that this is something multiple people have studied, and that this one meta-analysis is pulling everything together to create one aggregate result. That's quite useful to know, so it's important to think about. You can even see that variation exists between outcomes within a review. For example, QTcB values were reported in 26 studies with 2,084 patients, while at the very end, only two studies with 63 anorexia patients reported QTcF. That one is still statistically significant (p = 0.0105), but only barely, because the confidence interval almost reaches zero. You can imagine why you'd want to be more cautious with this result: it has such a tiny sample, and only two studies, which you then obviously have to go and check: how they were conducted, whether you're happy with the results, whether you think they've introduced bias, and so forth, all the things we've previously covered. So understanding the meta-analysis process helps you write it up, because you'll know exactly what is important to a reader, and how to report it effectively. So that's the results side of things; now I'll go through everything else as well. Talking about writing a manuscript: I'm not going to cover writing an abstract today, because I believe we're doing a separate session on that. It's a very big topic, with quite a bit of detail in terms of how you write abstracts for different things, how you write for conferences, word counts, so I won't go into it here. At this stage I just want to point out on the screen that when you read a manuscript, the abstract is the very first thing, and it's meant to be a very quick summary for someone who hasn't seen your work before, to get up to speed on what you did and why. So this is your standard layout. Most journals will use it, especially for a systematic review and meta-analysis. You might see small variations: some might say "materials and methods" for the methods section, or some journals might say not to have a separate conclusion section and to make the last paragraph of your discussion the conclusion instead. Fundamentally, that doesn't change how you write your paper; it just changes the formatting and structure a little. So I will go through every section in turn, and you'll get an idea of what you're going to do. Obviously, within the results I won't cover the meta-analysis again; I'm going to talk more about the narrative side and structuring, so you get an idea of the rest of the results section.
Going straight into the introduction section: I always use the idea of an inverted pyramid. You start broad and you narrow down. This diagram is helpful in that it tells you roughly what each paragraph should look like. If you think about it in terms of a four-paragraph structure: first you identify the problem, which could be a disease or an area you're interested in. Then you go into the detail of what's been done so far. Then you explain why your particular study needs to be done: if there's already literature out there, you have to show the value you're going to add. And the last paragraph is how you're going to achieve that; it talks very specifically about the aims of the study and what it aims to contribute. I'll show you an example now, but you get the idea: you go broad and then you narrow down. To start off, this is a systematic review that I did about two and a half or three years ago. It was about the neurosurgical training process and how it was affected by COVID-19, particularly with a focus on LMICs (low- and middle-income countries) versus HICs (high-income countries). These are the first two paragraphs. We've started off by stating the big-picture problem: neurosurgical care is massively divided, there's a very big difference between LMICs and HICs, and training is really important because that's how we aim to bridge that gap, but it hasn't been formulated properly. The second paragraph ties into what's been covered in the literature, and we start to bring in our research question slowly but surely: we're thinking more about the pandemic, we're talking about restrictions arising because of the pandemic, and we're covering what the literature tells us so far. So we've covered the changes in neurosurgical practice and how they've impacted us, including reduced surgical training opportunities and reduced education opportunities for both trainees and students. And at the end of that, we note that it's not all bad: the literature does report a few things that have been done, such as teleconferences and online learning. That's the lay of the literature at the moment. In paragraph three, we say: but people haven't examined the disparities in this between LMICs and HICs, and we need to understand it, particularly within the context of a pandemic, which has obviously affected the way we think about logistics and the management of training. So there is still a gap in the literature, despite what we said in paragraph two. And then the last paragraph says: in order to address this gap, we're going to do a systematic review, to assess the impact, to assess alternatives for training, and particularly to identify solutions and prevent things getting worse, since COVID was not over yet. So we've nailed down exactly how this study is going to add value.
And in what context it aims to achieve that. That's genuinely it: we've only written four paragraphs here, and they're very simple. Some studies will write more, and that's completely fine. To be honest, in the context of the pandemic (we were writing in 2021), there was not much literature available, hence we only have six references up to this point. But imagine you're doing a study on a more popular topic, say a widely used diabetes drug: there might be a lot more literature out there, and you'd need to be very specific about what has been done and where exactly your work fits within that enormous field. So this is a simplistic example, and I want to warn you that it suits my purpose for teaching: don't necessarily replicate this exact formula, the same length and structure, every time, because sometimes you will need to go into greater depth. And that's a good thing, because if you cover that depth, your reader will benefit; they'll know exactly what you're up to and why you're doing what you're doing. So that's that. The methods section we've covered well over the preceding lectures, in a lot of depth: eligibility criteria, the search strategy, screening, data extraction and data management, risk of bias, data synthesis, and meta-analysis. We have covered every single step of this process individually, so what I want to do now is help put it all together, and you'll start to see the methods section of a manuscript come together from that. To start off, the very first thing is to give the context of how you started the work. The initial things we've said cover the search strategy: where exactly do these articles come from? For this particular systematic review, we've said we're going to search using the relevant terms, and do it in line with the protocol that was pre-published. The very first things we did when doing the review, writing a protocol and identifying a search strategy, are the first things we report in our methods section. And that is deliberately done in chronological order, in my opinion, because it gives your reader something very logical to follow. The example I always give is: let them redo your work. If they're going to redo your work and verify it, they can see that you first wrote a protocol, then a search strategy, and that you searched on these dates; and if they follow those dates into Ovid MEDLINE and all the different databases, they should end up with the same outcome, i.e. the same papers you found, because they copied your method. It's also really good to mention, as we've done here, that the systematic review was designed and reported in line with PRISMA. PRISMA is essentially a set of guidelines and recommendations relevant to almost all systematic reviews and meta-analyses, and it will guide you in how to conduct and report your study. There's also a really helpful checklist.
Some journals mandate that you fill out the checklist and state specifically where each item is located within your study. It's good practice and really helpful: it will ensure you don't miss anything, so you'll have a very thorough, high-quality review. But it is not mandated by 80 to 90% of journals, so it's not a requirement; I just think it's really good practice. You'll also notice that we're being very specific: we're telling the reader that we searched on this date, with these terms, following this specific protocol, and that everything we did is available in the appendix. Again, anyone who wants to replicate your work should be able to do so exactly, based on what you've reported; if they can't, you were not specific enough. That's the simple truth of it. So we've covered the search strategy and the initial steps. The next thing, naturally, is: which of those papers do you want for your systematic review? That's where you need to identify and specify your inclusion and exclusion criteria, which you then use as the framework by which you screen through to the papers you really need. I'm going chronologically through what we did, and the next step, as you can imagine, is screening. We call it study selection here, but it's the same thing: it's screening as ever, done against the inclusion and exclusion criteria, in two stages, by two authors independently. We do title and abstract, then full text, and there's a third reviewer who can step in if things get confusing and there are differences of opinion; the third reviewer does a tiebreak. Simple. It's all stuff we've seen before; nothing complicated. Here, data extraction and risk of bias are merged together, unlike in my lecture, but that was purely coincidence, because we collected the data and did the assessment at the same time, which is fine. Essentially we reported, same as usual: two reviewers do this, into a predefined Excel spreadsheet. We tell you what we picked out: the practice guidelines, the relevant recommendations, and the data needed for quality assessment as per the appraisal tool, and we specified what that tool is. I think that's just good practice to show, because the AGREE tool is very specific: it's only used for appraisal of clinical practice guidelines. It's not something you see every day, like the Cochrane risk of bias tool, so it's important to explain, because most readers might never have heard of it. Then data synthesis: that's how we go from data extracted into a spreadsheet to writing it up in the manuscript, how we'll put it together. Because this is a guideline study, there's no meta-analysis and no numerical pooling; all we're doing is appraising the guidelines with AGREE scoring, then tabulating and synthesizing the key recommendations to compare them. So all we're really doing is putting down, in table and word form, what the recommendations were and who made them, and the discussion section is where we elaborate on why. It's very simple.
There's a reason it's only three sentences: quite frankly, we didn't do that much. That's really it. Just remember: whatever you do in the process, before you write up, make sure you get all of it down, and you'll be fine. There's not much that's complicated about methods. Actually, it's something I like to do near the start of my manuscripts, because when I'm writing up, it's the part that comes to me most easily: I literally did every single step, I know what I did and how I did it, so I can write it up the most quickly. I often don't even need to look anything up, whereas for the introduction you might need to go and look up the previous literature, and for the discussion you definitely will. Those sections require more thought, more time, more research; the methods are essentially literal. It is literally what you did. Results: I'm not going to go over the meta-analysis side again, so we'll stick with the basics. Essentially, you describe everything you said you would describe in the methods section. Initially, you talk through the screening process and which papers were included and where they came from. Then you talk about the studies. If you've been through my data extraction and synthesis lectures, you'll know I talk about a table one, table two and table three: table one is always your study characteristics, table two is your patient characteristics, and table three is your outcomes. Obviously you don't have a table three if you're presenting outcomes as a meta-analysis, because you'll just show the analysis. But as long as you have a structure like that, which clearly delineates all the work you're going to cover, you can't go wrong. The key is just being straightforward, and making sure that you're actually synthesizing results, not commenting on them. It's not appropriate to comment on results in the results section; that's what the discussion section is for. This is where I think some people get confused, because in essays or other projects, a thesis for example, it can be quite easy to write something like "this was an analysis of 200 patients, the odds ratio was five with a confidence interval of 3 to 7, which we believe is affected by bias given the sample size". It's easy to add in personal opinion and interpretation, because that's what would have been marked favourably in essays. But in academic writing it's not appropriate, and it can be a bit of a switch in thought process: in the results, it's just synthesis and description, and you leave the interpretation for the discussion section. So, to show you an example: this is a study we did on gliomas a few years ago. Essentially, we have the PRISMA diagram. If you've not seen this before, it is literally just a run-through of your screening process: you identify all your papers and databases, you remove duplicates, you screen titles and abstracts, you screen full texts, and then this is what you keep. That's it. And at every stage, you tell people why you removed the studies.
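One small, practical habit before writing this paragraph up is to sanity-check that the flow numbers are internally consistent. A tiny sketch, with all counts invented:

```r
# Sanity-check PRISMA flow counts before writing them up (invented numbers).
identified  <- 1200  # records found across all database searches
after_dedup <- 950   # records left after removing duplicates
full_text   <- 120   # articles assessed at full text after title/abstract
included    <- 25    # studies meeting the eligibility criteria

# Each stage can only shrink the set; full-text exclusions must be
# reported with reasons in both the diagram and the text.
stopifnot(after_dedup <= identified,
          full_text   <= after_dedup,
          included    <= full_text)
full_text - included  # number of full-text exclusions to account for
```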
What we've done in the text is essentially just summarize and reflect what's in the diagram anyway. We've said we identified 2,850 articles, which is there in the first box of the PRISMA diagram; we've told you that we identified duplicates, which is in the second box; and we've mentioned that 178 articles were screened at title and abstract, 89 at full text, and 27 met the eligibility criteria. Once again, you'll notice it's important to give an idea of how many patients were in the analysis, 1,985 patients here, because that gives an indication of the magnitude of the analysis and of what to expect, and it steers the reader towards considering the potential biases that will arise. There is naturally going to be overlap between what's in the figure and what's in the text, but that is by design: you should be reporting the key essential details, but in a way that doesn't simply duplicate the figure. You're not reporting every single piece of it, because otherwise, what's the point of the figure? You'll also notice we cited the figure ("Figure 1"), so people can go and read it and get everything they wouldn't otherwise have seen. So, this is back to the neurosurgery one; this is more the narrative synthesis side of things. If you saw the data synthesis lecture, I went through thematic analysis: how to do it, how to come up with themes, and making sure your themes line up with answering your research question. The work you do at that stage essentially builds into a very clearly structured results section, and to give you an example of how we did that, I'll talk you through this. I haven't put the whole results section in, because it's long and I don't think anyone wants to read all of it within this lecture, but it's structured according to how we want to answer the research question. We wanted to cover neurosurgery training for trainees (residents) and also for medical students, so we took all the eligible literature and split up the results by theme. The first theme was training and education; the second was barriers and solutions; the third was medical student education; and the last was their perspectives on the change in education and how it impacted their mental health. Obviously, within that, training and education is a massive area: it covers every aspect of training, from operative time to being able to access resources online, versus not having as many opportunities to be in hospital, training and being around patients. There were so many changes that it was important we analyzed each individual aspect as per the literature in front of us. So, for example, one key thing that arose was practical skills training. We've talked about how residents don't have as much operative time, and don't have as much time to subspecialise and learn hospital management and caseload.
They just don't have the opportunities. And particular things have changed; for example, elective procedures have gone down. There's also variation between what LMICs and HICs perceived, which again was one of our research objectives, so we've gone into that detail specifically. For example, 62% of neurosurgical residents in a US survey didn't believe it would make a difference to their surgical abilities, but in LMICs, 75% said it would have a negative effect. What that tells you is to start thinking: is that variation because people in the US think they can compensate with other types of training, or is something else going on? Those themes help you understand your question better, and they also give your reader a lot of context, which is really beneficial because you're laying out the full picture for them based on the literature. And that synthesis (I think we had 30 articles in this manuscript) into a few paragraphs is the art of a good research project: you're condensing a lot of literature into its key elements, nailing down the key themes and research questions, and specifically telling people why they should care about the information in front of them. That's very important. Following on from practical skills training, there's also the fact that people didn't actually get to be on neurosurgical wards; they got shifted to COVID wards, and because of that they didn't get to see as many patients, and it took away from training time. So that's an impact that came up in the literature and is relevant. However, it's not all doom and gloom. Something that was flagged up was that, because elective cases were cancelled, residents ended up doing more research and more remote work. They focused on other things, and in some ways that can be beneficial, because you're making the most of the opportunities you have and developing other skills that will also benefit you as a clinician and a surgeon. So, as a researcher doing these projects, think big picture: these themes allow you to frame the narrative and the research question accordingly, and that's really beneficial. But without good data extraction and good synthesis, it won't make sense, which is why I strongly encourage you to go back to that stage and do it in as much detail as possible; then this write-up becomes a lot easier. I wanted to go through this example specifically because thematic analysis is challenging. It's about breaking down what's in a paper and what's important to your research question, and that can get twisted, because people start to interpret things, and papers don't always give you exactly what you want. It's a complicated area, but with the right navigation and structure, it does become easier.
Another thing that was raised: fine, people are no longer doing practical skills training; instead they're doing online teaching, and that's something that has changed. People are doing online lectures and online courses, and that's encouraging people to learn more in an era of remote working and social distancing. However, in line with our research question, we also identified key disparities. In North America, 98.5% of respondents in studies said they could do their program online; but in studies on the African continent it was 43.99%, and in India, 61%. So, potentially, what we're revealing here is that there are disparities leading to people missing out on neurosurgical education, and that's something we were able to synthesize by looking at all these different studies. You get an idea from the references: we're talking in general here about training across a run of studies, and then we go into more geographic detail. And here, the three main studies are the ones that compare both high-income and low-income countries, and they reveal this pattern. Out of the 30 papers included, sometimes there's only a very small subset that helps you answer your question, but those are the ones that get the most attention, because they're the ones helping you prove your point. Again, this is stuff that's a little bit abstract and not easy to get your head around. My strong recommendation is to read as many of these papers as possible, because the more you read them, the more you see how they're structured and why they've done the particular things they've done, and it becomes a lot easier to replicate that and use the same methodology. The next thing I need to cover is discussion sections. These are probably the most significant part of any paper. A discussion section, as I've put here, is like a mini paper; that's pretty much the best way I can describe it. You basically start by covering why you did the study and your most salient results. Then you go through other studies in the field, comparing and contrasting your work (they had insights, you had insights), and you start to synthesize those to develop a way forward, essentially providing people reading your paper with little nuggets: these are future studies that can be done, these are implications for future practice. You're laying out the land for what's coming next. Then you want strengths and weaknesses. I like to think of this as a chance to rebut the comments that might come up from peer review or from people reading your paper, because there are some very glaring, obvious things that come up with meta-analyses and systematic reviews again and again; I'll go through those when I show you two examples. Lastly (I've put it here in case your journal wants it this way), you give a conclusion and a summary of your main work and its implications.
That's if you don't have a separate conclusion section; I've also shown a separate conclusion section at the end, just so you can see what it looks like. If your journal wants it all put together, just put it together. It doesn't really change anything except that you lose the separate conclusions heading, so it's not a big deal. So, to start off, this is the overarching summary (again, this is the CRT paper) that we cover in the first few paragraphs. We've said: this is the largest meta-analysis for CRT, and the most relevant findings are as follows. CRT is beneficial for survival; intraprocedural mortality is low; it improves NYHA class; and for patients with left bundle branch block and left ventricular dilatation, there's additional benefit. So we've essentially summarized the key findings from all of those different meta-analyses in four or five points. That really nails down the key essence of why we did this project and what we achieved, in one paragraph. Then we've gone on to say: let's see what's out there. The prognosis of end-stage heart failure is limited; there's some scope for LVAD or medical management, but these have pretty poor survival: 6% as a whole, and 25% in the REMATCH trial. However, in the discussion section you can start to raise points like: these patients weren't all inotrope-dependent, so can we be certain it's an appropriate comparison? You start to use your brain critically and think through these problems. So what we're essentially saying is: alongside CRT, you have LVAD or heart transplant, which you can consider as alternatives because that's what the guidelines say; however, they have their own challenges and limitations. In the context of those, we can then talk about where CRT fits in. We can say CRT is less invasive, with a lower risk of complications and a relatively high level of safety. So, as a result of what we've done, the future lay of the land is: let's look at CRT more seriously, and not just treat LVAD as the main way to manage this. We have a potential therapy that, based on our review, is actually pretty decent. We're building up that evidence base and showing, in line with our research question, where this might go and the benefit it offers to clinical practice. We've also covered differences by patient group: for example, people with left bundle branch block are more likely to respond to therapy, and that's quite useful because there's a particular mechanism by which CRT achieves that. That's also very important for clinical practice, because what we don't want is a reader thinking "CRT is amazing, we'll just give it to everybody"; that would be poor practice. In line with evidence-based medicine, we need to acknowledge that different patients will respond differently to this therapy, and if we have data to inform that, we want to report it as clearly as possible. It's in line with our duty as scientists to put down all the evidence and to show the direction in which the data takes us.
But yeah, that's pretty much it. In a lot of these situations, it's mostly about laying out your argument. When you write these discussion sections, you need to have in your head a point that you want to make. With us, it's very clear: let's look at CRT more seriously. At the moment we focus on other things, but that's not necessarily fair, because CRT is actually good in comparison. So we've laid out the land: this is the current survival rate with the current alternatives, it's not great, and CRT is, and should be, part of that conversation. We have a very specific way of introducing CRT and a very specific way of navigating the literature. And what we have to recognize is that, as a scientist, you answer things based on your research question and your research aim; as long as you do that, that's what you need to talk about within your discussion. But we also need to identify and point out things that are different. For example, we have specified that certain studies had lower survival rates than we did, but they're not the same patient group, and we need to bear that in mind. It's very important to raise things like that, because critical thinking, appraisal of the literature, and analysis of where the literature takes us are what make a high-quality discussion section. It's important to be able to refer back to other trials, to refer to guidelines, and to navigate your way through that correctly. Fundamentally, we have a very specific direction in which we want to go, and we stick to that. Now, the next thing I'll go through is strengths and limitations. All of these biases have come up at some point in the teaching series, whether in meta-analysis, data extraction, or identifying papers. As you can imagine, all of these are biases because they shift the direction of your analysis and its results in a particular way. A small sample size affects the power of your analysis and the impact of any potential bias. Study design impacts bias, because bias may occur in observational studies, while clinical trials benefit from things like randomization (so there's equality between the groups) and from blinding, so that, in theory, neither the patients nor the researchers know who's in which group and can't impart their own biases onto the results. There might be differences between included patients, because the groups might differ, so they might have different outcomes and different responses to drugs and therapies, which again biases one group relative to the other. There might be differences in how we measure or define outcomes, which will change the results we record, and that can create differences between groups. And the last one is differences in the study environment: there may be differences in the geographical location in which things are done, which might affect any of the others as well.
For example, if you're studying surgery, and surgery is done with different techniques in different countries, that's going to affect outcomes; you might find that the skill of a particular surgeon impacts the way it's done and the results, so your groups aren't comparable anymore. It might be cultural, because things might be measured or recorded differently in different cultures. One example off the top of my head: I'm currently doing a study on ethnic disparities, and the way we measure and think about ethnicity varies between Africa, Europe and America, so we're not doing a fair comparison anymore, and that's going to impact the results we record. These are just the big things that come up. There are obviously hundreds of other biases that can be important, but for brevity I'll stick to what's most likely to come up; if there are particular niche things you see within papers, we can go through those privately in our own time. To start with, this is a meta-analysis we did looking at skeletonized versus pedicled coronary artery bypass grafting, and we've recorded essentially all the main biases here. The first is that the study designs are different and of different quality: some are prospective, some are retrospective, and when we do a meta-analysis, the bias within each study gets carried over into our meta-analysis. That's a limitation. The second limitation is that the definitions of graft failure differ, which leads to differences in how the outcome is measured and recorded, which could then affect our results. We've also somewhat covered sample size, because graft failure was only reported in a limited number of studies; so that variable will not have nearly as many patients as the other analyses, which obviously limits the usefulness of that analysis compared to the others. The harvesting technique is operator dependent: that's the environment, because different surgeons and different hospitals may differ in technique, which leads to differences in outcomes. And the last one is that the studies were performed in different countries, so again they might have different practices and different ways of doing things. One more is a little bit tricky and not on my list: this is a study-level meta-analysis. I don't believe we've covered individual patient data (IPD) meta-analysis; it's not really in the scope of this teaching series. Essentially, it means you contact all the authors of the individual papers, ask them for their raw data, collate that raw data into one big database, and then run the analysis on the raw data. If you run an analysis on study-level data instead, it's already been preprocessed, you don't always have all the variables, and so there are limitations on what you can do with it. You're taking on all of the biases and the processing that have already been applied to that data, and you just have to deal with it.
So it does limit the clinical variables you can include, and it limits the potential to investigate those relationships. It's quite a niche thing, and IPD meta-analyses are not done commonly, but they add a lot of statistical value, so they're very highly regarded; you'll see some of them in top-tier journals such as The Lancet or the New England Journal of Medicine, because they add a lot of value to the literature. Essentially, as you can imagine from this: anything that limits your ability to answer your research question, anything that takes away from the strength of your meta-analysis or systematic review, is a potential limitation. And like I said before, think about it as flagging these up before your reader or your peer reviewer will. This is another one we did; for this study, the same main things apply. We've combined limitations with future directions here; that's something some journals will ask for and some won't, but it's quite helpful to talk about anyway. The limitations specified here are the sample size and the lack of RCTs: all the studies are observational, and we tried to find case-control studies, but that didn't really help. And because of that, the patient populations included have differences: some differ in NYHA class, some in age and ejection fraction. So there is variation there, and that does impact the result we get. In the context of these limitations, we start to think about what's next, and the natural next step is that we probably need an RCT to prove how good CRT genuinely is. It would also be helpful to think about potential subgroups and why it's going to help. For example, in our paper we found that left bundle branch block actually predicted response to CRT, so maybe there's a context in which we should look at therapies that address that within CRT, and that's a potential research gap someone else can pick up on. Essentially, what we're doing is seeing what we've left behind, what we weren't able to answer due to limitations or the scope of our work, and where to go with that; and that develops into a future research direction. The last thing I want to cover is conclusions. I've made it a separate section here because some journals want that; it really doesn't matter either way. It's simply a one-paragraph summary of your take-home message: what you did, what you found, and something for the future. It doesn't need to be complicated. To be honest, it's just a summary, and it follows on from your discussion, because you'll have already covered all these things there: what you found, the lay of the land, what's potentially wrong with your study through the limitations, and where future literature should point. You're just summarizing what is maybe 1,000 or 1,500 words into a little paragraph. It's not groundbreaking, and I'll show you that with an example. So, our conclusion here is: CRT may be an alternative to LVAD or heart transplant, because it's less invasive and has fewer risks, and there may be particular benefit in patients with left bundle branch block.
But given that there's limited data, we should do RCTs in the future to determine whether it warrants widespread implementation. That's it. We've summarized the question; we've demonstrated why it's important to the reader by telling them what they should think and do about it (in this case, it's an alternative to LVAD, particularly beneficial in those patients with left bundle branch block); and we've highlighted that future RCTs are needed, because that hasn't been done before and it's the limiting step in how we perceive this therapy and its future implementation. So let's fix that. And that's pretty much it. It's not groundbreaking, it's not adding anything new; it's just a very brief summary, probably not even 100 words, a tiny paragraph, and that's all you need to do. And that is everything I needed to cover for today. Do we have any questions? I don't think we have any questions in the chat yet. Oh, there's a question: "Are all the presentations available somewhere to revise?" Yes, they are; the recordings always get uploaded. Otherwise, if there are any questions you want to ask me now, go for it. If not, I'm happy to take emails or questions later, and a fair chunk of you will probably end up dealing with this stuff directly in the mentorship program, at which point we can work through it together. So that should be everything. I'll stick around for a few minutes if you want to talk to me about anything, but yeah, thank you for listening.