Session 6 - Data Extraction and Synthesis


Summary

This on-demand teaching session is suitable for medical professionals interested in systematic reviews and meta-analysis. The sixth session of the lecture series focuses on data extraction and synthesis in systematic reviews. Presented by the founder and president of the NMRA, a PhD student in Cardiovascular Science, the session offers an in-depth look at the systematic review process, with examples drawn from previous papers. Attendees will learn how study characteristics, patient characteristics and outcomes are extracted for a meta-analysis, and how to structure them efficiently so that future readers can understand the study clearly.

This session focuses on making research more accessible to students, medical professionals, and anyone considering entering research. By systematically breaking down the components of high-quality evidence and demystifying the content of the Cochrane Handbook, the session gives learners the tools they need to understand and synthesize complex information and to take control of their own research. Join in to build your independent research knowledge and explore procedures for efficient data extraction and synthesis in systematic reviews.

Generated by MedBot

Description

Delve into the NMRA Academy Teaching Series, an engaging educational program for those who wish to learn how to run systematic reviews and meta-analyses.

This series will be delivered by experts in the field and by the NMRA committee, and we will provide you with all the tools you need to carry out your own systematic review and meta-analysis (SRMA).

Join us for this 10-lecture series:

1. Introduction and refining your research question

2. Writing your protocol and selecting inclusion and exclusion criteria

3. Creating the search strategy

4. Screening

5. Risk of bias assessment

6. Data extraction and synthesis

7. Meta-analysis part 1

8. Meta-analysis part 2

9. Interpreting results and writing your paper

10. Getting ready for submission: referencing and paper formatting

Learning objectives

  1. Identify and understand the key principles behind a systematic review and meta-analysis, including data extraction and synthesis.
  2. Demonstrate proficiency in extracting relevant information from academic papers to be included in a systematic review.
  3. Understand and be able to discuss the different steps and processes involved in conducting a systematic review, from defining the research question to data synthesis.
  4. Understand the importance of risk of bias assessment in the systematic review process, and how to effectively minimize and account for bias.
  5. Develop the ability to critically evaluate the quality and relevance of published research and appropriately integrate and synthesize this information in a systematic review.
Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Do we start? It's just loading, it's taking a bit of time. Sorry. Are we live? Yeah, we're live. Do you want to maybe try sharing just your slides? OK, yeah, I'm already on the slides. OK, perfect.

So hi, everyone. Welcome to the NMRA's sixth session. Today's session is a continuation of the systematic review and meta-analysis lecture series, and it covers data extraction and synthesis in systematic reviews and meta-analyses. Our speaker today is N, founder and president of the NMRA. He's a current PhD student in Cardiovascular Science at the University of Lato and a previous medicine graduate from UCL. So I'm handing over to you now.

Cool, thanks. Hi, everyone. We're back for the NMRA Academy; welcome to all the prospective mentees and anyone else joining. For today's session we're covering systematic review data extraction and synthesis. I've kept it quite brief, because I feel this is a topic that is best learned through practice rather than me telling you how to do it, but I've tried to include as many examples as possible from papers we've done in the NMRA, or that I've done myself, to illustrate how it looks in practice.

Just to start off, as usual, a bit of background. The NMRA, which I founded in 2022, is a non-profit that tries to give students, young academics, doctors, and anyone wanting to get into research but struggling to do so, a platform through which they can achieve that, by breaking down barriers to research entry and offering opportunities for development, skills, networking and the confidence to run genuinely independent projects. It works very much in the same spirit as a mentorship scheme: you can come up with your own ideas, carry them out, and hopefully add to the literature by the end of it.

This slide is the overall outline for the Academy: every Thursday at 6 p.m. At the moment we're on number six, data extraction and synthesis. What we have left is meta-analysis, where we'll talk through how to do it and give you all the steps and examples; then interpreting the results and writing up; and the last session is getting your paper ready for submission. By the end it will be a full whistle-stop tour of everything you need to know to do your systematic review and meta-analysis. If you're on the mentorship programme from, hopefully, next week, you'll jump straight in and be able to follow everything we've covered, because you'll be doing the work from session one onwards, such as finding a research question, and working through it from there. Hopefully you already have that knowledge a little in advance and can stay ahead of the curve.

But that's all material I've covered before; it's just a recap. The main meat of today's session is: what do you actually do to extract the information you need out of those papers, and how does that turn into something you can synthesize? From the previous sessions we've established that you will have screened your search results, and from there identified which studies are most relevant to your systematic review and meta-analysis based on your inclusion and exclusion criteria.
In my last session I also covered how to assess risk of bias, so you will be able to judge how good those studies are: do they have bias, do they accurately represent the literature, are the trials well conducted, is there a lot of variation within them? You'll be able to make those judgments and you'll know how good the literature is that's going in. With that foundation, the next step is to find the relevant information in those papers so we can start to put the review together.

Extraction is quite a simple process. As I start talking about it, you'll see that it mirrors your research question and what you want to get out of the review itself. This is the checklist that comes from the Cochrane Handbook. I can't recommend enough that you have at least a cursory knowledge of the Cochrane Handbook. It is quite lengthy and goes into a lot of detail, so you don't necessarily need to read every single word of it, but knowing its content and the principles behind the high-quality evidence it promotes is really going to help you.

As presented, this is a mess of all the different types of things you can find in a paper; if you read through a paper you will find all of these in various places. But there's a method to the madness. What I'm going to do now is break it up into little chunks, and you'll start to see why, because these chunks are what you'll be presenting in your study.

The first chunk is the starting point: it's about the study itself, what kind of design it had, how it was conducted, where it came from. It's very much the metadata, the information about that study and how it was done. You'll also notice that all of this, design, sequence allocation, blinding and so on, is essentially a risk of bias assessment, so while you're extracting you can tick that off as well and kill two birds with one stone. That keeps your work as efficient as possible and stops you doing things twice, because the last thing you want is to spend all that time reading a paper for the risk of bias assessment and then have to go back and do the whole thing again just to get the rest of the data; that's an inefficient way to work.

This chunk essentially forms your study characteristics. You take all the key information about the study: who the authors are, what kind of study design it is, what level of follow-up it had, whether it was a blinded study, a prospective study, an observational study, everything about how the study was conducted and what kind of methodology it uses. That guides the reader as to where your data comes from and what kind of papers constitute it. That's very important, because a reader needs to know what forms the basis of your systematic review, and it will essentially be the first table in your results section: you'll illustrate all of this in a nice table. But first you have to extract it from all the studies.
Once you've covered the study, the next thing to cover is what the study is about. So the next chunk is all about the participants: the number of patients, where they come from, the country, whether they're from a hospital or a primary care setting. You can go into demographic factors like age and gender, and you could cover deprivation, ethnicity, whatever is appropriate to the question you're answering. It can also be helpful to record comorbidities or factors associated with what you're studying, because that might indicate how unwell these patients were and whether that affected the disease you're studying. Again, think about this from the perspective of your research question, but also from the perspective of the reader trying to understand it: they need to know who the patients being studied were and what that means for the analysis you're going to do. That becomes your table two, your patient characteristics. I say participant rather than patient because some trials don't have patients, so it's a slightly broader term. So that's table one and table two, and in most meta-analyses you'll start to see this pattern; I'll give you a couple of examples in a second.

The third chunk, and this is not necessarily a table, it can also just feed your meta-analysis, is your outcomes. This might be your mortality rate, the risk of a disease happening, and how it was measured. You record where that data comes from and how it was measured, and you might even note down p-values. If comparisons are being made, you note what the comparison was: it might be an odds ratio, a hazard ratio, or just a percentage difference. Whatever it is, you need to note it down so that you can go back and talk about it later, or because you're collecting that data to do a meta-analysis across all of those studies. So it's very important to look at outcomes, and this is where you have to be very thorough. You want as much detail as possible: subgroup analyses, how long the follow-up was, what the measurement unit was, and any differences in how outcomes were reported. You might have outcomes measured as a percentage in some studies but as a raw number in others; make sure it's consistent wherever possible, and where it's not, note down how it was measured and why the studies differ.

You'll notice there's a chunk at the end for miscellaneous items: things like funding, comments, references to other studies. Those are for your own interest and potentially things to talk about in the discussion section of your article; they're not things that will go directly into your results. So if they come up and they're appropriate, note them down, but they won't necessarily feed into a table as clear-cut as the others. So you have a table one, a table two and a table three.
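To make that concrete, here is a minimal sketch in Python of what a pre-specified extraction template covering those three chunks might look like. The field names and the example study are hypothetical, and in practice many teams would simply use a shared spreadsheet with the same columns; this is only an illustration of the idea of deciding the fields before reading the papers.

```python
import csv

# Hypothetical pre-specified fields, grouped into the chunks described above.
STUDY_FIELDS = ["study_id", "first_author", "year", "country", "design",
                "blinding", "follow_up_months"]
PARTICIPANT_FIELDS = ["n_participants", "setting", "mean_age", "percent_male",
                      "key_comorbidities"]
OUTCOME_FIELDS = ["outcome_name", "outcome_definition", "measurement_unit",
                  "effect_measure", "effect_value", "subgroups_reported"]
MISC_FIELDS = ["funding_source", "reviewer_comments"]

ALL_FIELDS = STUDY_FIELDS + PARTICIPANT_FIELDS + OUTCOME_FIELDS + MISC_FIELDS

def new_record() -> dict:
    """Return a blank extraction record; 'NR' marks data the study did not report."""
    return {field: "NR" for field in ALL_FIELDS}

# Example: one entirely made-up study, with unreported items left as 'NR'.
record = new_record()
record.update({"study_id": "smith2020", "first_author": "Smith", "year": "2020",
               "design": "RCT", "n_participants": "120",
               "outcome_name": "30-day mortality", "effect_measure": "odds ratio"})

with open("extraction_sheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=ALL_FIELDS)
    writer.writeheader()
    writer.writerow(record)
```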
Those three chunks are very clear-cut, and together they're essentially everything you need to extract for the majority of interventional studies; for studies with clearly defined inclusion and exclusion criteria and a clear outcome they're trying to measure, this will be enough.

Let me show you an example in action; I'll come back to the other point in a second. So, study characteristics: where the study was done, what technique was used, what kind of follow-up they had, what the population and sample sizes were. These are your standard study characteristics; it's the information that defines who and what each study was looking at. Now, this paper was a meta-analysis we did, so you'll notice there are a lot of numbers, which won't necessarily always be the case. When you do data extraction you might find you have to pluck out pieces of text from tables or from the body of the paper. Also, this is obviously a table from the published paper itself, so both steps, extraction and synthesis, have already occurred. But here there's nothing to synthesize: if the study tells you 84% of participants were male, that's it, nothing more is needed, that's what you report. So I hope it's clear that this table comes essentially straight from the extraction.

Table two here is again demographic characteristics: we've covered comorbidities and what kinds of drugs the patients were on. This gives the reader an idea of whether, within these studies, the patients have different drug characteristics, different risk profiles, and whether they're genuinely representative of what you'd find in the real world. If you know this information you might start to see differences, and you can say: hang on, in this study 88% of patients have this characteristic, does that make a difference? Clearly it might, because the medications patients are on will differ, the way they're treated may differ, their outcomes may differ, and there may be drug interactions. All of these things are important to know, because as a reader you're thinking: why did they do what they did, and how does that affect how I apply this paper to my patients? The trick behind the best research is that it's done with an end user in mind, to help them with a specific research problem. So it's very important to give this context and make sure whoever reads your paper understands these things.

As I mentioned, we did a meta-analysis in this paper, so there's no table three; the outcomes were analysed directly and presented as forest plots. You wouldn't present the same data as both a table and a forest plot; you don't need both, it becomes counterintuitive and just takes up unnecessary space in your paper.

Another thing to notice is the use of NR, not reported. If a study doesn't include a data item, you just put NR. That's not a hard and fast rule, and sometimes there are ways around it, which I'll talk about. But in our case we simply said: if the study doesn't have the information we need, we can't produce it.
So we accepted that as a limitation: that data simply doesn't exist for us. And that's completely fine, because it's consistent; it's something we decided beforehand and didn't go back on. As long as you're consistent with what you choose and transparent about why, the reasoning will make sense to a reader: fair enough, if you can't find the data, what can you do? There are, however, ways you can get around this, which I'll come to. First I'm going to backtrack a little and talk more about the process of data extraction, so that this makes more sense in context.

Just like screening, this is all you need to know about the extraction process at an overall level: it's done by at least two authors independently, and if there's any discrepancy you need a mechanism by which to resolve it. Often that's just a discussion between the authors doing the extraction; maybe one of them missed something, it might be human error, or they may have looked at different data sources. That mechanism should be decided in advance and ideally mentioned in your protocol, so it's transparent what you're doing. When you extract, you extract pre-specified items that you want to take out of every single paper, and you collect them into a pre-prepared table or spreadsheet.

You decide what you want to collect before you look at the papers, in the majority of cases. Sometimes there are things you only find when you read the papers, something unexpected that you think would genuinely help your study, and that's fair enough; you can occasionally add things in spontaneously. But in general you don't want to do that, and I want you to think amongst yourselves why that might be. Is there a reason that, if you don't pre-specify and you just pick out whatever you want from the papers, that might not be the best research practice? Have a think, maybe put some answers in the chat and discuss if you want.

One other thing that can be helpful, and it doesn't breach what I just said about pre-specifying, is to trial your spreadsheet. When you're experimenting and working out what you need, run it on a small pilot sample of studies and check you're actually capturing all the relevant things you want. At that point you can chop and change; it's not the end of the world, because you haven't committed to a particular way of extracting data, you're just trialling it on two or three studies. So that's pretty much everything you need to know for the extraction process itself.
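Since extraction is done by at least two authors independently, a small illustrative sketch of how discrepancies between two reviewers' sheets could be flagged automatically, before they are discussed and resolved, is shown below. The file names and fields are hypothetical; many teams do this by eye or with spreadsheet comparison tools instead.

```python
import csv

def load_sheet(path: str) -> dict:
    """Load an extraction sheet keyed by study_id (one row per study)."""
    with open(path, newline="") as f:
        return {row["study_id"]: row for row in csv.DictReader(f)}

def find_discrepancies(sheet_a: dict, sheet_b: dict) -> list:
    """Return (study_id, field, value_a, value_b) wherever the two reviewers disagree."""
    discrepancies = []
    for study_id in sorted(set(sheet_a) | set(sheet_b)):
        row_a = sheet_a.get(study_id, {})
        row_b = sheet_b.get(study_id, {})
        for field in sorted(set(row_a) | set(row_b)):
            a = row_a.get(field, "MISSING")
            b = row_b.get(field, "MISSING")
            if a.strip().lower() != b.strip().lower():
                discrepancies.append((study_id, field, a, b))
    return discrepancies

# Hypothetical usage: each reviewer fills in the same template independently,
# then flagged cells are resolved by discussion (or by a third reviewer).
if __name__ == "__main__":
    for study_id, field, a, b in find_discrepancies(load_sheet("reviewer_a.csv"),
                                                    load_sheet("reviewer_b.csv")):
        print(f"{study_id} | {field}: reviewer A = {a!r}, reviewer B = {b!r}")
```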
There are a few things to bear in mind when planning ahead for common problems. Missing data is a big one, which I showed you in that table. If you have ten studies, not all ten will report exactly the same variables, so some will differ, but there are potential solutions: sometimes you can work backwards from other presented data. For example, Kaplan-Meier curves show survival trajectories, and there are tools into which you can put the published figure and get an estimate of the underlying data, because they effectively calculate it backwards by reading the graph very carefully. Sometimes, if you have a mean and standard deviation, you can work backwards and transform that into a median, or vice versa, and figure things out that way. Alternatively, you can contact the authors of the original study and ask for their raw data to fill in the gaps. You might see that mentioned in a few trial protocols and meta-analyses of trials. It's not something that happens very often, and it's not always fruitful, because there's a very reasonable chance that the reason they didn't include it in their study is that the raw data doesn't exist to report. That's something to be mindful of.

The other thing you sometimes have to think about is differences in outcomes. You might have outcomes reported on different scales, or measured in different units, and that can be confusing and challenging to interpret. A potential solution, which you may document in your protocol and then carry out, is a transformation: you can use a statistical tool or a formula to make sure all the variables are in the same units after the transformation, so that everything is consistent. That allows you to present cleaner, more refined and more appropriate data.
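As one concrete example of working backwards from other presented data: when a study reports a median with quartiles instead of a mean and standard deviation, a commonly cited approximation (along the lines of Wan et al., 2014) can be used to estimate them. This is a rough sketch of that idea, not the method used in the session's own papers; whichever formula you use should be pre-specified in your protocol, and the numbers here are invented.

```python
def approx_mean_sd_from_quartiles(q1: float, median: float, q3: float) -> tuple:
    """Rough approximation of mean and SD from a reported median and quartiles.

    mean ≈ (q1 + median + q3) / 3 and SD ≈ (q3 - q1) / 1.35, which assumes the
    underlying data are roughly normal; more refined formulas also use the
    sample size, so treat this only as an illustration.
    """
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

# Hypothetical study reporting length of stay as median 6 days (IQR 4-9):
mean, sd = approx_mean_sd_from_quartiles(4.0, 6.0, 9.0)
print(f"estimated mean ≈ {mean:.1f} days, SD ≈ {sd:.1f} days")
```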
The last point on extraction, which I think is quite important and which people sometimes get a little wrong, is that you can over-extract data. I've seen people literally copy and paste a whole paragraph from a paper straight into the extraction sheet. That isn't conducive to good research practice, because you need to be much more specific about what you're including and why. It's not as simple as saying, well, this mentions the same thing as the column in my table, so I'll put the whole thing in. You need to nail down specific data points, or specific sentences and phrases within the paper, that give you exactly what you need. This can be quite challenging when you have two different reviewers: maybe one has taken something succinct from the abstract, while the other has copied a chunk out of the methods section. Now you have to decide whether one person under-extracted and missed information because they only looked at the abstract, or whether they were just being succinct and the other person went too far and took far too much, without the appropriate context and not necessarily what you needed to answer that specific research question. That's where you need to be confident that you're both looking for the same things, and be very specific about what you want and why.

Cool. This next part is just for context. When you write a protocol, as we covered in an earlier session, you can include sample tables that the authors will fill in in accordance with that protocol. That level of detail really helps, because it shows a reader exactly what we did and exactly what we took out of all the studies. And, as I always say, the best research is research that can be replicated: you could take these tables and fill them out yourself, and if we've done it right and explained it properly, you should get exactly the same results we did. That shows it was a high-quality, well-conducted review, because it's so systematic that you can copy it. Just to illustrate, here are the rest of the tables included in that protocol: the interventions, the outcomes, and the patient characteristics, your table two, all pre-specified exactly. These are a must; you can't skip them. We also stated that if data was not reported in a study, we assume it's missing, and that's exactly what we showed in those tables: if it's not there, we assume it's not reported, and because we specified that in advance, it's consistent. That's really the crux of it.

So, in a nutshell, that's all you need to know about the process of data extraction. It really is that simple: pick out what you're going to collect, make your tables, and as long as you're happy the research question is appropriate and you're collecting all the data you need to answer it, you'll be fine. I've given some example tables and example write-ups, so when you get to that stage in the programme you're more than welcome to take some of this and get an idea of what to do.

Cool. So, data synthesis. Actually, are there any questions before we go on? Sorry, is there a folder with the slides from the sessions so far, for future reference? I don't think so, I haven't made one, but all the recordings do get uploaded eventually; the platform takes its time sometimes. So you can flick through the recordings and see the slides that way. I can try to put up some of my own slides and ask around for the others, but I can't promise, because some of these sessions aren't mine and other presenters might not be keen to share their slides. But I can certainly ask. Cool, thanks.

OK, getting back into data synthesis: there are two main ways to do it. There's qualitative synthesis, which is a narrative synthesis; you'll see that term used a lot in qualitative studies and studies looking at broader questions, and essentially any systematic review that doesn't have a meta-analysis will probably have some form of narrative synthesis. The other is quantitative synthesis, which is your meta-analysis.
I'm not going to do a full meta-analysis today, because that's Connor's topic for the next two weeks and he's going to go into all the depth of it, so I won't tread on his toes. The type of synthesis you do essentially depends on what data you have. If you've clearly been given a lot of patient outcome data, say 20% of people died in one trial and 21% died in another, and you have a lot of numerical data quantifying hard outcomes like that, you're quite likely to end up doing a meta-analysis, provided the data actually lines up appropriately, it's consistent, and it's all from a fairly similar pool of patients. As long as it makes sense to do, quantitative synthesis is probably more appropriate. But in the vast majority of studies it isn't possible, because the data may not be there, or the study type is more conducive to qualitative synthesis: maybe it's a survey, maybe it's based on interview data, or maybe the numerical data is too inconsistent to support a meta-analysis. There are a number of reasons why a meta-analysis wouldn't make sense, and in that context the bulk of the systematic reviews you read and produce will probably end up with some kind of qualitative synthesis. The process by which you synthesize is very similar either way. I'm going to give an overview of it, and if there are any questions, or if you get to doing it in the programme or in other studies and find yourself stuck, come and ask me.

The main way we do this is through something called thematic analysis. It's a little confusing initially, so I'll talk through an example to make it clearer. Essentially you start by finding codes: you look through a paper line by line, pick out the individual things that are being said, and give each one a code. Once you've done that, you start putting the codes together into themes: codes that have something in common get grouped into a theme. You then turn those into analytical themes, grouped in a way that answers your research question. Now, that's confusing, right? What's the difference between a theme and an analytical theme? It's one of those things that takes time to get your head around, and the truth is it requires a lot of critical thinking, because you have to think about the argument you want to make and which points will help you answer your question; as you develop that theory, you can start putting the themes together and saying, this group answers this part of the question, and so forth.

If you cast your memory back to the first session, you'll remember the DAGs. The DAG model gives you an idea of a hypothesis; it gives you a framework for what you're studying and what constitutes it. In this example, if you're studying the link between teacher salary and student achievement, you might say that an increase in salary leads to better morale, which leads to teachers trying harder and students having better material; another path is a better working environment, which then means students do better.
Each of these arrows is a reason, a causal chain, that leads to improved achievement. When you look at the studies, you might find that an individual code refers to one of these links. For example, you might read a study that points out that, as a result of teacher salaries increasing, more able people are starting to teach; that would be a code, something identified in the literature. From there you might also note that teachers prepare lessons better, or that teachers employ a greater variety of pedagogical strategies. These are all individual arguments and points that you pick up from your literature, and as a good scientist it's your job to group them together and make the argument for why they're significant and how they answer your research question. So you can put the points on one side together and say, as a result of a teacher salary increase, teachers are better; and the points on the other side you might pull together and say, because of that, students work harder. Your analytical themes are the ones at the bottom, because those are the ones that answer your research question, and the individual points are the codes that lead to them. Having a good DAG gives you a framework and a theory, because you can start proving it right or wrong based on the evidence in your studies. A good synthesis allows you to combine arguments to make your point, because fundamentally that's what you're trying to do as a scientist: show that all this varied literature combines to produce a new argument and a solution to whatever research question you're answering.

So, whenever you do this, your synthesis should answer your research question, and the way you do that is by thinking about what you're trying to achieve. There are two main types of systematic review you can do. The first is an effectiveness review, which asks how good something is. You might look at the effectiveness of a drug therapy at treating a disease and conclude that, compared to a placebo, this drug is five times better. What you've determined there is both a direction of effect, an improvement in health, and a magnitude, five times more than placebo. Or it might be that 35% of people taking this drug achieve remission from cancer; again there's a direction, treatment improving health, and a magnitude, 35% benefiting. As you can imagine, these studies often lend themselves to meta-analysis quite easily, because when you have a direction and a magnitude, the best way to analyse that is to quantify it, and that data readily yields itself to meta-analysis. Implementation reviews are a very different type; they're more overarching, thinking about the process by which something is done.
They look at whether something is done effectively: are there barriers to it being implemented, are there things that facilitated it being done well? These are the types of questions you might answer with a survey or an interview; you might ask people what they think, or whether they've faced any barriers in doing X, Y and Z. Because that information is more likely to be qualitative, it's more likely to need this kind of synthesis. Once you identify what your research question is and what you want to contribute, you'll start to see how you're going to synthesize and answer it. If you know you're going to study barriers to, say, doing good-quality research, you start to think: what's stopping people? And once you know what's stopping people, those are the things you look at when you analyse and synthesize your data. You might anticipate things like financial problems or a lack of education coming up, and those are the things you can structure your themes around. Then, when you write your paper, it becomes very simple, because you write a paragraph on the studies that mentioned financial barriers, another on the studies that mentioned educational barriers, and so on; the themes naturally lend themselves to your write-up. As a scientist you want to present all of these arguments in a structured way, and this is your structure: your DAG and your research question. Because you've solved all of this and put it together, the reader has that framework built in; they know exactly what you did, and they'll thank you because you solved the question, you went from A to B and put everything in between together.

I want to talk you through an example. This is from a really good paper written a couple of years ago. The author was doing his PhD, and he literally wrote a second paper: he did the systematic review, and then wrote a second paper analysing what he did and how he got there. It's brilliant, because it gives you an insight into how people think about doing these kinds of reviews. His research question was about shared musical experiences in dementia: essentially, for patients and carers, how do they share the experience of dementia through music, and how does it affect them? While this researcher was going through the literature, these are the codes they came up with regarding how music can be supportive: it's a common platform, it helps with memory and learning, it creates a supportive event. They separated out all of these individual codes just from reading the papers, and from there they made this mind map and started to put it all together: music and learning can help with understanding, communication and musical connection, and it ties into how you feel.
Now, these are not necessarily 100% related: being a supportive structure and tying it back to memory and learning doesn't obviously connect, and you might ask yourself, what is the connection? That's where, when you move to themes, you can start to put it together. A really big theme they identified, which came up in 258 codes, is connection: connecting with memories, musical connection, music as a catalyst. They've taken the connections they made between the codes and a potential theme and put it all together, so it comes together when you think about the bigger picture. Another one they brought together, as you can see here, is the idea of an internalised haven, something soothing and predictable, which they've summarised as comfort and boundaries. That's an overarching statement based on their understanding of the literature and of what was actually meant in the studies. Remember, it's a whole study, not just four words; there's a full argument within that study about why and what exactly the patients were referring to when they said this. Having that understanding, and knowing where it comes from, really helps you structure where you're going with it. You can see the process: this person made the mind map, went away, looked at more literature, talked to their PhD supervisor, and then moved down into all of this. Having that kind of connection with, and understanding of, the literature is what gets you from A to B.

The final step they took was going from themes to analytical themes. What you'll immediately notice is that all of this work about support becomes one analytical theme: that music can be a supportive structure. Because as you combine things, you start to realise the themes will overlap if you're not careful. You have to make them as distinct as possible, and each one should answer a very specific argument within your research problem. So they've completely separated everything out: there's a theme about well-being in terms of shared activities, an ecological system, shared activities as they're experienced, and the fact that music is supportive. They've really tried to separate each analytical theme into its own unique chunk that doesn't overlap, because otherwise something like connecting and containing might overlap with shared dyadic experiences or with belonging. You have to be clear that you're making one argument for one point, because if the themes overlap it gets confusing. That process of really separating things out and nailing down what's being said, and for what, is what turns what seems like a massive pile of codes, 536 of them here, into something structured. (There's one little thing at the end here; just ignore that.)
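As a purely illustrative sketch of the bookkeeping behind that process, the mapping from codes to descriptive themes and on to analytical themes can be tracked in a simple structure, so you can see how many codes support each theme. The codes and theme names below are invented, loosely echoing the dementia-and-music example above, and are not taken from the actual paper.

```python
from collections import Counter

# Invented example: each code (a short label applied to a passage of a paper)
# is assigned to a descriptive theme, and each descriptive theme feeds one
# analytical theme that answers part of the research question.
code_to_theme = {
    "connecting with memories": "connection",
    "musical connection": "connection",
    "music as a catalyst for conversation": "connection",
    "soothing and predictable": "comfort and boundaries",
    "a safe, internalised haven": "comfort and boundaries",
    "singing together as a shared activity": "shared experience",
}

theme_to_analytical = {
    "connection": "music as a supportive structure",
    "comfort and boundaries": "music as a supportive structure",
    "shared experience": "shared musical activity and well-being",
}

# Count how many codes support each descriptive and analytical theme.
theme_counts = Counter(code_to_theme.values())
analytical_counts = Counter(theme_to_analytical[t] for t in code_to_theme.values())

for theme, n in theme_counts.items():
    print(f"descriptive theme '{theme}': {n} codes")
for theme, n in analytical_counts.items():
    print(f"analytical theme '{theme}': {n} codes")
```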
Another example comes from JBI Systematic Reviews; it's a pretty simple but helpful guide on how to get started with this kind of synthesis, so give it a read if you want. What they've done here is look at a study of patient perceptions of MRI scanning. They've coded the findings into different themes: people described it as being out of this world, as a different world, as submitting themselves to a medical context. These are distinct things, but you can put them together into slightly more combined themes, and at the very end you can see the whole thing drawn together into a full finding: an alien experience, being, in the participants' words, swallowed. What they're deriving is that an MRI is nothing like any normal experience people have on a day-to-day basis; that's why all of these can be grouped under the closest possible description, an out-of-this-world, alien experience. You'll notice they deliberately reuse the words that were in the codes, so that they're not detracting from the meaning or going too far; they're almost paraphrasing, trying to say the same thing, so that the synthesis stays as close as possible to the original and doesn't end up imposing the reviewer's own interpretation. It stays true to what the papers themselves report.

Sometimes you don't really have to change much at all. In this case, with fear of the unknown and MRI scanning being a broadly unknown encounter, there is a little bit of interpretation: they're saying that if you have a fear of the unknown when you go into an MRI, that tells you that you can't know what it is until you do it, so they're reading between the lines slightly and saying you need to experience it to understand what it's going to be like, and then you won't have that fear anymore. There's a little bit of reading between the lines, but not so much that you detract from or change the overall meaning; you want to stay as close as possible and reflect it adequately, because otherwise you change your arguments, your entire DAG falls apart, you're adding in your own arguments and chopping and changing things, and it stops making sense. At the end of the day, when you do your synthesis, you want to have everything you need, so that when you write your paper this whole framework is 100% transparent.

I think that's everything I need to go through today. If there are any questions, anything in the chat or whatever, just email me and let me know. Cool. Also, guys, don't forget to fill in the feedback form; did you put that link in? Yeah, the link is in the chat. OK. Like I said, if there are any questions or anything that comes to mind, you're always welcome to talk to me about it in the mentorship scheme, email me, or put it in the chat now; you can also put things in the chat later, because this is an open meeting and I can come back to it. But if there's nothing else, do let me know; otherwise we're going to wrap up for today. Cool.