Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
Hello, everyone. My name is Raymond, and I'm the national research director for Scotland this year. Welcome to the fifth part of our critical appraisal webinar series. Today's topic is systematic reviews, and we are very happy to have Dr Pabellon as our speaker today. Dr Pabellon trained in academic public health and has over 20 years of experience as a public health educator and researcher. She is a senior lecturer in public health at the University of Aberdeen, where she leads the Master of Public Health programme and teaches on several undergraduate and postgraduate programmes. She has over 90 research outputs as publications in high-impact scientific journals and as poster and oral presentations. Please feel free to type in any questions you have at any time during the presentation; Dr Pabellon will address your questions in the Q&A session at the end. After the Q&A, it would be great if you could fill out the feedback form, for which I will send the link later in the chat; a certificate of attendance will be generated automatically for you after completing the form. So, without further ado, let's invite Dr Pabellon to share her presentation on systematic reviews. Thank you very much.
Thank you so much, Raymond, and thank you for your lovely comments. I'm assuming everybody can see my slides. As Raymond said, I'm Dr Pabellon, and I'm based in the School of Medicine, Medical Sciences and Nutrition in Aberdeen. About the title I've put up there: although we're thinking about systematic reviewing, there is a purpose to why we do it, and that's why I've framed this talk around informing practice and policy. At one point or another, when you all graduate and become consultants somewhere, you will take centre stage in informing practice and policy; it doesn't matter how small or how big the institution you work in. When it comes to that, it becomes really important to assess the evidence and critically appraise the literature around us to inform practice and policy, and that's why I gave the talk that title. I've already mentioned who I am, so to give you a little more about my origins: I come from India, and I trained in medicine at the medical college we call CMC Vellore, in the south of India. It's a very big hospital, about 2,500 beds and 28 specialties, and it's an institution which has not only secondary and tertiary care in the main hospital but also satellite centres looking at rural health, and an urban centre in the urban slums, so it covers quite a wide variety of settings. After I graduated, I was posted as a medical officer working in leprosy in one of the remote, rural areas of South India, and that's where I developed my interest in public health. I moved to the UK in '98, and from working in infectious diseases I moved into non-communicable diseases; I did my PhD in NCDs, and now most of my work is in diabetes and cancer. As Raymond said, I'm a senior lecturer, I coordinate the MPH, and I also do a little bit of healthcare education research, on how to teach, but in a discipline-specific way. Methodologically, although I do most of my work in public health, I do systematic reviews and I do qualitative research. So that's a little bit of background.
So you might be sitting there thinking: okay, systematic reviews are quite a popular thing, a hot topic, but I'm a clinician; do I really need to know how we look at evidence? Well, as I said, at some point in your career, and it doesn't matter whether you're an FY1 or a consultant, at least in a clinical setting, you'll be sitting in your office thinking about how you're going to treat people. Where does that decision, as a doctor, come from? From experience; and from what I've seen before as a clinician and as a researcher, it's usually informed by what you read around you, like an academic paper from the BMJ, which we take as gospel truth: you think, gosh, it's published in the BMJ, so it must be so. So if I'm thinking about what to give for, I don't know, fever, paracetamol or aspirin, part of your evidence could come from an academic paper. But there are also people around whose job is to convince you, like your medical reps: they come to you, you meet them at conferences, they're everywhere, whatever discipline you're in. And you have your seniors, your senior consultants, who will say: this is how we do it, this is our departmental policy, we give aspirin to everybody who comes in with such-and-such. So each one has an agenda, in one sense. The scientists are putting their case forward; the medical reps have their own agenda, they're there to sell their products, and whatever it is, they will try to convince you; and your senior consultants have a tradition linked to the department, which is another influence. Out of all of this, what's the most unbiased source, if you want to make an independent decision about your practice, and if you become senior enough to set the policy of your department, your hospital, or a region, for all you know? This is where collating the evidence becomes important, because if you want to inform policy and practice, you need the best evidence that's available. So how does this come about? Sometimes you're given a question by somebody: your head of department comes and says, hey, I want to look at such-and-such. The examples I've put out there are very simple, but it could be a complex surgical technique; it could be anything. Somebody comes to you at a senior level and says: look, we need an answer to this question, go and see what's out there. Or something is rumbling in your own head, whether you're a registrar or a consultant, something you want answers for. The four questions I've put out there are simple and deliberately broad: they range from a clinical condition where you're looking for a particular drug to treat it, to something as simple as barriers to dental check-ups in young people, to looking at associations. If you become a geriatrician, you're dealing with dementia, and you might wonder whether there is an association between vegetarian diet and dementia. So these are the different kinds of question that arise, and we need to find answers to them. So where do you start, when these questions are either given to you with the expectation that you'll go and find out, or you want to find an answer yourself?
The literature is out there: this is a world full of papers; peer-reviewed and un-reviewed literature is everywhere. So where do we start? There's something we call scoping. Scoping the literature means going to see what the landscape is for that particular topic. If you can see my arrow pointing: the literature is out there, and you go and put in a few words relating to the topic. For example, if it's arthritis and paracetamol and aspirin, you just go into the literature and look at what's out there. What that does is contextualize what you're looking for, and it highlights the problem. In your mind you're thinking, I want to see the difference between paracetamol and aspirin for treating arthritis, but there could be some other drug out there. So scoping brings into focus what you want to look at and crystallizes your topic. It doesn't bring in all the evidence that exists, but it crystallizes a little more where you're coming from: why are we even looking at arthritis? Why are we concentrating on aspirin and paracetamol and not some other drug? So it crystallizes the problem and then clarifies the question for us. From there, if there is a research question that needs answering, there are two things we could do. One is to design a study, conduct it, and find an answer; but that answer will come from a very small area, or the very small hospital where you're working. The other is to go and see what other people have done to answer the same research question. That's how my presentation will proceed: first I'll tell you when we do a primary study to answer the question, and when we instead look at what other people have done, and what the advantage of that is. Suppose you're looking at the same question: is paracetamol better than aspirin for arthritis? You do the scoping: you throw in some words, aspirin, paracetamol, arthritis, into, I don't know, MEDLINE or Google Scholar to start with, or you go into EMBASE, still just putting in a few words to see what's out there. If there are no primary studies at all in that area, on that novel drug or novel technique that's coming up, then you have to do a primary study, because there's nothing out there. So you design a study, you collect data, you analyse, and you answer your question. That design could be a survey, because you can go and find everybody who has arthritis and ask them about their pain: have you been using paracetamol or aspirin? Or you could do a trial: take a group of people with arthritis, give one group paracetamol and the other aspirin, and find out which is better. As I said before, that answer will only relate to the population where you were able to do the study: in a community of, I don't know, 1,000 or 100,000 people, the answer comes only from a sample of people in that community.
Sometimes you will find studies, but they will have been done in other populations and other countries, with nothing in the country and population you are interested in. In that case, again, you can do a primary study, but your justification will be: I found an answer on whether paracetamol or aspirin works in arthritis in Nigeria, but there's nothing done in India, so I'm going to do a primary study in India. The other way of collating that evidence is to see whether there are other studies across the world already looking at the same question. That takes us to all the published literature around the world, and I say published, published in academic journals, because publishing has a process, a very robust critical process, and that's why we as scientists rely more on that published literature than on opinion pieces or blogs and things like that, which don't have that peer review. That's why this pyramid here is called the hierarchy; probably you already know about this, it's called the hierarchy of evidence, originally identified by Oxford University. When you look at it, the evidence which informs clinical practice guidelines comes from systematic review and meta-analysis, at the top. The reason is this: when a drug is developed, as you know yourself, it starts with animal and laboratory studies, with no humans involved, and we cannot make clinical practice guidelines based on that. The next step up is case studies: a rare condition presenting with the side effects of a drug, or a case report somebody has found; and expert opinion, people saying, I think this is what we need to do. Then come the observational study designs, all primary studies done in different parts of the world. In case-control studies we go backwards, and people can't remember what they've done: if you do a survey asking, you have arthritis, what treatment have you been taking, how many people can tell you exactly? So bias can come in. Cohort studies are prospective, with follow-up; a cohort study could be the most perfectly done study, yet done in a very small population in one corner of the world. Still, it's a very robust observational design with follow-up, so it keeps going up the hierarchy. Then the trial is the gold standard at the moment, because we give the two drugs to two different groups of people and, at every stage of the trial, we remove bias. But the systematic review collates all that existing evidence together. For some research questions there will be all of these study designs; if there are enough trials, that's what we should take. If that's not there, if the question hasn't reached the trial stage and only the bottom three levels of evidence exist, we can still do a systematic review, but at a lower level of evidence. If we have enough trials, collating them together and conducting a systematic review is what informs our clinical practice, because it sits at the top of the hierarchy of evidence. Okay, Raymond, do I keep carrying on, and then we'll take the questions at the end, right? Yeah, that's absolutely fine. Okay, so what do we mean by a systematic review?
I kept talking about scoping at the beginning, and now I'm talking about the systematic review: the collating of evidence I mentioned, collating all the papers that are published, because publication has a rigorous process before work gets into the journals. We want to do that collation systematically, and the more systematically we do it, the more it deserves the name systematic review. It's not rocket science, but there are some stages that make it systematic, and those are what we follow. So, going back to the same question so we can follow the process: you need a clear question about what you want to look at, because when you turn to the evidence literature there is so much of it. The definition of a systematic review starts with a clearly formulated question; we'll talk more about how we get there. Is "does aspirin help treat arthritis" a clearly formulated question, or does it need more? We'll look at that. Then we systematically collect all the primary studies that are out there across the world, and that collection has a robust method. Then you critically appraise the research you collect, you analyse, and you summarize. That, in a nutshell, is what a systematic review is. Why is it important? That's why I've got this picture here. If you want to compare aspirin with paracetamol, a simple question will be: does aspirin help treat arthritis? And there may be multiple studies: in India they've covered both genders with a sample of so many hundred; in the US, we don't know, probably both genders, a slightly bigger study, a randomized controlled trial, which is better than a clinically controlled trial, with a bigger sample size; then there seems to be another study in the UK that only looked at women. As you can see, for the same question there are several studies across the globe that have been done and published. Our job is to pull the evidence from all the studies together, following a robust process, which is what makes it a systematic review. Just to recap the difference between scoping, or what we call a literature review, and a systematic review: it's all literature review, but when somebody asks whether something is a scoping review or a systematic review, the scoping is done to identify the knowledge gap. It's the first thing you do: you go into MEDLINE and put a couple of words in to see, has anybody looked at whether aspirin treats arthritis? It's not a rigorous search, and it isn't expected to be; the more rigorous you are, the better you will see the landscape, but it's not a very rigorous search, and it tells you whether other studies have been done or not. So it identifies the gap, and it will tell you: yes, there are some primary studies on the research question you're interested in, but nobody has put them all together. That makes it a precursor to a systematic review, and then we can go and do one. But if no studies have been done at all, there is no reason to do a systematic review, because there is nothing to collect; instead, it sets the scene for conducting a primary study. So a scoping literature review is either a precursor telling us there are enough studies to undertake a systematic review, or it shows there is a big gap in the knowledge about that particular question.
So then we need another primary study. When it comes to a systematic review, how do we make it robust compared to scoping? There will be a very rigorous search, and there are many more stages to a systematic review, which we will go into; and then you collate the evidence, which will ultimately inform our practice and our policy. Okay, so this is to recap what I've been saying so far. Suppose, then, we decide we are going to do a systematic review. Back in the day when I was a student, you would go to the library and look at journals, pick up the papers and the books, and there was a big role for books and textbooks: which one do you pick? Now, with the literature exploding around us, it's sometimes quite hard to know which way to turn in this overwhelming literature to identify the key papers, the most important papers that you need to collect together, the ones that are going to inform the policy for the country. The SIGN guidelines you look at, NICE, the Department of Health guidelines: all that evidence is based on systematic reviews. Somebody has to be robust enough to collect all the evidence behind the clinical guidelines that come from the authorities, so we need to be quite robust in getting the top literature that will inform practice and policy. How do we go about that? In the scientific world, as a systematic reviewer, and whether you become one or not, always think about PICO: P is participants, I is intervention, C is comparison, O is outcome. If you keep that in mind, we can expand the question. I always say to people: think wider to narrow your question. Here we are asking whether aspirin helps; at face value you think, yeah, that's a good question, but when you start thinking wider, it helps you to narrow the question, and we base that on what we call the PICO framework. The other things to settle, beyond the PICO, are: where am I going to search? Is it Google Scholar? We want to look at electronic bibliographic databases like MEDLINE and EMBASE, so think about that beforehand. What kinds of study design do you want to use to answer the question? If there are enough RCTs, as we saw in the pyramid, that's the pinnacle of evidence, the gold standard, so you can stick to those; but if we don't have RCTs, you can look at surveys to collect the evidence. We know the limitation of surveys, that they are self-reported, but if that's what we have, that's what we use to collate our evidence. We also want to set the time period to search, because the literature goes back to the 1940s in some journals. How do we decide? If there is a new drug that came in in the nineties or early 2000s, and you're comparing it to what your department uses as standard for arthritis, then there's no point going back before 2000, because your new drug only arrived in 2000. So think through the time period you want to set, and also any exclusions, any specific conditions. These are things to think about before we plunge into doing something called a systematic review. Okay, so let's go back and look at how we expand the question: think wider to narrow it.
If you're looking at the participants, we want to ask: okay, when we say arthritis, in whom? All adults, or do you want to include children as well? Do you want to see whether it works in men, in women? All of that needs a scientific justification, rather than just saying, no, I decided only women. But why? Why women? Do women have more arthritis than men? Then that's a scientific justification. Why not children? Because, I don't know, only 2% or 0.2% of children have arthritis; so no, we're going to stick to adults. There has to be a scientific justification for how you define the participants. Then the intervention: when you say aspirin, what do you mean? Are we thinking about 200 mg aspirin, or doesn't the dosage matter, any aspirin? We need to decide. Then the comparator: is it another drug? Or you could be comparing 200 mg aspirin versus 400 mg aspirin for improving the arthritis. And the outcomes: what is the outcome? Is it everything, or are we looking specifically at a pain outcome, an inflammation outcome, or some radiographic evidence of erosion, or whatever? We need to be clear about which outcomes we are looking at. And again, here we are being very broad in saying arthritis; as you know, there are several kinds of arthritis, so are we going to focus on a specific arthritis, or take anything? That takes us to a more focused question: does aspirin reduce pain and inflammation in adults with osteoarthritis? You could also say in women, or in men; it depends on where the gap is. If a systematic review has already been done on rheumatoid arthritis, but nobody has compared aspirin with paracetamol for osteoarthritis, then you focus your review question on osteoarthritis. I've put the next example in a different colour because some questions don't strictly follow the PICO framework, and you shouldn't try to force things into it. If you're looking at an association between vegetarian diet and dementia, yes, there will be participants: you could look at only women or only men, younger or older people, only older adults over 60 or 80. You can have the P, but there's no intervention here and no comparator, so don't try to force your research question into a PICO; think a bit more widely. It's just an association.
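To make the framework concrete, here is a minimal sketch in Python of the worked example mapped onto PICO; every value is hypothetical, taken from the talk's imaginary aspirin example rather than from any real protocol.

```python
# A minimal, purely illustrative PICO breakdown of the talk's imaginary
# example question. None of these values come from a real protocol.
pico = {
    "P (participants)": "adults with osteoarthritis (children excluded)",
    "I (intervention)": "aspirin (any dose, or a specified dose such as 200 mg)",
    "C (comparator)": "paracetamol (or another aspirin dose, e.g. 400 mg)",
    "O (outcomes)": "pain and inflammation (not satisfaction or side effects)",
}

# The focused question the framework yields:
question = ("Does aspirin reduce pain and inflammation "
            "in adults with osteoarthritis?")

for part, value in pico.items():
    print(f"{part}: {value}")
print(question)
```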
So now let's take the question. We've got: does aspirin reduce pain and inflammation in adults with osteoarthritis? The reason I've shown this as a lovely multicoloured phrase is that each part of it is a core concept. Then we dive into this exploding literature in MEDLINE or EMBASE or Google Scholar, wherever you want, because if you put in the whole phrase as it stands, you will hit millions. So we want to identify the main concepts we are interested in. If we're picking up papers, we definitely want aspirin in them. We haven't got paracetamol in the question, so it will pick up all the papers comparing aspirin with anything at all; a paper could be comparing aspirin with just physical activity for osteoarthritis pain and inflammation, and it will still be picked up, because it has aspirin in it. If you want to focus more, you have to focus your research question more. So at the moment we want papers that talk about aspirin. We are interested only in pain and inflammation; we don't care how satisfied patients are with aspirin, or whether they have bleeding or heartburn. So you need to have pain and inflammation very clearly as one of your key concepts. We want adults, because we don't want children; and we don't want just any arthritis, only osteoarthritis. So now we have the key concepts, the key words, with which to pick papers out of this exploding worldwide literature. Once we've got those, we want to find their synonyms, because not every country and not every author will call it aspirin, and sitting in one place we don't know what everybody calls it. India might call it acetylsalicylic acid; the United States might call it ASA; we don't know, but as reviewers we want to identify all the synonyms of each key concept. We want pain and inflammation, and if there are other words for inflammation, you need to identify them too. Osteoarthritis also has other names: degenerative arthritis, degenerative joint disease, which some people call DJD; we want to put all of them in. Then adults, grown-ups, and any other such words go in as well. We combine the synonyms with an OR, because it doesn't matter whether a paper says aspirin or acetylsalicylic acid or ASA. We want pain OR inflammation: if a paper talks only about pain, we want it, because we're interested in pain; if a paper talks only about inflammation, we want that too, because we're interested in both. One paper might cover only pain and another only inflammation, but because we're interested in both outcomes, we combine them with an OR, and the same principle applies to the other concepts. Then we use the operator called AND; the AND and the OR are called Boolean operators. So we combine the synonyms of each concept with an OR, and then we combine the concept blocks with an AND, because we want papers with this drug block AND pain or inflammation AND osteoarthritis AND adults, all in a single paper; that's where the AND comes in. Okay, so how does this look when you actually do the searching? You look up aspirin, and because we're also interested in acetylsalicylic acid and ASA, we combine those with an OR: we say 1 OR 2 OR 3. That gives you, say, 1,900, and you think, oh my God, that's a big number, a lot of papers. Then our next concept, pain and inflammation: we put that in and combine it with an OR, because we want both. That gives you 45,000, because pain and inflammation could appear in arthritis, in cardiovascular disease, in MI, in anything under the sun, and that's why the numbers are big. Then you put the next concept in, combined with an OR, and you've got 58,000. You want adults, so you combine those terms too, and that hits everything under the sun with the word adult in it. But then you combine all of that with an AND, saying: look, I want papers that have line 4, which is all my aspirin terms, AND line 7, which is all my pain and inflammation terms, AND line 12, all my osteoarthritis synonyms, AND line 15, which is about adults. And you hit a very neat number of 482 (these are all imaginary numbers) of papers that have acetylsalicylic acid, in arthritis, in adults, looking at pain and inflammation. So what do we do with those 482?
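As a rough sketch of that logic (OR within each concept block, AND across blocks), here are a few lines of Python that assemble the example query. The synonym lists are only the illustrative ones from the talk; a real strategy would be far longer and database-specific, with index terms and truncation.

```python
# Synonyms of the same concept are combined with OR; the concept blocks
# are then combined with AND. These term lists are the illustrative ones
# from the talk, not a real search strategy.
concepts = {
    "drug": ["aspirin", "acetylsalicylic acid", "ASA"],
    "outcome": ["pain", "inflammation"],
    "condition": ["osteoarthritis", "degenerative arthritis",
                  "degenerative joint disease", "DJD"],
    "population": ["adult", "grown-up"],
}

# OR within each concept block ...
blocks = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
# ... then AND across the blocks, so every concept must appear.
query = " AND ".join(blocks)
print(query)
```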
They might not all be relevant, so you might have to weed through them to come up with the final number, and in a proper systematic review we have to state all of that in a very transparent way, so that when your minister of health, or your department of health, or, as in COVID, your chief scientific officer looks at it, they know exactly where your evidence is coming from. When you develop a search strategy, you start with the total numbers, and then you keep excluding, because you will have some exclusion criteria as well. Perhaps you want studies only in the UK, or only in developed countries, or only in developing countries, forgetting the developed ones, because that's what you're interested in; or I might say only India, I don't care about any other country, so if there were a study in Nepal or Sri Lanka, I would exclude it. The process is based on the focused research question you have thought through, so from there you can confidently show your reader: I excluded Nepal because my research question said I'm looking only at India, not Nepal or wherever. You are very clear, and at the end of the day you show the PRISMA diagram: how you ultimately arrived at the final number of studies that will inform your practice or your policy. The PRISMA diagram is quite important.
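As a rough picture of the transparent accounting a PRISMA flow diagram gives, here is a minimal sketch with invented counts; the 482 and the 10 echo the imaginary numbers in the talk, and every intermediate figure is made up.

```python
# Hypothetical PRISMA-style flow: how 482 search hits might be whittled
# down to 10 included studies. All counts are invented for illustration.
flow = [
    ("Records identified through database searching", 482),
    ("Records after duplicates removed", 430),
    ("Records excluded on title/abstract (wrong country, population, outcome)", 395),
    ("Full-text articles assessed for eligibility", 35),
    ("Full-text articles excluded, with reasons", 25),
    ("Studies included in the review", 10),
]
for step, count in flow:
    print(f"{step}: {count}")
```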
So then what do you do? Say you've got your 10 studies that are going to determine your evidence on whether aspirin should be used in adults with osteoarthritis for pain and inflammation, because you're going to draw your conclusion from those 10 studies. As you know, these randomized controlled trials might come from different countries and different researchers, so they will be of varied quality. I wish all 10 were perfectly done RCTs, so you could just go and tell everybody: this is what we need to do. But if they differ, you need to know whether the evidence is strong, weak or inconclusive, based on the studies. So what we do is appraise the methodological quality of the included studies. How do we do that? How do you know whether an RCT is good or not so good? We don't have to worry too much, because there are validated tools called critical appraisal tools: you pick the right tool, take your RCT paper, read it, and assess whether the study is strong, weak, or inconclusive in what it is saying. I've put one in as an example. You might have heard of tools like the CASP tools or the Newcastle-Ottawa Scale; what I use a lot is the Cochrane risk of bias tool, and because Cochrane is held in such high esteem, for any trial, if you use the Cochrane risk of bias tool, you're absolutely fine. The Joanna Briggs Institute is an up-and-coming Australian institute, and they have an appraisal tool for every study design, for case-control, for cohort, everything, and I've put a link there. So with the 10 studies you've got, you make a decision: if they're all RCTs, you pick up a critical appraisal tool for RCTs, take each paper, and score it, because your evidence statement is going to rest on that. Even if you say aspirin is effective in osteoarthritis, based on the methodological quality you will add: but this was based on six good strong studies and four moderate, or six strong, two moderate and two weak. Then the people who make the decision at a very high level know exactly where that evidence is coming from when they make it policy for the country. So how do we do that? Once you take each paper, assess its quality and decide, yes, they're good enough to use, you have to extract the relevant data, because a particular study might report patient satisfaction and all sorts of outcomes, but from your research question you know we are interested only in pain and inflammation, so you extract only the data on those. You want to know the demographics: if you're taking adults, men or women, you want their age and sex, because you can compare men and women when you come to the analysis. So you need to do a data extraction that is relevant to answering your research question. I don't want to go into too much detail on this particular part; if you're interested, at some point we can go into the detail of how to do meta-analysis, or I can get a statistician to run a session on it. For the moment, the main thing to know is that when you want to pull that summary together, you are pooling the data, because it's based on the 10 studies. This is what is going to tell your Department of Health whether to commission that drug or technology for use in this country until the next review, so it is quite important: pooling the data matters. However, not all of it can be done via meta-analysis. Meta-analysis can happen only if the studies are similar; if your samples are very varied, if the studies are very different in any way, you cannot do a meta-analysis, so you cannot force a meta-analysis into a systematic review. Some people get very worried: I can't do a meta-analysis, so it's not a systematic review. That's not correct; it's still a systematic review, because you followed the stages of a systematic review. It's just that you cannot do a meta-analysis, because your studies are not similar. We do a test called the I-squared test, which tells us whether there is heterogeneity, differences between these ten studies so great that you can't put them together and come up with an answer. Also, in meta-analysis there is a random-effects model and a fixed-effect model which we can run. If there is marked heterogeneity, we run the random-effects model; and since not every study is going to be a blueprint of the others, there will be slight differences, and with slight differences we can still run a random-effects model to do a meta-analysis. But if the studies are very different, it's unethical to put them together to come up with a summary estimate. That's a judgment we always make, even as systematic reviewers who have been doing this for 15 years, or we go back to the statistician for help in seeing whether the data can be pooled using meta-analysis. Otherwise, we do a narrative summary. Either way, this tells us whether the treatment is beneficial or not, and based on that we make recommendations in terms of policy and practice.
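To make the pooling step concrete, here is a minimal numerical sketch in Python with entirely made-up effect estimates and standard errors. It shows an inverse-variance fixed-effect pool, Cochran's Q, the I-squared statistic mentioned above, and a DerSimonian-Laird random-effects estimate; in practice a reviewer would use dedicated software such as the Review Manager discussed later.

```python
import math

# Hypothetical study results: (effect estimate, standard error), e.g. a
# mean difference in pain score, aspirin vs paracetamol. All invented.
effects = [(-1.2, 0.4), (-0.8, 0.5), (-1.5, 0.3), (-0.2, 0.6)]

w = [1 / se**2 for _, se in effects]                  # fixed-effect weights
fe = sum(wi * est for wi, (est, _) in zip(w, effects)) / sum(w)

# Heterogeneity: Cochran's Q, then I-squared.
q = sum(wi * (est - fe)**2 for wi, (est, _) in zip(w, effects))
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird between-study variance, then random-effects pooling.
tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
w_re = [1 / (se**2 + tau2) for _, se in effects]
re = sum(wi * est for wi, (est, _) in zip(w_re, effects)) / sum(w_re)

print(f"Fixed effect:   {fe:.2f} (SE {math.sqrt(1 / sum(w)):.2f})")
print(f"I-squared:      {i2:.0f}% (Q = {q:.2f}, df = {df})")
print(f"Random effects: {re:.2f} (tau^2 = {tau2:.3f})")
```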
Okay, so what happens once you've done that and you've got your result? Obviously we have to report the main findings, but in a good systematic review we also discuss the strengths and limitations of the review itself. Say you didn't search multiple databases: that is a limitation, because we could have missed a good paper in PsycINFO, which we didn't search. Those strengths and limitations are important, but the 10 included studies will also have had limitations, over which we as reviewers have no control. So we can say all the studies had very small sample sizes; that's a limitation of the individual studies, not of your review. It could be read as a limitation of the review, because your total sample across the 10 studies is not big, but we report the strengths and limitations of our review and, separately, the limitations of the primary studies we have no control over. And the quality assessment I talked about, the methodological quality, tells us the strength of the evidence based on the quality of the studies. One other thing I would say: always report the direction and the effect size. The effect size will only be available if you were able to pool the data, but the direction is always important. Take the question I posed: is there an association between vegetarian diet and dementia? There's no point in saying, based on the tests you do, yes, there is an association. That is an answer of sorts, but what is the direction? Which way is it travelling? If you eat a vegetarian diet, does dementia risk increase or decrease? Don't go by the assumption that a vegetarian diet is bound to improve dementia; it could be the other way around. So the direction of the effect is very, very important, and also by how much it changes, the strength of the association: if we are able to pool the data, we should be able to say that. There's no point in just saying yes, there is an association; that is stage one, but we want the direction and we want the effect size. And how do we apply the findings in clinical practice? There are implications for clinicians and for policymakers, and sometimes they differ from the statistician's reading. If there is a statistical difference, a statistician might say yes, it is statistically significant, so there is a difference; or a statistician might say it's not important, because the improvement is only, say, 20%. But for a clinician dealing with a rare condition, improving things for 20% of patients might still matter, while it might not for the policymaker, because they look at the economic evaluation of the technology and might say: no, until it improves things for 80% of people, I'm not making it policy. So our review and its statistical values might read very differently to clinicians, to statisticians, and to policymakers, but as reviewers our job is to make it transparent and put it out there, and each will interpret it; those are the implications. Then we can make recommendations for future research. Say, recently I was doing something about mindfulness: everybody says mindfulness improves things, but nobody says what mindfulness actually is. So based on the limitations and flaws you find in the individual studies, you can make recommendations for future research. There is a standard way of reporting, if you know the stages of the systematic review that we've discussed.
We use a checklist called the PRISMA checklist, which is slightly different from the PRISMA flow diagram; don't get confused, they carry the same name. The PRISMA flow diagram shows how you got to the number of studies used in your review; the PRISMA checklist tells the reader whether you did a search, how many databases you used, and how robust it all was. So, to summarize: for any research question, think wider; that's what focuses your question. Is aspirin more effective than paracetamol for reducing pain and inflammation in adults with osteoarthritis? Be crystal clear about who your participants are, what your intervention is, what you are comparing it with, and for what outcome. Then you develop your search strategy using your inclusion and exclusion criteria. Your inclusion criteria: it has to be osteoarthritis, it has to be adults, it should have aspirin and paracetamol, and you need pain and inflammation. Your exclusion criteria: I don't want children; I don't want to go back before 1990, because in our example the drug only came in in the 1990s; I don't want rheumatoid arthritis, because a review has already been done there. With those criteria you develop your search strategy, you screen, and you select the studies that are going to inform your review. You critically appraise them, you assess their quality, you extract the data, and you analyse and synthesize, whether by meta-analysis or by a narrative summary. Those are the stages, and each one involves detail, but that's the gist of what a systematic review is. I've got some references which you can go and look at for each of the stages. As I said, each stage is quite important, because that's what builds the story, and this is why I said right at the beginning that not being able to do a meta-analysis still leaves you with a systematic review; it doesn't negate your systematic review, because you followed all the steps of a systematic review, you just could not do a meta-analysis because the studies were too different. Okay, I'll stop there so that we have a good amount of time for some discussion. Raymond, is that all right? Yeah, of course. I think there's one question in the chat box. Oh, is this the chat box I'm seeing on the side? One message. Okay. Someone is working on a systematic review for the first time, but she seems discouraged about continuing because the literature search and filtering work seems daunting and complicated. Any tips or tricks? Yeah, okay. The answer to that question is that the initial scoping is probably daunting, because you go around in circles with the words, like the ones I showed in the search, getting your research question and getting the concepts, however simple they are. See, for a proper systematic review, your search strategy will have about, I don't know, 25 or 30 statements to combine with ORs and ANDs. The one I showed is close to a proper systematic review search, and it's one of the simpler research questions; for a more complex question, just the first block could be 10 or 15 words, the next block another 15, and so on. In a proper systematic review you need all of that, combined properly, because an expert will come along and say: hey, you missed that particular word.
So you need all of that for a proper systematic review search. But in a scoping search, nothing stops you from just putting in aspirin AND (pain OR inflammation) AND osteoarthritis. Forget degenerative arthritis, because that's a rarely used word; just putting in the key words and combining them with an AND, and seeing what has been done, might solve the problem. Otherwise, if you search aspirin alone you get, I don't know, 150,000 hits, then pain or inflammation on its own gives another huge number, and you get overwhelmed. But doing the AND combination, however simple (just search aspirin AND pain OR inflammation AND arthritis, joined with an AND), will get you there. And as I also said, look for whether another systematic review has already been done. For our question, the only way you can justify doing this in osteoarthritis is if you know rheumatoid arthritis is already covered. So what stops you from searching aspirin AND pain OR inflammation AND adult AND "systematic review"? That will pull all the papers with the words systematic review in them. Then you look at those systematic reviews and see whether anything has been done in any arthritis at all; if you see a systematic review done in rheumatoid but nothing in osteoarthritis, then your scoping is done, and you've defined your question with a good justification: rheumatoid arthritis is done, so there is a gap, and I'll go and do a proper systematic review in osteoarthritis, with all these words in. And this is where clinicians sometimes help, because they know the words that are used; or sometimes, while you are messing around doing the scoping, you identify those words yourself. I hope that answers your question. Next question, about QUADAS-2: is that for reviews? No; AMSTAR is the one for appraising reviews, and QUADAS is for diagnostic studies. Risk of bias can be assessed with multiple tools, because these are for different study designs. So if you're doing a review comparing, say, ultrasound versus MRI for a particular condition, comparing accuracy, I think we use QUADAS. There's nothing wrong in doing that, but you need to identify the right checklist for the right study design. Am I making sense? I hope that answers your question. The main reason I put in Joanna Briggs is that it has everything: if you're doing diagnostic studies, it has a tool; likewise cohorts, because sometimes in your review you will get multiple study designs, unless in your exclusion criteria you say: look, I have enough RCTs, I don't need surveys. Next: is assessing quality part of a systematic review? That's what I think you mean; yes, it is. A systematic review has all these steps, and at the end we decide whether it goes down the meta-analysis route or the narrative route, but the process, the steps we take, are exactly the same. Here is where you decide whether it's a narrative systematic review or a systematic review with meta-analysis.
Until then, any systematic review follows all these steps; that's what makes it a systematic review. Your critical appraisal, your quality assessment, a robust search strategy, proper data extraction: all are part of a systematic review. Then it comes to analysis and synthesis, and it becomes either a narrative systematic review or a systematic review with meta-analysis. Your quality assessment is part of it whether it's narrative or meta-analysis, and that quality assessment has to be done using a validated tool, depending on the research question. If you are comparing two treatments using RCTs, I would look for a quality assessment tool for an RCT; if you are doing an RCT review of diagnostic accuracy, you pick QUADAS; if you're doing a review of reviews, your quality assessment comes from AMSTAR. Each one has its own tool. For non-randomized studies, you will have heard of SPIDER and things like that. These are all assessment tools that are out there, and you can use any of them, but they are part of the process whether it's a narrative review or one with meta-analysis. Am I making sense? Okay, that's good. Any other queries? I know it's overwhelming, but we need to understand the basics and the steps, and after that, each of those steps, like anything, has its own detail. How do you develop a fantastic search strategy? I took six months to learn that; after a month you get okay at it. Sometimes you do a master's and you get there. A professor, or somebody who has done it for 15 or 20 years, can do a really good search strategy in a couple of weeks, because I also go round in circles: is this the right word? I go to a clinician and ask, do people use any other word for that particular clinical condition? Then I come back, I add, and I amend, so I take about two or three weeks to develop a good search strategy. Each part, like the data extraction form and how you design it, has its details, but without the basic principles we get into a real muddle, wallowing in hundreds of papers, not knowing which one matters. If you follow this process step by step, to a certain extent including the scoping, then we are able to do a good systematic review. Yeah, I think there's one more question from the audience, about the statistics part. Oh, right, sorry, I'm behind and didn't see that: what level do you need to know to produce a good-quality meta-analysis? Okay. Meta-analysis, and data: say you picked up a study and you do the data extraction. In a data extraction you want to know everything you need to know about the participants: who are they? Then, if there is an intervention, yes, we need to know everything about that intervention. Taking the same example, because at the end of the day we want a really good answer for this comparison, I want to know everything about these so-called adults. What age? Say I have 10 studies, everybody talking about adults, but each one will have a different age group; I want all that information. I want to know everything about the aspirin: is it in liquid form? What's the dosage? Similarly for the paracetamol. How did they measure pain? How did they measure inflammation? So the data extraction gives us all the information, and from there the meta-analysis compares only the outcome.
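As an illustration of what such a data-extraction form might capture for this example, here is a minimal sketch; the field names and the example study are entirely hypothetical.

```python
from dataclasses import dataclass

# A minimal, hypothetical data-extraction record covering the kinds of
# fields described above: participants, intervention details, and how
# each outcome of interest was measured.
@dataclass
class ExtractionRecord:
    study_id: str
    country: str
    design: str                # e.g. "RCT"
    sample_size: int
    age_range: str
    sex: str                   # e.g. "both", "women only"
    intervention: str          # drug, form and dosage
    comparator: str
    pain_measure: str          # e.g. "VAS 0-10, mean (SD)"
    inflammation_measure: str  # e.g. "CRP mg/L"

# An invented example study, for illustration only.
example = ExtractionRecord(
    study_id="Smith 2010", country="India", design="RCT", sample_size=200,
    age_range="40-75", sex="both",
    intervention="aspirin 200 mg oral, daily",
    comparator="paracetamol 500 mg oral, daily",
    pain_measure="VAS 0-10, mean (SD)", inflammation_measure="CRP mg/L",
)
print(example)
```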
The number of participants and all of that information tell us how different these studies are, which informs our meta-analysis; but the outcome, how much pain differed between aspirin and paracetamol, is what drives your analysis. To actually do a meta-analysis, we have software called Review Manager, which is available free to everybody within the Cochrane Collaboration; to download it you register, as a researcher or as a student, by filling in a form. Within our university, at least in Aberdeen, and in the majority of universities, Review Manager is part of the software loaded onto most of the computer-room machines, for people to learn on and use; if not, you can download it. It will ask specifically for certain information about your outcome: for pain it will ask what your sample size is, how they measured it, whether it's a mean or a median of pain, or pain yes or no, in which case it becomes a categorical variable. So yes, you need to know basic statistics, but the software will guide you on what information you need, and you also need to know how to interpret the statistics, Abby. At master's level, or if you've already done a research project, you will have learned that level of statistics, perhaps from your supervisor. If you're going to do a project, there are, at least in Aberdeen, stats clinics; you need to know the basics, and you can take your data extraction, put it in a Word document, and go to a statistician to ask what information you should put in and what kind of test you can do. But if you do a master's and you've done a basic stats course, you should be able to manage. For a basic meta-analysis, yes, you need to know the mean and median, and how to interpret I-squared; that needs a little bit of stats knowledge. But the software, Review Manager, will take you through every step, and you can do a course on it if you want; Cochrane does videos and things like that. Abby, have I answered your question about what level? I hope so, because we do need to know what a categorical variable is and what an ordinal variable is to make those judgments. Any other questions? I know we are coming up to the end. Raymond, anything else? I can't see any other questions there. Are there any more questions from the audience? I don't think so, so we can wrap it up here. Thank you very much, Dr Pabellon, for the session today; I've definitely learned a lot. Just before you guys leave, we would really appreciate it if you could fill out the feedback form, the link to which is in the chat now, to help us improve. Yes, please let me know if anything is unclear, or what I should expand on more if I do this again somewhere; your feedback will really help me as well, and hopefully it's given you a gist of how to undertake a systematic review. Okay? Yeah. Thank you. Thank you so much. Part six of the webinar series will be held on the 29th of November, which is next Tuesday, and we have invited Professor Stephen Turner to speak on the formulation of research questions. Please look out for our promotion on our Instagram; until then, we hope to see you again.
Thank you very much again for joining us today and have a good night. Bye bye bye.