
Session 4 + 5 - Screening and Risk of Bias

Summary

Join the on-demand SRMA teaching series, where Neeraj Kumar, founder and president of the NMRA and a PhD student in Cardiovascular Science at the University of Leicester, delivers an educational session. Topics covered include screening and selection of articles for your systematic review (SR) and carrying out a risk of bias assessment. Kumar has extensive experience, having worked on over 50 projects, and provides an in-depth look at the systematic review process. The session includes how-to demonstrations of effective database searches and navigating relevant terms, offering a valuable resource for medical professionals and researchers. Upon completion of the session, feedback can be provided via a feedback form and a certificate of attendance will be issued.

Generated by MedBot

Description

Delve into the NMRA Academy Teaching Series, an enlightening and engaging educational program for those who wish to learn more about how to run systematic reviews and meta-analyses.

This series will be delivered by experts in the field and by the NMRA committee, and we will provide you with all the tools you need to carry out your own SRMA.

Join us for this 10-lecture series:

1. Introduction and refining your research question

2. Writing your protocol and selecting inclusion and exclusion criteria

3. Creating the search strategy

4. Screening

5. Risk of bias assessment

6. Data extraction and synthesis

7. Meta-analysis part 1

8. Meta-analysis part 2

9. Interpreting results and writing your paper

10. Getting ready for submission: referencing and paper formatting

Learning objectives

  1. By the end of the session, participants should be able to understand the process of screening and selecting articles for their systematic review (SR).
  2. Participants will learn strategies for conducting an effective literature search using databases such as MEDLINE and Embase (accessed via platforms like Ovid).
  3. Participants will gain knowledge of how to evaluate and select suitable articles using carefully planned inclusion and exclusion criteria.
  4. Participants will understand how to perform a risk of bias assessment on selected articles.
  5. Participants should be equipped with the skills to manage and organize the selected articles for their systematic review, including the use of export and other functions within the databases.
Generated by MedBot

Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

OK, welcome everybody. Thanks for joining us for our SRMA teaching series. Apologies for last week; we're going to cover both last week's and this week's topics today. Today we're going to chat about screening and selection of articles for your SR and carrying out a risk of bias assessment. Delivering the session today is Neeraj Kumar, who is the founder and president of the NMRA. He's worked on over 50 projects, he's a current PhD student in Cardiovascular Science at the University of Leicester, and he's a previous medic from University College London. And just to say, as many of you will know, please fill in the feedback form to get your certificate of attendance; I'll put it in the chat partway through the session for you. So thanks Neeraj, and I'll hand over to you. Fantastic, thanks so much. Yeah, so apologies again, everyone. We had a small emergency, so session four was canceled. I won't go into the details, but essentially the doctor had something come up which was a lot more important than doing the teaching at the time. Because of that, I'm going to be doing both screening and risk of bias. I'll manage to squeeze them into hopefully about an hour, but apologies if I do go a little bit over. So that's the aim for today: we're going to get through both in one. If you need to come back to anything or if there are any questions, you obviously have the recording, but as usual you're more than welcome to get in touch and email me, and I will happily answer any questions and sort out anything that comes up. As usual, I like to give people a bit of background to the NMRA. If you are new to what we're up to, the NMRA is a nonprofit we launched in 2022, and we give students and young medics a chance to build a foundation in medical research. We want to give you an opportunity to get involved, we will give you the training, and we want to give you a mentorship program; some of you will be here for that. We also offer networking and other events for skill development. Essentially, by the end of a good year of involvement with the NMRA, we hope to develop people into independent scientists. By that I mean you're able to conduct your own research studies and you're proficient with what you want to achieve in your goals in academia. Provided you're staying on top of all of that, I think that's a job well done from my end. Just to check, are the slides changing? Yeah, fine. So yeah, just to follow up on where we're at with the academic series: I'm combining screening and risk of bias, so that's sessions four and five on the left. This is the overall agenda of what we're going to be doing for the next month and a bit; essentially, we're going to be covering a start-to-finish walkthrough of how to do a systematic review and meta-analysis. We've already covered how to find a research question, write a protocol and build a search strategy, and I'm going to be picking up from there with the assumption that you know how to put together a search strategy that's relevant to your study. And then we're going to go through what to do with that.
How do you screen through the papers and how can be confident they're any good subsequently, we'll obviously go through how to extract relevant ST uh information, how to do meta analysis, how to write up a study and then wrapping up we'll talk about publication and how to get things through with peer review and submission. So in all, we will cover everything you need to know, to be able to conduct a successful systematic review and be engaged in all the steps. Um If you're here for the mentorship program, I'm probably still interviewing you guys. Um So you'll be applying this in, in the coming weeks anyway. So, um just to get off with uh where we were. So in the previous sessions, we've kind of gone through about how to find a search strategy that's gonna be appropriate for your research question. And then obviously you can search for that using um suitable databases. I just wanted to kind of touch through some practical way of doing that. Um I'm gonna attach a few screenshots and kind of show that to you. So it's, it's a bit more clear. I mean, he did a really good job of going through this stuff, but I think sometimes just seeing an example of it again after a while can help to refresh your memory and keep you on the right track. So um on the screen now, which is coming up is the layout from the database called Ovid. So that's essentially the, the, one of the main databases we use, it contains two of the biggest uh libraries that we would use for systematic meta analysis. Uh namely embase and MEDLINE. They are basically commercialized and upgraded versions of what you'd find on PUBMED. So it's essentially enormous library of over 20 million different journals and papers and it, it's very, very extensive. These go back to 1947. So most medical advances in the next, in the last 70 years and are included in. They're already. Um, but yeah, so the most straightforward thing you can do with this and I think this will keep most people happy with whether, uh, is. So I've kind of logged in and I've clicked on the advanced search uh bracket as you can see there. That's the one that's highlighted in bold. So that um essentially is kind of it. It allows you to tinker with like your, your, your boolean terms and your, your search strategy, your mesh terms with all, all, all of which is already covered in last possession two weeks ago. So here you can literally just copy and paste your entire search strategy straight in and that's just gonna straight spit out all of your answers, which is probably the easiest way to do it. Um There's no mess it, you can immediately get on with what you wanna do. Um The only complication with that is obviously when you start to attach on these limitations. So there's stuff like full text, human, remove preprints. All of these things can seem to be really valuable and they can be. But the issue is that sometimes an article that's in Ovid might not have been coded properly or the metadata. Um So for context metadata is basically the information that comes with the study about the study. So that might not have been coded properly. So sometimes something might be a full text, but there was no meta data telling over it that it's, it's a full text study. So if you click that limitation, it might just remove it because the computer doesn't know that that's a full text. And, you know, there are errors like that built into the database because naturally it's got millions and millions of articles. It's not perfect. Things do get slipped through. 
So sometimes just be careful using limits, you might accidentally miss out something that might have been a really good paper. Um Sometimes it can be easier to just leave the limits and then Tinker with that one like Excel and you can filter through what you want yourself because you can see it with your own eyeballs. Sometimes that's a little bit easier. But um I often think if you go for stuff like English language or full text, they're pretty generic. So it's never really a huge issue in terms of losing loads of relevant articles. Um The next thing that I think is really cool is if I click here on, on the search tools option, you can basically create a little map which will show you all the different to uh relevant terms. So I put in asthma and it shows you all of these different things. You've got severe asthma, you've got asthma control tests, you've got, you know, a million types of different things that are relevant to asthma. And when you're building a search strategy, uh and when you're building, you know, an extensive library of, of search terms that you will, will help you doing stuff like this. You can then obviously navigate through that and you can pick out the ones that are most relevant to you. So that will ensure that you've not missed anything. It will ensure that you've got really good depth in your service strategy. And therefore you can be confident that you're getting all the articles you want. So that's really helpful. Um You might notice here, there's this idea of exploding, which it kind of comes pre ticked, exploding is quite useful actually. Um because then it goes into all of these individual subheadings which will load on the screen in a second. There we go. Uh so that you can kind of see it can help you to target um the focus of your search. So obviously, you might wanna look at, for example, if you're looking at asthma in the context of, I don't know epidemiology, you might wanna click on ep here or potentially, you might be looking at it in the context of a side effect profile and you know, other complications in which case you go for slash as I here, sometimes these can be useful because your study may belong in a certain niche. Um however, again, the same, the same caveat supply things get miscode, you might miss articles and sometimes your study won't fit this precisely. So, it's not gospel. You don't have to use it if you don't want to, you know, you can just search through stuff. So that's completely fine. Um, it's, it's, it's one of those things where at the start, especially if this is your first project. Honestly think around with stuff, just play around with it, see how you get on. Um And, you know, you'll, you'll start to figure out what, what works for you, what level of, what level of control you're willing to do in terms of screening through stuff yourself and how much you're willing to let the, the search kind of dictate and how, how rigorous you wanna be. That's, there is a little bit of kind of personal element to that I think because there's no such thing as like a perfect way to do a uh search and they're not all, they're not all the same. If I came up with a search strategy for a paper and you did, we'd probably get drastically different results just because we're gonna put in different terms, we're gonna do different things and we're gonna try AAA different approach, but that's, that's completely normal. That, that's how it should be. 
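As a rough illustration of the kind of Boolean strategy you would paste into Ovid's advanced search, here is a minimal Python sketch of combining synonym blocks with OR within a concept and AND between concepts. The terms, and the exp .../ notation for an exploded subject heading, are illustrative assumptions rather than the strategy used in the session.

```python
# Illustrative synonym blocks for two concepts (terms are made up for this example).
population_terms = ['"asthma"', '"severe asthma"', 'exp Asthma/']   # exp = exploded subject heading
intervention_terms = ['"inhaled corticosteroid*"', '"ICS"']

def or_block(terms):
    """Join synonyms for one concept with OR."""
    return "(" + " OR ".join(terms) + ")"

# Concepts are combined with AND to form the full search string.
search_string = " AND ".join([or_block(population_terms), or_block(intervention_terms)])
print(search_string)
# ("asthma" OR "severe asthma" OR exp Asthma/) AND ("inhaled corticosteroid*" OR "ICS")
```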
Now, let's say you've done all of this, you've tinkered around and you've got all your papers. This is just the first page of the results you'll get out of it. At the bottom, you literally just scroll down a little bit underneath the search bar and you get this, and I've helpfully arrowed it for you: you want to go for the export option and save it as a CSV file. That's comma-separated values; basically it's just an Excel sheet, but simplified. Once you hit that, you can go into any reference manager of your choice. I think a lot of people in research lately have been using Rayyan. It's a really cool tool; it allows you to search through your papers very easily and it's got a lot of AI optimizations being added to it all the time, so it's very much the up-and-coming reference management tool. So that's a very quick whistle-stop tour, five or ten minutes, of going from having a search strategy to running it through a database. I've gone through Ovid; obviously you can use Scopus, PubMed and others. They're very, very similar, almost identical; there's nothing profoundly different about them. I'm happy to go through others at a later date; if you need me to, just drop me a message and we can cross that bridge. It's nothing massively complicated. But yeah, once you've exported it... so this is just an example of a review I've got ongoing in Rayyan. You literally just feed in that CSV file and it creates a little dashboard with all the papers, and then you can whittle through which ones you think are relevant. You give them a judgment directly on the app and it saves everything for you, and I think it can even create all the information for a PRISMA diagram for you; I'll go through that in a second. It can be really helpful because you can add your team on there, multiple people can work on things at the same time, and it tracks all of that, which is really useful. To be honest, I am a bit lazy; I've gotten away with doing a lot of this directly in Excel: just export the results out of Ovid and do it in Excel. As long as you're confident working through a big spreadsheet, you know what you're doing and you record what you've been working on, that's not the end of the world. Excel can also de-duplicate things for you, because it's just text; you can tell it to delete anything that's the same, same title, same whatever, since it's the same record coming up again. But I can understand it's not everyone's cup of tea, so I don't actively recommend it to everyone; it's just something I find easy because I've been doing it myself for so long. But that's just so you've seen it, you know what it looks like and you're familiar with it. Obviously, once you use it for yourself you'll understand what it takes; nothing will replace getting hands-on with it, no matter how much I talk to you about it. But yeah, screening is a very simple process. I will put up everything you need to know on the next slide.
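For the Excel-style de-duplication mentioned above, here is a minimal Python sketch, assuming an exported file named ovid_export.csv with Title and DOI columns (both names are assumptions; check your own export's headers). It collapses records with matching titles, and records sharing a DOI, into a single entry.

```python
import pandas as pd

# Load the records exported from the database (file and column names are assumptions).
records = pd.read_csv("ovid_export.csv")

# Normalise titles so trivial differences in case or whitespace don't hide duplicates.
records["title_key"] = (
    records["Title"].str.lower().str.strip().str.replace(r"\s+", " ", regex=True)
)

# Keep the first occurrence of each title.
deduped = records.drop_duplicates(subset="title_key")

# Also collapse records that share a DOI, leaving records without a DOI untouched.
has_doi = deduped["DOI"].notna()
deduped = pd.concat([deduped[has_doi].drop_duplicates(subset="DOI"), deduped[~has_doi]])

print(f"{len(records) - len(deduped)} duplicates removed, {len(deduped)} records remain")
deduped.drop(columns="title_key").to_csv("records_for_screening.csv", index=False)
```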
Everything you need to know, once it loads... that's it, the slide has loaded; this is everything, this is all you need to know. We do screening in three stages: title, abstract, full text. At the first two stages, title and abstract, you can grade a paper three ways. If it's a zero, you don't want it, it's not relevant, you exclude it. If it's a one, it's a maybe: OK, it could be relevant, but the title or the abstract doesn't have enough information, so we wait to read the next stage to tell. And a two is: yes, this is definitely relevant, include it, and you take that paper straight through to the next stage. At each step of screening, two authors do it separately and blinded; they go off on their own and do it, and once they're finished they meet up, and if there's any discrepancy or something that doesn't match up, a third reviewer comes in and weighs in. There are two acceptable ways to resolve that. Some people say it has to be by consensus, so all three of you sit together and discuss and agree that a paper should be excluded or included, because essentially one opinion has convinced the other that it's more valid. So say two people disagreed: one wanted to exclude, one wanted to include, and the one who excluded had a valid point regarding the inclusion and exclusion criteria and says, well, you need to exclude this, it doesn't fit. If they can convince the other reviewer, that's the consensus; no problem. The alternative is just by vote: you've already had two votes, one said yes, one said no, and the third reviewer just tie-breaks and says, OK, it's two to one, we're going to exclude it. That's it. Whichever is convenient to you; either is completely acceptable. PRISMA, or any other guideline, doesn't say there's one right way to do it; both are completely valid. At the full-text stage there is no such thing as a maybe anymore, because you've read the paper at this point, so you can't be on the fence. You pick one: either you've read the paper and you think it's relevant, or you've read it and you don't, and you get rid of it. So at title and abstract you have a 0, a 1 and a 2, and at full text it's just a 0 or a 1; it's either in or it's not. And that's pretty much it. Some studies will say they only did abstract and full-text screening; what they actually mean is they combined title and abstract. I think that's a bit boring, and it's also the hard way of doing it, because title screening can eliminate 90% of papers. I'm not even joking: 90% of papers will be eliminated by a good title screen, because you'll read the title and think, this is not even the same field of medicine, why is this here? These databases are so big that they're going to let in some irrelevant studies, and it takes five seconds to read a title and say, never mind, that wasn't relevant. So it's going to save you a lot of time if you chop those away quickly, because then you're only reading abstracts that are relevant. From a search of 2,000 you might only end up reading 300 abstracts, whereas the alternative is reading all 2,000 abstracts. That's very tedious, and it can take forever.
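To make the grading scheme concrete, here is a small, hypothetical sketch of combining two blinded reviewers' 0/1/2 decisions and letting a third reviewer tie-break by vote, as described above; the consensus route would simply replace the vote with discussion.

```python
from typing import Optional

# 0 = exclude, 1 = maybe (title/abstract stages only), 2 = include.
def adjudicate(reviewer_a: int, reviewer_b: int, reviewer_c: Optional[int] = None) -> int:
    """Combine two blinded screening decisions; use a third reviewer's vote on disagreement."""
    if reviewer_a == reviewer_b:
        return reviewer_a                          # agreement: no arbitration needed
    if reviewer_c is None:
        raise ValueError("Disagreement: a third reviewer's vote is required")
    votes = [reviewer_a, reviewer_b, reviewer_c]
    for decision in set(votes):
        if votes.count(decision) >= 2:
            return decision                        # simple majority of the three votes
    raise ValueError("Three-way split: resolve by discussion/consensus instead")

# Example: reviewer A says exclude (0), reviewer B says maybe (1), reviewer C sides with B.
print(adjudicate(0, 1, reviewer_c=1))  # -> 1, paper carries forward to abstract screening
```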
And the fact that you're reading so much irrelevant work is going to put you off, to be honest. So don't do it; just do title first, because it will make your life easier. And then, to prove my point that that's all you need to know: this is from a paper I published a couple of months ago. That's all we wrote; this is all we needed to write about the screening process. Three investigators did it, we did a title, abstract and full-text review, we did it on Rayyan, and when there was a disagreement we got to consensus. That's it. You write that in your manuscript and the rest is understood; it's very obvious what was done, and you can tell it was me and these two colleagues who did it. For context, you should write this in your manuscript and make sure you document which authors did that part. It's a kind of scientific accountability: when you write your paper, you want people to see what everyone in the team was doing and why that process was followed. Good scientific literature is self-documenting, in the sense that for everything you write about, you could demonstrate what you did and why. So make sure you include it, so everyone knows who did what. It also keeps you accountable, because theoretically, if someone asked your research team to demonstrate what you did, there should be a file with all three of our names on it, and we can just say, there you go, this is our screening. And if someone wants to verify it, or tried to replicate our work but got a different screening outcome, we can say, well, this is what we did; let's see how yours was different. It's never happened; I can't think of a single case in all my time doing research where anyone has asked, but theoretically good scientists need to be held accountable, because otherwise you could just publish whatever you want. So yeah, that's it: three sentences, that's all you ever need to read in a manuscript. Don't overcomplicate this step; it is very simple. Screening is an easy job: all you need to do is make sure your papers are included or excluded as per the inclusion and exclusion criteria. If you find an article in Spanish and your inclusion criterion is English only, just get rid of it; don't make it harder than it needs to be. Right, the next thing I want to talk to you about quite quickly is documenting this process. The way we do that is with something called a PRISMA flow chart. It's essentially a summary of all the articles we found from our databases, and then how we navigated through them to get to the number we included in our paper. You've got to be able to account for where those papers came from, why you excluded some of them, and what's left over. Again, I'll give you an example from one of my papers. At the very top you can see all our inputs: records we identified from MEDLINE, Embase, Cochrane and Scopus all come in. Now, MEDLINE and Embase are very similar; they have a lot of overlapping journals, so you get a lot of duplicates, and we eliminate those in Rayyan. You just press a button, it detects duplicates and removes them for you.
And then we're left with 1,138 records that we screened, and by title and abstract screening we eliminated 1,111, so we only read 27 full texts. As you can see, 99% of articles are gone by the time we get to full text, because the vast majority of what comes through from a database search is irrelevant. Unfortunately, that's just the way databases are: they're broad, but only you can make the search deep and make sure it covers everything, because you're the one who actually reads the full paper. The computer doesn't know what's in the paper; it just sees that a paper has the keywords you wanted, so maybe it's relevant, because that's all the computer is designed to do when you put your search in. You're the one who then has to read the thing and figure out whether it's relevant to you or not. So in that process of going from around 2,000 records at the start to 27 at full text and 19 in the synthesis, you're eliminating over 99% of the papers you find, and that's why you really don't want to waste time on that process; if you can whittle through it as quickly as possible, you'll be fine. In a really high-level review, like a Cochrane review, where we've put down full-text articles excluded, where my mouse is, you would also specify why you excluded them. It might be because they had the wrong study design, or they don't have the right outcomes, or whatever it is; you just list why. It's good practice to do that. Most of the time people don't, out of convenience, but if you're doing a really high-quality piece of work it's worth demonstrating, because then you can show that you didn't include these papers because X, Y and Z weren't relevant. It's going above and beyond, which is really good, but it's not a requirement; you don't have to do it, though I think it's important. But yeah, that's pretty much everything you need to know about screening: finding the papers, getting them into something like Rayyan, making sure you're reading as per your inclusion and exclusion criteria, making sure the papers have the information you want, and then being thorough and documenting all your findings. That's genuinely it. The one other thing I can think of: by the way, this chart goes in your results section; a lot of people just assume it's in the methods. It's not; this is the first part of your results. The way I like to explain this to people is that the methods section is what you will do in the future; it's all hypothetical. While you're in the methods section, everything you talk about is hypothetical. But when you get to this stage, you've done something: you found all those articles and read them. Now that you've done it, you've got to document what you did, and that is a results part, because now you're actually doing things to progress your study, and then you're going to identify all these papers and write about them. So this is a result, because you're talking about what you're going to include, which is obviously the bulk of where your data and your evidence come from.
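As a worked example of the arithmetic behind the flow chart, the sketch below reproduces the counts quoted above (1,138 records screened, 27 full texts, 19 included); the per-database split at the top is made up purely for illustration.

```python
# Sketch of the tallies that feed a PRISMA flow diagram, using the example numbers
# quoted in the session. The per-database split is an assumption for illustration only.
identified = {"MEDLINE": 800, "Embase": 750, "Cochrane": 150, "Scopus": 300}
total_identified = sum(identified.values())                        # ~2,000 records found

records_screened = 1138                                            # left after de-duplication
duplicates_removed = total_identified - records_screened

title_abstract_excluded = 1111                                     # removed at title/abstract
full_text_assessed = records_screened - title_abstract_excluded    # 27 full texts read
included_in_synthesis = 19
full_text_excluded = full_text_assessed - included_in_synthesis    # e.g. wrong design/outcomes

for label, n in [
    ("Records identified", total_identified),
    ("Duplicates removed", duplicates_removed),
    ("Records screened", records_screened),
    ("Full texts assessed", full_text_assessed),
    ("Full texts excluded", full_text_excluded),
    ("Studies included in synthesis", included_in_synthesis),
]:
    print(f"{label}: {n}")
```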
So, yeah, make sure that at the very start of your results you talk about this flow chart: where the records came from, why you excluded what you excluded, and what's left over in your study. But that's pretty much everything you need to know about screening; that's genuinely everything, and it's fairly straightforward. I think once you start to do it you'll appreciate how easy it is. I'm happy to pause there if anyone has any questions; otherwise I'm happy to take them at the end, or if there's lots cropping up I'm happy to pause for a bit. Jacob, what's going on? So there's only one question, actually, and that is: what would you say are the main benefits of Rayyan over EndNote? I don't really use EndNote; it's not my cup of tea. EndNote is something people use more for citation management, so it can integrate all your citations into your manuscript, which is a really good use for it. I don't use it for that; to be honest, I still do citations by hand and I've gotten really good at it, so it would probably take me longer to learn EndNote. Again, I'm a little bit old school. But Rayyan is really good because it records all of your decisions for you and your team in one place, and it's starting to get more AI integration, so sometimes the AI can give you an opinion on the paper itself. There's a new feature they've developed where the AI tries to read the abstract and tell you whether it thinks it's relevant or not, because it basically searches for keywords; it's like an NLP model, a bit like ChatGPT in that sense. If the abstract has a bunch of relevant words that were in your protocol or something, it will say, OK, we think this is something you should include. It's not perfect, but it's better than nothing, for sure. And Rayyan can take sources from as many databases as you want and handle thousands of citations; it's very robust. It's also very popular, so if you tell any scientist you're using it they'll probably be familiar with it, which is really good for cross-institutional working and things like that. So that's probably why I would use it. If I'm doing work just for myself I'd still probably stick to Excel, because for me it doesn't really matter, but in the modern day I think it's useful to have at least one tool like that you can share with people. Yeah, nice. Well, that's the only question for that section. Cool. OK, so that's screening done. Obviously, if there are more questions or anything like that I can come back to it, but otherwise we can move on to risk of bias. Again, this one's not too deep. The thing with risk of bias is that there are a million permutations of it for every different type of study and every different type of bias, so it can get very niche.
I'm gonna kind of stick to the basic stuff in terms of doing like a, a trial or like an observational study because that's what 99% of you guys are gonna end up getting a mentorship, but also just most medical studies, if there's more niche stuff, you want me to kind of talk through later with you and, and or something that comes up on a mentorship program, we will deal with that as it comes. Um So I'm, I'm happy to kind of go back to that stuff later, but for now let's just do the, the bare bones because this is what you'll need for the vast majority of your studies. So, um yeah, so essentially what this item, what I wanted to say was. Well, OK, you know what studies you're gonna include because you've just just finished screening. But now we need to know are those studies any good? Because the quality of your systematic me analysis and the quality of the evidence you produce is only as good as the studies, right? If your studies are all really terrible like co studies with a bunch of bias, your results from combining all of them together cannot by definition be any better. Like you are the sum of what you start with if that makes sense. So in order to assess that we need to know what our building blocks are and how good they are, so the way we're gonna do that, first off is, I think, let's have a conversation very briefly about what bias is. Um people get this wrong a lot bias. They kind of assume that bias is, you know, like you have like confirmation bias or whatever and those are types of bias, but they are themselves not bias. Um There's the way I kind of like to explain it is it's kind of like bias is if I, if I'm trying to hit a bull's eye and I miss in a certain direction, that direction I miss in is bias. But the process that caused me to veer off my target is the type of bias that did that. So like for so, and I've given the definition here. So bias is introduced by systemic error in research. It leads to an outcome becoming more selective than another uh beyond what can be expected in natural observation. So if you're expecting that, you know, A is gonna be 10 times better than B but you're only seeing it be five times better than B something is boosting up the scores for B and, or lowering them for a and the question is why? So that's why you've got to think about, are we measuring something wrong? Are we looking at something the wrong way? You know, why are we not getting a na the natural result in our experiment or our study? So an example I've put here is, let's say you've got a randomized controlled trial that is poorly blinded. So participants might know what arm of the study they're in. They might know that they're in the interventional arm or they might know that they're in the placebo arm. So you might find that, well, people in the placebo arm, they don't want the placebo to be thought of as good. So they're just gonna like underestimate how good it is. And you know, if they're gonna say that, oh it's useless. It doesn't do anything or equally, people want that drug to be better for them. So they're gonna overestimate how good it is and how useful it is. So, and then when a participant does that, the end result on the research is that we, as a researcher will see that that the treatment effect is being massively overestimated because the treatment arm looks so much better than the placebo arm. But that's not what is the natural result because actually that difference is over uh overestimated compared to what it is in real life. 
So that's that direction of movement: where the recorded estimate and the actual estimate are not the same, that movement, that deviation from the truth, is bias. And that's important, because if you have bias in the studies within your paper and then you do a review of all of them, you're going to have bias seeping through all of them at the same time, so that bias will compound, and if you're not aware of why that is and where it comes from, that's dangerous. So it's important that we study this really well and know where it comes from. The way I like to think about it is that risk of bias assessment is basically critical appraisal. Your job here is to identify types of bias and what they're going to do to the papers you include in your systematic review and meta-analysis, because, like we said, the strength of your meta-analysis and systematic review is only as good as the studies within it. So, to keep this really simple, let's cover what kinds of studies and what kinds of tools we use. I'm not going to go into the overcomplicated details, although I have unfortunately had to do that in other studies; it can get really messy depending on what you're studying, and different types of studies have different risk of bias tools. But the most obvious use cases you're going to find are RCTs and observational studies. RCTs are the most studied; they probably have the most robust tool as well, the one that's most popular and most commonly used, which is the risk of bias 2 (RoB 2) tool from the Cochrane group. I'll break these down; I've got a bunch of screenshots explaining how we use them, how we'd apply them and how to portray them within a paper, so I'll talk through all of that and it should hopefully start to make sense and become clearer. So that's the first half. The second half is for observational studies, where you could use the Cochrane one, which is called ROBINS-I, the risk of bias tool for non-randomized studies. That one's a little bit clunky to use; the easier tool that I like is the Newcastle-Ottawa Scale. It's more manageable to do, and to be honest, in terms of the questions it's not that dissimilar, so whichever floats your boat is completely fine; they're similar things. But yeah, let's start by talking through RoB 2 briefly. RoB 2 has five main domains for identifying bias, the five biggest sources of bias they came across. The first one is randomization, which makes obvious sense: if your sample isn't randomized, then you didn't even do an RCT properly, so that's obviously going to cause bias. The second one is deviations from the intended interventions; this one is basically about blinding. People can get too bogged down in the details, which I will get into, because there are two different types of deviation, but essentially it's: are you in the right group all the time, and does that affect things? Then missing outcome data. That's kind of the oldest trick in the book with trials: things get lost, people get missed out of the trial.
What do we do? Do we just not report them or do we just do half an analysis? Um And, and just kind of try and continue. So that's intention to treat by the way, um measurement of outcome, like if you measure different things in different patients in different, it can get really ugly. You, you wanna show that you're consistent, you wanna show that you're measuring the same thing every time and then selected selection of the reported result. Again, if you have 1010 vri ables and only three of them are significant, you just happen to report those three. Well, all of a sudden your, your patients are gonna look very sick, but that's not necessarily what was true because the other seven were no different. So what you're doing is by artificially only selecting to report some outcomes, you can start to bias how the comparison looks between the two and that's not fair. And that's gonna introduce bias because you're literally moving where the bulls eye is and just saying you hit an arbitrary target, which is obviously gonna cause bias. So depending on what you find in the text and you have to answer this only depending on what you find in the text. Uh There are five responses so you can say yes, potentially, yes, no, potentially do and not enough information. Um The Cochrane are really, really good with this stuff. They basically break down everything for you when it comes to um how to do the rob two tool and what to fill in. And they even kind of tell you like, OK, bias from randomization seems like a very abstract concept, but they'll tell you what to look for in the paper as well. And it's done in the form of really helpful kind of questions that you can answer along the way. Um So I'm gonna kind of peek out, I I've put a bunch of screenshots of the guidance from Cochrane in here and I can talk through that with you and then it will start to make sense of what you're looking for. Once you can identify these things within a paper, it will really go click together. So if you're doing mentorship, this stuff will become more obvious. But uh yeah, so to start off with, this is the uh the, these are all screening the um template from the Cochrane handbook uh and the the tool, the rob two tool. So you can literally see this being used. And um so, so when you look at randomization, the main thing you wanna look at is the allocation sequence. So essentially this should basically just be a bunch of random numbers. And this should, you know, it should be a bunch of random numbers that dictate whether a participant ends up in the control group or the intervention group. And nobody should know what this is. The patient shouldn't know. And especially like the the doctor who I was treating them should definitely not know because if there's a problem with this, what you're gonna find is like what, what 1.3 suggests is that there will be differences in the baseline cos theoretically, right? If everyone gets allocated as per pure chance there isn't, you could be in either group and you'll never know. So because of that, you have to automatically assume that in a perfectly random world, the baseline characteristics, age gender, everything in both groups should be perfectly equal because anyone could end up in any group. So there's no, there's no chance of a difference. But if there is an obvious difference, if you're finding that like all the young people in one group and all the old group are, then clearly someone's tampered with it. There's something that's gone wrong there. 
So if, if there's something obviously kind of causing that discrepancy, you'd say, well, hang on. Yes, like there's, there's gap here and you can put down in the comments and say, well, the baseline characteristics are like massively skewed. So in the text, you literally are gonna look for these questions and just look for the answers. And if they're there, you write that down. Um and then as a result of that, you give a risk of bias judgment. So your risk of bias judgment is done by domain. So 1234 and five all get a risk of bias judgment. And because of that, you're gonna give the study an end judgment as well. So in this case, let's say you said allocation sequence is random, great. It's random and nobody knew about it. So yes and yes, great. That's really good. And then you're gonna say, well, did, was there any baseline differences between the groups? Well, you look through the tables i in the paper and say, well, there's nothing obvious. So you say no. So in that case, you would say, well, this looks really good. There's nothing obviously wrong. So we're gonna say low risk of bias and then the optional stuff. Direction of bias doesn't mean anything because you just say, well, N A there's, we don't think there's that much bias and you leave it there and that's your domain one done. Like this can be done. You know, you just, it, it will be at the start it in any trial. This will be like at the very start of the method section, they'll tell you trial, you know, trialists were randomized by 1 to 1 ratio under such and such protocol. And they'll just tell you immediately, this will be like the first paragraph or the second paragraph of the method section. It won't be like lost in the in detail somewhere. But the difficulty with trials is that because of brevity, they don't always record this stuff, they'll just kind of be like, oh it's in the protocol somewhere and yeah, you can go check but it's extra, it's extra work. So that, that's where you gotta be a bit mindful. So that's domain one, domain two. There's two types of these ones, one in gold, very fancy, but um these are very notably different only in one way. So the gold one is deviations from the intervention effect of adhering to intervention. So this is for something where the intervention is ongoing. So maybe it's like an exercise regime or it's something that something where you have to stay in the same group of like you're either in the control group for a really long time or you're in the intervention group for a really long time. And in that case, uh is there gonna be an issue where like your blinding fails halfway through or like it becomes obvious that you're not adhering to that intervention? So that's where you use the gold one. Um you, the, the, the other one, the normal, the one, this one is called effective assignment to intervention. So that's, you get assigned to a group once and I don't know, you have a surgery or something and that's it. You're done after that. What group you're in? Like the assignment is just a one time thing. So, um yeah, again, here, the main questions are 2.12 0.2. And then from there, we kind of fit in what you are. So if 2.1 if participants were aware of their signed intervention, that means they weren't blinded. And if carriers and people delivering were it means that things they weren't blinded. So in a double blinded trial, you should say no to both of them. If it's a single blinded trial, you're gonna say yes to the one that was uh wasn't blinded and no to the one that was. 
And if it's an open label trial, there's no blinding at all. You're gonna say yes to both. If you say yes to both, then the question is, well, does a lack of blinding lead to deviation? And does that affect the outcome maybe it does. That's a good, that's a cause of bias. And then was there an appropriate analysis to estimate the effect of assignment intervention? Um Most of the time? Not really. So, uh you know, this is where a lot of studies start having bias or like at least some concerns because very few studies do both double blinding and analysis to uh to cover it's just too much effort and it's challenging to do so a a lot of studies fall through the cracks here. But again, this stuff is stuff that you will be obvious in the methods fairly early on in the methods. They'll tell you like the type of blinding they did how they pulled it off. And I in the methods section when it comes to analysis and like data reporting, they will tell you, you know, what was the analysis used to estimate this? So it should be quite apparent to you reading the method section that this stuff was true or not. And then your risk of bias judgment comes naturally from there. This is uh if you read this, they're almost the same. Um The only difference here is they kind of said, you know, more important non balance across intervention groups and then were the failures in implementing that intervention. So potentially, you know, nonprotocol interventions can be introduced, but that gets really complicated because you don't know what those are. You don't know if they're mentioned, they're basically, it's kind of improvising along the way. At that point, you're already off the track. Something's gone wrong. Blinding is broken. So, or maybe someone had like a medical emergency or something and you're basically just filling in the gap by adding in something new to the trial. So that, that's kind of already something's gone wrong. So it's not very commonly reported or seen in trials. So don't worry about it unless you see it. But again, Cochrane are really good. They have like extensive guidance on this stuff in the um guideline books. So if you need it, you can always find more information, but it rarely, rarely comes up the main questions, like I said, 2.12 0.2 and then 2.6 regarding analysis, that's kind of the bulk of it what you need to cover now. Um 2.3 just sorry, three, I should say domain three is missing outcome data. Uh was all the outcome data available? It's a pretty easy question. Um You just say either yes or no and almost entirely, you're gonna say yes, you know, like almost most most modern trials, the vast majority of them, you'll get data for everything. And if they don't tell you where, if they don't have data for something, for some reason, they will kind of report that to you and be like by the way, we don't have a study for these guys cause they dropped out of the trial or, you know, whenever someone died something they'll tell you. And, um, that should be fairly obvious. And then II cos if you, if you have this, you don't need to answer the rest of them. So it's automatically like a low risk of bias. You only go into 3.23 0.3 from four if you need to. And you're finding that. Well, yeah, now that we know that there's missing data, like how much was missing, how can we identify that was a big deal? Did, was there any attempt to account for in the analysis, like an intention to treat model or something that's when it gets complicated? 
But the, the vast majority of trials you just say, yeah, pretty much everything was there and it will be so they, they're fairly straightforward. Um But again, it should be obvious in the method sections when they talk about their data analysis, they will tell you what their plan is to do with missing data. They will cover this quite in because it's often a requirement for funding to talk about. Well, OK, you spent all of our money collecting this data. What if you miss something? So it's, it's very rarely like not reported at all. Cool. Um Then the next domain domain for risk of bias in the measurement. Um This one's a bit subjective, I mean, was the method of measuring it appropriate? It depends if, if they did something, it's obviously just not like industry standard, you'd say, well, hang on. Why did you do that? Or maybe they just didn't mention it. They just neglected to report that again. It might be like for brevity in the results section or something, you gotta go hang on. Why did you just not mention it? It was it because there's something you did wrong or it, because you just don't wanna, don't wanna waste words on it, you know, tight word count. It depends. Um could measurement of ascertainment, the outcome have to differ between intervention groups. Again, that's something like it's a bit subjective. But if, if, if, if it's very obvious that the same protocol has been followed, stuff has been done the same way, it's in the same hospital or something, then it's not to worry about, it's gonna be the same. Um If no to both, then you kinda think we'll hang on. Were outcome assessors aware of the intervention? Like, was that why? And could uh could the assessment of the outcome be influenced by that because if you know what the outcome is and you know how you're measuring it, then, you know AAA biased res researcher could just influence it to what they, whatever they want it to be, right? They can just maneuver things. So that's where you've got to think about. Hang on. Like what, what's going on there and why was that done a certain way? I need one into your cool uh I'll tell myself in a second. OK. Risk of bias judgment. So yeah. And then the last one, this one's very straightforward um was the data produced in, as per a pre specified uh analysis plan that was finalized before? So, did they have a protocol that was good? Yes or no. Uh Did they pick a result that may have come from multiple within the same domain? Often it's very obvious. It's, it's, you know, if, if there are like 10 ways of reporting the same thing and they chose a weird one, why did they do that or, you know, something like that? But again, these are the 2nd 25.2 and 5.3 are subjective, you know, it's, it's depend, this is why you should also be aware of like how things are done in that industry. If I in, you know, if you're looking at like a cardiac trial and there are set ways of things being done with cardiac measurements, be familiar with how that's done because then you won't be caught out by something that doesn't look right. But to be honest, again, for funding reasons and for, you know, a lot of scientists do things the same way for convenience. So this stuff is not that often done weirdly. Um a lot of the time these are very routine and having a protocol that like prespecify analysis is also very common. Uh Again, a lot of funding isn't even given without it. So this is very rarely like a a hill to die on in terms of a cause of bias. 
But it can be done very poorly, as you can imagine: if there's just no protocol and everyone's improvising, it can go horribly wrong, so make sure you double-check this. But yeah, if you've done all of these, you should have a risk of bias judgment for each domain, and then you basically want to go with a consistent overall call, or alternatively you can go with whatever the worst domain is. So if, let's say, three of the domains are some concerns and two of them are low, I think you say the risk of bias is some concerns for that study, because more often than not you have something to worry about. And if there's one domain that is high risk of bias, that whole study is condemned, unfortunately, because once there is one area that introduces bias, the rest of it doesn't make up for it; if you're steering away from the bull's eye, you're going to miss anyway. So once one domain is high, you're almost always going to give a high overall judgment to that paper. Now, obviously these are really complicated tables, and as you can imagine, doing this a hundred times for a big meta-analysis can be a pain, so you've got to have a nice way to report it that will fit in your manuscript. The way we do that is with a tool from the Cochrane and RoB people called robvis; it's really helpful. I've put a screenshot of it on the screen right now. It's basically a really concise way to get all your risk of bias judgments into these kinds of tables, covering domains 1 to 5 and an overall judgment, so it very neatly contains everything you're talking about in one place for all the studies. You can put that in the supplement of your manuscript, talk about it and say, well, out of the 15 papers in my analysis, 10 were high risk of bias and five were low, and then refer readers to the table to demonstrate where that bias comes from in each domain. So it's a very nice way of summarizing all that information. robvis is really good because you can literally just feed in all the data: if you click on the upload data tab at the top, it takes you to this screen, where you click on the tool you want, and then you can feed in an Excel sheet. If you put in all of these columns, study, D1, D2, D3, D4, D5 and overall, you can just make a simple spreadsheet with everything written down and it will create the figure for you. Alternatively, on the left, and it's cut off on my screenshot, just under where it says 'if you wish to explore the app's functionality', at the very bottom left there's an option to edit manually, and you can just click edit and put them all in by hand if that's easier, if your Excel isn't working or something. So robvis gives you both ways of doing it, which is really helpful. And then you get your RoB 2 figure and you insert that straight into your manuscript, and if you only have trials to assess for risk of bias, that's you done.
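Here is a minimal sketch of assembling the study / D1–D5 / overall table described above and deriving the overall judgment with the worst-domain rule from the session (any high makes the study high, otherwise any some concerns makes it some concerns, otherwise low). The study names and domain calls are hypothetical, and the resulting file would still be uploaded to robvis (or entered manually) as shown on the slide.

```python
import csv

# Hypothetical per-domain RoB 2 judgments for a few made-up trials.
domain_judgments = {
    "Smith 2021": ["Low", "Some concerns", "Low", "Low", "Low"],
    "Patel 2019": ["High", "Low", "Low", "Some concerns", "Low"],
    "Jones 2022": ["Low", "Low", "Low", "Low", "Low"],
}

def overall(domains: list) -> str:
    """Worst-domain rule described in the session."""
    if "High" in domains:
        return "High"
    if "Some concerns" in domains:
        return "Some concerns"
    return "Low"

# Write a simple table in the Study / D1..D5 / Overall layout the session describes.
with open("rob2_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Study", "D1", "D2", "D3", "D4", "D5", "Overall"])
    for study, domains in domain_judgments.items():
        writer.writerow([study, *domains, overall(domains)])
```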
Some people recommend that you should only do risk of bias for trials, as in, only include trials in your meta-analysis, because they say that's more methodologically pure and trials are most likely to be the best evidence. I don't agree with that; I think you're missing out on a ton of data if you do that, and you're going to know what the bias is in everything anyway. If you really want to, you can do a trials-only analysis and then an observational-studies-only analysis, but I think excluding them entirely is a bit reductionist. Again, people have their own preferences; groups like Cochrane will often not bother with observational studies and go trial-only, because they think the purity of what goes in dictates the purity of what comes out, and they're very strict on that. So it's personal preference, combined with how much you want to include in your meta-analysis and how important you think your results are; there's a little bit of leeway there. But that's enough about risk of bias for trials. For the observational side of things, it's a lot simpler; it is much easier. There are only three domains you want to know about here: selection, comparability and outcome, and instead of doing the whole yes/no/maybe kind of thing, we just give a number of stars depending on whether a study meets the criteria in each domain. Again, Newcastle-Ottawa has a really helpful PDF document that explains everything, so it makes your life very simple. I'll put all the screenshots on the next slides so you can see it all, and I'll talk you through it. So this is the first one when it loads up. Selection is broken down into four questions for cohort studies, and you give one star for every question within the section. For comparability there's only one question, but there are two stars available for it. So, selection: with a cohort study you have an exposed cohort, so out of an overall population you make a cohort of people who are all exposed to whatever thing you're studying, and then you have a cohort of people who don't have it, and then you compare and study the differences between the two. And because it's an observational study, you're not going to do an intervention; you're just going to observe and check through all the details. So in this case, the things that get you the stars are the only ways you can score. For representativeness of the exposed cohort, we need to see that it's either truly representative or somewhat representative of the population with whatever the disease or exposure is, and if we don't believe that's the case, we can't give it a star. And then the non-exposed cohort needs to come from the same community as the exposed cohort, because otherwise you're not comparing like for like: if I compare a bunch of Japanese cardiac patients with healthy British patients, that's a useless comparison, because they're not the same and their starting points are not the same.
So the comparison won't mean anything; you've got to compare like with like, especially because you're not able to do any intervention here. Ideally, the only thing that differs between the cohorts is whether they have the exposure or not, because you can't intervene and change anything else. So picking the right cohorts in a cohort study will make or break the study.

Ascertainment of exposure has to be something replicable, because otherwise you introduce bias: people might say different things, or the measure might not be reliable. Secure records, such as medical records, earn the star, or alternatively a structured interview, where the participant has sat down with an independent third party and given consistent answers. Written self-report doesn't count, because someone can give the same report five different ways; that increases bias because the result won't be consistent, and you'll end up skewing your result. You can't verify its validity either.

The last selection item is demonstration that the outcome of interest was not present at the start of the study. If you're studying something like disease progression, you need to know that the outcome wasn't already there at baseline, because otherwise what's the point of your study? You're trying to identify a change over time, but you can't see a change if it was already there. So again, you introduce bias, because instead of moving towards the bullseye you were apparently already on it, and the whole study stops making sense. You need to see something in the paper that shows things changed over time, for example that the baseline characteristics were distinctly different from what they were at the end of the study.

The next domain is comparability. As you'll notice, this is just one question, so it's very easy: comparability of the cohorts on the basis of the design or analysis. The study should control for the factors that determine whether the cohorts really are the same or not, ideally at least two (if it controls for nothing, something has probably gone wrong), and you give one star for each factor controlled, up to two. This is the only place in Newcastle-Ottawa where you do this; everywhere else it's one star per item. For example, going back to ascertainment of exposure, a secure record or a structured interview doesn't get two stars, you get one star for either, because that's simply how the exposure was measured. Here, though, you need to show that one factor was controlled and then another factor was controlled. That might be that all the patients were matched on age and gender, or that they had the same comorbidities and something else. As long as you can verify that two important factors were controlled so that the cohorts are as close to identical and as comparable as possible, you can give both stars.

The very last domain is outcome, and this one is very straightforward, common-sense stuff. For assessment of outcome, you need something that demonstrates the outcome in black and white, so record linkage or a hospital record is unarguable.
If the record says the patient died, they died; it's set in stone. An independent blind assessment also earns the star: the outcome was assessed by someone who had no way of knowing which group the participant was in, so they can be trusted to be unbiased and independent, with no partiality to one group or the other, and you can be confident that if they repeated the measurement they would get the same result.

Next, was the follow-up period long enough for the outcomes to occur? This obviously depends a little on context. For example, if you're studying a surgery and you want short-term hospital outcomes, those are often assessed at a 30-day interval, the in-hospital period. If the study only has 10 days of follow-up, that's not enough and you'd say no; but if it has six months of follow-up and has also done a separate 30-day analysis, you'd say fair enough, they had six months of data and happened to do a one-month analysis as well, that's fine. So depending on what the outcome is, the question is whether it's reasonable for it to have occurred within that time, and you have to make that subjective call.

Then adequacy of follow-up. The ideal is obviously complete follow-up, where you follow up everybody. Alternatively, you can still give a star when only a small number of people are lost to follow-up, because you will miss people: they might move, or die, or things happen. This is slightly subjective. I've worked on reviews where we said we'd allow 5% loss, because you can't control that, but if it was much worse than that we wouldn't give the star. Five percent is a reasonable amount to lose: out of 1,000 patients you lose 50 and end up with 950, fair enough, no problem. But if you end up with 600, you'd be asking why almost half the sample disappeared. So you've got to question, within reason, what's acceptable and what isn't. This is also something the paper should cover in its methods: they should tell you how they tried to follow people up, how long they followed them for, and what they did when they lost someone. A good study will describe exactly how it did this in its methods; it will have a section on where the outcomes came from, what the duration of follow-up was and why it was that long. If it's not in the methods, it will be in the baseline characteristics in the results section. Within those two sections you'll find all the information you need for the risk of bias assessment, because fundamentally that's what you're measuring: whether they said what they were going to do, and whether they actually did it. That's how you identify bias.

This slide is from a study I'm doing, just to show how you would present this within a manuscript: you put down the stars as applicable for each study.
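To make the star-counting concrete before we look at the totals, here is a minimal Python sketch of how the three domains add up for each study. The domain maxima (4, 2 and 3 stars) are as described in this session; the cut-offs used to translate a total into low, moderate or high, and the study names, are illustrative assumptions rather than anything fixed by the Newcastle-Ottawa Scale, so agree your own thresholds in the protocol.

def nos_summary(selection, comparability, outcome):
    # Domain maxima from the session: selection up to 4 stars, comparability up
    # to 2, outcome up to 3, giving a maximum total of 9.
    assert 0 <= selection <= 4 and 0 <= comparability <= 2 and 0 <= outcome <= 3
    total = selection + comparability + outcome
    # Example cut-offs only - they are not part of the scale itself.
    if total >= 7:
        risk = "low"
    elif total >= 4:
        risk = "moderate"
    else:
        risk = "high"
    return total, risk

# Hypothetical cohort studies with (selection, comparability, outcome) stars
studies = {"Cohort study A": (3, 2, 2), "Cohort study B": (2, 1, 2), "Cohort study C": (1, 0, 2)}
for name, stars in studies.items():
    total, risk = nos_summary(*stars)
    print(f"{name}: {total}/9 stars -> {risk} risk of bias")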
By the end, each study is scored out of nine, because four plus two plus three is nine. If a study gets stars for pretty much everything, its risk of bias is low; if it misses things and doesn't earn many stars, its risk of bias is high, so you adjust the judgment depending on the score. To be honest, you won't give a low risk of bias to many studies, because it's just not common to see studies get everything right, so unfortunately most of them will be moderate, if not worse. Something like this collates everything into one small table, and again you can put that in your manuscript somewhere and discuss it: you can say, for example, Barban et al. were rated moderate because of X, Y and Z, and talk through why certain studies were marked down, which is really helpful.

I think that's pretty much everything I had to cover today. If there are any questions or anything you want to know, I'll be around for a few minutes, and obviously I'll take questions later as well. Chances are a lot of you will be doing this in the mentorship scheme, so at that point you'll get to see for yourself what it's like to try this in a real study. Thanks for listening, guys.

Brilliant, thanks Neeraj. There aren't any further questions yet. Can I just remind everybody to please fill in both feedback forms to get your final certificate? I've put them both in the chat. OK, I don't think there are any more questions. Yeah, I got that feeling too. Are you happy for me to end it there? Yeah, go for it. OK. Yeah.