
Levels of Evidence and Research Biases


Summary

This session is intended for medical professionals interested in deepening their understanding of research methodology. The speaker breaks down the hierarchy of evidence in research practice, focusing on the healthcare sector and especially the NHS. Various types of research biases are discussed with practical examples intended to enrich the learning experience. The session encourages attendees to consider the crucial role of bias mitigation in their investigative work. The techniques and insights shared are based on the speaker's own experiences and those of her colleagues, providing first-hand accounts of research practices in medical settings. Additionally, the difference between Systematic Reviews and simple Reviews in a research context is emphasized, highlighting Systematic Reviews as a more reliable source of data. Attendees will leave this session with a clearer understanding of evidence-based medicine, research biases, the distinctions between different types of research, and strategies for minimizing research bias.

Generated by MedBot

Description

Welcome to Session 5 of our 'Research in the NHS: Teaching series for IMGs'

This teaching session for medical professionals will provide an introduction to the hierarchy of evidence and the types of research bias.

To stay up-to-date with upcoming teaching sessions, please follow our page.

Learning objectives

  1. Understand the hierarchy of evidence as applied in research within the NHS.
  2. Learn about different types of research biases and how they can impact the interpretation and application of evidence.
  3. Develop strategies to minimize research bias in their own practice and understand the implications of bias in research studies.
  4. Differentiate between systematic reviews and other types of reviews in research literature, and understand the implications of their different levels of rigour and reliability.
  5. Learn to categorize research into qualitative and quantitative forms and understand how these categorizations can impact the interpretation of evidence.


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Today's session will be in two parts. In the first half I will talk about the hierarchy of evidence that we commonly use in research in the NHS, and I will also touch upon the different types of research bias. Hopefully I can make it a bit more interesting by giving you some examples of each type of bias, and then we can talk about what we can do as researchers to minimise research bias as much as we can.

Just a bit of a disclaimer: any opinions, thoughts or tips that I share in this series are based purely on my experiences and those of my colleagues doing research in the NHS, and they do not reflect any guidelines set by the NHS for junior doctors.

So, the first part: what is evidence? In the NHS, as doctors, we are encouraged to practise evidence-based medicine; I touched on what evidence-based medicine is in the previous session. But what exactly do we mean by evidence? Evidence is defined as the best research evidence, usually found in clinically relevant research that has been conducted using sound methodology. And what is clinically relevant research? It is your case series, case reports, cohort studies, systematic reviews, practice guidelines and protocols. These are the different types of evidence available for us as doctors, and for policymakers in the NHS, to look at when deciding on the best treatment for a particular disease.

But let's go back to the definition: the clinically relevant research, which is the evidence, has to come from a procedure conducted using sound methodology. There are many different methodologies, and different methodologies give us different levels or qualities of evidence. What does that imply? It implies that not all evidence is equal: just because you do research and discover something does not necessarily mean the finding is authentic or reliable.

With that thought in mind, here is a very famous pictorial representation of the different types of research out there. I touched on this in the previous session, but I want to discuss why it is drawn as a pyramid and what that is meant to imply. Essentially, all the types of research we can do can be arranged in a pyramid, and the reason it is a pyramid is that as you move towards the top, the quality of the research increases.

Can everybody see the slide I am presenting, or are we stuck? Sorry about that, a bit of a technical glitch this morning. Can everybody see the pyramid now? No? OK, let me quickly change things around. Can everybody see that now? All right. So, a quick recap: evidence means the information you discover through your research by using a sound methodology, and these different types of evidence correspond to the different types of studies. If at any point my slides get stuck or there is some technical error, please do notify me in the chat box.
So that is what evidence means: finding out information through your research using a sound methodology, and the different types of evidence correspond to the different types of studies. Because not every study gives you equally good evidence, the Centre for Evidence-Based Medicine, an institute at Oxford University, created this pyramid. Whatever sits at the bottom of the pyramid is likely to be low-quality evidence, and the things at the top, the clinical trials, the systematic reviews, the meta-analyses, give you high-quality evidence. They are the sort of evidence that I as a doctor, or a treatment policymaker or guideline writer, would want to rely on.

So why do case series and cross-sectional studies sit at the bottom of the pyramid, whereas trials and systematic reviews sit at the top? It is all based on bias. As you work your way up the pyramid, the amount of bias reduces dramatically. Trials, systematic reviews and literature reviews have a very rigorous methodology: the study design is set up to fight research bias at every stage. Whereas something at the bottom, like a case series, is essentially me writing a report on a particular patient or group of patients and giving my own opinion on them. When you look at treatment guidelines, everything in them comes from the top of the pyramid, from clinical trials and systematic reviews. So this pyramid is based entirely on the amount of bias that can be present in a particular type of study.

That said, evidence can also be divided into primary and secondary evidence. Primary evidence is original research, something you do from scratch. For example, in your trust or hospital you will probably be working by yourself, with your consultant, or in a team of doctors, doing things like case series, following up patients, collecting data and writing it up; that is all primary research. What is becoming very popular now is secondary research, the things at the top such as systematic reviews. A systematic review is essentially gathering all the published papers on a topic, summarising them into your own paper, and trying to answer your research question based on that combined evidence.

There is another point I want to mention. Whenever you are reading papers or doing a literature search, you will come across two terms that are very commonly used interchangeably, and it is important to understand both, because they are completely different from one another. A systematic review is what sits at the top of the pyramid: systematic reviews follow very rigorous protocols and have their own guidelines.
The types of study you choose to include in a systematic review are governed by strict protocols and inclusion and exclusion criteria, and papers are collected from a number of databases. Whereas if you read a paper that is simply a "review" of something, it is usually just a summary of a topic and it is much more biased: the researchers choose what they want to talk about and do not include every paper out there. So when you are reading through papers, it is important to check whether something is a systematic review or just a review, because a plain review sits right at the bottom of the pyramid, whereas a systematic review sits right at the top. And if you yourself are doing a systematic review, it is important that the papers you gather lean towards systematic reviews, because they tend to have more reliable data and more robust methods.

Let me give you an example of a review versus a systematic review. The review, the first paper at the top, is on the emerging role of the gut microbiome in cardiovascular disease. What the authors have done is gather a couple of papers and look at whether there is any link between the flora and microbes in our gut and the way our heart functions and the way we develop heart disease in the future. That is basically a couple of authors coming together, reading up on a particular topic, giving their own opinion and adding in some facts from different sources. It is not a completely robust piece of evidence; it is just a review.

I think I am still having issues with the slides, so let me quickly share my screen instead; sorry about this, there seems to be a technical glitch today. I hope everybody is able to see my screen now. If anything is unclear, please feel free to interrupt me or ask me any questions, that is completely fine.

So, to continue from where I left off with the difference between a review and a systematic review: like I said, it is very important to know the difference between the two, because it is quite easy to fall into the trap of not reading carefully. The example of a systematic review that I have put up looks at the relationship between physical exercise and whether someone is likely to develop type 2 diabetes in the future. The authors have gathered different published studies, compiled them into their own paper, looked at what each paper found, and tried to see whether a pattern emerges: does exercising reduce the chance of developing type 2 diabetes, or is there no link between exercise and type 2 diabetes at all? So it is very important to distinguish the two, because plain reviews sit right at the bottom and carry very little weight.
If you are someone who just wants to get an idea of a particular topic, you can read a review and get that overview. But if you want to publish a paper, or make a difference to patients' treatment, a plain review might not be the ideal type of research, whereas a systematic review is a very powerful type of research.

Now, research can also be divided into two forms: qualitative and quantitative. Qualitative research does not use numbers; it uses "soft", descriptive data, things like your case studies and case series, where you are basically observing how the patient reacts. Qualitative research is often done in psychology or psychiatry, where researchers observe patients' behaviour on a particular drug or type of treatment, and they usually collect this through interviews. So if you are working on a qualitative study as a junior doctor in the NHS, you will be asked to sit and talk to patients, observe their behaviour, go through questionnaires with them, or call them up for a telephone interview and ask how they are feeling after taking a particular drug. That is how you extract soft, qualitative data. Many people say it is not so authentic, that it does not have a lot of value; but honestly, right now I think the NHS wants qualitative data more than quantitative data, because at the end of the day it is about how patients feel and how they are doing on that drug. Quantitative research, on the other hand, is about numbers and statistics: I want to know what percentage of patients felt better after taking the drug, and what percentage developed side effects. It is very objective, because I am dealing with mathematics and I am going to present my results as data as well. So those are two further categories: evidence can also be classified based on the kind of data you collect.

That brings us to the end of the first half of the session, where I have talked about the different types of evidence and why they are staggered in a pyramid; and that ordering is based entirely on bias. But when I keep talking about bias, what do I actually mean? Bias is a word that can be used in almost any context: you can be biased when you are talking to your friends, or when you are making a political statement. But when it comes to medical research, it is very important to understand what bias means, because if you do not know that bias exists, there is no way you will design your research to eliminate it. In medical research, a bias is a systematic error that can occur either when you are collecting the data or when you are analysing it, in such a way that it deviates you away from the truth of the matter. I have highlighted the word "systematic" because bias is something that happens along the process.
It is not something that happens on a single day; it accumulates as you go through the research process, as you collect the data, look at the data, write it up and analyse it. It is a very systematic error. And there are many types of bias: I think there are at least 16, with subcategories within each of them, but at a junior level it is important to know the basics, and there are eight main types that we are going to touch upon today. Hopefully, if I give you an example of what each bias means, you will be able to remember it and retain it for longer.

That being said, bias is something we cannot avoid entirely. No matter how carefully you design your study, there is always going to be some element of bias. It is something we all have to understand as just part of being human: our cognitive processes and our behaviour all add to the bias that can creep into research. The best thing we can do is, first of all, acknowledge that bias exists, and then design our study, our question and our research process in such a way that we tackle and combat bias at every level, rather than brushing it under the rug.

The first type of bias I am going to talk about is selection bias, sometimes also called sampling bias. Here is a way to remember it. Suppose I want to do a study looking at the different social media platforms and how long people spend on them: I want to know whether there is any link, what the distribution is, and why people are so addicted to social media. I am going to gather a group of participants, students, young adults and teenagers, and ask them which social media they use, how long they use it for, and why. First of all, I have to recruit my study population. To do that, I go on TikTok and advertise my study, saying that I want people to take part. Now, obviously, that is going to be very biased, because I am only inviting people on TikTok: my study population is going to be filled with TikTok users, and I might have no Facebook users, no YouTube users, no users of X. So you have to understand that the way you recruit participants can have a huge impact on your results, because you are selecting people based on your own convenience. When you are selecting participants, you should try to keep recruitment as neutral as possible, or at least open to everyone, so that everyone can take part equally, rather than being very selective about it. Selection bias, then, is choosing only one particular type of participant for your study, which will give you unrepresentative, unrealistic or even incorrect results when you do the research.
So that is an example of selection bias. Any questions you have in between, feel free to put them in the chat box; I will check it now and then to make sure everything is working.

Next is response bias, which is very commonly seen when you are extracting data directly from patients, for example by giving them a questionnaire. Something we are all familiar with is the Likert scale, which is used in a lot of feedback forms. If you have attended my previous sessions and filled in the feedback forms, you will be familiar with the format: how did you like the session, rate it out of 5, with 1 being the lowest and 5 the highest; how did you like the food; how did you like our services. With that kind of measurement tool, you can introduce response bias. Let me give you an example. Suppose I am sitting at a new Chinese restaurant and, unfortunately, the food was really horrible; I did not enjoy it, but I chose the restaurant, so I will have to pay and go. Then all of a sudden the waiter comes over, puts a feedback form in front of me and asks me to rate the service and the dishes I ate today. I say, sure, I can do that, but the waiter is just standing there watching me. Obviously, being a nice person, I am going to think: I can't be honest, because the waiter is waiting, especially if she says her tip depends on the feedback form. So I will probably give really nice ratings, whereas in real life the food was bad. This is also known as courtesy bias: when participants take part in your study, they want to be kind and polite to you, so if they do not feel they are in a safe environment to give their honest views about a particular study, drug or treatment, they will give you a false reply, perhaps simply because you are standing there looking at their responses. That is response bias: putting patients or participants in a situation where they feel they have to be polite to us, so they give us inaccurate responses.

Another type is reporting bias, and I cannot emphasise its importance enough. Especially nowadays in the NHS, where having a paper is very important for your portfolio when you are applying for training or climbing the training ladder, published research is seen as making you a better doctor, and that pressure pushes a lot of research into what is known as reporting bias. An example of reporting bias is publication bias.
There is a notion among doctors and physicians that if your research does not discover something amazing, if it does not give a positive result for your research question, there is no way it will make it into the journals. Because of that, researchers can sometimes be found guilty of modifying or changing words so that the results are made to look more attractive. But even if your research did not find anything, that is completely fine; we have to accept that not every study will give a positive result. Some studies will conclude that there is no link between giving a particular antibiotic and reducing an infection, or no link between a particular operation and a better outcome in terms of mobility or function. Accepting that you do not have to have positive results for your work to be published is one way we can reduce reporting bias, and I think that is an important thing to understand as a clinician. Reporting bias, then, is changing some words or some data in your write-up to make it look more appealing and interesting for readers, so that it ends up in well-known medical or surgical journals.

Is my slide view clear for everyone, or are we still having problems? I understand there is a bit of a technical issue, so at the end of this session I will upload the slides to the website as well, so it is easy for everybody to go through them if you have any questions.

Now, the next type of bias I am going to talk about is confirmation bias. Confirmation bias is when you, as a researcher, do everything to make sure the results confirm what you already believe. For example, suppose I have a drug A that is known, because of its pharmacological make-up, to reduce blood pressure. I know this drug will reduce my patients' blood pressure and I want to try it. I gather a group of patients, give each of them the drug, and half an hour later I check their blood pressure. I find that the blood pressure has not dropped, or is even higher. So what do I do? I think: this is probably an error, I am going to discard it, because there is no way the blood pressure can stay the same; it has to reduce. I am trying to confirm a notion I already have in my mind as a researcher. So I go back the next day and give them the drug again, and again I get the same data, and I think there must be something wrong with the measurement, because there is no way this drug is not doing the job it is supposed to do.
Labelling your results as an error because "this cannot happen, I know what is supposed to happen" is exactly what confirmation bias is. Perhaps an easier way to understand it: when we were kids playing dice with our siblings, we would roll the dice again and again until we got a favourable number. That is exactly what confirmation bias is: you keep repeating things until you get the favourable outcome from your patients or from your design.

The next type is recall bias. This is very common when you are dealing with retrospective studies. For example, suppose I am looking at a group of patients with prostate cancer, and I want to go back through their medical history, 10 or 20 years back in time, to find out whether there is any risk factor that might have caused their prostate cancer. I would probably start by sitting down with each patient and asking: have you smoked? Have you worked in an environment that exposed you to a lot of radiation? When I do that, I am basing my data entirely on how much the patient can remember, and it is very important to understand that this introduces recall bias. Patients with cancer will remember everything, because cancer is such a life-changing event: they will remember the CT scan they had 10 years ago and think that the CT or MRI might have caused the radiation exposure, or that the few years they smoked might have caused their cancer. Whereas if I compare them with my controls, a healthy population, and ask whether they have had any of these risk factors, they will not remember everything; if you ask a healthy person whether they have had an MRI or a CT, they will not care, and it is natural not to remember every detail when you are healthy. That is recall bias: affected patients will remember everything, whereas healthy people may not, and may therefore miss out a detail that is important for your research. Any questions so far? Please do keep them coming in the chat box. We are almost done; we have just two or three more types of bias left.

Next is the Hawthorne effect. Suppose I am a GP trainee or a junior doctor and I have been asked to visit patients who are taking part in a clinical trial and deliver a blood glucose machine to each of them. They have taken a drug that is supposed to reduce blood glucose, because they are diabetic, and my job is to call them every morning and ask whether they have checked their blood glucose today.
And what their blood glucose level was. If I tell a patient that I am going to call at 8 a.m. to collect their blood sugars, the patient might actually do something at that particular time to make sure they have a very nice reading: maybe they will not eat that morning, or they will exercise, or fast the night before. Telling patients that you are going to record their blood sugar actually changes their behaviour, because they want to please you: you are putting all this effort into your research, and they want to give you nice results. That is the Hawthorne effect: the participant changes their behaviour because they know they are being observed. It is a very hard type of bias to deal with, because ethically you have to tell patients what you are measuring; you cannot simply refuse to tell them what you are trying to collect, because patients have to know what you are doing. But at the same time, once you tell them, there is a high chance they will change their behaviour to give you nice results. So it is a very difficult type of bias to combat, and one that can definitely play a big role in your research.

The next type is sort of the opposite of the Hawthorne effect: observer-expectancy bias, where you, the researcher, change your behaviour around a particular treatment. Suppose I am a doctor taking part in a double-blinded randomised controlled trial with a group of patients from my clinic. Double-blinded means I do not know which drug I am giving the patient, and the patient does not know which drug they are receiving. If I hand them what might be an antidepressant and say, "Please take this tablet, I hope this will make you feel better," or I give them an encouraging smile, the patient might pick up on my behaviour and say, "Yes, it makes me feel better now." Your behaviour, and what you say, can hugely influence how patients respond to a particular drug or treatment, so as a researcher it is really important not to give any kind of cue when you deliver the treatment, so that patients have no idea what is expected. That is an example of observer-expectancy bias.

The last type of bias I want to touch upon, again very common in research, is availability bias. Research is very expensive: getting a paper published is expensive, and access to all the journals that hold important data can also be very expensive. As a junior doctor or a medical student doing, for example, a systematic review, where you want to collect different papers and build your own paper from them, it is hard if you have very limited resources: you might not have access to the top, highly reputed journals, because subscriptions are expensive.
So what you do is base your research on whatever is available to you: you analyse it, come up with your own results and say, "This is what I think is right, based on papers A to E that I gathered and studied," and you put that out in the public domain. Again, that is biased, because you are not extending your research to all the papers out there; you are only using the data that is available to you. That is availability bias. To counter it, remember that a lot of NHS trusts nowadays have access to almost all the major journals, so if you want to do any research within your trust, it is really important to reach out to your librarians. They will help you get access to a particular paper or website that might otherwise be locked, and many institutes will, on request, pay for access and help you obtain the data, provided you have a consultant or someone senior backing the research.

So those are the different types of bias we have talked about today. There are many more, and subcategories within each, but understanding the eight main types will give you a good foundation. When you are sitting with your consultant or your team of researchers and helping them plan a study, they will be talking about these types of bias, and understanding that language will have a big impact on the way you perform your research, and hopefully help you get it out as a paper.

A quick summary: these are all the different types of bias, and because of them, the types of research have been arranged in a pyramid, with case series and case reports right at the bottom, because they carry the most bias, and systematic reviews and trials right at the top, because their rigorous methods ensure minimal bias and make them the most robust, most powerful types of study. That brings us to the end of today's session, in which we talked about the different types of bias, the levels of evidence and the hierarchy of evidence. Thank you for listening; if you have any questions, please feel free to pop them in the chat box.

We have now completed five sessions, and we have another two planned. The next one is quite important, especially if you are looking for junior research jobs in the NHS or have already started one: it covers Good Clinical Practice and the Declaration of Helsinki, and it will be a short session. After that we have another on critical appraisal of a paper: when you are given a paper, how to appraise it critically and how to report that appraisal. It is quite important to know that as well. Those are the dates that have been confirmed.
If you want to stay up to date with our sessions, make sure to follow our community on MedAll and you will be notified of any sessions we run. I hope you found the session useful, and if I can ask all of you to fill in the feedback form for the session, that would be really appreciated. I will try to sort out the technical error, which I did not expect today. If we are done, I think we can leave it here. Thank you for attending, have a nice rest of the weekend, and I will see you in the next session. Take care. Thank you.