This site is intended for healthcare professionals
Summary

In this session, medical professionals will delve into the second part of a series on the ABCs of statistical analysis and epidemiology. With a heavy focus on epidemiology, the facilitator aims to break down this often daunting topic, sharing valuable insights and tips on how to grasp its complexity. The session includes a recap of previous lectures and a breakdown of key aspects of epidemiology, from incidence vs prevalence to the different types of bias. It also makes space to discuss the importance of journal clubs for staying up to date in medical knowledge and enhancing critical thinking, and covers competition details for journal article presentations, an excellent opportunity to enhance your CV when applying for training posts. Regardless of your experience, this session promises to be a holistic learning experience.
Generated by MedBot

Description

Journal Club brings you the evidence-based medicine (EBM) crash course! If you are new to EBM or need a refresher, this series is for you!

Based on demand from the introductory session, a set of 3 webinars is coming your way, starting with the ABCs of statistical analysis!

Easy to follow and bite-sized information to help you interpret research findings. This interactive webinar will cover common statistical measurement tools and how they pertain to different research designs.

Learning objectives

1. Understand the principal concepts and terminology of statistical analysis and epidemiology, emphasizing their relevance and practical application in medicine.
2. Explore and comprehend various types of bias, such as selection bias, measurement bias, confounding bias, and attrition bias, and how they may impact research data.
3. Develop an understanding of the difference between incidence and prevalence and their importance in medical research studies.
4. Understand the importance of blinding in research studies to minimize bias, focusing specifically on measurement bias.
5. Recognize the structure of Journal Club meetings, understanding their contribution to maintaining up-to-date medical knowledge, critical thinking and effective communication.
Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

OK, I think it's available now and you can all see it. Welcome, everyone. Today we're covering the second part of our series, the ABCs of statistical analysis and epidemiology. Although the series is called the ABCs of statistical analysis, this session in particular is heavily based in epidemiology. Many of you may be like me in medical school: I found this topic quite tricky and frustrating to study, so I've put together quite a bit of information here to make it easier to understand. It's not as hard as I thought it was, and hopefully you'll come to appreciate that too. The agenda for today: I'll briefly talk about our competition for those of you who are new to our Journal Club series, give a crash course of the basics so you have a head start if you'd like to participate, give a quick recap, and then get into epidemiology, covering some aspects of bias and the statistical analysis, before finishing with what comes next. Taking part in a competition like this has a lot of advantages for anyone watching. It helps build your CV, and at some point in your career you may have to give a journal club presentation at work; this presentation will help you understand the research papers you come across, the common terminology and its implications. So it will help you in your work and it will build your CV.
It will give you a competitive edge for training posts, and of course it will help should you decide to join our competition; the more the merrier. The details of the competition will be available on our Journal Club page at mindthebleep.com, including how to submit, when to submit, and the important dates. Once the winners are announced, they will be posted on our website, with their consent. We want everyone to get the most out of this, so first place won't be the only winner: runners-up will also have the opportunity to give their presentations in other sessions. From the group of finalists, one will be picked as the Journal Club's national winner, and if a finalist is unable to participate, the next runner-up takes their spot. Finalists and runners-up will all receive certificates and awards. It's a great opportunity and well worth a go, no matter how much experience you have, whether that's little to none or a lot; and it's not only for the people presenting, as those watching and participating get to learn as well. Apologies for the ramble; now for the quick recap. Journal clubs are regular meetings in which healthcare professionals discuss and review recent research articles from medical journals. As you know, medicine is a very fast-paced field.
There is a lot happening and guidelines change frequently, so the journal club is a way to help us stay up to date with our medical knowledge. It has many benefits, as you can see: it encourages critical thinking, improves knowledge, provides an opportunity for peer review, enhances communication, and identifies gaps in our current knowledge and research needs. Every journal club presentation follows a certain skeleton or flow chart. The first part is the clinical encounter: a case you may have seen at work or heard about that raises a question. When you present, make sure you have the patient's consent and anonymise the case; don't put the patient's information out there. The case is there for the learning point, which is why journal clubs exist. Our session yesterday focused on developing the clinical question and the study designs associated with it; you can check it out after today's presentation. Today's session will also touch on the literature search. We'll start with epidemiology: I'll talk about the different types of bias, and then about incidence versus prevalence, which you'll come across frequently and which is the bread and butter for quite a few other concepts; if you understand it, things become a lot easier in the long run. There are many different types of bias; I've included four of the most common, so if you want to learn about more, you can do some reading in your own time. Here is the first one.
Selection bias occurs when there is a systematic difference between the individuals or groups selected for study participation and the target population. For example, say you have a weight-loss drug and want to test its efficacy. You conduct a study with a control group, which gets a sugar pill (a placebo), and an intervention group, which gets the weight-loss drug. If you select people who are very physically fit and active in the gym for your intervention group, and a random group of people for the control group, that is selection bias: the groups are not comparable, so the sampling itself introduces a bias. Done properly, you would select people at random from the population and divide them equally into the intervention and control groups; the control receives the placebo, the intervention receives the weight-loss drug, and you monitor them over time. This is what we discussed in our last session, the randomised controlled trial. The problem with selection bias is that it can lead to a false result: an over- or underestimation of the true effect of an exposure on an outcome. Does that make sense so far? Great.

Measurement bias refers to systematic errors in the measurement or assessment of the study variables. In a study like the randomised controlled trial we have here, you give the intervention group the weight-loss drug and the control group the placebo. What people normally do to reduce bias is blind the participants, and sometimes the researchers as well. If a participant doesn't know whether they are in the intervention or control group, their beliefs can't colour how they report their own progress; likewise, if the researchers assessing the data don't know who is in which group, that reduces measurement bias on their side. It's very similar to the concept of detection bias, but seen from the aspect of the researcher.

Confounding bias occurs when a third variable, known as a confounder, distorts the association between the exposure and the health outcome; it is a factor that differs between the exposed and unexposed groups. What do I mean by this? Consider a population of people who drink a large amount of coffee (I'm definitely one of them). Studies have shown an association between high coffee consumption and cardiovascular disease. If you conduct a study looking at this, you may not take into account that some of the individuals who consume large amounts of coffee are also smokers. Smoking has a strong association with high coffee consumption, but also with cardiovascular disease. That unaccounted-for variable, the one highlighted in this box, could affect your results. A confounding bias is a variable associated both with the exposure and with the outcome, and it can result in misinterpretation of the association.
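The coffee example can be sketched numerically. Below is a toy Python illustration with invented counts (not from any study): within each smoking stratum, coffee drinkers and non-drinkers have identical cardiovascular disease (CVD) risk, yet the pooled "crude" comparison still shows coffee drinkers doing worse, purely because smoking travels with coffee.

```python
# Illustrative (made-up) counts showing how a confounder (smoking) can
# create a spurious coffee -> CVD association.
# counts[(coffee, smoking)] = (CVD cases, total people in that cell)
counts = {
    ("coffee", "smoker"):     (30, 100),   # 30% CVD risk among smokers
    ("coffee", "non-smoker"): (10, 100),   # 10% risk among non-smokers
    ("none",   "smoker"):     (6,  20),    # same 30% risk
    ("none",   "non-smoker"): (18, 180),   # same 10% risk
}

def risk(groups):
    """CVD risk pooled over the given (coffee, smoking) cells."""
    cases = sum(counts[g][0] for g in groups)
    total = sum(counts[g][1] for g in groups)
    return cases / total

# Crude comparison pools smokers and non-smokers together.
crude_coffee = risk([("coffee", "smoker"), ("coffee", "non-smoker")])
crude_none = risk([("none", "smoker"), ("none", "non-smoker")])
print(f"crude risk: coffee {crude_coffee:.2f} vs none {crude_none:.2f}")

# Stratified comparison: within each smoking stratum, no difference at all.
for s in ("smoker", "non-smoker"):
    print(f"{s}: coffee {risk([('coffee', s)]):.2f} "
          f"vs none {risk([('none', s)]):.2f}")
```

The crude comparison gives 0.20 vs 0.12, suggesting coffee raises risk, while both stratified comparisons are identical; stratifying (or otherwise adjusting) on the confounder is what removes the distortion.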
Another type of bias is attrition bias. In any study, not all the participants who start will still be present at the end: some drop out, and some may unfortunately pass away. So a study that started with 1,000 participants, 500 in the control group and 500 in the intervention group, might end with only 421 in the control group and 209 in the intervention group. Those lost to follow-up and not included in the analysis lead to what is known as attrition bias. The drop in numbers within a group can definitely affect the values: you may have heard that the larger the sample, the more accurate the results, and that matters for attrition bias too. Differences between the groups in who is lost can also add unwanted variables that affect the interpretation of a study. To reduce attrition bias, you also want to account for those lost to follow-up. So we've covered four really common types of bias; now I'll talk about incidence versus prevalence. What you see here is known as the epidemiologist's bathtub. The shower is pouring water and the bathtub is filling; the tub is cracked, so some water drips out. Take the water to be the population with a certain condition: some are lost to mortality and some have recovered. The incidence is the number of new cases occurring, which is why water is pouring in. It reflects how quickly a disease develops and is useful in detecting rising transmission rates.
It can also be used in decision-making about the public health interventions needed to mitigate the spread of a condition. Prevalence is the number of cases already present within a given population, people already afflicted with the illness, either at a particular instant, known as point prevalence, or across a designated time frame, known as period prevalence. So prevalence takes into account new cases as well as pre-existing cases, whereas incidence only counts new cases. Prevalence helps us determine the burden of disease within a given population and is used by public health for service planning. Before we move on, any questions? Divya asks whether studies done in elderly populations tend to have attrition bias most of the time. Most definitely: elderly patients are more likely to pass away during a study, so there is quite a bit of attrition bias there. Certain populations will have it more than others, and you can also take certain conditions into account, those with high mortality rates. Yes, exactly, like Alzheimer's disease or Parkinson's. Excellent questions. Any other questions? No worries. Now we'll move on to measures of association. Ah, someone has asked me to explain measurement bias again; of course, let me go back to that slide. Measurement bias is easiest to see if you conduct a study; we'll take the weight-loss drug again.
Say your company has made a weight-loss drug, and you want to advertise that it causes a great amount of weight loss. In the study you recruit 1,000 people: 500 in the control group, who get a sugar pill, and 500 in the weight-loss drug group. Suppose you did no blinding, so every participant and researcher knows who is in which group: you tell participant A they're getting the placebo and participant B they're getting the weight-loss drug. As the study goes on, some measurements, like the patient's weight itself, are difficult to misinterpret, but other factors creep in. A participant in the weight-loss group might report "yes, I've been noticing I'm losing weight" when they haven't. Or, a better way to explain it, from the researchers' perspective: "I've been noticing that those in the intervention group taking the weight-loss pill tend to be losing a lot more weight," so you focus more on one group and pay less attention to the control group. It has a lot of similarities with detection bias, where you pay more attention to one group than another, and that can lead to measurement bias: these are differences you fail to take into consideration. Measurement bias can occur consciously.
People sometimes make a conscious effort to do this, and sometimes it happens subconsciously. Does that make sense? If not, no worries, I can explain it again with a different example. Great. Now, measures of association. For this section I'll be talking about smoking and lung cancer, and I'll use them to explain the different concepts. Relative risk is a measure used to quantify the risk of an outcome in one group compared to another group. Relative risk is a ratio: the ratio of the risk of the outcome, in our case lung cancer, in the exposed group, meaning those exposed to cigarettes, to the risk of the outcome in the unexposed group. Looking at this box: suppose the population that smokes has a 20% risk of developing lung cancer, and non-smokers have a 5% risk. (These values are not backed by science; they're just to help solidify the concepts.) The relative risk is 20% divided by 5%, which is 4, and remember, this is a ratio. You've determined the relative risk of developing lung cancer in those who smoke versus those who don't. So what do you do with this value? You can divide relative risk values into those below 1, above 1, or equal to 1. What happens if the relative risk is equal to 1?
For example, if 20% of smokers develop lung cancer and non-smokers also have a 20% chance of developing lung cancer, the relative risk equals 1. In that case the risk of the outcome, lung cancer, is the same in both groups, meaning there is no association between smoking and lung cancer. (We obviously know this isn't true; again, it's just to solidify the concept.) In the second case, where the relative risk is greater than 1, which is what we have here with a relative risk of 4, the risk of developing lung cancer is higher in the exposed group than in the unexposed group: a positive association. And if the relative risk is less than 1, the risk of the outcome is lower in the exposed group than in the unexposed group: a negative association, meaning you could say smoking is a protective factor against lung cancer (again, not true, just to solidify the concept). So with our relative risk of 4, smokers are four times more likely to develop lung cancer than non-smokers. Those of you who attended yesterday's session may find this next chart familiar; for those who haven't, it's an organised way, especially for visual learners, to keep the different study designs in mind. Case-control studies look back at exposures in the past; cross-sectional studies look at the present; and cohort studies follow outcomes into the future. Relative risk is a measurement normally used in cohort studies.
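The relative risk calculation above can be written as a few lines of Python, using the talk's illustrative figures (20% lung-cancer risk in smokers, 5% in non-smokers; not real data):

```python
# Relative risk (RR): ratio of outcome risk in exposed vs unexposed groups.

def relative_risk(risk_exposed: float, risk_unexposed: float) -> float:
    """Risk of the outcome in the exposed group divided by the
    risk of the same outcome in the unexposed group."""
    return risk_exposed / risk_unexposed

rr = relative_risk(0.20, 0.05)   # smokers 20%, non-smokers 5%
print(rr)

# The three interpretations described in the talk:
if rr > 1:
    print("RR > 1: positive association (higher risk in the exposed group)")
elif rr < 1:
    print("RR < 1: negative association (exposure looks protective)")
else:
    print("RR = 1: no association between exposure and outcome")
```

Here `rr` comes out as 4, matching the talk: smokers are four times as likely to develop lung cancer as non-smokers in this made-up example.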
It provides a measure of the strength of association between an exposure and an outcome, and helps assess the impact of an exposure on the risk of developing a disease. It's most commonly used in cohort studies, as well as randomised controlled trials, to compare disease risk between different groups of individuals. So consider a study with 1,000 people: 500 of them are smokers and 500 are non-smokers; you follow them over time and see how many develop lung cancer and how many don't. That is a cohort study. And please feel free to stop me and ask questions if anything is unclear. Ah yes, I forgot I put this slide in: it's a visual representation of what I was just describing. Smokers: 20% develop lung cancer; non-smokers: 5%. 20 divided by 5 is 4, which is our relative risk, and this is commonly used in cohort studies. I'll skip this slide for now, sorry. What we have here are case-control studies: studies that look back in time. You have a population with a certain outcome, some with lung cancer and some without. You take this population, interview them, look at what happened in their past, and try to find an association between the development of lung cancer and something in that past. So say we have a population of 1,000: 500 of them have lung cancer, and looking back in time, most of them were smokers; of the 500 who didn't develop lung cancer, most didn't smoke. So there is an association between smoking and lung cancer. That is a case-control study, and for case-control studies you can use what is known as the odds ratio.
If you've seen these terms before, you'll know it can be very difficult to tell the difference between the odds ratio and the relative risk, but please bear with me; after I explain the odds ratio, I'll discuss the differences between the two. Like relative risk, odds are a ratio. Again we have smokers who develop lung cancer and non-smokers who develop lung cancer, and suppose a study showed that 1 in 5 smokers develops lung cancer and 1 in 20 non-smokers develops lung cancer. The odds of an outcome are the number who develop it divided by the number who don't, so the odds for smokers are 1 to 4, or 0.25, and for non-smokers 1 to 19, or about 0.053. The odds ratio is 0.25 divided by 0.053, which is roughly 4.7. What do you do with this value? Similar to relative risk: if the odds ratio is equal to 1, the odds of the outcome, lung cancer, are the same in both groups, meaning there is no association between smoking and lung cancer. If the odds ratio is greater than 1, as in our case, the odds of the outcome are higher in the exposed group than in the unexposed group, meaning there is a positive association. And the third possibility is an odds ratio of less than 1, meaning the odds are lower in the exposed group than in the unexposed group. So what is the difference between risk and odds when you calculate them? Imagine you're in a museum with 100 people; the dots represent people (I know there aren't 100 dots, but bear with me). Out of the 100, one person coughs. The risk of a person coughing is 1 in 100, or 0.01. The odds are not the same as the risk: they are not 1 in 100. You take the one person and subtract them from the total population.
So it's 100 minus 1, which is 99, and the odds of a person coughing in the museum at this given time are 1 out of 99, or about 0.0101. Here the risk and the odds are really similar. So where do you start to see a difference? When the incidence of the condition is high. In this case the incidence is the cough, and since it's low, only one person, the risk and the odds are not that different. But let's change it up a little: same museum, same day, 100 people, and instead of just one person coughing, 55 people cough. Now the risk of someone coughing is 55/100, or 55%. The odds are 55 to 45 (100 people minus 55 leaves 45), which is about 1.22. So there is now a very big difference: before, we had 0.01 for the risk and 0.0101 for the odds; now we have 0.55 and 1.22. That's the point: when the incidence is higher, you start to see a difference between risk and odds, and hence between the relative risk and the odds ratio. This next one is a pretty fun one, very straightforward. Take our same case: in a non-smoking population the risk of developing lung cancer is 5%. There are other things that can cause it, but we'll say the baseline is 5%. Say we have Mark, a non-smoker: Mark has a 5% chance of developing lung cancer. But let's say he does start smoking; in that case his risk of developing lung cancer goes from 5% to 20%. As we know, there is more than one thing that can cause lung cancer, but our focus right now is smoking: how much of this risk is attributable to smoking? That is what we refer to as the attributable risk. To calculate the attributable risk:
We take the 20% risk for those who smoke and subtract from it the 5% for non-smokers, and we find that 15% of the risk of developing lung cancer is attributable to smoking. Does that make sense? Great, wonderful. Now we'll talk about sensitivity, specificity, positive predictive value and negative predictive value. Starting with sensitivity: this is a measure of the ability of a diagnostic test to correctly identify individuals with a given condition or disease of interest. A highly sensitive test is able to identify the majority of positive cases; a diagnostic test with low sensitivity can miss positive cases. For this coming section I'm going to use HIV as the condition; the test for sensitivity is the ELISA, and for specificity we'll use the Western blot. So say we have an ELISA test for HIV with 97% sensitivity. What does that mean? If each of these 100 dots represents one individual with HIV, we have 100 people with HIV, and we run an ELISA on every one of them. 97% sensitivity means the ELISA correctly diagnoses 97 out of the 100 with HIV, which means 3 out of the 100 will be given a false negative result: they are HIV positive, but the test falsely says they're negative. Now, still with HIV, let's look at specificity with the Western blot test. Specificity is a measure of the ability of a diagnostic test to correctly identify individuals without a given condition or disease of interest. A highly specific test is able to identify the majority of negative cases, and one with low specificity can miss negative cases. Let me explain this with our example: again, we have 100 people.
These are people who don't have HIV, and we do a Western blot for all of them. Say the Western blot has 97% specificity: out of the 100 people who are truly negative for HIV, it correctly reports that 97 do not have HIV, which means 3 out of the 100 will not have a diagnosis of HIV but will be given a false positive result; they'll be told they have HIV, although they don't. So that's our specificity. A lot of the time you may be wondering: if we have a test that is 100% sensitive, or 100% specific, why do we need to worry about the other property at all? Here's why. Take a population of 300 now rather than the 100 we started with, and a test that is 100% sensitive rather than 97%. A test can catch every true positive and still incorrectly give positive results to some people who are negative. So someone who is HIV negative may, on the basis of a single test, be told they are HIV positive when they are not, and clinically that introduces a lot of problems: it creates panic for the patient, healthy people can be treated for a condition they don't have, and it's an unnecessary waste of resources and time. Does that make sense? Any questions? No? OK. And the opposite is true as well: part of a population with HIV may be incorrectly given a negative result.
They're told they don't have it, but they do, which gives the patient false hope or a false sense of security. You'd like me to repeat the last part? Of course. What I'm trying to explain in these two slides is that medicine is not a perfect science: a test that will detect everyone with a given condition can also falsely give positive results to those who don't have it, and a test that can correctly identify everyone who doesn't have the condition can also give negative results to people who do have it. The problem with relying on a test that is only highly sensitive, or only highly specific, is that a portion of the population will be given a false result, whether false positive or false negative. A false positive means panic for the patient and a waste of resources if you decide to treat; a false negative means the patient won't seek the medical attention they need, their health will deteriorate, and that is very problematic. Which is why, in this next part, I explain that when you have tests that are highly sensitive and tests that are highly specific, the best thing to do is combine the two. So in patients being tested for HIV, we usually start with a highly sensitive test, which picks up as many people as possible who are truly HIV positive and gives them a positive result; but that also means some people who don't have HIV may get a false positive. So the positives at this stage are those who are truly positive plus those with a false positive test.
Anyone who tests positive on the ELISA will then undergo another test, the Western blot, which is a highly specific test. This test will confirm which of those patients don't actually have the condition. So you want to combine the two test results in order to get the most accurate outcome. And this might look a little bit familiar, because it's usually put into this graph right here, so I'm just going to talk through its different parts. What we have here is a bell curve graph. Everything underneath this continuous line is considered true negative; carrying on with our example, this is the population that is truly negative for HIV. In the second bell curve, the dotted line, everyone underneath it is considered a true positive, so those are the individuals who are HIV positive. In the middle, where the curves overlap, results are a little bit inconclusive, but disregard that for the meantime; just focus on true negative and true positive. These red lines represent the different tests. Line A represents a test that is 100% sensitive. So let's say the ELISA is a test with 100% sensitivity. What does that mean? It means that anyone who is truly positive is accounted for: you can see that the dotted curve doesn't go past line A. So anyone who is positive gets the correct result with 100% sensitivity. The downside, like I was mentioning earlier, is that some of those who are truly negative may have a false positive test, and they sit right here. Now the second part is line C, and line C represents a test with 100% specificity.
So if we're talking about the Western blot, and we say this Western blot test has 100% specificity, it means 100% of the population who are truly negative for HIV get a negative test result. It accounts for everyone who is negative, but unfortunately it may also catch some people who are truly positive and give them a false negative result. So it will tell them, oh, you don't have HIV, but they do. The most accurate place, and the best place you want to be, is line B. This chart is great at explaining sensitivity versus specificity, and why a test that's only 100% sensitive or only 100% specific may not be the best thing, because it leaves room for false positives and false negatives. And of course this is an idealized chart; for different tests the lines will sit in different places, but these are the basic positions. OK, so now I'm going to talk about positive predictive value. Positive predictive value is used for tests that are sensitive. Obviously, not every test is going to be 100% sensitive, but we want to get as close as possible. In this example we're going to use the ELISA, our sensitive test, and we're going to assume it has 100% sensitivity. So again, 100% sensitivity, line A: everyone who's positive is taken into account. You may have seen this table before. You have your population here of 1000 people. In this column, 200 out of the 1000 are HIV positive and 800 out of the 1000 are HIV negative. You take everyone and do an ELISA on them, and the ELISA tells you 200 people are HIV positive. These are the true positives, and there are no false negatives. So to quantify this, you use something known as the positive predictive value.
This is a measure of the probability that individuals with a positive test result truly have the condition or disease of interest. In other words, how many people who tested positive are truly positive? And it has an equation: you take the number of people who are truly positive and divide it by the sum of those who are truly positive plus those who are false positives. In this case, it's 200 over 200 plus zero, which is just 200 over 200. So this test has a 100% positive predictive value. That's not a realistic example, so let me give a more realistic one. Again, similar circumstances, but in this case the ELISA detects 160 of the 200 who are HIV positive, while 80 people have been given a false positive test. The way to calculate the positive predictive value here is you take the true positive count, which is 160, and divide it by the total that the ELISA has given a positive result for, which is 160 plus 80, that's 240. It means that in this circumstance the ELISA has a positive predictive value of about 67%. So 67% of all individuals testing positive are truly HIV positive. In terms of implications, this is what tells the clinician how much to trust a positive result from a sensitive test. And as you may have predicted, there is also a negative predictive value, and this is for tests used for their specificity, so the Western blot. Again, we'll take an example where a Western blot has 100% specificity, with the same population. In this case there are no false negatives, and everyone who is HIV negative has been correctly accounted for, so the negative predictive value is 100%. Which means if someone tests negative on this Western blot, you can be sure they don't have HIV. Here is a more realistic version.
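To make the arithmetic concrete, here is a minimal sketch in Python (an illustration added for these notes, not part of the slides) of the two predictive-value formulas, using the ELISA example of 160 true positives and 80 false positives:

```python
def positive_predictive_value(true_positives, false_positives):
    """Probability that a positive result is truly positive: TP / (TP + FP)."""
    return true_positives / (true_positives + false_positives)


def negative_predictive_value(true_negatives, false_negatives):
    """Probability that a negative result is truly negative: TN / (TN + FN)."""
    return true_negatives / (true_negatives + false_negatives)


# ELISA example from the slides: 160 true positives, 80 false positives.
ppv = positive_predictive_value(160, 80)
print(f"PPV = {ppv:.0%}")  # 160 / 240, roughly 67%
```

The same two-line functions cover any 2x2 test table; only the counts change from study to study.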
So again, you have a Western blot test that correctly identifies 720 out of the 800 who are HIV negative, but 40 people have been given a false negative result. You take the true negatives, 720, as your numerator and divide by the total number of people given a negative Western blot result, whether they're truly HIV positive or negative, which is 720 plus 40. That gives us about 95%, and this is our negative predictive value. We're getting to the end of the presentation, we're almost there. Any questions so far? Good. If you have any questions, let me know. All right, so this next part is quite straightforward. We have our mean, which is the average, and this is used to describe the average occurrence or severity of a disease within a population. Let's say you have a certain population in Manchester over the age of 50 and you want to determine the average systolic BP reading. You take 200 participants, measure their BP, and write down the systolic values; this is your collected data. For the average, you add up all of these values and divide by the number of participants, and this gives you the mean reading. So it tells you that in Manchester, the average systolic BP reading for individuals over the age of 50 is 142.4 millimeters of mercury. The median is where you have a set of data and, continuing with our example, you order the readings from the lowest BP to the highest and find the middle one. The median is the middle reading, and it divides the data into two halves: those falling below and those falling above the median. The great thing about the median is that it's not influenced by extreme values or outliers, unlike the average.
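That robustness point is easy to demonstrate. A quick Python sketch, using made-up systolic BP readings (the numbers here are hypothetical, not the session's data):

```python
from statistics import mean, median

# Hypothetical systolic BP readings; 240 is an extreme outlier.
readings = [120, 125, 130, 135, 240]

print(mean(readings))    # 150: pulled well above the typical reading by the outlier
print(median(readings))  # 130: the middle value, unaffected by the outlier
```

Dropping the outlier barely moves the median, but it shifts the mean substantially, which is why the median is often preferred for skewed clinical data.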
So if there's a reading that's extremely low or extremely high, an outlier, it can definitely affect the mean, but the median doesn't have that problem. You find the middle number; sometimes, though, the data has two numbers in the middle, in which case you take the average of the two middle values to determine the median. And lastly we have the mode, which tells you the most frequently occurring value. Again, if you have these different systolic BP readings in our population, we try to determine which one occurs most often. After reading through the data, we may find that 130 millimeters of mercury occurs more than once; you can see it twice over here, which means it's more frequent than any other measurement, and this is known as the mode. We are running a little bit out of time, but I've put a few links over here, and I can add them into the chat; they include a little more information on epidemiology and statistics that you can have a quick read through, and they'll be really helpful for your research. So that's the end of our presentation. The last part is tomorrow, and this is going to be our critical appraisal section: it covers how to go through the critical appraisal process and what tools are available to help you with it. This should be a relatively short presentation, so it'll be a nice wrap-up for our series. The recordings for this presentation, as well as the previous ones, will be available on MedAll. Don't forget to register for tomorrow's session, keep an eye out for our Journal Club page on Mind the Bleep, and I look forward to seeing you guys.
Don't forget to fill in your feedback form and get your certificates. Before I end this, do you guys have any questions?
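As an end-of-session recap added to these notes (not part of the talk itself), the sensitivity and specificity definitions from earlier can also be sketched in Python. The example counts mirror the Western blot figures from the session: 97 of 100 truly negative people correctly cleared.

```python
def sensitivity(true_positives, false_negatives):
    """Of those who truly have the condition, the fraction the test catches: TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)


def specificity(true_negatives, false_positives):
    """Of those who truly lack the condition, the fraction correctly cleared: TN / (TN + FP)."""
    return true_negatives / (true_negatives + false_positives)


# Western blot example: of 100 truly negative people,
# 97 get a correct negative result and 3 get a false positive.
print(f"specificity = {specificity(97, 3):.0%}")  # 97%
```

Note the pairing: sensitivity and the negative predictive value both involve false negatives, while specificity and the positive predictive value both involve false positives.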