This two-part series delves into the nuances of critically appraising clinical research papers. It offers valuable insights for medical professionals at all levels of training who wish to enhance their skills in analyzing research articles.
How to critically appraise a clinical research paper: Part 1
Summary
Join us for an informative research webinar on critically appraising a clinical research paper, presented by esteemed researcher and Assistant Lead in Surgery, Mr A. Ruby. With postgraduate qualifications from the University of Oxford and the University of New York, Mr Ruby brings a broad range of expertise in neuroscience and health research. An advocate for academic mentorship and diversity in medical academia, he has contributed significantly to the field of neurosurgery with 22 PubMed-indexed publications. In this session, explore the importance of study designs, common sources of bias, and the role of evidence-based medicine. Learn how to evaluate the validity of scientific literature and familiarize yourself with the different types of observational studies. This session will equip you with essential skills to critically assess clinical research and use it effectively in clinical decision-making.
Description
Learning objectives
- Understand the role of evidence-based medicine in clinical decision making and the importance of critical appraisal in this process.
- Become familiar with different types of study designs and their appropriate uses.
- Develop the ability to recognize and decide whether a study design has been correctly applied in published research papers.
- Understand the concept of observational studies, including cohort, case-control, and cross-sectional designs, and their limitations.
- Learn how to discern the quality of published papers, differences between retrospective and prospective studies, and the impact of bias on research findings.
Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
Yeah, shall we start? Are we going live, or are we live? Oh, we're live. Perfect. OK, very nice. Good evening, everyone. Welcome to the first of our two-part research webinar series from the neurosurgery group. Our topic today is how to critically appraise a clinical research paper, part one, and I'll hand over to our speaker. Our speaker for today is our Assistant Lead in Surgery, Mr A. Ruby. Mr Ruby is an incoming surgical trainee, starting his residency in Manchester, United Kingdom. He graduated in medicine with distinction and was awarded the best graduating student prize. He completed an MSc in Neuroscience at the University of Oxford, where he received a graduate essay prize, and as an academic foundation doctor he completed a PGCert in Health Research and Statistics at the University of New York, where he was awarded the best overall performance prize. Mr Ruby has achieved numerous undergraduate and postgraduate prizes and awards at local, national and international levels. He has strong research interests in clinical neuroscience, medical education and health inequalities. He has authored 22 PubMed-indexed publications, two of those as first or senior author, has co-authored 14 conference abstracts, and has presented 17 abstracts at major national and international scientific meetings. In 2022 he was appointed as an advisor on an open research advisory board. He is passionate about academic mentorship and was formerly a Director of Academia, creating a national research network to tackle the lack of diversity in the UK academic workforce. He is currently the Assistant Lead of the Neurosurgery Department of the surgical interest group. Over to our speaker, who will be talking to us on how to critically appraise a clinical research paper, part one. Thank you for listening.

Thank you very much, Jeremiah, for the introduction. Can you hear me still? Yes? OK, thank you very much. So today, as Jeremiah said, we'll be doing the first part of this critical appraisal series. Today I'll be introducing you to study designs and some biases, and then in the next session we'll go into more detail on randomized controlled trials and the algorithm for critically appraising any paper you come across. Why is the slide moving slowly? OK, there we go.

So why is critical appraisal important? Firstly, we are clinicians, or we will be clinicians at some point, and when you are making a treatment or management decision for your patient, you need to follow evidence-based medicine. I'm sure many of you have come across these three circles coming together to form evidence-based medicine. Evidence-based medicine is basically you as a clinician using your clinical judgment and expertise from years of experience, combining it with what you see in the scientific literature, and also taking into consideration the patient's values and preferences, to reach a shared decision that you and the patient will be happy with. The reason critical appraisal is important, especially for the relevant-scientific-evidence part, is that not every paper you see published in the literature is actually good enough to have been published. There are poor-quality papers in the literature, and a famous example that comes to mind is the publication by Andrew Wakefield.
He was a doctor in the UK who published a paper in the Lancet many years ago claiming that the MMR (measles, mumps and rubella) vaccine causes autism in children. The methods were very poor, the research was unethical, and even though it was published in a very high-impact journal, it was later found not to be worthy of the medical literature and the paper was retracted. So critical appraisal is basically the process of examining research evidence to judge whether it is valid, in other words the trustworthiness of the paper, to assess the value of the work that has been done, and also to see whether it is actually relevant to the patients you are trying to make a decision about.

I'm sure many of you have come across this pyramid of evidence. Starting from the bottom, when you want to make a clinical decision, the lowest level of evidence is case reports, expert opinions and letters. A professor who has worked many years in a certain field might have an opinion on how to treat certain patients, but that is a low level of evidence. Then you have animal and in vitro studies, the preclinical studies, which give you some evidence of what is going on, and then you have observational studies: cross-sectional studies, case-control studies and cohort studies. After that you have randomized controlled trials, and the strongest is systematic reviews with meta-analysis. When I was a student, when I started learning about scientific evidence, I used to think the only good evidence was randomized controlled trials or systematic reviews and that anything else was low quality. But sometimes you can only do observational studies; you cannot do trials for certain research questions.

So how would you answer these questions? If you want to ask, does smoking cause lung cancer, you could think of a randomized controlled trial where you make some of your participants smoke and some not smoke, then follow them up and see who develops lung cancer. But you have to ask yourself, is that ethical research? It is not, because you would be causing harm to some participants. Or, is obesity associated with cardiovascular disease? You might think, I'll overfeed these participants, make them not exercise, just let them lie in bed for years and see whether they develop any cardiovascular disease. That is not ethical either, so you can't really run a trial for that. Or, are children who live near power lines more likely to get leukemia? If you already suspect an association, do you really want to put some children near power lines and some away from power lines to see who gets leukemia? It is not ethical. So it is not always possible for us to run randomized controlled trials; that is what I am trying to drive at. Randomized controlled trials are not always feasible. Sometimes it is unethical to test certain hypotheses. They are also not feasible if you are trying to look at long-term outcomes, because trials are quite expensive. For example, say you want to find out whether, after a patient undergoes a certain procedure, their disease will recur in maybe 50 years' time. You would expect to be bringing that patient back to your clinic every year.
It is quite expensive to do that, and you will probably lose some patients to follow-up, so it is not always feasible to run randomized controlled trials for long-term outcomes. The same applies if the outcome you are trying to measure is very rare. For example, say that after treating a certain condition it is rare for it to recur. Most trials recruit something like 1,000 patients, or 10,000 like the COVID trials, but if the outcome is very rare you might need far more, maybe 100,000 or a million patients, to actually observe it. So it is not very feasible to run trials when the outcome is rare.

Observational studies are therefore sometimes the best study design when randomized controlled trials are not feasible. As I mentioned earlier, there are three types of observational studies: cohort studies, case-control studies and cross-sectional studies. An important message I want to pass across today, and which we will discuss later, is that observational studies only show that there is a relationship between X and Y. They do not show that X causes Y or that Y causes X, so you cannot determine causality from observational studies. We will come back to this later on.

The next step is to talk about the different types of observational studies. If you look at the literature, there is actually a lot of evidence that in many research papers the study design is incorrectly labeled. This is a paper from Neurosurgery, one of the high-impact journals in neurosurgery. The authors looked at publications in high-impact-factor neurosurgery journals and found that 63% of studies were inappropriately labeled as case-control studies. And this is not only in neurosurgery; it has also been shown in obstetrics and gynecology. It is a problem because the study design affects how you interpret the study and also affects the analysis. So it is very important to be able to say: this is a cohort study, this is a case-control study, this is a cross-sectional study, or this is a randomized controlled trial. We will talk about how to recognize each one. What I am driving at is that just because something is published in the literature and has been peer reviewed does not mean there are no mistakes. In fact, I will confess there is a paper I published, I think two years ago, that I am quite ashamed of, because I mistakenly called it a case-control study when it is actually a retrospective cohort study. But of course we learn and grow every day.

Right, so let's talk about the various types of observational studies. Cohort studies: say you want to ask, does coffee consumption increase the risk of stroke? In a cohort study, you look at a group of people, the cohort, who have never had a stroke, and then you check their coffee consumption and put them into categories: they don't drink coffee at all, they drink moderate amounts of coffee, or they drink a lot of coffee.
After grouping them into these categories, you follow them over time to see whether they develop stroke, and you then compare the rate of stroke in each category of coffee consumption. Cohort studies can be either retrospective or prospective. In a prospective cohort study, you recruit the participants now, measure their coffee consumption, and then follow them forward over time. In a retrospective cohort study, and this is where people make a lot of mistakes in distinguishing retrospective cohort studies from case-control studies, you might look at the period from, say, 2010 to 2020, take a group of patients from that time, measure their coffee consumption at the start, and see how many of them developed stroke over that ten-year period. So you are using retrospective data, but remember, you still started with the exposure, which is coffee, and then followed the participants up to see whether they developed the outcome. This is just a flow chart of what a cohort study looks like: you start with a population that does not have the disease, divide them into people who drink coffee and people who do not, follow them up, and see who develops stroke and who does not. That is literally a cohort study: start with the exposure, then find out the outcome.

Cohort studies have some advantages. You can look at a wide range of outcomes: in the same study you might look at who develops stroke, who develops coronary heart disease, who has a better quality of life, and so on. You can also measure the incidence of an outcome: for example, if 5% of the people who drink coffee develop stroke, you can say the incidence of stroke in coffee drinkers is 5%. You can also establish temporality. Temporality is about time: you know that at the start of the study none of the participants had had a stroke, so if a participant ends up having a stroke later on, you know the coffee drinking came before the stroke. In other words, you know your exposure happened before your outcome; that is what temporality means. In terms of disadvantages, cohort studies usually require a large number of participants, they can be expensive, and the follow-up period is sometimes long, so you lose people to follow-up, which can cause something called attrition bias, which we will talk about later. There is also a risk of selection bias, which we will also come back to.

OK, now case-control studies. Once again, let's try to answer the question "does coffee consumption increase the risk of stroke?", this time using a case-control design. As I mentioned, this is something people mix up a lot. In a case-control study, as the name suggests, you get people with stroke: you look at your patients, or even the general population, and identify people who have had a diagnosis of stroke, and you also get a group who have not had a stroke; those are your controls. You then look back at their retrospective data to see whether they drank a lot of coffee previously or not. So, just to clarify again, you start with cases versus controls.
So, stroke patients versus no stroke, and then you look back at their history to see whether they drank a lot of coffee previously or not, and you look for differences in coffee consumption between the cases and the controls. I'll show you a diagram again. A case-control study is always retrospective, because you already have the outcome and you are going backwards to find the exposure, which in this case is coffee. So you start with the cases with stroke and the controls without stroke, compare the proportion of people in each group who drank coffee and who did not, and then do your analysis to see the relationship between stroke and coffee consumption. Once again: you start with the outcome, cases and controls, and look for differences in the exposure. Case-control studies are quite good for rare outcomes, because you don't need to wait years for the outcome to occur; you just look at your database, find people who already have the disease, get people who don't have it, and look for risk factors. They are also good for conditions where there is a long time period between exposure and outcome. The problem with case-control studies is that you can't establish temporality, because you already have the outcome and you are checking backwards to see whether the patient drank a lot of coffee previously. What if the patient only started drinking a lot of coffee after they had the stroke? If you ask a stroke patient "do you drink a lot of coffee?" and they say yes, you can't really tell whether the coffee drinking started before or after the stroke. Case-control studies are also prone to recall bias, which we'll talk about shortly.

And then finally, the third type of observational study is the cross-sectional study. Once again, we are trying to answer: does coffee consumption increase the risk of stroke? In cohort studies and case-control studies there is always a follow-up period, a timeline, but in a cross-sectional study you are looking at both the outcome and the exposure at the same time. Do you drink coffee? Have you had a stroke? You ask both at the same time, and you can use questionnaires for this: have you had a stroke before, do you drink a lot of coffee, and that's it, you just do your analysis. Cross-sectional studies are very cheap; most questionnaires can be done online these days, you don't even need to post them. But once again, you can't establish a temporal relationship between the outcome and the exposure, so you don't know which came first.

Right, this is a diagram to differentiate and summarize what we've done so far. In a cross-sectional study, you examine individuals at a single time point: the patient in front of you right now, have you had a stroke, do you drink a lot of coffee, you examine them there and then. In a case-control study, you start with the patients with the outcome versus those without the outcome, stroke versus no stroke, and you work your way backwards to see whether the risk factor, drinking coffee versus not drinking coffee, is present or not. And in a cohort study, you start with the risk factor, drinking coffee versus not drinking coffee, and then you follow people up over a period of time to see whether they develop stroke or not.
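As a rough aid to the summary above, here is a minimal Python sketch (not from the speaker's slides) that encodes the rule of thumb for telling the three designs apart. The function name and its two inputs are illustrative choices, not standard terminology.

```python
def classify_observational_design(starts_from: str, has_follow_up: bool) -> str:
    """Rule-of-thumb classifier based on the talk's summary (illustrative only).

    starts_from   -- "exposure" if participants are grouped by exposure first,
                     "outcome" if cases and controls are selected by the outcome.
    has_follow_up -- True if exposure and outcome are measured at different time
                     points (prospectively or from historical records).
    """
    if not has_follow_up:
        return "cross-sectional study"   # exposure and outcome measured at one time point
    if starts_from == "exposure":
        return "cohort study"            # group by exposure, then look forward to the outcome
    if starts_from == "outcome":
        return "case-control study"      # group by outcome, then look back at the exposure
    raise ValueError("starts_from must be 'exposure' or 'outcome'")


# The coffee-and-stroke examples from the talk:
print(classify_observational_design("exposure", True))   # cohort study
print(classify_observational_design("outcome", True))    # case-control study
print(classify_observational_design("exposure", False))  # cross-sectional study
```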
Right, so we'll do some tasks now. I'd like you to put your answers in the chat box: let me know whether each of these is a cohort study, a case-control study or a cross-sectional study. I'll give you 30 seconds. All right. I can't see the chat box, sorry, so I'll just go through the answer. In 1951 the BMA forwarded to all British doctors a questionnaire about their smoking habits, and 34,440 men replied. With a few exceptions, all the men who replied in 1951 were followed up for 20 years; the causes of the roughly 10,000 deaths that occurred, and subsequent changes in smoking habits, were recorded. From what we've said, this is a cohort study, because at the start you identify those who smoke versus those who do not and then follow them up over a period of time, 20 years, to see the cause of death. So it is a cohort study: it starts with the exposure and then checks what the outcome is.

And then this one; I'll give you another 30 seconds. OK. A study aiming to identify the environmental factors that influence the onset of insulin-dependent diabetes involved identifying records from 1,196 children with type 1 diabetes and 3,225 children of similar age without diabetes. Information on pregnancy, gestation, birth weight, neonatal infections, breastfeeding and admissions to a special care baby unit (SCBU) was collected from the records. In this instance, the study identified patients with type 1 diabetes and patients without type 1 diabetes and then checked their risk factors, so this is a case-control study, because you start with the outcome and work your way backwards to find the risk factors.

And finally this one; ten seconds again. OK. A study investigating the association between weight status (underweight, normal weight, overweight, obese) and self-reported general health was undertaken using data from the Health Survey for England 2008. In this survey, individuals were asked to respond to the question "How is your health in general?", with the options very good, good, fair, bad or very bad. Participants also had their height and weight measured by trained nurses. This study was looking at the association between weight status and general health, and both were measured at the same time rather than with a long follow-up period between them, so it has to be a cross-sectional study.

So this is quite important, knowing the type of study, because that's your first step in critical appraisal. Even if a study says it's a case-control study, look at it properly and decide whether it actually is one, because mislabeling is a big problem in research papers across many specialties, and this could actually be a research idea for you. Probably not in neurosurgery, because it has already been done there, but you could look at the papers that have been published on this topic of mislabeled case-control studies.
You could check, say, plastic surgery or cardiothoracic surgery (obstetrics and gynecology has already been done) and, using the same methods as the neurosurgery and O&G papers, see whether the case-control studies in that specialty are actually mislabeled or not. I think it's a relatively straightforward paper to write if it hasn't been done in a specialty that interests you.

All right. So how do we quantify the relationship between exposure and outcome? Generally, when you read papers, relationships are expressed using a relative risk (also called a risk ratio; they are the same thing) or an odds ratio. Let's quickly go through how to calculate a relative risk. The relative risk is literally the risk of the outcome in the exposed group divided by the risk of the outcome in the unexposed group; that's the definition. In this two-by-two table, 14 of the people who drank coffee had a stroke, out of 200 coffee drinkers in total, so the risk of stroke in those who drank coffee is 14 out of 200. Among those who did not drink coffee there were 220 people in total, and 11 of them had a stroke, so the risk of stroke in people who did not drink coffee is 11 out of 220. To calculate the relative risk, you divide the risk in the exposed group by the risk in the unexposed group, and you get about 1.4. I'll go through how to interpret relative risks and odds ratios shortly.

Now, you can't actually use relative risk in case-control studies, which is why I mentioned earlier that the way you label your study affects how you do your statistics. To calculate a relative risk you need to know the incidence in the exposed and unexposed groups, but in a case-control study you can't get the incidence, because you decided at the start of the study how many people with the condition and how many without the condition to include. So in a case-control study you use the odds ratio; you cannot use the relative risk. How do you calculate an odds ratio? It is essentially the odds of the outcome in the exposed group divided by the odds of the outcome in the unexposed group. In this example the exposed group is the coffee drinkers: 14 people who drank coffee had a stroke and 47 did not, so the odds in the coffee group are 14 to 47. Among those who did not drink coffee, 11 had a stroke and 53 did not, so the odds are 11 to 53. When you divide them, in this example you again get about 1.4. Most of the time the odds ratio is a reasonable approximation of the relative risk, though that isn't always the case; in the example I've used they come out similar, but the way you calculate them is different.
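To make the arithmetic concrete, here is a short Python sketch of the two calculations just described. The counts are taken from the talk's coffee-and-stroke example as best as the transcript allows, so treat them as illustrative rather than exact.

```python
def relative_risk(a, b, c, d):
    """Relative risk from a 2x2 table.

    a: exposed with the outcome      b: exposed without the outcome
    c: unexposed with the outcome    d: unexposed without the outcome
    Only meaningful in cohort studies, where incidence can be estimated.
    """
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    return risk_exposed / risk_unexposed


def odds_ratio(a, b, c, d):
    """Odds ratio from the same layout of 2x2 table; the usual measure for case-control studies."""
    return (a / b) / (c / d)


# Cohort-style counts from the talk: 14 of 200 coffee drinkers and 11 of 220 non-drinkers had a stroke.
print(round(relative_risk(14, 186, 11, 209), 2))   # ~1.4

# Case-control-style counts from the talk: 14 vs 47 in the coffee group, 11 vs 53 in the no-coffee group.
print(round(odds_ratio(14, 47, 11, 53), 2))        # ~1.44
```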
So when you are interpreting a relative risk or odds ratio, firstly you need to ask yourself which group is being compared with which. In this instance we are comparing those who drank coffee with those who did not. Then you look at the value of the relative risk or odds ratio: if it is greater than one, there is an increased risk; if it is one, there is no difference in risk; if it is less than one, there is a lower risk of the outcome. Then you need to decide whether the effect size is actually of clinical importance: is it big, does it actually matter clinically? And you also have to look at the statistical significance: usually you look at the 95% confidence interval and check whether the p-value is less than 0.05.

I won't go into too much detail about this slide, but you can watch the video or look at the slides later. It is a rough way of judging the effect size to decide whether it is clinically important. If the odds ratio is more than 4, we are usually confident that it is likely to be clinically important. If it is between 2 and 4, it is possibly important, but we need to look into it more closely. If it is between 1.5 and 2, it is possibly important, but more studies are needed to confirm it. And if it is between 1 and 1.5, there is probably an association but it is probably not clinically important, and more research is needed. As for statistical significance: if the 95% confidence interval includes the number one, the result is not statistically significant, because there is a reasonable chance that the true relative risk or odds ratio is one, so you can't claim a significant effect.

Right, some more tasks. Here the relative risk is 1.4 and the 95% confidence interval is 1.2 to 1.8. The first question is: is this statistically significant? To answer that, ask yourself whether one lies in the range 1.2 to 1.8. The answer is no, so because one is not in that range, it is significant. The next question is: is it clinically important? Remember what I said: 1.4 is quite low, so it is possibly clinically important, but we can't say for sure. For the next one (sorry, I can't see the screen, so I can't see where you're typing; just answer mentally) the relative risk is 3.9 and the confidence interval is 3.6 to 4.2. Once again, is it significant? Is one between 3.6 and 4.2? No, so it is significant, and a ratio of 3.9 looks like a large effect, so I would say it is also clinically important. So that one is both statistically significant and clinically important. The next one is 0.5, with a range of 0.3 to 1.3. Is it significant? One does fall between 0.3 and 1.3, so there is a chance there is no real effect; it is not statistically significant, so we cannot claim an important clinical effect. And for the final one, the estimate is 1.2 and the 95% confidence interval is 0.9 to 1.4. Is one in this range? Yes, one is between 0.9 and 1.4, so it is not statistically significant. But is it clinically important? There have been studies where the association was not statistically significant, but because of the size of the odds ratio some people argued there was a trend towards significance and that it could still be an important risk factor. So that one gets a question mark for an important effect: it could be due to chance, or it could actually matter. That is the approach, and you can ask questions later on if anything is unclear.
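Here is a minimal Python sketch that pairs the confidence-interval check with the effect-size bands quoted above. The thresholds follow the slide's rule of thumb as described in the talk; the function name and the wording of the verdicts are illustrative assumptions, not anything taken from the slides.

```python
def interpret_ratio(estimate, ci_low, ci_high):
    """Interpret a relative risk or odds ratio with its 95% confidence interval,
    using the rough rules of thumb quoted in the talk (illustrative only)."""
    if ci_low <= 1 <= ci_high:
        significance = "not statistically significant (the CI includes 1)"
    else:
        significance = "statistically significant (the CI excludes 1)"

    # The effect-size bands in the talk are stated for ratios above 1; a ratio below 1
    # points towards a protective association and is judged on its magnitude instead.
    if estimate < 1:
        importance = "protective direction: judge the size of the reduction separately"
    elif estimate > 4:
        importance = "likely clinically important"
    elif estimate > 2:
        importance = "possibly important: look into it more closely"
    elif estimate > 1.5:
        importance = "possibly important: more studies needed"
    else:
        importance = "probably an association, but unlikely to be clinically important"
    return significance, importance


# The four worked examples from the talk:
for est, lo, hi in [(1.4, 1.2, 1.8), (3.9, 3.6, 4.2), (0.5, 0.3, 1.3), (1.2, 0.9, 1.4)]:
    print(est, "->", *interpret_ratio(est, lo, hi))
```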
Right. I don't know whether you've ever picked up a newspaper or a research article and noticed that there are a lot of contradictory findings in the literature. Some people joke about it as spinning a wheel: you spin the first wheel and it points to coffee, you spin the next wheel and it lands on depression, you spin the next one and it lands on twins, so "coffee causes depression in twins". Spin again and you get "smoking causes hypothermia in rats", or "in seven out of ten women". Sometimes it feels as though findings happen by chance: one study says coffee causes depression in twins, and another study says coffee does not cause depression in twins. So is it fake news, or what is happening? There was actually a study where the same dataset on soccer players was given to 29 research teams. The question was whether referees are more likely to give red cards to dark-skinned players. Each of the 29 teams had exactly the same data but used different statistical methods, and they found different relationships between skin color and red cards. If you look at this graph, the gray circles show non-significant results and the green ones show significant results. With the same data, some teams found significant results and some did not, and the effect sizes varied: most estimates fell between "equally likely" and about "twice as likely", while one team at the top found dark-skinned players were almost three times as likely to receive a red card. So the question is: how come the same dataset gave different results when different teams did the analysis? This is where we talk about the various explanations. There are several reasons why one study might show statistical significance, or a large effect, when another does not. One reason is chance, a random error, which we'll talk about; it could be due to bias; and it could also be due to something called confounding.

All right, chance. As I mentioned, one of the things that can happen in research is that a statistically significant finding arises purely by chance, or a lack of statistical significance is itself due to chance.
To examine statistical significance, as we've already said, you look at the 95% confidence interval (does it contain one or not?) and you can also look at the p-values. Here is a figure from a paper; the estimates marked with an asterisk are the significant ones, just to give you a mental picture of what I'm saying. Anything that crosses this line of one (can you see this line?) is not a significant predictor of whether someone has a publication or not. For example, for clinical students, the first asterisk, the effect size is 4.78, meaning clinical students are 4.78 times as likely to have a publication as preclinical students, and one is not in the confidence interval. If you look at women, at the bottom of the graph, the estimate is 0.53, which is less than one, so women are less likely than men to have publications, and it is significant because the interval, 0.3 to 0.85, does not contain one and does not cross the line. That is how you decide whether what you're seeing is significant. Then look at Russell Group universities: the effect size is 2.06, and you might think, wow, that's a great effect size, 2.06 is quite good, but it could be due to chance, because the interval crosses the line of one, so we can't say it is significant. That is how you assess whether a result could be due to chance.

In research there are two types of random error: type I errors, which are false positives, and type II errors. A type I error is where you think you have found something statistically significant, but the finding is actually untrue. I'm sure many of you are aware that we use a p-value threshold of less than 0.05. Because that threshold is universally accepted, if you are trying to decide whether X and Y are associated and you run that test 20 times, about one in 20 of those tests will come out statistically significant purely by chance, simply because we use one in 20 as the significance level. There is also something quite important in research culture: publish or perish. If you don't publish, your research career will not go far, so many people push to publish their work, and when you look at the literature, most of what you see is positive findings, because people are not usually very interested in negative findings. Some researchers could therefore do some manipulation: throw many variables into the analysis and try to force a significant result, because, as I mentioned, if you test 20 variables, roughly one of them will probably come out significant by chance. This is why it is important, when you are reading any scientific paper, to check that the authors described their methods and analysis a priori, meaning they described their study protocol before doing the work, before they even collected the data, saying "look, this is how we are going to do our analysis", because if they didn't, they could simply be fishing for significance. Another thing you can do to prevent this type of type I error is something called a Bonferroni correction, and I'll give you an example. This is a paper in which the researchers were looking at the factors that explained whether a student obtained a successful research grant or not.
If you count all the variables on the Y axis, there are 18 variables tested in this analysis, and when you read the full paper, it also ran a second analysis, for the number of research grants applied for, using the same 18 variables, so in total 36 variables were tested in this study. This page is quite crowded, but if you look at the legend of figure two, you'll see that the asterisked estimates are the ones that were significant at a corrected significance level of 0.05 divided by 36. They divided the usual significance level of 0.05 by the total number of variables tested, which gives about 0.0014, so any p-value that is not less than 0.0014 is not considered significant in this study. Now, remember I said that if a variable's interval does not cross the vertical line of one, it is significant. If you look at this graph at mixed race, the third variable, you can see that it doesn't cross one; in fact the odds ratio is 7.80 and the confidence interval is 1.17 to 51.8. That is a very wide confidence interval, by the way, probably due to the small sample size: the larger your sample size, the more precise and narrow the confidence interval, and the more confident you can be in the estimate. And if you look at the text just to the left of the graph, it says that multiple regression analysis showed that gender (p < 0.01), ethnicity (p = 0.034) and the number of research projects conducted (p < 0.001) were independent predictors of the odds of securing a research grant; however, only gender and the number of research projects conducted remained significant after Bonferroni correction. That p-value of 0.034 is higher than the corrected threshold, which is why it was rejected. So the Bonferroni correction helps us prevent false positives; that is what I am trying to drive at. Some researchers think it is too strict, because you might mistakenly reject true associations as well. I have seen methodological literature arguing that if your protocol specifically pre-states which things you are going to assess, you don't have to use a Bonferroni correction, but if you didn't write a protocol before doing your project, then it is wise to use it, because it limits the risk of type I errors, that random one-in-twenty significant finding.
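The arithmetic behind this is small enough to show in a few lines of Python. This is an illustrative sketch only, and the two p-values at the end are made-up examples rather than the exact figures from the grant paper discussed.

```python
alpha = 0.05

# If you run 20 independent tests of true null hypotheses at alpha = 0.05, the chance
# of at least one false positive is about 64%: the "one in twenty tests will look
# significant by chance" problem mentioned in the talk.
print(round(1 - (1 - alpha) ** 20, 2))        # ~0.64

# Bonferroni correction: divide the significance level by the number of tests run.
n_tests = 36                                  # the grant paper tested 36 variables in total
corrected_alpha = alpha / n_tests
print(round(corrected_alpha, 4))              # ~0.0014

# A p-value then only counts as significant if it beats the corrected threshold.
# Illustrative p-values, not the exact figures from the paper discussed.
for name, p in {"variable_A": 0.0005, "variable_B": 0.034}.items():
    verdict = "significant" if p < corrected_alpha else "not significant after correction"
    print(name, verdict)
```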
The other type of error in research is the type II error, the false negative. When I was in medical school I used to confuse type I and type II errors; the way I remember it is that a type I error has one "false" in it, a false positive, whereas a type II error is a false negative, and a negative is itself a kind of false, so that is "false" twice: type two. A type II error is when you find no association between two variables, but that finding is untrue, and it usually arises because the sample size is too small to detect a difference. This is where power calculations come in, to determine your sample size. In cohort studies you can decrease the risk of a type II error by increasing the number of people you recruit into the study, or by increasing the follow-up period, because then more outcomes will have occurred by the end. In case-control studies you can decrease the risk of a type II error by recruiting more participants, typically more controls for each case. One more thing: your power calculation should also be done a priori, because if you go into your database, initially pull out 100 people for your study, don't find significance, think "oh, I can't publish this", and go back in for 200, then 300 more until you do find a significant result, that is bad research. So this is the importance of writing study protocols, and when you are critically appraising a paper, check whether the authors actually wrote and published a protocol before doing the study.
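As an illustration of what an a priori sample-size (power) calculation can look like, here is a sketch using the standard normal-approximation formula for comparing two proportions, written with only the Python standard library. The stroke rates plugged in at the end are hypothetical planning numbers, not figures from any study mentioned in the talk.

```python
import math
from statistics import NormalDist


def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate participants needed per group to detect a difference between two
    proportions (standard normal-approximation formula). A sketch of the kind of
    a priori power calculation the talk recommends, not any particular study's method."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = NormalDist().inv_cdf(power)            # critical value for the desired power
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)    # sum of the two binomial variances
    return math.ceil((z_alpha + z_beta) ** 2 * variance_sum / (p1 - p2) ** 2)


# Hypothetical planning numbers: expecting a 7% stroke rate in coffee drinkers versus
# 5% in non-drinkers, with a two-sided 5% significance level and 80% power.
print(sample_size_two_proportions(0.07, 0.05))      # roughly 2,200 per group
```

The point of the example is the one made in the talk: the rarer the outcome or the smaller the expected difference, the more participants you need, and that number should be worked out before the data are collected.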
Now, remember we covered chance. Do we have enough time? Yes, we have time. Another thing that can cause erroneous results in research is bias. Bias is any systematic error in your study design that leads to an incorrect estimate of the association between the exposure and the outcome. There are different types of bias, especially in observational studies. One of them is selection bias, which arises from the way you identify your study population. Take a case-control study looking at whether smoking is associated with lung cancer. Ideally, you should pick your participants from the general population: randomly pick people with lung cancer and people without lung cancer, your cases and controls, then check whether they have smoked and do your analysis. The wrong way would be to go to the hospital, which is not the general population (these are already sick people), pick people with lung cancer and people without, and then find out whether they smoked. There was actually a paper in the past showing that hospital patients who do not have lung cancer are more exposed to smoking: there are more smokers among non-lung-cancer patients in hospital, and that can distort your calculations. So ideally you recruit from the general population rather than from a hospital or a tertiary centre.

What about cohort studies; how can there be selection bias there? The risk is lower in a cohort study, because you start with the exposure and follow people up without knowing the outcome, so there is less scope for manipulating the selection of patients. But as clinicians, subconsciously or intentionally, we select patients for certain procedures. For example, say a patient comes in with a chronic subdural haematoma. You look at the patient, their performance and functional status, their comorbidities, and you ask yourself, should I operate on this patient or not? Perhaps you operate on the fitter patients, the ones more fit for surgery; that becomes the surgical group, and the patients you manage conservatively, with observation, are the ones who are not fit for surgery. Now say you want to do a cohort study that follows these patients to determine who has better outcomes in terms of disability or quality of life. It is likely that the surgical group will have better outcomes, because they were already fitter before the surgery anyway. That is how selection bias can be introduced into cohort studies, and it can be reduced by randomization, which we'll talk about later.

Another form of bias in observational studies is information bias. This is an error in the way you measure information on the outcome or the exposure. One type of information bias is recall bias, and it is a big problem in case-control studies. Say you are comparing lung cancer patients with non-lung-cancer patients and you ask the cancer patients, have you ever smoked? Because they have lung cancer, they are more likely to remember that they smoked in the past, whereas someone without lung cancer has probably never had to dwell on the question and may not remember their past smoking habits. That is an extreme example, but the point I am driving at is that people remember things differently depending on their circumstances, depending on whether they are a case or a control. Interviewer bias is when you, as the researcher conducting the interviews, know which group people are in, say surgery versus conservative management, and you are assessing quality of life. You might ask the conservatively managed patient, "are you sure your quality of life hasn't improved?", or, when a surgical patient says their quality of life has not improved, you keep probing, "are you sure?", because you expect it to have improved and you are trying to confirm your own hypothesis. That is interviewer bias. Misclassification is the result of recall bias and interviewer bias: it can lead to misclassification of the exposure, for example people who smoked in the past but, because they do not have lung cancer, do not recall it, or of the outcome, for example patients who had surgery and, because of the way you interviewed them, end up recorded as having better outcomes than they really did. So those are the main types of bias.

And finally, confounding is a big problem in observational studies. What is a confounder? In this example we are looking at coffee drinking and heart disease: we do a study and we find an association between coffee drinking and developing coronary heart disease.
But is it the coffee that actually causes the myocardial infarction, or is there a factor linking both of them? In this example, the factor linking them could be cigarette smoking: it could be that people who drink coffee tend to smoke, or that people who smoke tend to drink coffee, so cigarette smoking is associated with coffee drinking, and cigarette smoking is also associated with myocardial infarction. It is the cigarette smoking that is actually driving the relationship. That is the problem in observational studies: you can't really tell whether one variable is causing the other, only that there is an association, because there may be confounding variables that you know about, in this case cigarette smoking, and there may also be confounding variables that you do not know about.

So how do we deal with confounding? One way is through the design of the observational study. You can do something called restriction: you restrict your study to patients who don't smoke, because you know smoking can cause myocardial infarction, so you say, "I want to check the effect of drinking coffee, and I am not going to include patients who smoke." The problem with restriction is that you can't generalize your findings as widely; you can only say they apply to people who don't smoke. Another option is matching: you could have a coffee group and a no-coffee group and make sure there are equal numbers of smokers in both groups, because then any relationship you see cannot be explained by smoking, since both groups contain the same amount of it. Randomization is another option, and we'll go into randomized controlled trials later on. You can also deal with confounding at the analysis stage. In your protocol you could say, "I know smoking influences the risk of developing myocardial infarction, so to see the effect of coffee I am going to stratify my analysis: one analysis for smokers and one for non-smokers", separate analyses for each group. And finally, you can use multivariable techniques: in a regression analysis you can control, or adjust, for smoking habits.
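To show what a stratified analysis does, here is a small Python sketch with entirely hypothetical counts, constructed so that coffee has no effect within either smoking stratum even though the crude analysis suggests it roughly doubles the risk. None of these numbers come from the talk.

```python
def relative_risk(a, b, c, d):
    """Risk of the outcome in the exposed (a of a+b) divided by the unexposed (c of c+d)."""
    return (a / (a + b)) / (c / (c + d))


# Hypothetical counts for coffee drinking and myocardial infarction (MI), stratified by
# smoking status: within each stratum coffee has no effect, but smokers drink far more coffee.
strata = {
    "smokers":     dict(a=32, b=128, c=8, d=32),    # coffee MI / no MI, no-coffee MI / no MI
    "non-smokers": dict(a=2,  b=38,  c=8, d=152),
}

# Crude (unadjusted) relative risk, pooling everyone together.
crude = relative_risk(
    sum(s["a"] for s in strata.values()), sum(s["b"] for s in strata.values()),
    sum(s["c"] for s in strata.values()), sum(s["d"] for s in strata.values()),
)
print("crude RR:", round(crude, 2))                 # ~2.1, so coffee looks harmful

# Stratum-specific relative risks: both sit at 1.0, so the apparent coffee effect in the
# crude analysis is driven by the confounder, smoking.
for name, counts in strata.items():
    print(name, "RR:", round(relative_risk(**counts), 2))
```

In a published paper the same idea is usually taken further with a regression model that adjusts for the confounder, as the talk mentions.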
So, we've talked about observational studies. They are good in certain instances, as I mentioned: if your research question cannot ethically be answered with a randomized controlled trial, if your outcome is very rare, or if you need a very long follow-up period. However, as we've discussed, there is a risk of bias, of random error, and of confounding in observational studies. The gold standard is the randomized controlled trial, where you compare, for example, drug X versus drug Y, and you randomly allocate patients to the two groups. In the next session we'll talk about randomization and the proper way of doing it.

Remember I said that with an observational study you cannot detect causation; the randomized controlled trial is the gold-standard design for detecting causation between two variables, because if you randomize people properly, it is highly likely that the groups will be balanced on all variables, both those you know about and those you don't, except for the intervention. The only thing that should differ is that one group receives drug X and the other receives drug Y, provided you have randomized properly. And, as I mentioned, the process of randomization decreases the risk of selection bias and confounding. So this is what we'll be doing next time: we'll go through some finer points of randomization and then go through the appraisal algorithm that I use; I've used it in some academic interviews, which were scored highly.

When you critically appraise a paper, you start with the basic information: what is the title of the paper, who are the authors, which country are they from, what is the impact factor of the journal, what was the study design, and so on. Then you go to the research question and the main findings; we'll go into more detail about this later. Then you look for systematic bias: look at the methods, is there any evidence of bias? Once again, we'll go through all these types of bias in the next session. Then you move on to the statistical analysis: was it defined a priori, decided before the study, or post hoc, after the study had been done? Remember we talked about power calculations: did they calculate their sample size? Did they adjust for any confounding factors in their analysis? And did they correct for multiple testing? Remember the Bonferroni correction: with multiple testing you run the risk of type I errors, false positives, and the more tests you do, the more likely you are to come up with a significant variable by chance. Then you come to the conclusions: do you agree with the authors' conclusions based on your appraisal? There is also something called external validity: can you apply the study's findings to the patient in front of you? And the number needed to treat. That is what we'll cover in our next session; today I just wanted you to understand study designs and some of the biases, and also to appreciate that you cannot always do a randomized controlled trial. So that is the end of the session. Let me know if you have any questions, and you can contact me by email or LinkedIn. Thank you very much.

Thank you so much for the insightful presentation. We are just at the top of the hour, and we want to take one or two questions before we end the webinar. The first question: what is the difference between case-control, cohort and retrospective cohort studies? OK, we'll go through this again; I'll just answer this question from the chat. Am I still sharing my screen? Yes, I am. Right, let me use this other diagram; I think it is better.
A cohort study, from the name, starts with a group of people, the cohort. For example, you want to check whether drinking coffee increases the risk of stroke, so you look at your population, find out who drinks coffee and who doesn't, follow them over a period of time, and see whether they end up having a stroke or not. That is a cohort study: you start with the exposure, which is coffee in this instance, and then follow up to find out whether there is an outcome at the end. As I mentioned, a cohort study can be prospective: I could decide today that I want to do this study, look at people who currently drink coffee and people who don't, and follow them for maybe ten years, out to 2034; that would be a prospective cohort study. Or I could decide to use retrospective data: look at people who were drinking coffee versus not drinking coffee in 2010 and examine their follow-up over ten years, to 2020, to see whether any of them developed a stroke. That would be a retrospective cohort study. The key is that you start with the exposure and then follow up to find out whether they had the outcome. A case-control study is different, and it is right there in the name: case-control. You have the cases and you have the controls, and you start with them. The cases would be those with stroke (you find a group of patients who have had a stroke) and the controls would be those without stroke. You then look back, maybe ten years, and ask: were these people drinking a lot of coffee in the past or not? So you are walking backwards from the disease to the exposure to determine the relationship between the exposure and the outcome. And a cross-sectional study is when you just assess people now. Say I want to assess the association between your stage of medical training and your chances of having a publication: I would simply ask you right now, through a questionnaire, what stage are you at, clinical or preclinical, and do you have a publication or not. I ask both questions at the same time; I am not following you up from year one to the end of your preclinical training to see whether you get a publication. It is right now: do you have a publication, and are you a preclinical or clinical student? That is a cross-sectional study: there is no follow-up, you measure the exposure and the outcome at the same time.

Right, the second question is asking for my suggestions on the mislabeling project and how to take it to publication. OK. So I mentioned that in neurosurgery this has already been done, so if you are thinking of publishing a neurosurgery version of this paper there is probably no point, because it was published recently, I think in 2023. If you look at their methods: following a literature search, they reviewed 125 manuscripts claiming to be case-control studies.
They then checked whether those papers actually were case-control studies, using a checklist, and they found that 63% of them were labeled inappropriately. This is a problem across various specialties; it has been done in gynecology as well. So what I'm saying is, if it hasn't been done in a specialty you're interested in, say urology, and this kind of mislabeling of case-control studies hasn't been examined there, that could be a research project for you: design it, look into the literature, find out the methods used in these existing papers and see whether you can transfer them to urology. That's just an idea; I don't know whether it has been done in every specialty, but those were some examples. And as I said, I have committed this crime as well: I mistakenly called my own paper, which was a retrospective study, a case-control study, and that was only about three years ago. So it is a common problem, and as I mentioned, the study design affects how you interpret your data and also your data analysis. Any more questions?

The next question: if the odds ratio is less than one, does that not mean the exposure is protective? Right, so it depends on what you are looking at. Let's say the exposure is smoker versus non-smoker and the outcome is the risk of developing lung cancer; this is just an example, because we all know that smoking increases the risk of cancer. If the odds ratio in your analysis were less than one, it would mean smoking appeared protective, because the smoking group would have lower odds of developing lung cancer. Or take another example: the effect of smoking on developing flare-ups in ulcerative colitis. I can't remember whether smoking is protective in ulcerative colitis or in Crohn's, but let's just assume it is protective in ulcerative colitis. So your exposure would be smokers versus non-smokers and your outcome would be the risk of developing flare-ups in ulcerative colitis. I would guess that if you compared smokers and non-smokers, the odds ratio for developing flare-ups would be less than one, because smoking is protective. Don't quote me on that, by the way; I haven't revised ulcerative colitis in years, but I think smoking is protective. But yes, an odds ratio of less than one suggests the exposure is protective for that outcome.

OK, so the next question: with regards to cohort studies, is follow-up done for a specific period, or does follow-up continue until everyone has the outcome? Usually, because a study can't run forever, you decide at the design stage, OK, we are following these patients up for ten years. You don't wait for everyone to develop the outcome; at the end of the ten years you look at those who have had the outcome and those who have not. You don't wait until everyone has the condition: firstly, that would be too expensive, and secondly, you would probably end up following up for years and years and never publish your paper. So yes, you need to set a time limit on your follow-up period.
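Picking up the odds-ratio question just discussed, before the next example: a minimal sketch, with made-up counts, of a 2x2 table where the odds ratio comes out below one, meaning the exposure looks protective in those hypothetical data.

```python
# Minimal sketch of an odds ratio from a 2x2 table, with made-up counts chosen
# so the odds ratio comes out below 1 (exposure associated with fewer events).

#                                       flare-up   no flare-up
exposed_event, exposed_no_event = 15, 85           # hypothetical smokers
unexposed_event, unexposed_no_event = 30, 70       # hypothetical non-smokers

odds_exposed = exposed_event / exposed_no_event
odds_unexposed = unexposed_event / unexposed_no_event
odds_ratio = odds_exposed / odds_unexposed

print(f"Odds ratio: {odds_ratio:.2f}")
# An odds ratio < 1 means the odds of the outcome are lower in the exposed group,
# i.e. the exposure looks protective in these (hypothetical) data.
```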
So I remember I did a spine project, and in spine surgery the usual optimal follow-up period for outcomes like back pain and leg pain is 12 months. So I looked at the relationship between something called the neutrophil ratio and whether it predicted outcomes at the end of that follow-up period. So yes, you need to look at a specific follow-up period.

The next question: likewise, in a case-control study, is the retrospection only to find out whether the exposure is present or not? For this final question, I think this is where the importance of writing a good study protocol comes in. When you are defining your exposure, you need to describe exactly what you mean by it, and that is one of the places where information bias can creep in. You could say that exposure in this instance is defined as having drunk coffee at least five times a week for the past five years, and that anyone who doesn't meet that definition belongs to the no-coffee group. So it depends on your definition. Other researchers might say that as long as you've drunk any coffee in the last five years, you count as a coffee drinker. So the definition of the exposure depends on your research question and on your research design. That's why, when you are critically appraising a paper, you need to look at the definition of the exposure and also at the definition of the outcome. I don't know if that answers your question, but yes, you just need to define it. It could be a combination of both: it could be simply whether the exposure, the smoking or the coffee drinking, is present or not, or you could look at the duration, for example whether the person has smoked consistently for a certain number of years; then that would be your exposure. It just depends on what you want.

Someone was asking whether anyone is interested in working together to write articles for publication. I think the best thing to do is to contact the lead of your specialty group. If you are in the neurosurgery group, you could contact the research lead, which is Jeremiah, or Joe, or you could contact Tommy, who is the lead of the neurosurgery group, or myself, the assistant lead. You can contact us on WhatsApp if you have any research ideas for publication that you need support with. We have also advertised some research projects on the group chat which you can apply for; the deadline is Friday. If it's another specialty, I don't know who the leads of the other specialties are, but you can post in your specialty group chat and they will respond. OK, any other questions?

Sorry, I logged off at some point. You've answered the questions that came in, and I can't find any further questions. It has been good to have you today, and thank you so much for your time and for the very clear explanation of how to critically appraise a clinical research paper. This is just the first part of the two-part series.
Next time we will have the second part, which will conclude this research series. The series is a primer for journal club meetings: in the neurosurgery group we want to start holding journal club meetings this year, which is why this webinar is coming at this time. So thank you so much for coming. If our assistant lead has anything to add, we can go on.

No, nothing to add, apart from this: for the next session, I don't know if there's a way to share the article we'll be discussing with the attendees beforehand, just so people can prepare; it makes things easier. In the next session we're actually going to go through how to critically appraise papers properly, using the approach I've seen in the literature and the one I've been using in interviews. That's all. We'll give more information, so look out for the advertisements for when the next session will be. All right, thank you very much for attending, listening and participating. Sorry I couldn't really see all the chat messages during the session, but thank you very much.

All right. Please don't forget to fill out the feedback form, because we are giving certificates at the end of the series. So thank you so much for your time, and have a good day.