
How to critically appraise a clinical research paper: Part 2



This on-demand teaching session breaks down the complex concept of critically appraising studies for medical professionals in part two of the series. Starting with the hierarchy of evidence and progressing to examine observational studies and the benefits and shortcomings of randomized controlled trials, the session provides a solid grounding to help professionals better interpret studies and apply them to patient care. A detailed critical analysis of an emergency medicine/cardiology case, complete with an explanation of bias and confounding in research, will offer invaluable practice to consolidate learning. The session also highlights how funding can potentially impact results, offering a more holistic understanding of the factors influencing medical research. This learning opportunity is unmissable for professionals seeking to advance their capabilities in interpreting and applying research.
Generated by MedBot


This two-part series delves into the nuances of critically appraising clinical research papers. It offers valuable insights for medical professionals at all levels of training who wish to enhance their skills in analyzing research articles.

Learning objectives

1. Identify and understand the various types of clinical research studies (i.e. observational studies, randomized controlled trials)
2. Analyze and interpret the strengths, limitations, and potential biases inherent to study design
3. Proficiently critique and appraise clinical research papers, focusing on critical details like the biodata, research questions, and main findings
4. Evaluate the ethical considerations involved in clinical trial design and execution
5. Apply these critical appraisal skills to a specific case example from the fields of emergency medicine and cardiology
Generated by MedBot



Computer generated transcript

The following transcript was generated automatically from the content and has not been checked or corrected manually.

Good evening, and thank you for coming. This is part two of our series on how to critically appraise a clinical research paper. In this session we will look at the types of studies we have, the steps and guidelines you can follow, and some examples to practise on, and we will dissect a paper together so you can see how it is done. This time we are going to go more in depth in this part two of the series. Thank you for joining the call. In this session our speaker will be delivering part two, so please have your pens and paper ready to write down everything that will be discussed today, because this is the last session and I would like you to make the best use of it. Thank you for listening. Thank you, Jeremiah. As Jeremiah said, I will be doing part two on appraising a research paper. It will be in the form of a webinar; a session like this is quite difficult to make interactive on this platform, but I will try my best. All right, let's start. Last time we talked about the hierarchy of evidence, starting from case reports, opinion papers and letters at the bottom, and going all the way up to meta-analyses and systematic reviews. We talked about how, among original studies, the randomized controlled trial is the strongest. However, sometimes it is not possible to conduct a randomized controlled trial: maybe the study would not be ethical, or maybe the outcome is so rare that you cannot conduct a trial for it.
In that case, we do observational studies, and we went through the types of observational studies, which include cohort studies, case-control studies and cross-sectional studies, and looked at the advantages and disadvantages of each. Observational studies only show correlation; they don't show causation. Randomized controlled trials are the only design where you can say X causes Y; you can't say that with observational studies. The basic design of a randomized controlled trial is that you find a group of patients and randomly allocate them either to the intervention group, for example drug X, or to the control group, which might be drug W or standard care. The key point about randomization is that patients should have an equal chance of being allocated to either group, which reduces the risk of bias. As I mentioned earlier, randomized controlled trials are the gold-standard design for establishing causation between two variables, and the process of randomization limits selection bias and also minimizes possible confounding between groups. I will explain this in more detail as we go on. Now, to critically appraise papers: there are various tools you could use for critical appraisal, but there is a method I learned from books and from practice which makes a lot of sense to me. The first thing you look at when you approach a research paper is the biodata (and I will explain what that means), the research question, and the main findings of the paper. After this, you look for systematic bias in the research and in the study design, so you go through selection bias, performance bias, observer bias, attrition bias and confounding. After looking at systematic biases, you then look at the statistical analysis that was conducted in the study.
We will talk about this as well; then you finally conclude with your thoughts about the study, and whether you have any ethical concerns about it. OK. So today, although we are from the neurosurgery section, I decided not to use a neurosurgery paper for this critical appraisal example. We will instead be using an emergency medicine / cardiology study: expedited transfer to a cardiac arrest centre for non-ST-elevation out-of-hospital cardiac arrest, a study conducted in the UK and published in the Lancet. Now we'll try to break down that paper for the critical appraisal. As I mentioned, before you start a critical appraisal you need to gain basic information about the paper: the journal name and the impact factor of the journal. But I want you to remember that just because a paper is published in a high-impact journal doesn't mean the study doesn't have flaws. There is a very well-known example: Andrew Wakefield, a doctor in the UK, published a paper many years ago in the Lancet, which has a very high impact factor. He claimed from his study that the measles, mumps and rubella vaccine causes autism in children. He had a small sample size, and there were some flaws in his study design. Yet this was published in the Lancet, and it caused a lot of problems with vaccine uptake in the UK and worldwide. The paper has since been retracted, and he was struck off by the General Medical Council in the UK for research fraud. I'm just trying to drive home the point that although a good journal will make you think the paper is probably of good quality, that is not always the case. Once you have noted which journal it was published in, you then look at the study design and then the funding information. Research is not cheap.
A lot of research is funded. Funding could come from government, from charitable organizations, or even from companies. So when you are looking at funding, ask: is there a potential conflict of interest? For example, if a study is funded by the NIHR or by a charitable organization with no commercial interest, it is likely that the researchers will be impartial in reporting their findings. But let's assume you are doing a study looking at whether using a specific hand wash prevents the risk of MRSA or Clostridioides difficile on the ward, and it is sponsored by a hand-wash company. Even though you might try your best to be ethical, there is a chance that, because you are funded by that organization, some bias creeps into the findings. So it is very important to look at the funding information in studies. Right. So this is the first thing we do in a critical appraisal. From the study, we can see that this is a parallel, multicenter, open-label, randomized superiority trial conducted in London, UK, funded by the British Heart Foundation and published in the Lancet in 2023. Every word there actually means something, and from this statement you need to start thinking about the strengths and weaknesses of each element of the study design. I might explain this in a very basic way; that is not to insult anyone, I just don't know everyone's level of understanding of critical appraisal or study design, so I'll try to keep it quite basic. "Parallel" means that, for example, you put some patients in the drug A group and other patients in the drug B group, and throughout the study those patients stay on drug A and the others stay on drug B; there is no mixing. They remain in the same study groups.
That is a parallel design. There are other studies called crossover studies, where you start some patients on drug A and some on drug B, and then at a certain point you switch them over, so the drug A patients move to drug B and the drug B patients move to drug A. A parallel study is not like that. The fact that this is a multicenter study is a strength. Multicenter studies are usually more generalizable: they tend to reflect the true population of what you are looking at. For example, if you conduct this study in just one small city in the UK, the findings might not apply to a rural area or to a place that is not the main city. Or, for example, if we conduct a study in Lagos, the findings might not be useful or applicable in a different state where people have different habits, different cultures, different health-seeking behaviours. That's why a multicenter study helps. This multicenter study was conducted across London, so we can say the findings are generalizable within London, but not necessarily within the whole UK, because there are differences between cities. "Open label" we will talk about later; it has to do with blinding. "Randomized" we have already mentioned, and we will come back to randomization. As for "superiority trial": when we conduct trials there are different terminologies. We have superiority trials, non-inferiority trials and equivalence trials. A superiority trial is trying to prove that drug A is superior to drug B; that is the alternative hypothesis. A non-inferiority trial is trying to prove that drug A is not inferior to drug B. An equivalence trial is trying to show that there is no difference between drug A and drug B.
It might sound very simple, but it actually has an impact on how you analyse your results and your data. The fact that this is a trial in cardiac arrest patients also raises ethical questions: you want to think about what the ethical considerations in this study could be. We will come back to the superiority trial point later. Anyway, these are the results of the study. The outcome measures are 30-day mortality and three-month mortality, and the study also looks at the modified Rankin scale, which is a disability scale used for stroke patients, and at quality of life. The study also does some subgroup analyses for 30-day mortality, but we will go into this in more detail during the general review of the paper. Right. So we have done the biodata and the study design. Now we move to the research question, and a good way to approach a research question is something called PICO. PICO stands for Population, Intervention, Comparison and Outcome. When you are looking at the population, you need to decide: are the patients included in this study close to what we have in real life? For example, let's say you are doing a study looking at the role of a drug in managing angina symptoms, and in your study you only recruited patients who don't smoke, don't drink alcohol and exercise all the time, and you excluded patients who smoke, drink alcohol or don't exercise. That population is not representative of the true population in the real world, so I wouldn't really trust the results of that study if it doesn't have a representative population.
So whenever you are looking at a paper, when they describe the population, look at the inclusion and exclusion criteria and use your judgement: is this actually valid? Why are they including these patients, and why are they excluding those? As I mentioned earlier, generalizability comes into the population as well: can the inclusion and exclusion criteria be generalized to the true population? Then you move on to the intervention. For the intervention, you have to look at the study, usually in the introduction or the methods section: do they have a rationale for choosing the intervention? That should explain why they chose a specific intervention and, if it is a drug study, why they chose a specific dose. Because if you cannot explain why you decided to do the study, it means I could just wake up one day and say, "drinking pure water cures cancer, let's go and test pure water on people." There is no explanation for it; you are just plucking a hypothesis from thin air. So it's important to look at the introduction section for them to clarify the rationale. Then comparison: when studies are done, the intervention we are trying to test is usually compared against another group, so make sure that this comparison group, the control group, is adequate and ethical. Control groups sometimes receive a placebo, or they may simply be given the standard therapy that already exists. In the study we are looking at today, the standard practice in the UK is that when patients achieve return of spontaneous circulation after cardiac arrest, the ambulance takes them to the nearest emergency department. That is standard practice in the UK.
But this study is challenging that convention by taking them instead to specialized cardiac arrest centres. So we are comparing this new intervention against what already exists. I think it would be unethical to compare the specialized cardiac centre against no treatment at all, because you would not be giving the control group any chance of benefit. That's why you look closely at the placebo or control group and decide: is this ethical, is it safe, and does it actually make sense? Next, look at the outcomes. Are the outcomes clinically relevant, and are they clearly defined? What do I mean by this? Let's assume you are doing a study looking at a rehabilitation programme for stroke patients. For those patients, the most relevant outcomes might be regaining hand function and grip strength, and having less disability. But suppose the researchers decided instead to do a scan and measure muscle bulk on imaging as the outcome measure. You can do that, but is it actually clinically relevant to the patient? And what do I mean by clearly defined? Let's say mortality is our outcome measure: have you actually defined what you mean by mortality? Is it mortality specific to the condition you are treating, or mortality from any cause? I will come back to that shortly.
Apart from the effectiveness of an intervention, it is important that studies include safety outcomes. For example, say we make a new drug, like aspirin, for secondary prevention of myocardial infarction, but that drug has a side effect of bleeding. Apart from your primary outcome measure, which is preventing myocardial infarction, you also need safety outcome measures, for example the incidence of bleeding in each group. Because if your drug is good at preventing myocardial infarction but it causes a lot of bleeding events, and those cause a lot of deaths in the group, you can't really say the drug is effective. Another thing is that a lot of studies now report patient-reported outcomes. What do I mean by that? Patient-reported outcomes are outcomes that the patient reports themselves. For example, a patient might complete a quality-of-life scale: "on a scale of 0 to 100, can you tell us your quality of life right now?" or "on a scale of 0 to 100, can you tell me the severity of your pain or disability?" A lot of studies use these because they are very relevant to patients. It's good to have clinician-reported outcomes, but what matters most when we intervene is what patients themselves think matters to them. That's why we need patient-reported outcomes. But there are a lot of patient-reported outcome measures out there, so you need to check: has this outcome measure been validated? Because patient-reported outcomes are subjective; a patient might be in pain, for example.
Patient A's pain threshold is not the same as patient B's, so there is a degree of subjectivity, which is why the measure needs to have been validated by previous studies. You also have to check whether the follow-up period is actually adequate for the study. I remember looking at a paper a few months ago comparing minimally invasive surgery against radical surgery for early-stage rectal cancer. The study only compared mortality and recurrence in this patient group over a period of 12, or perhaps 24, months and didn't find any difference in the outcome measures; survival in both groups was in the 90 percent range. But when you look at rectal cancer, the five-year survival rate for early-stage disease is in the 90 percent range anyway. So maybe the reason no difference was seen between the minimally invasive and radical surgery groups is that the patients were not followed up for long enough. You actually need to check this. In spine surgery, for example, many researchers have come to a consensus that 12 months is usually a suitable follow-up period. So any time you are designing a randomized controlled trial, or any trial, you need to make sure your follow-up period is long enough to pick up differences in outcomes. Right, so going back to our paper, the cardiac arrest paper. The study population involved adult patients with return of spontaneous circulation following a non-ST-elevation out-of-hospital cardiac arrest. I will combine the intervention and comparison here.
The authors compared expedited transfer of these patients to a cardiac arrest centre versus the standard practice in the UK of transferring patients to the nearest emergency department. The primary outcome measure was 30-day mortality, and the secondary outcome measures were three-month mortality, the level of disability at discharge and at three months, and the patients' quality of life. One good thing this study did was to look at all-cause mortality: any death in the trial counts, not just cardiac-related mortality. I think this is good because, if a patient dies, it is hard to decide whether the patient died from complications of their cardiac arrest or from other causes. If you make your outcome measure cardiac-related mortality only, you bring in a lot of subjectivity, and because it is a multicentre study, the various centres might define cardiac-related mortality differently. Another point: we are taking these patients to new centres. What if there is no difference in cardiac-related mortality, but at these new centres a lot of patients are getting pulmonary embolism or pneumonia and dying from those? Using all-cause mortality could reveal that the general care in the new setting is not as good as standard practice. So I think the use of all-cause mortality is a strength of this study, and that's why I said it's important for outcome measures to be clearly defined. All right. We are still on basic information. After stating the research question in PICO format, the next thing is to state the main findings of the study. Last time we talked about interpreting risk ratios, odds ratios and so on.
Remember I told you that for a risk ratio, if the 95% confidence interval includes one, then there is no statistically significant difference between the control and experimental groups. If you look at these results: for 30-day mortality the risk ratio is 1.00 and the 95% confidence interval in brackets includes one, so there is no difference. The same goes for three-month mortality: the interval runs from 0.92 to 1.1, which includes one, so no difference. The mRS score at discharge, 0.76 to 1.32, includes one, so no difference; the mRS score at three months, 0.73 to 1.31, no difference; and the mRS score dichotomized as favourable or not, no difference either. For quality of life they used a mean difference. If we are looking at the difference between group A and group B, we are subtracting: quality of life in one group minus quality of life in the other. If the groups are equal, the difference should be zero. Remember, with a relative risk we are dividing, so the point of no difference is one; with a difference, the point of no difference is zero. This confidence interval of -0.12 to 0.05 includes zero, so there is no significant difference there either. OK. Now, this paper also did some subgroup analyses for all-cause mortality at 30 days. For example, they grouped patients by age: they looked at patients younger than 57 years and checked whether taking them to a cardiac arrest centre favoured them, or whether the standard care of taking them to the nearest emergency department favoured them. Looking at the p-values, results are only significant at p less than 0.05. They found that in patients younger than 57 years, it is better to take them to a cardiac arrest centre.
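The confidence-interval logic above can be sketched in code. This is an illustrative calculation only: the event counts below are made up, not the trial's actual data, and the method shown is the standard textbook log-RR approach.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio with an approximate 95% CI via the log-RR method."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts for illustration (NOT the trial's real numbers):
rr, lo, hi = risk_ratio_ci(63, 100, 62, 100)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# The decision rule from the talk: does the interval include 1?
print("no significant difference" if lo <= 1 <= hi else "significant difference")
```

The same rule with zero as the null value applies to the quality-of-life mean difference described above.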
Those patients have better survival rates, while for those aged 57 to 71 years, taking them to the nearest emergency department is better. If we look at the confidence intervals, you can see they do not include one: for the under-57s it is 0.60 to 0.97, so one is not included, and for 57 to 71 years it is 1.05 to 1.56, so one is not included. That is why those results are statistically significant. So you just need to summarize the main findings: the study didn't find any difference in patient outcomes between the two treatment groups overall, but the patient's age seemed to predict 30-day mortality, with patients younger than 57 years having a lower risk of mortality, and those aged 57 to 71 years having a higher risk, when allocated to the cardiac arrest centre group. Right, so we have done the basic information, the PICO and the main findings. Now we move on to systematic bias. OK. When you look at randomized controlled trials, just because something is a randomized controlled trial does not mean it is a perfect study. People do a lot of questionable things in randomized trials; when you look closely at some study designs, you actually ask yourself how they got through peer review. So there are a couple of biases I will look into and explain: for example selection bias, subversion bias, performance bias and observer bias, and we will touch on confounding as well. We will also talk about how to actually tackle these biases. Right, so randomization. What do we actually mean by randomization?
When you look at papers, examine the randomization process and the method of randomization. Sometimes a computer generates a random code that allocates patients, or you can use random number tables, or you can simply toss a coin: if it shows heads the patient goes into group A, if tails into group B. From the research that has been done on the design of randomized controlled trials, people have come to the agreement that computer-generated randomization is the best. Remember, with coin tossing, if you really want a patient to be in a certain group, you could perhaps toss the coin in a way that lands on heads, so randomization by coin tossing or random number tables can be manipulated. Computer-generated randomization is the best approach. Some so-called randomized controlled trials describe their randomization process as alternation: a patient comes in and goes into group A on drug A, the next patient goes into group B on drug B, the one after that into group A, and so on, alternating. So as a researcher you could possibly manipulate this; it is not ethical, but sometimes you really want a patient to have a particular drug treatment because you are confident it will work for them, and with alternation you could make sure that patient gets that drug. So alternation isn't good. The same goes for using the day of the week, where patients who come to clinic on certain days go into one group and those who come on other days go into the other group.
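The computer-generated approach recommended above can be sketched as follows. This is a minimal illustration using permuted blocks, a common variant; the block size and seed are arbitrary choices for the sketch, not anything from the trial being appraised.

```python
import random

def block_randomization(n_patients, block_size=4, seed=2023):
    """Computer-generated allocation list using permuted blocks.

    Each block contains equal numbers of A and B in a random order,
    so group sizes stay balanced while the sequence remains
    unpredictable to the recruiter (who never holds the list).
    """
    rng = random.Random(seed)  # fixed seed here only for reproducibility
    allocations = []
    while len(allocations) < n_patients:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_patients]

schedule = block_randomization(10)
print(schedule)
print("A:", schedule.count("A"), "B:", schedule.count("B"))
```

Unlike alternation or day-of-week schemes, a recruiter cannot predict the next assignment from the previous ones, which is exactly the property the talk is asking you to check for.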
So once again, you can manipulate that kind of allocation, or allocation by date of birth, and so on. As I've said, even though we are trying to randomize, it is possible for some researchers, sometimes not with bad intentions but because they genuinely want to help the patient by putting them in the group they think will work, to subvert the allocation, that is, to change the allocation for the patient. A way to prevent this subversion, this manipulation of the allocation, is something called allocation concealment. Now, allocation concealment is quite different from blinding, and a lot of people confuse the two. Allocation concealment means that before a patient is recruited into the study, neither the person recruiting them nor the patient themselves knows which group they will be allocated to, and nobody knows the sequence: it is a random allocation process. Allocation concealment happens before you randomize patients; blinding happens after you randomize patients. We will get into blinding soon. So how do you achieve allocation concealment? One way is to have a centralized department. When the patient comes to your office, you call that department, which holds the computer-generated sequence. Because that department cannot see the patient in front of you, they can't tell whether the patient is frail; they know almost nothing about the patient, perhaps only their name, date of birth and gender, and that is the only information they have.
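The centralized-department idea described above might be sketched as below. This is a hypothetical illustration, not the system any real trial used; the class name and arm labels are made up for the example.

```python
import random

class CentralAllocator:
    """Sketch of centralized allocation concealment (hypothetical design).

    The recruiter never sees the allocation sequence: they submit
    minimal identifiers after consent and receive only the assigned arm,
    so they cannot steer a particular patient into a particular group.
    """
    def __init__(self, seed=42):
        self._rng = random.Random(seed)  # sequence held centrally only
        self._log = []                   # audit trail of who went where

    def allocate(self, patient_id):
        # The arm is decided here, out of the recruiter's sight.
        arm = self._rng.choice(["cardiac_centre", "standard_care"])
        self._log.append((patient_id, arm))
        return arm

service = CentralAllocator()
print(service.allocate("patient-001"))
print(service.allocate("patient-002"))
```

The design point is separation of roles: the person who recruits cannot predict or influence the assignment, which is the definition of allocation concealment given in the talk.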
So the person in front of the patient cannot manipulate the allocation, because that central point does not know much about the patient. In a drug trial it could also be the pharmacy that controls it: the pharmacy never sees the patients being given the drug, so they randomly allocate patients to the drugs using the generated codes. Another way of doing it is the process of sequentially numbered sealed envelopes: the envelopes sit in a safe cabinet, you pick out the next envelope, and whatever is inside is where the patient is allocated. But this can still be subject to subversion bias, and I wanted to show you a video about this. [Technical difficulties with the video; the speaker reconnects and shares the screen again.] Sorry for that; I won't play the video, I'll just continue. As I mentioned, one of the other things we can do for allocation concealment is computer-generated codes or sequentially numbered sealed envelopes. But it has been shown that some researchers, when recruiting, can figure out the envelopes, perhaps by holding them up to the light, and work out where the next patient will probably be allocated; some have even used steam.
So they boil water and use the steam coming out of the kettle to unseal the envelope, check the contents, and seal it back. So the use of envelopes is not actually very effective in preventing subversion bias. A way you can detect selection and subversion bias is to look at the baseline table of the paper. When you compare both groups, is there anything that suggests a difference in the populations, say between the cardiac arrest centre group and the standard group? Look at smoking status, for example: are there more smokers in the standard group compared to the cardiac arrest centre group? Most papers with a baseline table include P values to make it easier to spot differences, but some journals don't want P values, so you have to go through the table yourself and decide whether there are differences between the two groups. When I went through this one, there wasn't any major difference between the cardiac arrest centre and standard groups, which told me that the randomization process used in this study was probably quite good. But when you look at the peri-arrest events, something jumped out at me. Close to the end of the table, the second row from the end, is time from arrest to hospital arrival: on average, it took seven minutes longer for patients in the cardiac centre group to reach the centre compared with those in the emergency department group. Remember, in cardiac arrest, time is very important. So could this seven-minute difference be the reason why we didn't find any difference between the cardiac arrest centre group and the standard group?
And you now start thinking: did the investigators account for this difference in transit time when they did the analysis? You can see there's a seven-minute difference between the cardiac arrest centre and standard groups. So this is what the paper says, and this is a summary of what I wrote after considering the risk of selection and subversion bias. This study used a secure online randomization system, i.e. a computer. And when you read the full paper, you find out that before the ambulance takes the patient to the emergency department or to the specialized cardiac centre, they call the central randomization department to say, "We have a patient who has consented; can you randomize them and tell us where to take them?" So it's quite secure, which minimizes subversion bias. And as I mentioned, when you compare the baseline demographics between both groups, they are well matched, which suggests adequate randomization, though as I said, P values would have made it easier for us to assess. And if you remember those graphs we saw for the subgroup analysis, age was one of the factors that influenced survival. In that subgroup analysis they treated age as categories, but in this table they reported age as a mean and standard deviation. If they had presented it in categories, we might have been able to spot differences in proportions, for example whether there were many more older patients in the cardiac centre group. I don't know why they treated age as a continuous variable here when in the subgroup analysis they treated it as a categorical variable. Right. OK. So we've gone over selection bias and subversion bias and how to minimize them.
So just to summarize: to minimize selection and subversion bias, allocation concealment is important, and the best way is for allocation to be done from a central or remote location that no one can manipulate. And for randomization itself, a secure online computer system is the best way, because you can't really influence it: you can influence flipping a coin, but you can't influence what the computer chooses randomly. Right, so remember I told you allocation concealment happens before randomization, when you can't predict the next allocation; you have no idea. Now, after you've randomized the patients, it's possible for the patients to know which group they are in, and the clinicians or investigators might also know which group the patients are allocated to. In that instance we say the study is not blinded, or that it's an open-label study. And the problem with an open-label study is this: suppose we have a new special drug, let's call it drug A, and there's the standard drug that people normally use, and the study is trying to show that drug A is better than the standard drug. If a patient knows which group they are in, it can affect them. It's called resentful demoralization: knowledge of their allocation can affect how they report their outcomes. For example, those in the standard group might say, "Oh, I have very poor quality of life," or "I'm in very bad pain," because they know they didn't get the magic drug. So it will affect them.
So performance bias, P for patients, is when patients know which group they are allocated to, and it can be prevented by blinding patients to their treatment allocation. Another thing is called observer bias. As a doctor, if you know which group the patients are allocated to, you might treat them differently based on their allocation, and it might be unintentional. Maybe when you are taking measurements: a patient on the super drug says they are in severe pain, but as the clinician, you already know this drug is meant to stop or reduce pain, so you ask, "Are you really sure you're in pain at the moment, or is that severe pain from yesterday?" So it might affect how you treat the patient, or you might even give the experimental group more appointments, which might improve their quality of life because they feel well taken care of. So the way to prevent observer bias is by blinding the investigators to treatment allocation. Remember, open label is when none of them, neither the patients, the clinicians, nor the statisticians, are blinded; all of them know which group each patient is in. Single-blinded studies are when the patients don't know which group they are allocated to, but the clinicians and the statisticians know the group allocation. Double-blinded is when the patients and the clinicians don't know which allocation the patients are in, and triple-blinded is when the patients, the clinicians and the statisticians are all blinded to the group allocation. So ideally, you want to blind everyone, so there will be no performance or observer bias.
But blinding is not always possible, especially in surgical trials. For example, suppose you want to compare laparoscopic versus open surgery to see which is better. You randomly allocate the patients, but as the surgeon performing the operation, you will know which group the patient is in, and the patient might see the big scar of the open surgery, so they will know which group they are in too. There have been studies where surgeons used sham surgery to get around this, but there are ethical concerns about whether we should actually be doing fake sham surgery on patients. One way to blind patients is to give a placebo, such as sugar pills in drug trials. And I remember reading a neurosurgery paper checking whether vancomycin powder reduces the risk of infection in patients having a craniotomy: they put the powder in the wound, on top of the standard practice of giving prophylactic IV antibiotics. That study was open label; they didn't use any placebo, because even if you gave patients a placebo powder, can you be sure the powder you are giving them isn't causing harm? You might think it's inactive, but it might actually be causing problems for the patients without providing any benefit. So placebo is not always practical, and it's not always ethical. Right. So in an open-label trial, or where the clinicians are aware of the allocation, a way to minimize observer bias is to make sure that those taking the measurements of the outcome measures are blinded to the study allocations, so they don't know which group the patient is in. But once again, this is not always possible, right?
So if you look at this red box: masking of the ambulance staff who delivered the interventions, and of those who reported and treated outcomes in hospital, was not possible. So remember, this was an open-label study, and that makes sense here: it's the ambulance crew driving the patients to either the emergency department or the cardiac centre, so it's not possible to blind them. And the patients are probably having their outcome measures taken while they are in the hospital, so those observers will probably know anyway. So as I mentioned when I presented this paper, due to the nature of the interventions this was an open-label study, which is prone to both performance bias on the part of the patients and observer bias. And when you look at the study outcomes here, a lot of them are actually subjective. Mortality isn't subjective: you are either dead or alive, so the mortality rates at 30 days and at three months are objective outcome measures. But outcome measures such as the modified Rankin Scale, which is a measure of disability (how disabled are you in your day-to-day life?), are a bit subjective: it's a scale of levels of disability, and a patient might decide, "I'm this disabled" or "I'm not this disabled." The same goes for quality of life, and both of these are prone to observer bias. So let's assume this study had shown no difference in mortality, but differences favouring the cardiac centre in terms of disability or quality of life. I would be a bit more sceptical, because it's open label, so I can't really be sure that's a valid finding. It's about having that scepticism in your mind: if a study is open label, you need to scrutinize whether the outcome measures are subjective. OK. Right.
So now, attrition bias. When you randomize patients into a study, some of them might drop out, and this could be for various reasons. It could be because the drug you are giving them is not effective. For example, you give a patient a medication and tell them, "This is going to cure your cough," but the patient keeps having a cough, so they think, "Let me leave this trial, this is not working." Or they have side effects from the cough syrup or whatever tablets you are giving them, and they drop out. Attrition bias arises when, at the end of your analysis, you only include patients who completed the treatment and stuck with the study protocol, and I'll explain why that's a problem. Also, if you only include patients who completed the trial, your sample may no longer represent real-life patients; the patients who completed may have particular characteristics. OK. And the way to assess dropout rates is with something called a CONSORT diagram. So when you are looking for attrition bias, look for this diagram in every randomized controlled trial: it tells you how many patients were eligible at the start of the study, how many were excluded and for what reasons, how many were put in each arm, and how many patients finished the study. OK. So if you look at this one, 414 patients were included; some were excluded because they withdrew consent or moved to a different country, and of those randomized, two patients dropped out of one arm and one from the other. So the dropout wasn't severe in this study.
So it's very unlikely that there's attrition bias, and this study also did something called intention-to-treat analysis, which I'm going to explain. So, intention-to-treat analysis means you do your analysis based on how patients were randomized. If they drop out, you don't care; you just analyze them in their randomization groups, rather than saying, "They dropped out, so I'm not going to include them in my analysis." I will explain that again, right? And the opposite of intention-to-treat analysis is per-protocol analysis, and it's in the name: only those who completed the study are analyzed. And sometimes, when you only include those who completed the study, it might look like, "Wow, this drug is effective," but when you do the intention-to-treat analysis, it might not be. That's why intention-to-treat analysis is a better option than per-protocol analysis, and I'll give you an example. So this diagram shows patients with cerebrovascular disease, in a study comparing surgery plus aspirin against aspirin alone, the standard practice, with the occurrence of stroke as the outcome measure. Right. So you randomized patients at the start of the study into the surgery-plus-aspirin group and the aspirin group. Now, before the surgery was done, 10 patients in the surgery group had already had a stroke, so per protocol you excluded them, because they didn't have surgery. So remember, 10 patients are excluded; we only have 90 out of the 100. Then after surgery, another 10 patients ended up having a stroke. So per protocol, the incidence of stroke in the surgery-plus-aspirin group is 10/90. OK.
Now, in the aspirin group, all you care about is that they are in the aspirin group, and you follow these patients up: 10 had a stroke in the one-month period and another 10 had a stroke over the one-year period. So in total, in the aspirin group, 20 out of 100 people had a stroke after being allocated. When you now calculate the relative risk reduction, comparing 10/90 with 20/100, you get about 0.45. So per protocol, you would be saying surgery plus aspirin is better than aspirin alone; that's what the analysis shows if you exclude those who had a stroke before having surgery. Now let's do the intention-to-treat analysis. You randomized patients into the surgery-plus-aspirin group, and whether they actually had surgery or not, as long as they were randomized into that group, that's all you care about. So you have 100 patients: 10 had a stroke before surgery and 10 after, so that's 20 in total. According to your intention-to-treat analysis, the incidence of stroke in this group is 20/100, which is unchanged from the aspirin group's 20/100, so the relative risk reduction is zero. So in your intention-to-treat analysis there was no difference between the groups, whereas per protocol there was a difference. So dropouts really do affect results, and per-protocol analysis can inflate the apparent effect size. So when you look at papers on randomized trials, make sure they have done an intention-to-treat analysis wherever there is a risk of attrition bias. OK. But then you could ask: what if the drug is actually very good, but people keep dropping out, and we can't control that? Well, something a lot of trials do is called a run-in period.
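The worked example above can be reproduced in a few lines. The numbers (100 patients per arm, 10 pre-surgery strokes, 10 post-surgery strokes, 20 strokes on aspirin) are the ones used in the talk; the exact relative risk reduction works out to about 0.44, which the talk rounds to 0.45.

```python
# Hypothetical surgery-plus-aspirin vs aspirin-alone trial from the talk.
n_per_arm = 100
strokes_before_surgery = 10   # surgery arm, stroke before the operation
strokes_after_surgery = 10    # surgery arm, stroke after the operation
strokes_aspirin = 20          # aspirin arm, total strokes

# Per-protocol: exclude the 10 patients who never reached surgery.
pp_risk_surgery = strokes_after_surgery / (n_per_arm - strokes_before_surgery)
pp_risk_aspirin = strokes_aspirin / n_per_arm
pp_rrr = 1 - pp_risk_surgery / pp_risk_aspirin
print(f"Per-protocol RRR: {pp_rrr:.2f}")         # → 0.44: surgery looks better

# Intention-to-treat: analyze everyone as randomized.
itt_risk_surgery = (strokes_before_surgery + strokes_after_surgery) / n_per_arm
itt_risk_aspirin = strokes_aspirin / n_per_arm
itt_rrr = 1 - itt_risk_surgery / itt_risk_aspirin
print(f"Intention-to-treat RRR: {itt_rrr:.2f}")  # → 0.00: no difference
```

The same raw data gives two opposite conclusions depending only on whether the pre-surgery dropouts are analyzed in the arm they were randomized to.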
So when they find eligible patients, before randomizing them, they have this cooling-off period after the patient consents to be part of the study. They give them maybe a couple of weeks or days, and then, before randomizing, based on the patient's behaviour, or after checking in with the patient again ("Are you sure you want to be part of the study?"), you might actually decide to exclude a patient because they will likely drop out. So the run-in period is a way to help prevent attrition bias. Another way to prevent attrition bias is to encourage patients during the study to stay in it, but of course, remember you can't force patients; that would be unethical. Right, so looking at the red box here: in this study the primary outcome was all-cause mortality at 30 days, and it was analyzed in the intention-to-treat population, excluding those with unknown mortality status; safety outcome measures were also analyzed in the intention-to-treat population. So that's a good thing about this study: because they did an intention-to-treat analysis, there's a lower risk of attrition bias. OK. So we've gone through basic information, selection bias, performance bias, observer bias and attrition bias. The next aspects are confounding, which we talked about in the last session and I'll touch on again soon, and then statistical analysis, which we'll go over now. So remember in the last session I told you about the study where 29 research teams were each given the same data on professional football players in a league.
And they said: we want you to test the association between players having darker skin and being more likely to receive red cards. So they just gave them a lot of data, which might include skin colour, age, gender, wage, socioeconomic status, and so on, and told each team to analyze it and report the results. As you can see in this graph, the green dots represent the research teams that found statistical significance, the grey dots represent those that did not, and the rectangles are the confidence intervals. Using the same data, different research teams found different results. And this is why it's important that before conducting a study, especially a randomized trial, you register your study protocol a priori: before conducting the study you need to publish a protocol, either in a recognized clinical trials database or in a peer-reviewed journal. Because if you don't register your analysis plan or study protocol, you can manipulate things; remember, it's easier to publish when you can show statistical significance. So when you are looking at papers, you actually need to check whether the analysis plan was specified a priori, because you can manipulate statistics to suit what you want. And this paper says the trial was prospectively registered with the International Standard Randomised Controlled Trial registry, and when you read the paper, you find they actually published a protocol as well. The next thing is to check whether the study describes its power calculation. So what's a power calculation? It's how you determine the sample size for the study. And last time we talked about type I errors and type II errors.
The larger the sample size, the more likely it is that you will detect statistical significance. Type II errors are false negatives; remember, last time I told you "false negative" has two Fs in the description, so that's the type II error, while "false positive" has only one F, so that's the type I error. So false-negative results, type II errors, can happen when you don't have enough sample size. What some researchers might do is: "We did our analysis, we didn't get statistical significance, let's recruit more patients. Still not enough? Let's recruit more." Because they haven't done a power calculation, they keep adding patients, which I would say is unethical. So it's important that before you conduct your study, you actually do a power calculation. You give it some parameters, sometimes based on past observational studies or your own previous studies, and you arrive at the sample size. So here, based on their calculation: "this sample size per group will provide…" and so on; that's basically a power calculation. It's done with software and it's not very difficult to do, but you do need to check that the study describes its power calculation. Now, confounding factors. Last time we talked about them. A confounder is a variable that is associated with both the dependent and the independent variable and that can cause a spurious association. So let's assume, for example, you do a study looking at whether people who drink coffee are more likely to have a heart attack, and you find an association. But when you look at all the data, you actually notice that people who smoke cigarettes are more likely to drink coffee, so there is an association between cigarette smoking and coffee drinking.
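A power calculation of the kind described can be sketched with the standard normal-approximation formula for comparing two proportions. The anticipated event rates below (40% vs 30% mortality) are invented for illustration, not taken from the paper.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for comparing two proportions
    (normal approximation, two-sided test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Illustrative: detect a drop in 30-day mortality from 40% to 30%
# with 80% power at a two-sided alpha of 0.05.
print(n_per_group(0.40, 0.30))   # about 356 patients per group
```

The key point from the talk holds: the sample size is fixed before recruitment from stated parameters, rather than patients being added until significance appears.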
You also find out that people who smoke cigarettes are more likely to have heart disease. So actually, in this instance, smoking might be the reason these patients are having heart attacks, not coffee drinking: cigarette smoking is a confounder here. So how do we control for confounding variables? Last time we talked about the methods, and we've covered randomization: a randomized controlled trial is one way. Remember, if you randomize patients correctly, using an appropriate sequence generator and centralized allocation, you are unlikely to have confounders, because they should be balanced between the groups. Another option is to control for confounders in your analysis: so you can control for confounding in the design of the study through randomization, and in your analysis through multiple regression. Right, and now let's talk about type I errors. Remember, by setting the statistical significance level at 0.05, roughly one in 20 tests you do is likely to show "significance" by chance alone. OK, so that's another reason why researchers need to state their outcomes and statistical analyses a priori. You can't keep testing: "Let me check the association between chocolate and disability. No significance there? Let me check the association between eating something else and stroke." You can't keep picking and choosing from your data until something looks statistically significant. And the way to control for type I error with multiple testing is to do a Bonferroni correction, which I'll talk about soon. As I mentioned, type II errors are usually due to inadequate sample size, and you need to do a power calculation to determine the minimum sample size that will give you the power you want. So this is an example from a study I did where we applied a Bonferroni correction.
So if you count the number of variables on the y-axis, there are 18, and we tested them against another outcome as well, so that's 36 tests that we did. The original significance level was 0.05, so we divided 0.05 by 36 and got a significance threshold of about 0.0014. So we said any P value less than 0.0014 is significant; anything else is not significant. And so, for example, if you look at one of the estimates there, its confidence interval isn't crossing the line of one, but it's still not counted as significant because the P value is not less than 0.0014. So that's what we do for a Bonferroni correction, and you need to check whether the paper corrected for multiple testing. Right. So when I first presented this paper, this is what I said: the study conducted subgroup analyses for potential confounders, remember, they checked for patient age and some other things, but subgroup analysis does not eliminate the effect of a confounder in the main analysis. So instead, I would say it's better for studies to do a multiple regression analysis rather than subgroup analysis. Also, this paper didn't apply a Bonferroni correction, so that's one of my criticisms of the paper: the finding that age influences outcomes might be spurious. Remember, about one in 20 tests will usually be "positive" by chance, so it might not actually be a real association. And also, on average, transfer to the cardiac centre was seven minutes longer than transfer to the nearest emergency department, and as I mentioned earlier, this potential delay in treatment might explain the lack of difference in outcomes between the two treatment groups. So the authors should have considered the transit time as a potential confounder in the analysis.
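The Bonferroni arithmetic described above is just a division; this sketch reproduces the 36-test example from the talk, with invented P values to show which results would survive the correction.

```python
def bonferroni_threshold(alpha, n_tests):
    """Bonferroni-adjusted significance threshold: alpha / number of tests."""
    return alpha / n_tests

adjusted = bonferroni_threshold(0.05, 36)    # 18 variables x 2 outcomes
print(f"{adjusted:.4f}")                     # → 0.0014, as in the talk

# Illustrative P values (invented): which survive the correction?
p_values = [0.001, 0.004, 0.03, 0.2]
significant = [p for p in p_values if p < adjusted]
print(significant)                           # → [0.001]
```

Note how 0.004 and 0.03 would have counted as "significant" at the naive 0.05 level but fail once the multiple-testing correction is applied.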
But they didn't include it in the subgroup analysis: as shown in this graph, they included time to return of spontaneous circulation, but not time from arrest to hospital arrival. Right. So we've gone through that. Next: are the conclusions relevant? Check the conclusion the authors draw from their research and ask yourself, do I agree with it? Because some authors do the research and then write a conclusion, and you think, how does this conclusion relate to what you've reported? It doesn't make sense. And after that, you have to check: do this study's findings actually apply to the patient in front of me? Is it relevant to my patient? I will touch on that again. And then ethics. Remember, I mentioned superiority, non-inferiority and equivalence trials. I think this trial is ethical because it's a superiority trial: you keep patients in the standard group, and the alternative you are testing is something that's potentially better for the patient; here, the cardiac centre is potentially better, so it's a superiority trial. But let's assume that in the UK the standard practice were to take patients to the cardiac centre, and you wanted to show that taking them to the emergency department is not inferior to taking them to the cardiac centre. If it were my family member being randomized, I'd say, "No, no, take my relative to the standard practice," because you're telling me the emergency department could be potentially inferior to the standard practice.
So if they had done a non-inferiority trial here, it might actually have been unethical. Equivalence studies can be ethical as well, but in an equivalence study you are trying to show there's no difference, often because you want to save costs, so again I'd say, "Let's just take my relative to the standard practice." So that's the ethics here, and that's why I mentioned that this is a superiority trial; you need to consider ethics as well. And also ethics in terms of consent: in this study, patients had return of spontaneous circulation but were probably unable to give consent at that point, so it's presumably a family member providing consent for them. You also need to check whether the patient has something like a do-not-attempt-resuscitation decision in place; if they do, you can't put them in the study. So it's important to look at the ethics of a study as well. Right. There's also something called the number needed to treat, but I'm not going to talk about that because of time. And so this was my conclusion on the study. In summary, with respect to patient mortality, disability and quality of life, this study showed that for non-ST-elevation out-of-hospital cardiac arrest, expedited transfer to a cardiac arrest centre is not superior to the standard practice of transferring patients to the nearest emergency department. The external validity is limited by the setting of this study in London, which is a relatively small urban area, meaning residents live quite close to a cardiac centre anyway. Imagine I live in a rural area and the nearest specialized cardiac centre is far away.
So even if this study had shown that cardiac centres are superior, because my nearest cardiac centre is far away, the long transfer could actually cause problems for my patients, so I couldn't apply this to my patients in a rural area. So there's probably a need for more studies, especially in rural areas, and once those studies are done, you can aggregate them in a systematic review and meta-analysis. So that was my conclusion: whether to apply this depends on where you live; the study won't apply to every patient, city or village. Right. So we've gone through the process of critical appraisal, but there are also tools we can use. For example, the Critical Appraisal Skills Programme (CASP) has written various tools that guide you: you take a paper and work through the questions in the CASP tool. With practice, you probably won't need a tool anymore. And there are risk-of-bias tools as well, such as RoB 2 and ROBINS-I. So, yeah, that's it. And by the way, the paper today was actually a paper I was given in an academic interview last December. Apart from critically appraising the literature for your everyday patients, in some academic jobs, and even in some training jobs as well, you'll be expected to be able to critically appraise the literature, so that's another reason why it's important. So that's the end of the talk. Sorry it took long, and sorry for the technical challenges we experienced. If anyone has questions, just post them in the group and I'll explain. So, someone said they didn't quite get the difference between a priori and post hoc.
An a priori analysis means that before the study is conducted, the investigators have already published a protocol saying "this is how we are going to analyse our data", and they stick with that protocol. A post hoc analysis means that after you've conducted the study and collected all the data, you start bringing variables together to do your analysis. You might do this without any ill intent. But say your original analysis was to check the association between cardiac arrest centre care and standard practice in the patient group examined, and you didn't find any significance in your a priori analysis. Then you think: let me group the patients by age, and, oh, in patients younger than fifty-something years old, the cardiac arrest centre is better. Because you are now introducing new tests on subgroups of your original population just to get significance, that is bad practice: you are just fishing for significance. So a priori is before conducting your study, when you plan your tests; post hoc is after you've got your data, when you start bringing variables together under some new plan. That's the difference; I don't know if that answers your question. I know it's quite late, but if anyone has more questions, do ask. Ah, someone has just asked a fantastic question about subgroup analysis, thank you. Yes, so, subgroup analysis.
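The "fishing for significance" problem described above can be demonstrated with a small simulation. This is a minimal sketch, not from the talk: it assumes a toy trial with no true treatment effect, a simple two-sided z-test, and eight arbitrary subgroups (for example, age bands). Under the null, a single pre-planned test is falsely "significant" about 5% of the time, but scanning many post hoc subgroups makes at least one spurious hit far more likely.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n=400, n_subgroups=8):
    """One null trial: the outcome is pure noise, so any
    'significant' finding is a false positive."""
    group = rng.integers(0, 2, n)               # 0 = standard care, 1 = cardiac centre
    outcome = rng.normal(size=n)                # no true treatment effect at all
    subgroup = rng.integers(0, n_subgroups, n)  # e.g. age bands

    def significant(mask):
        a = outcome[mask & (group == 1)]
        b = outcome[mask & (group == 0)]
        if len(a) < 2 or len(b) < 2:
            return False
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return abs((a.mean() - b.mean()) / se) > 1.96  # two-sided, alpha = 0.05

    overall = significant(np.ones(n, dtype=bool))
    any_subgroup = any(significant(subgroup == s) for s in range(n_subgroups))
    return overall, any_subgroup

sims = [simulate_trial() for _ in range(2000)]
overall_rate = np.mean([o for o, _ in sims])   # single pre-planned test
fishing_rate = np.mean([s for _, s in sims])   # "at least one subgroup hit"
print(f"false-positive rate, one a priori test:      {overall_rate:.3f}")
print(f"false-positive rate, fishing in 8 subgroups: {fishing_rate:.3f}")
```

With eight independent looks, the chance of at least one false positive is roughly 1 - 0.95^8, around a third, which is why unplanned subgroup findings deserve scepticism.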
Remember, when you are doing a subgroup analysis, for example by age categories, you are only comparing group A, the cardiac arrest centre group, versus group B, standard practice, within each age band. You are not controlling for every other variable: if you group by age, age is the only potential confounder you are accounting for in that subgroup analysis. With multiple regression analysis, you can add multiple variables to the model, including age. Multiple regression says: with all other things being equal, for patients with the same age, the same ethnicity, the same gender, the same smoking habit, the same alcohol intake, the same transit time, is there a difference between, say, cardiac arrest centre care and standard practice? So you can test your association while controlling for multiple variables at once, whereas in subgroup analysis you are only controlling for one. So I would say multiple regression usually trumps subgroup analysis. But, as I said, it all depends on your a priori protocol: if your protocol said you were doing subgroup analyses, then stick with it, because then you are not fishing for significance. A lot of published papers do this, though; I remember on one of my first publications the reviewers commented, asking why I did a subgroup analysis rather than multiple regression. All right.
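The point about regression "holding other things equal" can also be sketched in code. This toy example is not from the talk: it assumes made-up data in which older patients are both more likely to reach a cardiac arrest centre and have worse outcomes, while the true treatment effect is zero. The crude group comparison is confounded by age, whereas an ordinary least-squares model with age as a covariate recovers an effect near zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical data: age confounds the comparison because older patients
# are (in this toy setup) more likely to reach a cardiac arrest centre
# AND have worse outcomes. The true treatment effect is set to zero.
age = rng.normal(65, 10, n)
p_centre = 1 / (1 + np.exp(-(age - 65) / 5))   # older -> more likely centre
centre = rng.random(n) < p_centre
outcome = -0.05 * age + rng.normal(size=n)     # outcome driven by age only

# Crude comparison (an unadjusted contrast, like a single subgroup test):
crude_diff = outcome[centre].mean() - outcome[~centre].mean()

# Multiple regression: outcome ~ intercept + centre + age
X = np.column_stack([np.ones(n), centre.astype(float), age])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted_effect = beta[1]                      # coefficient on 'centre'

print(f"crude difference (confounded by age):  {crude_diff:+.3f}")
print(f"regression estimate, adjusted for age: {adjusted_effect:+.3f}")
```

The crude difference looks like a sizeable harm from cardiac arrest centres, yet it is entirely an artefact of the age imbalance; the adjusted coefficient sits near the true value of zero. This is what "controlling for multiple variables" buys you over a one-variable subgroup split.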
OK, I'm guessing there are no more questions. So, thank you very much for attending this session. Please do leave feedback, and if you have any questions, you can either contact me via my email or LinkedIn, or just post on the CCG group, and I'll try to answer. But yes, thank you very much, enjoy your evening, and thank you, Jeremiah, for organising this as well. OK. All right. Thank you, guys. Bye.