Summary

This on-demand teaching session on ethics and stats is relevant to doctors and medical professionals. It covers topics like the Nuremberg Code and the Declaration of Helsinki, ethical principles, equipoise, the four pillars of beneficence, non-maleficence, autonomy, and justice, and the role of the Data Safety Monitoring Board. Join us for a whistle-stop tour of crucial topics to know for your medical interview.

Generated by MedBot

Description

Welcome to Session 4 of our 123 Series on the Specialised Foundation Programme!

We introduce the key ethical principles and frameworks that underlie good clinical research, along with the common, core statistical concepts you'll need to navigate your interviews.

We aim to make this interactive, with examples for you to contribute to and build your confidence!

Learning objectives


  1. Identify the principles of research ethics
  2. Describe the Nuremberg Code, its development, and its key principles
  3. Explain the four pillars of ethical research
  4. Recognize the responsibilities of a data safety monitoring body
  5. Differentiate between expected and unexpected outcomes in research trials

Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Okay, hi guys. Thank you to all of those who have tuned in already. We're just going to give it two or three minutes to let everyone join before we kick off. Okay, I think we should start now. Hi, Amanda. As you can see, there is one less member, but we'll get started. Thanks, everybody, for tuning in to our fourth session, on ethics and stats. This is going to be a whistle-stop tour of the topics that we think are important for you to be able to recall during your interview. As always, as a disclaimer: all views expressed in this talk, and in our entire 123 SFP series, are solely ours and do not reflect those of the NHS, our trusts, or MedAll. To start off, I will let Alex, who is finally here, introduce himself. Hi everyone, you've not seen me here for a couple of weeks; I've been extremely busy on calls. My name is Alex, I'm an F1 at the Royal Free NHS Foundation Trust at the moment, having trained at UCL medical school, and my F1 and F2 rotations reflect my interest in surgery, specifically urology, so I'll say academic urology. Unfortunately, our third colleague is not here today, but he has been contributing to the SFP series, and as you can imagine, the only reason he's not here is because he is, unfortunately, on call. As you guys know, he is an aspiring neurosurgeon, God knows why, and he graduated from Warwick and is now at Newcastle. And I'm Aqua. I trained at the University of Leicester, I'm currently at the Royal Surrey, and I'm part of three universities as far as my academic unit goes. I'm doing gen surg as I've started my F1, and, like Alex, I want to be an academic urologist. But let's just start off by... well, firstly, can you guys hear me? If not, just let me know. I know, it's the constant joke, Amanda; I will never let it go. Okay, cool. So, right off the bat.
What is research ethics? It's a broad set of principles which define the norms of conduct and help us distinguish the right course of action from the wrong one, specifically in research. Some of the principles we must know and keep in mind: honesty, objectivity, responsible publication (and there are some questionable publications out there), and, most importantly, confidentiality. I think it's important because some deaneries will ask you about it in different ways. For example, London might throw it into the abstract reading, but they might also throw it into the clinical section, which we will talk about in our future sessions. Other deaneries may ask whether you know about the Nuremberg Code, which we'll go through today, or they might ask you about studies that breached ethical frameworks, which is exactly why we have these rules and regulations in place. As I said, the Nuremberg Code exists because, unfortunately, tons and tons of horrific experiments were carried out, especially in Nazi Germany, where extreme medical experimentation occurred. That is broadly why we have the Nuremberg Code, and then the Declaration of Helsinki, which was developed by the World Medical Association in 1964. The Declaration is a concise set of principles for basically any and every piece of research that involves humans. And while it is primarily for doctors, of course we know that there are plenty of non-physicians involved in medical research; I'm sure we've all seen studies involving pharmacists, nurses, and our allied health professionals as well. But as most of you are, I guess, final-year medical students, you have this to look forward to at your graduation ceremony, when you will pledge it alongside all of your medical colleagues.
But it is important to know that we always have to keep the patient at the centre of our research, and we need to make sure that we act in their best interests. I'm showing you this because the Declaration sets out principles that we need to know, and I would read up on it, but broadly it's all about how we can never let our own interests override the interests of our individual patients. I would like to pause here, because this is something that may be asked in your interview. For example, I know Cambridge or Severn may ask you what equipoise is, and I think this is a good point to pause and ask someone, without looking at the slide: what is equipoise? Does anybody want to have a go at telling me? Hopefully you haven't read the slide yet. Anybody? Yeah. Okay, good. And if I were to tell you, Cathy, that we don't think a trial is appropriate, for example, if you've given me a proposal and I tell you, sorry, I don't think this trial has equipoise, what do you think I've just told you? How would you interpret that? It's not a trick question, and it's not just directed at Cathy; it can be answered by anyone, of course. Yeah, exactly, good. And you've thrown in one of the ethics principle arms there. Exactly: it would be against the principle of justice not to give the control group the treatment. You may think you'd never be questioned on this, but even in educational trials, if, for example, you know this awesome new question bank has been shown to increase students' marks by two or three per cent, and you give it to one group but not the other, that, in a sense, could also be an equipoise issue. You always need to think about justice.
So yeah, basically, whenever you propose a trial, you need to make sure that there is genuine uncertainty about the comparative merits of each of the two, three, or four arms in the trial. And it has actually happened: if you find that one arm is significantly better, then you are obliged to offer that treatment. So, I mentioned the Nuremberg Code, and again it can come up; there are some principles that you need to know. All patients need to be able to consent voluntarily. There are, of course, some exceptions: for example, if patients are in the ITU or unconscious, they're exempt and we would act in their best interests, but most of our interviews will, of course, involve patients who can give consent. As Cathy mentioned, justice: we need to make sure we're doing the research for the right reasons. Is this genuinely beneficial for society overall? It needs to be appropriate, which is very similar to justice: we can't cause patients unnecessary pain, and we need to make sure that the degree of risk is less than the scale of the problem being addressed. We need to carry the research out in a safe setting, and if an adverse event happens, we need to be able to keep the patient safe. Every single staff member on the delegation log needs to be qualified for the task you assign them. And all patients need to be given the right to withdraw, so they're given the right to consent and the right to withdraw whenever they want. And, for example, if we found that our trial no longer had equipoise, because halfway through the study one arm was significantly better, we need to be able to stop that study. These principles, amongst some others, make up the Nuremberg Code.
So, when it comes to your interview, you won't be expected to know any of these inside out, but you need to be able to demonstrate an awareness, and I guess an appreciation, of why they exist and of the key ethical principles. And of course, if you see something that looks a bit suspect, you need to be able to spot it. For example, your interviewers might ask, "What do you think about X, Y, Z?", which I was actually asked during my London interview: we were given a maternity paper looking at gestational diabetes, and they asked about the rates of miscarriage, for example, and that was an opportunity to really highlight and show off some ethical principles. Now, when we talk about the four pillars, we obviously learned about these at, what's it called, medical school, right before our interviews. You know these are the four pillars: beneficence, non-maleficence, autonomy, and justice. We've already talked about justice, which is really encompassing and advances patient care. It's basically taking the Nuremberg Code principles and putting them into the four-pillar framework: you need to be able to say that patients must have autonomy, and when it comes to equipoise, we need to associate that with beneficence. I want to spend some time here on non-maleficence, because I've written some things I would like to expand on. A DSMB means a data safety monitoring board, or committee, and this is basically a group of people outside the study's steering group who have the ability to monitor and stop a trial when safety issues arise. They can look at the outcomes, and if, for example, the study is not deemed safe or appropriate, they have the authority to act on it.
Again, these are people who are not directly involved in the study, looking at safety outcomes; you might be looking at complications of the surgery if it were a surgical paper. And, as usual, we need to minimise harm as much as possible. Now, looking at expected versus unexpected events: for example, if we were running a prostate cancer trial and we do biopsies, an expected event would be pain, right? Like perineal pain afterwards; of course, it's a biopsy, that's expected. Or, for example, blood in the urine, also expected. An unexpected event from a biopsy would be something like severe anaphylaxis, perhaps because the patient reacted to a specific component of the injection. That is the difference between expected and unexpected. Again, this isn't material you need to know in depth for the interview; these are more talking points you can raise if asked. But now we're going to go through some stats. I'm sorry for talking at you, but I think now it's time for some interaction, and I'm going to invite my colleague Alex for this part. Just before we move ahead, guys, do let us know if you have any questions about ethics before we move on to the stats part. I think the main summary for the ethics is: you're not expected to know anything in depth, but knowing roughly what the Nuremberg Code and the Helsinki Declaration are will help to guide your understanding of ethics. Realistically, the most likely question you're going to get is based on an abstract: was this trial ethical, was this intervention ethical? So it relies on basic knowledge of medicine and study design. For instance, if you're comparing, in an RCT, a new medication versus the existing medication or best standard, then it's arguably ethical.
But if they compared it to, say, no medication, when there is an existing standard of care, is that ethical? Probably not. So it'll probably rely on some background knowledge, but it would be very basic and quite straightforward. Yeah, and when it comes to examples, Alex, can I also ask for your help in giving an example of an ethical question? Sure. So what kind of question are we looking at, just ethics? How would we respond? As I said, it relies on some background knowledge. For instance, let's go back to the cancer example, because that's quite a good one. Say a skin cancer has chemotherapy as the standard of care; it's a really aggressive cancer. If you're comparing a new chemotherapy regime to the existing chemotherapy regime, you may argue that it is ethical, because the new regime could be just as good; but at the same time, you can argue it's not ethical, because you're denying some patients the current best practice. In a different trial, where we're comparing no treatment to a new treatment, so not against the current chemotherapy regime, it's very easy to argue it's not ethical. But remember, in any SFP interview, any medical school interview, it's not a definitive yes or no. It's always a balance of both sides, so remember to give both sides of the argument. Don't just say "yes, it's unethical"; say "to some extent it's ethical for this reason, and to some extent it's unethical for these reasons". You want to give both sides, not fixate on one side of the argument. Yeah, and that's actually a really good point, Alex, about placebo, or no treatment versus current treatment: most trials nowadays have some gold standard or current treatment in place.
So if, for example, the other arm is a placebo, you can pretty much automatically say, "I'm wondering about the equipoise, or the clinical equipoise", and that's where you can really show that you know these big terms and can expand on them. And most likely it will be something you are commonly assessed on at medical school. For example, the ARISTOTLE trial that we spoke about previously, which compared warfarin versus apixaban: we need to know that for medical school finals, and it's something they can easily bring up, because if we were to run a trial like that now, it would not have equipoise, since benefit of one arm over the other has clearly been shown. All right, so we'll move on to the second half of the talk, the stats, with some interaction. We're just going to run through some terms. Some of them you've probably heard before, some of them you may not have, but would anyone like to volunteer a definition of any of these, or how you would explain it to a layperson? Any of these terms, feel free. Anyone? The p-value? Okay. The 95% confidence interval? Okay, keep the answers coming. How about relative risk? Maybe we'll show absolute risk as well. Number needed to treat? Okay. And the odds ratio? We'll be going over these one by one, not specifically in this order. There's a lot of terminology here: you've got risks, you've got ratios, you've got relative odds. Incidence and prevalence: they both sound very similar, so what's the difference? Type one and type two errors. What is power? Per protocol, intention to treat: these ones are probably less well known.
I think maybe that's because med school teaches us the other terms more frequently, but we'll be getting through all of these. Could we go to the next slide, please? All right, so we've got an image here: on the left, a man being diagnosed as "you're pregnant", and on the right, what looks to be a pregnant woman being told "you're not pregnant". Can anyone identify the statistical terms we're trying to get at here? Yeah, okay. And then which one is the false positive, and which one is the false negative? Okay, so we're saying false positive on the left. Yeah, that's correct: type one, false positive. And I'd like to make sure you know that type one is alpha and type two is beta; this comes in useful when we talk about another statistical term, but I won't spoil it. The reason false positives are important, and this is something to remember for your interview, is that false positives can arise from confounding and bias, whereas false negatives are associated with sample size: if our sample size is under- or over-estimated, unfortunately we might not draw a correct conclusion from our research. So yes, as mentioned, there's lots of new terminology: alpha, beta, type one, type two, false positive, false negative. You don't need to know the formulas in depth; as long as you understand which is a false negative and which is a false positive, that's more or less good enough. At this stage, you don't need to be able to calculate anything specifically. Do you want to take this, or should I? I don't mind. Sure. I remember when I first heard "power", I was confused, and then it made sense.
Someone said, "I don't think this study is adequately powered", and I was like, huh? Then I looked into it. The fancy way to define power is the probability that a type two error won't be made in the study, a.k.a. the study is able to detect the smallest meaningful difference between the groups. When it comes to power, we usually say that 0.8 is the ballpark adequate value. Essentially, if a study is adequately powered, that means the sample size has been calculated to be correct for that study, to detect the smallest clinically relevant difference. And when it comes to the decision, it basically comes down to what really reflects the true state of the world. If we correctly accept the null hypothesis, that corresponds to the confidence level, which is one minus alpha. If we wrongly reject the null hypothesis, that's alpha, those are the false positives, and that's when you've made a type one error. A false negative, unfortunately, is when we accept the null hypothesis when it's actually false; that's beta, the type two error. If we want to reject the null hypothesis and accept something else, we need to make sure the study is adequately powered. When it comes to interviews, all you need to say is: "I would like to assess and read the full paper to make sure that the sample size has been adequately calculated." Essentially, yeah. So, just to summarise quickly in layman's terms: power is quite a complicated formula; there are lots of variables that go into it, as I said, the degree of difference that you want to detect, and a few other factors.
But essentially, all power means is: how many patients do you need to recruit for a trial to find a statistically significant difference of a given size? For instance, say a randomised controlled trial needs to recruit 1,000 patients to detect a 5% difference as statistically significant. If you ran that trial wanting to detect a 5% difference, needed 1,000 patients, but only recruited 100, and you got a statistically significant difference, that would be considered underpowered, and the result is most likely due to chance. You need to do this calculation for these big trials to find out how many patients to recruit, and it can be three, four, five thousand patients in order to detect a statistically significant difference. That's essentially the summary, but the main terms you need to know are "underpowered" and "adequately powered". Now I'd like to talk a little bit about intention to treat and per protocol, again terms you need to be familiar with; in your abstracts or papers they will usually be included. Intention to treat preserves the randomisation. Importantly, if one patient is allocated to group A and another to group B, intention to treat preserves that allocation, the sample size is maintained, and you reduce bias. And the good thing is that intention to treat usually reflects real practice, because it takes into consideration patients who, for example, drop out. Per protocol, on the other hand, doesn't really preserve randomisation; however, it does show the ideal setting. It is less generalisable, because it only includes the people who completed the protocol.
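To make the sample-size idea concrete, here is a minimal Python sketch using the standard normal-approximation formula for comparing two proportions. The event rates (20% versus 15%) and the hard-coded z-values (1.96 for a two-sided alpha of 0.05, 0.8416 for 80% power) are illustrative assumptions, not figures from the talk:

```python
import math

def sample_size_two_proportions(p1, p2):
    """Approximate per-group sample size to detect a difference between
    proportions p1 and p2, at two-sided alpha = 0.05 and 80% power
    (normal-approximation formula; illustrative only)."""
    z_alpha = 1.96    # z for two-sided alpha = 0.05
    z_beta = 0.8416   # z for power = 0.80
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance_sum / (p1 - p2) ** 2
    return math.ceil(n)  # round up: you can't recruit a fraction of a patient

# e.g. detecting a 20% vs 15% event rate (a 5% absolute difference)
n_per_group = sample_size_two_proportions(0.20, 0.15)
```

With these assumed numbers the formula asks for roughly 900 patients per group, which is why small trials chasing small differences are so often underpowered.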
The reason the bias happens is that per-protocol analyses may overestimate whatever you're assessing, by excluding the patients who didn't comply with the regime they were allocated to, whereas intention to treat avoids that. However, each analysis has its place in research, because intention to treat answers "what is the effect of offering the treatment?", whereas per protocol looks at the effect of actually receiving the treatment; so, again, ideal settings versus real settings. The reason it's important, and kind of topical, is that you will have seen these terms waved around; at least I did during the pandemic, when people were talking about Moderna versus BioNTech versus AstraZeneca, they would discuss these non-stop. But again, just important for you to remember. Sorry, Alex. So, guys, just before we move on: if we have a trial that recruits 1,000 patients, and only 800 patients completed the trial, and we only analyse the data from those 800 patients, therefore excluding the 200 patients who dropped out, which type of analysis would this be? Would it be intention to treat or per protocol? Okay, one answer says per protocol. Anyone else? Do you want me to repeat the last bit of the question? Yeah: if I exclude the 200 patients who dropped out of the trial and only analyse the data of the 80% who completed it, which type of analysis is this? Okay. So, yes, this would be a per-protocol analysis. It's in the name: per protocol only analyses the data of patients who followed the protocol. That gives us "perfect" data, from only the patients who completed it; we remove the dropouts and those lost to follow-up, and we have clean data.
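The dropout quiz above can be put into numbers. This is a sketch with made-up counts (500 randomised to one arm, 400 completers, 120 successes), and it makes the simplifying assumption that dropouts count as non-successes in the intention-to-treat analysis:

```python
# Hypothetical single arm of a trial (made-up numbers)
randomised = 500   # everyone allocated to this arm
completed = 400    # patients who followed the protocol to the end
successes = 120    # treatment successes, all among completers

# Per protocol: analyse only those who completed the protocol
per_protocol_rate = successes / completed   # 120/400 = 0.30

# Intention to treat: analyse everyone randomised; here we make the
# simplifying (conservative) assumption that dropouts are non-successes
itt_rate = successes / randomised           # 120/500 = 0.24
```

Note how the per-protocol figure flatters the treatment (30% versus 24%), which is exactly the overestimation bias described above.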
Intention to treat, on the other hand, means that of those 1,000 patients, even if they dropped out, we analyse whatever data is available. And that's more relevant for the everyday medical setting, because if I give a drug in the hospital, some people will forget to take it. There are pros and cons to both. So, if I want to release the results of my landmark study, published in the New England Journal of Medicine, which analysis should I do? I want to change the guidelines; which one is better, and what should I write up in my paper? Any ideas? Okay, I've had a couple of people say intention to treat. It's a bit of a trick question: yes, I want to make it as generalisable as possible, but at the same time I want clean data. It's a trick question because, in reality, if you want the best results for a New England Journal of Medicine paper, you actually analyse both and report the results of both. And if your trial is really well run and properly powered, you'll find that the two sets of results are fairly similar, not that heterogeneous, and that makes your study even more powerful: both the intention-to-treat and per-protocol analyses suggest and agree on the same result. Okay, so now we're going to tackle some terminology with quite similar meanings but slight distinctions: accuracy versus precision. Accuracy is how close the measurement is to the true value; a good example is a ruler: how accurately does your ruler's one centimetre match an actual centimetre? Whereas precision is: if I were to repeatedly pour, say, 400 mL of water from a jug into a cup three times.
How precise am I each time I do it? So they're quite similar, but there is a distinction: accuracy is how close you are to the true value, and precision is how repeatable your result is. Okay, now let's look at targets A, B, C, and D. Which is which, using the two terms? I'll give you a tip: D is the most accurate and the most precise. What do you think that makes A? Not a trick question. Okay, somewhat easier: does anybody want to tell me? Yeah, A is not really precise or accurate. What about B? Yeah, B is precise, exactly; you just swapped the words, so I got confused for a moment. And what about C? Not a trick. Yeah, C is not precise, because the points are not close to each other; however, you could say it's somewhat accurate, because the average roughly reaches the centre point, if that makes sense. So, for A: if you look at the average, the point where they cluster is not close to the true value, if we take the true value to be the bullseye. B is obviously precise, because they're all close together, but nowhere near the bullseye, so it's precise but not accurate. C is not precise, because it's scattered everywhere, but roughly accurate on average. And D: bingo, perfect; all of our studies aspire to be like that. Okay, Alex, we're going to talk about incidence now. So, incidence and prevalence: another pair of terms that sound similar but have a distinct difference in meaning. Incidence is a rate, and that's very important to remember: the rate of occurrence of new cases over time in a given population. The second you hear the word "rate", you know it's with reference to time.
So the formula for incidence is the number of new cases over a time period, divided by the population size. There are specific subtypes of incidence you may have heard of, including mortality rates and morbidity rates. There are some further terms linked to incidence, and one of them is the absolute risk reduction, ARR: that's the difference in incidence of disease between two groups. So, for instance, if the incidence of disease in one group is 10 and in the other is 5, the absolute risk reduction is the difference between the two. The number needed to treat, as someone correctly said earlier, is the number of patients you need to treat, in theory, to prevent one additional adverse outcome, such as a cancer, from occurring. It's quite a weird concept to wrap your head around: you might hear "the number needed to treat is 100", meaning you need to treat 100 patients with this cholesterol-lowering medication to prevent one adverse outcome, for instance. It's a strange term, but that's how it's used, and there's a calculation for it as well. A good example of incidence is this picture: if we look at the incidence in 2014 versus the incidence in 2015, we can see, because we now understand it's a rate, the number of new cases over time in that population, that the incidence here has gone up. Now contrast this with prevalence: the proportion of the population with the disease at a given time. At a given time, so it's not a rate; it's a cross-sectional evaluation at one point in time. It is the number of people with the disease at a particular point in time divided by the population size. So the prevalence here is one in four: 25% of the population has the disease at this given point in time. And I put this picture in because I quite like it: prevalence, as Alex said, captures the snapshot.
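As a quick numeric sketch of these definitions (all figures invented purely for illustration):

```python
# Made-up population figures, purely illustrative
population = 1000
new_cases_this_year = 50   # incidence is a rate: new cases over a period
existing_cases_now = 250   # prevalence is a snapshot: cases at one moment

incidence = new_cases_this_year / population   # 0.05 per person-year
prevalence = existing_cases_now / population   # 0.25 at this point in time

# Absolute risk reduction (ARR) and number needed to treat (NNT),
# echoing the 10-versus-5 incidence example from the talk
control_risk = 0.10
treated_risk = 0.05
arr = control_risk - treated_risk   # 0.05
nnt = 1 / arr                       # 20: treat 20 patients to prevent one outcome
```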
All the water in the bath represents the people who have the disease right now, the prevalence, whereas incidence is the new water coming in. Mortality, unfortunately, is water escaping out, as is remission; and recurrence, which I quite like because it's rarely included in this diagram, is when the patient gets the disease again and flows back in. This picture puts them all into context. So now we'll talk about the p-value. There is an official definition for this, which is: the probability of the observed result, or one more extreme, occurring when the null hypothesis is true. Something you need to appreciate is that the p-value is a value between zero and one, and by convention the threshold is usually set at p less than 0.05. That's just how it's been, that's how we've accepted it to be, and that's just something we have to deal with, really. So what this means, by definition, is that if p is less than 0.05, we reject the null hypothesis, the hypothesis that states there is no difference or no effect, at the 5% significance level. In practice, what does this mean? Normally, if we do a calculation between two groups and find that the p-value is less than 0.05, by convention we say that the difference in results is statistically significant. In contrast, if we have a p-value of, say, 0.1, we say the results are not statistically significant, because they are likely to have happened due to chance. The p-value is also quite closely linked to the 95% confidence interval, which we'll come to now. This is the range in which the true population value lies 95% of the time, and it's always good to give the layman's explanation of this, which one of our colleagues did very well previously.
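The p < 0.05 convention described above can be sketched with a simple two-proportion z-test. The counts (30/100 events versus 15/100) are invented for illustration, and the pooled normal approximation is the textbook shortcut, not necessarily what any given paper used:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions
    (pooled normal approximation; made-up data below, for illustration)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided tail probability under the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p = two_proportion_p_value(30, 100, 15, 100)   # 30% vs 15% event rates
significant = p < 0.05                         # the conventional threshold
```

With these numbers p comes out around 0.01, so by convention we would call the difference statistically significant.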
So a good example of this: if we repeated the experiment 100 times, 95% of the time the result would lie in this range. Now, if the result is an absolute difference, the confidence interval should not cross zero, because zero is the line of no effect; and if the result is a risk or odds ratio, it should not cross one, because, remember, ratios are divisions, and a ratio of one means no difference between the groups. A good way to visualize a 95% confidence interval is with a pair of distributions like this. So if you picture two groups of patients in a trial, with the distribution of results for one group here and the other here, and the confidence intervals don't overlap at all, then it's likely that the results are not due to chance, and they will be statistically significant. Whereas if you imagine the tails of the two confidence intervals crossing, so there are regions of overlap, then it's possible the results are due to chance, and it's very likely that the 95% confidence interval for the ratio crosses one in this instance. We'll now move on to odds, risk, and ratios, and what these all mean. So the odds of something, by a simple definition, is the chance of something happening versus the chance of it not happening. A good way to understand odds is with a dice, or die. You have six faces on a die: what are the odds of rolling any particular number, let's say a one, on the six-sided die? What would the odds be? Anyone like to give me an odds for this? Good, yeah, very good. So the answer is one in five, and I'll explain that in a second. But if I asked, what is the probability of rolling a one on the six-sided die? The probability here is one in six. That's probably quite straightforward for everyone to understand; the odds are less so.
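The "does the confidence interval cross the line of no effect" check can be sketched like this (an illustrative editorial addition using a normal approximation and hypothetical data, not from the talk):

```python
import math
import statistics

def mean_ci_95(sample):
    """Approximate 95% CI for a mean: mean +/- 1.96 * standard error
    (normal approximation; reasonable for large samples)."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return (m - 1.96 * se, m + 1.96 * se)

def crosses_null(ci, null_value):
    """A result is not statistically significant if its CI includes the null
    value: 0 for absolute differences, 1 for odds or risk ratios."""
    lo, hi = ci
    return lo <= null_value <= hi

# Hypothetical per-patient differences between two treatments.
diffs = [0.8, 1.1, 0.9, 1.2, 1.0, 0.7, 1.3, 0.9, 1.1, 1.0]
ci = mean_ci_95(diffs)
print(ci, "crosses 0:", crosses_null(ci, 0))
```

Because this hypothetical interval sits entirely above zero, the difference would be called statistically significant; an interval like (-0.1, 0.2) would not be, since it contains the null value.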
But, you know, the probability is one over all the possible options, six. The odds: the chance of me rolling a one is one, but there are five other options that I could roll that are not a one, so the odds are one to five. It's a difficult concept to grasp, but this is the best way of visualizing it. So the odds are one in five for the die, and the probability is one in six. Odds ratios, or ORs, are typically used in retrospective observational studies, and we'll give you an example here. An odds ratio of 1.5: what does that mean? A good way to explain this is that the odds of smoking were 50% higher in those with lung cancer compared to those without lung cancer. It's an odds ratio because it's the odds in one group divided by the odds in the other group; so it might be, say, 150 in one group compared to 100 in the other, and 150 over 100 is how we end up with 1.5. The 0.5 here is the percentage difference between the groups. Now, moving on to the risk ratio, this can be defined as, again, a ratio: the risk of something happening versus the risk of it not happening. Relative risk, unlike odds ratios, is typically used in prospective cohort studies, and we'll explain this with an example. Here's a risk ratio of 1.25: what does that mean? There is a 25% increased risk of developing lung cancer in the group that smoked compared to the group that did not smoke. So again, like the odds ratio, it's a ratio, dividing the risk in one group by the risk in the other. So 1.25 means, for instance, 125 cases versus 100, that is, 25 more patients in one group than the other. Obviously this can be the other way around; it could be less than one, but typically we show it as an increased risk ratio or increased odds ratio. You may have heard of something called the hazard ratio, and it's quite closely linked to relative risk. It's useful when risk is not constant with respect to time, and we'll show you the diagram, which you'll instantly recognize, in a second.
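The dice example and the two ratios can be pinned down in a few lines of Python (an editorial addition; the group counts are hypothetical, echoing the numbers above):

```python
# Odds vs probability, and the two ratios built from them (hypothetical counts).

def odds(events, non_events):
    """Odds = events versus non-events: rolling a one is 1 to 5."""
    return events / non_events

def probability(events, total):
    """Probability = events over all possible outcomes: 1 in 6 on a die."""
    return events / total

def odds_ratio(odds_exposed, odds_unexposed):
    """OR: odds in one group divided by odds in the other (case-control studies)."""
    return odds_exposed / odds_unexposed

def risk_ratio(risk_exposed, risk_unexposed):
    """RR: risk in one group divided by risk in the other (cohort studies)."""
    return risk_exposed / risk_unexposed

print(odds(1, 5), probability(1, 6))        # the die: odds 1:5, probability 1/6
or_ = odds_ratio(150 / 100, 100 / 100)      # odds 50% higher -> OR 1.5
rr = risk_ratio(125 / 1000, 100 / 1000)     # 25% increased risk -> RR 1.25
print(or_, rr)
```

Note how the same "150 versus 100" style of comparison gives the 1.5 and 1.25 figures quoted in the examples above.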
It's usually shown on Kaplan-Meier survival curves, where you have the percentage of the population surviving with respect to time, so it's essentially showing mortality rates. It's typically used in prospective randomized controlled trials, where we can collect data on the number of patients still alive, for instance, as time proceeds. So here we've given you a hazard ratio of 0.6. What does that mean? We've given you an example: over the course of 20 years, those who received the drug were 40% less likely to die than those who received the placebo. So here we're saying that those who received the intervention were less likely to die compared to the standard treatment or the placebo, and maybe that's enough to show our drug is very effective. It's typically shown on Kaplan-Meier graphs like the one here. You'll probably see these in lots of different types of randomized controlled trials comparing intervention X to standard of care or placebo, and it's important to be able to interpret them and say, you know, at five years, what was the hazard ratio, or how many percent of the population is still alive. And just to test the audience, can somebody tell me what each vertical line drop represents, typically, on the Kaplan-Meier curve? Yeah, exactly. So one line drop: what do you think that really means in layman's terms, George? How would you explain that to me? Yeah, exactly. So if I were to point at this one line, explain it to me like I'm a patient. It's not a trick; you could very much be asked to interpret this graph, and most panels have a lay representative on them. Essentially it just means, for example, that one patient has died at this point; that's all. You guys are right, I just wanted to push you to see if you could explain it like that. It's fine, but in the interest of time, I'm just going to move on. Yeah, right.
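The "each vertical drop is one death" idea can be sketched as a minimal Kaplan-Meier calculation (an illustrative editorial addition with hypothetical event times, and ignoring censoring for simplicity):

```python
def kaplan_meier(event_times, n_at_risk_start):
    """Minimal Kaplan-Meier estimate with no censoring: at each death, the
    survival probability is multiplied by (1 - deaths / number at risk),
    producing one vertical drop per event."""
    survival = 1.0
    at_risk = n_at_risk_start
    curve = []
    for t in sorted(event_times):
        survival *= 1 - 1 / at_risk  # one death at time t
        at_risk -= 1
        curve.append((t, survival))
    return curve

# Hypothetical trial arm: 10 patients, with deaths at months 3, 7, and 12.
curve = kaplan_meier([3, 7, 12], 10)
for t, s in curve:
    print(f"month {t}: survival probability {s:.2f}")
```

Each event steps the curve down: 0.90 after the first death, then 0.80, then 0.70, which is exactly the staircase shape seen on the 0-to-1 axis of a Kaplan-Meier plot.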
Okay, so that largely brings us to the end of the stats part of the section, before we move on to the next part, which is to go through an abstract, because we've now essentially completed the critical appraisal half of our SFP course. From next week onwards we'll be looking at other parts of the SFP: clinical prioritization and A-to-E scenarios; the week after, more A-to-E scenarios; the week after that, a Q&A session on making the most of your SFP; and in our final session, on the third of November, we'll be going over choosing, ranking, and accepting your jobs. So as we wrap up this first half of the series, we'd like to go through an abstract with you, make it as interactive as possible, and, as a group, try to critically appraise it and answer any questions you have about it. Before we do that, does anyone have any questions regarding ethics or stats? Sorry you're finding it a bit blurry; try refreshing, as it might be the connection. If anyone else is finding it blurry, do let us know. Alternatively, you can pull up the abstract I've just sent. Any questions? So, now that we have completed the first half on critical appraisal, you guys are more than ready to start practicing with each other: pulling up abstracts and critically appraising them. Aqua, where would you say are the best journals to read to practice for the SFP? So, the New England Journal of Medicine, the BMJ, the Lancet, JAMA, Nature; that's more than enough. Yeah, those would be the typical journals to pull abstracts from, but again, it could be any journal. Yeah, because you know they've been peer reviewed. Yeah, that's the important thing: they'll be peer reviewed, so you know they're decent quality as well. And in terms of survival probability, as far as I understand, I think it's very similar to survival.
Survival probability: if it's at 100%, that means at the beginning of the study, ideally, 100% of the patients are alive. And then, for example, if it's dropped to 0.9, that means 10% of patients have died, et cetera. So it may or may not represent survival in absolute numbers; it depends on rate versus absolute numbers. Yeah, Kaplan-Meier curves typically show a proportion or probability, so it tends to be a 0-to-1 axis, as opposed to, you know, 0 to 1000 patients. No, it's okay, George, it's fine. But every time you see a vertical drop, that basically means either one patient or a proportion of patients. So in that instance, yes, you're right, it may have been survival probability, and if each drop represented 0.1, that means 10% of those patients have passed away in some shape or form. All right, so if everyone could have a brief read of this abstract, with your critical appraisal structure in mind. Just bear in mind, we've given you an example structure in the last few sessions, but everyone's structure is different; my structure is different to Alex's. So the way we go through it might not be the same as your structure, but it largely covers the same fields. You can start picking out key terms and definitions and start preparing PICO in your mind, and we'll start to go through it in about one or two minutes' time. In the meantime, we'll just chat about the topic. So, appendicitis is a relatively common presentation in A&E and in general surgery; it happens to young kids, and it happens to middle-aged adults as well. There has been much debate about whether we actually treat them with antibiotics or actually do surgery. Of course, there are lots of factors that feed into that, and in medicine it's always good to have a bit of understanding about what you're reading. Appendicitis:
what kind of picture would you expect at a presentation of query appendicitis? I would expect right iliac fossa pain initially, then moving to generalized abdominal pain. Oh, no, vice versa. So, yeah, typically you lose your appetite, you may have some diarrhea; loss of appetite is the key one. Initially, most times, there's generalized abdominal pain, and at a late stage it will localize to the right iliac fossa. Yeah, and that's just the nature of appendicitis: it's kind of like initial inflammation all over, and as it focuses on the appendix and the appendix becomes very inflamed, that's when you get the right iliac fossa pain. There are a number of maneuvers you can do, such as a psoas stretch and a few others, but those are more to localize the position of the appendix, because if you visualize your abdomen, the appendix can lie in different positions. On bloods, the white cells might be elevated and the CRP might be up. Yeah, and then there's the debate about what kind of scan we do: should we be using ultrasound or CT, and what do we do in young kids, or in females? Essentially, this trial gave us some evidence, and we will now critically appraise it to see what we think about it. So basically, what you've kind of alluded to is clinical equipoise. Looking at this now, can anybody tell me the I and the C from PICO for this trial, and then talk about clinical equipoise? It's not a trick. Yeah, and why can I say that there might be equipoise for this trial, bearing in mind what I said earlier? Yeah, expand on that, George, because you're close. You guys are close, but basically what I'm alluding to is this: appendicectomy, as we know, is the gold standard; that's the current treatment, right? And no one is suffering; there's equipoise because both intervention groups are getting something.
But most importantly, one of them is getting the current gold standard. So we're comparing it against something, and that is a genuine question, because I know, for example, countries like Germany are using antibiotics to treat it, whereas in the UK, I think, we are very much "we will operate, we'll cut it out." So you guys were both close, but the key point was that we're comparing the gold standard against something new, rather than against placebo. Imagine if this were a trial looking at antibiotics versus placebo: that would not be good, because we know appendicitis needs to be taken out, right? Then I could say that trial did not have clinical equipoise. So, you know, close. Yeah. So let's make a start; Aqua has already made a start, so let's continue. So, for the research question, you might say something like: this is a study addressing the question of antibiotics versus appendicectomy for the treatment of appendicitis. And then you may or may not choose to refer to the clinical relevance. This is where you bring in a bit of your background knowledge on the topic, and I might say something like: this is an important research question because appendicitis is a common presentation to the A&E department in young adults and children. That's what I might say in one sentence. So now, if we talk about the PICO format: what is the population being studied in this trial? Anyone? Good, yes. The population here is patients with appendicitis. Depending on the trial, it might be limited to patients below or above a certain age, but here that's not a relevant factor; you may consider things like gender as necessary, but here it's adults with appendicitis. Yes, 25 different centers in the US; that's very good. And what about the intervention? We talked about the intervention versus control. Does it say which one is which?
I'm not sure it explicitly does, so you use your background knowledge and say, okay, maybe that's fine; as a clinician you know what the standard of care is. But here they're essentially comparing, as you said, antibiotics versus appendicectomy. And what is the outcome being analyzed in this trial? Yeah, that's really good, Amanda, and I'd like to pause on that for a second, because they've used a certain term for an outcome, and it sounds the same as a ball; like, for example, the big dance event in your last year of high school, where someone is eager to ask you to it. What am I hinting at? It's "patient reported", and then two letters afterwards. Yeah, PROM, exactly: a patient-reported outcome measure. So instead of a primary outcome like mortality or complications, they've used a PROM, which is fantastic, right? Because that goes hand in hand with our Nuremberg Code principles of making sure that the patient is at the center of the care, by asking them objectively and subjectively what they think about it, which is quite interesting. So, the thing I would say is: this is a trial looking at 1552 patients in America with appendicitis, and they have been randomized to receive either antibiotics or appendicectomy. The primary outcome was a patient-reported outcome measure using the EQ-5D, and secondary outcomes included the rate of appendicectomy in the intervention arm and 90-day complication rates. I would also say they utilized a surrogate endpoint here, as opposed to hard endpoints. So, just to show your knowledge, let's check some additional terminology.
Can anyone tell me the difference between surrogate and hard endpoints? Maybe we've touched on that. And what would an example of a hard outcome be instead? Good. So hard endpoints, or hard outcomes, are things we can directly measure, e.g. is the patient still alive or not. Surrogate markers are anything that is a proxy for what we want to measure: blood test results, or asking the patient to fill in a questionnaire to predict quality of life, which is very difficult to measure directly, so it's a surrogate marker. Okay, so let's move on to further bits of the critical appraisal. We can tackle study design: would anyone like to tell me about the type of study, you know, what the intervention type is, and whether there were any inclusion or exclusion criteria? Yeah, that's the study design. Okay, so I'm going to ask you guys a question: why is it not blinded? Yeah, you can pretty much tell: it's not possible; you can't really blind that, unfortunately. Okay, good. And then, I guess we've talked about the study design; moving on, can anybody tell me some positives of this study design, and of the overall study itself? Okay, someone other than Amanda. It is patient-centered, yes. Another benefit? No, it's okay, you're on it. I'm going to start picking on people, like George, for example, or Tobey or Ana or Becky, any one of you. Yeah, exactly, because we can see the NCT number there, which means that they registered the analysis a priori. I'll give you guys a hint: 1552 patients is kind of a big study sample size, right? It is, though I would want to see their power calculation. Yeah, good, it's multi-centered; that's a fantastic example, which will help with generalizability, right?
Because if it's just a single center, then automatically it's not, I guess, as generalizable as a multi-centered study. What about, let's say, the 90-day complications; what do you guys think about that, in terms of timing? I think it's an adequate time frame, and I think it's really good that they're looking at it, because their secondary outcome is also a safety outcome, which is good for the patients, right? And another nice thing, I guess, is that it is a clinically relevant question, because this will be continuously debated even when big studies like this exist, because surgeons want to cut, essentially. But in terms of some gaps, because I think that is what's really going to set you apart, and Alex, I'll invite you to help me as well: some of the things I would like to assess are, how have they diagnosed these patients with appendicitis? I think that's a relevant question; of course, they can't include that in the abstract, but these are things that you can tell your interviewer. So after you've done the PICO, after you've spoken about how awesome the trial is and how relevant it is, you can then go into how, actually, looking at the abstract, I am curious as to how they diagnosed the patients with appendicitis. I'm interested to know about the sample size and how they actually calculated it. I also note that it was funded by a research institute: how much influence did they have on the study design? Did they take a role in it? Oh yes, Mohammed, there is a feedback form; sorry, I'll send it to you guys now, and we'll continue. Yes, exactly, so it's not exactly capturing the whole patient population, is it? Exactly, yeah. And then, Alex, yeah, very good; you're all making very positive points about this. And if we look at the first line, let me see.
I'm not sure if you guys can see my pointer, but in the line where it says antibiotics were non-inferior to appendicectomy on the basis of 30-day EQ-5D scores, mean difference 0.01 points, with a 95% confidence interval: what do you think that is? Do you think that's significant or not significant? Yes, exactly, it's not significant, and I would argue that 0.01 points is actually really, really small. However, it doesn't necessarily need to be significant, because we actually want it to be non-inferior: when we're testing for noninferiority, we are not trying to show the treatments are equivalent, but that the new one has no clinically important difference from the old, that it is not meaningfully worse. So what this is really showing us is that, while it may not be statistically significant, it is clinically meaningful, because what this is saying is that antibiotics may actually be comparable to appendicectomy, at least in terms of patient outcomes for 30-day health status. And then, in terms of improvements I could suggest to the study, which I'm sure they have considered, because this is a New England Journal of Medicine paper: because they're assessing 30-day health status, and they're looking at 90-day complication rates, I could say, why not also assess health status at 60 days and at 90 days, for example, just to get more information? Just throwing that out there. But, you know, Alex, what else would you add? Yeah, everything sounds very good. So I think I'll just rattle through the key points I would pick up in a critical appraisal, and I appreciate that not everyone may have got to grips with a full structure yet, so I will signpost as I go through and make the key points obvious as I pick them up.
So, research question: this is a study addressing the question of whether antibiotics or appendicectomy is better for the treatment of appendicitis. Clinical relevance: this is an important research question because appendicitis is a common presentation to A&E in young adults. Population: the population was 1552 adults across the USA. Intervention: antibiotics, compared against the standard of care, or control, being laparoscopic appendicectomy. The primary outcome being measured was a surrogate marker using a patient-reported outcome measure, the EQ-5D, with secondary outcomes evaluating the rate of appendicectomy in the antibiotics group and complication rates at 90 days. So that was my introduction and PICO. The key finding of the trial was that treatment using antibiotics was non-inferior to appendicectomy on the basis of the primary outcome measure. Now, talking about the study design: this was a randomized controlled trial published in a high-impact-factor journal, the New England Journal of Medicine. They recruited 1552 adults with appendicitis. I would like to read the protocol or the full paper to evaluate whether this was random or continuous recruitment, whether there were adequate power calculations, and to evaluate the inclusion and exclusion criteria further, because as far as I can tell from the abstract, the inclusion criteria included adults with appendicitis. It is currently unclear what the duration of the study was and how long the recruitment went on for. And while it's not stated whether this is an intention-to-treat or per-protocol analysis, it appears to be an intention-to-treat analysis; I don't actually know, but I'm assuming, because they just said, you know, 1552 adults and then went straight to the results.
But of course I would check the full paper. The study received funding from the Patient-Centered Outcomes Research Institute and was registered a priori on the clinicaltrials.gov website. Okay, so now I would like to talk about the internal validity. If we pick up on randomization: we know that the patients were randomized in a 1:1 fashion to the intervention and control arms. It's not clear what type of randomization was used; this could have been computer-generated, sealed envelopes, or a coin toss, and it's not clear whether it was simple, stratified, or block randomization. It appears that there was a lack of blinding, which is not clearly stated, but that's fairly obvious here; just be aware that the types of blinding available are single, double, and triple blinding. Are there any confounding variables I can pick out? I mean, kind of, right? Here, many of the patients in one arm ended up needing the other intervention anyway. Sure, yeah. So for confounders, you want to look for any factors that could explain a link between the intervention and outcome, and confounding variables are always reduced by randomization; you can mention that randomization helps reduce any associated confounding variables. You can talk about bias now. So I would say that the use of randomization helped to reduce, what kind of bias do you think that really is, Aqua? Yeah, selection bias. And also the type of recruitment: whether it was continuous or random recruitment helps to reduce selection bias as well. You can mention that the lack of blinding may contribute towards observation bias: if the patient knows which treatment they received, they may act differently. You can mention that this is a prospective randomized controlled trial, which helps to reduce recall bias, which is typically associated with retrospective studies.
And also response bias, because everything is collected prospectively as well. And then the things I would focus on in the conclusions: did they report any dropout rates, early termination, or adverse effects? I mentioned that I'd like to read further into the paper, but as it's mentioned towards the bottom of the results, the rate of serious adverse events was higher in the antibiotics group, so they do report that. And then I'd mention the external validity as well, since we've touched on the internal validity. This is a clinically relevant study, because appendicitis is common around the world, and it's relevant because, if proven to be just as effective, antibiotics are a lot cheaper than going to theater. However, I would take these results with caution, because the population here is only American patients, so it's not worldwide and may not apply to patients in the UK. Also, the way surgery is done may be different across the world: here we do laparoscopic, but in other parts of the world it may be done open, so you can't generalize the results as such. To touch upon ethics, I would mention that it's not clearly stated, but I would hope that the authors complied with the Declaration of Helsinki and that all patients gave consent. And then I would round up with a summary, a one-line conclusion about the strengths and limitations of the study: this is a randomized controlled trial that showed that antibiotics were non-inferior to appendicectomy in the treatment of appendicitis. Although the results were not necessarily statistically significant, this may aid my practice going forward; but of course I would wait for further evidence, such as systematic reviews and meta-analyses of further randomized controlled trials, to generalize the results. So, lots of things to cover, but of course you wouldn't say everything I said.
Those are more to signpost you, and there's a lot to say about these. Does anybody have any questions? I'm sorry we went a bit longer, but it's literally just because we wanted to give you guys an example of an abstract and one of us doing a critical appraisal, to show you what we would have said in our actual interview. Any more questions? Otherwise, please can you do the feedback, pretty please. The feedback helps guide our future sessions. We've had feedback asking us to make it more interactive, with some more examples, so we made today's session especially interactive, and we ended this first half of the series with a worked example; we'll do so with the A-to-E scenarios as well. So feedback does guide our future sessions, so please do provide it. We'll stay for a few more minutes if anyone has any questions about any terminology in this abstract, or any questions about today's session. Thank you for joining, everyone. I would like to say that I think Alex really strung together all of the concepts that we've been talking about over the past few sessions, because he mentioned key terms, like surrogate, and strung them all seamlessly together, and that's what hopefully you guys are working towards. Obviously you don't need to get every single key point, but you need to demonstrate that you know what you're talking about, because ultimately these things depend on who you get as an interviewer; as long as you impress them, you're okay. And bear in mind that your critical appraisal part of the interview won't be 10 minutes of you speaking: it might be that you start off, you mention PICO, and they'll cut in, if you like, with "what is the main finding of the study?" before you can finish, or cut you off with "is the study ethical?"
So you will be interrupted, and under that pressure, maintain your composure and keep your talk composed. For example, I know people who were cut into and it completely ruined their concentration; but as I said in the previous session, I also know people who had blank stares for the entire 10 minutes, with zero communication whatsoever. So again, you never know, and some days you may even get a good cop, bad cop; it's all up to whatever they want to do. No questions, guys? Otherwise, you know where to find us; please follow us on Twitter, and feel free to ask us any questions there as well. All right, we'll see you all next week for the A-to-E scenarios as we wrap up with the clinical half of the SFP. Yeah, thanks, guys. Bye.