
SFP INTERVIEWS: KEY CRITICAL APPRAISAL TERMS AND QUESTIONS


Summary

This on-demand teaching session focuses on critical appraisal terms and questions relevant to medical professionals. You will learn what a critical appraisal is, key terms and definitions, basic statistics, tables and figures, and the Nuremberg Code and Declaration of Helsinki. You will also go through research biases and internal and external validity, the factors used to judge a study's trustworthiness and its relevance in a particular context. A lively Q&A session will follow, allowing attendees to ask questions related to these topics and more. Join us for this invaluable opportunity to learn more about critical appraisal and the SFP, and set yourself up for success.

Generated by MedBot

Description

Join our SFP prep course and learn how to maximise your application success this year!

Learning objectives

  1. Describe two ethical frameworks used to evaluate medical research: the Nuremberg Code and the Declaration of Helsinki
  2. Name four types of bias that can occur in medical research and explain how to mitigate each
  3. Define internal and external validity and key features of each
  4. Describe the structure of a critical appraisal and discuss strengths and weaknesses of a study
  5. Explain the differences between efficacy and effectiveness of a medical intervention/treatment


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Hi, everyone. Hopefully you can hear me; please put in the chat if you can't. We're just going to wait until five past to see whether a few more people join, and then we'll get started. Hi to everyone that's just joined; we're waiting a couple of minutes for a few more people to join the call, so feel free to grab a drink. OK, we'll get started. Hi, everyone, thank you so much for joining this evening for our Norwich SFP application course. If you don't know me, my name is Nesta; I'm a current SFP in Norwich, and I'll let everybody else introduce themselves. Hey, I'm Angie, I'm also an SFP in Norwich. Hi, I'm Maya, and I'm also an SFP in Norwich, in general surgery at the moment. Great. So today's session is going to be about critical appraisal terms and questions. I'll do a quick introduction and then hand over to Angie, who's starting. We've already had three of our sessions, and hopefully you found those useful; we've still got quite a few left to come in preparation for interviews. Today we're going to go through what a critical appraisal is, some of the key terms and definitions, some basic stats, then tables and figures, and finally a Q&A session where you can ask us questions about this, general questions about the SFP, or any last-minute questions about white space questions. So firstly, what is a critical appraisal? In this session we're not going to go through the structure or a worked example of doing a critical appraisal; we're just going to focus on the basics to get you started in appraising a paper.
I thought this was a really good definition: a critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness and its value and relevance in a particular context. It's really important to think about those three points: the trustworthiness, the value and the relevance. We'll think about that a bit more in the next session, where we cover a structure that I find really useful for critically appraising. Really, it's just a brief overview of the study: you weigh up the strengths and weaknesses, the good and bad sides of the study; you discuss the key findings very succinctly; you judge whether the study is reliable and whether it's valid; and then you apply it to your own practice in terms of its relevance. OK, next slide. I'm just going to hand over to Angie, who's going to run through the recommended key terms. OK, so I'm going to go through some definitions, as there's a lot of terminology thrown around when critically appraising and it's important to know what it means. As it is a long list of definitions, please feel free to unmute yourself and ask questions, or ask them in the chat. Next slide. The first one is research equipoise, which means that there is a state of genuine uncertainty on the part of the researcher when comparing the merits of the diagnosis, treatment or prevention options being compared in a trial. It means that you're doing the experiment to truly find out if one is better than the other, because if you already know A is better than B there's no need to do a trial, and it would be unethical to randomise a group of patients to B when you know that A is actually better.
An important feature is that if, at a certain point in a trial, you realise that A is far superior to B, you no longer have equipoise, and at that point an ethics board might need to convene and discuss whether it's ethical to carry on the trial to completion. One paper where this is a clear example is the dexamethasone paper that came out during COVID. When it started, we didn't know if steroids would help; it was quite early in the pandemic, so there was equipoise. Then, as the 28-day mortality data came in, it showed dexamethasone was significantly better, so any papers after that used dexamethasone as part of their control groups, because it would be unethical to run a study without it when there is a steroid known to improve mortality in certain patients. Next slide. Also along the lines of ethics are the Nuremberg Code and the Declaration of Helsinki. The Nuremberg Code was a statement released after the Second World War, following the Nuremberg trials, and it was a statement of ethical principles for experiments and medical research involving human subjects. The Declaration of Helsinki built on that further and is more specific to clinical trials. The basic gist is that all participants need to volunteer and consent, have the capacity to understand the risks of what's going on, and any research needs to be for the greater good of society. These are the statements of the Declaration of Helsinki; I'm not going to go through them all, but they're fairly standard and make sense. Another thing you might be asked, ethics-wise, is to use the four-pillar structure to look at the ethics of a trial.
So: does the patient have capacity to consent, was the trial acting in their best interests and avoiding harm, and is there justice and fair allocation of resources? Now we're going to look at validity, which is quite difficult to define, but it's essentially how trustworthy and important the conclusions of the study are. The best definition I found is: the extent to which an intervention measures what it's supposed to measure, or accomplishes what it's supposed to accomplish. This can be split into internal validity, which looks at how robust the study design is, and external validity, which means how much the results of a study can be applied to the general population. Internal validity would mean that the conclusions they came up with are warranted and trustworthy, and external validity would mean that they can be applied to a larger population. With internal validity, a lot of this we'll go through in our next session where we break down the structure, but it's making sure there's a big enough sample size, biases are reduced, and the study has been undertaken in a fair way. The study design is what we're going to go through next; what we'll go through now is the biases. Biases are anything that can influence the result of a trial other than the experimental intervention. As you can see, there's a long list; I've just picked out the most common ones as well as the important ones in my opinion. I think you can find a list of about 100 different types of bias, and they all differ from each other on technicalities, but a lot of them can be grouped, and as you critically appraise more and more you'll become familiar with these terms. The first one is selection bias, which is about how you pick your participants, and this happens in the recruitment phase.
If you pick high blood pressure patients from a hospital clinic, you're only getting the patients whose condition is bad enough to be in a hospital clinic; you're not getting any of the ones managed at a GP, and that will influence your results. Performance bias is how the patient's behaviour changes: if a patient is on a weight-loss therapy trial and they know they're in the study, they're more likely to stick to it, and that could give a different result to what you would see in reality. Observer bias is similar to performance bias, but on the researcher's side: if you're the researcher and you measure the weight twice, you might pick the lower one because you subconsciously want to show that your intervention works. Attrition bias is when, between the two treatment arms in a trial, more patients leave one group than the other, which skews the statistics and can reduce statistical power; we'll talk about how to mitigate this. Publication bias is just that significant findings are more likely to get published, so any paper without a significant finding is less likely to get published, or will get published in smaller journals, so there's a bias in favour of positive results. And confounding bias is any factor that influences both the exposure and the outcome. For example, if smokers are more likely to drink alcohol, and smoking causes lung cancer, there might technically be an association between alcohol and lung cancer, but that's confounded by the effects of smoking. So it's about trying to find other factors that can influence both. We mitigate biases using a lot of different techniques, the most common being randomisation and multicentre designs, which allow any confounder to be equally balanced between the two groups.
Selection bias is decreased if you're recruiting lots of people from different centres: you're increasing the number of participants and reducing systematic errors in selection. For performance and observer bias, you blind. With attrition bias, it depends on how you analyse it: you can use an intention-to-treat analysis or a per-protocol analysis. You can also look at efficacy of a treatment versus effectiveness. Efficacy asks: in an ideal situation, does the drug treat what it's meant to? Effectiveness is about reality: if 50% of participants taking the drug stop taking it because of the side effects, it's not as effective, even if it's 100% efficacious. For confounding bias, randomisation can distribute the confounder evenly between the groups, or you can look for confounders that you know will have an effect and match them equally between the two groups; you can stratify them and analyse them like that, but that reduces the power; and you can also try a multivariate analysis. Next slide. External validity is looking at how the results of the study can be applied in the real world. You're asking: is the population they studied generalisable, is it a big sample size, and was it seen across multiple centres? If one study looks at four people in a small group in the north of England versus a study looking at hundreds of thousands of participants across the world in lots of different hospitals, the second study is a lot more likely to be externally valid. We also look at whether the intervention is acceptable, whether someone in the real world is likely to have this intervention to treat this condition, and whether it's better than the existing gold standard. You also need to look at the number needed to treat and the number needed to harm.
There's no point saying that a treatment option is amazing if you need to treat a million people for one person to get the benefit. You also consider whether it's financially feasible, and whether it needs specialist equipment so it can only be delivered in certain settings. It's a bit of a balancing game, as this meme shows: confounding variables will mask the effect, but if you control for them, that introduces a bias and makes the study less externally valid. Next slide. That's where the hierarchy of evidence comes in. All study designs inherently contain systematic errors and therefore have some sort of bias, which is why we prefer higher-quality evidence: your systematic reviews, your meta-analyses, your randomised controlled trials, because they have less bias. You also have to use multiple different sources of evidence; you can't just take one randomised controlled trial or one cohort study and base your entire medical practice on that. Using this hierarchy helps minimise the biases and collate a lot of data to point towards the true result. Hi everyone. Thanks Angie, that was a very good run-through of some of the key terms and definitions, and Angie also touched briefly on some of the things we're going to speak about today. There's a lot of statistics you could study for the critical appraisal; I'm going to go through the main ones, not all of them. You don't need super in-depth knowledge of statistics, just enough to interpret the results, and, if you conducted a study, to be able to do some of the calculations. First of all, p-values and confidence intervals, probably the most important bit to know.
We use confidence intervals and p-values to determine whether a result is likely to have arisen due to chance. We usually use a p-value threshold of 0.05: if p is less than 0.05 the result is statistically significant, and if p is less than 0.01 it is highly statistically significant. Essentially, with p less than 0.05, the probability that a result appears statistically significant when it isn't is 5%, and it would be 1% with p less than 0.01. So it's the probability that a result could have arisen due to chance when it isn't really significant. Confidence intervals are a range of values within which you are a certain percentage sure that the real value lies, for example the real mean difference between two groups. Usually we use 90%, 95% or 99% confidence intervals. As for calculating them, I doubt they'll ask you to, but it's good to know: depending on the percentage, you use a different multiplier, and there's a calculation behind each specific number that we don't really need to know. The most important one, if you're going to learn one, is the 95% confidence interval: the sample mean plus or minus 1.96 times the standard error (it's a different multiplier for 90% and 99%). Again, the most important thing is that you can interpret these values. For statistical significance: if the result is an absolute difference, so not an odds ratio or a risk ratio, then if the confidence interval crosses zero it's not statistically significant; and if it's a relative risk or an odds ratio, then if the confidence interval crosses one it's not statistically significant.
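The "sample mean plus or minus 1.96 times the standard error" rule above can be sketched in a few lines of Python. The numbers in the example are hypothetical, purely for illustration:

```python
import math

def confidence_interval_95(sample_mean, sample_sd, n):
    """95% CI: sample mean +/- 1.96 * standard error, where SE = SD / sqrt(n)."""
    se = sample_sd / math.sqrt(n)
    margin = 1.96 * se
    return (sample_mean - margin, sample_mean + margin)

# Hypothetical example: mean BP reduction of 8 mmHg, SD 12, n = 144 patients
low, high = confidence_interval_95(8, 12, 144)
print(f"95% CI: ({low:.2f}, {high:.2f})")  # prints 95% CI: (6.04, 9.96)
```

Since this interval does not cross zero, the (made-up) mean difference would count as statistically significant at the 5% level; for a 90% or 99% interval you would swap 1.96 for the corresponding multiplier (about 1.645 or 2.576).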
That's because it means there is a possibility of no difference: zero in the case of an absolute difference, or one in the case of a relative risk or odds ratio. Those are the most important things to know for that. Next I'm going to go through risk, but just before I get into that, it's important to know the difference between incidence and prevalence. Incidence is the number of new cases of an illness during a specified time period in a given population, and prevalence is the proportion of the population who have a specific characteristic in a given period. The way I learned it is that incidence is the number of new cases, so how many people develop a certain illness over a certain period of time, whereas prevalence is just the number of people who have that illness, not necessarily those who newly develop it. The two main types of risk are absolute risk and relative risk. Absolute risk is calculated as the number of events in a group (the treated or exposed group, or the control group, depending on which you're calculating) divided by the number of people in that group. Relative risk is the probability of one outcome over the probability of another: the cumulative incidence in the treatment group over the cumulative incidence in the control group, or in other words the absolute risk of the treatment group over the absolute risk of the control group. So to calculate the relative risk, you first calculate the absolute risk of a certain event happening in the treatment group.
For example, say you're looking at the number of people who develop a PE. You take the number of people who develop a PE in the treatment group over the total number of people in the treatment group, and that's the absolute risk for the treatment group. Then you divide that by the number of people who developed a PE in the control group over the total number of people in the control group, which is the absolute risk of the control group. I'll go through an example later; I know it's a bit hard to explain just from the theory. The absolute risk reduction is the amount by which your intervention reduces the risk of a bad outcome; you calculate it as the risk in the controls minus the risk in the treatment group. The number needed to treat is one divided by the absolute risk reduction, which is the number of people you need to treat to prevent one person from getting the bad outcome. So if we were investigating the effect of a thrombolytic drug on the number of people who develop a PE, the number needed to treat would be the number of people you would need to treat with that thrombolytic drug to prevent one PE. Also, I can't really see the chat while I'm presenting, but feel free to unmute yourself and ask questions. Now we're going to go through a little example of relative risk. Let's stick with the same example: the drug is a thrombolytic drug, the intervention we're investigating, and the disease is PE. To calculate the relative risk of developing a PE, you first calculate the absolute risk in the treatment group. There are 100 people in each group, so you would take 10, the number of people in the treatment group who develop a PE.
That's the number of people who develop a PE, 10, over 100, which is 0.1, and that's the absolute risk of the treatment group. The absolute risk of the control group would be 40 divided by 100, the number of people in the control group, which is 0.4. The relative risk of developing a PE if treated with this thrombolytic drug is 0.1 divided by 0.4, which is 0.25. This means that taking this thrombolytic drug reduces the relative risk of developing a PE by 75%; or you can think of it as whoever is in the treatment group being 0.25 times as likely to develop a PE compared to those in the control group. Then if you do 0.4, the absolute risk of the control group, minus the absolute risk in the treatment group, that gives you the absolute risk reduction, which is 0.3. And if you do one divided by that number, that gives you the number of people you would need to treat in order to prevent one person from developing a PE. In terms of interpretation, again the most important thing is to be able to interpret this, because on the day of the interview I don't think you'll be asked to do the calculation; you might, but it's more likely that you'll see a relative risk in the results and have to interpret it. For example, in this case the relative risk was 0.25, so less than one, suggesting this thrombolytic drug is associated with a reduction in the risk of developing a PE. If the relative risk were more than one, it would mean the drug was associated with an increased risk of developing a PE. And if the relative risk is one, it means the drug makes no difference to whether you develop a PE or not.
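The worked example above (10 PEs out of 100 in the treatment group versus 40 out of 100 in the control group) can be checked with a short Python sketch; the function name and layout are just illustrative:

```python
def risk_measures(events_tx, n_tx, events_ctrl, n_ctrl):
    """Compute RR, ARR and NNT from event counts in two trial arms."""
    ar_tx = events_tx / n_tx          # absolute risk, treatment group
    ar_ctrl = events_ctrl / n_ctrl    # absolute risk, control group
    rr = ar_tx / ar_ctrl              # relative risk
    arr = ar_ctrl - ar_tx             # absolute risk reduction
    nnt = 1 / arr                     # number needed to treat
    return rr, arr, nnt

# The thrombolytic/PE example: 10/100 on the drug vs 40/100 on control
rr, arr, nnt = risk_measures(10, 100, 40, 100)
# rr = 0.25 (a 75% relative risk reduction), arr = 0.3, nnt ≈ 3.3
# i.e. treat roughly 4 people (rounding up) to prevent one PE
```

In practice the NNT is usually rounded up to the next whole person, since you can't treat a fraction of a patient.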
So even if you don't know exactly how to calculate a relative risk, as long as you know how to interpret it, that's the main thing. Next: odds and hazard ratios. It's very important to differentiate risks from these ratios, because absolute risk and relative risk are used in prospective studies, so randomised controlled trials or cohort studies, whereas odds ratios and hazard ratios are used in case-control or retrospective studies, because you're looking at the association between the exposure and the outcome. I like to use the example of smoking and lung cancer: that would be a classic case-control study, where you want to find out the association between smoking and developing lung cancer. The way to calculate the odds ratio is the odds of exposure among the cases, the cases being those who have lung cancer, over the odds of exposure among the controls, the controls being those who don't have lung cancer. This can only be used if you've already chosen an outcome from the start: the people who have that outcome are the cases, and they go in the numerator; the people who don't have that outcome are the controls, and they go in the denominator. Another way of putting it is that you take the number of exposures and divide it by the non-exposures, for both the case and the control groups. So for the people who have lung cancer, you divide the number who were exposed to smoking by the number who did not smoke; that value goes in the numerator. In the denominator, out of the people who did not have lung cancer, you divide the number who smoked by the number who did not smoke.
So you have those two ratios, and then you divide them: in the numerator you always have the people who have the outcome, the people who have lung cancer, and in the denominator you always have the controls, who do not have lung cancer. I'll go through an example; I know it sounds a bit complicated when I just talk through the theory with no context. The hazard ratio is essentially the same; I think about it as the same as an odds ratio, but occurring over a given interval of time, a specific time period, and it's calculated in exactly the same way. So let's go through an example, where the disease is lung cancer. We're asked to calculate an odds ratio for this case-control study investigating the association between smoking and lung cancer. We've already chosen an outcome: people who have lung cancer are the cases, and people who don't have lung cancer are the controls. To calculate the odds ratio, first we calculate the odds of exposure in the cases. We take the cases, the first row: the number of people who were exposed to smoking is 30, and we divide that by the number who were unexposed to smoking, which gives 0.43. So we've calculated the odds of exposure in the cases; that's our numerator. Our denominator is the number of people exposed to smoking in the non-lung-cancer group divided by the number not exposed to smoking in that group, so that's 20 divided by 80, which is 0.25. The odds ratio is 0.43 over 0.25, which is 1.71. This tells us there is a 71% increase in the odds of the disease with the given exposure: a 71% increase in the odds of developing lung cancer in someone who smokes.
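The odds ratio arithmetic above (30 exposed / 70 unexposed cases versus 20 exposed / 80 unexposed controls) can be sketched as a tiny Python function; the names are illustrative only:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """OR = (odds of exposure among cases) / (odds of exposure among controls)."""
    odds_cases = exposed_cases / unexposed_cases          # numerator: cases (have outcome)
    odds_controls = exposed_controls / unexposed_controls # denominator: controls
    return odds_cases / odds_controls

# The smoking / lung cancer example from the 2x2 table:
# cases: 30 smokers, 70 non-smokers; controls: 20 smokers, 80 non-smokers
print(round(odds_ratio(30, 70, 20, 80), 2))  # prints 1.71
```

Note the exact intermediate odds for the cases is 30/70 ≈ 0.4286, which the talk rounds to 0.43; working with the unrounded values gives the same OR of about 1.71.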
To interpret odds ratios, remember it's similar to the relative risk. If the odds ratio is one, there is no association between the exposure and the outcome. If the odds ratio is greater than one, there are greater odds of the outcome with the exposure, so the exposure increases the probability of the outcome occurring; we can't say there's a causal relationship, but at least there's an association. And if the odds ratio is less than one, it might seem that being exposed to something, in this case smoking, reduces the probability that the outcome will occur. The way I remember this is that in the numerator you have the odds of exposure in the cases and in the denominator you have the odds of exposure in the controls, so if the top one is higher than the bottom one, there is a greater association between the exposure and the outcome. Something else which is quite important to remember, and which they could ask you on the day of the interview, is the difference between intention-to-treat and per-protocol analysis. In an intention-to-treat analysis, all the subjects who were initially randomised are included in the final analysis of the data, regardless of whether they dropped out or whether they completed the protocol as the study required. Per-protocol means that only those subjects who complied with the protocol of the study are included in the final analysis. The advantage of intention-to-treat is that it maintains the effect of the randomisation, whereas if you do a per-protocol analysis you're losing that randomisation, because you're selecting for the people who completed the study correctly.
That might also select for certain confounders. Intention-to-treat analysis therefore reduces the risk of selection bias, because randomisation is the way of minimising the risk of selection bias. Intention-to-treat analysis is also more representative of real life, because in real life there will be subjects who are not able to complete the study for certain reasons, and patients who are not going to be able to take a certain drug for whatever reason. So it reflects real life, which is why intention-to-treat analysis is usually preferred. However, the one advantage of a per-protocol analysis is that it really does show whether the intervention was effective in those who fully adhered, and it might show the benefits of ensuring that the protocol is adhered to. So remember what they both mean, and also remember the advantages and disadvantages of both. Another really important thing to know for your interview is the difference between type one and type two errors, because you might be asked what type of error you can see in the study you're critically appraising. I don't have a quick way of remembering which is which; I think you just have to try and remember it, and if anyone has any tricks, let me know afterwards. A type one error is a false positive, meaning you reject a true null hypothesis. The null hypothesis always says that there is no significant difference between one treatment and another, that a certain drug is not better than the control. So with a false positive, we're saying that there is a significant difference when actually there is no significant difference, when actually the null hypothesis is true, and you can avoid this type of error.
Well, you can avoid both types of error by ensuring there is a large sample size, and you can avoid a type one error by lowering the significance level. That means that instead of setting a confidence interval of 95% and a p-value threshold of 0.05, you set a confidence interval of 99% and a p-value threshold of 0.01. A type two error is a false negative, meaning you accept a false null hypothesis: you conclude that the result is not statistically significant when actually it is. The way you avoid this is again by increasing the sample size, and by increasing the significance level. So remember: for a type one error you lower the significance level to reduce the chance of a false positive, and for a type two error you increase the significance level, so from 0.01 to 0.05, to reduce the probability of a false negative occurring. Also, for a type two error, you can increase the statistical power to reduce that type of error. OK, that's all from me. I know I haven't covered everything, but those are the basic statistics that you would need to know. I'm going to hand over to Nesta. Great, thank you so much, Maya. Next, we're going to talk a little bit about figures. Just before I do, I think somebody asked in the group chat about when you have to do critical appraisals for the SFP, in terms of the leadership or education pathways. My honest answer is I'm not sure about the leadership and education side of things for interviews; it all very much varies and is very much deanery-specific. For Norwich, they definitely do, and that is all available on the website, but for other deaneries I'm not entirely sure. The best place to look is on the Units of Application websites, because they often have a nice breakdown of what's involved in their interviews.
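The link between the significance level and the type one error rate described above can be illustrated with a quick simulation under the null hypothesis. This is an illustrative sketch (not from the session): two identical groups are compared many times, and roughly 5% of comparisons come out "significant" at alpha = 0.05 purely by chance.

```python
import math
import random

def z_pvalue(mean_diff, sd, n):
    """Two-sided p-value for a difference of two group means (known SD, n per group)."""
    se = sd * math.sqrt(2 / n)                       # standard error of the difference
    z = abs(mean_diff) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Simulate many trials where the null hypothesis is TRUE (both groups identical):
# the fraction of "significant" results is the type one (false positive) rate.
random.seed(0)
alpha, trials, n = 0.05, 2000, 50
false_positives = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    if z_pvalue(diff, 1, n) < alpha:
        false_positives += 1

rate = false_positives / trials
print(rate)  # close to alpha = 0.05; lowering alpha to 0.01 lowers this rate
```

Re-running with `alpha = 0.01` drops the false positive rate to around 1%, which is exactly the "lower the significance level to avoid a type one error" point, at the cost of more false negatives (type two errors) when a real effect exists.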
But if not, it's best to speak to somebody who has previously done the interview, and hopefully they'll be able to let you know what is usually involved, so you can prepare rather than being surprised. We'll come back to the question about risk that's come in once I've done my figures slide, if that's alright. Cool. So today we're going to go through the baseline characteristics table, the study flow chart and also forest plots, because sometimes they may want you to critically appraise something with a structure, and other times they may just pick out a figure and ask you to describe and explain what it means, or what a certain statistic means. So we're just going to go over the figures briefly. Next slide, please. OK, so first of all we've got the baseline characteristics table. It might seem quite an obvious table at first glance, but in your interview you don't want to just read off the table; you want to make some sort of judgment from the table about what the study is showing. So, a couple of things to think about. Obviously you could read straight across and say, well, the age in each group is equal, and because of the P value it's not statistically significant, and all of that sort of thing. But think carefully about which key phrases to pick up. Think about selection bias: for the proportions of individuals in each group with the reported characteristics, are there actually any statistically significant differences? If one population has a much older cohort within it, that might introduce some selection bias. Realistically, you want to be looking for age and sex matching between the two groups as a bare minimum. If they're not matched, then you need to question why, and you need to state that.
Next, think about the confounding factors, which were touched on earlier. Are all of the potential confounding factors accounted for? So in this example you've got the cardiovascular system, so it accounts for things like angina and hypertension; the next one could be MI, it could be AF, I can't quite see it. But you want to account for all of those confounding factors. If this is a study of a condition where smoking matters and they've not put smoking into the baseline characteristics, you want to speak about that, because it could be a potential confounding factor that they've not accounted for. So why is that? Is it something they're hiding, or something they've just not thought about? And then finally, something else to think about for these baseline characteristics tables is whether the difference the results show is causal. What that means is: if there are no differences between the groups, so age and sex are matched, there are no unaccounted confounding factors, and the groups look relatively equal across all of these baseline characteristics, then the only difference between the two arms in an RCT is the treatment allocation, one with placebo and one with the proposed drug. Then the observed difference can be attributed to the treatment, and the result is likely to be valid. Next slide, please. So next we've got the study flow charts. Again, you could be asked about these, and again, at first glance all of these sorts of things are quite basic, but it's pulling out something that's going to impress the examiners and interviewers, showing that you know what you're looking for when analyzing these figures.
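As a rough illustration of checking baseline balance between two arms, here is a small sketch of a two-proportion z-test implemented from scratch; the smoker counts below are invented purely for the example and do not come from any real table:

```python
# A toy check of baseline balance between two trial arms using a
# two-proportion z-test built from scratch; the counts are invented
# for illustration, not taken from any real baseline table.
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for H0: the two group proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical baseline characteristic: smokers in each arm
p = two_proportion_p(30, 150, 38, 150)
print(f"p = {p:.3f}")
```

A p-value above the study's significance threshold here would be consistent with the arms being balanced for that characteristic; a small p-value would be the kind of imbalance worth raising in the appraisal.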
And it's about how you do it as well: not just reading what you see in front of you, but actually making some sort of judgment about what the figures show in the context of the text. So, in the study flow chart, are all the patients accounted for? That might seem really obvious, but if you compare the two arms and there are patients that have gone missing, where have they gone? In a valid, reliable study, you don't want patients unaccounted for. The next thing is attrition bias: is there a big loss of patients from one of the groups? For example, using a drug trial again, you've got one group undergoing placebo and one group undergoing the proposed drug. If loads of the patients have dropped out from the drug group, the treatment allocation group, why is that? Think about attrition bias if you've got a big loss in one of the groups; there's probably a reason for it, it should be accounted for, and you can discuss it in relation to the study. And the last thing is something that Maya touched on: is this an intention to treat analysis? That will decide how you interpret the figure, whether it was per protocol or intention to treat, so state what that shows in relation to the figure. Next slide, please. Finally, we're going to go through a forest plot. It would be less likely for you to be given a meta-analysis rather than a randomized controlled trial for your critical appraisal, but it could happen, and it is known to happen that rather than giving you a whole study to critically appraise, they just throw random figures or statistics at you and you have to be able to interpret them appropriately. So, what is a forest plot?
So, essentially, it's just an interesting way of showing what the meta-analysis shows within the systematic review. The vertical center line represents an odds ratio of one, the line of no effect. All of the included studies are listed on the left-hand side, usually in the order they've been referenced, and the bottom row is the total with its confidence interval. Where each square sits is the point estimate of the odds ratio for that study: the closer it is to one, the closer it is to no effect. The size of the square usually represents the weight of the study within the meta-analysis, which reflects the sample size used in that study and the statistical power it holds. And then at the bottom you have a diamond shape rather than a square. The only reason it's a diamond is because it's the total from all of the studies, so it makes it a bit clearer on the diagram; it's the combined result of the trials. If the diamond does not cross that vertical line, the line of one, the line of no effect, then the pooled result is statistically significant. OK. Basically, the best way to understand forest plots is to read a systematic review and meta-analysis and then look at the forest plots; you'll be able to link the two together. Just reading about them in isolation, they're quite difficult to understand. And then a few recommended resources. For quite a few people, this might be one of the first times you've been asked to critically appraise or use statistics, unless you've done a prior degree or done this in the past.
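To connect the forest plot description above to actual numbers, here is a sketch of fixed-effect (inverse-variance) pooling of odds ratios, which is one common way the squares, weights and diamond arise; the three studies and their standard errors are invented for illustration:

```python
# A sketch of the numbers behind a forest plot: each study's log odds
# ratio is pooled with inverse-variance (fixed-effect) weights.
# The three studies below are invented purely for illustration.
import math

# (odds ratio, standard error of the log OR) for each hypothetical study
studies = [(0.80, 0.20), (0.65, 0.15), (0.90, 0.30)]

weights = [1 / se**2 for _, se in studies]  # bigger study -> bigger square
pooled_log = sum(w * math.log(or_) for (or_, _), w in zip(studies, weights)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))

pooled_or = math.exp(pooled_log)                  # center of the diamond
ci_low = math.exp(pooled_log - 1.96 * pooled_se)  # diamond's left tip
ci_high = math.exp(pooled_log + 1.96 * pooled_se) # diamond's right tip

print(f"pooled OR {pooled_or:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

In this invented example the whole confidence interval sits below one, so the diamond would not cross the line of no effect, which is exactly the visual check described above.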
Sometimes it's a bit different when they ask you to do this verbally, in quite a pressured environment, so make sure you know what you're talking about rather than stumbling onto something else. So, a couple of recommended resources. There's Kitty Wong on YouTube, for not just this but other things as well; really, really helpful, with some excellent SFP/AFP videos, especially around critical appraisal and the other sections of the interview process. Usually I don't recommend things that you have to pay for, but this one is only £2.99: it's an Amazon Kindle book, a critical appraisal guide for the academic foundation programme. It's not a necessary thing, and it's a bit outdated, from 2014, but I found it really helpful and really well structured, and it's not too much. Then there's the 123 SFP resource; that's really good, and there's a link on there. And then the academic medic SFP course; again, it's £14.99, but it's for charity and very helpful. OK, so thank you so much to Angie and Maya for really insightful and helpful presentations. If anybody has any questions, pop them in the chat; we've got one there. So it says: how do you reduce both types of errors at the same time without increasing the risk of the other? Really, the only way to do that would be to increase the sample size. If you decrease the significance level, you will be reducing the chance of a type one error occurring, but you will be increasing the chance of a type two error occurring. So unfortunately you can't, other than by increasing the sample size.
But yeah, obviously, usually if you reduce the chance of a false positive, you increase the chance of a false negative. Yeah. And just to add to that: in research you can't really avoid all biases or reduce all the errors and make a study perfect. That's just one of the truths you have to accept; you balance the pros and cons of different types of studies and different types of statistical analysis to balance everything out. Great. I've just popped the feedback link in the group chat; please, please fill in the feedback. I've also put in there the mock interview sign-up link and the sign-up for the next event. OK, great, next question: is the Norwich interview panel based? I'm not sure about this year, but for us last year it was all on Teams, and it was a bit like an MMI for medical school interviews, where you did one station, were put back into the waiting room, then were sent to a different station, back to the waiting room, and then another station. Those were the three different sections: the critical appraisal slash academic one, the personal one, and a clinical scenario, which are things we're going through in these sessions. But again, double-check the website to be sure. And each of those did have multiple people on it, so it was a mini panel on each one. No worries. If you have any questions, feel free to put them in the chat; we'll just hang about for a little bit. Thank you, everybody, for attending. Please, please fill in the feedback form; it's very, very helpful. And we will see you in two weeks at the next session. In terms of the recording: yes, you should be sent it after the feedback. If not, just drop one of us an email and we should be able to get back to you. I believe as long as you've attended and filled in the feedback, you should have access to the recording.