
Recording of Critical Appraisal Webinar Series Part 2: Critical Appraisal of a Quantitative Paper


Summary

This on-demand teaching session explores critical appraisal of a quantitative research paper and is ideal for any medical professional looking to strengthen their evidence-based medicine skills. Learn how to assess bias and form an overall judgement of a paper. Our speaker, a final year medical student in Glasgow and former National Research Director for AMSA Scotland, will guide us through the CONSORT criteria, discussing the introduction, methodology, results, discussion and conclusion. Questions can be asked during the presentation, and a certificate of attendance will be provided upon completion of a satisfaction survey. Don't miss this chance to build essential skills for clinical practice and academic pathways.

Generated by MedBot

Description

AMSA Scotland is organising a webinar series to equip students with the basic skills required to critically appraise academic papers.

The target audience is medical students, but the events are free and available to everyone.

The series consists of multiple talks. The second talk, Critical Appraisal of a Quantitative Paper, will be held as follows:

Date: 22nd October 2022

Time: 11:30 am - 12:30 pm (UK Time)

Speaker: Agi Jothi - She is currently a final year medical student at the University of Glasgow. She is a former National Research Director for AMSA Scotland and has previous experience in hosting critical appraisal workshops and journal clubs. She completed her BMedSc (Hons) in Neuropharmacology in 2020 and has experience in lab-based research and research methodology. She has also been a speaker for various journal clubs and critical appraisal talks.

Tentative schedule for AMSA Scotland Critical Appraisal Webinar series:

1. 12/10 (Wed) 1830-1930 (UK Time): Study designs (Speaker: Agi Jothi)

2. 22/10 (Sat) 1130-1230 (UK Time): Critical appraisal of a quantitative paper (Speaker: Agi Jothi)

3. 9/11 (Wed) 1400-1500 (UK Time): Qualitative Analysis (Speaker: Dr Heather May Morgan, https://www.abdn.ac.uk/people/h.morgan/)

4. 16/11 (Wed) 1700-1800 (UK Time): Academic writing (Speaker: Professor Phyo Myint https://www.abdn.ac.uk/iahs/profiles/phyo.myint)

5. 24/11 (Thur) 1730-1830 (UK Time): Systematic Reviews (Speaker: Dr Amudha Poobalan https://www.abdn.ac.uk/iahs/research/public-health-nutrition/profiles/a.poobalan/)

6. 29/11 (Fri) 1615-1700 (UK Time): Formulation of Research Questions (Speaker: Prof Stephen Turner, https://abdn.pure.elsevier.com/en/persons/stephen-turner-2)

Learning objectives

  1. Understand what critical appraisal is and why it is relevant to medical practice.
  2. Learn the CONSORT criteria to assess the validity of a quantitative research study.
  3. Analyze the introduction, methodology, results, discussion and conclusion of a quantitative research paper.
  4. Apply the PICO framework to identify the population, intervention, comparison/control, and outcome of a quantitative research paper.
  5. Recognize potential biases and declare interests in a scientific paper to assess its quality.
Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

All right — yeah, it's live now. Right, hello everyone. My name is Raymond, and I'm the National Research Director for AMSA Scotland this year. Welcome to the second part of our critical appraisal webinar series. Today's topic is critical appraisal of a quantitative research paper, and we are very happy to have Agi Jothi again as our speaker. She is a final year medical student in Glasgow and our ex-National Research Director for Scotland, and she has been a speaker for various journal clubs and critical appraisal talks — so indeed a very bright senior that we look up to. Please feel free to type in any questions at any time during the presentation: you can type your question here if you have a verified account, or on Slido using the link in the chat, and we will address your questions in the Q&A session at the end. After the Q&A it would be great if you could fill out the feedback form, which I will send out a little later in the chat; a certificate of attendance will be automatically generated for you after completing the form. Today we will be appraising a paper, and you will find a link to it in the chat. So, without further ado, let's invite Agi to share her presentation on quantitative research. Thank you very much.

Hi everyone, thank you for joining in this morning. What I'll be doing for today's talk is going through the basics of critical appraisal, looking through the biases as well, and then at the end we'll go through the abstract of a paper and pull out information using the format I'll talk about later on. So the things we'll cover: why critical appraisal is important and how it is relevant to all of you; the key concepts, or key tips, for doing a critical appraisal; and how to conduct a critical appraisal and come to a judgement on the paper you're reading. Hopefully by the end you'll have some basic skills in critical appraisal and in how to approach a paper when you're reading it.

So to start off with, why is critical appraisal important? When you're reading an article, most of them are peer reviewed, so they have gone through a series of reviews by experts to make sure the paper is up to standard. But the issue is that even the big journals — for example The Lancet or the New England Journal of Medicine — are not immune to biases. So it's very important that when we read a paper we pick up the points that tell us whether what we're reading applies to our clinical practice later on. Currently in medicine, evidence-based medicine (EBM) is very important, because when you're planning a patient's treatment you want to base that treatment or care plan on published evidence. Its relevance for you is that, for example, when you're applying for foundation year one jobs in the final year of medical school, and especially if you want to go into an academic pathway, it's very important to be able to appraise papers. This is the basic groundwork: as you read papers you want to pick up points and understand how and why a particular study design was used in that paper.
Through critical appraisal you'll also get an idea of how to do your own projects, and maybe how to do your own reviews and present them to the public. And if you're interested in research, it's good to build up that knowledge anyway.

To start off, there is a structure called the CONSORT criteria, which gives you a list of criteria you can work through, and it's applicable for critical appraisal. It has separate sections covering each section of a paper: for the introduction there are questions about the introduction, and for the methodology there are criteria to check that the authors tried to minimise bias. I won't go through the full CONSORT criteria — you can look it up — but I'll give you tips on how to read a paper, and my own way of reading one.

So, a paper usually has an introduction, and sometimes in the introduction you can find the research question. The research question is very important because it gives you an idea of what the paper is all about, and even from the research question you can sometimes see whether there's any bias or anything fishy going on. After the introduction and research question comes the methodology. This is where they describe the materials and what was done: how the study was designed, the inclusion criteria, the exclusion criteria. In the methodology section — or sometimes elsewhere — they also state funding and declarations. Funding and declarations matter because, say you're reading a paper about a new drug treatment for a certain disease, and in the funding section you see the whole trial was funded by the pharmaceutical company supplying the drug: you should check whether there could be potential bias or influence by that company on the paper. That's why it's very important to look at funding and any declarations of interest.

Then you look at the results. Here they describe what they did, the outcomes they wanted to look at, and then the statistical analysis and the results for each outcome. The discussion and conclusion are based on the results plus a bit of literature review: they go through the results, compare and analyse them, and discuss them. Conclusions are useful because if you have no idea what's going on in the methodology or results — some papers can be really complicated — the conclusion is worth glancing at, because it gives a brief description of what happened and what the outcome was in the end. But the most important thing when reading a paper is to ask as many questions as you can. Keep asking: why did they do this? Does this make sense to me?
Look especially at the methodology section and just keep asking questions. And if the paper looks a bit fishy — for example, you notice they omitted something in the results section — ask why they're avoiding it, and whether they're avoiding anything else.

When you appraise a paper, it's always important to give a brief introduction to it. For example, if you're presenting at a conference or a journal club, give a brief background: the title — "this paper is a randomised controlled trial looking at X versus Y in this setting". Talk about the aim: usually the aim reads something like "the aim of this paper is to investigate X versus Y". Then talk about the research question, because the research question ultimately tells you why they wanted to do the research. Then say which journal it was published in — The Lancet, the New England Journal of Medicine, the BMJ? Q1 journals, meaning high-tier journals, usually have a very rigorous peer-review process, so there's less chance of ethical issues or biases in those papers; they're still not immune, but they tend to have a good standard and quality. The year of publication matters too: we're in 2022 now, and if you're looking at a paper from 1998, the results published in 1998 might not be as useful for us right now — maybe a new paper published last year on the same interventions and outcomes shows a different result. So always check the year and tell people the year. All of this gives people an idea of what the whole article is about.

Okay, next section — this is the important one I want you to remember: the PICO framework. The PICO framework can be used on any paper you read, or at least the majority of them, and it should map onto the research question. Papers won't always state "this is the research question" explicitly — some do, but some embed it in the introduction; they'll say something like "it's not clear why this is being done" or "it's not clear how this new treatment strategy performs", which points towards a research question along the lines of "what is the efficacy of this?" or "what is the outcome of this?". So the PICO framework usually matches the research question.

PICO is an abbreviation: P stands for population or problem, I for intervention, C for control or comparison, and O for outcome. For the population, look at the research question and at the methodology section — and in the abstract you can usually pick out these points: where the study was based, the ages of the patients or participants, any specific demographic criteria, and the inclusion and exclusion criteria; all of that relates to the population.
The intervention is what they are trying to do or assess. For example, in a randomised controlled trial assessing the efficacy of drug A versus placebo, the intervention is drug A. Moving on to control and comparison: in that same trial, the control is the placebo. The outcome is what they are trying to measure. Usually there's a primary outcome — the main thing they want to know; say for drug A the primary outcome is efficacy, how well it works. Sometimes they also mention secondary outcomes, which are more minor: what are the side effects of drug A, how long did patients have to stay in hospital if they had side effects, and so on. Usually they'll specify which outcomes are primary and which are secondary.

Here's a basic example. The scenario: researchers want to assess the efficacy of a new drug treatment for Alzheimer's disease in an elderly population diagnosed with Alzheimer's disease; they want to determine the efficacy and safety of the new drug treatment. P, the population, is the elderly population diagnosed with Alzheimer's disease. The intervention is the new drug treatment. The control is the standard drugs for Alzheimer's disease at the moment. And the last sentence gives the outcome: they want to determine the efficacy and safety of the new drug. This is a very simple example — a long paper, or even an abstract, won't be quite this simple and will usually specify a few more things — but it shows how the PICO framework works.

Okay, next. When you appraise a paper you're obviously using the PICO framework, but the other thing to pick up on is the biases in the paper. No paper is immune to bias — there will be bias at some point — but it's important to check whether the authors tried to minimise it. Bias can also show whether they're trying to hide something, or skew the data in a certain direction. And if the authors have tried to minimise bias, then we can use that paper and, in a way, generalise it to the whole population — because ultimately the purpose of reading the paper is to see whether we can change clinical practice. I'll talk through a few biases; some of them are quite similar — when you do your own reading later on, sometimes a paper will talk about sampling bias, sometimes selection bias, and they overlap. The important thing is to keep asking questions as you read, and if you think there are areas of bias, talk about them or keep a note of them.
Okay, so the types of bias we'll cover here are selection bias, performance bias, detection bias — which relates to blinding, so I'll talk about blinding as well — reporting bias, and attrition bias. There are many, many more types of bias, but these are the key ones I want to cover today to give you an idea of what to look for. And again, the most important thing is to keep asking questions: why did they do this, what's the purpose of doing this, how did they do it?

Right, we'll start with selection bias. When you read the methodology section — usually its first part — they'll describe the groups or the population they picked. Selection bias links to the P, the population, in the PICO framework. For example, say they're looking at smokers versus non-smokers and the risk of squamous cell carcinoma of the lung: check whether they have inclusion and exclusion criteria, and whether there's any bias going on there. For an RCT — a randomised controlled trial — they'll state a sample size or population size, and make sure to check whether there is a power calculation for that sample size. The power calculation matters because it shows whether that number of participants is enough to draw a conclusion from the results. For example, the UK is a big country, and if you're looking at the risk of lung cancer and you only recruit 100 people, those 100 people don't give you very good power for drawing a conclusion. The power calculation itself is usually done by statisticians — there's a fairly complex formula — but the most important thing is to check whether a power calculation is mentioned at all. So: look for the power calculation, look for the inclusion and exclusion criteria, and make sure they link back to the research question. Always check whether the methodology section links back to the research question and whether it makes sense.

Okay, next: performance bias. Performance bias is, for example, when you're looking at drug X and compare it with a placebo, but in reality the current standard of care is a different drug, Y. You're supposed to compare drug X with drug Y to see whether drug X is better for, say, lung cancer — but instead they compare drug X with a placebo, when the placebo isn't even the standard of care. That points towards performance bias: not comparing against the current standard of care. So look at the intervention and look at the control group — that's the main place you'll get an idea of whether there's any performance bias.
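To make the power calculation mentioned a moment ago a bit more concrete, here is a minimal worked sketch. The formula and the numbers below are illustrative assumptions, not something quoted in the talk or taken from any particular paper; a commonly used approximation for comparing two proportions is:

$$ n \text{ per group} \approx \frac{(z_{1-\alpha/2} + z_{1-\beta})^2 \,\big[\,p_1(1-p_1) + p_2(1-p_2)\,\big]}{(p_1 - p_2)^2} $$

So if a trial hopes to detect a drop in event rate from p1 = 0.30 to p2 = 0.20, with a 5% significance level (z = 1.96) and 80% power (z = 0.84):

$$ n \approx \frac{(1.96 + 0.84)^2\,(0.21 + 0.16)}{(0.30 - 0.20)^2} = \frac{7.84 \times 0.37}{0.01} \approx 291 \text{ per group} $$

That is roughly 580 participants in total — which is why a 100-person study, as in the speaker's lung cancer example, would usually be underpowered.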
Right — blinding. In a lot of randomised controlled trials you'll see the word "blinding". Blinding means that a person doesn't know who gets what. Some randomised controlled trials or study designs can't use blinding: for example, if one participant gets radiotherapy and another gets surgery, you can't really blind the researchers, because they will eventually know which patient is going through surgery and which through radiotherapy, since those are the two things being compared. But blinding is very applicable to drug treatments, so that there's no bias. Say you're testing drug X against placebo: they blind the participants and blind the researchers. That's double-blinding — two parties are blinded, the researchers and the participants. It increases validity, because no one knows who is taking what, which could otherwise skew the data.

Detection bias concerns how the outcomes are identified, and blinding minimises detection bias. So when you read a randomised controlled trial, look out for the word "blinding" — it's usually very clear and usually appears in the abstract or the first few parts of the paper. If there is blinding, detection bias has been minimised. The thing about detection bias is that it's about systematic differences between the groups in how the outcomes are assessed — and, as with the surgery versus medication example, sometimes you can't hide it, because the surgeon eventually has to know who is going for the surgical procedure.

Okay, outcomes. The outcomes are usually written in the methodology section as well, usually towards the end of it. They'll describe their endpoint, why they chose it, and the primary and secondary outcomes. Abstracts don't usually go into much detail because they're concise, but in the full paper, when you look at what they're trying to measure, always ask: why did they choose these outcomes? Are they the best outcomes to measure for this question? Is this a best-case scenario or a pragmatic one? Also look at the tools used to measure the outcomes — is it a questionnaire, or some statistical analysis? And always match the outcomes against the research question: are they appropriate for it or not?

Next section: reporting bias. Reporting bias is about whether they report the data or not. Say they're looking at a drug — sorry, I keep using drug examples, but they're the easiest. Say they're comparing drug X versus drug Y: drug X is the intervention, and they want to see if it has higher efficacy than drug Y. But they notice that a number of patients on drug X had a lot of side effects, which might skew things or stop the paper being publishable — so they take away or ignore that subgroup and only publish the favourable results. That is ignoring results, and that is reporting bias.
So when you look at the results section, check that they haven't omitted any people or any groups. They'll usually have tables or graphs too — look at those as well, so you can see whether there's any reporting bias.

Right, attrition bias. You can see this from the results section too. Attrition bias is about what happens when someone withdraws from the study — does that skew the data in some way? For attrition bias you look at the statistical analysis, where they describe what kind of statistics or analysis they used on the data. There's intention-to-treat analysis and there's per-protocol analysis. Intention-to-treat means that all the participants you recruited — even those who don't follow the protocol, don't take the drug, or drop out — still have their data included. Per-protocol analysis means that those who don't follow the protocol, or who drop out, do not have their data included or analysed. With a per-protocol analysis there's a higher risk of attrition bias, because you're omitting data which could be important. So if they don't include the people who dropped out or didn't adhere to the protocol, check whether they did a statistical calculation to see if there's any significant difference between the people who dropped out and the originally recruited patients. If there's no significant difference, then attrition bias has been minimised, or is at least minimal.

Right, the results section. Look at the results, see whether they answer the research question, and see how they're presented. Is it straightforward? Some papers are genuinely harder to read, but if it's meant to be an easy paper and they're talking too much and making things really complicated, that can sometimes mean they're overcompensating — trying to hide some limitations. Also check whether they've done any subgroup analysis. For example, say you're looking at a drug for an infection — whether this antibiotic can control this sort of infection — see whether they analysed, say, immunocompromised patients separately. If they did, that's a subgroup analysis: looking at whether a certain group is more prone to the infection, or whether the drug works better in a certain group. Subgroup analysis is good for identifying a specific demographic in which the drug works better, or is more dangerous, and it's also useful for future studies.
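To put rough numbers on the intention-to-treat versus per-protocol point above — a hypothetical example, not taken from the talk or from any paper: suppose 100 patients are randomised to each arm, 20 drop out of the drug arm (say, because of side effects) and 5 drop out of the control arm, and 40 of the remaining drug patients respond versus 30 of the remaining control patients.

$$ \text{Per-protocol: } \frac{40}{80} = 50\% \quad \text{vs} \quad \frac{30}{95} \approx 32\% $$

$$ \text{Intention-to-treat: } \frac{40}{100} = 40\% \quad \text{vs} \quad \frac{30}{100} = 30\% $$

The per-protocol comparison looks much more flattering for the drug precisely because the patients who stopped taking it have been dropped — which is the attrition bias the speaker describes.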
Right, the discussion section. The discussion basically talks through whatever results they've got and links them with past research and the relevance to the field. Sometimes the discussion gives a bit of additional data as well, so look out for that. They should also talk about their limitations — some papers list them as bullet points, some papers don't talk about their limitations at all, it depends on the paper — so keep an eye out for it. Towards the end of the discussion they also give their take-home message, not always in a very straightforward way, but by reading the whole discussion section you can work out what the actual message is. In the discussion you can also see whether there's any bias: are they leaning towards one side, or are they comparing things properly?

Right, there's a question: is there a certain threshold to determine how little bias a paper needs to have for it to be regarded as a good paper? To be fair, there's no clear-cut number to say "this paper has too many biases and can't be used". It's not just one person reviewing a paper either — there are several reviewers if it's to be published in a journal — and it's very hard to have no bias at all. What matters is looking through each section and checking whether they tried to minimise bias or not. For example, if you see a paper with no power calculation, where they don't explain why they did things the way they did, where they don't give inclusion and exclusion criteria — where you can see so many biases, or the whole methodology section just isn't right — then clearly it's not a good paper. So it's partly a personal view; there's no clear cut-off, but as you read more papers you get a better sense of whether a paper is good or not. I hope that answers your question — if not, type it in the chat or on Slido and I can explain it again. I know a few of you have also typed questions on Slido; I'll answer those at the end of the session.

Right, overall judgement — this partly answers that question too. The overall judgement is yours: once you've looked at the paper, noticed the biases, looked at the results and outcomes, and understood the paper's message, you come to your own judgement. How do I feel about this paper? Do I trust it? Is there good internal validity — internal validity basically being whether they tried to minimise bias? Do I think this paper can help my patients? For the overall judgement, look at the title first — it gives you a clear view of what they're trying to show — then look at the aim again and see whether there's any bias in the aim. For example, say the aim reads "the aim of this study is to investigate the side effects of this drug".
But if instead the aim says "we aim to prove that this drug has no side effects", that second aim is clearly a problem, because they're not trying to investigate — they're trying to prove something. They came to the conclusion from the start and are trying to prove it, which is the wrong way to phrase an aim, and it suggests the paper could potentially have quite a number of biases. So the aim also gives you a clue about whether there could be potential biases in the paper and where to look for them.

To answer the earlier question a bit more: a lot of this is gut feeling, because, as I said, there's no clear-cut set of points you have to reach to be sure a paper is good. It's a gut feeling you develop as you read paper after paper — it takes a lot of time, but with practice you get better. I know junior doctors and even registrars who struggle with reading complex papers, so it just takes time, and it also depends on the field you're interested in. If you asked a surgical registrar to appraise, say, a pharmaceutical biochemistry paper, it would be hard for them — they might have some knowledge, but not as much as someone who actually specialises in that field. So it takes time, and it depends on what you're interested in.

Then generalisability — basically external validity. This is about whether the study can be applied to a bigger population, or in a practical setting; whether the study can affect the standard of care for your patients. So when you appraise a paper and want to conclude, you talk about internal validity — the biases — and then generalisability: whether the results can be used more widely or not.

Right — before we go on to the activity, do you have any questions? I can answer some now so you have a clearer picture before you start. Let me check Slido. Okay — the first question, about what type of research you should do to differentiate yourself, I'll answer towards the end, and the question on how to do a journal club appraisal I'll also answer at the end of the presentation. Any questions about the framework or the biases at the moment? Okay, we'll go on to the next section.

So, I hope you got the paper I emailed out. The paper looks at the estimated protection of prior SARS-CoV-2 infection against reinfection with the Omicron variant, among people who have been vaccinated or not vaccinated. Oh — yes, I'll explain how to do a journal club after the whole presentation, because I can talk a bit more about it at the end.
Okay. What I want you to do is go through the abstract — it's easier to work from the abstract — which is on the first and second pages. Go through each section: introduction, methodology, results, outcomes. We'll use the PICO framework. Could someone read through and type in the chat box, or on Slido, what you think the population is — what is P? Someone else can do I, the intervention they're looking at; someone can do C, the control; and someone can do O, the outcomes they're looking for. I'll give you about 10 minutes to read through it and do your own PICO, and then I'll give the answers, along with the results. One hint: this is a case-control study, so for I, the intervention part, it's basically the case, and for C it's the control, and O is the outcome. I'll start sharing the answers around 12:15.

All right, thanks. Okay, so Tamara has given the P already — someone else could type out the I, the C or the O, that would be great, and I'll talk through the answers at 12:15. Hi Fran — we're waiting until 12:15 so that everyone can read through the abstract and type out their answers for I, C and O. Would someone like to type out what the I is? Oh, sorry — yes, thanks, Raymond. Would someone like to type out the C and the O? Okay, I'll give it another minute for you to finish your answers, and then we'll go through them.

Right, let's go through the answers. For P, you're right, Tamara: it's community-dwelling individuals aged 12 or older who had been tested for COVID-19 in Quebec, a province of Canada. For I — I is the intervention in a randomised controlled trial, but because this is a case-control study, this section is basically the case, the thing being compared: positive test results during the study period. That's the case, and the control is the negative COVID test results during the chosen study period. For O, the abstract gives two outcomes: the primary one is whether there is reinfection with SARS-CoV-2, specifically the Omicron variant, and whether there is any hospitalisation with it. And because this is a case-control study, the statistics will usually use an odds ratio: the odds of the cases having the exposure compared with the controls. That's why they also look at the odds of prior infection, with or without vaccination, in the outcomes section. Okay — does this PICO framework make sense? Do you have any questions about it?
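As a worked illustration of the odds ratio just mentioned — using invented numbers, not figures from the Quebec paper: suppose that among 200 cases (positive tests) 50 had a documented prior infection, and among 200 controls (negative tests) 120 did.

$$ \text{OR} = \frac{50/150}{120/80} = \frac{50 \times 80}{150 \times 120} \approx 0.22 $$

An odds ratio below 1 means prior infection is associated with lower odds of being a case, i.e. of testing positive. In case-control designs of this kind, estimated protection is often reported as roughly (1 − OR) × 100% — here about 78% — although how a real paper adjusts that estimate (for age, calendar time, vaccination and so on) is more involved than this sketch.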
I'll just check Slido as well. Okay — I can see there are a few questions; I'll go through them once I'm done with this, and we're almost finished with the presentation. I don't think there are any questions about the PICO format here — if it's still not clear, just type it in the chat and we'll cover it at the end.

All right, so in the results section they reported the outcomes from the data they analysed. They found that, amongst non-vaccinated individuals, those with a prior infection had a 44% reduction in reinfection. They also mention that the estimated protection against the Omicron variant was consistently and significantly higher amongst vaccinated individuals with a prior infection compared with vaccinated individuals without a prior infection. In the full paper, the results section actually subdivides these groups: prior infection with vaccination, prior infection with no vaccination at all, and, for those vaccinated, how many doses. The results section isn't, I would say, very complicated, but there's a lot to read through, so the abstract gives you a clear view of the main points to pick up on. The main message of this paper is that protection was significantly higher in vaccinated individuals who also had a prior infection, compared with vaccinated individuals without a prior infection.

Right, so now I'll give my overall opinion of the paper. The paper looked at vaccination with two or three doses and tried to see whether prior infection also plays a role in protection. From what I could see, those vaccinated with two or more doses had better protection than those who weren't vaccinated with two or more doses, or who had no prior infection. But I also felt there were some limitations. In the part where they confirmed COVID, they mostly used NAAT — nucleic acid testing — but that was essentially only done in those who were hospitalised at some point; the rest went through rapid tests. We know that rapid tests aren't as reliable as nucleic acid tests, and they didn't mention whether that could significantly skew the data, so there could be some potential bias around that part. They also didn't do any subgroup analysis. For a subgroup analysis you usually look at Table 1, the demographics, and then at comorbidities — do they have diabetes, a prior lung condition, any lung infection, how old are they, are they very frail. They didn't really do that, possibly because there was no time, or they didn't have enough patients in a given group. So that's one for future studies: do a subgroup analysis and see which groups have a better outcome compared with the others.
All right — so, to sum up this whole presentation: a critical appraisal of a paper can take a short time, but sometimes it can take a really long time. A long time is usually when you're looking at a systematic review and going through each individual RCT in that review. And when you're publishing in journals, it's not just one person reviewing the whole thing — there are several reviewers — which also takes a long time; that's why publishing in a journal can take a few months.

The most important thing is to keep practising as much as you can, and don't get over-stressed about it. I remember the first time I read through a paper — there was just too much jargon, too much to take in. Go step by step. Today we just did the abstract, because that's the first step: when you read a paper you usually come to the abstract first, and from the abstract you can see whether there's a clear view of what's going on — because if the abstract isn't clear, the paper might not be clear at all, and there might be a lot of bias in it. So the abstract is a very good place to start practising. Keep using the PICO format: work out what goes into P, what goes into I, what goes into C, what goes into O, and just do abstracts. Once you're comfortable and want to move on to the full paper, do that too. With the full paper, if the statistics section feels like a lot, don't worry — some statistical analyses are genuinely complicated. Just get the hang of reading papers, and get comfortable with the abstract first. If there's too much going on in the paper, briefly scan the methods and results sections, because that's where you can see whether there's a lot of bias, especially the methodology. Remember to ask as many questions as you can: why did they choose this population, what are the inclusion criteria, what are the exclusion criteria, why did they exclude these people, why did they do it this way and not that way? Keep asking questions — that's very important.

And the last part: when you've read the abstract or the paper and you want to come to a conclusion or give your opinion, always talk about the background of the paper first — the title, the aim, basically a brief summary of the whole paper — then the research question, then quickly give the audience the PICO format, and finally talk about internal validity: what are the biases, and was there an ethics committee involved? So in a randomised controlled trial... sorry, could you not hear me? We could — do you want to refresh the page? Could you hear me? Okay, all right — just refresh the page, that might help. Okay, so, for a randomised controlled trial: it's a big trial, there's a lot of intervention.
You're giving drugs or some sort of treatment, so there will always be an ethics committee involved — every hospital and every organisation has an ethics committee to make sure no harm is being done to the patient. So look out for the ethics committee: if a randomised controlled trial doesn't mention one, there are potential issues with internal validity, because the ethics committee ensures things are done according to the actual protocol and the actual standard of care, and that patients aren't harmed. That points towards internal validity. Then generalisability is external validity: whether the data, or the review, can be used for the general public in a pragmatic way, and whether it can change clinical outcomes or the standard of care for patients. So when you give your opinion, talk about internal validity — the potential biases, or which biases were minimised, and whether an ethics committee was involved — and then talk about what you can do with the data: can it be used for the whole world, and where can it be used?

Right, thank you. If you have more questions, type them into the chat box or Slido and I'll go through them. So — the question about whether there's a certain threshold for how little bias a paper needs to have to be regarded as a good paper: there's basically no set point; it's based on people's judgement. But the more you read papers, the better you understand, when you pick up biases, how bad they are, and whether you can still use the paper or should disregard it completely.

Yes, the slides will be shared — I think the recording will be shared as well, or is it just the slides? We can share both the slides and the recording. Yes, so you'll get both the slides and the recording once you've filled in the feedback form.

The next question is: how do you define generalisability when a study population is unique in terms of culture, health system and context, and the results may not be applicable to another population? That's a good question. Think of multicentre studies: there are RCTs which are international or multicentre, done in different countries, so they give a snapshot of different populations, and those studies usually do subgroup analyses too — studies like that you can generalise. Whereas for studies where you feel the population is unique — say an eastern population with different genetics — you could argue for running an RCT in the western population as well; that way the data from the western population can be generalised to western countries, and the data from the eastern population to eastern countries. So you can still generalise it,
but based on where it was done: say they did the study in one city, then you can generalise it to that sort of town, if that makes sense. Does that make sense? If not, I can explain it again — just type in the chat.

Right, so: how to do a journal club critical appraisal. I'm reading this as how to present in a journal club, but I'll cover how to present and also how to run your own. Presenting in a journal club is basically doing your own critical appraisal. Do a PowerPoint about the paper. Talk a bit about the background of the study — say they're looking at a drug treatment for Alzheimer's disease; on an early slide (not necessarily your first) give the background on Alzheimer's disease: what is it, what's going on right now, are there any new drugs available? Once you've given that background, talk about the paper itself: the introduction, the title, the aim, the research question. In the slides provided I've given a framework for how to set up your own slides for a critical appraisal. Then follow the PICO format: population, intervention, control, outcome. Once you've done the PICO format, go through the results section — what the results are, any significant differences — and pull out the key results and the main point. Then give your own opinion: talk about the biases, then internal validity and generalisability. The most important thing people want to see when you present a critical appraisal is your opinion — what you think about the paper, whether it can be used, whether it has to be disregarded completely, or whether it can be used but needs more studies for comparison.

Okay, next question: "I am a second-year medical student — what type of research should I do that can differentiate me from others, and what's the whole process?" Right. I suppose the most important thing when you're trying to get into research is, number one, find a very good mentor or someone to guide you, because when you're starting off there's a lot to look into — especially if you want to do a big primary study, or even a basic audit — and you don't have much experience yet of how to do it. So it's very important to find a mentor: someone who has done this, someone with an interest in that specific area. Your mentor is a big, big help. And if your medical course has a research component, like a student-selected component, go and do it, or find a mentor to do it with — that gives you a step towards figuring out how research works. It takes a lot of time to understand how each study design works — even SHOs and registrars
sometimes struggle to publish papers or do proper research, because it takes lots and lots of time and effort. But the most fundamental thing is to always start with the basics. I think the easiest thing for medical students to do is a review — a literature review, or a simple review. Try familiarising yourself with doing one, especially if you're a first, second or third year student — even fourth or fifth year. A review teaches you how to read papers, how to compile papers, and how to give your own conclusion; it's the basic thing you can do. You can also present reviews: look out for conferences that are accepting abstracts, particularly medical student conferences, because their abstract selection isn't as strict as, say, a consultant-level conference. So look at medical student conferences and try to present your review paper — that's a very good place to start. Through conferences you can also link up with a lot of people, and through that networking you can potentially find mentors. So for first and second year students I would say: go to conferences, find mentors, and try to link up. That's very good.

"Can you please tell us, if possible, how to do a literature review, and how to give evidence that we have done one?" Okay. For a literature review, first find a topic you're interested in — let's say lung cancer, and you want to look at recent treatments for lung cancer. Then go to the databases. I'll type them out in the chat so you can see what they are and look through them later: PubMed, Scopus and Embase; Google Scholar is an okay database, it's just that it has a lot of duplicates, so I suggest the others — and Medline is good too. These are big databases where you can get loads of papers. Individual journals — The Lancet is an individual journal, BJS, the British Journal of Surgery, is an individual journal — but all those papers sit in a database, so a database holds a collection of different journals, thousands and thousands of papers. Those big databases are where you look for papers.

So, number one, if you want to do a literature review, come up with a topic you're interested in — for example, new treatments or recent advancements in ovarian cancer. Then the first step is to find articles. When you scroll through a paper, look at the abstract first and see whether it fits what you want — whether it links to your research question. After abstract screening, do full-paper screening: read the full paper and check whether it still links to your research question and matches your inclusion criteria; if so, you select it. Say your inclusion criteria are adults only, no children — then any paper about children you exclude, kick them out, and you take the adult papers.
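For anyone comfortable with a little scripting, here is one way to run that kind of database search programmatically — a minimal sketch only, assuming the Biopython package and an invented example query, neither of which is mentioned in the talk; searching PubMed through its website works just as well.

    from Bio import Entrez  # Biopython's wrapper around NCBI's E-utilities (an assumed dependency)

    Entrez.email = "you@example.com"  # NCBI asks for a contact email; placeholder value

    # Hypothetical query: recent papers on drug treatment of ovarian cancer
    query = '"ovarian cancer"[Title/Abstract] AND ("drug therapy" OR treatment) AND 2018:2022[dp]'

    handle = Entrez.esearch(db="pubmed", term=query, retmax=20)  # fetch the first 20 matching PubMed IDs
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"])   # total number of hits for this query
    print(record["IdList"])  # PMIDs you would then screen by abstract, then by full paper

The screening that follows — abstract first, then full paper, against your inclusion and exclusion criteria — is still done by you, exactly as described above.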
So that's how you pick your papers. Then, for the literature review itself, you have to read each paper in full: go through the introduction and — the important part — the results section of each paper, because that's what you'll be talking about in the review: this paper said this is the recent advancement, that paper said that is. A literature review is basically giving your opinion; it's the base level of the research pyramid.

And how do you give evidence that you've done one? One way is to publish it — say you publish it on PubMed, you get a PubMed ID, and that shows you've done the literature review. The second, and easiest, way is to present it: when you present at a conference you get a certificate of presentation, which is formal evidence that you did the literature review and presented it. So do the medical student conferences, try to send them abstracts, and if they get accepted that's a really good thing, because you can do an oral or poster presentation of your literature review work. I hope that answers your question — if not, type it back in the chat and I'll try to answer it.

All right, back to the second-year student asking what kind of research can differentiate you from others. Nobody expects a student to run a randomised controlled trial, because that involves a lot. At student level, since most students don't do much research work or present much, the thing that would differentiate you — if you're interested in an academic career — is presenting work and going to conferences. If possible, try to publish, though that's hard because it's a rigorous process. But the most important things: go and present, do presentations, and you can do basic audits as well. And find a mentor — very important — and ask them what you can do for them in terms of research work. That's how you start. I hope that answers your question.

Do you have any other questions? If not, feel free to leave; those with questions can stay on and ask. But the most important thing is: read the abstract, start from the abstract, and build from there. Yes, I think there's one more from Slido — are there any resources or examples? Yes. So the next question is: are there any resources or examples out there that show how experienced individuals critically appraise a paper? Right — there are checklists. In the presentation I mentioned the CONSORT checklist for RCTs, or quantitative research, and you can use that checklist as a guide for what to look out for. Some checklists are good; some I don't really like. For RCTs, for example,
I use the PICO format, because it's very straightforward. But if, say, you want to appraise a systematic review, there's the CASP tool for systematic reviews. Just go to Google and type "tools to appraise a paper" and it will give you the checklists, and you can start from there. The checklists are created by experts, and they reflect what experts look for when they read a paper, so they're a good guide for you.

Today's talk was based on the PICO format, which is a well-known format people use to appraise papers, and it can be used for different types of studies — you just need to work out what the intervention and the control are. In today's study, for example, because it was a case-control study, they don't specifically say the case is the intervention, so you have to work out that the case falls under the intervention section. But the PICO format can be used for multiple types of papers. If you want a proper checklist — and checklists are good because they show you what kinds of questions to ask — there's the CONSORT checklist, I'll type it out here, and there's also the CASP checklist or tool, which is a good one because it gives you checklists for different types of papers. There's another, newer one for systematic reviews and meta-analyses, but I can't remember its name — just type "checklist" or "how to appraise a paper" and it will give you all these checklists and tools.

Right, I hope that answers the question. I don't think there are any more questions. If you don't have any more, feel free to leave, but please do fill in the feedback form so I know what I can improve on and how the talk went today. Thank you for joining in — I really appreciate it, spending a Saturday morning here, or whatever time it is wherever you're joining from. Thank you very much. No problem, thank you for joining in. Part three of our webinar series will be held next month, on the ninth of November, when we have invited Dr Heather May Morgan to speak on qualitative analysis. Please look out for our promotion on the AMSA Scotland Instagram closer to the time — we hope to see you again very soon. Thank you very much again for the session today, thanks for joining us, and have a good day. Thank you. And before you leave, for those interested in journal clubs: I think AMSA Scotland is also holding a journal club, so look out for that, and if you're interested, join in and present whatever papers or reviews you've done. Okay — thank you all. Thank you.