
The Academic Language - How do I understand a paper?

Summary

Join us for an enlightening teaching session led by George, a specialised foundation year two doctor, aimed at helping medical professionals understand academic language and comprehend medical research papers. This session is part of a research series curated by the National Surgical Teaching Society. George will share his extensive knowledge and experience in research, including his publications and his background in a range of research methodologies. You'll learn how to critically read and appraise a paper, understand different types of research, and consider the relevance and implications of findings. Don't miss this chance to enhance your grasp of medical research and challenge your understanding of academia in medicine!

Generated by MedBot

Description

Join us for our next exciting four-part webinar series providing an insight into research skills in medicine and surgery. We have an exciting line-up of speakers with a breadth of experience and knowledge, boasting strong academic achievements and accolades. We aim to provide an introduction to research, show you how to tackle the academic language, explain what medical statistics actually is, and describe what a career in academia might look like.

Learning objectives

  1. To comprehend and distinguish between different types of medical research including quantitative and qualitative methodologies.
  2. To gain knowledge on concepts like hierarchy of evidence, and understand how to appraise the quality of information in a research paper.
  3. To learn how to read a research paper effectively, understanding the standard structure of a medical research paper including sections like abstract, introduction, methods, results, and discussion.
  4. To develop skills on how to critically analyse and appraise a research paper, looking into its methods, trial design, randomisation, controls, power calculation, and generalisation of results.
  5. To understand how to apply the SMART (specific, measurable, achievable, relevant/realistic, timely) principles when developing a research study, and how they apply when reading medical research papers.


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Should be good. Sorry guys, feel free to settle in. OK, I think we're live. Hello, hi everybody. Just to check that you are here and can see and hear us, could you pop a message in the chat? Sorry, we were experiencing a few technical difficulties a few minutes ago, but I think we're sorted now. For those who are here, could you pop a quick message in the chat so we know you can see and hear us? Oh, lovely. Thank you, thanks very much, much appreciated. So, welcome to the second of our research series by the National Surgical Teaching Society. The first session, if you've not attended it, is an introduction to medical research, and in this second one we'll talk a bit more about the academic language, how we understand a paper, and all the other bits and bobs that come with it. We've got George, who I'll let introduce himself, and he's going to talk us through it today. Just a bit of housekeeping: this will take around 40 to 45 minutes, and after that we'll have a Q&A session. As we go along, if you have any questions, feel free to pop them in the chat; we'll let George know and try to answer them for you. And with that, I'll let George introduce himself and we'll get started, if everyone's happy. Perfect, thank you. So, hi, my name's George. I'm currently a specialised foundation year two doctor, so that's the academic programme. I trained in the East Midlands at Leicester, and I've worked across the East Midlands in both my foundation jobs. I want to go into hand and nerve surgery, and I have a strong background in research.
I've got a few publications, including quite a few systematic reviews, so I'm well aware of what you need to know and what you need to go through when looking at papers. I also completed a master's in medical research while I was at uni. So, three main learning points for today: understanding the different types of research and what they involve; how we approach reading a paper; and what to consider when appraising a paper. I've deliberately separated out reading a paper and appraising a paper, and we'll go through both as we do the talk. So, types of research. If you've been to medical school, I'm pretty sure you've seen the hierarchy of evidence before. It has systematic reviews at the top, moving all the way down to one surgeon's opinion at the bottom, and it's one of the things we can base our quality of information on. So in theory, meta-analyses and systematic reviews will always be a higher tier of evidence than cohort studies, et cetera. That's not always the case; sometimes you will read some appalling systematic reviews. But in general, if a paper has been accepted by a good peer-reviewed journal, then systematic reviews and meta-analyses, and just under them randomised controlled trials, are the best forms of evidence on which to base clinical decisions. If you ever do any academic interviews, you will definitely get asked: looking at this one paper we've given you, would you change your clinical practice? Ninety to ninety-nine percent of the time, the answer to changing your clinical practice based on one paper is probably going to be no. It's about considering the wider evidence and how the paper fits into it.
There are some exceptions, of course. For example, if you're familiar with the RECOVERY trial during COVID: that was a very high level of evidence in a very urgent situation, we didn't know much about COVID, and it found that dexamethasone gave a massive improvement in patient outcomes. That's why it was adopted into practice so quickly. But often you will find that changing medical practice takes a long time; research is not a quick job. Locally, you might be able to implement a change quickly, but getting people to change nationally or internationally can take much longer. You can also split that pyramid into different levels of evidence. If you ever sit your surgical Part A exams, this comes up quite commonly as a question, with level five being the worst and level one being the best, so it's worth having an awareness of that. There are many, many different types of research, and you can broadly split them into quantitative and qualitative. Quantitative research uses numbers, measurements and quantified findings, whereas qualitative research involves semi-structured interviews, talking to patients and clinicians. Those are broadly the two methodologies. You will also tend to find that surgery leans towards quantitative research; some surgical qualitative papers are very good, and some leave a lot to be desired. You tend to see more qualitative papers in, for example, psychiatry, particularly around the CBT side of treatment. Still, it's really useful to be aware of both. Mixed methods, using both quantitative and qualitative approaches, are very useful and something that is looked on well in grant applications.
But that's a whole different beast and something you can work towards. There are different positives and negatives to each approach, and having a broad idea of the different types of research is really valuable when reading papers. I am only glancing over this briefly because, especially for medical students, being able to critically appraise a paper, judge its positives and negatives, and think about how it might impact clinical practice is more useful. If you're interested in going further down the research route, then as someone who wants to do it, I would highly recommend it; I've really enjoyed my time doing research. I know it isn't for everyone: some of my friends hate it, and some want to leave clinical medicine to go purely into research. There are other courses and resources available, and if that's something you're interested in, we can definitely do a deeper dive; that's something the National Surgical Teaching Society can help organise. So if you're interested in a deeper dive into the methods and more of a discussion around this, let us know and we'll try to organise something for you. In broad terms, if you're designing a study, the key thing to consider is that you want a SMART aim. It sounds obvious, and you've probably heard it lots of times before, but SMART stands for specific, measurable, achievable, relevant/realistic and timely. If you're getting involved in a study that a consultant or registrar is running, you definitely want to consider that SMART aim. Specific: make sure they've given you enough support that you know exactly what you're meant to be doing. Measurable: you know exactly how you're going to measure the effect. Is it through interviews and a qualitative method?
Is it through a patient-reported outcome measure, so a quantitative score that's easy to analyse with statistics? (Statistics will be the next session in this series, so if you're interested, we'd really recommend watching that one.) Achievable: make sure that what you're doing is achievable in the timescale. If they say you're expected to put in, say, eight hours a week, is that achievable with your current timetable? Is that a realistic plan for you? It's always worth considering that as well. You often see SMART aims in papers, worded in a slightly different way, which we'll see when we go through a paper. Other things to consider: how are you randomising? What are you using as your controls? What is your sample size, and has a power calculation been performed? A power calculation is a basic part of most clinical studies, because you need to make sure your sample is large enough to demonstrate the effect you're looking for. There are papers and mathematics that go into this; thankfully, it's easy for us, as we just need to find the right reference that tells us what numbers to use. Normally you'll aim for an 80% or 90% power level, and someone more senior and more familiar with the research can help guide you there. And then: how generalisable are the results? Say you've got a population of patients having a cholecystectomy. One paper looks at every single patient having a cholecystectomy, and another looks only at males under 18. Obviously one of them is going to be much more generalisable, and therefore much more useful in terms of the conclusions drawn. That's something else to consider when you're designing a study:
What patient population group are you looking at, and how will that affect other patients and groups? So that's my brief overview of methodology. If you've got any questions, just let me know and I'm sure someone will flag them to me, but we'll discuss a bit more of this as we look at one of the papers. So, how to read a paper. I'm assuming you've all glanced at papers before, so you know there will be an abstract, and then the paper is normally split into introduction, methods, results and discussion. This will vary slightly by journal, as some prefer different layouts, and it will vary depending on the submission type: a case report, for example, is a very different submission from a randomised controlled trial. In general, the introduction should include some background on the topic, why the topic is important, and why the authors wanted to look into it. For example, say you're looking at wrist fractures. Part of the background is that it's one of the most common, if not the most common, fractures we encounter in the UK. So you know it's an important fracture: we see a lot of them and treat a lot of them, so it's going to affect a lot of patients. Then say they're comparing two approaches: people having a closed reduction, so no surgery, versus people who have surgery of any type. We know that more severe fractures will need surgery, and the authors would hopefully explain why they think that comparison is important and what their aim is. The introduction will normally include the aim of the study as its last paragraph.
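As an aside on the power calculation mentioned a moment ago, the idea can be made concrete with a short sketch. This is not from the talk: it is a minimal, illustrative normal-approximation formula for comparing two proportions, and the function name and example figures are invented.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients needed per arm to detect a difference between
    two proportions p1 and p2 (two-sided test, normal approximation).
    Illustrative only; real trials use published tables or trial software."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detect an improvement from 35% to 50% success.
print(sample_size_per_arm(0.50, 0.35))  # 167 patients per arm at 80% power
```

Note how raising the power target from 80% to 90% in this sketch pushes the requirement from 167 to 223 per arm, which is why the chosen power level matters so much to recruitment.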
That's often where I skip to when I'm initially reading a paper: I skip to the aim to see what they wanted to achieve. That focuses my mind and helps me frame the paper, because I keep that question in mind while reading the methods and results. The methods describe how they've gone about it. We've already discussed quantitative and qualitative research; there are lots of different ways of doing quantitative research and lots of different statistics, but in general the methods will give you a patient population group, state exactly what will be done to them, and then tell you what data will be recorded. That's sometimes the really interesting part to read, because the authors might not comment on all of that recorded data in their results; that's one of the little things you pick up as you read papers. So the methods tell you who they're going to recruit, what they're going to do to them, and what they're going to do with the data. The results are what happened based on the methods; they should be very descriptive and normally won't be the longest section, unless the study is qualitative. In qualitative research with semi-structured interviews, you have to go through all of the data until you reach a saturation point, where repeated interviews aren't identifying new themes. You group what people have said into broad themes, and then you can do sub-analysis after that. So qualitative research takes a lot longer to perform, whereas quantitative is often quicker and, if you're new to research, a little easier to understand. Something else you might hear about in qualitative research is ethnography. This is a principle where, say you went to someone's house to do an interview there, the ethnography looks at the environment around the person.
So it includes what they're saying, but puts it into context with their surroundings: how clean and tidy is their house, what type of area is it, those kinds of questions and additions, which can help paint a better picture. Then the discussion. This is the authors' interpretation of their results, what they think they found, and their highlighted comparisons to the literature. It's always worth noting that the discussion is the authors' interpretation and opinion, so take it with a slight pinch of salt and always consider the wider background of the research yourself. In terms of the basics of interpreting a paper, there are four things you can look at that will help frame what you're saying. The patient group: how old are they, male or female, was there an age range, did they have to score a certain amount on a questionnaire to be included, those kinds of questions. The intervention: broadly, what did we do to the patients? Have we introduced a new treatment, whether a medicine or surgery; what have we actually done to the patient? Then the comparison: what is the usual gold standard of treatment? In a good study you will hopefully have the gold standard of treatment to compare against. Sometimes you don't, so it's always worth looking at what the comparison is; it might be one particular type of operation that isn't necessarily the gold-standard operation for this injury. And then the outcomes: how did they report their outcomes, and what was the overall impact of their intervention? Then there are three diagrams that you only really see in research. Have a look at the diagram on the left of your screen. I obviously can't see the chat, so I'm sure someone will let me know.
But just have a think and type in the chat if you know what that diagram is called. Have we got any answers at all? Let me just quickly flip back. OK, that's fine. I didn't know this at my foundation interview when I was asked; I was able to explain what it is, but I didn't know the name. It's called a CONSORT diagram, and it's basically a flow chart that shows how patients move through your study. This example is from a randomised controlled trial, and it shows where each patient was included, where they were excluded and the reason why, and then how many went on to randomisation and how many went into each arm. Being able to track all of the patients through the study is a vital part of being clear and transparent with your research: the more transparent you are, and the more people can see what you've done and why, the higher quality the research will often be. The middle diagram, which I'm sure you may have seen before, is called a forest plot. It's used a lot in meta-analyses, and it's something that's really good to be able to glance at, quickly understand, and then explain. You can see the different paper names on the left, so you've got your Smith et al. from 1991, for example, and the plot compares their odds ratios. The length of the little arms is the confidence interval, while the size of the box indicates the weighting of the study: the larger the box, the more weight the study carries. It can indicate significance as well. So it's always worth understanding forest plots and CONSORT diagrams. Our last example, on the right, is a PRISMA flow chart. This is used in a systematic review, and it's where you track which papers have been included at each stage.
So: how many papers were identified on the different databases? For example, you've got Embase, PubMed, CINAHL, et cetera. These are all different databases with different specialties attached to them. Typically in medicine we tend to use PubMed. You will have heard of Cochrane, the collection of big systematic reviews; the Cochrane Library is a very good resource for us. Embase is another commonly used database, and CENTRAL is commonly used when you're doing a systematic review. It's always worth discussing with your supervisor which databases you should use, because, for example, if some of your project's research is based in nursing, you'll want to include a nursing database as well. Your supervisor will be able to guide you, but in general, when we're doing a systematic review, we always include at least three databases, just to make sure there's a broad breadth of knowledge and papers we're drawing from. So those are some common examples of diagrams you will see in research. This brings us on to critical appraisal, which is what I want to spend a bit more time on, because being able to read and synthesise a paper is a vital skill. Even if you don't like research and don't want to do research, you will have to use your critical appraisal skills more than you think; it is a necessary part of medicine. My best advice is to use a checklist. I personally like the CASP checklists, and I'll take you through some of them in a minute. Others exist, and I think Oxford have produced a few, but always use a checklist: it helps to frame what you're doing.
It will help to solidify and structure your answer. When you're presenting a patient back on a ward round, for example, or to a senior, you would normally use the SBAR proforma: situation, background, assessment, recommendation. That structures what you're saying, makes you seem really slick, and makes the person on the other end trust you. It's exactly the same with using a checklist for appraising: you're not going to miss anything, it gives you a structure, and it makes you come across as more polished. Even if someone else has more information than you but didn't use a checklist and their answer was jumbled, you would come across as more knowledgeable just because you've structured things better. And as you know, for OSCEs, the way you approach things often makes more of a difference. There are other quality assessment tools as well, for example MINORS and the NIH quality assessment tools, so it's worth bearing in mind that quality assessment and critical appraisal go hand in hand but are different concepts. If you are doing a systematic review, your quality assessment can help guide how much you comment on a paper and how much emphasis you put on its findings. Critical appraisal is the next step above that, I would say: it takes that quality assessment, what you've read and learned, and then really challenges it. Some additional considerations before we get into a worked example. What journal have they published in? Is the journal PubMed-indexed? If it's PubMed-indexed, that means it's meeting a pretty stringent set of criteria, which gives you more faith and trust in the paper than if it wasn't. Is it peer reviewed? Is there a review process, and what is that review process?
One journal might have just one person peer-reviewing each paper, which will not be as good as a journal that has three reviewers, so having an awareness of the review process is also important. Then we move on to impact factor. Impact factor is a measure of how much attention and consideration a journal's papers get. It's updated regularly, and it's essentially how many citations, on average, a paper in that publication receives. That matters because the more citations, the more influential the research, rightly or wrongly; a citation could be positive or negative, but in general a paper is cited because it found something important that people want to discuss and comment on. Impact factor can be fudged, though. If you're looking at a journal like the New England Journal of Medicine or The Lancet, those are very well-known journals with very high impact factors. They have earned those impact factors because they're very selective with the papers they take and they've got a good reputation: you know you can trust their research. Some smaller journals, however, will get the rights to older, more influential papers. For example, the PRISMA flow chart was originally published as a paper, and some smaller journals may obtain the right to republish that article. Then, when people search for PRISMA, the republished version will be the first link that comes up, because it's the most recently published, and they will cite it in their articles. That's a way of artificially boosting the citations a journal gets. So things to look at include: has the impact factor of that journal been consistent? Is it a well-known journal? Does it have a good history and a good reputation?
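The impact-factor arithmetic described above is just an average. As a hedged sketch (the figures are invented, and this mirrors the standard two-year definition: citations this year to a journal's items from the previous two years, divided by the citable items published in those years):

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: average citations per recent paper.
    The example figures below are invented for illustration."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal whose 400 papers from the previous two years
# were cited 1,200 times this year:
print(impact_factor(1200, 400))  # 3.0
```

This also shows why republishing a heavily cited article can skew the number: a few hundred extra citations to a journal with a small denominator moves the average a lot.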
What you will find in research is that there are ways of manipulating data, findings and impact factors, and this is why there's a critical appraisal process: it's about considering all of these things. Also consider the landscape at the time of publishing. Take some of the research I have personally done on distal radius fractures, which I've already mentioned. The evidence for intra-articular fractures, fractures that extend into the joint line, and the displacement seen there, is actually pretty poor. However, the world of orthopaedics has taken it and run with it, and no one has ever challenged it to a great degree; it has just become accepted that two millimetres is the threshold at which we treat. It's worth considering that at the time it was published it was really good research, and as we've become more modern and nuanced in our approaches, we've discovered some of the methodological flaws, and that has made us rethink previous research. So it's always worth considering what's going on around you. For example, in the COVID pandemic, any research coming out about COVID was really important and something we wanted to review and include in our clinical practice as soon as possible. Whereas if we were looking at, say, a heart valve operation, we might not jump on new findings as quickly, because we don't yet know the long-term implications of that treatment. Other things that are important to consider: the research could be published as open access, which implies a certain level of funding. If there's funding, the authors should have published all their protocols and should be held to a slightly higher standard, because you'll be able to track all their changes. Sometimes studies will also be linked to larger trials.
They might be a secondary or primary study from that trial, which is again worth considering. And have they published their methodology? For most randomised controlled trials, I would hope you'd be able to find a published methodology, a really detailed one that goes into more detail than the final paper will. All of these things are worth considering. The other point I put at the bottom there is the use of PPI: patient and public involvement groups. This is where you get different members of the public from different areas, ask them to come in, and discuss with them their understanding of the research and its implications. Often, when you find that a study used PPI, it's going to be a better study, because the authors will have considered more around the study than just "we've seen this one thing and we want to investigate it". This doesn't apply to all studies: if you're just doing an observational cohort study, for example, PPI might not be suitable, whereas for a large randomised controlled trial you definitely want to get them involved and have that discussion. So, I thought giving you a worked example would be good. We've popped a link in the chat, and I've just popped it in there again; that is for the study we'll go through together, so you can see how the checklist works and what my thinking is as I go through it, and then we can have a discussion if you're unsure or have any questions. I thought I'd also show you the CASP checklists. CASP is the Critical Appraisal Skills Programme, and they've got checklists for most of the different types of paper you will find. I cannot recommend these enough; I relied on them during my master's.
They are really good and really detailed, and as you can see, they literally take you through, step by step, the different questions and what to consider. We'll work through one together now so you've got an example. But in the meantime, does anyone have any questions, or anything anybody wants to raise or add before I start? Okie doke, I'll crack on. Just to give you a bit of background about this research, I'll read you some key highlights of the abstract. Prisoners are reported to have a very high prevalence of ADHD symptoms. Methylphenidate is a treatment we can use that might have an impact on ADHD, but little is known about its effect in prisoners because of their substance misuse histories, mental health histories, et cetera. So the aim of this study was to compare methylphenidate with a placebo in a randomised controlled trial in male UK prisoners. Their reported results, as we can see here, showed no significant effect; we'll go over significance more when we do statistics in the next session in the series. So their conclusion was that the routine use of methylphenidate in this population wasn't worthwhile, and that further research was required to work out how best to support patients in this group. Looking at our CASP checklist, the first question is: did this study have a clearly focused research question? This will be similar for pretty much every CASP checklist you go through. When it says a clearly focused question, it's asking you to define the PICO elements (population, intervention, comparison, outcome) that the question relies on. In a good study, this will typically be the last paragraph of the introduction.
In this one, the paper goes over its primary objective, tells you exactly who the patient population group are and how they've been defined, and gives you the secondary objectives. So this paper makes excellent use of its aim: very clear, very reassuring straight away. As a slight spoiler, I have chosen a very good paper to critically appraise, because there's more to pick out at the end. Next: was the assignment of participants to the interventions randomised? They've got an entire paragraph on their randomisation, and I can tell you straight away that they used the online system provided by the King's Clinical Trials Unit, a very well-known, very well-respected way of randomising. Again, that's a very good sign. Next: were all the participants who entered the study accounted for at its conclusion? We could read the paper in greater detail to find out, but they have also included their CONSORT diagram, so you can see that every single patient is accounted for, with reasons given for exclusion at each stage, and the final numbers who made it to inclusion. You can see that nearly 1,200 started and only about 200 remained at the end; you can judge for yourself whether you think that is an appropriate selection process, shall we say. Next: were the participants blind to the intervention they were given, and were the investigators blind? This asks about blinding, which is really common in randomised controlled trials: you don't want someone knowing which treatment they're having, because that could alter how they interpret, record and present results. If possible, at every single stage, nobody should know which arm a patient is in, so you can assess properly and there will be no bias. In this study, we can go back and look at their study design and their participants.
I believe they did blind in this study. There was a trial psychiatrist, and from my memory they definitely blinded the participants; I think they kept just one unblinded person tracking the patients, to maintain patient safety. Part of larger studies is that you have to record adverse events; that's part of trial management and not something we need to discuss in critical appraisal, but it is worth noting that during larger studies you have to maintain a certain level of patient safety. Next: were the study groups similar at the start of the randomised controlled trial? Typically in randomised controlled trials you'll have two arms: in this case one arm had methylphenidate and one arm had the placebo. What you want to avoid is one arm being completely white British and the other being every other ethnicity or subgroup, because that would alter your results. What's nice in this paper is that they included a baseline table, so you can directly compare each of the different ethnicities, educations, age of leaving school and all of these other factors between methylphenidate and placebo. You can see, just from glancing quickly down each column, that they're incredibly similar, so that's another good point for this paper. Then: apart from the experimental intervention, did each study group receive the same level of care? I can tell you that they did in this paper; we can go and read the methods if you want to, but exactly the same happened to both groups. This is really important in randomised controlled trials, because you want the only difference between the arms to be the thing you are looking at; you want everything else to remain the same, so nothing is left up to a chance difference.
Moving through our CASP checklist: were the effects of the intervention reported comprehensively? This is when you start to consider things like the power calculation: did they meet the power they wanted? You also look at the rates of drop-out, the sources of bias, and whether they reported P values (we'll go over those in the next session again). Have they reported everything in enough detail that you're confident you know exactly what happened, that no corners were cut, and that they've done what they said? Again, this means reading their analysis plan in real detail. You can see that they've discussed power here: they've done a power calculation, they know how many participants they need as a minimum, and they actually achieved that with the 200 patients they got. It's really important to look at these different points when you're reading a paper and consider how they could affect the end result, because at the end of the day your critical appraisal is always answering a couple of questions. Will it affect my clinical practice: do I want to change what I'm doing based on what this paper has found? Should I include this paper in my systematic review: is it a good-quality paper, is there a reason to exclude it, or do I include it but comment on some of the methodological flaws I found? And when you're doing critical appraisal, you might be asked by a senior, "What are your thoughts on this?", and you could quote this paper and say why it's positive or negative. So if you're asked to review a paper, what I would say is: always give a brief summary, the three main positives of the methodology, and the three main negatives.
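To make the power-calculation step concrete, here is a minimal sketch of the standard normal-approximation sample-size formula for comparing two group means. The effect size and standard deviation below are hypothetical, not the trial's own figures, and a real trial would also inflate the number to allow for expected drop-out:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(delta: float, sigma: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm needed to detect a
    mean difference `delta` between two groups with common SD `sigma`,
    at two-sided significance `alpha` and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_power) ** 2) * sigma ** 2 / delta ** 2
    return math.ceil(n)

# Hypothetical numbers: detect a 5-point difference on a symptom
# scale with SD 12, at 5% significance and 80% power.
print(sample_size_per_arm(delta=5, sigma=12))
```

This is the calculation hiding behind the sentence "they know how many participants they need as a minimum": if the achieved sample falls below this number, the study is underpowered and a non-significant result is hard to interpret.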
Don't worry if there aren't three main positives and three main negatives; if you can just say it's a really well designed trial and you can't actually find any methodological flaws in it, that is perfectly OK to say. Then obviously look at the outcome and how it fits into the wider knowledge. Next: was the precision of the estimate reported? That is, do they report their confidence intervals? Again, this is more statistics that we will come on to in the next session, but having a good understanding of your P values and your confidence intervals is really important; they go hand in hand when you're reading a paper. Then: do the benefits of the experimental intervention outweigh the harms and costs? For this, you can consider whether they have ethical approval and whether they have accounted for any adverse events. For example, a patient death whilst in the study would be a very serious adverse event that you would want to consider and look at in more detail. This is also where we can bring sustainability into research; it's where, personally, I think we should be going a little bit (so this is me bringing my own belief into the teaching session). Sustainability is really important when considering cost-effectiveness, and the impact on patients is something else that really matters. So when you're looking at a treatment, you might have heard of QALYs: quality-adjusted life years, which measure the quality and length of life a treatment gains for the patient. Let's say open heart surgery gives a gain of one QALY; you obviously balance that against the cost of that treatment. Whereas the alternative medical therapy, let's say giving a beta blocker, increases quality of life by only 0.5 QALYs but is much cheaper. It's about balancing them against each other, so that's always worth considering as well.
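The balancing described above is usually expressed as an incremental cost-effectiveness ratio (ICER): the extra cost divided by the extra QALYs gained. A tiny sketch, with made-up costs (the £20,000 and £500 figures are purely illustrative, not real prices):

```python
def icer(cost_new: float, qalys_new: float,
         cost_old: float, qalys_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra
    QALY of the new treatment versus the comparator."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical figures echoing the talk: surgery gains 1.0 QALY at
# £20,000; a beta blocker gains 0.5 QALYs at £500.
print(icer(cost_new=20_000, qalys_new=1.0, cost_old=500, qalys_old=0.5))
# → 39000.0, i.e. £39,000 per additional QALY
```

A decision-maker would then compare that figure against a willingness-to-pay threshold to decide whether the more effective but more expensive option is worth funding.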
Then, whenever you're reading a paper, you'll always compare it to your own local population and the patients you're treating. For example, this study happened in UK prisons, so it's relevant for UK prisoners. It wouldn't necessarily be relevant for, let's say, prisoners in Thailand, who might face very different prison environments and very different treatment. So it's always worth considering that, especially if you're reading a study looking into, again, open heart surgery: there's no point in studying that paper if there's no way your hospital can provide that treatment. Almost as soon as you're reading it you can say, "Well, this looks great, but we can't provide it; we need to look into an alternative treatment, or the ability to transfer to a different hospital for this treatment." It's always worth considering what you can provide at a local level. This is the generalisability of the results: if they're not generalisable to your patient group, it could be the best paper in the world, but it's not helpful to you. So it's always about reframing the paper and putting it into perspective, and that's really important during your critical appraisal. Then: would the experimental intervention provide greater value to the people in your care than any of the existing interventions? This is comparing the new intervention against the current interventions, and this is where your larger syntheses of evidence, for example your meta-analyses or systematic reviews, are really important, because they directly compare treatments and allow you to form a better judgement. So, things about this paper that you could pick out: it's a very good paper from a methodological point of view, and the results were very well reported. This is where you then move into the discussion section and their limitations, where they will always discuss how limited their study was.
And what they think the impact of that would be. For context, there has recently been a methylphenidate shortage, so patients who need it have not been able to get it; and this paper found that methylphenidate didn't really have an effect. There were also no violent incidents during this study, which is quite good to note. It's also worth noting that there's a big argument at the moment around prisoners and the prevalence of ADHD: is it potentially over-reported because patients are in a low-stimulation environment where they might exhibit some of these behaviours more than they would in a higher-stimulation environment outside of prison? So again, it's worth framing the findings of the paper against the current situation and the local patient group you're treating. That's just a very quick run-through of a critical appraisal. I wanted to give you the chance to see some of the thoughts you go through as you're reading a paper, and some of the features that other people can bring out of a paper. If you were interested and wanted to do a bit of reading based on what I've said about this paper, and with the CASP checklist as well, there is a really good commentary (I think it's the Cortese commentary) that discusses this paper and some of its findings. That could help, because you could practise on this paper and then read what an expert in this field has written and said. But I apologise: that was a very brief, whistle-stop tour of an example. I could talk about this for much, much longer, but I appreciate not everybody wants to do research. So I hope that has helped. Does anybody have any questions or any thoughts?

That was a really great talk, thank you, George. If anyone has any questions, feel free to pop them in the chat box and we can answer them for you.
Yes, and as you can see, Hansa has just posted the details for our next event in the chat box. Cool, everyone happy? Any further comments? A really good talk. I'm just going to briefly go over the next couple of events that we've got going on, guys. My name is Hansa, I'm a member of the NSTS team, I'm facilitating the series, and I'm just going to share our next events in a second. So, upcoming: we've just had the first two sessions, with George giving an excellent talk on "The Academic Language - How do I understand a paper?". The next two events are a whistle-stop talk on medical statistics, which should be Tuesday the 18th, so that's next Tuesday, followed by "A career in academia and research: what it can look like", and that's again on a Tuesday. I apologise for the typos. Just to go over what that's going to include: next week we have Doctor Henan Chaud, who's an anaesthetics registrar, and he's going to give a whistle-stop tour of medical statistics. He's got a background in statistics, having done further study through the Harvard University School of Public Health, and a background in big data analyses with relevant publications. And then to round off the series, we've got Doctor Anthony Howard, who's an academic clinical lecturer and orthopaedic registrar working at both the University of Leeds and Oxford University, and he's going to talk us through his career and the things that he's done: the PhD he's undertaken and how he's got to the position he is in right now, working with numerous clinical trial units on quite big studies. So thank you, guys, thanks for attending today. Please make sure you fill out the feedback forms; you'll be able to get certificates through that, and we hope to see you next Tuesday. Thank you very much.