Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
I will hand you over to Jared. He's going to teach you something a lot more highbrow than me: he's going to teach you about how to write an abstract.

Yes. All right, thanks, Keta. What's that? Gone to the Zoom? Oh, we're back. This is good. Right. Are you logged on, Michael? Good. Well, hello, everyone. Please say hello in the chat to tell us you're actually there; that would be lovely. Michael's looking at the chat. Anyway, I spoke to you earlier about large bowel obstruction and colorectal cancer. I'm an ST5 in general surgery in the west of Scotland, I'm research lead for Asg B MO Academy, and I'm also a PhD candidate. So I know a little bit about research and I'd like to share some of it with you. This was meant to be a half-hour talk, then 15 minutes, and now 10, so it'll be a whistle-stop tour. I'm going to talk to you about how to write an abstract and a little bit about statistics, and if I have time I'll go through one example. The title is a bit odd, because how you write an abstract is essentially how you critique an abstract, which is essentially how you conduct a study.

So I'm going to give you 10 points, and I probably won't read through all of the subtopics.

One: is the research question relevant? Does it address an important topic? You might find that it's very important to somebody but not very important to you; that's fine. But a journal, or someone assessing an abstract you submit to a conference, will be looking at whether or not it's a relevant research question. You know, has it been answered before?

Two: is it novel? Does the study add anything new? There are some seminal research papers, ones that people quote thousands and thousands of times. These are rare. Usually studies don't have quite the same reach, but it's still important to make sure the data gets out there for the benefit of humanity. It needs to be reliable data, and ideally it needs to be generalisable, otherwise it can't really be used in other populations.

Three: what is the research question? The slide says "what does the research question?"; sorry, that must have been a typo. The point is PICOS: every research question, or research study, should be covering these things: patient, intervention, comparison, outcome and study design.
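To make PICOS concrete, here's a minimal sketch of one way to jot a question down in that form, in Python; the research question and every value in it are invented purely for illustration.

```python
# A minimal sketch of framing a research question with PICOS.
# The question and every value below are hypothetical, purely for illustration.
picos = {
    "Patient":      "adults presenting with acute large bowel obstruction",
    "Intervention": "emergency laparoscopic resection",
    "Comparison":   "emergency open resection",
    "Outcome":      "30-day mortality",
    "Study design": "retrospective cohort study",
}

for element, description in picos.items():
    print(f"{element}: {description}")
```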
Four: was the study design appropriate? (Sorry, OK, I've got 15 minutes, I've been given five more minutes. This is great.) So, was the study design appropriate? They might have asked a good research question, but the design wasn't appropriate to answer it. That's also important. For example, we can't all do randomised controlled trials, one because they take about seven years, and they cost millions of pounds. But if you ask a question that can only be answered by a randomised controlled trial, then you probably shouldn't even start.

Levels of evidence. The Oxford Centre for Evidence-Based Medicine has given numbers and letters to types of evidence and grades of recommendation. Basically, it goes from systematic reviews of randomised controlled trials at 1a, down to RCTs, down to systematic reviews where the studies contained within them are not randomised. Then it goes down to cohort studies, your typical sort of epidemiological studies; case-control studies and case series are number four; and expert opinion is five. So basically, if someone writes a paper and says "I think this is what should happen", that's not evidence, that's just somebody's opinion. Evidence is derived from data.

Five: do the methods address the other reasons for an association between the exposure and the disease? In other words, is there bias, is there confounding, and have they accounted for random error? Confounding is when you say A causes B, but actually there's a C that causes B, or at least there's an influence or a relationship there that you haven't appreciated. And bias, essentially, is the whole reason we do research; it's the whole reason methodology exists, because we're trying to reduce bias and tease it out. That's why randomised controlled trials are randomised and are controlled. Bias can be introduced in any aspect of a study: things like selection bias, which is who you asked, or which patients you involve in the study; recall bias; performance bias. Recall bias is like when you ask somebody a question about what happened 10 years ago: they might not recall the answer, so you're not getting to the truth. And then even after you finish, there's something called citation bias, or publication bias, where the things that get published are usually the things that show significance, as in statistical significance. There are lots of studies that are conducted well, but because they don't show an outcome that is very sexy or expected, they might be rejected from a journal, which is extremely unfortunate. It's actually led to some journals that only publish papers with negative results, the Journal of Negative Results; it's kind of an interesting thing.

Six: do the data justify the conclusions? Sometimes you do a perfectly good paper, it shows this, but then the conclusions are completely different. I've seen this in a few abstracts. Don't overstate your data; it's a big no-no. Just be honest. Say "we looked for something and we didn't find it", or "we looked for something and we found the opposite". That's OK. Just be honest in your conclusion.

Seven: was the study performed according to the protocol? If you're doing a randomised study, or some sort of prospective study where you've written a protocol, just do what you said you'd do. If you do something different, if there's a deviation, there should be a reason for it, and usually you have to submit an amendment to your original protocol. I think that's pretty important.

Eight: does the study test a stated hypothesis? An abstract generally has introduction, methods, results and conclusion, and you can break the introduction down into background and aims. But really you have to start with a research question, an aim and a hypothesis. Everything else builds off of that.

Nine: were the stats performed correctly? In stats we trust. Here's a generic flow chart; you can take a screenshot of this if you like, and there are loads of things like it on the internet. Basically, it's good to have a wee flow chart in your head about which statistical tests you might want to use, depending on the data that you have. So it asks: what's your outcome variable? Is it continuous, like, I don't know, heart rate? Is it categorical, like GCS, which is ordinal actually? Or is it survival, something over time? There are different tests that you would use for each of these. The second column is: do you have one group or multiple groups? And then: is your data normally distributed or skewed? We'll talk about that in a second.
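To make the simplest branch of that flow chart concrete, here's a minimal sketch: comparing one continuous outcome between two groups. The data are invented, the SciPy functions are my choice of tooling rather than anything named in the talk, and the 0.05 cut-off on the normality test is just the usual convention.

```python
# A minimal sketch of the two-group branch of the test-selection flow chart.
# The data below are invented for illustration.
from scipy import stats

group_a = [72, 75, 80, 68, 74, 77, 71, 79]   # e.g. heart rates in group A
group_b = [85, 90, 88, 95, 84, 91, 87, 93]   # e.g. heart rates in group B

# Shapiro-Wilk tests the null hypothesis that a sample is normally distributed.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    # Both roughly normal: independent-samples t-test.
    result = stats.ttest_ind(group_a, group_b)
else:
    # If either group is skewed, treat both as non-normal: Mann-Whitney U.
    result = stats.mannwhitneyu(group_a, group_b)

print(result)
```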
Ten: conflicts of interest. Interestingly, there's a paper just published, in I think the Journal of Trauma and Acute Care Surgery, about other published papers on REBOA, which is resuscitative endovascular balloon occlusion of the aorta; it has recently been investigated as to whether it could help save the lives of people with exsanguinating haemorrhage from trauma. The paper suggested that a lot of these other papers on REBOA are not adequately reporting their conflicts of interest with the companies that produce the devices. That's just one example. Essentially, if you're publishing anything and you've been given money or funding, or you have associations with or shares in a company, all these things need to be mentioned when you submit a paper or an abstract, and usually there's a section to do that: do you have any conflicts of interest? Most people just say "none declared", but if you have any, it's important to note them. For example, if you've been funded by a drug company to look into a drug, there's potentially a bias there.

So, in summary: critical appraisal of research, which is what you're doing when you're thinking about writing a good abstract, means you have to know what questions to ask of yourself when you're designing a study, and they're the same questions you ask when critically appraising somebody else's abstract; it's essentially the same thing. How do you conduct a good study? You have to be systematic. And if you're ever asked in an exam setting to critically appraise something, or you're doing a journal club in your local hospital and you're asked to do it, don't just regurgitate the abstract and say "this is what they found". Be systematic and ask the questions. This is a skill like anything else, so you can practise it just like you'd practise a clinical history or a clinical examination. You essentially want to get to the point where you can say whether you agree with the authors' conclusions or not, whether you think it'll change practice, or whether you think it'll lead to further research questions. Oh, and just one point: publication in a renowned journal does not mean it's a good study. Even if something is published in the New England Journal of Medicine, it's still important to critically appraise it; it might still have some holes in it.

So I'm going to go through some basic stats with the few more minutes that I have. First, the normal distribution. This is the bell curve. You can see that 68% of the population lies within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three standard deviations. This only works with normally distributed data, in other words data that looks like this. If you've got a left or a right skew, that means it's non-normally distributed, a non-Gaussian distribution, and you'd have to use different statistical tests to compare two groups of non-normally distributed data. Another question: what if one group is normally distributed and one is non-normally distributed? You use the test as if they're both non-normally distributed.

The P value. We use this a lot. It's essentially one out of 20, less than 0.05, less than 5%. It was decided ages ago that this was going to be our sort of test of whether or not something is statistically significant. It's entirely arbitrary. You can select other P values; you can say the statistical significance you were looking for was less than 0.01, one out of 100, and so on. You can do whatever you like, but this is what's generally accepted. Better than P values, though, are relative risks or odds ratios coupled with confidence intervals; I'm actually not really interested in P values if you can show me a confidence interval. This slide is a bit busy, but these are important things to understand, and the relative risk between two things is probably something to Google at a later time. Confidence intervals are used to estimate the position of, say, an odds ratio or a hazard ratio, and essentially, if the interval crosses one, the result would be considered non-significant, but if it doesn't cross one, it could be considered significant. (Or zero, for that matter, if the estimate is a difference rather than a ratio.) That's hard to explain, but we'll see if we can have an example at the end.
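Since there may not be time for an example at the end, here's a minimal sketch of that idea: an odds ratio with a 95% confidence interval, computed from a 2x2 table using the standard log-odds-ratio formula. The counts are invented for illustration.

```python
# A minimal sketch: odds ratio with a 95% confidence interval from a 2x2 table.
# The counts below are invented for illustration.
import math

# Exposed: a outcomes, b non-outcomes; unexposed: c outcomes, d non-outcomes.
a, b, c, d = 20, 80, 10, 90

odds_ratio = (a * d) / (b * c)

# Standard error of log(OR), then a 95% CI on the log scale
# (1.96 standard errors either side, matching the two-SD rule above).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# If this interval crosses 1, the association would be read as non-significant.
```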
There's something called EQUATOR, the EQUATOR Network. Basically, if you're wanting to do any study at all, there will be a guideline published to tell you how to do it. If you look up equator-network.org, you'll find all these things. The most accessible data we have is a retrospective case review, or whatever, from your local hospital. That would be an observational study, so you look at the STROBE guidelines, second from the top. You click on that and it gives you a guideline of the steps to consider in the introduction, the methodology, the results and so on: how you should be conducting that study and how you should be reporting it. If you choose randomised controlled trials, then you'd end up with this CONSORT diagram. You have to explain how the patients were enrolled and allocated, what the follow-up was, whether any were lost to follow-up, and then what your analysis was: per protocol or intention to treat, which I'll talk about in a second.

And this is about diagnostic tests. You basically have the condition, positive or negative, and then you have the test outcome, positive or negative. In the middle there you've got true positives, where the test was positive and the condition is also positive, and true negatives, in the bottom right, where the test is negative and they didn't have the disease. But you also have type I errors, or false positives, where the test was positive but they don't have the disease, and false negatives, or type II errors, where the test was negative but they do have the disease. Suddenly everyone was interested in diagnostic accuracy when COVID came out, because everyone wanted to know what the best test was, and what the sensitivity and specificity of these things were. You can see on the bottom the definitions of sensitivity and specificity, and on the right you've got positive predictive value and negative predictive value. It's a bit dry, but it's important to know these things.
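Those definitions live on the slide rather than in the transcript, but the four quantities are standard, so here's a minimal sketch of computing them from a 2x2 table; the counts are invented for illustration.

```python
# A minimal sketch: sensitivity, specificity, PPV and NPV from a 2x2 table.
# The counts below are invented for illustration.
tp = 90   # test positive, disease present  (true positives)
fp = 25   # test positive, disease absent   (false positives, type I errors)
fn = 10   # test negative, disease present  (false negatives, type II errors)
tn = 875  # test negative, disease absent   (true negatives)

sensitivity = tp / (tp + fn)  # proportion of diseased patients the test picks up
specificity = tn / (tn + fp)  # proportion of healthy patients the test clears
ppv = tp / (tp + fp)          # if the test is positive, chance disease is present
npv = tn / (tn + fn)          # if the test is negative, chance disease is absent

print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}")
```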
Sample size. The larger the effect size, the smaller the sample size required to demonstrate that effect. So say you've got a difference between, I don't know, two ways of covering 100 metres: if you walk, it'll take you a minute; if you run, it'll take you 10 seconds. There's a 50-second gap between those two methods of getting to the end. It's a large gap, so you probably don't need a big sample size to tell you that walking is slower than running. That's a little example of what effect size is.

Power is a bit more complicated, and there are power calculations that you can do. You essentially need an estimated effect size, and you select your level of significance, both clinically and statistically, and then it tells you how many patients you need to enrol in a study. This is extremely important for randomised controlled trials, because it determines your pre-specified level of patient recruitment, which basically means time and money. If you're recruiting into a study of, I don't know, open versus laparoscopic surgery, and there have been a few of these done in the past, the difference between funding a study of 50 patients and one of 5,000 patients is a huge difference in money and time for the funders.
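As a minimal sketch of such a calculation, here's a two-arm sample-size estimate using statsmodels, which is my choice of tool rather than anything named in the talk; the effect size, significance level and power are conventional illustrative values.

```python
# A minimal sketch of a sample-size (power) calculation for a two-arm trial.
# The effect size, alpha and power below are conventional illustrative values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.5,  # estimated standardised difference between the arms
    alpha=0.05,       # chosen level of statistical significance
    power=0.8,        # chance of detecting the effect if it really exists
)
print(f"About {n_per_arm:.0f} patients per arm")
# A smaller expected effect size pushes the required sample size up sharply,
# which is why recruitment targets translate directly into time and money.
```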
So, intention-to-treat analysis. If you're doing a randomised controlled trial and you do an intention-to-treat analysis, that means that whichever arm you allocate people to, the treatment arm or the control arm, it doesn't matter whether they actually get that treatment in the end: if they're allocated to that treatment, they're analysed as per the allocation. Per protocol is different. Sometimes you might say: right, you were allocated treatment A, but in the end you had treatment B, so I'm going to do the analysis for what you actually had, which is treatment B. The difficulty is that intention to treat is the more statistically pure approach; per-protocol analysis is real life, but it's considered inferior.

And this is just a quick note to say that propensity matching is another way you can get around doing randomised controlled trials. You can basically say: OK, we can't afford a randomised controlled trial, but we'll ask the same question and look at retrospective data. Instead of randomly allocating one person to A and one person to B, we'll identify, out of a group of 1,000, about 100 people where 50 had one thing and 50 had the other, but the rest of their characteristics are the same. It's essentially like a randomised controlled trial, except we didn't have to spend all the money and time doing it. Well, a lot of papers are published this way, and I think they're rubbish, and this is why: in this example, they propensity matched two groups of people who were male, born in 1948, raised in the UK, married twice, living in a castle, wealthy and famous, except you can see very readily that you're really comparing apples to oranges. These are not the same person.

OK. How much time do I have, Michael? One minute? OK, I'll just give you the slide. So, putting all that together: how do you appraise a paper? How do you think about research? How do you then describe to a colleague, or from the stage, your thoughts on the critical appraisal of a research paper? I generally say: look, this was a study carried out by some institution, published in 2023 in the journal of whatever; the aim of the study was X, Y and Z, and the primary and secondary endpoints were whatever; this was a randomised controlled trial looking at whatever; the key findings were X, Y and Z. Don't just mention the P values, say what the findings were. And you could say: the authors' conclusion was that A was better than B; my impression is that their methodology was appropriate, but I'm not sure about the impact on practice and how it might affect future research; the negative things about the paper are that specific sources of bias may have been introduced, such as allocation bias or whatever; but I'd like to read the whole paper to find out more. That would be my two-second guide to describing the critical appraisal of a research paper. I'd give you an example, but we don't have time. So thank you very much for listening, and if you want to do some research or you have some ideas, feel free to get in contact with me.