Summary

This medical teaching session covers whether cancer screening tests can extend an individual's lifespan. Data from more than 2.1 million individuals across 18 randomized controlled trials were analysed to determine whether such screening tests had any significant effect on lifetime gained or mortality rates. The researchers found that only two screening approaches, sigmoidoscopy for colorectal cancer and a single trial screening for four different cancers, produced statistically significant lifetime gains. The session gives clinicians an up-to-date insight into the current evidence on the effects of cancer screening tests on lifetime gained and mortality rates.

Generated by MedBot

Description

Welcome back to the Surgical Trainees in the East of England Research Collaborative (STEER) Journal Club.

The following session will focus on Screening for Colorectal Cancer.

We are excited to introduce our three guest speakers - Mr Sam Hettiarachchi, Mr Alan Askari and Mr Kawar Hashmi!

The session is divided into three parts:

Part 1 - Critical Appraisal of Paper: "Estimated Lifetime Gained With Cancer Screening Tests: A Meta-Analysis of Randomized Clinical Trials" (30 minutes)

https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2808648

Part 2 - Guidelines you Need to Know for the FRCS

An overview of all relevant guidelines needed for the FRCS including a brief overview of important landmark papers in General Surgery!

Part 3 - Statistics for the FRCS

A summary of the key statistics you need to know for screening, plus a brief overview of Wilson and Jungner's criteria for screening!

Learning objectives


  1. Learn about the importance of assessing absolute and relative outcomes associated with all-cause mortality in cancer screening tests.

  2. Explain the approach taken by researchers to analyze data from randomized controlled trials regarding the effect of cancer screening on longevity.

  3. Differentiate between the benefits and harms of cancer screening tests.

  4. Analyze the findings of meta-analyses performed on 18 randomized controlled trials regarding the effect of cancer screening tests on all-cause mortality and cancer-specific mortality.

  5. Describe the implications of the study and identify areas for further research.

Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Of different cancers, and estimating the lifetime gained with these cancer screening tests. It comes mainly from the University of Oslo group, with contributions from other reputable universities elsewhere, mainly in the US I believe, and some from Australia and Japan. It was published online in JAMA recently, in August. Next slide, please.

A couple of slides about the background. Cancer screening is advocated to save lives, and we market it to our patients and to members of the public, claiming that if we detect cancers early using certain screening methods, we can potentially extend their lives. Commonly, the effects are measured by comparing all-cause mortality in people who underwent screening with those who did not. We do know that certain screening tests have harms: for example, perforations with colonoscopy and sigmoidoscopy, and septicaemia sometimes induced when prostate-specific antigen testing leads to prostate biopsies being taken. Subsequent treatment for certain small cancers, like surgery or chemoradiotherapy, can also sometimes cause a significant amount of harm. So the flip side is that a cancer screening test may reduce cancer-specific mortality and yet fail to increase longevity, if the harms to some individuals outweigh the benefits for others. Next slide, please.

So it's important to provide the public and clinicians with reliable estimates of the benefits and harms of screening, and also to give them enough information about whether lifetime will be gained and what the associated risk of mortality is. In most studies, the benefits of screening tests have been gauged by cancer-specific effects, and all-cause mortality is also one of the metrics which is always assessed. However, observed data on longevity from RCTs is very limited.
This paper aims to analyse the data from these already completed randomised controlled trials and to find out what the effect on longevity is. Next slide, please.

So two independent investigators searched MEDLINE and the Cochrane Library. They used a search strategy which is quite thorough and pulled out a number of reports, including RCTs and meta-analyses. I'll give the details about the reports in the results section, but basically they looked at RCTs and meta-analyses reporting cancer-specific and all-cause mortality as endpoints. If any discrepancies arose, they resolved them by consensus between themselves. They included mammography for breast cancer; faecal occult blood testing every year or every other year, sigmoidoscopy and colonoscopy for colorectal cancer; PSA for prostate cancer; for lung cancer, low-dose CT of current and former smokers; and Pap smears for cervical cancer. So RCTs comparing people who were either screened by these tests or not screened were included. Next slide, please.

They also took the follow-up time into consideration: they took only those trials with substantial follow-up, around 10 to 15 years, so that the comparison is more objective. PRISMA reporting guidelines were followed for the searches and selection. Absolute and relative outcomes associated with all-cause mortality, based on intention-to-treat comparisons of individuals who were screened and who were not screened, were used. The primary outcome was lifetime gained in the screening group versus the no-screening group, based on reported all-cause mortality. Next slide, please.

So for individual RCTs, the data was extracted from the individual trial, or if a recent meta-analysis of certain RCTs had already been performed, data was taken from there as well.
If a meta-analysis had already been performed on certain trials, they also looked at the individual RCTs and ran their sensitivity analyses on them as well. For lung cancer screening, three RCTs were found and there was no meta-analysis, hence a random-effects meta-analysis was first run using the DerSimonian and Laird model, and then the data extracted from that meta-analysis was used. For mammography screening, they took only those trials which used 50 years as the starting age of screening, and if there were trials with suboptimal randomisation, they excluded them. They also extracted cancer-specific mortality data from those RCTs.

So, in the results section: the reports which initially came back numbered around 4,134. After the first screening of titles, about 103 were left. Both researchers then selected 18 RCTs, which included 2.1 million individuals, a very large number. So in the end there were four RCTs comparing sigmoidoscopy or colonoscopy screening versus no screening, four comparing faecal occult blood testing for colorectal cancer, four comparing PSA screening versus no screening, three for low-dose CT lung cancer screening, and two which included mammography screening. The remaining RCT to complete the 18, not shown here, is the big PLCO trial, in which each individual was screened for all four cancers, including prostate, lung and colorectal cancer and, for women, ovarian cancer as well; this is also included in the main meta-analysis. Next slide, please.

So this table summarises all the RCTs and their main results in one big snapshot. The first column gives the main follow-up in years, and we can see there is a very decent amount of follow-up here. Then, obviously, there are the screening and no-screening arms.
In both groups they looked at all-cause deaths and cancer-specific deaths, and this then became the basis of the meta-analysis. Next slide, please.

So this is the meta-analysis performed on the RCTs already shown. If you look here, there are a few groups screened using different methods; the RCTs were combined, the data pooled, and the number of individuals is given according to the combined data for each screening test. There is an absolute risk of death from the target cancer per 100 person-years given for screened and non-screened individuals. The important thing here is that a statistically significant lifetime gain was observed in only two screening groups. One was sigmoidoscopy for colorectal cancer: when the data was compared between the screened and non-screened groups, a lifetime gain of 110 days was seen, with a 95% confidence interval of 0 to 274 days. So, with some caution, 110 days were gained in colorectal cancer. And in the large single trial comparing screening of individuals for four different cancers versus no screening (it was only one trial, so it is not really a meta-analysis, to be honest, just data pulled from that trial), a lifetime gain of 123 days was seen. But these are the only two statistically significant results where a gain in longevity was actually shown. These are the major results here. The rest of the screening tests did not make any difference to longevity; faecal occult blood testing, for example, really does not show much effect on lifetime gained.
Although there is a trend that the screening tests increase lifespan in individuals who are being screened, as a modest effect we cannot see any reasonably significant effect of all the screening tests combined. The important thing here is also that the researchers have not reported any heterogeneity, probably because all the included RCTs were very different from each other, since different screening tests were used. Next slide, please.

After running all these meta-analyses, a post-hoc analysis was also included, because the researchers were worried that the trials they had excluded might have skewed the data. So they included those trials, for example the mammography trials which screened women under 50 years of age, and ran the analysis again. Even then, no statistically significant lifetime gain was observed: only 4 to 7 days was gained in screened women, with a confidence interval which is quite wide. The mammography trials thought to have suboptimal randomisation were also added back into the analysis, and almost the same results were found. They also included the National Lung Screening Trial, which compared all-cause mortality in current or former smokers screened with chest X-ray versus low-dose CT. They initially did not include this trial because chest X-ray is no longer a screening test, but when they included it, again they did not observe any statistically significant lifetime gain. Next slide, please.

So cancer screening is a widespread norm in the Western world, and other developing countries are now taking it up as well. Fewer studies have looked at the correlation between screening and all-cause mortality.
This study has some explicit results for those whose longevity is altered by screening. The cumulative loss for those who are harmed by screening must be outweighed by the duration of cumulative gain; this is what the researchers argued, because that would be the metric to look at when we are offering screening to lots of individuals in the public. Next slide, please.

They acknowledge, however, that a mortality shift could be a phenomenon whereby people die not of the cancer for which they were screened, but of other causes; hence the data is not very accurate, which they acknowledge themselves. The study estimates are based on intention-to-treat analyses from RCTs, and this data may provide the most unbiased estimate of outcomes, but it can easily underestimate the efficacy of a screening test due to non-adherence and contamination. That is again their own thought process: yes, they think their data may be skewed. Next slide, please.

So individual trials may not have enough power to show an effect on all-cause mortality. To cover all-cause mortality, we need really large studies, and the individual trials probably did not have enough numbers in them. Yes, 2.1 million sounds very good, but the analysis is not really of 2.1 million individuals; rather, the individual RCTs each have a few thousand participants. And if you look at the contribution of cancer mortality to overall mortality, it is still a very small number, and the leading causes of death are others, like cardiovascular disease. Hence, to really gauge the effect of screening tests on cancer-specific or all-cause death, we have to have really large data and longer follow-up as well.
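The power point made above can be illustrated with the standard two-proportion sample-size formula. All the risks below are hypothetical figures chosen only for illustration (they are not taken from the paper): the screened cancer kills 0.30% of the control arm over follow-up, screening cuts that to 0.24%, and the 19.7% of deaths from other causes are untouched, so the all-cause difference is the same 0.06 percentage points buried in a much larger baseline.

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Participants per arm needed to detect p1 vs p2 with a two-sided z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# Hypothetical 15-year risks, for illustration only:
print(f"cancer-specific endpoint: {n_per_arm(0.0030, 0.0024):,.0f} per arm")
print(f"all-cause endpoint:       {n_per_arm(0.2000, 0.1994):,.0f} per arm")
```

Under these assumed numbers, the all-cause endpoint needs tens of times more participants per arm than the cancer-specific endpoint, which is why trials of tens of thousands can detect cancer-specific effects yet remain hopelessly underpowered for all-cause mortality.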
Although 10 to 15 years is a very decent follow-up, if we followed these individuals for maybe another 10 to 15 years, our data might start showing the effects of screening. In addition to lifetime gained or lost with screening, quality of life is also very important, so a quality-adjusted life years analysis matters too, and that again is very difficult to measure. And there is a big controversy over whether all-cause mortality or cancer-specific mortality is the better metric to gauge the performance of these cancer screening programmes. Next slide, please.

So, in conclusion: this meta-analysis is based on the best available evidence on cancer screening and lifetime gained, and it suggests that policymakers, organisations and clinicians should inform interested individuals of the absolute harms, benefits and burden of the screening tests we are offering them. Thank you very much.

You're welcome. I think the next speaker will want to take over from here. Before we move on, could I just ask everyone to mute their microphones, please? Thank you. Sorry, go ahead, sir.

OK, perfect. Thanks for an excellent presentation. So, in this slide we have put the AMSTAR 2 and CASP checklists and various other tools, but particularly in the exam I usually break down the critical appraisal of any paper into five main questions, and I try to answer them as I go along. I find it quite an easy and reliable way of answering, particularly in the FRCS, where you will not have a long time to discuss the paper. If you think about AMSTAR 2, it has about 16 points that you need to answer, and the CASP checklists have different questions to answer depending on the type of study. So these are the five questions I always ask when I am critiquing or reading a paper.
The first question is: what is the research question, and is it framed according to the PICO framework or not? The second is: what is the study design, how have they designed the study and what have they looked at? The third is: what are the results? Then: are all the results valid? And the last question is: will it affect my practice or alter my local policies? So, based on that, we will look at this paper briefly, but if you have any questions at the end, I am more than happy to answer. And if you have any statistical questions, I cannot think of a better man than Alan to answer them.

OK. So what is the research question? The basic question is: does screening increase your life expectancy, that is, do you live longer if you are in a screening programme, one way or the other? Why did they ask this question? Because most previous studies calculated the lifetime gained by cancer screening from extrapolated data, so they wanted to find out whether there is any benefit using the observed data. That is why they defined the population, looked at the intervention, looked at the control group, and set the outcome as longevity in the screening group versus the non-screening group.

Now, is the study designed properly, and what is the study design? They have done a reasonably wide search. If I am not mistaken, they searched the Cochrane Library and MEDLINE, but they could easily have widened the search criteria by adding something like PubMed and various other databases. They briefly mention that they looked at all languages and that two different investigators screened the results.
Now, it is not clear whether they carried out any grey-literature search, whether they looked at any unpublished data, or how they handled studies in other languages, so there can be a selection bias. And when there is a discrepancy or disagreement, high-quality studies usually have a third person to resolve those issues; here they discussed among themselves and agreed, which is not a bad thing, but it could be improved. One thing I forgot to mention: whenever you are critiquing a paper, the first thing to say to the examiners, particularly in the exam, is that JAMA Internal Medicine is a high-impact journal (if I am not mistaken, its impact factor in 2022 was about 39), not a journal that will publish just any research.

Then, based on their searches, they found the appropriate meta-analyses, and where they could not find a meta-analysis they went to the randomised controlled trials; and if a particular meta-analysis did not contain the most recent RCT, they included that trial in the study. So they tried very hard to include all the relevant studies and to reduce selection bias. They followed the PRISMA guidelines, and they excluded screening-versus-screening trials, which makes sense because those are not relevant to their study question; they only looked at a particular screening test versus a control group. Given the amount of evidence available, they went up to about 10 to 15 years of follow-up, which is not bad in terms of the available studies, but one might question whether that is enough to assess longevity in these patients, given their life expectancy and all those things.
And as I mentioned, they used an intention-to-treat comparison of individuals, which I think is the most reasonable way of reducing bias. But we have to understand that with intention-to-treat groups there is always a bit of contamination in the control group, and the measured efficacy goes down; I think the authors have accepted that this is going to be a problem. Interestingly, they did not perform any quality assessment of the selected trials, which might be a problem: I am not saying all the RCTs are bad, but if you pool a bunch of poor-quality results and create an outcome from them, the apparent effect, or absence of one, has to be questioned as to whether it is a true value or not. That is something I thought was quite interesting.

The DerSimonian and Laird method is essentially the statistical approach they used to create the meta-analysis, a random-effects method. It is a variation on the inverse-variance method which incorporates an assumption that the different studies are estimating different but related intervention effects. I am not 100% sure how they used it and whether it is accurate within the studies they selected. But one good thing: they have a clearly defined primary outcome, and they looked at the life expectancy and all-cause mortality of screening versus non-screening. Those are the main things about the design.

Now, what are the results? I am not going to go through everything again, but the interesting thing is that most of the randomised controlled trials did not specify the age or sex of the participants. So we do not know what age groups were being compared, and we are amalgamating all the data.
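The DerSimonian and Laird random-effects pooling described above can be sketched in a few lines. This is a minimal illustration of the method-of-moments estimator, not the authors' actual analysis code, and the trial effects and variances you would feed in are whatever the individual RCTs report (for example, log hazard ratios with their variances).

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-trial estimates with DerSimonian-Laird random effects.

    Returns (pooled estimate, its standard error, tau^2), where tau^2 is the
    method-of-moments estimate of between-trial variance.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect (inverse-variance) weights
    theta_fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fixed) ** 2)     # Cochran's Q heterogeneity statistic
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # truncated at zero
    w_re = 1.0 / (variances + tau2)                  # weights inflated by between-trial variance
    theta = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta, se, tau2
```

When the trials disagree more than chance allows, tau-squared grows, the weights even out across trials, and the confidence interval around the pooled estimate widens, which is exactly the "different but related effects" assumption the speaker describes.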
So we do not know what comorbidities these patients had, or whether those comorbidities or their age had any impact on all-cause mortality. For the breast cancer trials, the most recent meta-analysis was from 2020, but they discarded it because it had only 9.6 years of follow-up; instead they used the 13-year follow-up from the meta-analysis published in 2013, which was quite critical of breast screening. I do not know whether you remember; this is the one which suggested a substantial degree of overdiagnosis, with unnecessary operations being performed. So you cannot directly argue that they have a selection bias, but you have to consider that there may be a significant selection bias in the process.

And for the combined trial of prostate, lung, colorectal and ovarian cancer screening, there was a gain of 123 days. I wish they had put more effort into seeing why it showed a different, significant improvement as opposed to the other trials; what made it different? They have not looked into that at all, and I wish they had. And particularly when it comes to colorectal, which is close to my heart: they looked at only one trial for colonoscopy, whereas for sigmoidoscopy there were four trials, so there is always a difference between the outcomes when it comes to the powering of these studies. They looked at the faecal occult blood test, but we have now moved to the FIT test, which is also a faecal blood test but far more sensitive compared to the old guaiac faecal occult blood tests. So how do you compare those studies, and whether there is any impact on the outcome, we will not know. And no individual trial has the power to show an effect on all-cause mortality; I think that is a very important point, because we are amalgamating.
But we are amalgamating studies which are not individually powered to look at all-cause mortality, which is very important when analysing or drawing outcomes from these studies.

Are all the results valid? They used the statistical methods quite appropriately, I think, and they looked at the relative risk of all-cause mortality, but they have not explained the heterogeneity very well. As for cancer-specific mortality: there is a heading in the paper which says cancer-specific mortality versus all-cause mortality, but I cannot see any specific data looking at cancer-specific mortality, which might have had a significant impact in terms of patients' long-term outcomes or quality of life. They used 95% confidence intervals, which is quite standard, and otherwise I think the statistical analyses were carried out reasonably well.

Now, will the results affect my practice; should we stop screening patients? They do not quite recommend that. But if you boldly ask that question, the first point I would make is that each screening test is mainly designed to reduce the mortality specific to that particular cancer, rather than all-cause mortality. And we are looking at a group of people who have multiple comorbidities and are quite elderly, so we may be looking at it the wrong way around. So I think it is quite an interesting question, and I disagree with using all-cause mortality for screening purposes, particularly here. Now, they have quoted the outcomes of bariatric surgery, which suggest that having bariatric surgery improves life expectancy by almost three years. It is quite an interesting argument, but you are comparing apples to pears.
We know very well that by doing bariatric surgery, most patients gain other benefits, like reversal of diabetes and a reduced risk of myocardial infarction, so there is an effect on other comorbidities. But doing a cancer operation does not particularly improve a patient's other comorbidities, myocardial infarction risk, diabetes, et cetera. The other question we need to ask is: are we only looking at length of life, or are we considering quality of life and how patients spend their life with or without cancer? That is very important. We also have to look at the cost to the healthcare services. And there are certain biases, selection bias and significant heterogeneity, and moreover the studies are not powered for all-cause mortality. So although this study on its own shows that screening will not make a significant benefit, I doubt it is a true reflection of the screening process. So, based on my reading of this paper and this data, I do not think it will affect or alter my current practice.

Great, thank you very much. Let me just stop sharing the screen. OK, lovely. Thank you to Kawar and Sam for an excellent, very comprehensive presentation; and Sam, I think you covered a very nice outline of how to critique a paper for the exam. Sam, can I ask, just before we go into the paper: you are only given half an hour in the exam to read the paper. Any thoughts on how you would approach that, and which bits you might miss out?

Yeah, I would definitely go through the abstract thoroughly, because if you spend enough time on the abstract it gives you a really nice idea of what the paper intends to deliver and also what you should expect.
And also, looking at the conclusions, you can get an idea of whether this paper is going to make any difference or not. Always look at where the paper is published, particularly if it is, say, BMJ Open or somewhere else where you have to pay money to publish a paper; then you always question whether those papers are of good quality. But at the same time, you can have quite a rational discussion with the examiners. So I would read the abstract at least twice and then quickly skim through with the five questions in my head; I always ask why they did it and how they did it. And do not get bogged down in the results too much, because you are not going to remember all the numbers, and you are not going to discuss all of that within a short period of time. Most of the time the examiners are interested in what the research question is, whether the authors followed due process in selecting the studies and carrying out the methodology, and then they will focus on your discussion; and always look at the limitations. If you are running out of time, just look at the limitations and the conclusion and skip the discussion. That is how I would do it. They also provide you with a piece of paper, so jot the important points down quickly, and when you discuss with the examiners you can always refer to that as well.

Yeah, I agree. I think the abstract basically gives you a summary of the whole paper, and even in the introduction you just want to make sure you see what the aim is. I do not think I really bothered with the results; I just looked at the tables and graphs that summarised them, and then the authors themselves will give you their limitations, so that gives you a couple of things to talk about in the viva. Just pick out those things.
OK. So with regard to the paper, are there any comments? I would like to open it up to the floor; you can either unmute yourself or put it in the chat.

One interesting observation I found when I was looking at the background and the published responses, which I think I have sent to Mr Hettiarachchi as well: Duffy is one of the leading researchers who has done a lot of work on screening-related studies and on reporting their results, and his observations were really interesting. He actually had a paper published earlier, elsewhere, where he argued that all-cause mortality is not the metric of choice when you want to assess a screening test. He argues quite strongly, and with a lot of sense I think, that cancer-specific mortality is the only metric which should be used when you are assessing how beneficial a screening test would be. That was one of the observations I thought I should share. I went and read one of his old papers, where he set out the scientific argument for why only cancer-specific mortality should be looked at: if you want to measure the effect of a screening test on all-cause mortality, you have to have such a large number of individuals, followed for such a long period of time, that it is unnecessary and not productive.

On that note, thank you very much; that is a fantastic, succinct explanation. Sorry to jump in there, but it is a good one to pick out: does anyone know why it is important that we look at cancer-specific rather than all-cause mortality? It is fine if nobody wants to volunteer.
But ultimately, what has been very eloquently said is that when you're looking at cancer-specific mortality, as the term suggests, you're looking at deaths that are directly related to the cancer. You're not counting things like heart failure, falling down the stairs, or getting hit by a bus, because those would add an extra element of uncertainty and variability. For example, if somebody lives in a high-risk area or has other comorbidities, their all-cause mortality will be higher because they are generally at more risk of dying, but their cancer-specific mortality may be no different. And if you're interested in evaluating this screening test and the cancer treatment then, however unfortunate and terrible it is that somebody should die from another cause, it shouldn't be blamed on the cancer or the treatment they had. That's why it's very important to distinguish those two. Sorry to jump in there, but that's one of the things that tends to get asked, and that's why in these circumstances cancer-specific mortality is more important. Thanks. Anyone else have any other comments? Again, either unmute or pop it in the chat, or ask a question for the speakers. The thing I picked up on, and I think you may have mentioned it, was that they only followed patients up for maybe 10 to 15 years, something like that. We normally screen people lifelong from a given age, so that doesn't seem like a very long time to me. I think they were limited mainly by the studies available. And there is a statement saying that they removed poorer-quality trials, but there's no direct information about how; I'm sure if you ask, the authors can provide the details. It's to do with the quality of the studies as well as their availability. Yeah, makes sense.
And so with this: in colorectal cancer, or in any cancer, you have different cancers with different biologies, and that in itself would introduce heterogeneity into these RCTs. So they did a random-effects analysis, is that correct? Yes, the meta-analysis they performed themselves is a random-effects meta-analysis. Yeah, that's right. I mean, if you think about it, we carry out screening over about 14 years anyway. And we don't know the mean age in the groups they selected, or the male-to-female split; we cannot find any of that data unless you go into each individual study. No, that makes sense. OK, well, thank you both for that very comprehensive and thorough overview and critique. I think it's extremely useful for the exam, and the more of these we do, the more people will get an idea of how to structure their answers and what to look out for in a paper. So thank you, really appreciate that. I'd like to move on to Alan Askari, a consultant upper GI surgeon in Luton, who will go through some principles of screening and possibly some statistical aspects of the paper which will be useful for the exam. Over to you, and thank you very much. Sorry to jump in there, but I thought I should just clarify that very important point. I'm delighted, and it's an honour, to join the STEER Collaborative's journal club again, and well done to the team for excellent work in continuing and expanding this. It is also delightful to see so many old friends and colleagues, and new ones. And I'm delighted that Mr Hettiarachchi, who's a long-time friend of mine, thinks that bariatric surgery is worthwhile.
So thank you for validating my career choice. And apologies for the background; as I'm sure Mr Hettiarachchi will attest, I've got excellent taste in interior decoration, but this is just a place I'm staying in for a conference. Let's get started. I'm going to share my screen; hopefully you can all see that. We can see that. Excellent. We'll talk a little bit about screening and meta-analysis: some of the terms they've used, and what screening is. I'm not going to go through the paper specifically, because that's already been done excellently by the team, but we'll talk in general terms about what screening is, what a meta-analysis is, and some of the terms used. So we'll cover the definition of screening, the principles and criteria of screening, and some of the current programmes in force in the United Kingdom. Then, crucially, we'll talk about what a meta-analysis is. It seems like a very scary term, but actually it is not terribly difficult to understand at all, and I'll break down the statistics. This is one of the areas a lot of trainees feel a little bit nervous about, but the statistics you need to know, even at FRCS level for the higher exams, are actually not all that complicated, I promise. It would be good to get some audience participation as well, so we can have a more engaging discussion. So, what is screening? By the way, with all of these terms, what you want to do in the exams, or in general, is to give a very concise and simple explanation whenever you're asked to define a term or explain something.
People often think there is a compulsion to use very big words and make it complicated so it sounds intelligent. That's not true at all; it's quite the opposite. You want to break things down, much as we do with patients: not talking down to them, but using simple, everyday language to explain a complex or nuanced term. So, for example, screening is essentially the identification, among healthy (i.e. asymptomatic) people with no symptoms, of those who may be at increased risk of a particular condition or who actually have the condition. What are the principles of screening? Well, the condition itself is an important one. If you're screening for something bizarre and rare, like pseudomyxoma peritonei or a very rare form of sarcoma, there's not much value in that, because the chances are it's not a big problem for the general population. I'm not saying it's not important; I'm saying there's no point investing time and money in something that, say, only one in a million people suffer from. I'm using exaggerated examples there. There should also be a treatment for the condition: there's no point screening for something you can't do anything about with current technology or current treatments. And remember, when you're talking about screening, you're not talking about your hospital or your clinic; you're talking about your entire country, massive regions with millions and millions of people of different genetic make-ups, backgrounds, ethnicities, lifestyles and medical needs. So you've got to build a system that's robust and all-encompassing for a great number of people.
So you have to have the facilities not just to investigate people but also to treat them. For example, there's absolutely no point in screening for breast cancer if it's very rare (which it isn't) or if we have nothing we can use against it (which we do). Breast screening and bowel cancer screening are very much worthwhile because we've got the facilities to diagnose the disease, namely FIT tests and colonoscopies, or mammography, and we can do something about it: resections, chemotherapy, and so on. There should also be a disease that doesn't progress too quickly. For example, gallbladder or pancreatic cancer is probably less useful to establish a screening programme for because, one, it's rarer than breast or colon cancer, and also the disease is usually very aggressive, so by the time you actually detect something, the chances are you're not going to be able to do much about it, which defeats the point of screening. And, as mentioned, there has to be a test that's acceptable to the population. If a test is prohibitively expensive or difficult to perform, it's not going to be so useful. But a FIT test, for example, is relatively quick to perform, and mammography is relatively quick and cheap compared with many other investigations, so these are more worthwhile than testing for something very rare or very difficult to detect. So what are some of the screening programmes in the UK? This is not an exhaustive list by any means, but among the bigger ones: for cancer screening in our specialty of general surgery, breast and bowel cancer are probably the biggest; then diabetic eye screening; pregnancy screening, to make sure the fetus and the mother are healthy; and AAA screening in vascular surgery, for example.
And again, these are things we can largely do something about: we've got the facilities, we understand enough about the disease, and enough people suffer from it, so it's worthwhile running these programmes. What are the advantages? One advantage that's been promoted is the early detection of a tumour, or of a AAA, for example: you can watch it from an early stage and, once it reaches a certain threshold, do something about it. Screening can also reduce the risk of developing other conditions and complications. Cervical screening, for example, is worthwhile because you can prevent a lot of cervical cancer by detecting high-grade dysplasia relatively early, i.e. before it becomes a cancer, and that's why we screen female patients from a young age. It also has the potential to be life-saving: bowel cancer screening is thought to save around 2,000 lives a year, because you can detect polyps early, or detect bowel cancer at an early stage (stage 1 or 2) before nodal disease, and certainly before it metastasises to the lungs and liver, at which point it becomes extremely difficult to treat. The other thing is that, while a positive result is obviously bad news, a negative test, i.e. you do not have the disease, is reassuring and reduces anxiety and stress. There's good evidence to suggest, for example, that if you have a negative colonoscopy through a bowel screening programme at the age of 50 or 55, your lifetime chance of developing colorectal cancer (and Mr Hettiarachchi can correct me if I'm wrong) is very, very low. So if you've got a negative colonoscopy at 55, the chances are you will never get bowel cancer, or rather only a tiny proportion of such people will. So it allows you to reassure people and reduce anxiety and stress.
Now, even if your result does come back positive and you do have the disease, or a precursor to disease such as dysplasia or early changes, then again it gives you time to make informed healthcare decisions at an earlier stage and do something about it. There are, of course, disadvantages as well. Screening can give you false results; no test is perfect. You may think a test could be 99% sensitive and specific, which is unlikely, but let's pretend it is. That remaining 1% might not sound like a lot, but in a country like the UK, with some 60 million people, 1% is still quite sizeable, and it can lead to misdiagnosis and unnecessary treatment, and there can be false negatives as well. The other thing is that some of the tests can be quite invasive: colonoscopy, for example, is one such investigation, and screening can cause anxiety. We use the FIT test now, which is a lot better, but before that we used the FOBT, and there was a whole list of things that could make it positive, so you went on to have a colonoscopy you probably never really needed, because the reason it was positive was a bit of bleeding from angiodysplasia or diverticular disease. Screening also raises ethical and social concerns, including discrimination and moral dilemmas. For example, we know that people from certain socioeconomic, racial and educational backgrounds are more likely to engage with screening programmes. Are we then almost excluding people from lower socioeconomic backgrounds? This certainly is the case in bowel cancer screening in the UK, where more affluent areas tend to be more health-conscious and engage more with the screening programme.
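The point about "1% of 60 million" can be made concrete with a quick back-of-the-envelope calculation. This is a minimal sketch with entirely invented figures (a 0.5% disease prevalence and a 99%-sensitive, 99%-specific test are assumptions for illustration, not numbers from the paper):

```python
# Hypothetical illustration: even a highly specific test produces a large
# absolute number of false positives when applied to a whole population.
def screening_counts(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) expected from one round
    of screening the whole population."""
    diseased = population * prevalence
    healthy = population - diseased
    true_pos = diseased * sensitivity          # diseased people correctly flagged
    false_pos = healthy * (1 - specificity)    # healthy people wrongly flagged
    return true_pos, false_pos

# Made-up figures: 60 million people, 0.5% prevalence, 99%/99% test.
tp, fp = screening_counts(60_000_000, 0.005, 0.99, 0.99)
ppv = tp / (tp + fp)   # positive predictive value
print(f"True positives:  {tp:,.0f}")
print(f"False positives: {fp:,.0f}")
print(f"Positive predictive value: {ppv:.1%}")
```

With these assumed numbers, the false positives outnumber the true positives: most people with a positive result do not actually have the disease, which is exactly the misdiagnosis and anxiety problem described above.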
And therefore they're more likely to benefit from the screening programme than areas that are more socially deprived or of lower socioeconomic and educational background. Screening also has enormous costs. You may think: we're screening all these people with mammograms, and the vast majority are not going to have cancer. Are we using our resources, which are of course limited, in a wise way? Should we instead concentrate on people who are more likely to get cancer? That's a moral dilemma as well. So all of these points tie in with each other. OK, so let's talk about the paper, and about meta-analysis in general. What is a meta-analysis? It seems a very daunting term, but essentially it's combining and analysing the results of multiple studies, whether randomised or observational, on the same topic, fusing them together to get a more accurate and reliable estimate of the effect of an intervention or a variable. Rather than just looking at each individual paper, in a meta-analysis of any kind, irrespective of subspecialty, you pool all of these data together to come up with an overall impression of what the research suggests. Now, there are a few terms we should go through. These are some of the things that worry people, but they are quite simple once you break them down, and certainly those of you facing the FRCS will absolutely be asked at least some of them. OK: relative risk. What's a relative risk? A relative risk is the chance of an event occurring in one group, the exposed group, versus another (and apologies for the misspelling on the slide).
So, for example, the risk of lung cancer in those who smoke versus those who don't can be expressed as a relative risk. It's a measure of what we call effect size, and I'll talk about that in a second. What's a confidence interval? A confidence interval is the range of values within which the true value for the population lies. Taking the example of lung cancer in smokers versus non-smokers: if we say the relative risk of developing lung cancer in the smoker group is 1.5, the non-smoker group is taken as 1. So for every person in the non-smoker group who develops lung cancer, 1.5 people in the smoker group will; in other words, the smokers are 50% more likely to develop lung cancer than the non-smokers. By the way, these numbers are completely made up, so please don't go away quoting them; for smoking the real figure is far higher. I'm just using these figures as an example so it sticks in your mind. So the effect size is 1.5, i.e. 50% more likely than the unexposed group. That's our best estimate; that's our effect size; that's our relative risk. But how sure can we be? Well, we can be 90%, 95% or 97% sure, depending on what we set it at, but the international community has generally agreed on a 5% leeway either way, which is completely arbitrary. So we can be 95% sure that the true value lies between 1.3 and 1.7, which means between a 30% and a 70% increase in the risk of lung cancer in the smoking group compared with the non-smoking group. So this is the important bit: relative risk is a measure of the effect size. It tells you our best estimate of the risk of an event occurring, and that event could be good or bad.
By the way, the event could be resolution of diabetes after bariatric surgery, or of hypertension after taking a certain drug compared with a group that hasn't taken it. So it could be a good thing; it doesn't necessarily mean it's bad. It just means how much more likely the event is to occur, and whether that's good or bad depends on what the event is. So that's the relative risk, the effect size. The confidence interval says: our best estimate is a 50% increase, but it could be anywhere between 30% and 70%, i.e. between 1.3 and 1.7. Now, a p-value. One thing I need to stress is that the p-value threshold is essentially an arbitrary number that the scientific community across all disciplines, nuclear physicists, astrophysicists, biologists, chemists, has agreed on, and it's generally set at 5%; that's why 0.05 is the magic p-value we all like. However, some disciplines find that unacceptably high, so they set it lower. Essentially, all it is is the chance that these results were obtained by sheer accident, by chance. It does not say anything about how important your results are: a really low p-value doesn't mean your results matter more, it just means it's less likely they happened by complete random chance. We also mentioned the term intention to treat; you heard that several times. What does it mean? It means patients were analysed in the groups they were initially assigned to, i.e. in the way we intended to treat them, irrespective of what happened. For example, we assign two groups to two different chemotherapies for bowel cancer, and one group then can't tolerate theirs and crosses over. An intention-to-treat analysis will ignore the fact that they crossed over and analyse them as first assigned.
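The intention-to-treat idea can be sketched as a toy tally. All patient records here are invented purely to show the mechanics: outcomes are grouped by the arm a patient was randomised to, not the treatment they ended up receiving:

```python
# Toy sketch of intention-to-treat analysis with invented data.
# Each record: (assigned_arm, arm_actually_received, responded)
patients = [
    ("A", "A", True),
    ("A", "B", False),   # crossed over to B, but still analysed in A
    ("A", "A", True),
    ("B", "B", False),
    ("B", "B", True),
    ("B", "A", False),   # crossed over to A, but still analysed in B
]

def itt_response_rate(arm):
    """Response rate by ASSIGNED arm (index 0), ignoring crossover."""
    group = [p for p in patients if p[0] == arm]
    return sum(p[2] for p in group) / len(group)

print(f"ITT response rate, arm A: {itt_response_rate('A'):.0%}")
print(f"ITT response rate, arm B: {itt_response_rate('B'):.0%}")
```

A per-protocol analysis would instead filter on index 1 (the arm actually received); the difference between the two tallies is precisely the bias-reducing point made above, since crossover itself carries information about tolerability.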
That matters because the fact that a bunch of patients had to cross over to the other arm, to chemotherapy B, is important in itself: it may indicate that the first treatment was poorly tolerated. So intention to treat is quite an important way of reducing bias. Power: what's power? You'll hear that a lot in studies, especially randomised controlled trials. Power is essentially about the number of observations you need to tell whether there's a difference between groups A and B: antihypertensive drug A versus B, chemotherapy A versus B, surgery versus no surgery, whatever the groups are. All it is is: how many readings, how many observations, how many patients, how many lab samples do I need to be able to truly detect a difference if there is one? It's based on certain mathematical assumptions; it's not something you calculate in your head, but there are online tools for it, and it's not something you'll be asked to do in any exam. Being aware of what a power calculation is, though, is important. Alan, can I interrupt and ask, when papers say a power of 80%, could you clarify what that means in very simple terms? Because that could be asked. A power of 80% means you've got an 80% chance of detecting a difference between the two groups if one truly exists. You've got an alpha and a beta: the power, which is 1 minus beta, is 80%, and the alpha is 0.05, and power is usually set at 80%. So a power of 80% means that four times out of five you will detect a difference if there truly is one; that's your 80%. Your alpha is the 0.05 we arbitrarily assign. So we've said there's an 80% chance of detecting the difference.
But what is the chance that the detection that happened was completely random? An alpha of 0.05 means a 5% chance or less. Does that make sense? Yeah, thank you. Now, obviously, the more numbers you have in each group, the better your chance of detecting a difference, but it also depends on how common the event is. For example, a chyle leak after an oesophagogastric cancer resection, or a bleed, or a pancreatic fistula after an HPB resection is unfortunately relatively common, so comparing one technique against another is not going to need thousands upon thousands of patients to detect a difference: because the event occurs so commonly, the chance of you seeing it is actually pretty good. If, however, you ask what the chance of a leak is from a gastric bypass for obesity, then you're going to need a lot of patients, because the actual rate overall is less than 1%, in fact a lot lower than that. So you'll need many hundreds if not thousands of patients to see a difference, even if there is one. Does that make sense? We can do another session on power calculations in more detail, but ultimately a power calculation is done when you're starting a study: you want to compare group A and group B, screening programme A versus no screening, or one screening programme versus another. Which one is better, which is more likely to detect disease? Well, that depends on certain variables: how common is the event in one group versus the other, and what do we think the likely difference between them is?
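The "common event needs few patients, rare event needs thousands" point can be sketched with the standard normal-approximation sample-size formula for comparing two proportions. This is a simplified illustration, not a substitute for a proper power calculation, and the leak rates used are invented round numbers:

```python
import math

def patients_per_arm(p1, p2):
    """Approximate patients needed PER ARM to detect a difference between
    two event rates, using the normal-approximation formula for two
    proportions with two-sided alpha = 0.05 and 80% power."""
    z_alpha = 1.96   # two-sided 5% significance level
    z_beta = 0.84    # 80% power (1 - beta)
    p_bar = (p1 + p2) / 2
    top = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(top / (p1 - p2) ** 2)

# A common event (say a 15% vs 10% leak rate): a few hundred per arm.
print(patients_per_arm(0.15, 0.10))
# A rare event (1.0% vs 0.5%): several thousand per arm.
print(patients_per_arm(0.010, 0.005))
```

Both comparisons halve the relative risk, yet the rare event needs roughly an order of magnitude more patients, which is exactly the bypass-leak example above.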
For example, say I'm going to compare operation A versus operation B on patients with colon cancer. Which one is better? Well, that depends on what I mean by "better": do I mean leak rate, bleeding, survival? Let's say leak rate. If it's an area of the colon with a low leak rate, say a right hemicolectomy, you're going to need a lot more patients than for an anterior resection, which has a leak rate of up to 15 to 20% (of course, not in Mr Hettiarachchi's hands). This is what you need to consider, and a power calculation will tell you how many patients or observations you need to see this difference, if there is one. There may not be one; technique A may be just as good as B. But if there is a difference, you may need many hundreds if not thousands to see it for a right hemicolectomy versus a left hemicolectomy or an anterior resection, simply because the event is far rarer on the right than on the left. I hope that makes sense; if it's a bit muddling we can talk about it later in a different session. But ultimately a power calculation gives you an idea of how many data points you need to collect to see the difference. OK, I'm going to throw this out to the audience: please just shout out, raise your hand, or put it in the chat, and the rest of the team can monitor it for me. Somebody interpret this for me, because this is the kind of thing you'll get asked. Let's go back to the example we ran through: group A is non-smokers and their risk of getting lung cancer; group B, which we compare with group A, is smokers. So the outcome is lung cancer, and we've got two groups, smokers and non-smokers; the exposed group is the smokers.
What is the relative risk? By the way, relative risk and risk ratio are exactly the same thing; confusingly, different terms are used, but they mean the same. What's the relative risk of group B, the smokers, versus the non-smokers? It says 1.5. What does that mean? It means the chance of having cancer in the smokers is higher. Yeah, 1.5 times higher. Exactly. So for every one person in the non-smoker group who gets lung cancer, in the smoker group it's going to be 1.5 people, not one but 1.5. Excellent. So that's your effect size. I don't know if you can see the arrow, the mouse, but that's the effect size. What's the 95% confidence interval? We said our best scientific estimate is 1.5, i.e. 50% more likely, but what's the 95% confidence interval here saying? It's a measure of how accurate the estimate we have made is, and because the whole interval is on one side, it's not crossing over to the other side of 1, it's saying that the true value lies between those two limits. Yes, you're absolutely right, although rather than "accuracy" I would use "a measure of uncertainty", because that's a bit more reflective; but what you said is actually right. A simplified way of saying it is: OK, we said the smokers are 50% more likely, i.e. 1.5 times more likely, to get lung cancer; that's our best estimate. However, what could it be? Well, we can say with 95% certainty that our best guess is 1.5, but it could be between 1.3 and 1.7. Now, the more observations you have, the tighter this range is going to be.
So the more observations, the more certain you are of your effect size. A very wide confidence interval usually suggests you haven't got enough observations, that you're more uncertain. If you see a paper with a confidence interval going from 2 to 50, then you should be a little more suspicious that they haven't had enough observations, because it means the effect could be anywhere between two times and fifty times more likely: you're very uncertain. Good. OK, so we've got an effect size, which is the relative risk on the left; we've got a measure of uncertainty; and we've got a p-value. What is the p-value, in very simple terms? It means the result is statistically significant. And what does that mean? Well, your risk of a type 1 error, of incorrectly rejecting the null hypothesis, is in this case one in a thousand. Yeah, good. But very simply, and Sam's probably thinking this shouldn't be too hard, just give it to me really simply: the odds, the chance of you getting this result by random... Exactly. It's the chance that this result, this effect size and this confidence interval, this wonderful estimate we made, happened by pure dumb random luck. That's what it means, and we've set the threshold at 0.05. It doesn't tell us how important the finding is or how big the effect size is; it tells us how often chance alone would throw up a result like this if you did the experiment over and over with no true difference. And as you said, here p = 0.001 means that fewer than 1 in 1,000 such chance experiments would produce a result this extreme. So this is what it means: the relative risk is the effect size, i.e. group B, the smokers, are 1.5 times more likely than group A, the non-smokers, to get lung cancer; the confidence interval is the measure of uncertainty.
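The "repeat the experiment many times" framing of the p-value can be sketched with a small simulation. All numbers here are invented: two arms share the same true event rate (so the null hypothesis is true by construction), and we count how often chance alone produces a difference as large as a pretend "observed" one:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulated_difference(n, rate):
    """Run one null experiment: two arms of n patients with the SAME
    true event rate; return the absolute difference in observed rates."""
    a = sum(random.random() < rate for _ in range(n))
    b = sum(random.random() < rate for _ in range(n))
    return abs(a - b) / n

observed_diff = 0.05      # pretend our study saw a 5-point difference
n, true_rate = 500, 0.10  # invented arm size and shared true rate
repeats = 2000

# Empirical p-value: fraction of null experiments at least as extreme.
extreme = sum(simulated_difference(n, true_rate) >= observed_diff
              for _ in range(repeats))
print(f"Empirical p \u2248 {extreme / repeats:.3f}")
```

The small fraction printed is the simulation's version of the p-value: how often "pure dumb random luck" alone matches the observed result.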
We said it's 1.5 times, and that's our best guess; however, it could be between 1.3 and 1.7 times, and we're 95% sure that if we repeated this experiment many times over, 95% of the time the estimate would fall within that range of 1.3 to 1.7. The p-value is the likelihood of this occurring by sheer chance, so less than 0.05, and you can set that lower; it doesn't have to be 0.05. Can I just make a comment on the p-value, for when you're asked to define it in the exam? I think the word "odds" was used, and "odds" and "risk" mean something very specific in statistics, which the examiner will know. So I would use the word "probability", or "the chance". Yeah. OK. You feel like you're going into an exam and you want to use technical terms, but honestly, I promise you, the more you can simplify it, the more the examiner will know you understand it. I think I've probably mentioned this before, but Richard Feynman was a genius American physicist, a brilliant nuclear physicist who worked alongside the great physicists of his era, and not only was he brilliant in his field, he was also a brilliant teacher. He used something called the Feynman technique, a very simple idea that has been proven time and time again: if you cannot explain a theory, no matter how complex, to a twelve-year-old of reasonable understanding, then you don't know it well enough. Basically, the more simply you can explain something, the better you know it, and it's very true. So don't use words like "odds" when you talk about a p-value; talk about the chance of something happening. Everybody on the street understands the word "chance". Odds ratios are different; odds are different.
They're a bit more complicated, but when you're talking about a p-value, only use the word "chance", the likelihood of an event happening. OK, moving on, and apologies if this is taking a little too long, but it's quite important that we go over it. So: the forest plot. A forest plot looks really busy and really scary. This one, by the way, is taken completely at random off the internet, and it doesn't matter what it shows; that's why I didn't choose any forest plots from the paper, to demonstrate that you can interpret any forest plot very quickly. You've got a bunch of studies with authors' names on the left, the year, the design, whatever you're looking at. Then you've got a line down the middle, and that line is 1. If a study lands on 1, it means that for group A versus group B, screen versus don't screen, operation A versus operation B, blue pill versus red pill, the outcome is exactly the same: it doesn't matter which pill you choose, you're going to be in the Matrix or not in the Matrix either way. You get the relative risk on the right, and whereas we used 1.5 for smokers versus non-smokers, here we've got 1.45, so close enough. Now, someone mentioned that crossing 1 makes a result non-significant; this is what they were talking about. Here, say for group B versus group A, smokers versus non-smokers, the smokers are 1.45 times as likely to develop lung cancer, but it's not going to be significant, and I can tell that straight away because the interval of uncertainty crosses 1. Anything where the confidence interval starts below 1, at 0.98 or 0.96 say, and goes above 1 means that the result cannot be significant; it means we're too uncertain.
We do not know if the blue pill is better than the red pill. We do not know with any measure of certainty that people who smoke are at higher risk of lung cancer than those who don't smoke, because the interval crosses one. It means the truth could be anywhere from less likely all the way to more likely, and that means nothing; it means we're uncertain. So that's what each individual reading means. OK, now the pooled reading. If you look at the black squares, you'll see, and I apologise, this example doesn't demonstrate it very well, so you'll have to take my word for it somewhat, that some of the boxes are a bit smaller than others. That's the weighting. That reflects the number of patients they've got, the contribution they've made to the overall effect size at the bottom, the rhomboid (the diamond). So the bigger the black square, the bigger the contribution that particular paper has made. For example, Osaka K here in 2009 has got a relatively big square, while Hiroyama, for example, has got a smaller square, which means they contributed less to the overall data. OK, now let's go to the overall data. These are all the data of the individual papers, but down here, this is your meta-analysis, this is your overall data. So if you were to read the top one, Hiyama in 1984, on its own, you could be left thinking, well, I don't know if group A is more likely or less likely to develop anything, because it crosses one. It could be anything from less likely all the way to more likely, so you're none the wiser, and that's the same with many of these studies.
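For those who like to see the arithmetic, the "does the confidence interval cross one" check can be done numerically. Below is a minimal sketch with made-up 2x2 counts (purely illustrative, not data from the paper), computing a relative risk and its 95% confidence interval on the log scale:

```python
import math

# Hypothetical counts: events / total in each group (illustrative numbers only)
events_a, n_a = 60, 1000   # e.g. smokers who developed lung cancer
events_b, n_b = 40, 1000   # e.g. non-smokers who developed lung cancer

risk_a = events_a / n_a
risk_b = events_b / n_b
rr = risk_a / risk_b  # relative risk: the effect size

# 95% CI is built on the log scale (standard large-sample formula)
se_log_rr = math.sqrt((1 - risk_a) / events_a + (1 - risk_b) / events_b)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

# If the interval contains 1, the result is not statistically significant
significant = not (lo <= 1.0 <= hi)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f} to {hi:.2f}), significant: {significant}")
```

With these particular numbers the interval sits just above one, so the result would count as significant; shrink the sample sizes and the interval widens until it crosses one, exactly the "too uncertain" situation described above.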
But when you pool them together, you actually get a much clearer picture: group B is more likely than group A to have the outcome, in this case 1.28 times, i.e. 28% more, and that could be a good thing or a bad thing. So there is your relative risk or odds ratio, your effect size, the 1.28; and the 95% confidence interval, which we've already talked about, is in the brackets there, 1.10 to 1.48. OK, so that's the important bit. Then you'll get a P value, which unfortunately doesn't appear here; the P values shown are for the I squared, by the way. But let's say if this were significant, i.e. 0.05 or less, then that means that group B, whoever they are, is 1.28 times more likely than group A to have had this event happen, whether the event is good or bad. So those are the three things we talked about. That's what you're looking at in that bottom right box, in the oval circle on the right. Now, the heterogeneity. What does the I squared, the heterogeneity, mean? Essentially, heterogeneity is variation across your papers. The higher the I squared, the closer to 100, the more likely it is that your papers are all saying different things. A high I squared is certainly 70 to 75%, though there's no absolute cut-off. The higher the I squared, the more your papers are all over the place: some of them are saying, oh yeah, the red pill is much better than the blue pill, and the others are saying, actually no, the blue pill is useless, the red pill is better; it's all over the place. And here the I squared of zero, which is a little bit suspicious, to be honest, means that everybody is saying exactly the same thing, which is bizarrely not entirely true here.
But anyway, the lower your I squared, the more homogeneous your studies are in what they're reporting. What can affect I squared and heterogeneity? Well, it could be the results themselves, it could be the populations selected, it could be the methods; you don't know without reading the paper. It just tells you that, when you pool them all together, the higher the I squared, the more the studies are saying different things, and the lower the I squared, the more they're all singing from the same hymn sheet, as it were. OK. Now, we mentioned fixed effect and random effect. And by the way, if you're getting into this kind of discussion, you are absolutely flying in your academic station; you're heading towards maximum marks and they're struggling to ask you questions because you're doing so well. So don't stress too much about this; this is definitely not a pass-or-fail kind of question. But there are essentially two different types of model when we talk about meta-analysis: one is called a fixed effect and one is called a random effect. Now, in any mathematical model there are certain presumptions that are made. A fixed effect model assumes that all the studies are measuring essentially the same thing and the differences between them are essentially only chance: the methods are all very similar, the equipment they use is very similar, the doses they use, everything is quite uniform. So it's very suitable when studies are very similar and they have low heterogeneity, low variation in the methodology and the population size.
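For anyone curious where the I squared number actually comes from, it is derived from Cochran's Q, the weighted spread of the individual studies around the pooled estimate. A minimal sketch, using made-up log relative risks and standard errors (illustrative only, not figures from the paper):

```python
# Hypothetical per-study effect sizes (log relative risks) and standard errors
log_rr = [0.25, 0.10, 0.30, 0.22]
se     = [0.10, 0.12, 0.15, 0.08]

# Inverse-variance weights: bigger, more precise studies count for more
w = [1 / s**2 for s in se]
pooled = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled estimate
q = sum(wi * (y - pooled)**2 for wi, y in zip(w, log_rr))
df = len(log_rr) - 1

# I^2: the share of variation beyond what chance alone would give (floored at 0)
i2 = max(0.0, (q - df) / q) * 100
print(f"Pooled log RR = {pooled:.3f}, Q = {q:.2f}, I^2 = {i2:.0f}%")
```

With these studies Q comes out smaller than its degrees of freedom, so I squared is floored at zero, the same "everybody is saying exactly the same thing" situation flagged above; spread the effect sizes further apart and I squared climbs towards 100.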
Now, unfortunately, in medicine that is actually very rare, because different studies in different centres across different parts of the world use different techniques, different measures, different scans, different operations, and different time points at which they measure the outcome; it's going to be all over the place. So a random effect model assumes less. It basically says that, actually, the chances are there's a whole bunch of variation between how they recruited patients, how they stratified them, the ages, the comorbidities, the interventions, et cetera. Therefore the random effect model is less powerful at detecting an effect, but it's more robust and it makes fewer assumptions; it's a bit more open-minded. And it's usually the random effect model we tend to use more, just because, especially in surgery, the chances of us coming across 10 studies that are exactly the same, using the exact same techniques on the same population at the same time point with a similar sample size, are very, very rare. So, in summary: screening is a way of detecting illness, or the risk of illness, in a healthy population. These patients don't have symptoms, otherwise it wouldn't be called screening; if they were symptomatic, it would be something else. There are advantages and disadvantages, which we talked about. A meta-analysis is essentially a way of compiling data from various studies. It could be observational studies, it could be randomised trials, it could be lab readings, it could be just about anything; it's about fusing multiple studies together to get an overall impression. So we've done, you know, 50 studies all over the world at different time points, but what does it actually all mean? I want to answer a question: is the red pill better or the blue pill? Tell me. That's why you use a meta-analysis, to try and compile them. Some of the terms we used: the relative risk, otherwise called a risk ratio.
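The fixed versus random effect distinction described above can also be sketched in code. This is an illustrative DerSimonian-Laird calculation with made-up numbers (not anything from the paper): the random effect model adds a between-study variance term, tau squared, to every study's weight, which flattens the weighting and usually widens the uncertainty.

```python
# Hypothetical per-study log relative risks and standard errors (illustrative)
log_rr = [0.25, -0.05, 0.40, 0.10]
se     = [0.10, 0.12, 0.15, 0.08]

# Fixed effect: inverse-variance weights, assumes one true effect for all studies
w_fixed = [1 / s**2 for s in se]
fixed = sum(wi * y for wi, y in zip(w_fixed, log_rr)) / sum(w_fixed)

# DerSimonian-Laird estimate of tau^2, the between-study variance
q = sum(wi * (y - fixed)**2 for wi, y in zip(w_fixed, log_rr))
df = len(log_rr) - 1
c = sum(w_fixed) - sum(wi**2 for wi in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random effects: tau^2 added into each weight, so big studies dominate less
w_rand = [1 / (s**2 + tau2) for s in se]
random_eff = sum(wi * y for wi, y in zip(w_rand, log_rr)) / sum(w_rand)

print(f"fixed = {fixed:.3f}, random = {random_eff:.3f}, tau^2 = {tau2:.4f}")
```

Because these made-up studies disagree with each other, tau squared comes out above zero and the two pooled estimates differ slightly; with perfectly homogeneous studies, tau squared would be zero and the two models would agree.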
Now, the odds ratio we didn't talk about, but it's a similar thing; slightly different, but not hugely, and that's a discussion for a different day. Ultimately, it's still an effect size, a measure of effect. We also talked about the measure of uncertainty, the 95% confidence interval. So these two will tell you your best guesstimate of what the value is: the red pill is twice as good as the blue pill, but it could be anything from 1.5 times to 3.5 times, and that's your measure of uncertainty. But what are the chances this was complete and utter fluke? Well, that's your P value. Forest plots are graphical representations of the results of a meta-analysis, which is the combination of multiple studies fused together. That's all it is, in simple words, said very clearly. There are two main models, a fixed and a random model. The fixed model makes more assumptions: it assumes that the studies you've collated and jammed together to come up with this meta-analysis have similar methodologies, similar interventions, similar numbers, similar everything. A random effect model assumes that actually there's likely to be more variation, so it assumes less. And that's it from me. Thank you very much. Thank you, that was absolutely brilliant. Thank you, Alan. Any questions for Alan from the audience? Sorry, I'm getting a bit of feedback. No, no. Yeah, thank you. I think it's very easy to get bogged down and really intimidated by all this, but a lot of it is actually quite simple. Now, what I've thrown at you there is a huge amount in a short space of time, and we can do other sessions where we reiterate these points and the statistical methodology and so on.
But nobody's going to ask you to do a power calculation, or to sit there and work these things out; it can't really be done in your head. What they do expect is an understanding of what this means. That's all you really need. And a lot of the books, as brilliant as they are, try to give you a clever, sophisticated definition using the null hypothesis and the alternative hypothesis. You can try that, but you want a very clear understanding of what a P value is, just so that you understand it. A P value is the likelihood that the results were obtained by chance. That's it. That is it. You want it that simple, and the relative risk similarly. That's what it's about: really breaking it down into very simple, very clear terms. And the same with the forest plot. Don't be intimidated by it; just break it down. There are only really two or three small bits, even though there's a lot going on on the page, that you actually need to glean something from. Having the definitions is very important, a clear understanding is important, and a way of practising actually saying it is very important as well, because it's one thing having knowledge in your head, but under time pressure, exam pressure, with somebody you've never met before sitting across from you, you need to be able to convey, concisely and in a short period of time, that you understand this. That's what they're looking for: does this person understand this? Not, is this person going to write 200 papers next month? That doesn't matter; that's not a requirement. But it is a requirement for all of us to understand evidence. And, you know, the person questioning you is not going to be a professor of statistics, so he's going to understand it on a fairly basic level as well.
And even if they are, there's nothing in the syllabus that says this person needs to know advanced hierarchical regression to be able to pass the station or even get an eight in the section. There's nothing there. If you know it, great, but it's not required, and they're supposed to ask very specific questions. So nobody is going to put you in difficulty or try to test your deep knowledge; it's a very genuine, adult conversation. Those who have sat the exam will know that. Ultimately, what they want, and this is irrespective of what specialty you are in, surgery or not, GP, medicine, anaesthetics, whatever, is not higher statistics; there is no compulsion to understand that unless you do academia, and if that's what you want to do, that's fine. There's nothing in your mark scheme that says you have to know how to do these advanced things. However, what is a requirement is that we all understand evidence, and that means that, rightly or wrongly, we do need a certain understanding of statistics. It's a very basic understanding, but it's a certain understanding, so don't shy away from it. And of course, part of that understanding is what my friends and colleagues presented before, i.e. knowing how to critique a paper. Again, you don't need to write 200 papers to go in there and pass the exam; you don't have to be an academic. What you do need, however, is that in the future, when you're a consultant surgeon and somebody puts a bit of evidence in front of you, you can understand it. You don't have to be able to reproduce it or design your own study, unless you want to; that's not a requirement. But it is a requirement for you to be able to interpret what it means.
Yeah, I remember for mine it was some complex national quality-of-life project and I really didn't understand it, and I sat down and said, I didn't really understand a lot of this paper, but X, Y and Z, this is what I took away from it. So they're not looking for deep critical appraisal skills and deep statistical knowledge. Good. Well, thank you, Alan, I appreciate that. If no one has any further questions, I think we'll wrap it up there. I'd like to express my deepest gratitude to Mr Hettiarachchi, Mr Askari and Mr Hashmi for their presentations and for the time and effort they put in. And I'd like to thank you guys, the audience, for taking time out of your evening to make this happen. As I said, there's been a bit of a hiatus as we've got a new committee together, but this is the first of our monthly sessions. So if anyone coming up to the exam, or anyone really, wants some practice critiquing a paper, going through a paper and learning how to do that, then drop us an email and we'll put you on the list of people who want to present. Let us know what your specialty is and what area you're interested in, and together with you we'll pick a paper that you'd like to present and critique. And we won't go into it now, but we've got three or four collaborative projects up and running, so you'll get information about those and how to be PIs and associate PIs for those projects, with collaborative authorship. So look out for those on the WhatsApp groups and emails; they'll be coming from the Deanery as well. Sorry, just one thing on that, sorry to jump in again. In the exam, it doesn't matter what the paper is, and you're not going to get something that's super-specialised. You may get something that's very, very general, even medical, not even surgical, because ultimately it doesn't matter.
So if you're breast-trained and you get a really specific transplant paper, it's going to be in very general terms; it's not going to be about the nuances specific to that specialty, or vice versa. The papers they tend to give you are very general, because the test really is: do you understand what this paper means? Not, can you do a transplant if you're a breast surgeon, or can you do a complex breast reconstruction if you're an upper GI surgeon. The answer, of course, is no; if we could all do everything, there would be no need for separate specialties. What they really want to test is your understanding of the paper, and irrespective of the topic, the things we've talked about now are the same for every paper. It's universal. So if you can understand what we've just talked about and go over it, you'll understand any medical paper in any specialty: neurology, cardiology, whatever. Yeah, exactly. Good. All right, well, thank you very much again, everyone, and thank you again to our speakers, and we'll keep you posted on next month's session. All right, everyone, have a good evening. Thank you. Bye. Can the committee just stay on the call for one minute? I think you can stop recording now. OK.