A summary of the presentation can be found on our website.
Abdominal Imaging
Summary
In this on-demand teaching session, the British Radiology Journal Club hosts its third event, focusing on radiology topics relating to the abdomen. The club meets monthly and invites everyone to present, with previous events covering neurology and artificial intelligence. This session features two presenters. The first discusses a British Journal of Radiology paper on whether medical students can identify malpositioned nasogastric (NG) feeding tubes on chest x-rays, and whether an online learning tool can close that knowledge gap; this skill features in the Medical Licensing Assessment content map from 2024-25, making it a crucial aspect of patient safety. The second presents a recent BJR paper on an artificial intelligence tool for automated pancreas segmentation and measurement on CT to assist accurate and fast diagnosis of acute pancreatitis. Joining the club's social media groups lets you stay updated about future events and keep in touch with the community of radiology professionals and enthusiasts.
Description
Learning objectives
- To enhance understanding about the placement and recognition of nasogastric (NG) tubes on chest x-rays.
- To help medical students enhance their competence in identifying tube placements on chest x-rays through practice and feedback.
- To measure the effectiveness of an online learning tool in educating medical students about placement and recognition of NG tubes.
- To comprehend the importance and relevance of tube placement for patient safety and its place in the medical curriculum.
- To develop an understanding of the current practice and procedure followed after an NG tube is placed, and its confirmation using a chest x-ray.
Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
Hi, everyone. Thanks for joining on this Friday evening. I think we'll make a start now. Welcome to the British Radiology Journal Club; this is our third event so far, and it's good to see some new faces and some old faces in the audience. For those joining us for the first time, we're a journal club which meets on a monthly basis and gives everyone the opportunity to present. We usually have two presenters, as we do today, and we aim to focus on a different topic each time. So far we've covered topics such as neurology and artificial intelligence, and today we'll focus on topics relating to the abdomen. I'd like to invite everyone to join our social media groups: we have a WhatsApp community and an Instagram page, and we'd love everyone to join. By joining, you'll stay up to date with our latest announcements and the future events we have planned. We have another event planned next month, and that's the plan going forward. Through our website you can also join our newsletter list, and you'll find a summary of each event there too, so hopefully it should be of benefit to all. Now I'd like to invite our first speaker, Devia, to the stage to present her paper. Devia, if you can hear me, I'm just going to stop sharing my slides.

Hi, good evening. Am I audible? Yes. OK, I'll share my screen now. Perfect, I'll let you know when you're live. Am I visible? Is my slide visible? Yes, all good.

Good evening to everyone present here. I'm Devia Davis, working as a junior resident in cardiology, and I'll be discussing a paper on malpositioned nasogastric (NG) feeding tubes and whether medical students are able to identify them. Firstly, I'd like to thank the organisers for giving me the opportunity to present here. Moving ahead with the presentation: this paper was published in March of this year in the British Journal of Radiology, and it looked at a learning tool to identify whether medical students are able to recognise NG tubes on chest x-rays and how that can impact patient safety. The aim of the paper was to assess whether medical students have adequate knowledge to correctly identify NG tube positions on chest x-rays, and to determine whether an online learning tool could actually help and improve their knowledge in identifying tube placement. Previous studies mostly described patient safety and how to ensure proper placement of the tubes, whereas this study was helpful in trying to understand whether an educational tool could improve knowledge and, in turn, improve patient safety. What it adds to the current literature is this: NG tube placement has become an essential skill in the MLA content map from 2024-25, which means graduating medical students will need this skill as part of their curriculum in order to go ahead with their training. And this is really important because, on the wards, it is something that can directly impact patient safety.
The study also helps determine how an educational tool can affect the students' knowledge. Another issue is that misplacement of tubes in the lungs is a "never event", something that should never actually happen, and a recent increasing trend has been noticed, which this study also addresses. Current practice is that after an NG tube is placed, we confirm its position using a chest x-ray, and it has been noted that about one in 50 tubes are misplaced in the lungs and over 25% are unsafe for feeding. What we know so far is that during medical school there is limited radiology training compared with other subjects which are given more importance, so there may be a need to evaluate how radiology should be incorporated into the curriculum to build better understanding among medical students.

The methodology was as follows. Fourth-year and final-year medical students were selected from the same medical school; participation was voluntary, and they were contacted via the university and given information about the study. The inclusion criteria were fifth-year and fourth-year medical students belonging to the same medical school. All of these students had at some point undergone radiology teaching within their curriculum; it was not something done specifically for this project, but they had all had some sort of radiological training. The study design involved an intervention group, the fifth-year medical students, and a control group, the fourth-year medical students. Two tests were given as part of the study: test one was given prior to the intervention, which was the learning tool, and test two was given a week after access to the learning tool was given. Access to the learning tool was given only to the fifth-year medical students, the intervention group. The learning tool was available online for free and consisted of a teaching module which covered chest x-ray anatomy and went into detail about how to identify correct placement of the tube. There were interactive cases throughout the module, which included two sets of unique chest x-rays, about 40 images in total, along with self-assessment quizzes and immediate feedback. As I mentioned, the first test was given before the learning tool was introduced, and the second test was administered after the learning tool was given to the fifth-year students. Both tests contained 20 chest x-rays each; in fact, both contained the same chest x-rays, just in a different sequence. Of the 20 chest x-rays, 14 showed correctly placed NG tubes and six were malpositioned; among the malpositioned tubes, two were in the lungs and four were in the esophagus. The chest x-rays in the tests differed from those in the learning module.
So any student who completed the tests along with the learning module would have had exposure to around 60 unique chest x-rays: the 40 they went through in the learning module, and the 20 chest x-rays of tests one and two, which differed only in sequence. These chest x-rays had been reviewed by experienced radiologists to ensure they were accurate. This is a summary of the data that was collected: the final-year medical students were the intervention group and the fourth-year medical students the control group, and test one, with its 20 chest x-rays, was given before the intervention. After that, the fifth-year students were given the online tool, which included an interactive session on chest x-ray anatomy as well as details on how to identify NG tube placements, and then a second test was given. The fifth-year students took the second test after the intervention, whereas the fourth-year students did not have access to the learning tool and attempted test two directly; they were given access later, after the second test. Students were also asked to rate their confidence in identifying the tube placements on a five-point scale.

The results showed the following. The number of students who attempted the first test was 249 final-year students and 161 fourth-year students. After the intervention, far fewer students retook the test: 81 fifth-year students and 71 fourth-year students, so there was a clear drop in the number who came back for the second test. These students, as I mentioned, had all had access to some sort of radiological training before. Prior to the intervention, on test one, only 4.8% of fifth-year students identified all NG tubes correctly and 51.8% misidentified at least one malpositioned NG tube, whereas for the fourth-year students 3.1% identified all tubes correctly and 47.2% misidentified at least one malpositioned tube. So there was no significant difference between the groups prior to the intervention. After the learning tool was given to the fifth-year students, however, there was a significant improvement among the fifth-year students who attempted the test: the results showed an improvement of at least 40%, whereas in the fourth-year students, who did not have access to the learning tool, no significant improvement was observed. A positive point was that tube placement in the lungs, the most dangerous misplacement, was identified correctly by almost all of them. Students were also asked about their self-confidence in interpreting the x-rays: around 81% of the fifth-year medical students were very confident going into the second test.
But their actual performance did not match the rating they had given themselves, whereas the fourth-year students reported somewhat lower confidence and were more aware of their limitations. This is the chest x-ray that was misidentified most often: the one where the NG tube tip lies near the gastro-esophageal junction, and identification of this x-ray did not improve despite the learning tool. One reason suggested was that on chest x-rays the gastro-esophageal junction is not really visible as a defined point; usually we check that the tube deviates to the left and look for 5 to 10 cm of advancement of the tube beyond it, and since the junction cannot be seen as an exact point, that may be why no improvement was happening here.

One of the strengths of this study was that the learning tool was actually shown to improve the final-year students' performance in identifying the tube, showing that, given a learning tool, these students have a real chance to improve their knowledge of identifying misplaced tubes; identification of tubes in the lungs in particular was very good. The challenge was that the x-ray with the NG tube at the gastro-esophageal junction remained difficult for the students to identify, and the students' confidence in identifying the x-rays did not align with their results, so there was a gap between their self-awareness and their actual test performance. Another note was that the final-year students attempted the second test almost right after their final exams, and most of them were probably tired or not really interested in sitting another exam, which introduces selection bias. Also, the tests used high-quality images, which would not always be the case on the wards or with the everyday x-rays you come across.

Some recommendations were given. Since a certain error rate is usually seen when identifying tube placements, it could be made mandatory for radiology specialists to report NG tube x-rays to improve patient safety. Another point is introducing standardized training and educational tools to improve students' understanding of NG tube placement; it is not sufficient just to have a tool, we should also look at how to incorporate it into medical education so that it can actually help these students. For the future, certain considerations could be that we need the necessary tools for medical students to improve their knowledge, that we could consider mandatory radiology reporting of these chest x-rays before feeding is started, and that further research could explore how these learning tools can actually improve knowledge. The overall impression is that the medical students showed proficiency in identifying NG tubes but struggled with determining placement at the gastro-esophageal junction; the online tool genuinely helped improve performance, and certain systemic and educational improvements may be necessary to further improve patient safety in identifying misplaced tubes.
So that's about it.

Excellent, that was really good. If anyone's got any questions, please do pop them in the chat as we go. Two things come to mind for me. Obviously the most important thing with an NG tube is misplacement, because you can end up with aspiration and fatally harm somebody. But I think it also shows that with the right education this can work, because we see a massive difference in improvement once the education has been done, don't we? Do you think this could be transferable to, say, picking up fractures on plain film x-rays?

For picking up fractures on plain film x-rays, I feel learning tools could also help students improve their understanding, because even small fractures are sometimes missed, and the more exposure you have to x-rays, the better your understanding of how to read them becomes. Beyond learning tools, I also feel that exposure from the early stages of medical school is really necessary. While going through an ortho elective or an ortho rotation we do come across x-rays, but most of the time we focus on how to manage and how to diagnose; by providing a specific focus on reading radiographs, with dedicated classes going from the basics, I think starting earlier in medical school can make a difference.

Yeah, for sure. I think NG tubes are quite unique in that they're quite simple; it's quite black and white whether it's in the right place or not, it's either there or it isn't. But thank you for that great presentation. So I'd like to invite Jerry to the stage. I think he's on already. Devia, if you stop sharing your screen, please, that'd be good. Perfect. And Jerry, if you share yours. Can people see it? Not yet. Yeah, perfect.

So my name is Jerry. I'm an F2 in the northwest of England, graduated from the University of Manchester, and I have an interest in radiology training and, more specifically, in the application of artificial intelligence in radiology over the next ten years or so. The paper I'll be presenting, which was kindly offered as one of the choices, is "Artificial intelligence-based tools with automated segmentation and measurement on CT images to assist accurate and fast diagnosis in acute pancreatitis". This was published in the BJR in May 2024, which is quite recent, and it comes in line with the recent buzz around artificial intelligence, both in the mainstream world and in the medical field. The paper was written and the study was done in China, at the Hubei University of Medicine, and it involved a branch of the General Electric Healthcare team in China, one of the bigger private corporations involved in implementing AI in radiology as a clinical tool to assist radiologists. The aim of the team in China was to develop an AI tool with automated pancreas segmentation and measurement of pancreatic morphology, to help radiologists improve diagnosis and efficiency, in terms of how quickly they could spot normal pancreases, acute pancreatitis, or acute pancreatitis with complications on CT images. It was a study that was done retrospectively.
The whole idea of why they decided to do this is, of course, that acute pancreatitis is a condition we all learn about quite extensively from very early on in medical school; it has a very high incidence worldwide, and that incidence is increasing over the years. As we know from medical school and from experience when we are working, accurate and early diagnosis of pancreatitis, and with it early supportive treatment, is crucial, because pancreatitis in its most severe form has very high mortality rates, up to 30%, and quite a lot of patients end up having to go to ICU if they don't pass away from the pancreatitis. The main criteria for the diagnosis of acute pancreatitis are the Atlanta criteria, which are a mix of clinical, biochemical, and imaging diagnostics. You get abdominal pain suggestive of pancreatitis, the classic epigastric pain radiating to the back; you get an amylase or lipase, depending on which centre you're in, in the UK, raised usually more than three times the normal level; and you get the characteristic findings on imaging. Imaging can be either CT or MRI, but at the moment, in most centres, contrast-enhanced CT is the diagnostic standard for evaluation of acute pancreatitis, and it's very good at predicting severity and prognosis, which is one of the main reasons it's still the most commonly used, besides the fact that CT is so much more readily available. The imaging diagnostic criteria are peripancreatic edema and pancreatic enlargement, and it depends on the radiologist flicking through the images of the CT abdomen, spotting and making out the shape of the pancreas, as well as looking at things like fat stranding in the area surrounding the pancreas on the CT abdomen-pelvis. One of the important things is that acute pancreatitis can be complicated by pancreatic ductal adenocarcinoma, which is often missed, unfortunately, with fatal consequences for the patient if it is. All of this means that the diagnosis of acute pancreatitis ultimately depends quite heavily on the radiologist's visual judgment of what they see on the scan. And as we've all seen when flicking through images, it's often quite difficult to make things out, because it comes down to gray-value differences and very subtle differences in appearance that the radiologist has to pick out, and the experience level of the radiologist in particular has a big influence on that. Just to take an example of what I mean by these differences: this is from Radiopaedia. If you click through the CT abdomen here, you can see the pancreas and this fat stranding and edematous change that the radiologist has to pick out while clicking through these images. This becomes very complex because the radiologist also has to look at the overall picture besides answering the clinical question, making sure nothing else in the abdomen is missed. And as you can see, the grayness, the contour, and the distinction between what is normal and other tissues is quite difficult many times, and with edema it becomes even more difficult. That's the idea behind using artificial intelligence to help with this, and it can help through the use of two tools.
One is computer vision and the other is machine learning integrated with computer vision. Computer vision is one of the more recent applications of artificial intelligence, whereby computers are taught to interpret and understand images. The point is that computers, as opposed to humans, can analyze images at the pixel level, which no human would be able to do, and this allows us to delineate contours and pick out edema or subtle changes in gray-level differentiation at the pixel level. And through the application of machine learning on top of this computer vision, we can teach computers to learn which specific patterns are in keeping with acute pancreatitis, or with edema or peripancreatic fat stranding, and to highlight these changes to the radiologist. The ultimate aim is to help radiologists, as they view the CT images, spot subtle changes in the appearance of the pancreas that are in keeping with acute pancreatitis, or even with the more complicated form, acute pancreatitis with pancreatic ductal adenocarcinoma.

A bit of background, because while I was reading this paper I kept asking: what actually is artificial intelligence, machine learning, computer vision? It's quite a new topic and often not well explained at all in medical school; I think the curriculum is still lagging behind. So I just wanted to give an overview, since machine learning and computer vision are the most commonly thrown-around terms. If you type machine learning or computer vision into the internet, you just see these very complex diagrams, and terms like neural networks get thrown around; when I started reading this paper, I thought, OK, time to panic. But diving more into it, we can break it down into simpler ideas which help us understand the topic, and this is how the team at the university in Hubei developed their model. Traditional programming is: you give the computer a set of rules and some data, the computer runs the data through the rules, and it gives you an answer. Artificial intelligence is just a different way of programming computers to respond to data: this time we give them the answers instead of the rules, and the computer learns from the answers and the data and tries to pick up the rules, the rules which, applied to the data, produce those answers. This is the way the computer learns what each piece of data means: you give it answers and data, it learns a set of rules, and next time you give it some new data it applies the rules it learned on the previous data and is able to give you an answer about what the new data means. This is similar to how we humans actually learn: it's an iterative process of guessing, learning from your mistakes, and coming closer and closer to the correct answer. For this purpose you obviously need a good amount of data, so that the computer can pick things up through this iterative process.
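To make the data-plus-answers idea concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the measurements, the 30 mm threshold, and the labels are invented for illustration and are not from the paper.

```python
# Traditional programming vs. machine learning, as described above.
# All names, numbers, and labels here are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# Traditional programming: we write the rule ourselves.
def is_enlarged(pancreas_width_mm: float) -> bool:
    # Hand-coded rule: flag any pancreas wider than a fixed threshold.
    return pancreas_width_mm > 30.0

# Machine learning: we supply data plus answers, and the model
# learns the rule (its weights and biases) from them.
widths_mm = [[18.0], [22.0], [25.0], [33.0], [36.0], [41.0]]  # data
labels = [0, 0, 0, 1, 1, 1]  # answers: 1 = enlarged, 0 = normal

model = LogisticRegression()
model.fit(widths_mm, labels)

# The learned rule is now applied to data the model has never seen.
print(model.predict([[28.0], [38.0]]))  # e.g. [0 1]
```

The hand-written rule and the learned model do the same job; the difference is where the rule comes from.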
We usually divide the data into a training data set, a validation data set, and a test data set. The training data set is what the computer uses to optimize its guesses; the validation data set the computer also uses in the background as it's learning from the training data; and the test data set is the one you use once you've finished with the model, as a real-world test, if that makes sense. This is how you would, for instance, teach a model to differentiate between images of a cat and a dog. You give the computer, say, 10,000 images of cats and 10,000 images of dogs, and through the process it's able to pick out the different parts of the images that mean it's a cat or a dog, for instance the shape of the nose, the shape of the body, or how the ears are, and by putting together this overall picture it's able to give you an answer about what the image actually is. The building unit of all this is the neuron, and neurons form the neural networks. What artificial neurons basically are is mathematical formulas, functions with adjustable parameters known as the weights and biases, and this is what the computer actually learns; these are the rules the computer learns over time as you give it the data and the answers. Once it has built up a complex array of weights and biases for the different parts of the images it's analyzing, it gives you the correct answer. Artificial neurons are arranged in layers in a complex network to make up what they call the neural network, which forms the basis of machine learning and artificial intelligence, and through the application of this you can build an AI model that can learn almost anything. And this is what the authors of this paper did: applying machine learning, which is designing a neural network, and computer vision, which is programming a computer to recognize images, to recognize the patterns on a CT scan that are in keeping with acute pancreatitis.

So, coming back to the paper. The study design was a retrospective study in one centre associated with the researchers' university in Hubei. They started with 2,180 patients with suspected acute pancreatitis, and there were set exclusion criteria: pancreatitis complicated by hyperlipidemia, cystic tumors, patients with incomplete information, and patients with severe pancreatic atrophy, because those would affect the way the images were interpreted and would be a problem for the model to learn from at such an early stage. So they ended up with 1,124 patients eligible for training their model, and they split these into the three data sets you need to train an AI model, a training data set, a validation data set, and a testing data set, in a ratio of 6:1:2.
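As a rough illustration of a 6:1:2 split like the one just described, here is a minimal sketch; the patient IDs, the shuffle, and the seed are placeholder assumptions, not details from the paper.

```python
# A 6:1:2 train/validation/test split, matching the ratio described above.
# Patient IDs, shuffle, and seed are placeholders, not study details.
import random

patients = list(range(1, 1125))  # stand-ins for the 1,124 eligible cases
random.seed(42)                  # fixed seed so the split is reproducible
random.shuffle(patients)

n = len(patients)
n_train = round(n * 6 / 9)  # 6 parts of 9
n_val = round(n * 1 / 9)    # 1 part of 9

train = patients[:n_train]
val = patients[n_train:n_train + n_val]
test = patients[n_train + n_val:]  # the remaining 2 parts

print(len(train), len(val), len(test))  # roughly 749 125 250
```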
They then ran the training data set and the validation data set through their purpose-built AI model and tested it out at the end. Again, these are the radiological diagnostic criteria they abided by in training the model: an enlarged pancreas and diffuse edema within the peripancreatic region. So how do you train the model? First of all, you need to give the model answers and data, as we discussed a bit earlier when talking about how machine learning works. To do this, you start with your data, the CT images, and you get radiologists to manually segment the pancreas on the scans: radiologists look at the images and delineate the pancreas on every image that will be fed into the machine learning model. For their study this was done by two junior abdominal radiologists and one senior radiologist, using a specific piece of software called LabelMe to mark out the pancreas on the CT images. This was done only on the axial scans, and the radiologists were blind to any clinical information, so that the way they contoured the pancreas was not affected by it. Then, for the intra-observer reproducibility analysis, they had three random CTs segmented twice by one of the juniors, and one of the seniors then analyzed the two segmentations of the same images to assess the consistency of the segmentation. This is assessed through the Dice coefficient and the intersection over union, which is basically looking at the overlap: in this case, for the junior radiologist, the overlap between the first contour he marked on the pancreas on the CT images and the second one, and seeing how much difference there is. Intersection over union is just another way of looking at that, but it's a bit more precise in the way it mathematically works out how consistent the junior radiologist is in contouring the pancreas on the CT images.
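For concreteness, here is a small sketch of the two overlap metrics just mentioned, using the standard definitions of the Dice coefficient and intersection over union; the toy masks are invented, not study data.

```python
# Dice coefficient (DSC) and intersection over union (IoU) between two
# binary segmentation masks, using the standard definitions. The toy
# 4x4 masks stand in for a first and second contour of the same pancreas.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    # DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def iou(a: np.ndarray, b: np.ndarray) -> float:
    # IoU = |A ∩ B| / |A ∪ B|; stricter than Dice for partial overlap.
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union

first_contour = np.array([[0, 1, 1, 0],
                          [0, 1, 1, 0],
                          [0, 1, 1, 0],
                          [0, 0, 0, 0]], dtype=bool)
second_contour = np.array([[0, 1, 1, 0],
                           [0, 1, 1, 0],
                           [0, 1, 0, 0],
                           [0, 0, 0, 0]], dtype=bool)

print(dice(first_contour, second_contour))  # ~0.909
print(iou(first_contour, second_contour))   # ~0.833
```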
The next step is then refining the model. You've given the model the training and validation data sets, and now you run the testing data set through the model and help it refine itself to learn more accurately: you make the model learn from an experienced radiologist on the testing data set. This allows benchmarking of how well the AI is doing at the end of the whole learning process, and it helps you identify areas for improvement and assess clinical usability, because after feeding the model all the learning data you've given it, you use the testing data to see how well it's doing; and using that testing data, you can again teach the model about the mistakes it has made and help it learn from them, this time with intervention from a radiologist. So hopefully, after all of this has been done, you have a working model. The model they used was an MSA-Net model that they designed themselves. It's different from other models, which are open-source models such as U-Net; U-Nets are among the common models used for imaging and segmentation of other organs, and in the field of AI they are quite commonly used for that. But the authors of the paper thought those models were not very good for this task and would have problems identifying the pancreas on the CT images, so they came up with their own model, the MSA-Net. In building it, they came up with ways to help the model pick up acute pancreatitis where the other types of model would not manage or would be confused. After building the model, they looked at how it compared with the other models used for other organs, and they found that the MSA-Net was actually the one that performed best, on both the training data set and the validation data set. In terms of the DSC and IoU, which, as we discussed, measure the overlap between the real segmentation as drawn by the radiologist and the segmentation the AI makes of the images, you can see that the MSA-Net is most often the closest to 1, which is perfect overlap between what the radiologist draws and what the AI model picks out on the CT scans.

The second part, now that they had a model that performed better than any other open-source model available, was to test it in the real world. What that means is getting radiologists to look at the images independently. They used two different radiologists, not the ones who were involved in training the model, and got them to do the same process again, this time with the AI highlighting the contours of the pancreas on all the images. For this they used the testing data set, the patients not yet seen by the AI model: 291 patients from the initial cohort, of whom 104 had normal pancreases, 98 had acute pancreatitis, and 89 had acute pancreatitis with the complication of pancreatic ductal adenocarcinoma. So they got another junior radiologist and a senior radiologist to independently diagnose acute pancreatitis, first on their own and then with the AI segmentation, that is, with the AI highlighting the different areas of the pancreas. For the whole process they were blind to patient information, so they didn't know which patient it was; the CT images were given to them randomly, and they were blinded to the reference results, so they had no clinical information or lab readings such as amylase; they just looked at the images and worked out whether each was normal, acute pancreatitis, or possibly acute pancreatitis with complications. They looked at the CT images first, and then two weeks later looked at the CT images again, this time with the model helping them. And what they found was that for both the junior radiologist and the senior radiologist, accuracy with the AI model helping them jumped: from 89.6% to 92% for the junior doctor, and from 95% to 99% for the senior doctor. It also cut the diagnosis time: the time to identify pathology or non-pathology went from 106 seconds down to 81 seconds for the junior doctor, and from 76 down to 51 seconds for the senior doctor. In addition, if you look at specificity, which is about avoiding false positives, that is, falsely identifying healthy individuals as having disease, the use of the AI actually helped both the junior and the senior doctor improve their specificity. So overall the results were very good.
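As a quick reference for the metrics being quoted here, this is how sensitivity, specificity, and accuracy fall out of raw confusion counts; the counts below are made up for illustration, not the paper's results.

```python
# Sensitivity, specificity, and accuracy from raw confusion counts,
# matching the definitions used above (specificity = avoiding false
# positives). The counts are invented examples, not the paper's results.
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),               # true-positive rate
        "specificity": tn / (tn + fp),               # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Example: a reader calls 92 of 98 diseased scans and 98 of 104
# normal scans correctly.
print(metrics(tp=92, fp=6, tn=98, fn=6))
```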
With the assistance of AI, junior and senior doctors were better able to correctly identify pathology or a normal pancreas, diagnosis time was reduced, and false negatives and misdiagnoses were reduced too. This is just another way of looking at how they performed: in an ideal world you want 100% true positives and 0% false positives, so you would want the points to sit at that ideal corner. For the normal pancreas you can see the junior doctors here and the senior doctors very close to 1, and with the AI they get even closer; the difference is even more stark for acute pancreatitis and then for acute pancreatitis with complications. So you can see that the AI improves the performance of the junior doctors, and the AI improves the performance of the senior doctors too, bringing both closer to this perfect point. This is another way of looking at the performance of the junior doctors with the AI. How much time do we have left? OK. With just the junior doctor reading alone, of the normal cases, six were misidentified as acute pancreatitis; for acute pancreatitis itself, all of them were fine; but with the complication, this is where they really struggled, misidentifying the complication as just acute pancreatitis in a number of cases. You can see that with the AI this number comes down, the misdiagnosis of acute pancreatitis in patients who are actually healthy is much improved, and no other mistakes were made. And it's the same for the senior radiologist: for the normal cases, where one mistake was made, it now becomes zero; and where there were mistakes with the complication, that is, not properly diagnosing acute pancreatitis complicated by pancreatic ductal adenocarcinoma, it goes from 11 misdiagnosed down to only two, so a big improvement.

Now the limitations of this study. It has a lot of positives, but the main limitation is that this is just a single-centre study. To improve on this we need subsequent multicentre studies with broader datasets, because pancreatic pathology doesn't include only acute pancreatitis and complicated pancreatitis; it also includes benign tumors and pancreatic cysts, which can appear on CT. The model has not been trained on any of these other pancreatic pathologies that could present on CT, so it would not be suitable for real-world application yet, because if it did see those, it would misidentify them. Through multicentre studies and different datasets you could also get external validation from different population groups and different demographics. And as you could see previously, the data sets for acute pancreatitis and complicated acute pancreatitis were actually smaller than the normal one, which just means we need to continue training the model with more images to make sure it's balancing errors properly for real-world application, because there might still be instances where it makes mistakes in the way it delineates the pancreas. And then, as with any AI model, we don't actually know how the model is making its decisions.
We just know that it's making a set of rules, but we can't go and ask the model which set of rules it has come up with. And there's the issue of overfitting, which is very common with AI models and which you have to think about: you give the model the information you want it to learn, and as a result it doesn't learn any related or similar information that you haven't fed it. In this case, that means giving it only acute pancreatitis, normal, or complicated, but not giving it, for instance, pancreases with benign tumors or pancreatic cysts. So, in conclusion, the conclusion from this paper is that the MSA-Net model the team developed showed the best performance among seven widely available open-source models; that the model's performance at segmentation and measurement of the pancreas on CT scans closely mimicked what radiologists would draw on a CT image they reviewed; and that when testing the AI model, using it to help a radiologist diagnose whether a scan shows a normal pancreas, acute pancreatitis, or complicated acute pancreatitis, the model actually reduced the time to diagnosis and improved diagnostic accuracy for both junior and senior radiologists, even for the complex diagnosis of acute pancreatitis with pancreatic ductal adenocarcinoma, which is often misdiagnosed by radiologists, both senior and junior, when they read independently. And we don't really need to go through this; it's just the architecture they used. Yeah, that should be it.

Thank you, Jerry, excellent presentation. Really, really good, and it shows where AI is going. Does anyone have any comments? Please put them in the chat box, happy to hear from everyone. I don't think we can hear you. Sorry, are you talking? I think the internet connection has gone; he's joining remotely, so it might be an issue. One point I'd mention is that overfitting is a big thing, especially in AI, where bias becomes a big issue when you're thinking at a population level. In terms of building the model, what would you think about that?

Sorry, can you say that again? I didn't catch all of it.

What do you think about the overfitting in this model? It can have quite a big impact, especially when you're thinking about the use of AI in a wider population.

Yeah, so the authors are actually quite straightforward about overfitting. In this case, they do say in the limitations at the end of the paper that this is only a single-centre study and is not fit for clinical use yet. But they showed that the model they developed, at least for the pancreas, works better than the other models currently in use for other organs. The U-Net model, which is one of the open-source models that is widely used and a common topic of research, is well regarded for other organs, but the authors thought that because of the intricacies of pancreatic diagnosis there would be errors, especially with regard to the gray segments you see at the border of the pancreas and the peripancreatic fat stranding. This is the area they felt the U-Net model, one of the better models out there, would struggle with.
So the way they built their own model, the MSA-Net, was to incorporate sub-segments and change some bits of the U-Net model to overcome the struggles the U-Net would have interpreting the gray areas around the pancreas when it is pathological.

Yeah, that makes sense. So, in terms of overfitting?

To answer your question on overfitting: they've used normal pancreases and two pathologies only, with only about 1,000 images, which is nowhere near what is usually needed for an AI model, and it's probably a very homogeneous demographic of people too, because it is all one region. So, as they say in the limitations, this first needs to be carried out on other images, including images of pancreases with cysts, for instance, and then in different centres all over the world, to see whether the model is able to pick things up and give out similar results in different areas and on different population groups.

Yeah, that makes sense. I think it's useful as an adjunct for a radiologist in picking up pancreatitis, but when it comes to the wider picture it's quite limited. That's the case with AI at the moment, but let's see where it goes. I think we'll wrap up there. Thank you to both of you for presenting excellent presentations, and thanks to everyone for joining. I know it's been an hour, but I think we've all learned a lot and it's gone quickly. Feedback forms will be sent to everyone's email addresses, so please do fill them out; it helps our presenters and the team know what's going well and what we can work on, and the presenters have put a lot of effort and time into these presentations, as you can see, so we'd really appreciate it from everyone. Just to add, we have another event next month, on the 20th of September, so we look forward to everyone joining us then. We'll leave it there, but please do fill out the feedback forms; without the presenters we can't keep this going, and it is important for the presenters to know what went well and to shape their future plans. Thank you, everyone. Thank you. Thank you.