Learn more about how Artificial Intelligence is assisting surgery
OSCAR Webinar 2: AI-Assisted Surgery
Summary
During this OSCAR webinar, three speakers discuss AI in surgical procedures. Mr. Hanny Marcus, consultant neurosurgeon at the National Hospital for Neurology and Neurosurgery, leads the discussion, focusing on his clinical work in endoscopic pituitary surgery. He traces the evolution of neurosurgery from its foundations to today's endoscopic pituitary surgery, in which the move to a digital operative view makes it possible to advance surgery with AI, and he illustrates his points with images from his own practice. Mr. Marcus also explains that his team has developed the first core outcome set, the first patient-reported outcome measure (PROM) and the first workflow consensus for pituitary surgery, and has led foundational work establishing international agreement on operative video steps, the instruments used, potential technical errors and potential adverse events. This groundwork has enabled the development of AI-based predictive models for surgery. However, Mr. Marcus acknowledges the difficulties of data collection, given many surgeons' reluctance to share their data and the political will needed to persuade them to participate in data collection efforts.
Description
Learning objectives
- Understand the history and evolution of neurosurgery, particularly in relation to pituitary surgery, and the transition from direct surgical views to digital representations.
- Gain knowledge of the potential applications of AI and digital technologies in pituitary surgery, with a focus on enhancing imaging and operative video footage.
- Recognize the challenges associated with data collection in operative videos and learn about solutions to overcome these, such as structured data and consensus agreements.
- Appreciate the importance of collaboration in advancing the application of AI in neurosurgery, exploring how professional societies and research groups can drive progress in this area.
- Understand the process of, and advances in, developing predictive models for annotating operative videos, including the identification of relevant surgical anatomy, instrument usage, and operative steps.
Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
Great. So welcome everyone to the second OSCAR webinar that we're hosting to let you all know about OSCAR, our national collaborative on surgical video recording. I'm really excited to welcome our three speakers today; they're all really excellent and inspiring individuals, and they're going to be talking to us about AI-assisted surgery. I will introduce them as we go along, and there's plenty of time for question and answer, so if you do have any questions for us, please let us know in the chat and we will address them once the speakers have finished. As you can see, we've got Mr Hanny Marcus, Ms Katie Sigs and Dr Pietro Mascagni, and it's my pleasure to start off with Mr Hanny Marcus. He's a consultant neurosurgeon at the National Hospital for Neurology and Neurosurgery, an honorary associate professor at the UCL Queen Square Institute of Neurology, and he's also the CMO of Pando Surgical. So, over to you, Mr Marcus. Thank you so much, Andrew, for that kind introduction; it's a real pleasure to be here. I will try to screen share and get this to work. Could you confirm that you can see the screen, Andrew? Not just yet for me, but I am on hospital Wi-Fi, so I'll just give it a second; I think I had shared the wrong window. I'll try again. Any joy? Does it look like it's doing anything? Not just yet; is there a "present now" button? There we go. OK, good, perfect. It's a truism that the more technical the talk, the more likely there is to be an IT failure, so I'm glad; it's a positive reflection on the talk itself. As you say, Andrew, I'm a consultant neurosurgeon at the National Hospital for Neurology and Neurosurgery, and over the next 20 minutes or so, with your permission, I want to talk about AI and, more broadly, digital surgery as applied to my own clinical focus, which is endoscopic pituitary surgery. Now, there is perhaps no better place to begin this story than at the start of the specialty of neurosurgery, which was with Victor Horsley at the National Hospital for Neurology and Neurosurgery. He was the first neurosurgeon appointed in the world, and that was in the late 19th century. He would typically, as you can see in the left-most image, perform very large operations through the skull, removing half the skull, in fact, to expose the brain and then dissecting through the lobes of the brain to reach the central region where these tumours arise, in the sella. Over the subsequent decades the technique evolved through to microsurgery; you can see the surgeon in the centre image using a microscope. Here, if you like, the advance is that the information the surgeon receives is much better, in that the microscope allows for very good illumination and magnification of the field, and the instruments are also more advanced: the micro-instruments that he in fact developed. In the last image we see a further evolution of the imaging technology. At this point we move from using a microscope to the development of keyhole approaches with an endoscope, which allow a really wide-angle view through very narrow corridors, including the nose, which is now the preferred approach for the pituitary in most cases.
Now, if we stop for a moment to reflect, we recognise that, now that we're using an endoscope, we're no longer looking at the real tissue of the brain and nervous system; instead, we're looking at a digital representation, and that is what allows us to advance surgery. This is an example of the sort of technology all of us carry around without thinking about it, on our phones. This is a picture I took of my beautiful daughter Lily. Phones automatically have very good facial recognition software; they will track faces over time, adjust the focus to those faces, and apply a whole host of other very complex algorithms to enhance the images. And now that we are recording operative videos the same way we record personal videos, the broad application of AI to those videos should be entirely possible and should enhance our outcomes. The challenge is that, unlike personal videos, the data of surgery is really opaque, because there is very little agreement on anything that surgeons do, and therefore it's very hard to train algorithms to make those predictions. Now, the very first step in trying to develop AI for live pituitary surgery, or any type of surgery, is thinking of a way to structure the data. If you take a step back and think about trying to give the surgeon advice on what the right thing to do is at any given time, the minimum dataset you need is an agreement on what the steps of the operation are (what is the surgeon doing at any given time?) and whether it was the right thing to do (what are the outcomes of that surgery?). I have spent a few years focusing on this task of structuring data, and through my roles with pituitary societies and foundations we've led some foundational work. First, on consensus agreement on the structure of an operative video: we have agreed internationally, and it is very difficult to get consultants to agree, on what steps make up an endoscopic pituitary operation, which instruments are used in each step, what technical errors are possible in each step, and what the adverse event would be if those technical errors occurred. We've done that for every part of the endoscopic pituitary operation, with lots of core steps and lots of optional steps, so we now have a language to describe operative videos (a purely illustrative sketch of such a structured annotation follows at the end of this paragraph). On the outcome side, we have just recently completed work developing a core outcome set for patients undergoing pituitary surgery and also a patient-reported outcome measure for patients undergoing pituitary surgery, the first of their kind. So we've done a lot of foundational work, developing the first core outcome set, the first PROM and the first workflow consensus, and now that we have all of this, we can talk in a language we all agree on about what happened and whether it was the right thing to do. The next step, having decided a common language and structure for the data, is to get the data, and this too is tricky; not necessarily because of the hardware, or even the ethics. The real challenge is political buy-in from surgeons and groups. Surgeons can be very possessive over their data. I think a bit of that comes from insecurity, perhaps, about not wanting their operations to be exposed, which is very natural, and perhaps from medico-legal exposure if they worry something went wrong.
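As a purely illustrative aside, and not the published consensus schema itself (the step names, instruments and field names below are invented for the example), a structured annotation of the kind Mr Marcus describes, pairing each operative step with its permitted instruments, possible technical errors and associated adverse events, might be sketched in Python as follows:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class OperativeStep:
    """One consensus-defined step of an endoscopic pituitary operation.

    The concrete values used below are illustrative placeholders,
    not the actual international consensus content.
    """
    name: str
    core: bool                    # core step vs optional step
    instruments: List[str]        # instruments permitted in this step
    technical_errors: List[str]   # errors that are possible in this step
    adverse_events: List[str]     # what could follow if an error occurs


@dataclass
class AnnotatedSegment:
    """A time segment of an operative video labelled with a step."""
    start_s: float
    end_s: float
    step: OperativeStep
    notes: str = ""


# Hypothetical example of the kind of structure such a consensus enables.
sellar_opening = OperativeStep(
    name="sellar bone removal",
    core=True,
    instruments=["drill", "Kerrison rongeur"],
    technical_errors=["excessive lateral drilling"],
    adverse_events=["carotid artery injury"],
)

segment = AnnotatedSegment(start_s=1520.0, end_s=1655.0, step=sellar_opening)
print(f"{segment.step.name}: {segment.end_s - segment.start_s:.0f} s")
```

With every video annotated against a shared schema like this, the "common language" the talk describes becomes machine-readable, which is what makes model training possible.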
And so it takes a great deal of political will to persuade surgeons to collaborate and to share data of any sort, but particularly videos. So I then focused, with my team, particularly Dan K, who led a lot of this work, on creating collaboratives within neurosurgery, and particularly within pituitary surgery, to work together and achieve those objectives. We created a society, the British Neuroendoscopy Society, to help neurosurgeons interested in endoscopy, particularly for pituitaries but also for other things, to work together. And as part of that, we ran the first and largest audit of pituitary surgery in the world. That ran over six months, with 30 centres across the UK contributing data, and we had almost 1,000 patients in the end, of which about 700 were pituitaries. In those studies we simply looked at what was done and what the outcome was. So, to take a step back and reiterate: first we created structures for how we were going to annotate the data, then we collected the data, and the last step, which I would not say is necessarily the easiest, but is certainly not as hard as people sometimes make out, is creating the predictive models. In essence, for any operative video there is a triad of things that can be annotated at a broad level: the anatomy of the operation, the instruments being used, and the steps of the operation. You can go more granular and identify the particular gestures and motions of instruments and what they are doing to tissue, but the broadest triad of factors is anatomy, instruments and steps. We've built the first models in neurosurgery, and the first models in pituitary surgery, to describe each of those things. On the top left you can see automated annotation of relevant surgical anatomy: the purple, right in the centre of the field, is the pituitary; the yellow below is the clival recess; and you can see the carotid and optic protuberances on either side. It's very difficult anatomy to recognise if you're not used to it, which is why we developed these models to help. Those models have already gone through several iterations; we've published two or three models over time, each one improving on the performance of the one before and becoming increasingly fast, so it now runs in real time. The figure on the bottom left is instrument tracking. This is pretty easy, actually, because metallic instruments look very obvious to a machine, but figuring out exactly which instrument it is can be harder, because some instruments can look very similar indeed. The figure on the right is a video which I'll play. This is by Medtronic Touch Surgery, who I think sponsored this; in fact it was before they were part of Medtronic, when Touch Surgery was still a start-up, so this was very early work. The technical lead at Touch Surgery is Dan Stoyanov, a fantastic guy and a friend of mine, and he's also the engineering collaborator at UCL, where this research takes place. So the models are distinct, but in practice the models we have at UCL are very similar to this illustrative video you're seeing here. Just to explain what's going on: this is the part of the operation where I'm trying to remove the bone around the sella, then I will cut the dura open, and then I will remove the tumour.
On the top left you'll see the overall phase of the operation; the sellar phase is the part of the operation to do with the pituitary. On the top right is the particular step, so whether that is, for example, removing the bone, opening the dura or removing the tumour. I've been playing it while I've been chatting, and perhaps you've gathered already that, by and large, the model is pretty good: as we speak it has predicted things for the most part correctly, but it does make a mistake, and I'll play it a second time to see if you can spot it. Andrew, while I'm doing this, can I ask, is there a way people can answer questions if I ask them? Yeah, absolutely; I'll keep an eye on the chat, and it'd be great to hear from our audience. And you can see the video? Yes, it's working very well. Good. So the question I'm asking is: what mistake is being made by the computer vision algorithm here? You can shout it or type it. It's very satisfying, watching myself do a good operation and watching tumour come out, pituitary tumours in particular because they're so soft. So what was the consensus, everyone? What was the mistake made by the AI in this situation? It's very instructive, so feel free to put some answers in the chat. I saw in the chat that "identify and excise tumour" came up a little bit before the excision itself. That's exactly right, and the same goes for the durotomy: the computer thinks I'm going to do a durotomy maybe ten seconds before I reach for a scalpel to open up the dura. In other words, it's anticipating those next steps based on what it can see in the image; it sees enough dura exposed and enough bone removed that it expects me to transition and switch instrument, which is a very good mistake to make, because it probably suggests the AI has a pretty good understanding of the surgical field in these operations. We've now trained this, as I say, on perhaps 500 videos. The last point I wanted to make here is that we've already started to use this AI in an offline way for coaching. We've recently published work showing that, in a six-month intervention in which we used AI at our hospital at Queen Square to provide offline coaching to surgeons, we saw a really dramatic improvement in the quality of surgery as measured by OSATS. For those of you not familiar with it, OSATS stands for Objective Structured Assessment of Technical Skills, and the idea is that surgeons blindly rate an operative video on things like economy of movement, instrument choice and knowledge of the operation. On the x-axis you can see the case number; the blue dots represent operations before the coaching and the red dots operations after the coaching. Broadly speaking, a one is a terrible operation from someone who doesn't know what they're doing; a five is a perfect operation, so it's very hard to get fives; and a four is an operation that's pretty good, I would say consultant level, but maybe not perfect. We did a good job before the coaching, but after the coaching you can see those numbers really tighten up, and the step change in the average is notable. That has actually already corresponded to an improvement in outcome.
So the six-month coaching period was associated with a significant reduction in pituitary dysfunction after surgery; in other words, surgeons were better at protecting the function of the pituitary gland, and that in turn corresponded to a reduced length of stay. So the AI in this context is not abstract; it has already had an impact on real patients, who have benefited. The goal, of course, is to do better. The goal, and this is a picture from the Touch Surgery platform when it was initially devised a few years back, is to do something like this for pituitary surgery, where in real time you can provide decision support: identifying anatomy, suggesting the next steps, the instruments the surgeon might need, how much time is left in the operating room, things like that. We've actually done a very good job, I think, in the lab of putting some of this together, but it hasn't quite reached clinical trials for the pituitary use case, so that's our main focus at the moment: getting clinical studies of live AI workflow support under way. Hopefully, in the not too distant future, we'll be able to go one step better and actually have the AI support very high-level decisions. This, I think, is the real mark of an expert surgeon in any domain. If you look at pituitary surgery, the most serious complication that can happen is something like a carotid injury, and that is ten times more common in a surgeon who does few of these operations than in a surgeon who does many. The most important factor there is very high-level decision-making: how wide you make your exposure, how aggressive you are in trying to remove tumour, how tightly you close at the end of the case. All of those decisions are trade-offs. If you are very aggressive at removing tumour, you may remove more tumour, but there is greater risk, particularly of pituitary dysfunction. If you repair every case at the end with multilayer constructs, you may prevent CSF leaks, but you'll have a lot more nasal morbidity and a much longer case duration. And, most importantly, if you have a very wide exposure you make your life easier in terms of getting the tumour out, but you run the risk of injuring the cavernous sinus, the carotids, maybe even the optic nerves. So you have to know your anatomy very well, and those very high-level decisions are really important predictors of surgical outcome. Now, this is some work we published pretty recently that combines some of the computer vision work you've just seen with a large language model for visual question answering, so you can ask the computer vision model, in the same way you would an expert surgeon next to you, what is going on. The questions here are straightforward ones based directly on the models, but the hope is that in the future you will be able to ask an AI model what you would ask an expert: Have I done enough? Do you think this is an adequate closure? Did you see a CSF leak? The kinds of questions where you would benefit from a second intelligence in the room. Now, this is all great, and I've touched on some very early translation of offline surgical use, but what about real-time AI? How are we going to get that into patients and disseminate it? This may seem obvious, because lots of you in the audience listening may say this is a solved problem.
I learned in medical school that there are phase one studies, which are in vitro and in vivo in animals, culminating in first-in-human work; phase two, which are early clinical trials; phase three, which are randomised controlled studies and regulatory approvals; and phase four, which is surveillance. So this is a solved problem: why are we talking about it now? Well, the answer is that this framework is inappropriate for a complex intervention like surgery, it is inappropriate for a complex intervention like AI, and it is really inappropriate for a complex intervention like surgical AI. So we have to think about something a little more nuanced, and to give you a specific example from my own work as to why we need more: this is some of the computer vision work you've just seen, on the left. In this particular study, we were trying to get people to draw around the boundaries of the pituitary, and, just like I said, that's a really important task: if you get it right you really help the patient, and if you get it wrong you can kill the patient by injuring the carotid. We put the patients' scans and videos up, and surgeons did this either without support, using their own best guess, or with the AI suggesting where it thought things were, and we compared the performance of the surgeons with AI support to those without. The finding, perhaps obviously, was that every surgeon improved their performance, but those who were the most experienced gained the least: novices gained a lot, experts gained very little, but everyone got better. The next video, on the right, is another study we did in our group, just published in Annals of Surgery, and here the findings are quite different. The set-up is similar: we're trying to find an aneurysm during aneurysm surgery. So we're doing brain surgery trying to find this bomb that has just gone off, and if you see the bomb you have to be very careful and vigilant, otherwise it will rupture again and the patient will die. So how do you look for it, and where do you look? On the face of it, this is a very similar study design. We asked everyone in the theatre team, not just surgeons but also anaesthetists and nurses: do you see an aneurysm, yes or no? We asked them across a whole host of different videos, and then we gave them the AI decision support saying the AI thinks yes, or the AI thinks no, and the AI was able to indicate, through an attention map which you can see in the right-hand column, what it was looking at when it made that decision. And although everyone improved their performance, the striking finding, completely and utterly contradictory to the earlier finding from the pituitary computer vision work, was that those with the most experience gained the most: a consultant neurosurgeon's performance jumped 20%, and a theatre nurse with very little experience jumped maybe 5%. So here experience helps. So what on earth is going on, that there is this incredible discordance between these two studies looking at AI and how it can change clinical judgment? There are lots of factors, but could anyone suggest some? Andrew, may I remind you to read out the chat comments, as I don't want to miss them. Yeah, absolutely. For our audience, please do comment some of your thoughts.
I think this is a really interesting distinction between the outcomes of using these two models. Well, I'll just provide some context. Again, these are two AI models that came from my group, with very similar algorithms under the hood and very similar study designs to evaluate them: in both cases we were looking at clinician decision-making with and without AI support, and looking to see whether the AI helped. So on the face of it they are very similar, but there are actually subtle differences in the way these models were evaluated. The model on the left is not really explainable, and we didn't tell clinicians anything about the model; we just said, this is the AI. For the model on the right, we did report the algorithm's overall performance on a different set of aneurysm videos, so we said, on average, this is how it performs, and with that attention map there is a clue as to what the AI is using to make its decision. Although it's not truly explainable AI, it is a bit explainable. So with that information, why do you think the experts were so dismissive of the AI in the pituitary case (not me, I should add, because I was one of the experts) but so accepting of it in the aneurysm surgery? In a word, any takers? Andrew? We haven't got any comments yet, but I'll keep an eye out for them. How many people are in the group, if I may ask? We have about 20 at the moment. Perfect. It's a real shame I don't have names, because I would randomly pick people. I'll say it, in the context of an online presentation: it's trust. This is actually one of the many factors that make AI, even well-performing AI, very difficult to assess in the real world. It doesn't matter how well your AI performs on a training dataset in a lab; in real life, you have to make sure that the dataset you trained on is applicable to the data you're putting into the model, so is it representative? But you are also really concerned about whether the AI is actually trusted by the surgeon, or whether they just ignore it, and there are all sorts of issues around the human-computer interface, and whether, for example, it results in a lot of excess cognitive load for the surgeon. So it's very complex, and I think not considering these factors would make it very difficult to make sense of AI in the real world. Now, there are in fact some very good frameworks for evaluating AI preclinically; you can see those here, and things like TRIPOD-AI and STARD-AI are really good for describing models on training datasets and the like. Equally, there are very good reporting guidelines for how you assess AI in a stage-three randomised controlled study, namely the CONSORT-AI and SPIRIT-AI guidelines. So that's all fair game. But actually, most of what we care about is in between: early clinical evaluation. And this, I think, is for surgery by far and away the most common category of AI we're going to see move to patients. Most of the AI in the literature in use in live surgery, overwhelmingly, maybe even exclusively, consists of small single-centre, or maybe one- or two-centre, studies that show proof of concept and no more. And that is where you really have to be most careful.
That is where the IDEAL Collaboration and DECIDE-AI, the guidelines for early-stage AI evaluation created with IDEAL, come to the fore. And I think at that point I will stop, say thank you to the many collaborators and clinicians who work with me and to my funders, and take some questions. Thank you very much. Thank you very much, Mr Marcus, that was a really excellent presentation; I really enjoyed that. While we're waiting for some questions to come in from the chat, I've got a couple, if that's OK. Yeah, great. So, as you were alluding to, your routine practice in your hospital is to record all of your operations; is that correct? Yeah, my personal practice has been to record all my operations since I was a trainee, which at the time, I think six or seven years ago, was unusual, but is now, I think, maybe more common. I did it because I've been interested in AI for a long time and because I'd seen mentors in other countries record their videos, but it was incredibly helpful as a surgical trainee: the process of recording my operations, then thinking about how to break down the steps, and forcing myself to annotate the instruments and steps for my AI research, I'm sure, reduced my learning curve. So I benefited from that clinically as much as academically. Brilliant. Yeah, and it was great to see you present data on how training using video and AI led to demonstrable improvement in patient outcomes as well; it was really fantastic data to see. How does that translate to your routine practice, in terms of the discussions you're having with patients, consenting for video recording, et cetera, when you see them in clinic or ahead of their operation? So, because we've been doing this for a little while, I should say there are two distinct processes in my hospital for neurosurgery. The first is that I am the overarching governance lead for my hospital as well as doing this research, and as governance lead, I'm biased, but I think basically every operation should be recorded, because it is the best way to assure the quality of an operation, and I'm convinced that the simple fact of recording improves the quality of the operation. Katie had mentioned the Hawthorne effect, and I have no doubt whatsoever that recording videos improves performance, on the basis that people subconsciously or consciously up their game. So as governance lead I really want this, and we have registered, within our governance unit, a formal hospital service evaluation where we expect and hope that patients have videos recorded as part of our quality control in neurosurgery: we want to do the best operations in the world, and this is how we do that. And we ask patients for written informed consent, on a separate video-sharing consent form, for every single patient whose videos we record. That's not really mandated, but because I'm convinced that one day this could blow up in my face, and my hypochondriac nature is such, I would like patients to have given written informed consent if their videos are being uploaded to a cloud or used in teaching or the like, even though they are fully anonymised at that point. And that is my standard practice. Pituitary surgery is my own specialty.
For that, I have formal research ethics too, so it's not just governance and service evaluation: those patients give written informed consent under ethics to share all their data, and it's pretty broad-reaching consent, which includes using the data commercially, for example, which I wouldn't necessarily say is the case for other patients. I have to say that historically almost nobody would say no; some patients would be surprised we were asking, because they would expect it. More recently that has maybe changed a little. Over the last year or two I've noticed some patients becoming a little more reluctant to share their data. It's still not common, but maybe 5 to 10% of patients might politely decline, which is fine, and I think that justifies asking for explicit written consent: if you have asked and they've said no, that's fine, but if you've taken it already without asking, then you're acting against their explicit consent. Yeah, absolutely. Well, thank you very much for speaking to your experience on that, because it's really valuable. And I'd just add that Mr Marcus, at the end of his talk, mentioned IDEAL and DECIDE-AI, and those are really important things for anyone working in this area to be aware of; we definitely recommend going and reading them. Thank you very much. So, our next speaker is Ms Katie Sigs. She's an ST7 in Wessex, she's just finishing a PhD in advanced endoscopy with a focus on artificial intelligence in colonoscopy, and she was also the Dukes' Club endoscopy rep for the last two years. So, over to you now, Katie. Thanks, Andrew. So, can you see my screen? Yeah, perfect, looks good. So Andrew's asked me to talk about AI and endoscopy tonight, which is what I've spent the last three years of my research doing. Just to catch you quickly, it's in presenter mode rather than full screen, sorry. Oh, which one do I need to do? I think if you share your whole window and then just go back to slideshow, hopefully it will come up. Hang on, it might be because I've got two monitors; I'll unplug that one. Does that one work now? Yes, perfect. Sorry about that. No, it's fine; that's the problem with having two screens. So Andrew's asked me to talk about AI and endoscopy for you this evening, which I have to say is a pretty enormous topic, because AI in endoscopy has exploded over the last decade or so. I'm going to try to focus on the applications that are most common and that people might come across in their clinical practice, and we'll do a whistle-stop tour of what's out there already. I don't have any disclosures, but I'd just like to thank my supervisor and my former research colleague Hane, who have helped me with some of the videos for this evening. So, as many people might have heard already, artificial intelligence is an umbrella term that describes lots of different areas. We can broadly define AI as a computer being able to perform a task that a human would normally do, but within AI you have subsets. The next level down is machine learning, which is where you have automated learning on datasets, but the feature extraction and the training are very much done by a human,
and it's very dependent on how much data you put in and on the quality of that data and of the annotation. The next level down is deep learning, which uses different architectures, convolutional neural networks, that automatically extract the features in an unsupervised fashion, which is why people sometimes refer to deep learning as black-box thinking. Just to demonstrate this in a diagram: the top diagram shows how machine learning works. If you wanted to train an algorithm to detect a polyp with machine learning, a human would sit there and individually annotate the features you want the machine to detect; for a polyp, you might train it to look at the vessels, the pit patterns, the crypts and the morphology of the polyp, you'd individually annotate all these images, and then the machine learning algorithm would classify them and give you the output "polyp". In comparison, deep learning involves feeding the algorithm thousands upon thousands of images and giving it the ground truth. For a picture you might say there is a polyp in this image, or you might say this is an adenoma, but you don't tell it exactly which features to pick out when it learns what that is. You give it thousands and thousands of images, often hundreds of thousands, and the algorithm does the feature extraction and classification itself to give you the output of a polyp. When we talk about AI in endoscopy, this generally refers to deep learning, or what also gets called computer vision. We've already touched on it, I'm sure, but a bit of background about how these algorithms are trained. Just to caveat: for my research I was very much on the clinical side, and we worked with some very clever computer engineers who did this. When you are training an AI algorithm, you can broadly separate your data into a training dataset, a test dataset and a validation dataset. Training is where you feed the algorithm lots and lots of images that you've annotated with the ground truth; you then use a separate test dataset to look at the performance of the algorithm; you can then change the parameters, fine-tune the algorithm and train again, repeating the process with new test datasets that the algorithm has never seen before. Then, when you think you've got a product that is good, you do your validation, which might be on images or videos, and again these need to be images and videos the algorithm has never seen before, because you don't want it to have learned from them in the past. From that you end up with an algorithm that you think is going to perform well (a minimal, purely illustrative sketch of this kind of data split follows at the end of this paragraph). So I'll start off by talking about AI in colonoscopy, because this is the area where most people have probably come across AI in endoscopy. The most common application is something called computer-aided detection, which people call CADe for short. The first thing to say is that there are multiple commercially available systems already on the market and in use in departments, so many of you might have come across these: Medtronic, Fuji, Olympus and Pentax all have their own CADe systems, and Odin Vision and NEC, which many of you might not have heard of, also have CADe systems.
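As a purely illustrative aside, and not any vendor's actual pipeline, the training/test/validation separation Katie describes can be sketched in a few lines of Python; the file names and label format here are invented for the example:

```python
import random
from typing import List, Tuple


def split_dataset(items: List[Tuple[str, str]],
                  train_frac: float = 0.7,
                  test_frac: float = 0.15,
                  seed: int = 42):
    """Split annotated data into training, test and validation sets.

    Naming follows the talk: the 'test' set is used while tuning the model,
    and the 'validation' set is held back for the final check on data the
    algorithm has never seen. In practice the split is usually done per
    patient or per video, not per frame, to avoid data leakage.
    """
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_test = int(len(shuffled) * test_frac)
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    validation = shuffled[n_train + n_test:]
    return train, test, validation


# Hypothetical annotated frames: (image file, ground-truth label).
frames = [(f"frame_{i:05d}.png", "polyp" if i % 7 == 0 else "no_polyp")
          for i in range(1000)]

train, test, validation = split_dataset(frames)
print(len(train), len(test), len(validation))  # 700 150 150
```

The key property, as described in the talk, is simply that final evaluation happens on data the model has never seen during training or tuning.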
These are CE marked and commercially available, and here are some videos of what they look like in practice. The top-left video is Pentax Discovery, the middle video on the top is Olympus ENDO-AID, the bottom left is GI Genius, the bottom middle is CAD EYE from Fuji, and on the right is WISE VISION from NEC. The way they generally all work is that they come up with bounding boxes around what they think are polyps, and I think it's really quite impressive, some of the time, how subtle the lesions are that they manage to detect. As I said, these are commercially available, and to date there have been over 20 randomised controlled trials, with nearly 20,000 patients, looking at the efficacy of CADe in real-time practice. You can see here that this meta-analysis, which came out last year, showed that with the addition of CADe the adenoma detection rate, that is, the proportion of patients who have at least one adenomatous polyp, went up by almost 10%. And you might say, why is this clinically relevant? Well, this is a landmark paper from the New England Journal of Medicine ten years ago that showed that every 1% increase in adenoma detection rate led to a 3% decrease in the risk of colorectal cancer. So we know that adenoma detection rate is directly related to the risk of interval cancers and of developing cancer, and things that can improve adenoma detection rate should, I think, definitely become part of our standard practice. Moving on from computer-aided detection, the other common application is computer-aided diagnosis, also called CADx. There are slightly fewer of these available on the market, I think about four of them now, and you can see here that the first part is the detection box, and then they give you a diagnosis, which often will just be neoplastic or non-neoplastic, so adenoma or non-adenoma; they all work in very similar ways. And why is optical diagnosis important? A bit like adenoma detection rate, we know these metrics really do reflect performance in endoscopy. There are two concepts in endoscopy that have been gathering attention over the last 10 or 15 years: the resect-and-discard strategy and the diagnose-and-leave strategy. That means that for small polyps, less than 10 millimetres, if you can diagnose an adenoma with high confidence, you can remove it and not send it for histology, because we know the risk of cancer in a small polyp is extremely low, and we think that with optical diagnosis you can still work out the correct surveillance interval. The diagnose-and-leave strategy is for diminutive, so less than five millimetre, rectosigmoid hyperplastic polyps: you can diagnose them as hyperplastic with no malignant potential and leave them in situ. The DISCARD trial was run from St Mark's back in 2009; it included nearly 300 polyps and showed that optical diagnosis, that is, the in vivo assessment by the endoscopist of what type of polyp it is, had a 94% sensitivity and a nearly 90% specificity for the correct diagnosis. From that they did the DISCARD 2 trial, which was designed to test the external validity of this, looking not just at an expert centre but at a wide range of centres, including smaller hospitals.
And actually they found that, in the general endoscopist population and in a much larger sample, the sensitivity was only 83%. So from DISCARD 2 the conclusion, essentially, was that we weren't ready to implement these resect-and-discard and diagnose-and-leave strategies, because we weren't quite reaching the performance metrics needed to do so safely. So it could be that CADx can help endoscopists improve their performance. It has been acknowledged by various societies that before we can implement these strategies we need to make sure we are reaching a certain standard. This is from the American Society for Gastrointestinal Endoscopy: they created PIVI, which stands for Preservation and Incorporation of Valuable endoscopic Innovations, and it essentially sets out two criteria that technology must meet before we implement resect-and-discard and diagnose-and-leave: a greater than 90% negative predictive value, and more than 90% concordance with the predicted surveillance intervals. So when we do trials looking at CADx, it is generally benchmarked against these PIVI criteria. And just earlier this year there was this meta-analysis looking at the performance of CADx, from Cesare Hassan and his colleagues in Milan. It's a very nice study, because they've made it very clinically orientated. Only four studies were included, because, as I said, CADx has fewer trials than CADe, but they split the results into what they call benefits and harms and related those back to the patient. The benefit is the proportion of patients in whom you could avoid polypectomy, because we know polypectomy is not without risk; the harms are any polyps incorrectly predicted to be non-neoplastic, so that you are potentially leaving a polyp with malignant potential in situ. Interestingly, this meta-analysis showed that, from the trials done so far, there was no significant difference in the risk ratio comparing benefits and harms between the control and intervention groups; you can see the confidence interval here crosses one. So although CADx does seem to perform very well, and all of the video- and image-based studies have shown very good performance, so far the research hasn't shown that it should be implemented yet. This is some of the work I did for my PhD, on computer-aided detection in inflammatory bowel disease. None of the commercially available CADe systems is currently approved for use in IBD, so we set about developing a dedicated CADe algorithm for IBD. This is our work published in Gut earlier this year, which showed, as a proof of concept, that it works very well: when we compared the same images against a generic, non-dedicated CADe, the performance was significantly better when the algorithm had been trained specifically on patients with IBD. You can see the pictures on the right; I wonder whether anyone can see anything in them.
But when I show you the pictures with the addition of the CADe-IBD algorithm, there are some very subtle polyps that it manages to detect, even in the presence of scarring and background mucosal changes. And this is just an example of it in real time: the video on the left is without AI, and the video on the right is with the CADe-IBD algorithm; you can see it's quite a subtle polyp, and there's that green bounding box detecting it nicely. Then, similar to what the last talk mentioned, we compared how endoscopists perform when you give them CADe. This was a study we did, again as part of my research, which showed that when general endoscopists, those who are less experienced, have the addition of the CADe-IBD algorithm, their sensitivity improves significantly; interestingly, experts also improved, although to a lesser degree. Similarly, in the field of IBD there is quite a lot of research into using AI to predict levels of mucosal inflammation, which is probably going to be quite important in the future for things like drug trials: if you're testing a new drug for IBD, you want a very standardised way of assessing mucosal inflammation, which is frequently used as one of the outcomes in IBD drug trials, and AI may well standardise that. Marietta Iacucci has also done a lot of work on using AI to predict histological remission in ulcerative colitis: can you predict, at the time of endoscopy, who is in remission versus who may still have histological inflammation? So again, IBD has a lot of AI work going on in the background. Moving up to the upper GI tract now, I'm going to talk a little bit about upper GI endoscopy as well. Similar to the statistics linking adenoma detection rate and interval cancers, we know that of people who develop an upper GI cancer, 10% will have had a normal OGD within the previous three years. This root cause analysis, published in Endoscopy, looked at the factors that lead to these post-endoscopy upper gastrointestinal cancers and found that a significant proportion were associated with conditions that have a premalignant cause, things like Barrett's oesophagus or chronic atrophic gastritis. These patients have surveillance, but despite that they still go on to develop interval cancers, so the question is how we can detect these lesions at an earlier stage rather than letting them develop into a significant cancer that is picked up years later. So, Barrett's oesophagus: there are a couple of commercially available systems for the detection of neoplasia in Barrett's. Jacques Bergman's group and our team in Portsmouth have both published work on this, and again, similar to the colon, the results are very promising, because these patients have very subtle lesions that are very difficult to detect. Endoscopists do Seattle protocol biopsies, with quadrantic biopsies every two centimetres, but despite that things still get missed, so AI could be the solution to improving that. And this is just a little example.
You can see here that the video on the left is without AI and the video on the right is with it, and at around six o'clock there's an area that the AI is delineating; when we switch to BLI it's actually much more obvious that there's an area of dysplasia there. That could be completely missed on quadrantic biopsies, the patient might come back in three years' time for their surveillance, and that could be a cancer. Again, there has been research into using AI to detect squamous neoplasia; this has generally come from Eastern centres, where the prevalence of squamous neoplasia is significantly higher than in Western populations. This is one of the studies, from Yuan et al., and it showed accuracy and sensitivity above 90% for the detection of squamous neoplasia. A similar study looked at delineating lesions, and the sensitivity was 98% on over 6,000 images and in 80 videos. So these are really promising results, and squamous neoplasia can be extremely subtle, so, a bit like Barrett's, I think anything that can help improve the detection of these lesions is going to change clinical practice. And this is just another one, also from an Eastern centre, on squamous neoplasia. Similarly for early gastric cancer: we know that patients with chronic atrophic gastritis are predisposed to developing gastric neoplasia, so we do Sydney protocol biopsies and surveillance for them if they have intestinal metaplasia. This nice study from Honggang Yu's group in Wuhan describes the ENDOANGEL system; the sensitivity in internal testing was over 90%, on external validation it was again over 90%, and when they compared the performance of experts with the AI, the AI performed significantly better. And this is just another similar paper about early gastric cancers. Then, finally, the other area where AI is potentially going to be used in endoscopy is quality metrics. The new kid on the block in AI for endoscopy is something called computer-aided quality, and it is being developed in various ways at the moment. One approach looks at mucosal visualisation: these are two different studies, one in the upper GI tract and one in the lower GI tract, and they give you a score for how much of the mucosa has been visualised on your withdrawal; they also give you an idea of your withdrawal speed and tell you whether you're going too fast or too slow, so that you can alter how you're withdrawing and make sure you're visualising all of the mucosa. This is another one for quality, looking at the quality of bowel preparation, again from Honggang Yu's group: it gives you an automated score for the Boston Bowel Preparation Scale, which, if anyone has ever done it properly, is quite labour intensive to calculate, because you have to score each segment of the colon rather than just saying good, fair or adequate. So these are all things that can assist you as an endoscopist and improve your efficiency; you know, we've got huge backlogs of endoscopy waiting times.
Anything you can do to improve efficiency and streamline services is, again, going to benefit patients. The final area that's becoming an option for AI in endoscopy is natural language processing, which describes using AI to interpret human language. This paper, from GIE a few years back, showed the concept: essentially, the AI extracts the relevant text from what you've written in your report, based on how many polyps there were, their size, et cetera, plugs it into the schema it has been trained on, and calculates the surveillance interval for you. And what is the future of AI in endoscopy? I think it's going to be a combination of all the things we've discussed. In the future, you'll walk into an endoscopy suite to do your colonoscopy and you'll have a computer there helping you to detect polyps, a computer helping you to generate the correct diagnosis, and AI helping you to generate the report: it will be able to detect polyps from the images you take and say there's a three-millimetre adenoma in this part of the bowel, because it knows exactly where you are, help with generating the report, and make sure you're improving your quality metrics and doing high-standard colonoscopy. And there are so many other areas of AI in endoscopy that I haven't even touched on, things like capsule endoscopy and cholangioscopy; essentially, anything that's image based has the potential for AI to improve our practice. And I think that's about it for my 20 minutes, so I'm happy to take any questions. Thank you very much, Katie, that was an excellent and very comprehensive discussion of all things AI and endoscopy. While we wait for some questions to come in on the chat, I've got a couple for you. It's really great to see the work you've done in your PhD, and I just wanted to ask about datasets and what you had available, because I presume that the weaker performance of the non-IBD-specific tools was a matter of the data they had been trained on. Is that correct? Yeah, so the generic algorithms, the ones that are commercially available, will all have been trained on patients who don't have IBD, and actually, if you look at most of the RCTs that have been done, having IBD is generally an exclusion criterion. So we developed the algorithm by training it specifically on patients with IBD, and you have to have a really diverse dataset, because IBD, for anyone who has done any IBD endoscopy, involves so many different types of lesion, like pseudopolyps and these very flat neoplastic lesions we used to call DALMs (a very historic term now). So you end up needing a vast quantity of data in order to train it. And the reason we tested it against the commercially available systems is that I think you need to prove your system does something better, because no one had really tried what these existing systems could do in IBD.
So testing both systems on the exact same dataset gives you a benchmark to say that what we've developed, this dedicated algorithm, does improve performance. Yeah, excellent, and you had some really impressive results, which is really pleasing to see. In terms of the training for your model, did you then have to go out and find more people with IBD and get data by performing more colonoscopies? Is that how you went about it? Yeah, so our AI research essentially has various centres that are part of our ethics for it, and, as I said, it is very important to test the external validity, so you need to make sure you're not just testing it on a very narrow pool of data. That is one of the challenges with all things AI, as we've heard: you need a big dataset. And outside of a research setting, are you routinely recording your colonoscopies, so that, not necessarily through a specific research project but in standard practice, we're capturing that data and can expand these datasets going forwards, or is that something that still isn't happening? I think there's work to be done in endoscopy. In surgery, actually, I think there are better options in terms of data storage, with companies and cloud storage and so on, and because it's so routine in endoscopy to take photos, video recording hasn't taken off as much; it's very standardised that, for example, you are expected to take a photo of the caecal pole, the appendiceal orifice, the ileocaecal valve and the rectum in retroflexion, so people already take a lot of photographs. There's still work to be done on recording videos, but it's incredibly useful, because if you don't have that data, you're never going to be able to develop AI tools. I think it's centre dependent, but in the vast majority of smaller hospitals that aren't active in research, I suspect they're not recording and it's just photographic evidence. Well, fantastic. Thanks again for your talk; you're getting some nice comments in the chat, which is always really good to see. Brilliant. So that brings me to our last speaker, Dr Pietro Mascagni. He's a resident at the Gemelli Hospital in Rome, Italy, and he's also a clinical research adviser on computer science and AI at the CAMMA lab at IHU Strasbourg in France. It's a real pleasure to have you on with us today, so I'll hand over to you. Thank you very much, Andrew, for the very nice invitation; I'm glad to speak after this panel, and hopefully I'll live up to the promise. I'll try to share my presentation. Here it is; do you see it now? I guess so; that's the usual rhetorical question. So Andrew asked me to speak about AI for safe laparoscopic cholecystectomy, which is a topic very close to my heart, because I've been dedicating more than five years of work to trying to use this technology for a well-defined endpoint where we could hopefully demonstrate clinical value in the coming years, and that is laparoscopic cholecystectomy.
In the video you're seeing now, you can notice that the surgeon doesn't have any hesitation at all, any technical hesitation, in clipping and cutting what he or she believes to be the cystic duct and then the cystic artery. It really looks like a standard operation until he or she realises that the anatomy was not the expected one, and that was actually the common bile duct. This is the nightmare of every general surgeon. Back in 2003, Way and colleagues analysed these injuries and showed that 97% of them happen because of a visual perceptual illusion. And in 1995, Strasberg and colleagues had already understood that and proposed a vision-based solution, the so-called critical view of safety. This consists in dissecting the hepatocystic triangle, exposing the cystic plate, and at that point visualising only two tubular structures, the cystic duct and the cystic artery, entering the gallbladder. This kind of secure target identification technique lets us exclude a re-entering tube. Simply speaking, that is what happens in the classical bile duct injury: the common bile duct gets too close to the gallbladder and simulates the funnel shape of the cystic duct exiting from the gallbladder, and those are the cases where you get a bile duct injury. Another nice thing about this critical view of safety is that basically the whole surgical community agrees that it is of value in preventing major bile duct injuries like the one we just saw in this video. And this is still an open problem, because first of all laparoscopic cholecystectomy is the most performed abdominal surgical procedure, it is performed by most general surgeons, and usually on young patients with benign conditions. The rate of bile duct injury, despite being rather small at between 0.5 and 1.5% in the only registry data we actually have (and we look forward to further data that should come out in the coming months), is still three times more common than in open surgery, and according to a survey it happens to every other general surgeon over the course of their career. The consequences for patients are tremendous: a three-fold increase in mortality at one year, and a huge cost for healthcare systems. In the last few years we have been applying surgical data science, a field that started around 2017 in which surgeons and computer scientists join forces to model surgery and improve it, to the problem I've just discussed, safe cholecystectomy. Our first attempt was to promote the implementation of best practice, in this case the critical view of safety. So we started collecting videos, something that resonates now with this OSCAR project. We collected one year of laparoscopic cholecystectomy videos from the University Hospital of Strasbourg and found, through a double-blinded analysis of the videos, a low implementation rate of the critical view of safety. What we did was nothing fancy, we just did what guidelines suggest: we asked surgeons to time out before clipping and cutting the cystic duct, with the operating surgeon asked to explain the three criteria defining the critical view of safety to his or her assistant, in what we call the five-second rule, to gamify it a bit. And it is very effective: soon after we introduced it in the department, the rate of CVS achievement went up to 70%. Then, over the following year, it stabilised at about 44%, which is suboptimal but still three times more CVS implementation than at baseline. And a number of important secondary outcomes improved: surgeons were more aware of not having achieved the CVS.
And, as guidelines suggest, they bailed out when the CVS could not be safely achieved, for instance with a subtotal cholecystectomy; they spent more time dissecting without increasing the overall duration of procedures; and they better reported this best practice. Honestly, this is one of the simplest pieces of work we've done, and it's the one I'm most proud of. Second, I will briefly speak about the work of friends led by Amin Madani from Toronto, with a group of surgeons from SAGES and elsewhere. What they did is GoNoGoNet: basically, they asked experienced surgeons to segment, so to paint on images, where they would dissect (green "go" zones) and where they would not dissect (red "no-go" zones). Here, rather than going through guidelines, they tried to replicate the mental model, the decision-making, of an expert surgeon, and I believe this is a fantastic example of how to think about applying AI in medicine, in surgery, or potentially also in endoscopy, as we heard previously. This is already being used for morbidity and mortality meetings, and they are doing a lot of work and publications to really bring it into practice and also into training; I don't know if any of you have tried the smartphone app that they built with SAGES, I think last year. So if we need an instrument to guide us towards achieving the critical view of safety, then we also need a system to assess it unequivocally, and that's what I focused on within the research lab at the beginning of my PhD. This two-stage neural network we built segments the hepatocystic anatomy relevant to the critical view of safety and continuously assesses the three criteria that define it, so it can be used for guidance, as you are seeing in this demonstration video. It could also be used, I imagine, in a pipeline with the five-second rule: you do your intraoperative time-out, you review the three criteria with your colleague, and you also have a second reader, a machine, that gives you a green light to proceed. Then we also want to document our performance. Again, we are speaking in the context of OSCAR, so I'm sure you understand very well the value of videos for documenting, for recording. However, it's just impractical to think that surgeons will review them; never mind that lap chole is a rather short procedure, it's just impossible to expect surgeons to review each and every video postoperatively. But what if surgeons could just look at highlights, like the highlight reels we get of sports events? Could AI help us generate highlights, a more efficient way to document with videos? That's what we did with EndoDigest. What EndoDigest does is use models to recognise the phases of the procedure and the tools, and to infer, through a rule-based system, the moment the surgeon is about to divide the cystic duct; it then saves only the video from two minutes before the division to 30 seconds after. That is an arbitrary window we decided on, and we then tested it in a user study and found it quite effective for documenting the critical view of safety. What I like to stress is that this type of implementation is very easy to scale if you want to document other critical moments of procedures, because phase detectors and tool detectors are now quite well-solved technical problems, so it's quite easy to implement something similar for other procedures or other endpoints you want to document.
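To make the EndoDigest-style rule concrete, here is a minimal sketch of the clipping step only: given per-frame phase labels and tool detections (assumed to come from upstream models), pick the highlight window around the presumed cystic duct division. This is not the published EndoDigest implementation; the phase and tool names and the trigger rule are simplified placeholders, while the two-minutes-before / 30-seconds-after window comes from the talk itself.

```python
from dataclasses import dataclass

@dataclass
class FramePrediction:
    t_sec: int          # timestamp in the full video, in seconds
    phase: str          # e.g. "dissection", "clipping_cutting"
    tools: set[str]     # tools detected in this frame

def find_division_clip(preds: list[FramePrediction],
                       before_sec: int = 120, after_sec: int = 30) -> tuple[int, int]:
    """Return (start, end) of the highlight clip, or raise if no trigger is found."""
    for p in preds:
        # Simplified trigger: first frame in the clipping/cutting phase with a scissors-like tool.
        if p.phase == "clipping_cutting" and "scissors" in p.tools:
            return max(0, p.t_sec - before_sec), p.t_sec + after_sec
    raise ValueError("No clipping/cutting moment found in predictions")

# Toy timeline: dissection until t=900 s, then clipping/cutting with scissors at t=930 s.
timeline = [FramePrediction(t, "dissection", {"grasper"}) for t in range(0, 930, 30)]
timeline.append(FramePrediction(930, "clipping_cutting", {"grasper", "scissors"}))
start, end = find_division_clip(timeline)
print(f"Save clip from {start}s to {end}s of the full video")
```

The fixed window is a design choice rather than anything learned: as the speaker notes, it is an arbitrary duration that was then validated in a user study.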
We also did a user study: we tested the system on 100 consecutive cases, giving one surgeon the full video and another surgeon the two-minute-and-30-second video generated by EndoDigest, the small fraction of the overall duration shown here. In 92% of the cases the two surgeons could assess the CVS equally well, which, if you consider that it was found a few years ago that written reports adequately document the critical view of safety in only about 18% of cases, is a rather big improvement over baseline. And of note, we have demonstrated that EndoDigest works across centres in a multicentric validation. So this was an excursus through some of our surgical data science work for safe cholecystectomy. It has led to several publications, which was of course useful for the PhD, but what we really care about is bringing this to the clinic, using these tools, these support systems, in operating rooms; we want to replicate the success that has already happened in endoscopy. Hence we partnered with NVIDIA back in 2020; we were basically among the first to receive their developer kit, a compute box. And we took on what many told us was the biggest challenge: accelerating several AI models in parallel to get real-time intraoperative assistance. These are the models we took from the lab and optimised on the box, these are some ideas from stakeholders of the applications this system could help with or solve, these are the performance figures of these systems, and this is what they look like in practice, one second. In practice, you have a system that segments the hepatocystic anatomy and continuously evaluates the three criteria defining the critical view of safety, detects and tracks the instruments, and recognises how the procedure is developing, the steps of the procedure. First we tested this in the experimental operating rooms in Strasbourg: here we were simulating the streaming from a camera, which was actually coming from a PC, with the box fitted into the laparoscopic tower, and showing that there was no delay between the original video and the video augmented with this very explicit AI analysis. Then we tested it in the operating room. Of course, the surgeon was not exposed to it, because that would require a ton of regulatory work that, as a research group, we were not well equipped to do, so the output was only displayed outside of the operating room, where only Alfonso could see it, but this was probably the first procedure performed that way. We then needed to come up with a good way to demonstrate that this was feasible without breaking the regulatory rules around it. So we partnered with the Digestive System Surgery Congress in Rome and the World Congress of Endoscopic Surgery, which at the time was being held by the EAES in Barcelona, and we did this live broadcast: here Professor Didier Mutter was operating, that's the screen he was looking at, and about 70,000 surgeons connected that day. So in this video you see the same models being demonstrated in real time, assessing the critical view of safety and so on. It was quite an emotional moment for us, because this is what drives us, bringing this type of technology to the clinic. That said, a big caveat: this is in no way what we envision being applied in surgery in the coming years, because here what we optimised for was that all the predictions were very visible.
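For readers unfamiliar with what "several models in parallel, in real time" implies, here is a deliberately simplified sketch of a per-frame loop. The three model functions are empty placeholders (not the Strasbourg/NVIDIA deployment), and the frame budget is an assumed value; the only point is that every model must fit inside the per-frame latency budget for the augmented video to stay in sync with the original.

```python
import time

FRAME_BUDGET_MS = 40.0   # assumed budget for ~25 fps endoscopic video

def segment_anatomy(frame):    # placeholder for a segmentation model
    return {"cystic_duct": [], "cystic_artery": []}

def detect_tools(frame):       # placeholder for a tool detector
    return ["grasper"]

def recognise_phase(frame):    # placeholder for a phase-recognition model
    return "dissection"

def process_frame(frame):
    start = time.perf_counter()
    overlay = {
        "anatomy": segment_anatomy(frame),
        "tools": detect_tools(frame),
        "phase": recognise_phase(frame),
    }
    latency_ms = (time.perf_counter() - start) * 1000.0
    return overlay, latency_ms

if __name__ == "__main__":
    # Fake video stream: 100 dummy frames stand in for camera input.
    latencies = [process_frame(frame)[1] for frame in range(100)]
    worst = max(latencies)
    print(f"worst-case added latency {worst:.3f} ms "
          f"({'within' if worst < FRAME_BUDGET_MS else 'over'} the {FRAME_BUDGET_MS} ms budget)")
```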
All these segmentations, these overlays, were meant to demonstrate the technical feasibility. Now we're working a lot to understand the human factors and ergonomics around it, because we believe we need to build systems that deliver the right information to the right person at the right time, no more, no less. And, hopefully I didn't speak too much, this is where I will end my presentation; I'm of course more than happy to take any questions and discuss with you. Brilliant. Thank you very much, Doctor Mascagni, for an excellent talk; your work is always so impressive every time I hear about it, so it's a real pleasure to hear from you. Thank you. Just while we wait for some questions to come in on the chat, I've got a few myself. The first is: in your current practice, when you're away from your research setting in Strasbourg and back in Rome, how are you using video? So, as a surgical resident, I record every case I do; to me it's kind of a no-brainer that I want to do that, I do it, and I fight for it if I need to, and I just find it to be probably the best way to understand what I do and to try to improve the next case. In my centre in general, it's something I'm advocating for; it's still not systematic, it's still not accepted by everyone, and it's still done in a very rudimentary way. There are people going around with USB sticks, and hopefully the DPOs of our hospitals are not listening, but this is something we should definitely avoid. With my researcher hat on, I'm now at the stage where, for the work you've seen, we're doing a really large-scale multicentric validation, more than five centres in Italy, a few in Europe, some in Japan, some in the United States, and there we needed to come up with a solution to record, collect and share data. I don't have it in the slides today; actually, if you want, I can pull it up real quick if we have time. We have built some tools to basically enable data sharing. Let me load it while I speak. Can you still see my slides? Yes, but not that one. Let me go back. Sure. And just while you're finding that, how much of your time each week is spent rewatching clips of previous surgeries, perhaps even using EndoDigest to give you the segments you're really interested in? Excuse me, how much time do I spend looking at it? Yeah: do you spend time going back and rewatching those videos, from your last case for example, just to check how things went, whether they went the way you thought they did, that sort of thing? I do, because I'm a trainee, so to me it's extremely valuable. I don't know how scalable that is, and I believe this is where AI will play a role in speeding up the time it takes to extract valuable insight from videos; also, potentially, I don't want to review the whole video, I want to review certain steps. But now I have the system in place: this is the work of Lawrence, a fellow from the lab whom you met in July, and it was presented at a recent congress; it's something we built to enable our multicentric collaboration.
Basically, the problem we had is that towers record videos, usually splitting them across different files; these files contain identifiable metadata, and the video contains out-of-body images, images where you can potentially recognise the patient or staff. What the system does, and this is just the set-up, is that you enter a patient name and then load your files, your video, let's say, and the system does what we used to do manually with engineers in Strasbourg: it merges the several video files into one, then uses an AI system to detect out-of-body images and gets rid of them by replacing them with pixelated or black images, whatever you want, and strips out all the metadata. That way you get a file out which is fully de-identified, and this is how we are convincing DPOs, the data protection officers, and IRBs that the data has been through best efforts at de-identification and can be shared. Wonderful, thank you very much for sharing that with us, and thanks again for your talk. What I will say is that Doctor Mascagni and the rest of the team in Strasbourg hold an excellent summer school on surgical data science, and I highly recommend it to all of our audience here, so do get your applications in when they open sometime next year. I'll just spend another few minutes as we close the webinar today to talk about OSCAR. If you haven't come across it, OSCAR stands for an observational study of camera-assisted surgery recording, and it's a nationwide audit of surgical video recording in the United Kingdom and Ireland. Where this comes from is that, as we've discussed tonight and in our other webinars (and I really do hope you take the time to watch back our last one if you haven't already, and do join us for our next one), we think there's tremendous value in surgical video for a number of reasons, and we want to understand what's being done in the UK and Ireland to record operations and endoscopies. So we're interested in all patients undergoing any operation, and, really importantly, we're interested whether or not the actual procedure is being recorded, because we also want to know the scale of the operations that aren't being recorded as well as the ones that are. So: any patient undergoing any surgery, whether or not it's recorded. We will collect cases via REDCap, and for the UK-based trainees who are familiar with the logbook, the data collection is very similar to that; we went through it on the previous webinar and we'll release some videos of how it works very shortly. We also want to know what's happening 30 days after the patient's operation, so we're interested in their outcomes, and we're also interested in video use, especially in the context of whether or not the patients had any complications. We'd like everyone to contribute data to this; we really are a collaborative organisation and want as many people involved as possible. This is a diagram showing the structure of our group.
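The de-identification pipeline Dr Mascagni describes above (merge the tower's split files, detect out-of-body frames, black them out, drop the metadata) can be sketched roughly as follows. This is not the lab's actual tool: the out-of-body classifier below is a crude stand-in for a trained model, the OpenCV-based re-encoding is one possible implementation choice, and the file names are made up.

```python
import cv2

def is_out_of_body(frame) -> bool:
    """Placeholder classifier: a trained model would go here.
    As a crude heuristic, very bright frames are treated as out-of-body."""
    return frame.mean() > 180

def deidentify(video_parts: list[str], output_path: str) -> None:
    writer = None
    for part in video_parts:                      # merge the split recordings in order
        cap = cv2.VideoCapture(part)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if writer is None:                    # initialise writer from the first frame
                h, w = frame.shape[:2]
                fps = cap.get(cv2.CAP_PROP_FPS) or 25
                writer = cv2.VideoWriter(output_path,
                                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
            if is_out_of_body(frame):
                frame[:] = 0                      # replace potentially identifiable frames with black
            writer.write(frame)
        cap.release()
    if writer is not None:
        writer.release()

# Hypothetical usage with the split files a tower might produce:
# deidentify(["case01_part1.mp4", "case01_part2.mp4"], "case01_deidentified.mp4")
```

One design note: writing a brand-new file from decoded frames, as above, is a simple way to ensure none of the original container metadata travels with the shared video.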
We want as many people as possible to be local collaborators, data validators and, for some of you, local leads, really helping us to run this study at scale across the four countries and contributing data. For that, in any of our outputs you'll be acknowledged as a collaborator at whatever level you've been involved, and it's PubMed-citable, so it will count for all of your future applications and you can put it on your CV. And, being an audit, this is very valuable for those of you who need to complete audits as part of training programmes, and also when applying for surgical training, either at core level or at registrar level. So, as a quick recap on the timeline: we're in the middle of our three weeks of webinars, so do join us next Monday for the next one. We're hoping to start data collection in a couple of weeks, with week one starting on the 18th of November and week two following on the 9th of December. There'll be a bit of time for us to upload all our data over the Christmas break, and after that we will analyse the data and hopefully have some outputs. I really hope you can join us. In terms of what you can do now: register for OSCAR, you can sign up using the QR code; we'll need to register the audit locally at each site, and there's some information as you sign up and complete the registration that will help you do that. Then please tell your colleagues about OSCAR as well, because it'll be great to have them involved and contributing data too. To be counted for authorship, we ask for a contribution of one full operating list within each data-collection period. Then organise your teams for local data collection, and make sure you've created an ORCID account so that we can properly acknowledge your contribution. During the study it really is as simple as this: if you're used to logging your operations on an electronic logbook, you're ready to take part in OSCAR; it's exactly the same, just using our REDCap system, and then contributing the 30-day follow-up data as well. Afterwards, we want you to get the most out of this too. You're obviously completing a local audit, so it would be great if you could present the results back to your teams and get them interested in surgical video and digital surgery, as we're discussing in this webinar series, and of course we'll credit you with authorship in any of the outputs we generate from this. So thank you very much, and thanks for joining us on tonight's webinar; I hope you've enjoyed it as much as I have. We've had three really excellent speakers, and my thanks again to them. Please do join us for the next one: it's on Monday next week, the 11th of November, at half past six UK time, and it's on digital solutions from industry, so I'm really looking forward to hearing from speakers from Intuitive, Medtronic and Proximie. I hope you can join us then. Thank you very much.