Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
Thank you for joining today's webinar on artificial intelligence in surgery. My name is Panagiotis and I'll be your host for this evening. This is part of the education schedule of the Kingston Hospital Department of Surgery, jointly delivered with the Department of Medical Education. Today we have the immense pleasure of welcoming Pietro Mascagni, one of the leading experts in the world on surgical artificial intelligence, who will give us a talk on artificial intelligence and surgery with a focus on his scientific interest, computer vision, which is one of the applications of AI. Before we start, a few housekeeping rules. First, I would ask everyone to mute their microphones during the main part of the talk. Afterwards we will have a question session; you can post your questions in the chat or the Q&A box and we will address them one by one. A recording of the event will be posted online, together with the slides, for anybody who is not able to join us today. If you have any technical issues, like the one we had in the beginning, feel free to let us know in the chat. At the end of the webinar, please complete the feedback form, because unfortunately that is the only way through the application to get your certificate of attendance, and it is also very important for helping us improve our sessions in the future. We encourage everyone to engage and ask questions, because this is why we have Pietro with us: to answer questions and enlighten us on this very exciting topic. Today we also have with us Mr Gan, who is the surgical tutor at Kingston Hospital and the educational supervisor for the Department of Surgery. Hi. The floor is yours.

Thank you very much, Panos, for your generous words and for inviting me to join this webinar. I'm very excited to speak a little bit about my research passion, the applications of AI, and in particular computer vision, in surgery. So now I'm sharing my PowerPoint, one second. Do you see my slides? And did they move to slide number two, slide number three? Yes. Fantastic. You have introduced me very generously, but I'm a surgical resident at a clinic in Rome. I had the chance to spend five years in research at the Institute of Image-Guided Surgery, where I was working under the supervision of Nicolas Padoy, a computer scientist from the University of Strasbourg, who guided me through a PhD looking at how we can use surgery to improve computer vision and AI to improve surgery.

So this is where I'd like to start from: surgery. Surgery is not what we show in the picture here. This is a photo of an operating room about 130 years ago, taken in Boston, and surgery back then was not highly prevalent, quite simple, and not very effective. Today's surgery looks much more like this. This is a picture I took in Strasbourg in 2018. Basically, surgery has evolved a lot, and today surgery is highly prevalent in healthcare systems, both in terms of numbers and in terms of cost, and it's much more effective than it used to be. But it's also much more complex. And unfortunately, we still have a surgical safety gap, because between 3 and 22% of operating room procedures are complicated by major complications, which translate into a high risk of injury or death after surgery.
Just consider that medical error has been considered the third leading cause of death in the United States, and it adds a huge cost per year. The good news is that a large part of these errors are preventable. Now, let's look at the operating room and the vision of surgical data science. As we've seen in the previous images, and as we all know from our surgical experience, operating rooms in 2023 are highly complex socio-technical processes where there is a high flow of information. Computer scientists, engineers and surgeons have joined forces believing that analytics, moving from standard statistics to machine learning and deep learning, can be used to model this information in order to give the right information to the right person at the right time. This is the vision that guides all the work we will see in the next slides.

Before moving into applications, I would like to demystify a bit the expectations around AI in healthcare and in the clinic by introducing what AI is in very simple terms. First we need to know what we're speaking about, and I will use this Venn diagram just to define some terms. AI is a very broad umbrella term referring to any software mimicking human tasks and activities. A big part of AI, what powers most of AI, is machine learning: the ability of algorithms to learn through experience. And a big part of machine learning, the one that has brought many of the recent successes in AI, is deep learning, which powers applications like computer vision, natural language processing and robotic control. But what is intelligence? This is very hard to define, both for biological intelligence and for artificial intelligence. In a few words, intelligence is the capability of a system to adapt, to learn. Learning is key also to artificial intelligence, because while in classical programming the software engineer is asked to write a program knowing a priori all the functions needed to solve a particular task, in machine learning the programmer writes an algorithm that learns how to approximate a task by iterating over data. This is the fundamental difference between classical programming and machine learning, and it is what has brought us the ability to solve much more complex problems for which we don't necessarily know every function a priori. And if learning is key, a fundamental part of learning is experience, and experience for software is data, just as it is in real life.

Here we exemplify one of the most common learning paradigms, supervised learning. In the setting of computer vision, if we give an algorithm as input an image of a laparoscopic procedure, for instance, an untrained function is initialized and will output a result, which is then compared to reality, the ground truth, usually annotated by domain experts. The prediction is compared to this ground truth, an error, a loss, is computed and backpropagated to the function in order to update its parameters, and we basically iterate this process until the system learns how to predict reality. Through this optimization of algorithms we can solve different tasks, and here I will just simplify the most common ones. If we ask a system to, for instance, classify what type of instrument is being shown in an image, the system has to pick between two or more classes, and this is what we call classification.
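To make the predict-compare-backpropagate-update cycle described above concrete, here is a minimal sketch assuming a PyTorch-style setup. The tiny network, the seven instrument classes, and the random tensors standing in for expert-annotated laparoscopic frames are illustrative placeholders, not the actual models or data discussed in the talk.

```python
# Minimal supervised-learning loop: predict, compare to ground truth,
# backpropagate the loss, update parameters, iterate.
import torch
import torch.nn as nn

NUM_CLASSES = 7  # e.g. hook, grasper, clipper, ... (illustrative)

# A deliberately tiny CNN standing in for the "function" being trained.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, NUM_CLASSES),
)

loss_fn = nn.CrossEntropyLoss()                      # compares prediction with ground truth
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch: 8 RGB frames (224x224) with expert-annotated labels.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

for step in range(100):                              # iterate over the data
    logits = model(frames)                           # forward pass: the model's prediction
    loss = loss_fn(logits, labels)                   # error with respect to the annotation
    optimizer.zero_grad()
    loss.backward()                                  # backpropagate the error
    optimizer.step()                                 # update the model's parameters
```

In practice the frames and labels would come from annotated surgical videos and the loop would iterate over many batches, but the cycle is exactly the one described in the talk.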
If we add a spatial component to the question, for instance, can you indicate where the hook is, with, say, a bounding box, then we're speaking about detection: the classification plus the spatial localization of the item we are interested in. Finally, if we want a more detailed analysis, we can ask the system to classify every pixel of an image, and this is what we call semantic segmentation. The reason why I've built these examples around computer vision is that vision is very important to surgery, and it always has been, from open surgery to minimally invasive and robotic surgery to what we call image-guided surgery, that is, any surgery enhanced by further imaging modalities. And the vision here, sorry for the play on words, is that vision is a very important sensing modality, both because it allows us to understand the surgical field and because it can give us an idea of what goes on in operating rooms as an environment, for instance through ceiling-mounted cameras.

So if this is a good sensing modality, we need to extract meaningful information from it, and a fundamental first step is to do that manually; I will give some examples later on. This is fundamental not only because we need manual annotations to train the machines that will scale our analysis, but because we need domain expertise, surgical expertise: our role is to identify and formalize problems that can be assessed in images. This analysis can later be scaled through automated information extraction, for instance using computer vision, so as to provide this information, this feedback, in real time and at scale to assist surgical procedures. This vision has been powered by a lot of technical developments. While early attempts at analyzing surgical videos provided very coarse information, both from a temporal point of view and a spatial point of view, for instance classifying the procedure type and what type of instrument was used, these have evolved over time to become much more detailed, to the point that today we can analyze surgical activities at the action level and at the pixel level. In this slide I will just give a couple of examples. The lab where I come from and where I work published in 2017 this demo where we had systems able to analyze the surgical workflow, from the phases of a procedure to instrument usage. Today, actually almost one year ago, we are able to understand tool-tissue interactions formulated as triplets: what tool, the aspirator in this case, is doing what action on what target anatomy. This is the work of Chinedu Nwoye and the subject of some recent data challenges.

So what are the applications of this to surgery? Here I will speak about some of the work using this technology to improve the safety of laparoscopic cholecystectomy. As you can see in the video on the left, the surgeon has no hesitation in cutting what he believes to be the cystic duct, and only later on does he or she find out that it was actually the common bile duct. So the problem of bile duct injury, which has dramatic consequences, as everyone knows, was found to be a visual perception problem, which has a visual solution: the critical view of safety.
But unfortunately, despite a lot of effort from surgical societies, this is still under-implemented, and the rate of bile duct injuries has not gone down much in the last 30 years. So we have tried to apply the kind of techniques I have just introduced to improve the safety of laparoscopic cholecystectomy, first by promoting the implementation of this safety step. The work I will present right now does not have any AI in it; it is just to exemplify what the manual component of video-based assessment can provide. We started collecting the videos of the procedures performed in Strasbourg, and we found that over a year there was a 16% rate of CVS implementation. Then we went to the surgical department and asked them to implement what is advised by the guidelines, that is, a time-out to recall the principles of the CVS before cutting the cystic duct and artery. And we found, through video-based assessment, that in the year after this quality improvement intervention the rate of CVS implementation was much higher; actually, it was much higher in the first procedures after our intervention and then decreased. So even though the before-versus-after comparison is still very favorable, because the rate went about three times higher, there is still a high margin for improvement, and this is what we would like to tackle with some AI interventions that hopefully I will show in the next meetings.

Then colleagues from a collaborative between Canada and the United States proposed to guide surgeons towards safe areas of dissection and away from unsafe areas of dissection, what they call Go and No-Go zones, by training a model to replicate the mental model of expert surgeons. In this work led by Amin Madani, five expert surgeons were asked to segment where they would dissect, the green zones, the Go zones, and where they would stay away from, the red zones, the No-Go zones, and then they trained a network to replicate this kind of assessment. Then we worked on assessing the critical view of safety to overcome the inter-rater variability that limits this process measure. We trained a system to segment the hepatocystic anatomy relevant to the critical view of safety and to continuously assess the three criteria defining this view, the two structures and the hepatocystic triangle. This is both to guide surgeons towards achieving this critical view and to provide an automatic safety check; imagine this in a pipeline with a five-second time-out. Finally, we implemented some techniques to promote a reliable documentation of this manoeuvre, which, as suggested by the multi-society guidelines, could enhance its clinical implementation. We thought that videos are the best way to document whether you have done a good dissection or not, but surgical videos can be long and we are only interested in reporting the critical parts, in this case, whether in a laparoscopic cholecystectomy we have achieved the critical view of safety or not. So basically here we use some AI to recognize the phases and the tools, then implement a rule-based inference system leveraging this automatically extracted information to identify the moment we want to document, and then automatically edit the videos around the cystic duct division. And we found this to work both from a technical point of view and in a user study.
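The rule-based step described here, selecting the part of the video to document from automatically extracted phase and tool information, can be sketched roughly as follows. The phase name, the tool name, the one-prediction-per-second assumption, and the clip length are made-up values for illustration; the per-frame predictions would come from whatever recognition models are available, and this is not the actual system presented in the talk.

```python
# Toy rule-based selection of the video segment to document, given
# per-frame predictions of surgical phase and visible tools.
from typing import List, Set, Tuple

FPS = 1  # assume one prediction per second, for simplicity

def segment_to_document(phases: List[str],
                        tools: List[Set[str]],
                        window_s: int = 150) -> Tuple[int, int]:
    """Return (start, end) indices of the clip to export.

    Rule (illustrative): document around the first frame where the
    predicted phase is 'clipping_cutting' and a clipper is detected.
    """
    for i, (phase, visible) in enumerate(zip(phases, tools)):
        if phase == "clipping_cutting" and "clipper" in visible:
            start = max(0, i - window_s * FPS // 2)
            end = min(len(phases), i + window_s * FPS // 2)
            return start, end
    return 0, len(phases)  # fall back to the whole video if the rule never fires

# Example with fake predictions: 10 'preparation' frames, then clipping begins.
phases = ["preparation"] * 10 + ["clipping_cutting"] * 5
tools = [set()] * 10 + [{"clipper", "grasper"}] * 5
print(segment_to_document(phases, tools))   # -> (0, 15) for this short fake video
```

The point of the sketch is only the design idea: the learning happens in the recognition models, while the decision about what to export stays in explicit, auditable rules.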
From the technical point of view, we had about a minute of error over the 100 consecutive videos used for testing. For the user study, we gave the short videos, two minutes and 30 seconds long, produced by the AI to one surgeon and the long videos to another surgeon, and they agreed on the documentation of the CVS in 91% of the cases. This is a big improvement over what is reported in the literature, where colleagues have found that only 18% of narrative postoperative reports actually document what can be seen in the video.

So if this is the research work around using surgical data science to improve laparoscopic cholecystectomy, our interest now is to go to the clinic. To do so, we have partnered with a major provider of accelerators to implement our deep learning systems on their cleared device. This is a shoebox-sized device that can go into operating rooms, and on it we are able to deploy and run several AI models in real time. Back in November 2021, almost two years ago, we were extremely happy to test this system, do an AI-assisted laparoscopic cholecystectomy, and bring that to the EAES conference and another conference in Rome. There you can see Professor Mutter operating in the operating room in Strasbourg and getting a video feed, the normal laparoscopic view. We were grabbing that video signal and, for safety and regulatory reasons, running the AI inference outside of the operating room, but in real time, as you can see in this video that was streamed at the conferences. Here you can see that we are able to get predictions without any lag on phase detection, tool detection and tracking, anatomy and tool segmentation, and a continuous assessment of the three criteria composing the critical view of safety. This for us was a very important moment, because we wanted to prove that this system can be run during procedures, and so we made everything very explicit. Of course, we don't foresee this kind of view being used during surgical procedures, because all of this information would disturb rather than help. So what we're currently doing is working with human factors experts and designers in order to understand how to provide an interface, so that we are sure to give the right information to the right person at the right time only, and don't disturb clinical workflows.

In the last part of my presentation I will try to answer a few of the questions that I get very often. Can we do it with AI? How much data do I need to start my own project on this? "Can we do it with AI?" is a very difficult question, because AI is evolving so fast that what you couldn't do yesterday maybe you can do today. How much data is also a very difficult question to answer, because there is no analytical way to estimate a priori the data necessary to train a system. But there are some rules of thumb we can use. The first thing I would suggest is that we analyze whether the data we want to use for our system contains the necessary information to solve a given task, and a simple way to do that is to ask a domain expert whether he or she can do it. Then we need to analyze whether the dataset is representative, so whether the data represents the patients I want to apply my technology to and the disease spectrum I see in my practice.
In terms of size, as I was saying, there is no exact rule to estimate the number of data points needed to train a system, but again, a rule of thumb is to try to estimate how complex the task is. For instance, in a classification problem, the bigger the difference between classes, so the bigger the inter-class variability, the easier the problem. Let's take the example of a colonoscopy: if a colon without polyps is very different from a colon with polyps, then there is a big inter-class variability and we will need less data to train our system to recognize polyps, and vice versa. If the intra-class variability is high, for instance if we ask a system to classify whether a polyp is benign or neoplastic, or what kind of lesion we're seeing, and this lesion can have many different shapes and appearances, then there is a big intra-class variability and we will need more data to learn this difference. Finally, it's always good to think about whether we can optimize our dataset. Getting back to the colonoscopy example, if we want to characterize a lesion, using enhanced imaging like NBI, which is easier for us, will probably also contain more information for the system to learn from.

Then, how to get started: should I learn how to code? This is a question Panos asked me at some point, and I would say the best way to get started is to team up with your computer science counterparts, with engineers, and today there is a growing number of opportunities to do so. I met Panos at the AI master class at EAES, but other societies, both clinical, for instance DDW, Digestive Disease Week, runs an AI in GI workshop, and technical venues like MICCAI, have dedicated events for clinical translation where engineers and clinicians can meet. And finally, we are organizing a training program for computer scientists and surgeons to meet, called the Surgical Data Science summer school, which will be at its third edition next year. Another way to team up, where I would invite you to join, is by joining projects like the SAGES CVS challenge. Basically, SAGES has tasked us to run a data collection around laparoscopic cholecystectomy videos: we want to get 1,000 videos from all over the world, we are assembling a team to annotate whether the critical view of safety has been achieved or not, and then we plan to release this dataset so that surgical data science teams can compete and propose the best models. I would invite you to participate, either by donating data or by joining the annotation team.

Finally, can we trust AI? There is a lot of discussion around the explainability problem, the black-box problem, and this is a real issue, not only from a scientific point of view, because we want to understand what goes on inside these very big multi-parametric functions, but also from an application point of view. Because, differently from a scalpel, when we apply an AI model it's very difficult to give the right indications for use, because we only know whether a system works or not by testing it, and of course we cannot test it on every possible patient. So it's very difficult to define the right indications for use of an AI system. And the other big problem I see around trusting AI is around the human-machine interaction.
There is a common bias called automation bias, the same bias that makes us believe too much in our GPS when we drive around, that basically induces us to trust too much what the computer says. This was recently demonstrated in the work I cite here, where they basically rigged a computer-aided detection system for reading mammography to give wrong answers, and they found that both novices and experts tend to rely too much on what the computer says and tend to be wrong more often than they otherwise would be when they follow the AI. So we tend to trust it too much, and we need to make sure that our interfaces are designed in a way that prevents this bias from kicking in. Finally, where do we stand? We are at the very beginning of AI, in an imitation phase of AI: all the big models we are seeing today are not much more than statistical models of tasks, as François Chollet, an AI researcher, puts it. We will see in the forthcoming years breakthroughs that will lead at some point to general intelligence, the capacity of models to solve different tasks, and superintelligence, the capacity of models to surpass human intelligence. With this, I conclude by thanking you again for your attention, and I would be more than happy to discuss this topic with you and answer any questions.

Panos, we cannot hear you again; there's a problem with the sound. No, we cannot hear you. While Panagiotis fixes the problem with the sound: Pietro, congratulations. Very nice, very interesting and, let's say, unique presentation, because we all know that AI is something very new and very innovative. Speaking for myself, initially I was very curious and now I'm very excited about what AI can offer. I will start with some questions, if that's OK. Sure. A couple of questions. My first question: I have recently come across some new AI techniques, especially in gastroenterology; for example, I've been introduced to a software that you can use during your colonoscopy and that enhances the way you find lesions and polyps in the large bowel by almost 30% in some cases, even for experienced endoscopists. What I've realized, and what I've seen through the cases that I've done, is that it's a very good tool in the hands of an experienced physician, and it's quite dependent on the user. Really, if you are someone very experienced, it will help you achieve even better results. If you are not a good user, in the sense that you're not experienced or you don't know where to look, I'm using the example of colonoscopy, if you don't turn the camera to see this area, the AI will not recognize this lesion. So what is your comment on this, how can this be implemented in surgery, and how important is this for surgery?

I think you touched on a very important point. Just for the evidence: here we are speaking about computer-aided detection of colorectal polyps during colonoscopy, which has been implemented in the clinic since 2018 and which currently is the largest application of computer vision in medicine overall. If in AI in medicine in general there are 39 trials, more than 10 of these are on AI for the detection of colorectal polyps.
But I would say this is the tip of the iceberg because, as you were saying, results are good, there is strong evidence that these systems improve the adenoma detection rate, which is the key performance indicator that matters most in screening colonoscopy, for instance, but, as you said perfectly, that is only part of the examination. You could have a perfect computer-aided detection system, but if you don't do a high-quality exam, a high-quality colonoscopy, then it is useless, because if you don't see the lesion, if you don't show the lesion, the system won't see it either. Of course, this is something both research groups like ours and companies are working on, and there's a whole new set of applications, generally called computer-aided quality, that will be implemented in a pipeline to guide towards a higher-quality examination, which is the fundamental first step to then discovering polyps. In the setting of AI applications in surgery, I think this can teach us a few things. The first is that there is never going to be a single model, a single application, that solves or drastically improves, for instance, the safety of a procedure, because we need to solve several tasks at the same time. This is, for instance, why we wanted to demonstrate that we can run different deep learning models concurrently in the live-streamed demonstration we did of AI for safe laparoscopic cholecystectomy. Second, we need to have outcomes that tell us about quality. In colonoscopy this is, for instance, whether you have done a complete examination reaching the cecum, whether you are scoping a clean bowel, by assessing the bowel preparation scale, or whether you have taken enough time to see the colon, so the withdrawal time. These are all key performance indicators that AI is trying to automatically assess. What we lack in surgery, or where we are at the very beginning, is this type of metric that can help us distinguish between a high-quality procedure and a lower-quality procedure. This is actually why I insisted so much that manual analysis is extremely important, not only to generate the datasets that we need for AI training, but to get these insights, these metrics, these outcomes, that we then use AI to scale.

Thank you very much, that was a very important answer. I think you hit on the core of the issue, because people think AI is, you know, a magical thing that will solve all of medicine's or surgery's problems, and as we can see, AI is not one thing, it has several applications, and at the end of the day it's about how we use it. The thing about the quality indicators Pietro mentioned, I would like to touch upon. Pietro, what do you think are some things we could standardize, or for which we could generate quality indicators? Because the ADR that you mentioned, the adenoma detection rate, is a quality indicator for colonoscopy. What are some initial steps we could take in that direction for surgery, and which operations, for example, would you say are good examples?
For instance, the reason why most early computer vision applications in laparoscopy and surgery were on cholecystectomy is not only that we have a high number of these procedures, but that since about 1995 there have been groups trying to understand process measures, for instance the critical view of safety, because of course it's a more standardized procedure where there were clear problems, like bile duct injury, and given the high numbers it was easier to get the kind of insight that generates outcome metrics. This kind of process is only starting in other, more complex and longer procedures where there is more variability, but such studies are appearing. I think a landmark paper in this space was the one by Birkmeyer in 2013, where they did video-based assessment of anastomoses in bariatric procedures and found that this correlates with outcomes. Now this type of study is growing in number: just last week there was a study in the Annals of Surgery looking at video-based assessment of the anastomosis in pancreatoduodenectomy, Whipple procedures, and better performance assessed through OSATS correlates with a lower incidence of pancreatic fistula. So I think this is going to explode, because now we have a way to study the intraoperative phase of surgical care, videos; we are developing tools like OSATS and process measures like the critical view of safety that allow us to extract information from these videos and measure what we do; and then we will use AI to scale this, in order to have intraoperative assistance and to get more insights across centres and outside of research. Your second question?

I have a couple more questions, because I find this subject very interesting, and I'm very excited you took the initiative to arrange this; congratulations to you too, not only to Pietro. I was at EAES and, as I was planning to leave, on the last day I decided to do the master class, and I would encourage everyone to start: either do the master class, attend a course like it, or consider going to Strasbourg. Unfortunately, I had too much study leave already, so I couldn't go this year. And start learning the basics if you are interested: there is a very good book, Artificial Intelligence in Surgery by Daniel Hashimoto, to which Pietro has actually contributed; that's a good way in if you are more into books. And, you know, just start dipping your toes into the domain of AI. I don't think it's only for coders or tech geeks; it's for everyone interested in progressing surgery. I just wanted to say that. Pietro gave a very good presentation, and he was the person who organized the master class, so I think he was the right person to bring to Kingston and to this audience.

And let me apologize for my low-quality presentation today. I'm a little bit sleep deprived, because my first baby just joined our family, and so between the clinics and the baby I'm a bit deprived of sleep and my English suffered a bit from it. I think it was great, it was great.
Pietro said something very important, that AI is for everyone. But now I will go from the more general question I asked earlier to something a little bit more practical, what some of us will think when we see AI in surgery. For example, I can see in the audience that we have many levels of surgical doctors, from trainees to very senior surgeons; for example, we have Vaso, who joined in this year and is a very experienced HPB surgeon. What would you say to Vaso, who asks you: do I have to learn anything about AI? I'm experienced enough to do a cholecystectomy achieving the CVS very easily, because I'm an HPB surgeon and I'm very experienced, so why do I need to introduce AI into my practice?

Why introduce AI into your practice? Let's take a more limited and, at this time, more practical example. So, he is an experienced HPB surgeon, he perfectly knows how to do the critical view of safety, and he can deliver a high-quality procedure most of the time, but unfortunately not everyone is at the same level. So if you want to decrease the variability in the quality of care provided to the patients accessing your service, this could be a good way of trying to decrease the gap between the different operators, because it could help us standardize some of these process measures. The CVS is just an example that has been highly standardized and recommended for a long time; this is why we started from that. But in general, we all know from our own clinical practice and experience that unfortunately not all procedures are the same and not all surgeons are the same. And from the perspective of a surgical leader, the head of the OR or the chief of the department, I think the value proposition of implementing some of these AI systems would be to make sure that everyone delivers a high-quality procedure. Does he have to get into this kind of space in terms of research? Not necessarily, I think; I spent a lot of time on development, but I don't think every surgeon needs to do that. But what I believe most experienced surgeons, specialists in a leading position, should understand and should learn is the fundamental difference between other medical devices, other tools, and AI-based tools. We need to understand the strengths and limitations of these systems. These systems are based on data: they are trained on data and they are tested against some data, with some ground truth, oftentimes unfortunately still confined to the datasets of companies. Let's take again the example of computer-aided detection: there are now, I think, more than five or ten systems that do computer-aided detection. So how does the chief of a service pick what to buy and make sure that what he or she buys is actually a good device? It's very difficult to tell, because the company can tell you, for instance, an accuracy or some other metric, but we need to always remember that that metric is computed against a ground truth annotated by who knows whom, on a proprietary dataset. This is the kind of thinking that even someone who doesn't want to get into AI research or development of AI applications should consider, to make sure that the tools that will be implemented in our practice are safe and accurate enough. Thank you.
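One concrete way to act on this point, sketched here under assumptions, is to run a candidate system on a locally collected, locally annotated test set and compute the metrics yourself, rather than relying only on figures obtained on a vendor's proprietary data. The frame-level labels below are made up, and how the predictions are obtained from the candidate system is left abstract; this is an illustration of the idea, not a validation protocol from the talk.

```python
# Sketch: frame-level local validation of a detection system against
# annotations produced by your own experts (all labels here are made up).
from typing import List, Tuple

def sensitivity_specificity(truth: List[bool], preds: List[bool]) -> Tuple[float, float]:
    tp = sum(t and p for t, p in zip(truth, preds))
    tn = sum((not t) and (not p) for t, p in zip(truth, preds))
    fp = sum((not t) and p for t, p in zip(truth, preds))
    fn = sum(t and (not p) for t, p in zip(truth, preds))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

# truth: does the frame contain a polyp, according to local expert annotation
# preds: does the candidate system flag the frame
truth = [True, True, False, False, True, False, False, True]
preds = [True, False, False, True, True, False, False, True]
print(sensitivity_specificity(truth, preds))  # (0.75, 0.75) on this toy data
```

The numbers only become meaningful when the local test set reflects the patients and disease spectrum of the service, which is exactly the representativeness point made earlier in the talk.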
We have a question from the audience, from Mono Patel, asking: would any relevant information be available for ENT and head and neck surgery? So, are you aware of any studies or applications in the field of ear, nose and throat and head and neck surgery? I'm not an expert on this, but I know that there have been some attempts around using laryngoscopic videos, the analysis of laryngoscopic videos, to assess, for instance, the mobility of the vocal cords. So yes, there are some studies applying computer vision to ENT, but I cannot describe any of them in detail because it's not my field of expertise.

Pietro, following on from that question about ENT, we've mentioned colonoscopy and laparoscopy, and we see that most of the applications come from the parts of clinical surgery that are more endoscopic: we have the data, we have the video, we have the means of obtaining the information for the AI to be trained on and then give us an output. However, a lot of surgery is open, and we don't have, at least not widely available, ways of capturing data from open operations. So do you think that to start having AI in open surgery we should first establish ways of capturing the data, meaning ways to capture video in open surgeries? And to make the question wider: is what a clinical leader or head of department can do, capturing video and data and having a data infrastructure, to speak more technically, more important than actually getting the latest AI algorithm into the hospital?

This is a very relevant question. Of course, this kind of application is easier in minimally invasive surgery, because the endoscopic and laparoscopic videos are already there; in open surgery you would need to record them. But a few comments. First of all, today I spoke about what I know best, AI to analyze endoscopic videos, but there's a lot more going on; for instance, in major hepatobiliary surgery there's a lot of work on using AI to model preoperative variables in order to stratify patients. So you don't necessarily need intraoperative videos to add some clinical value with AI. Second, if you want to do this kind of intraoperative computer vision application in open surgery, you can; there are a few studies doing that, mostly at two levels. There are groups recording the operative field, either through head-mounted cameras or cameras in the lights or in other ways, and there are lots of telepresence systems on the market today that allow open surgery to be recorded well. But there are also other groups using ceiling-mounted cameras, for instance what I showed in a slide, to understand team dynamics. I work with a group from Boston, led by Roger Dias and Marco Zenati, that uses ceiling cameras to understand team dynamics: basically, they study how teams work together, interact and make decisions, and they use computer vision to do that at scale. Finally, yes, I agree: if you want to apply computer vision exactly in the operative field, then you will need some technology to record it, and this is of course more difficult than in minimally invasive surgery.

Mr Gan, I think you had another question as well. There are always more questions, Pietro. I have another question, because we need to think ahead.
So, actually two questions. One is about the importance of AI in training, because we are not only clinicians who operate on patients, we also train surgeons. And the second question is about the technology and the innovation as the years pass by. For example, I assume, listening to your presentation and knowing a few things about AI, that you need to have the right software and at the same time the hardware to support the use of AI. How can this be managed from the financial point of view? Let's say that an institution wants to buy the latest AI equipment, like we buy the latest iPhone, the iPhone 15 or 16, I don't know, and the next year there's another iPhone better than that, and in two years you are behind. What's your opinion about this?

Well, very important questions, and questions I'm very passionate about; I could speak a lot about this. They are also very related, because I do believe that this kind of analytics will play a big role in training. As I was mentioning before, AI could be used as a way to decrease the gap in the quality of care we deliver, and of course this has implications for training. There are a few aspects that make training applications very interesting. First, the need: globally, a large part of the population does not have access to high-quality surgery, so there is a huge need for better ways to train surgeons. Of course, bringing expertise everywhere doesn't scale, it's not possible, but AI can be used to mimic this expertise, so that even if your experience cannot be brought everywhere, it could be replicated to oversee what is done, for instance, in under-resourced countries. This is quite ethical but also very practical, because of another aspect, the regulatory aspect. If we use AI in a training setting, the regulatory burden, the regulatory bar to overcome to implement the system, is much lower. For instance, we are developing some models to use the videos of simulated tasks to give feedback to trainees, basically to obviate the need for an expert doctor behind you assessing the skills. From a regulatory point of view, it's not a medical device, so the moment we have a good system working we don't have to overcome a big regulatory barrier, and it can be spread quite easily, because at the end of the day you already have a camera and computation is quite cheap. And this gets to your second question. So, I'm in the position of a leader in my service and I need to decide what to buy: besides the decision I was mentioning before, what system to buy, how do I make sure that this system stays updated? This is extremely relevant, not only from an economic point of view but also from an AI point of view, because these systems are based on data. How do we guarantee that the data distribution in three years will be the same? Your patient population will change, your endoscopic hardware will change. So how do we make sure that the performance of the AI system is maintained over time? How do we make sure that we can bring upgrades? How can we make sure we can monitor how the AI does in what we technically call a dataset shift? There is no absolute answer to that.
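One simple way, among many, to watch for the dataset shift mentioned here is to keep a reference distribution of some summary of the model's inputs or outputs, here the predicted-class frequencies from the period when the system was validated, and periodically compare the live distribution against it, flagging the system for review when the divergence grows. The choice of statistic, the toy class names and the 0.2 threshold are assumptions for illustration, not an established monitoring protocol.

```python
# Sketch: flag possible dataset shift by comparing the distribution of the
# model's predicted classes this month against a stored reference distribution.
import math
from collections import Counter
from typing import Dict, List

def class_frequencies(predictions: List[str]) -> Dict[str, float]:
    counts = Counter(predictions)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

def psi(reference: Dict[str, float], current: Dict[str, float],
        eps: float = 1e-6) -> float:
    """Population stability index between two class-frequency distributions."""
    classes = set(reference) | set(current)
    return sum((current.get(c, 0.0) - reference.get(c, 0.0))
               * math.log((current.get(c, 0.0) + eps) / (reference.get(c, 0.0) + eps))
               for c in classes)

# Hypothetical predicted-class frequencies at validation time vs. this month.
reference = class_frequencies(["cvs_achieved"] * 60 + ["cvs_not_achieved"] * 40)
current   = class_frequencies(["cvs_achieved"] * 30 + ["cvs_not_achieved"] * 70)

score = psi(reference, current)
if score > 0.2:   # commonly quoted rule of thumb; treat it as an assumption here
    print(f"PSI={score:.2f}: distribution has drifted, review the system")
```

A shift in prediction frequencies does not by itself prove the model got worse, which is why the held-out, trusted test set discussed next is the complementary piece.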
This is a very big topic of research, and also of development by companies. I think one interesting solution is the companies proposing cloud-based systems, because these are easier to monitor and update over time. There is a big question on the regulatory side, because while the safety around the cloud is already solved in large part, what I was referring to is how we monitor the performance of a system over time. We would ideally need a held-out test set in the hands of a third party, not the company, not the FDA, which is trusted by surgeons. This is also one of the reasons why we founded the Global Surgical AI Collaborative, a nonprofit society looking at building an ecosystem to share surgical data, because if we have a shared dataset, a global dataset that surgeons trust, this is what we could use to benchmark our systems over time. These are all ideas in development; there is no clear solution yet, I think it's all to be explored, and we will see a lot of different avenues being explored in the coming years.

Thank you, Pietro. So this comes back to what you said about standardization: using those scales, like OSATS or other tools, and we're not even talking about AI, anything that can be measured can be assessed. I would encourage trainees to seek more objective feedback on their training, and not just "OK, you are doing a good job" as a blanket statement. So try to get videos of your simulations or your cases when you operate; try to record them even if you don't use AI at the moment or don't have a solution to use; keep at least your own videos, of course with data protection and identifiable information and those things sorted, because you don't know how you might be able to use them in the future. And Mr Gan actually encourages us to always record our cases and keep them, you know, for our own use, and we can review them with our supervisor or the consultant we do the case with. This is just a message to the trainees from the trust and beyond. Myself, I think this is extremely important; I learn from my own cases. Yeah.

So, we have a question from Lydia, one of the foundation doctors in our department at the moment. It says: have you encountered fear or suspicion from patient groups about the use of AI in their procedures? Perhaps misconception amongst patients regarding how AI is utilized may be the issue when it comes to its widespread use. Would you care to comment on that, basically how patients view it at the moment? This is extremely important. The short answer, because I tend to give longer answers, apologies for that, the short answer is: we don't know yet, because what I've shown is mostly in the research space. While in endoscopy there are clinical systems and there is clinical evidence, in surgery there are no algorithms, or actually just a couple of algorithms, being approved now for clinical use. So basically this is a question that still needs to be explored, but there are groups working on surveying patients, and of course we can learn from other disciplines like endoscopy or radiology. Based on my personal clinical experience, we do AI-assisted colonoscopy in Rome, detection and characterization, and oftentimes patients ask for it; they are happy that there is a system overseeing basically what we do and trying to improve what we do.
So they ask for it, and so far, in my experience, I have not seen a patient refusing the application of AI in their procedure. This said, I do agree that there is a lot to involve patients in, because at the end of the day we use patient data to train these systems, and we need to make sure they are representative, they are safe, and they are always used in a beneficial way. We are guided by good intentions, so we don't discriminate and all of that, but someone who is not guided by these good intentions might use these data, for instance, to select patients and discriminate in access to care. So, as always, it's not the technology that should be feared, it's how we use it, and probably the data more than the technology in this case. And a big role will be played, in my opinion, by making sure that there is a good narrative around these technologies, because we should try to demystify AI: AI is not the Terminator that will autonomously do procedures or take care of patients, at least not in the short term.

Yeah, it's not a magic weapon, it's not the good guy or the bad guy, it's just another tool in our toolset, and it's a powerful tool. We need to learn its strengths and limitations, but we also need to demystify it a bit. And communication, communication between clinicians, engineers and, sorry, patients, is critical, I think, to make sure that this narrative goes the right way. I mean, it is kind of the same with the robot, at first, in terms of misconception: you know, patients might think a robot operates on them, but at the moment at least, it's not the robot, it's the surgeon who operates. Yeah, exactly. And also, in the future, people say, you know, because AI will be integrated into that, the robot will be doing the operation; but there are levels of supervision. Maybe at some point the robot will be able to do some specific part of the operation, but I think there was a consensus, from somebody who discussed it in the master class, that we will never see a computer, sorry, a computer or a robot, doing a completely unsupervised procedure. I don't think we will basically allow it; there will be some expert supervision. I don't think we will reach level five, basically; there is a scale. Correct me if I'm wrong, Pietro.

No, I tend to agree with you. I don't use the word "ever" with this kind of technological development, because technically I think it will be feasible at some point, not sure when, but at some point we might have enough information, and enough knowledge on the modelling of this information, to do that. But there is much more to it than just performing a simple act, and I believe that there will always be a place for a human-in-the-loop design.

So, OK, we have at the moment one last question, from one of our registrars at Kingston: is there a way we can prevent the bias in medical data being exacerbated by the uptake of AI, given there may be an imbalance in where and how it's taken up? This is a question I love for two reasons: first because it is a very important topic, and second because it's very well formulated. I think not many get this; many say that AI will bring in a bias.
But in the way the question is formulated, it's clear it is understood that AI is only an amplifier of a bias that is included in the data, which is extremely important. It's not the AI that is biased; it's the data that the AI, or the model or whatever we call it, is trained on that is biased, and the AI then has the opportunity to amplify that bias exponentially. It is a very important question in computer vision too. There are only a few centres that currently record data, publish it and make it publicly available, and so the patient population we see is not really representative. I'll take it from the simplest aspect: for instance, most of our work was done using data from elective laparoscopic cholecystectomies, because those are the ones we record most easily. Of course, this will limit generalizability to the acute care setting, and this is what we are currently working on: we have a few acute cholecystitis datasets and we're trying to fine-tune the system to work in this setting as well. This was just to give you an example of a bias you can have; I think this is part of the development game, you start from where you have good data and where it's easier, and then you scale. But if you are speaking at a larger population level, for instance, you risk discriminating against populations of patients who don't usually have access to academic centres. This is a big question and a big problem, and it is also one of the things we want to tackle with the Global Surgical AI Collaborative: we want to make sure that the majority of centres can record, share and work on their data, because if we are inclusive about data, we mitigate this selection bias a bit. So yes, there is the risk that AI amplifies biases in the dataset, and yes, I think there is a lot we can do to prevent that.

So I think at the moment we don't have any more questions. Just to say, for the recording, that the video and the slides will be available, of course after Pietro removes anything proprietary or trademarked that he wants to take out; once we finalize those and go through the recordings, we will release them. And you can still ask questions and send them in. Yes, I think someone wrote a summary, which is quite interesting; it looks kind of like GPT output, to be honest. So a final question would be: how can somebody get into this from tomorrow? Say a trainee wants to get into AI research, how can they start? Where would you advise them to start their journey in AI research, to be specific?

So, we are surgeons, we love to learn by doing, and I would advise people to join the CVS challenge. That's very easy, a very low bar to entry, and I think it gets you a first experience in this field. Data donors are more than welcome, of course, but if you want to get into the mechanics, the actual work, I would suggest you join the annotation team, because there you will understand what it means to analyze these data in a way that is consistent, clinically meaningful, machine readable and so on. From there on, I would join the growing number, not many yet, but a growing number, of courses that are being offered in this space. Of course, you can come to the AI master class that we will run as a second edition next year.
But if you really want to get deeper into it, I would suggest you apply for a place at the Surgical Data Science summer school. And by the way, I always forget to mention it: if you go on edu4sds.org, you will find a link to an educational platform that you can freely join and go through a series of lectures on the fundamentals of endoscopy, which is the clinical introduction, and the fundamentals of computer science, which is the technical introduction. So there, I think, are a lot of resources to start from. And of course, the book you mentioned, AI in Surgery by Dan Hashimoto, is I think a very good starting point.

And yeah, for the attendees, I'm sharing the link to the Surgical Data Science website, so feel free to register there. I would probably be joining that next year; it's a very good way to get into it, and I think Strasbourg is also beautiful, so it will be a good holiday as well. It's very intense, so if you plan on a holiday, stay a few days more. Yeah, exactly, don't bring your spouses or whatever, because they will be complaining that you left them alone.

Any comments, Mr Gan? No, I think it is obvious that the whole conversation was so interesting that we kept on asking questions and discussing all the time; I think this was unique. Congratulations for the initiative, Pietro. Even though I'm in a generation of surgeons that is now becoming a little bit older, seeing all this audience of young surgeons, and you, young and enthusiastic, I would actually like to be involved. And I think the fact that some of us keep our videos on hard drives when we do laparoscopies is maybe a small treasure. Perhaps this will be the last message we can give to the people who are listening to us: those videos can be very, very useful in the future, when you have a very large database, to make the AI even stronger, a very large database, different mentalities, and we will train a system that can reach excellence in operating, of course, in good hands. So this will be my message, and I'm happy to participate in any kind of training or event that you will organize, because I'm very interested in this. Thank you very much, and thank you very much for inviting me and for this nice discussion.

I would just conclude by adding to what you said that the real value is not only the set of videos you have, but your surgical expertise in analyzing those videos, because that's really what we need. We need to extract insights if we want to develop some AI that replicates them; that is really the real value, I believe, to put your surgical expertise down into analytics. Yes. All right, thank you. To conclude the session: thank you, thank you to all the attendees, to the trainees, to the consultants, to everybody who attended. Please, to get your certificate and also to help us improve, complete the feedback form; it's two minutes tops, and I will send it as we conclude the session. So just take a couple more minutes of your time to complete the feedback form, please. And with that, I would like to wish everybody good night, good morning, good evening, wherever you are in the world.
Thank you very much, everyone. Bye-bye. Bye. Yeah.