Explainable AI for real-time endoscopic cancer characterisation - now and how | Prof Ronan Cahill


Summary

This session gives medical professionals an introduction to Professor Ronan Cahill's research on using AI to support surgical decision-making. By combining biophysics-inspired methods, computer vision and clinical assessment of perfusion, Professor Cahill and his team have developed an AI that can indicate whether a lesion seen at colonoscopy is cancerous or benign; in the example shown, it called a cancer with 98% probability. We will also hear how AI can be used to analyse lesions in other parts of the body, such as the liver, and how it can be combined with deep learning for greater accuracy. Professor Cahill also discusses the importance of explainability and trustworthiness in AI, as well as the public's opinion on the matter.
Generated by MedBot


Learning objectives

1. Identify the potential utility of AI in surgery and the challenges associated with its implementation.
2. Describe how biophysics-inspired AI methods and computer vision can be used to differentiate between benign and cancerous lesions on visual inspection.
3. Estimate the accuracy of traditional methods, such as biopsies and radiology, in assessing rectal polyps.
4. Explain the concept of image stabilisation and comparative analysis for recognising and classifying lesions in real time.
5. Outline the importance of ensuring an AI system is explainable and interpretable when it informs decisions and actions involving irreversible steps, such as surgery.
Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

So our next speaker is Professor Ronan Cahill, who is the Professor of Surgery at University College Dublin and the Mater Misericordiae University Hospital. He's also the Director of the Centre for Precision Surgery at University College Dublin and the Digital Surgery Unit at the Mater. Please can you give him a warm welcome.

So, Angela and Matthew, thank you very much, and thanks, Josh, Deirdre and Martin, for the invitation to present. I'm going to talk just about surgery, so not about the pathways around surgery, but about how AI could help us in operations, and this is what we're doing practically to see if that's possible.

So this is a polyp. You put a colonoscope into someone, you see a lesion like this, and what do you do? I wonder how many people here might know what this is. Is it a cancer or is it benign? It's more than two centimetres. We know that biopsies are about 80% accurate in these lesions, that visual inspection alone is somewhere between 50 and 75% accurate, and that radiology for a rectal polyp like this is only about 50% accurate. So who wants to say it's a cancer, and with what degree of certainty? Just while we've been talking, the computer has worked out that this is a cancer, with 98% probability. I'm going to tell you how we did that.

This is prospective work looking at using biophysics-inspired AI methods with computer vision to give that statistical inference based on the perfusion patterns of the polyp, and it's proving pretty accurate. We have to validate this and move it forward into a multicentre prospective study, up to a randomised trial. What we're doing here is characterising lesions in comparison to the final pathology and to biopsies, the current standard of care, and also looking at margins, the boundary analysis. There are 78 patients in the training set and then 50 patients in the test group. These are the sort of in-between lesions: not frank cancers or frankly small benign polyps, but ones that could carry a risk of cancer. Biopsies are only about 80% accurate at the moment, but in this work we're accurately identifying 75% of the lesions that biopsy misses.

So this is about AI, but really what I mean by AI here is the immediate application of statistical analysis, and there's a whole bunch of different types of AI, deep learning being one set of methods. AI has been characterised by boom and bust; you can see the Gartner hype cycle there. Lots of things are expected to deliver and often don't, and that's been a feature of AI since the 1950s. But maybe now it could become a real, tangible thing.

Deep learning has problems, though. This is Netflix's The Mitchells vs. the Machines, and it's no secret that deep learning has problems with recognition. "As if a tech company wouldn't have our best interests at heart." "Monchi, don't be scared." "Why are you clutching that large feral hog like it's a child?" "That's not a feral hog, it's Monchi." "What is that? A dog or a pig? Dog... pig... dog... loaf of bread... system error." "Come on, guys, it's a dog." So this can be the problem: you have to show the computer nearly every possible variation of the thing you then want it to identify, which is quite different from a child. You can show a child a horse and it's pretty much going to recognise a horse ever after, having seen only one instance of it.
And the problem in surgery, particularly in intraoperative cases, is that we lack the large annotated warehouses of data that exist in, say, mammography or ophthalmology. So we need ways to better train, or better understand what we're characterising in, these systems. There's a whole bunch of hot topics in AI, hot topics meaning things we haven't really quite worked out yet, but the key one at the moment, I think, is making sure that AI is understandable, explainable and interpretable, even if we may be giving up some of the potential benefits of AI by reducing it to things we can understand. Current AI, based on deep learning types of methods, is okay when you're just looking back at things, where it doesn't really matter whether the AI got it right or wrong; it's like selling cat food or dog food and advertising it. These aren't very important decisions. Applying AI in surgery, before an irreversible step, is much higher stakes.

So back to the polyps. Basically, our hypothesis here is that cancer is different to not-cancer, and that that should be definable by its behaviour. One of the reasons cancers are different is that they have a different blood supply, a different type of angiogenic pattern through them. There are clinical ways we can look at perfusion dynamically, and fluorescence angiography is one of them. This is a way of giving a dye that goes into the bloodstream; you then use a near-infrared camera to detect that dye wherever it is and watch it up close. Perfusion isn't really binary, does it go green or not; it's about how it goes green, how quickly those patterns happen dynamically. And we've been using computer vision methods to quantify that type of inflow and outflow for a couple of years now.

You might be able to see, on the top-left screen, that we've picked regions of interest along the bowel, the green boxes. We apply some image stabilisation, and we use computer vision methods to turn the intensity display into a time series. You'll see, as the dye starts being given, on the lower left-hand side of the screen with a couple of seconds of delay, that this is quantifiable by its patterns across the bowel at different points over time.

The theory in cancer, then, is that a dye we give, and this could be a drug or anything, will go to the normal tissue first and struggle to penetrate into the cancer, but then, while it's being cleared from the normal tissue around it, there will be some relative retention of the dye in the centre of the tumour. All of this happens within a few moments to two minutes. And that's what we'll be looking to compare: the behaviour in the normal bit of tissue on the same screen as the abnormal bit of tissue, as we're looking at it.

These are commercial near-infrared cameras. The white-light view in the top left is useful for image tracking and stabilisation using surface features. There's a near-infrared view, which is where the information is going to come from, the white coming in on the black background, and we're going to compare these different parts of the screen. We get an output like this, judging whether there's sufficient difference in the abnormal area to classify it as cancer. So this is the technological pipeline for doing that. These are visually apparent differences; you can work this out with pen and paper, and you can work it out with MATLAB.
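To make that concrete, here is a minimal sketch, not the team's actual code, of how the region-of-interest step can be implemented with standard tooling (OpenCV and NumPy): for each hand-picked box on the stabilised near-infrared view, record the mean fluorescence intensity per frame to build a perfusion time series. The file name and box coordinates are purely illustrative.

```python
import cv2
import numpy as np

def roi_time_series(video_path, rois):
    """Return an array of shape (n_rois, n_frames): mean NIR intensity per frame.

    rois: list of (x, y, w, h) boxes, in pixel coordinates, placed on the
    stabilised near-infrared view (e.g. one on the lesion, one on normal bowel).
    """
    cap = cv2.VideoCapture(video_path)
    series = [[] for _ in rois]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # fluorescence as grayscale
        for i, (x, y, w, h) in enumerate(rois):
            series[i].append(float(gray[y:y + h, x:x + w].mean()))
    cap.release()
    return np.asarray(series)

# Illustrative usage: one box over the lesion, one over adjacent normal bowel.
curves = roi_time_series("icg_clip.mp4", [(120, 80, 40, 40), (320, 90, 40, 40)])
```

The resulting curves are the kind of inflow/outflow traces that can then be compared between normal and abnormal tissue, whether by simple statistics or by a time-series classifier.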
But if you're trying to do an operation, you really need to do it with high degrees of confidence, in moments, to give that feedback to the surgeon. You see here, for this benign polyp, the typical type of curve for the abnormal area and the normal area: they more or less behave the same way. For cancer, though, they're different; there's a different peak and slope to the curves at different levels. To put that on a statistical footing, you can pick time points along those curves and make statistical comparisons between them, and the curves are statistically different depending on whether the lesion is cancer or not, and even whether it's benign or normal tissue. We see high accuracy rates this way, depending on how many boxes on the screen you're going to compare.

You can also move it into AI. The MiniRocket classifier was a breakthrough in time-series analysis; it now measures 15,000 points in each of these curves. You are giving up some of the explainability by moving it into AI, but the results do get even better when you scale it up. But explainability is more than just how the chips work. We can take biopsies and look at the fluorescence patterns at different time points in those samples, so there's a pretty clear understanding of why the computer is saying what it says, of what's happening in each of those curves, and you can even use AI and ICG (indocyanine green) to help localise different morphology within the tumour types.

This is Deirdre's work now, and it's important that we can explain ourselves to patients and to the public. When we ask them about this type of work, they're very open to the idea. They trust us to do the right thing for them, including using new tools, and I guess they always have done; but they do, of course, expect us to be trustworthy, including being transparent about what information is being shared, who's looking at it, and what the purposes of these types of things are.

This type of biophysics AI, though, isn't antagonistic to deep learning. Of course, as the datasets build up, you can use them in a complementary way; why wouldn't you want to use every possible means of understanding lesions? But just looking at single boxes, or a few boxes on the screen, is perhaps still only a step, frustrating, I guess, because we should be looking at the whole screen. And that's what we're doing now: image-stabilising by tracking each pixel on the screen relative to the areas that are most obviously easy to track. At 30 frames per second, not much happens between each frame, so you're able to get quite a high degree of registration of the datasets behind it. So rather than just looking at, say, four sections, we're going to look at every pixel across the screen. That allows us to build up these types of heat maps, the tumour now being shown by how the pattern of perfusion behaved in each portion of it over the observation time of about 90 seconds. And this brings you to the ability to start to develop boundary maps and look at margins, and not just in the colorectum but in other lesions too: you can see here a liver surface lesion on the left, and a deeper lesion, too, on the right-hand side.
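As a rough illustration of that whole-screen idea, a per-pixel time-to-peak map can be computed from a registered frame stack with nothing more than NumPy. This is a hedged sketch of the general technique, not the group's pipeline; it assumes stabilisation has already been done so that each pixel tracks the same tissue point, and the array shapes and smoothing window are arbitrary choices.

```python
import numpy as np

def time_to_peak_map(frames, fps=30.0, smooth=5):
    """Per-pixel seconds from the start of recording to peak fluorescence.

    frames: (n_frames, height, width) array of NIR intensities from a
    stabilised sequence (e.g. ~90 s observed at 30 frames per second).
    """
    # Light temporal smoothing so a single noisy frame doesn't define the peak.
    kernel = np.ones(smooth) / smooth
    smoothed = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 0, frames
    )
    peak_idx = smoothed.argmax(axis=0)        # frame index of the peak, per pixel
    return peak_idx.astype(np.float32) / fps  # convert frame index to seconds
```

A region that fills late and retains dye then stands out from the surrounding mucosa on the resulting heat map, which is what the boundary maps visualise.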
So that's where we think this could really be most useful. I said biopsies were 80% accurate in these types of polyps. There's also a positive margin rate of about 20%, which for a benign lesion can lead to regrowth, and for a cancer can of course be very significant. But the ability to fully characterise the inside of a polyp could allow you simply to ablate it; one of the main reasons we take them out, of course, is to understand their nature.

So this is the pathway of how it works now in theatre. The image is stabilised, the dye has been given, and we're looking at the whole screen to characterise that area of tumour. We want to pick out the region of most interest; we don't really want to see the bowel lumen. So we select an area of normal and an area of abnormal tissue, which is the comparison on which the analysis is made, and that's projected back onto the screen in real time to indicate the lesion's nature during the operation. The parameter here, I think, is time to peak, and you'll see it being shown in real time on the left-hand side of the screen. The registration is relatively robust: you'll see a biopsy forceps come in and out, and it doesn't distort or lose the screen display.
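The real-time projection itself can be pictured as a straightforward rendering step: colour-map the perfusion-parameter array and blend it onto the white-light frame. The sketch below, again an illustration under assumed inputs rather than the system shown in theatre, uses standard OpenCV calls.

```python
import cv2
import numpy as np

def overlay_perfusion(frame_bgr, ttp_map, alpha=0.4):
    """Blend a time-to-peak map (in seconds) onto a BGR endoscopy frame."""
    # Scale the parameter map to 0-255 and apply a false-colour palette.
    norm = cv2.normalize(ttp_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    heat = cv2.applyColorMap(norm, cv2.COLORMAP_JET)
    # Match the camera frame's resolution, then alpha-blend the two images.
    heat = cv2.resize(heat, (frame_bgr.shape[1], frame_bgr.shape[0]))
    return cv2.addWeighted(frame_bgr, 1 - alpha, heat, alpha, 0)
```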
So we're back to a polyp again. Who wants to say now: is this benign or cancer, and with what degree of certainty? With the same process, you can see that it's actually behaving very much like the normal tissue, so this is a benign lesion. So this is it working in real time in a case. The next step, of course, is to validate this. That's going to need 500 patients over the next couple of years, and we're lucky to be working on a Horizon Europe-funded project with five other centres in Europe, and also with legal and bioethical experts, because validation is of course about the technology, but also about its acceptability and usability for patients, physicians and different types of providers. So I really would love to hear your thoughts, and thank you again for the opportunity to present at this conference.

Thank you very much, Professor Cahill. It's brilliant to see the level of technology that's available and right around the corner. I'm sure there'll be some questions from the audience. We have five minutes, so if you do have any questions, from the audience or online, please post in the chat or raise your hand.

I think this is kind of interesting because it is about software and not about hardware, and I agree with an awful lot of the last speaker's talk. It's just that if we insist on robots being the only way forward in the future, it really narrows things way down. This sort of stuff should apply anywhere in the world where a screen is being used to determine the inside of the patient. That has a much bigger applicability, and to me that's a little more exciting than being stuck with a £2 million kind of hardware system that's currently only providing 5% of operations in first-world countries. But maybe that's not a very popular thought. Robots might showcase possibilities, but the world is much bigger than who can afford a robot.

I have a quick question. With artificial intelligence and diagnosis, do you think there will be barriers of acceptability for clinicians who don't quite understand how it comes to these conclusions? You know, not trusting the machine and relying on their own experience, whereas actually, statistically, it's probably better.

It's interesting, isn't it? The more we insist on the AI being understandable, the less use we're going to get out of AI, and maybe it should be about determination through clinical trials rather than explaining every line of code. But that's where it currently is at the moment: the FDA insists on a degree of explainability. So who's going to use these types of things? What often happens is that some experts will say, well, I know that's a cancer and I don't need these types of systems. But the problem is that patients present to a variety of us in different places, and we've become experts through the privilege, the good fortune, of working in a centre where you see a lot of these things. Someone with a GI bleed will present to a variety of practitioners in a variety of different places around the world, and this sort of distributed expertise might be of value.

So, trying to understand where our problems are: to me, as a colorectal surgeon, I think one of them is that immediate assessment at the index investigation. If you knew whether it's a cancer or not a cancer, there are two different pathways: not-a-cancer could have an EMR, an ESD or other sorts of interventions, while the cancer pathway is quite well defined. But for these patients, at the moment, we're only 80% accurate, so we should really be better than that; we're way off where we are in other areas of colorectal and general surgical practice. So I think we can be better. And what's nice about this type of approach, of course, is that you can get a printout to explain your decision, why you've made that decision, because those graphs are different from the other graphs.

And us as black boxes ourselves is an interesting idea, isn't it? We often think we're all making the same decisions based on the same types of principles, but maybe we're not really. We see that sometimes at MDTs: if you ask people to explain why, they're maybe sometimes less confident in their visual assessments. But all our surgical training is about making decisions based on what you see; that's why we spend so long at it. So ways to supplement that with confidence are, I think, useful.

I think that's all we have time for. Thank you very much. Thank you. Thank you.