Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
Perfect. Good afternoon, everyone. I hope you can hear me loud and clear. Welcome to this afternoon's session on surgical simulation. My name is Ryan Aftab and my co-chair is Miss Lara Rose Manley; we're both ASiT vice presidents. We're going to keep things brief and kick straight off with our first speaker, Professor Kenneth Walker. He's a colorectal surgeon and trainer in Inverness and the Professor of Innovation and Surgical Education at the Royal College of Surgeons of Edinburgh. His interest is in bridging the gaps in the traditional apprenticeship model of surgical training using adjuvant training interventions, and he's here to talk to us about the future of simulation. Over to you, Professor Walker.

Hello, and thanks so much for the invitation. I'm Ken Walker from Inverness and I've had education roles with the Deanery and the College. I'd like to talk about how the future of surgical simulation is, I think, more about how we integrate it properly into training programs than about the simulation technology itself. Along the way I'll be showing some QR codes and references, so you might like to have a phone handy to take pictures of those. As a group, I guess we cut our teeth on designing and delivering the Scottish surgical boot camps in the Inverness Clinical Skills Center, with its parallel progressions in technical and non-technical skills. We still really enjoy having all the Scottish CT1s come for this course; they never fail to restore our faith in the future. As for the evidence base, I'm just putting up these two systematic reviews to show that they are a decade old now and this is really very well established. And yet somehow I think we're still under-using surgical simulation, whether it's for technical skills instruction or deliberate practice, team training or other types of non-technical skills training. All sorts of simulation we're under-using; we're like musicians who only practice during the gig.
And when we did a public consultation with our simulation collaborative, the public agreed with this viewpoint. Barry Issenberg from Miami already had quite a simple but evidence-based formula for success in simulation-based education, and we felt that in Scotland what we were most missing from this formula was the curricular integration. So we formed a collaborative, including TPDs, trainers, trainees and providers of simulation, and drew up a comprehensive simulation strategy for core surgical training. We were delighted in Scotland when, with the Improving Surgical Training pilot, this was funded, and it is still delivered for core surgical trainees at no additional charge to them, so their study leave budget remains untouched. You can see that it includes a number of courses: boot camp at the start, and other courses in the second year. They have Managing Surgical Crisis, a simulation-based course where they deal with crises arising in the operating room. There is consultation skills training: shared decision making, difficult conversations, that sort of thing. Basics is a general surgery cadaveric course, and there are alternatives for non-general-surgery themed trainees. There's some simulation woven into the monthly training days, stoma training for example. And then there are these two take-home deliberate practice programs, one in laparoscopy and one in vascular, which I'll mention later. Similar strategies have in fact been drawn up for other programs, ST programs, which might have to be a bit more bespoke in nature as you get more senior, and we may have to have strategies that are for themes rather than programs, for example endoscopy. But regardless, I think the future really is properly integrated simulation strategies like this, which are provided by the training program, funded by the program and not out of the trainee's pocket.
And the program has to become accountable for these things being available and accessible, and for getting the right training to the right trainee at the right time. You might ask why we don't have properly integrated surgical simulation across our training programs already. Surely it can't be due to lack of an evidence base. Is it to do with the cost? Well, that might be a concern for programs, but we delivered this strategy for £2,000 per trainee per annum, and we have some figures to show that that compares favorably with savings that may accrue in theater time, medico-legal savings and so on. I dare say there could be hesitation because of the complexity, or concern about the quality of what's being provided, and there is certainly a different type of complexity compared with, say, civil aviation. We know that this is a tool that can be used well or used badly. But there is already a kind of instruction manual, if you like, in the literature, like these ten golden rules from Barry Issenberg again in Miami, which you might like to take a picture of. Finally, I think there might be issues to do with leadership and strategy here, and we'll talk about that a bit more. Quite often in this whole conversation I find myself saying, you know, it's not about the simulator, it's about the learning; it's not about the tools, it's about the job. I used to reach for this analogy here, but I realize it hasn't stood the test of time, so I'm looking for a better one if anybody has ideas. So if it's not about the simulator, what is it about? Well, I've already spoken about curricular integration. Let's talk about three other things which I think can really make the difference between success and failure. The first of these is constructive alignment. So what's that? Well, it's a useful model when we're designing surgical education interventions.
And as you can see, it has to start from the needs, the curriculum or a perceived problem that we want to address using simulation. If we start instead from "I've got a simulator, I've got a tool, what can I do with it?", then we're rather in danger of becoming like the person with a hammer to whom everything looks like a nail. For example, I've lost count of the number of people in units who are looking for surgical simulation applications for VR headsets. Now, it's important that we have people working in that part of the spectrum of research in this domain, and there will be applications that come along, but there's yet to be one that's been useful to us. By contrast, this eoSim here was developed by a group of trainees in response to a problem, and they reached for the most appropriate technology for that problem, which was physical tasks with instrument-tracking software, rather than VR with haptic feedback, where the technology wasn't quite there yet. And it's a much more aligned simulator. We're not actually averse to the use of wearables and novel technologies. We have a project of our own called IU Expert where we're using head-camera cued recall and debrief to understand experts' mental models. But you'll see that even there we've reached for the minimum and most robust technology necessary for that task. OK. So the next thing in constructive alignment is detailing your intended learning outcomes. When we set out, we were very diligent about this; we were good boys and girls, and we wrote out detailed intended learning outcomes using Bloom's taxonomy, you know the kind of thing.
But as time went on and we became more familiar with our simulations, we realized that a lot of the most powerful learning outcomes were actually coming as seemingly incidental or tangential to the stated ones. Stand-up comedians know about this: the final punchline, which you can see coming, may just be a way of coming in to land, and it might be the tangents that are the more memorable. And I love these light-bulb moments. These are real quotes from trainees where there's been a realization, often in a session that was about something else, a wet lab or a simulated ward round, but realizations that weren't in the stated learning outcomes. Often they're something to do with the expectations of a community of practice, which are very important. And I think, rather than having a separate session about these, or a session called "Booby Traps in Surgery", it's perfectly valid to deliberately smuggle these in alongside the stated learning outcomes. I said I wasn't going to talk much about the simulator itself, but I would like to just touch on one thing, and that's validity, or fidelity. You know that there are many kinds of validity: face validity, construct validity, transfer validity. There's physical fidelity and psychological fidelity, and it's not always the case that the most high-tech means the most high-fidelity. And we've learned, sometimes the hard way, that we need to avoid something called the uncanny valley, which is a concept borrowed from robotics engineering. Basically, if you have quite a complex simulation, like, say, our simulated ward round here, you kind of need to get everything right, or at least certain important things right. And if it's almost right but not quite, then it becomes quite horrible, what's called uncanny.
And in that instance you would have been better to have quite a basic simulation, like most of our other sessions, which doesn't claim to reproduce the whole thing. It's a bit like saying you're better to have a really good cartoon drawing than an oil painting that's not very good. Just last week I was at a session talking about using AI and avatars for practising communication skills, and I must say I really thought those avatars were still in the uncanny valley. The next thing to mention is the need for the practice loop to be accompanied by feedback or debrief. This might be quite instantaneous feedback, like "put your needle a bit deeper", that sort of thing, or it might be more detailed debrief. In fact, Peter Dieckmann said it's unethical to use simulation-based education without good feedback or debrief, because you can train people in the wrong way, and you can train people in a psychologically unsafe way. So we've put a bit of effort as a faculty into trying to improve our debriefing. We've used the Scottish Centre for Simulation and Clinical Human Factors model that some of you might be familiar with, which you can find here. And if you're interested, there's a podcast that Steve Yule and I have recorded with Michael Moneypenny from that center. The next thing is culture and leadership, and to illustrate this I'd like to tell you a story about one of our take-home deliberate practice programs, the laparoscopy one. We thought this was a no-brainer. We knew that there was insufficient exposure to laparoscopic skills practice in the first two years of training, here and in the States. We knew that there was great evidence for the transfer validity of practice on take-home simulators. We knew the stages that learners can go through, and that as they become more automated, not only do the technical skills improve, but they have more spare cognitive bandwidth for non-technical skills and deep learning.
We thought we were wise to the dangers of handing out kit and having it gather dust in rooms like this. So we constructed a program with take-home eoSim kit for volunteer CT1 trainees: a structure of instructional videos and modules, practice videos to upload to us, certificates of completion and so forth. But interestingly, we found that only a minority of trainees actually completed this. Vivienne Blackhall, our PhD fellow, did a qualitative study in our centre and three other centres that were trying similar things, looking at the barriers and facilitators. She found some technical and logistical things, which were fixed to good effect, but interestingly the greatest barrier seemed to be a cultural thing. Trainees said: we know this is good for our skills, but the system values different things, particularly when it comes to ST3 application, ARCP and so on; it values other portfolio indicators over and above skills acquisition. So here we have well-motivated trainees being apparently discouraged, in this regard, by the system in which they're working. Interestingly, when the Edinburgh global surgery group have been doing a similar project in low- and middle-income countries, they've actually had a greater uptake, and we might hear from them later. So I think there's an issue here for us with the transactional element, what the system is asking of people, the weighted burden of different requirements from the system, and getting that to marry up better with the transformational element, what we see is good for them and are trying to encourage people to do. OK, running a bit short of time now, but just a quick word about the momentum of technology. We know that technology can bring us great leaps forward; sometimes it can promise one thing and deliver another. We know this from the social media on the phones in our pockets. So let's take robotic surgery: a fantastic leap forward in surgery.
We are at the point where the training curricula are being driven by the companies, and they may be doing a good job; it's in their interests that their products are used well and safely and effectively. But we are at the point where we do need training curricula that are agnostic to the particular simulators or companies, and we might need to take a slightly narrower path than the companies would drive us on. That's what Anna Klich in our lab and some other people are working on now. AI is obviously a technology with a huge momentum now. I said earlier I'm not yet impressed with its utility in training for consultation skills, but I can easily see where it will be useful for the real-time interpretation of new metrics in surgery, in order to use those data to enhance performance. I'm really looking forward to hearing from the surgical sabermetrics lab later on; it might be of great utility in interpreting real-time operating room black box data. So there we are: four things to think about that I think could make the difference between success and failure for simulation for surgical training. If we had more time, we could talk about simulation for planning and guiding surgery, or about transformative simulation for changing the way we do health care, but we don't. If you'd be interested to read more from the papers that our PhD fellows have produced evaluating some of these interventions along the way, then take a picture of this slide. Some of these are quite practical and some of them are more theoretical, and we're trying to bridge that gap between theory and practice. And as I mentioned, we have a new lab, the Surgical Education Research and Innovation Lab, and if you're interested in future possibilities for research fellows, then do visit our website. I hope I've given you food for thought about what matters for the future of surgical simulation. Thanks so much.

Thank you so much.
It's great to see simulation being integrated so early in surgical training, and that it's so learner-centred as well. Just as a reminder to our audience, we have time at the end to ask any questions of any or all of our panel members, so please do put any questions you may have in the chat as we go along. Next we've got Miss Helen Mohan. Helen is an Irish-trained clinical researcher and surgical educator currently working in Australia. She's a consultant surgical oncologist and robotic and colorectal surgeon based in Melbourne, Australia. She's also, she's very proud to say, an ASiT past president, and her interests focus on adaptive training solutions. She's here to talk to us about advances in simulation and models. Over to you, Helen.

Hi, and thanks again for the invite to speak. My name is Helen Mohan. I'm an Irish-trained colorectal surgeon, currently practicing as a consultant at the Peter MacCallum Cancer Centre in Melbourne and at Austin Health in Melbourne. I am the Director of Clinical Research at IMRA, the International Medical Robotics Academy, and a senior lecturer at the University of Melbourne. I'm going to talk about advances in simulation and models. I do have some disclosures, which are listed here. So when we think about simulation, I think one of the best examples for me is watching Monsters, Inc. and seeing the monsters doing their sim in the scare lab. It highlights that simulation is about more than just the technical skill: it provides a safe space to simulate technical components of operations, but also gives us the opportunity to build multilayered simulation with non-technical skills and human factors. We recently led a Delphi consensus at the Society of Robotic Surgery where we looked at some of the key elements to map to in robotic surgical training design, and really the key take-away point is that training is multimodal.
And there are many facets to robotic training. Looking at the trainee perspective in the UK and Ireland on simulation in surgery, I dug out this paper from a couple of years ago that we did with ASiT, where 98.9% of trainees considered simulation important. They valued hands-on more than e-learning, and the majority wanted greater access to simulation. However, there were barriers with accessibility and, of course, cost. As we progress with surgical technology, new challenges emerge, for example adapting to new technology like robotic surgery, and beyond that, adapting to what is essentially a multiplatform environment of robotic training currently. This is one of our PhD students, Kirsten Larkins, who's done some very interesting work looking at the transferability of skills in multiplatform simulation training. So here at the International Medical Robotics Academy we have an academy that's developed both models and a curriculum. The curriculum pathway starts at medical student level with an introduction to robotics, the robotics discovery module, in combination with the University of Melbourne, and then takes trainees on a journey all the way through from bedside to console, and then on to specialty-specific courses. This is a multimodal training pathway involving a combination of online learning, virtual reality simulation and then hydrogel-based learning, followed by taking it to the operating room. Non-technical skills and human factors are a really important component, and we're lucky to have Captain Mat Gray, the previous training director at Qantas, advising us on the design of our non-technical skills simulation. Then trainees go and do an online course, Foundations of Robotic Surgery, which is also now offered in collaboration with RCSI in Ireland
and is endorsed by the Royal Australasian College of Surgeons here. It's a comprehensive online introduction to robotic surgery that equips them to take it into the simulation lab and use hydrogel models to begin to develop their skills. So, the hydrogel models: the advantage of these models is that you can suture them, you can dissect them with diathermy, they have tissue planes, you can staple them, and you can use them locally, because with training instruments they can be used in your local hospital, bringing training to where the robots are. We also have abdominal wall models, made with a synthetic material to resemble skin, some of which are inflatable, which can be helpful for learning port placement. And one of the important things we try to capture is simulating the team experience, with the interaction between the bedside and the console surgeon. This is a basic skills trainer where you can dissect out the star and the circle with diathermy, as well as a range of other tasks which basically map to virtual reality tasks, taking them on to a more wet-lab environment. Then there are procedural models, like the ventral hernia model, where you use an insert in this abdominal wall model and can simulate the steps of dissecting out the sac and suturing the defect closed. Some of the procedural simulation that we undertake looks at the core component of an operation. Here, this simple right hemicolectomy model is simulating a specific component, the intracorporeal anastomosis, which for many people is a big difference in moving from laparoscopic to robotic surgery. So here we're making the enterotomy, stapling, and then suturing the enterotomy closed. We have been developing more complete procedural models, such as gastric models with the vasculature and the anatomy intact.
I'd like to thank our Northern Hemisphere collaborators, in particular at RCSI, at Guy's and St Thomas', and at the University of Freiburg, for their ongoing collaboration. We can also use these models, taking it back to laparoscopic and open training, which is important as we think about multilevel learners and simulation. So if we're using a model for the more senior trainee to do robotic simulation, we can use that same model for the SHO, for example, to then do a component of the operation laparoscopically. Here's an example of this from a recent course, where we were using an abdominal wall model and got the junior trainees to practice creating stomas at the end of the course. So, in summary: new technology can make simulation more accessible, because these models can be used in the local environment. Robotic surgery is very amenable to simulation, but simulation developed for robotics can have applications to a wider audience. I'd love to hear from anyone who's interested in collaborating or has any questions for further discussion at this email. Thank you.

Thank you very much, Miss Mohan, for another really interesting talk. Again, it's great to see how early training in robotics is being introduced, and the importance of offering multimodal training that's tailored to each stage in someone's progression. So I'm delighted to introduce our next speaker, Mr Austin, a neurosurgery trainee and PhD candidate at UCL, who will be talking to us about evaluating innovations in surgical training. Thank you.

Hello, my name is Austin. I'm a neurosurgical trainee currently doing a PhD at the UCL Institute of Neurology.
And I'd like to spend this talk first talking briefly about the principles of evaluating surgical innovations in general, and about IDEAL and what it's about, before moving on to the specific challenges that are unique to surgical training and that make it so difficult to conduct research into. So what is IDEAL? IDEAL is a collaboration that seeks to establish a series of consensus frameworks for the evaluation of surgical innovations, given that such innovations, and surgery itself, are complex interventions. I think most of us are familiar with the traditional phases of drug development and evaluation: you go from preclinical discovery to phase 1, 2, 3 and 4 clinical evaluations. And I think most of us are probably familiar with the traditional pyramid of evidence, with randomized controlled trials sitting at the top, only below a systematic synthesis of those randomized controlled trials. So what's different about surgery that makes this traditional model difficult? Well, surgery is a complex intervention. First and foremost, it involves lots of different steps and variations, and even in the few cases where there are randomized controlled trials in surgery, it's not infrequent to see surgeons argue that the results don't apply to them because they're more experienced than the trial surgeons, or that they don't apply because they do the surgery differently; in many cases these are sensible arguments to make. Surgery is an irreversible treatment, unlike medications, which you can stop or reverse. There are frequently fewer patient numbers involved, and there are also ethical considerations and considerations about acceptability. So how does IDEAL differ? IDEAL is still modeled on the traditional four stages, but it makes allowances for things that are specific to surgical innovations, like development and iterative modifications, learning curves, et cetera.
It also makes allowances in the pyramid of evidence: where randomized controlled trials might not be feasible or desirable for various reasons, you can use other study designs that keep many of the advantages of randomized controlled trials while still being feasible and acceptable to conduct. So what about surgical training specifically makes it even more challenging to conduct studies into? First, there's the learning curve. There's the learning curve of the technology or the innovation or your simulation environment itself, and often it's a mundane thing: questions like how do I turn this on, where do I plug this in, why isn't the internet working? In research into surgical training you have to remember that your units of intervention are the surgical trainees themselves, who are most often on their own steep learning curve, and even two of your participants may be in two very different places on that curve. There are many, many ways of measuring learning curves. One approach you could take is simply to ask: how many cases do you need to reach a plateau? What's the gap between early performance and plateau performance? And what does the plateau performance look like? You can parameterize all of these mathematically and produce curves like this; again, this is just one good way of doing it. And finally, there's a learning curve within your training environment itself: even expert consultant surgeons, the best in the world, might find that once they walk into your simulation suite there is a bit of a learning curve, even for them. Outcome measures are really challenging in surgical training. When I think about outcome measures, I like to think along these axes: there are patient-centered outcomes and surgeon-centered outcomes, and I also think about objective versus subjective outcomes, and these are not dichotomous.
These are on a spectrum, and you can visualize them on axes like this. Patient-centered outcomes can be things like PROMs (patient-reported outcome measures) and complications, or something a bit harder and more robust like morbidity and mortality. But overall, I would argue patient-centered outcomes are your gold standard, the outcomes you would ideally want to use, because the reason we do the things we do as surgeons and as doctors is in the hope that we make a difference for the patients receiving our treatments. But there is a real difficulty associated with measuring patient-centered outcomes, particularly in surgical training. The problem is that in most surgery we do, the outcomes are often pretty good, and your volume of bad outcomes, which you need in order to conduct comparative and meaningful analysis, is often fairly small. This is especially the case in surgical training, where your trainers and supervisors are very frequently present in person to intervene should they think something untoward is about to happen, and they would be there to prevent that outcome. And as a general statement, both in surgical training and in other forms of surgical research, the pace of surgical innovation exceeds your ability, and your pace, to collect the data and the outcomes that you need to conduct meaningful analysis. So what can you do instead? You can look at surgeon-centered outcomes, visualized in the same way. Face validity is a term I've taken from the simulation literature, but I suppose it can really apply to everything: you're asking a very high-level, instinctual question of your participants: does this intervention or innovation or training seem useful to you? You can move on to looking at surgical performance specifically, whether that's in real life or within your training environment.
There is subjective trainee and trainer feedback, various forms of structured and unstructured feedback; ISCP is one notorious example, and I think most of us probably have an account on there. The problem is these are often highly subjective and only very loosely structured, and unfortunately that does enable some questionable practices. You can use externally validated metrics, which are highly structured; despite the acronym, they are still not objective, but they are supposedly one step better. You can look at something more objective and more quantitative, like the number of mistakes, operating time, or the surgical forces you're applying. And with quantitative outcomes you can conduct your analyses in such a way that you can develop predictive thresholds, where you can predict surgical expertise with reasonable accuracy. There's also a role for new technologies: we are increasingly able to measure things we weren't able to measure before, like surgical forces using force sensors mounted on surgical gloves. And we come back to the learning curve: the learning curve in and of itself can be an outcome that you measure in your research into surgical training. What your learning curve might show you is that, whereas your learning curve might look like this without your intervention, with your intervention you might get a head start, or the learning curve is overcome more quickly. You may even find that you're breaking through the plateau of your learning curve, or completely getting rid of the learning curve, or any combination of these things. Now we move on to thinking about what we do with those outcomes and what comparisons we're making, and this is very important when it comes to doing your analysis and statistics.
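As a concrete illustration of parameterizing a learning curve, as described earlier, here is a minimal sketch (not from the talk; the data are synthetic and the exponential-plateau model, score = plateau - gap * exp(-rate * cases), is just one assumed form): it recovers the plateau, the starting gap, and the learning rate from a series of case scores.

```python
import math

def fit_learning_curve(cases, scores, rates=None):
    """Fit scores ~ plateau - gap * exp(-rate * cases).

    Grid-searches candidate rates; for each fixed rate the model is linear
    in (plateau, gap), so ordinary least squares gives those two directly.
    Returns the (plateau, gap, rate) triple with the smallest residual.
    """
    if rates is None:
        rates = [i / 100 for i in range(1, 201)]  # candidate rates 0.01 .. 2.00
    best = None
    for rate in rates:
        x = [math.exp(-rate * n) for n in cases]  # regressor for the decay term
        n = len(x)
        sx, sy = sum(x), sum(scores)
        sxx = sum(v * v for v in x)
        sxy = sum(v * s for v, s in zip(x, scores))
        denom = n * sxx - sx * sx
        if abs(denom) < 1e-12:
            continue
        slope = (n * sxy - sx * sy) / denom       # equals -gap
        intercept = (sy - slope * sx) / n          # equals plateau
        resid = sum((s - (intercept + slope * v)) ** 2
                    for v, s in zip(x, scores))
        if best is None or resid < best[0]:
            best = (resid, intercept, -slope, rate)
    _, plateau, gap, rate = best
    return plateau, gap, rate

# Synthetic trainee scores: plateau 90, initial gap 40, rate 0.15 per case.
cases = list(range(1, 31))
scores = [90 - 40 * math.exp(-0.15 * n) for n in cases]
plateau, gap, rate = fit_learning_curve(cases, scores)
print(round(plateau), round(gap), round(rate, 2))  # → 90 40 0.15
```

With real, noisy data you would more likely reach for a nonlinear least-squares routine such as SciPy's `curve_fit`, but the idea is the same: once the curve is parameterized, "cases to plateau", "gap" and "plateau performance" become comparable numbers rather than pictures.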
Given that we said most surgical outcomes, in most surgery we do, are actually pretty good, it's difficult to be truly superior. So we need to ask the question: are we really looking for superiority of our intervention, or are we satisfied with proving non-inferiority of our intervention, because it's advantageous in some other way? I'll illustrate what I mean. Superiority statistical analysis is what we're traditionally familiar with. You have an intervention effect; this middle line means no effect at all, or the effect that your control group would produce. You do a two-sided statistical test with two-sided confidence intervals, and if the confidence interval doesn't cross the control group effect, you conclude that your intervention is either significantly superior or significantly inferior to your control. Whereas if your confidence intervals are crossing your control group effect, you conclude that there's no significant difference. But where there's no significant difference, it's really important to remember that from there you can't necessarily conclude, and in fact can very rarely conclude, that your intervention and the control are the same. You have not proved that. So what can we do instead? We can do something called non-inferiority statistical testing, where instead of asking whether your intervention is significantly better or worse, you're asking whether your intervention is significantly not worse than the control, within a certain margin. Here we've plotted the margin of non-inferiority with a dotted line. This is a one-sided statistical test instead, so you have one-sided confidence intervals, and if the interval doesn't cross this margin of non-inferiority, you would conclude your intervention is significantly non-inferior. And what about this bottom point here?
You would conclude that it's not significantly non-inferior. And I'll give you a moment to think about these points in the middle; I have colour-coded them helpfully for you. These first two points, because they're not crossing that margin, you conclude are significantly non-inferior, so your intervention is at least as good as your control. And this one, you'll agree, is not significantly non-inferior. So, as I said, you want to be thinking carefully about outcomes and whether you're truly measuring for superiority or trying to establish whether your intervention is at least as good as the gold standard. And I'll be very quick, because I'm running out of time. Concurrence versus consequence: what I really mean is correlation versus cause and effect. Concurrence is if you take a cohort of people, look at their test performance, and look at what their real-world ability is like: people who perform well on the test are people who have high real-world ability, and people who perform poorly on the test at that time are people who have poor real-world ability. This is concurrent validity; you'll see that in the literature, and you can generalize it to concurrence. Consequence is cause and effect. So you ask the question: if you have somebody who performs poorly on the test, and they improve in the test environment, on your simulations, does that then lead them to improve in their real-world surgical skills, and might it turn them into an expert? This would be measuring whether there is a consequence. Finally, volume and frequency: you may find that there's a sweet spot of volume and a sweet spot of frequency, where if it's any lower than that, or maybe if it's any higher than that, then your intervention doesn't work. And just to bring it all together, the IDEAL collaboration is working on a proposal for a consensus framework for the evaluation of specifically technological interventions in surgical training.
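To make the non-inferiority logic concrete, here is a minimal sketch of a one-sided test on the difference in mean scores between two groups. It assumes a normal approximation, a 5-point non-inferiority margin, and made-up performance scores; none of these specifics come from the talk, and a real trial analysis would need proper sample-size and distributional considerations.

```python
import math
import statistics

def noninferior(intervention, control, margin, z=1.645):
    """One-sided non-inferiority test via a normal-approximation
    confidence bound on the difference in mean scores (higher = better).
    Concludes non-inferiority if the one-sided 95% lower confidence
    bound for (intervention - control) lies above -margin."""
    diff = statistics.mean(intervention) - statistics.mean(control)
    se = math.sqrt(statistics.variance(intervention) / len(intervention)
                   + statistics.variance(control) / len(control))
    lower = diff - z * se  # one-sided lower confidence bound
    return lower > -margin, lower

# Illustrative scores: the intervention is slightly worse on average,
# but its lower confidence bound stays inside a 5-point margin, so it
# is concluded to be non-inferior (not superior).
control = [70, 72, 75, 68, 71, 74, 69, 73, 70, 72]
intervention = [69, 71, 74, 70, 68, 73, 72, 70, 71, 69]
ok, bound = noninferior(intervention, control, margin=5)
```

Note that the same data would fail a two-sided superiority test: a small negative difference with a confidence interval crossing zero proves neither superiority nor equivalence, which is exactly why the margin-based question is asked instead.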
And this is something that's underway. If you made it this far, this brings us to the end of my talk. Thank you very much for listening; I hope there were some useful nuggets in there. Lovely, thank you so much for that, a very informative talk. I think it's very useful to see how not only are we developing simulation technology, but how we're developing the science to see whether it is achieving the results that we'd like it to achieve, which is improving the training of the surgeons of tomorrow. So, last but most certainly not least, the talk we've got this session is delivered by Miss Emma Howie. She's an ST6 general surgery trainee completing a PhD at the University of Edinburgh with the surgical sabermetrics group. She also sits on the Royal College of Surgeons of Edinburgh Training Committee, and Emma is passionate about human factors and patient safety. She's here to talk to us about surgical sabermetrics. Over to you, Emma. Hi, thank you for having me here to talk today. My name is Emma and I'm a clinical research fellow within the Usher Institute at the University of Edinburgh. Today I am going to introduce to you the topic of surgical sabermetrics and present some of my PhD work and some of the work with our sister lab and within our surgical sabermetrics lab here in Edinburgh. So what is surgical sabermetrics? It is the advanced analytics of digitally recorded surgical training and operative procedures to enhance insight, support professional development and optimize clinical and safety outcomes. That's a pretty jazzy statement, but what does that really mean, and what problem does it solve? In surgical training, and as consultants, we know that the feedback and the performance assessment that we get is pretty far behind other high-performing fields, and it's really holding us back from improving ourselves. We often just get retrospective, subjective feedback. It's non-dynamic, it's static, it's not changing throughout the case.
It often involves self-reflection or a human observer: either somebody else who is operating, whose attention is not fully on you, or a separate person who is costly both in terms of money and time. And there's also the risk of human bias, both from an observer and from ourselves. You may recognize the slide in the middle, which is a screenshot from ISCP, where you and your trainer are supposed to go through certain comments; again, this is retrospective and often not filled in as it should be. What we really need is objective, real-time feedback and data that's dynamic, removing that need for a human observer and therefore preventing human bias. A way of doing this is looking to other high-performing fields, such as looking to athletes. Now, this is both professional athletes and even ourselves, as we track our stats on Strava using our smartwatches. We can leverage technology and the use of sensors to look at different sources of data within the operating room. So if a footballer can use a sensor on his ankle to look at his kicking, why can we not use electromyography and posture sensors to look at our ergonomics and perhaps prevent the MSK issues that can blight our careers? If footballers can use EEGs, why can we not be wearing EEGs and equipment such as functional near-infrared spectroscopy to be looking at our brain function and our cognition? If we can monitor our heart rate and our heart rate variability, why is this not something that we can be doing in surgery too? Athletes wear these sensors and they produce a wide variety of metrics and data, where an athlete can see what went well and what perhaps didn't go so well, and work on both of these and improve. Well, actually, we can use similar sensors and similar technology. The operating room is full of data that we're really not leveraging. You've got data from the environment, such as the temperature, even who's in the room; you've got the individual surgeon or nurse or anesthetist themselves.
So what's their physiology? What are they doing that day? How tired are they? And then you've got the patients: what case is it, how difficult is it going to be, how long is it taking, what position on the list is it? We can integrate and use all this data with artificial intelligence, advanced analytics and audiovisual capture in order to develop sophisticated performance analysis, to really let us be better surgeons. Now, I did a scoping review recently that looked at what technology was already in use and its applications, specifically looking at non-technical skills, and over 120 papers found a vast variety of applications for this technology. I'm not going to bore you with them all, but I'll go through a variety of them in the rest of this presentation. We need to speak about the concept of cognitive load, which comes up quite a lot in sabermetrics, because we can apply it across the board. Cognitive load is our cognitive processing; it's our bandwidth. We need a certain amount in order to carry out a task, but if we're carrying out too many tasks, or there are too many influences taking up that cognitive bandwidth, we can quickly become cognitively overloaded, and that can lead to problems and to us not performing at our best. We can objectively measure cognitive load in real time wearing sensors under our scrubs, and if you look at the video on the right there, you can see that this tracing is changing in time with the scrub nurse's behaviors as they do their count. So we can really see in real time what's going on. This might be a printout of what your cognitive load looks like, from the electrodermal activity from that sensor, following a case. You can see that it changes throughout: as the case progresses, load increases, then it sort of stabilizes, and it reduces towards the end. And you can see that there are peaks and troughs in response to events.
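A trace like that can be screened automatically for the peaks worth reviewing. This is an illustrative sketch only; the smoothing window, the threshold, and the toy trace are assumptions for the example, not the lab's actual pipeline.

```python
def smooth(signal, window=3):
    """Centered moving average, to suppress sensor noise before
    looking for peaks (real electrodermal traces are noisy)."""
    half = window // 2
    return [sum(signal[max(0, i - half):i + half + 1])
            / len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def find_peaks(signal, min_height):
    """Indices of local maxima at or above min_height: candidate
    moments of elevated cognitive load to review against the video."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > signal[i - 1]
            and signal[i] >= signal[i + 1]
            and signal[i] >= min_height]

# Toy trace with two load peaks; indices 2 and 6 would be flagged
# and could then be matched to intraoperative events.
trace = [1, 2, 5, 2, 1, 3, 7, 3, 1]
peaks = find_peaks(trace, min_height=4)
```

The point of flagging indices rather than just plotting is that each flagged sample can be time-stamped and matched to what was happening in the room at that moment.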
Some examples of the benefits of cognitive load measurement: we can look at task demand. As I was saying, the difficulty of the task matters; if it's more difficult, it's likely to exert more of a cognitive load on us, using up more of that bandwidth. We can use it to look at simulation fidelity and difficulty. We've all been in simulations that are perhaps a little bit too easy and not fully replicating the surgical field; by doing things to increase our cognitive load and making it a bit more lifelike, we can get a lot more out of that simulation. We can also use it to test the impact of innovations. Simulation is a great platform to test things such as new techniques or robotics, and we can compare the cognitive load that these innovations exert on us in real life and in simulation. We can also use it to evaluate our skills acquisition. The first time I do an appendix, my load is going to be a lot higher, hopefully, than when I do my last appendix, and we can see and track how I'm acquiring skills by how my cognitive load adjusts. And perhaps if my load is higher the last time, why is that? Is there anything that I need to do differently? So my sabermetrics work for my PhD is focusing on the different case factors, patient factors and surgeon factors that can influence our load. These might be things such as the position in the list; the last case often exerts more cognitive load than the first case. How long is the case? Is it a difficult case? Is the patient unwell? What's their BMI? How old are they? How sick are they? How's the surgeon feeling that day? How many times have they done this operation? How have they slept? Are they on call? We're looking at all these factors, seeing how they impact our cognitive load, and seeing what the impact is on the surgical case. This is some work that my colleague Joe Norton is doing; he's looking at the impact of communication on training using simulation.
And there's some really exciting work coming out from this. Improving our performance is going to have a great impact on patient safety, but our colleagues in the States have also found that by monitoring our cognitive load in real time, we can pinpoint periods of problematic load prior to an error. This paper found that there was an increased period of problematic load prior to a near miss during cardiac perfusion. We can also integrate video, and we can use artificial intelligence to look at video to assess non-technical skills, without sensors or with integrated sensors. So, for example, work has shown that our proximity is an indication of our communication and teamwork skills: the closer we are to each other in the operating field, the better. And if we apply artificial intelligence to video, we can measure that proximity. Thank you for listening; I'll take any questions. Thank you. Thank you very much, Miss Howie. It's really interesting to hear about sabermetrics as a way of overcoming some of the shortfalls in the learning feedback and metrics that we're currently using in surgical training. So I just want to take the opportunity firstly to thank all of our speakers for such a brilliant session. We now have a 10-minute space for questions from the audience. Please do continue to post in the chat; we do have some ready to go from our audience members. So if I could just start by asking, to everyone really, and I'd invite anyone to answer who would like to: is simulation going to help us to address the workforce pressures of the future and ensure training needs are met in an increasingly pressured environment? Perhaps if I could go by the order of my screen, if I may, if I could perhaps ask Professor Walker. Oh, rookie error, sorry. Yeah, that's a difficult one, because these challenges will be on the rise; just as we continue to try and improve how we train, there'll be more and more workforce challenges around the corner.
All I can say is that I don't think the apprenticeship model of training, being too ad hoc and relying on volume of experience without the design of quality, will cut it. So by definition, adjuvant modalities have to be part of it; in other words, apprenticeship-based training alone won't be solving our problems. So we have to build this in. And I think, to some extent, one of the best things you can do for the patient, for trainees' wellbeing and for the workforce's wellbeing is to train well, and to do it by design. And we know that that lies as much in non-technical as in technical skills, hence this kind of integration of the two. Whether we've been successful in trying to do that, you might want to ask the trainees; I'm very conscious that there are people in the conference and in the room who've come through one of our simulation strategies, and you can ask them if it was any good or not for that. But I think simulation has to be part of this, and I think no other high-risk industry has waited so long for such evidence before really integrating it. Thank you so much. We've got lots of questions coming through now, so I'll try and move on to the next one if I may. So, the next question is: what are the challenges of getting some of this new simulation technology to trainees, especially in low- and middle-income countries? If I could perhaps come round to Miss Mohan, if that's OK, if you've got any insights on this. I think it's very much, as Professor Walker said earlier, about the right fidelity for what you're trying to do, and trying to map the complexity of what you're trying to do to what you need. So for example, if you're trying to simulate an intracorporeal anastomosis, you can use a simpler model, because you don't need to have a more complex model, and you can do it several times on that one model.
And that's how I think you can drive costs down: by using the models very efficiently and by selecting the right models for what you're trying to do. And I think what was said earlier was very interesting in terms of the fidelity, but I think it's just thinking of what you're actually trying to achieve, setting your learning objectives and then building your simulation around that, to fulfill that learning objective. Thank you so much. As I say, questions are coming in thick and fast now. So perhaps one maybe for the trainees on our panel: we've had a question which is, how can we ensure that the adoption of metrics and data analysis doesn't inadvertently contribute to burnout by fostering an overly competitive or pressurized training environment? Maybe if we start with Mr Co? Yeah, I think that's a very, very good question, and not a question I'm going to have a good answer to. But some thoughts: I talked a little bit about metrics and quantitative metrics, as did Miss Howie, who I believe was following me. The first thing is, we have to think very carefully about whether these metrics, as I said, have truly been shown to be beneficial in the sense of having any benefits to patient care. So we have to think very carefully about how we use these metrics. And the second thing is, let's say we're in a situation where we do have the perfect metric. I think it's very difficult to entirely get rid of a competitive environment. We've been through medical school and surgical training; it is competitive by nature, and patients like to think that they are being treated by the most competent surgeon.
And, you know, consultants and other surgeons are always having to talk about what you can find on websites: the kind of high, low, medium complication and infection rates. But what we do with those metrics to enable people to improve themselves and their own surgical performance in the best possible way, I think that's the really important thing: that we use these in an introspective way. A very poor answer, but those are my rambling thoughts. Not at all; I think it raises a really important question as to who these metrics are made available to, how we share and distribute that information and, as you say, how it is used in practice and what the aim of that metric is. Miss Howie, I can see you nodding along as well; have you got anything you might like to add? No, I think that point that surgery is competitive is right: we're already competing over numbers. I'm sure we've all been in a coffee room with colleagues who are bragging that they've got, you know, 2000 hernias when you've only got 20, for example. So there's always going to be that competition in surgery. And there's so much information coming into surgery that we are only at the bottom of the mountain, so it's going to be a while before we truly know what is important and how we should be using it. So I sort of agree with your rambling thoughts. Thank you very much. So again, a question for you, Miss Howie, has come through, which is: do you feel that sabermetrics are more equitable? Yeah, I do, in some ways. I think everything will have its issues, but I think it takes out the human aspect, be it your trainer not having enough time to give you proper feedback, or your own self-bias; there have been studies showing that different genders will rate differently how much of an operation they've done.
Or I will definitely rate how well I've done the operation differently compared with how objectively I might have done it. So I think it removes that. But at the same time, with all new technology and innovation, there's going to be a period of time where things aren't equal, where people don't have the same access. We've seen that already with robotics: there are a lot of people not getting access to robotic training, or losing access to cases that perhaps they might have done before but are now being done via robotics. So I think it'll be a while before any of these new innovations that we're talking about are equitable. Thank you so much. And leading on from that, I suppose: what low-cost or manual methods can be used to apply sabermetric principles to surgical performance improvement in environments where we perhaps don't have access to some of that technology? Yeah, so we try to use low-cost methods anyway; the plan is not to use thousands of pounds' worth of equipment. What we use is low cost anyway: a device might cost about £60, which isn't nothing, but it's not hundreds and hundreds. There are other ways, though they're not as good: subjective scales. There's a thing called the NASA-TLX, or the Surgical TLX; these task load indexes are a set of six questions, rated 1 to 20, on how stressed you are, how hard that case was, et cetera. You can use them, but they only give a snapshot: you can't really stop a case and ask, how stressed are you just now? Perhaps in simulation there could be points built in, but that disrupts your flow. So it's very difficult to use those sorts of free tools to really assess these kinds of things, especially in real time, and especially as an objective, real-time, dynamically changing method, unfortunately. Thank you so much. You've had some really brilliant answers, and as I said, we could continue with this; we've had so many questions being sent in even as we speak.
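For concreteness, the raw (unweighted) TLX scoring just described is simply the mean of the six subscale ratings. A minimal sketch, assuming the 1-20 rating range the speaker describes (the official NASA-TLX instrument uses a 0-100 scale in 5-point steps) and made-up example ratings:

```python
# The six NASA-TLX subscales (standard instrument names).
SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Raw TLX: unweighted mean of the six subscale ratings.
    Raises if any subscale is missing from the questionnaire."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Illustrative post-case ratings on a 1-20 scale; this single number
# is the "snapshot" the speaker mentions - one value per case, with
# no real-time resolution.
case_end = {"mental": 16, "physical": 8, "temporal": 12,
            "performance": 6, "effort": 14, "frustration": 10}
score = raw_tlx(case_end)
```

The limitation is visible in the code itself: one dictionary per case means one workload number per case, which is exactly why sensor-based measures are preferred for real-time, dynamic assessment.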
But I just want to take the final opportunity, as we're out of time, to thank all of you again for taking the time to come and speak today; a really interesting set of talks. So thank you so much for giving up your time on a Saturday. Thanks so much. Great meeting you. Thank you.