
TIPSQI National Session 2024


Summary

Join a concise and effective online teaching session led by Christian, a North West-based core trainee, with Gary, a North West registrar and chair of TIPSQI, moderating questions. In this session they provide crucial information to help medical professionals understand and navigate the Quality Improvement (QI) process. They discuss the importance of QI for junior doctors, the duties of a doctor from the GMC, the difference between audit, QI and research, and Deming's theory. Deeper insights into Deming's theoretical background in QI, how to measure things in QI, generating ideas using driver diagrams, and the use of PDSA cycles are also presented. How to pick and identify the best QI projects, and how to approach them with a systematic mindset, is also addressed. This session is suitable for medical professionals who wish to locate gaps in their skill set and improve patient care.

Generated by MedBot

Description

Join us for a lunchtime session on QI methodology and practical tips to make the most of your upcoming QI projects!

TIPSQI is a teaching collaborative based in the North West of England. We deliver QI sessions as part of F1 and F2 regional teaching programmes. This year we are aiming bigger and will be delivering a couple of our sessions on a national basis here on MedAll. The sessions are primarily designed for F1/F2 trainees but are open to anyone keen to develop a further interest in the theory and design of quality improvement projects. Attendance shows an awareness of QI (useful for ARCP) and can really help you create a more effective QI project.

All attendees will receive a certificate of attendance.

We also run a now-national QI conference each year. This is open to foundation trainees and gives you the opportunity to present your work at an accredited national conference.

For any more information, visit tipsqi.co.uk.

Learning objectives

  1. Understand the concept, importance, and applicability of Quality Improvement (QI) in healthcare settings.
  2. Differentiate between audit, QI and research and understand when to apply each approach in medical practice.
  3. Gain an understanding of Deming’s Model for Improvement and how it can be applied in healthcare to improve patient outcomes.
  4. Learn how to generate, structure, and implement QI ideas using tools such as run charts and PDSA cycles.
  5. Identify potential QI projects in their own practice setting, understanding how to pick a problem, engage stakeholders and measure change.
Generated by MedBot



Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Right, can someone just pop in the chat whether you can hear me at all? Yes? OK, cool. So my name's Chris. I'm one of the TIPSQI faculty, a core trainee up in the North West. Welcome to our national online session. It's gonna be quite brief. This is usually a session we'd like to deliver over about two hours or so, a little bit more interactive, but it's our first time using MedAll. Giving it over lunchtime, we seem to get better engagement, so we've got some more questions as to when you guys would prefer this session, but essentially we'll get through all the content that we would usually cover, though we'll have to skip some of the more interactive areas. So like I said, my name is Chris, I'm a core trainee and I'm leading the national sessions this year. In the chat is Gary. Gary's a registrar up in the North West as well and he's the chair of TIPSQI. Because of the way this is set up it's not the most interactive, so if you've got any questions about anything that pops up, Gary's gonna be moderating those questions in the chat and dealing with any issues you've got there, and he'll be monitoring that throughout. Hopefully we'll get this done within the hour, and hopefully you'll learn a little bit more about QI. If there are any issues with any of the slides or anything on screen, just let me know. So what are we gonna cover? What is QI? Why is QI important? How QI relates to audit and research — that's something I think gets a little bit mucky sometimes in foundation training. Then the theoretical backgrounds that we use in QI, particularly Deming's methodology and the Model for Improvement; how we measure things in QI and how we use run charts; then how we generate ideas and put a more structured basis on that with things like driver diagrams; and then how we use our PDSA cycles.
Some of those things might not make sense at the minute, but they will do in a short while. So what is TIPSQI? TIPSQI is essentially a collective of junior doctors who deliver QI teaching throughout the North West at the moment. It was started at Manchester Royal because there was a perceived lack of training for junior doctors in QI, despite junior doctors being the ones taking part in most of these projects — so TIPSQI arose basically to meet that gap. So why is QI important for junior doctors? In our duties of a doctor from the GMC, it states we must take part in systems of quality assurance and quality improvement. So that's plain as day right there. But it's also part of your ARCP: for F1, to be involved with an audit or a QI, and then for F2, to actually present and take part in a full QI project. So essentially the aim of this session is to give you a good understanding of the process of a QI, to be able to put together a really good QI project and get that sorted for your ARCP. And in addition to our own needs, there's significant evidence that by engaging with improvement of service delivery we can improve patient care. So the main aim of QI is to improve patient care, but we do have those self needs at the moment. Now, we talk about using a systematic approach when it comes to quality improvement — what does that mean? Under the umbrella of quality improvement there are a lot of different ways you can go about it. You've got things like audit cycles; what we're gonna focus on today, Deming's theory and the Model for Improvement; and other things like root cause analysis and Lean Six Sigma, borrowed from other industries. So we're gonna focus on the Model for Improvement. And then just a quick recap, like we mentioned: what is the difference between audit, QI and research? Usually we'd ask you to break this down, but again, we'll just deliver it to you.
A lot of you may already know this. So what is an audit? It is essentially a snapshot to determine if we're meeting a defined standard. The timescale is however long it takes to collect that data — how frequently or infrequently you see that condition determines how long that takes. What is the purpose of the data? To determine: are you meeting the standard? So you'll have that set standard, that set guideline — yes/no, pass/fail, are we meeting it? And what volume do we need? We need to represent our practice. If it's something you see a lot, you may need a high number of patients; if it's something not so common, a lower sample size might be sufficient. A full audit cycle is when you've made an improvement and come round again: have you made any improvement? How does that differ from quality improvement? For a quality improvement project, we focus on a problem. We start with a problem area, we come up with a change idea, and then we monitor progress to see: are we improving in that problem area, rather than are we meeting a set standard? These can be very quick — over days or weeks — and you only need enough data to demonstrate that change. It doesn't need to be a set number; we just need to say, have we made an improvement? And then we've got research, which is a completely different ball game — much bigger. It's basically about acquiring new knowledge; it can take months to years, with ethics and all that sort of thing. The purpose is to show statistical significance, so the volume of data needs to be sufficient to power studies. We're not going to go into research; we're gonna focus on quality improvement. So what is Deming's theory? The System of Profound Knowledge — we're looking at it in four areas. Within that, we need to build knowledge, appreciate the system we work in, appreciate the human side of change, and understand variation.
So breaking those down a little bit further. Appreciating the system: we need to know how systems work to know how we can change and improve them. We all work in systems — healthcare systems — so being able to understand how the little cogs come together to work day to day is really important if we want to make any changes. Understanding a little bit of the human psychology of change is gonna be important because we work with people — people from different professions, with different mindsets and different things they need to achieve from their day-to-day work. Being able to engage them, and understanding how to engage them and why they might be engaged or why they might be turned off the project, is really important to get these things off the ground. Understanding variation: we work with patients, who are humans, so there's gonna be variation. We need to be able to understand and expect that. So how do we go about picking a project? There are four things we can look at. One: do you care about it? You're gonna be the one doing this project; if it's something you don't care about, you're not gonna do it — as simple as that. Question two: can we measure it? To be able to say something's changed, you need to be able to measure it. If it's something we can't quantify, then it's probably not gonna be the best project, and you're gonna have difficulty saying whether you've made any improvements. Three: will you have enough data points? Is this something that, where you're working, you see frequently enough to be able to monitor regularly, monitor for change, and see if you've made an improvement — enough to make it a worthwhile project? If not, probably not the one. And question four: will other people care about it? That's where we come on to stakeholders and things like that.
If it's something that is clinically relevant or is a patient safety issue — something where you can get support from within the department or within the hospital — again, that's gonna be useful. If it's something that only you care about, admittedly that's a good reason to do a project, but if no one's gonna help you with it, you might struggle a little bit; it's probably not the best project for this level. So how can we pick them? Is there anything that frustrates you? Is there an incident you've been involved in that could have been done better? Do you have lessons-learned sessions where you've identified areas for improvement? In terms of specific things within your department: if you work in orthopaedics, things like the hip fracture database — how are you doing with pressure sores? How are you doing with MUST scores? All that sort of thing. Is there stuff there that you can get on with and start making improvements with? The main idea is to start focusing on problems, not solutions. You may have an idea of how something could work better and a solution you could implement, but just because you've got a good solution, that problem might not exist in the first place for you to make any better. So let's focus on the problem, make a solution, and then make that a QI — rather than the other way around. Think about your problem. Why is it a problem? Who is it a problem for? Is it just a problem for you and your fellow junior doctors? Is it a patient problem, or a problem for the wider team — the physios, the nurses? For the wider organisation, is it a cost problem? And for the wider community — that's coming more into things like sustainability, with too much plastic being used and all that sort of thing. Thinking about the implications in a wider context helps you with your stakeholders and who to involve, and it's something we can figure out. So have a little think: what issues do you know of?
What could your problems be? Some simple examples. Particularly if you're working in surgery: outlier patients not being seen every day — it happens where I work, and we just get on with it sometimes, but that is definitely something we need to address and figure out why it's happening. Discharge summaries not being completed within an hour; getting bleeps at night asking you to complete TTOs — why is that happening? Errors with gentamicin prescription. So, coming back to this model: to help with a problem, we need to understand the system. Hence this quote: if you make each part of the system as efficient as possible, together it may not work as effectively as possible. Why is that important? If you don't understand a full process — how it works, how it comes together — you're not gonna be able to manage it effectively or change it effectively. And if you can't manage it and change it, how can you improve it? And there's a quote at the bottom estimating that people who work in big organisations waste about 20% of their time redoing things that are wrong or doing other people's jobs, generally due to a lack of understanding of how the system works. So what tools can we use to understand the process? A process map is a nice one. We start with what comes before versus what comes after. One example you could start with, with the outlier patient: the patient is in A&E — what happens at each stage for that patient to get to the ward? Or, we've got our outlier patient — what happens from handover in the morning to that patient being seen on the ward round? What are the steps involved? Who's involved at each step? What can go wrong and what can change at each step? So break down your day, or break down that process, into small chunks: who's involved and when.
With each sequential step you can identify opportunities to improve and remove any unnecessary steps that may be causing issues. Then driver diagrams — these are something we'll come back to in a bit more detail in a little while. And you've got some other tools. The five whys: basically, you ask yourself why, then why again — if you keep asking yourself why and answering that question, eventually you'll get to the root of the issue. Spaghetti diagrams are particularly useful if, say, you're planning something like a blood culture pack: a spaghetti diagram could show where you have to travel to pick up every single part of the kit to do a set of blood cultures or an LP or an ascitic drain, versus having a pack made up in one spot, in terms of efficiency. Heat maps are essentially used across a department; they're a little bit more intense and probably not something you'd use as foundation doctors. And then hand-off diagrams — essentially a diagram of the people involved in a process and where they link together; by drawing the links between them you can see which areas interact with each other the most. But I'd say heat maps and hand-offs are a little bit more advanced, a bit more complicated. A process map is definitely the perfect place to start in terms of breaking things down simply and figuring out where we can intervene. Next, understanding variation. You'll be quite aware that when dealing with patients and dealing with systems, things show variation — observations being the perfect example: someone's heart rate or someone's blood pressure is gonna go up and down throughout the day, and everything else will too. So what we wanna look for is not the variation — not the little ups and downs — but the trends.
For example, looking at that heart rate — 84, 74, 82, 78, 74 — it's going up and down, but it's staying stable, versus the one at the bottom where the patient's getting more and more tachycardic. That's something we wanna pick up on, something we wanna address, and it's very similar to what we're doing in QI. We rely on trends rather than individual numbers, and our aim is to influence the trends. So we need to be able to analyse data in a way that finds these trends: when are they significant, and when have we made a change? An important part of that is how we represent and present our data. Looking at a bar chart here, it shows the number of complaints in a DGH by year. You've got year one and year two, both with 89 complaints — it looks the same. However, if you put that same data on a two-year run chart, you can see that in the first year the baseline was set around 14-ish, and we had a pretty horrendous month in February with a lot of complaints, but then it seems we actually dropped that baseline ever so slightly. So something happened in February that we possibly learned something from, and changed something, to then overall reduce our median and our baseline complaints. Looked at from the bar chart's point of view, nothing's changed and we've learned nothing; whereas on the run chart we can see where something happened, investigate that point, learn from it, and then show that we've actually improved. So this is a run chart, and it's something we would definitely promote the use of in QI. Like we said, the variation in complaint numbers becomes much clearer — we saw that one very bad month — but we can also see, on the same graph, the monthly reduction from year one to year two. So what is a run chart? We collect data over time, which is quite useful: you can make them on the go.
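As a rough sketch of the idea (in Python, with made-up weekly figures — none of these numbers come from the talk), a run chart's centre line is just the median of the baseline period, with each new point compared against it:

```python
from statistics import median

# Hypothetical weekly percentages for a QI outcome measure.
# The first five weeks are the baseline, before any change is made.
weekly_pct = [12, 15, 10, 14, 13,   # baseline, weeks 1-5
              18, 22, 25, 24, 27]   # weeks 6-10, after a change idea

# Run charts use the median (not the mean) as the centre line,
# because it is robust to the occasional astronomical data point.
baseline = median(weekly_pct[:5])
print(f"Baseline median: {baseline}%")  # → Baseline median: 13%

# New points can be added on the go and compared against the centre line.
for week, pct in enumerate(weekly_pct[5:], start=6):
    side = "above" if pct > baseline else "below" if pct < baseline else "on"
    print(f"Week {week}: {pct}% ({side} the baseline)")
```

In a real project the same comparison is simply done by eye on the plotted chart, updated each time a new data point comes in.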
Say you're doing your QI project and you're collecting data daily or weekly — you can add to your run chart every day or every week and just see those points ticking along. By collecting data over time and collecting a baseline, it allows us to appreciate the variation in the system we're working in, and run charts are the standard tool we use in QI. We can annotate them, which we'll show you a little later. And by collecting data and plotting points frequently, it informs us whether we're making changes: we may see that we've made an improvement one week, then the week after, and the week after that — we know something's working, and we can just see that, rather than waiting a full year to plot our data on a bar chart and see if something worked or not. So it allows a more dynamic interpretation of your project and how you're getting on. Again, just comparing it back to audit: QI is continuous, small-scale data collection looking to identify the impact of a change we've made, through the use of run charts and by addressing problems; versus an audit, which is more of a large-scale snapshot at one point in time — do we meet the standard, yes or no? Compliance monitoring, rather than effecting an ongoing change. So, coming back again to the model and onto the human side of change — moving into our stakeholders and how we're gonna get people on side. You've got the quote: never try and go on a solo mission on your own. Stakeholders in QI — the people who help — are gonna be the make or break for a lot of big projects. The difficulty is engaging people, and it's figuring out why these people would want to be engaged and how we can do that. We know we work in teams on the ward every day: you've got your therapy teams, your nursing teams, your pharmacists, your discharge teams, your porters — everything works together for an end outcome.
So we need to involve all the people who are relevant to our problem in the solution — we know that's how to work most efficiently — and that includes our non-medical stakeholders. This is where we come back to our process map. Once we've figured out the problem and the process behind it: where do those people slot into that process map? Where do we want to change, and who are the people relevant to that area of the map that we need to involve? Different people will need different levels of engagement, and we'll come on to that in a moment. So thinking about who — anyone can be a stakeholder. Medical stakeholders: your department leads, your clinical leads, your consultants, your supervisors, your seniors and your registrars, your core trainees; allied health professionals, ACPs, PAs — basically, rather than reel off everyone: your ward clerks if you're doing more admin-based stuff, GPs, bed managers. Literally anyone who could have any involvement in your process is a stakeholder. The next issue is how we engage them. You'll know that people don't like change, so it's: why should they be involved, and what changes are they gonna be bothered about, to keep them interested? Obviously, from our point of view, why are we interested? We know we need it for ARCP; if we get it presented, we can get a poster and some points for applications. So we're always gonna have that little bit of intrinsic motivation to do this, although a lot of us are also gonna be interested in improving patient safety and improving these issues. So we approach this usually with a bit more motivation — we've got more to gain from these projects and we want to gain more from them. Now, thinking about other people: why would your consultant be involved?
It's their patients — their patient safety, their patient outcomes — from a clinical background, but also a managerial background. If it's gonna, say, reduce lengths of stay, things like that, it's gonna be beneficial for them in terms of management. If there's a business case to be made by changing something — if they can save money, if they can reduce stays — that's something they're gonna be interested in. And then this is what we can look at: the diffusion of innovation. Over time, everyone will get involved eventually; what we see along the bottom is different people getting involved at different times. Your innovators — the first few — are gonna get involved right in the early days, then your early adopters, your majority here in the middle, followed by your laggards at the end. What we need to do with projects is find those innovators and those early adopters and get them on board — a lot of them may be colleagues, or maybe a particularly keen consultant — get them involved early and get things moving. Then figure out the people who are more resistant to change: find your laggards and explore why they're resistant. What do they not like about it? Are there any particular concerns they have? Is there anything you can do to address those concerns to get them on board and improve that engagement? Just being aware, through general psychology, that these people exist is important, so you can look out for them. And then again: who do we need to involve? We can plot out who best to go after, essentially, by making a graph of influence versus interest: low interest, low influence through to high influence, high interest. Putting this into an example: say this is a quality improvement project to increase documentation of NEWS scores for patients presenting through ambulance triage by 75% by the 31st of March 2019. So who are our stakeholders?
Looking here in the box, low interest, low influence is gonna be your HCAs and your triage nurses. It's their job anyway; they'll be doing it, and if anything you're gonna be giving them more paperwork to do. So they're not gonna be massively interested or influential, because the outcome of having that documented doesn't affect their day-to-day very much. High interest but low influence would be your fellow junior doctors. These are gonna be your friends; they're gonna be interested in helping you, and they're gonna be seeing these patients as well, so having that sort of thing documented would be useful for them — the interest is gonna be there. But as a junior doctor, unfortunately, your influence is limited: although you may be keen and may get involved, how much of an impact you can have is a little more limited. Moving on to managers: high influence but low interest. Managers don't really care about a whole lot other than outcomes. If you could show managers a case that, if we did this, our outcomes could improve, they'd likely be on your side and they would push things through and get things done. However, you need to have that essential evidence to be able to say 'look, this could improve things', because unless there's something that's gonna benefit them and their service, they're generally not gonna be too bothered. And then moving on to your consultants and your matrons: these would be your high interest, high influence people. They have both clinical and managerial interests; they know how the system works, they know what will work and what won't. So if you can get your consultants and A&E coordinators on side, they're gonna be vital in helping you generate and implement changes in that department that would likely lead to some improvements.
So it's figuring out where those people lie on your process map and on your interest-and-influence chart, figuring out who's the best person to go to, identifying their concerns and their interests, and getting them involved, because they'll essentially be the best people to push that idea through. And then moving on to the last part of the model, which is building knowledge. Improvement is a science: you can experiment with it and improve it. Looking at the right-hand side: what are we trying to accomplish? We address that by making a SMART aim, and we're gonna come on to SMART aims in a moment. How will we know a change is an improvement? It needs to be measurable, and we need to have collected baseline data, which comes back to our run chart — again, we'll come back to that. And what changes can we make that will result in improvements? We need to generate change ideas and then trial them with PDSA cycles. I'm gonna break each part of that down now. SMART aims: we've probably all heard of SMART aims through medical school, so we all know it stands for specific, measurable, achievable, realistic and timely. Specific: keep things manageable, focus on a single aspect of care, and define very clearly what you want to improve — if things get too broad, things get more difficult, so pick a very specific area and focus on that. Is it measurable? It needs to be something numerical that you can put a number on — a percentage, an average, a frequency with which it happens — that you can monitor to see if you're making changes. Achievable and realistic: is it actually possible to do? Usually there's gonna be a time limit — obviously you're on four- or six-monthly rotations; it may be possible over a longer time, but is it realistic that you can get this project turned around in the time that you're there? And then timely.
We wanna put a time limit on things — a specific date to work towards and focus on. Looking at an example: to improve junior doctor confidence in prescribing electrolyte replacement by 50% by March 2021. It's specific — looking at junior doctors, looking at prescribing electrolyte replacement; it's measurable — you could survey confidence; 50% is relatively achievable; and it's timely — it's got March 2021, so you've got a time frame on it. That's a decent example of a SMART aim. Moving on to how we know something's improving, and measuring data. Why do we collect data? We need a baseline: to say we've improved something, we need to know where we're at. We need a baseline to say where we've gone from — whether we've gone up or down — to allow us to study whether we've made any changes, and that will tell us if we're actually meeting our aim. Without data, you're just another person with an opinion. So we need numbers, we need data, and we've got three measures that we focus on. The main one to think about is the outcome measure. Essentially, this is the M part of your SMART aim, and it's what you're actually trying to improve. The example we're gonna use for these measures slides is: for 95% of patients diagnosed with an AKI on admission to have a urine dipstick performed within 24 hours of admission by the 3rd of January 2021. Your outcome measure is gonna be the percentage of patients diagnosed with an AKI who have a urine dipstick performed within 24 hours of admission. That would be your main run chart within this QI project — something you could monitor pretty much daily with AKIs, or weekly if you wanted to, and keep good track of. The other measures we have are process measures. If we make a change — for example, in this case we wanna get urine dips done more often — do we make an AKI pack that includes the urine dip?
We wanna say: has our change made that difference, or is it a fluke? A process measure essentially lets us track the frequency with which your change has been used. You can do this with another run chart, or you can do it with surveys, anecdotally — you could survey your A&E doctors and nurses, asking 'have you been using the pack?', that sort of thing. Say you made the pack: your process measure could be the percentage of patients diagnosed with AKI where the AKI pack is used. Then, if your outcome measure — urine dips in AKI — is improving, and your process measure — the percentage of patients where the AKI pack is used — is going up, it could well be that the use of your AKI pack has led to the increase in urine dips. So you can say that your outcome has improved as a result of your intervention. And then balancing measures are essentially about being mindful that things we change could have unwanted effects, positive or negative, on other things. For example, if we're introducing extra steps in A&E through the use of this AKI pack, could that potentially delay patients leaving A&E to go to the ward? So a balancing measure you could look at is the average time patients spend in A&E with a diagnosis of AKI before moving to the ward after a bed's been allocated. Get your baseline for that prior to your intervention, then introduce your intervention; if that time increases, is your intervention causing delays in A&E? It's something to be mindful of and to look at, because although your outcome may be improving, you want to balance that against potential negative impacts. Again, this could be done as a run chart, or it could be addressed in other ways. So, as a quick summary:
So, as a quick summary: your outcome measure is what you're actually trying to improve, and it's the M part of your SMART aim; a process measure is the frequency of use of your intervention; and your balancing measure is something that could be inadvertently affected by your change. For your QI projects and your ARCPs, a run chart of your outcome measure is sufficient to meet the requirements, and a short point showing that you've at least acknowledged and considered process measures and balancing measures is more than enough. So how do we go about using a run chart? This, essentially, is a run chart. We've got percentage use of something on the side, and weeks along the bottom as time. We've been collecting data weekly in this QI project. For the first five weeks nothing's been done: that is our baseline. We use the median in QI for the average, rather than the mean, on run charts. You can see that at week five it's been annotated: we've introduced a new proforma for something and carried on collecting data. Then we get this astronomical point, and then we presented something at a team meeting, which led to a significant improvement and an increased baseline. That's an overview of a run chart, but we'll look at a better example on the next few slides. Run charts aren't just graphical illustrations of data; there are some statistical things we can do with them. One simple thing we can look at is called a shift. A shift on a run chart is statistically significant, equivalent to a p-value of less than 0.05. Essentially, a shift is when you've got six consecutive points above or below your current median.
Looking at this, you can see that over the first few points we've got a set baseline, and then we get this upward shift: six points in a row above the median. We can call that a positive shift, and that's a significant change. At the point of seeing that shift, we can reset the baseline, recalculating the median using the new data, and that becomes our new baseline to measure further data from. There are some short videos on the TIPSQI website explaining this in a bit more detail if you're interested in the maths behind it. This is a worked example from a QI project: for 80% of non-cardiac-arrest intubation attempts in ED to include an intubation safety checklist by October 2019. Again, you can see our baseline to start with, collected between September 2018 and April 2019. From that data we got our median and created our baseline of about 10%, and you can see it's annotated 1, 2, 3, 4 and 5, with a little key at the top showing what changes were made. At point 1 we presented the data at an audit meeting, so we made our team aware that we were really bad at this, and just doing that led to some form of improvement. At point 2 we made a checklist and put it on the resus trolleys. At point 3 we made and displayed posters. At point 4 we included it in induction for new doctors. And at point 5 we even started using the checklist in simulation training in resus. We can see that we have six points above our baseline, so it led to a positive change, and that's where we then reset our median and our baseline at that new point, to determine where we move forward from there. It's quite useful over the longer term to see the impact, and the longevity of the impact, of any changes we make. Admittedly, that's all quite compacted together.
I think there's another one a little later which shows this a bit better. But essentially, using that chart, you can see when you made a change, what change was made, how big an impact it had, whether it led to a significant change, and if so, how long that lasted. It's a really nice way to visualise things. So how do we come up with change ideas? What changes can we make? This is where we come back to our driver diagrams. What is a driver diagram? Essentially, it's a nice way to identify the factors that influence the aim. On the far left of a driver diagram we have our SMART aim, the outcome we want to get to. Immediately to the right, in the middle of the diagram, you have primary drivers: A, B and C. Primary drivers are things that your SMART aim cannot exist without; if you take one of those away, the outcome doesn't exist. Secondary drivers are then things that influence the primary drivers. These can be present or absent; they affect the primary driver, but they don't determine whether it exists or not. I'll explain that in a moment with an example. The best example we explain it with is the roast dinner. If our outcome is the roast dinner, essentially we need to think about what needs to happen to make that happen. Roast potatoes are an integral part of a roast dinner, so that's a primary driver. The secondary drivers, how you make good roast potatoes, are goose fat and good potatoes. That would be ideal. However, if you get rid of the goose fat, you've removed one of your secondary drivers, but you've still got potatoes, so you can still technically make roast potatoes. You can still get to your outcome, still get to your roast dinner.
However, if you don't have potatoes at all, you can't make roast potatoes, and then you can't get to your roast dinner. That tries to explain how important the primary drivers are versus the secondary drivers: the outcome is dependent on the primary driver, but not dependent on the secondary drivers. Here's an example of how we can use this to create ideas for a QI project. Quite a common issue is inappropriate antibiotic prescribing in upper respiratory tract infections in children. The big things: first, having the confidence not to prescribe antibiotics. If you don't have the confidence, you are going to prescribe antibiotics, so that is a primary driver. How can we improve that? We can educate doctors: having guidelines available, teaching sessions and that sort of thing. Second, having the knowledge to know who needs antibiotics. Again, if you don't have that knowledge, you may inappropriately prescribe antibiotics. To improve that, you need to increase knowledge of the guidelines and increase their accessibility, and you could bring education into that one as well: having the guidelines available on the desktop, printed-out leaflets, more teaching sessions. And third, shared decision-making with parents. If you can't have that discussion with parents, say about delayed prescribing, you're more likely to inappropriately prescribe. So the change ideas: do we come at that by training the doctors in communication skills, or do we create information leaflets for parents to aid that discussion? You can see how we can go from quite a common issue to coming at it from various different points. Do we change the environment, say posters and guidelines? Do we change the education of our staff?
Do we increase our patient involvement: leaflets and that sort of thing? Come at things from multiple different angles, trial them, run them as PDSA cycles, track them on run charts, and see what works and what doesn't. You can get really rich projects by breaking it down and really thinking about it with a driver diagram, and then figuring out which ideas you want to pursue. Again, we can plot them on a graph to figure out which ones we want to try and which ones we think are too difficult: hard to do versus easy to do, high impact versus low impact. You're going to want your high-impact, easy-to-do changes first, get those out of the way, and then start addressing some of the ones that have less impact or are a little more difficult. And then finally we come on to PDSA cycles. I'm sure some of you have already heard of PDSA cycles, but essentially this is our scientific, experimental way of doing QI. Usually we'd have some experiments and activities to do here, but obviously we can't do that at the moment. Step one of a PDSA is planning: what are we going to do, how are we going to go about it, what do we think will happen, what data do we need to collect to see if that has happened, and how will we know if it's worked? Then we move on to "do": we make the change and record the results, along with any problems or issues we run into. Then we "study" it: what happened, what did our data show, is it what we thought would happen, what have we learned? Negatives are very useful too, to see what didn't work. And then we "act". We've planned it, done it and studied it, and figured out what that change did. Now we decide: do we adopt it and take it forward? Do we adapt it, change it a little and cycle again? Or do we abandon it: that didn't work, ignore that.
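Going back to the impact/effort grid for choosing which change ideas to try first: as a hypothetical sketch, with made-up ratings for the ideas from the antibiotic-prescribing example, the ranking could be expressed as a simple sort.

```python
# Hypothetical change ideas with made-up impact/effort ratings.
ideas = [
    ("Parent information leaflets", "high", "hard"),
    ("Desktop guideline shortcut",  "high", "easy"),
    ("Communication skills course", "low",  "hard"),
    ("Ward posters",                "low",  "easy"),
]

# High-impact, easy-to-do ideas first; low-impact, hard-to-do ones last.
order = {("high", "easy"): 0, ("high", "hard"): 1,
         ("low", "easy"): 2, ("low", "hard"): 3}
ranked = sorted(ideas, key=lambda idea: order[(idea[1], idea[2])])

for name, impact, effort in ranked:
    print(f"{name}: {impact} impact, {effort} to do")
```

The top of the list is where you'd start your first PDSA cycle; the bottom entries are the ones to park or drop.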
Essentially, we can roll this through into repeated cycles based upon what happened, and they've got a nice example here. Our SMART aim in this scenario was to increase prescribing of VTE prophylaxis for acute stroke patients at admission by 50% by a given date, with the baseline currently at 30%. The plan initially was to put the VTE risk assessment into the clerking proforma. Do: we changed the admission clerking. Study: what happened? 50% were prescribed it, so some improvement, but lots of variation, with lots of out-of-hours clerking. So what next? We adopted the proforma, because it works, but we needed to adapt our approach and expand the exposure, getting it out to more doctors. So the next plan was to put a poster up in the MAU. Do: we put the poster up in the doctors' office. Study: we saw a little more improvement, but still some variation. What was the issue? At changeover it wasn't being recirculated, so the new doctors coming in weren't noticing it; the longevity of the intervention was lacking. So that's when we adapt: we need to address that changeover issue and circulate it differently. How would we go about that? That's when you think about an e-poster. So what did we do then? We made an e-poster and emailed it to all doctors on the acute take at each induction, each changeover, so everyone who changes over gets the poster every time they come to acute medicine. We study it again: we then saw 80% prescribing, a significant change, so most clerking doctors had adopted the practice. And from there we adopt: we've seen that including this e-poster at induction caused the change, so at every induction we'll keep doing that.
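The three cycles just described could be recorded in a simple structure; this is a hypothetical sketch using the figures from the VTE example, where each cycle ends with an adopt/adapt/abandon decision.

```python
# The three PDSA cycles from the VTE prophylaxis example, recorded as dicts.
cycles = [
    {"plan": "Add VTE risk assessment to the clerking proforma",
     "study": "50% prescribed; proforma works but exposure needs expanding",
     "act": "adapt"},
    {"plan": "Put a poster up in the doctors' office",
     "study": "A little more improvement; poster lost at doctor changeover",
     "act": "adapt"},
    {"plan": "Email an e-poster to all doctors at each induction",
     "study": "80% prescribed; a significant, sustained change",
     "act": "adopt"},
]

# Changes we keep running are the ones whose cycle ended in "adopt".
adopted = [c["plan"] for c in cycles if c["act"] == "adopt"]
print(adopted)
```

Keeping a log like this makes it easy to annotate the matching points on your run chart and to show, for ARCP purposes, what each cycle taught you.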
So you can see we've gone from changing the proforma, to paper posters, to an e-poster through this cycle, seeing what worked, seeing what didn't, and how to spread it out a little better. That has improved our project over those three cycles, and it's worked quite well. This is a run chart example based on that project. Starting with that baseline, initially at 30%: point 1 is when we changed the proforma, so we saw a little change; at point 2 the poster was introduced, and again we see another positive shift and a reset baseline; and at point 3 the new e-poster was emailed out, and again another shift, another positive change, to reach that total 50% improvement. This is a really nice way to show how everything comes together: we've got our run chart, we're showing how our PDSA cycles have come in to influence these changes, and you can see we've made quite an improvement with quite simple changes. That's something that's very doable as a foundation doctor. So I think that might be us done. It is, actually. The PowerPoint's been cut down a little because of the setup, but essentially that is everything. I do appreciate that was a very quick run through; we got through that in 45 minutes when it would usually be a two-hour session. If anyone does have any questions, feel free to pop them in the chat; I'll try and answer them myself, but Gary is there as well. We've also got our website, tipsqi.co.uk, where you can find a project guide to help you through your own QI projects, various ideas and various templates. I think there are also some links to the AQuA website, which has a run chart tool that essentially builds your run charts for you, so you don't have to bother plotting them in Excel or whatever.
It will literally build them for you and monitor things for you. We do run coaching sessions; these are initially put out to the Northwest trainees, but if there's space, you're more than welcome to try and book onto one of those. And of note, we have our QI conference in July, which is now open nationally; it was initially a regional thing, but now it's open nationally. Essentially, like any other conference, you submit your QI project as an abstract, and if you're chosen, we have posters and oral presentations. That would count as a national conference nowadays, so it's really good for application points and things like that. Definitely get on board with that. So let's have a look at the chat. Gary, I think Michael's had a question; I think Gary's addressing that already, so I'll let Gary take that one. I'm going to try and invite Gary onto the stage if he wants to have a word. "Hello, can you hear me?" "Perfect, wonderful. Thank you very much, Chris. That was fantastic." "No worries. I was furiously typing away at this question while trying to get you on and figure out how it worked." "So the question was: how do we approach a situation where we've tried to get some improvement and haven't, regardless of the intervention? I guess it depends on the specific thing you're trying to do, but generally, where projects can fall down is: have we included the right people? All successful QI projects really involve creating a team behind that particular problem, rather than doing it on your own, collecting data in a dark room somewhere. The most successful QI projects are when multiple people are involved, and sometimes that can be people you haven't thought about."
For example, I helped on a project improving the acute NIV provision within a tertiary respiratory centre, and if you're going to do something like that, physiotherapists are a really key group of professionals to involve. They had a really keen interest and real enthusiasm; when we think about our stakeholder analysis, they're some of the key players. So: have you included the right people? But sometimes the interventions you make aren't going to work, and that's OK. That's life; that's science. You test your hypothesis: did the thing I tried to do to improve my problem work? And if it didn't, that's OK. But what you want to take from it is: why didn't the thing I tried, that teaching session, that checklist, that proforma, why didn't it work? Go back and talk to people: "I tried to do this and it didn't really make any change to the thing I was seeking to improve. Why was that?" That can help guide your future changes. So, Anna, in terms of the difference between F1 and F2: I don't know what trust or deanery you're from particularly, but in the Northwest, the F1 criterion is that you have some involvement in quality improvement. That can be collecting data for an audit, for example, or just demonstrating that you've had some involvement in a quality improvement project at some level. Whereas in FY2, the requirement is that you independently lead, manage, analyse and deliver a QI project. And I would argue that in order to actually do a QI project, the whole point of it, as Chris has mentioned, is that you actually introduce a change. So I would say, yes, you do need to do a PDSA cycle, because that's the whole point of doing a project: you try to do something to make something better.
But that's not to say that each of you individually needs to do an individual project. Say you're an F2 and there are four other F2s on your ward; you can very much do it together. As I said beforehand, it's about creating a team. So, the next question: for unsuccessful QI projects, is it still worthwhile presenting and discussing them at your conference, or is it time to admit defeat? I guess that depends on your outlook on life. If you've shown good use of methodology and you can demonstrate learning from your project, I would accept that project into my conference. It's about what you've learned from it more than anything else, as well as what the data showed. In real life, we don't know what's going to happen until we try it, so it's not something you'd be penalised on for your ARCP. And if you've got a good project, Michael, I'd be happy to look at it, and you could submit it for our conference. "All right, perfect. Any other questions at all? Do we have a feedback QR code thingy, Chris?" "To show my technological ignorance: there was an option for a QR code, but there were a lot of things to sort out, so basically it'll get sent out after this to the email you registered with to attend. As soon as this has ended, it'll fire out an email with a feedback form, and upon completing the feedback form you get your attendance certificate, which is all nice and automated. So everything should be pretty simple." "Perfect. Any other questions anyone's got at all? If not, feel free to head off. Cool, doesn't seem to be. Thank you all for coming. OK, Chris, shall I call you when we've closed down?" "Yeah, we'll have a chat."
"Call you in two ticks."