
Statistics in Orthopaedics | Professor Daniel Perry


Summary

Join this session aimed at medical professionals, featuring two speakers: Catherine, a Consultant Relationship Manager for Medical Protection (MPS), and Professor Daniel Perry, who presents an introduction to statistics in orthopaedics. Catherine gives an overview of MPS's medical defence services and the added protection membership offers. Professor Perry's lecture aims to spark an interest in orthopaedic research and statistics, covering study design, case reports and case series. There is time to interact and ask questions, making this an interactive educational session. Don't miss out!

Generated by MedBot

Description

Our webinar, held in collaboration with BOTA, will feature an insightful presentation from MPS followed by a highly practical session, 'Statistics in Orthopaedics', led by Prof. Perry.

Are you ready to take your understanding of clinical trials to the next level? Join us for an in-depth session with Prof Perry on the crucial topic of "Statistics in Orthopaedics"! 📊✨

This talk will give you a deeper insight into how statistics are used to design, analyze, and interpret clinical trials—an essential skill for making informed decisions in medicine and healthcare. Prof Perry will break down complex concepts into practical knowledge that you can apply in your own work or studies.

Why attend?

  • Master essential statistical tools used in clinical trials.
  • Learn how to critically assess data and apply findings to real-world scenarios.
  • Get expert insights from one of the leading minds in the field.
  • Engage in interactive discussions and have your burning questions answered.

This talk is perfect for anyone looking to improve their skills in interpreting trial data, whether for academic purposes or everyday clinical practice. Plus, it’s a great opportunity to network with others who share your interest in healthcare research.

Speaker

Prof Daniel Perry, MBChB (Hons), MA(Oxon), PhD, FHEA, FRCS (Orth), FRCSEd (Ad Hominem), Professor, Orthopaedics & Trauma Surgery

Prof Perry is a Consultant Children's Orthopaedic Surgeon at Alder Hey Children's Hospital and a Professor at the University of Liverpool, funded by the UK NIHR. He leads global research in children's trauma and orthopaedics, with over £35 million in funding. He also supervises PhD students and serves in key leadership roles, including Orthopaedic Trials Lead for the Royal College of Surgeons.

Learning objectives

  1. By the end of the session, attendees will be familiar with the structure and operations of the Medical Protection Society (MPS), and understand how the organisation can provide assistance and protection in relation to the legal and ethical problems that arise in professional medical practice.
  2. Attendees will be able to differentiate between observational studies and interventional studies and understand the different methods of data collection and analysis used in each, in order to effectively critique existing literature and plan their own research projects.
  3. Participants will be able to identify the differences between case series, cross-sectional studies, cohort studies and case-control studies, understanding their individual strengths, limitations and appropriate uses in medical research.
  4. Participants will understand the concepts of prevalence and incidence, and how they are derived from cross-sectional studies and cohort studies respectively.
  5. By the end of this session, attendees will be familiar with practical aspects of conducting randomized controlled trials in the field of orthopedics, including identifying suitable populations, determining and measuring outcomes, and ensuring robust data collection and analysis procedures.
Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Are we live? Yes. Um Just as we went live, you asked? That was brilliant. Um Hello everyone. My name is Ria. I work for medical support. I'm also a final year medical student. I'm delighted to introduce to you our two speakers. So we have a speaker from MPS and then we also have a speaker by the name of Professor Daniel Perry who will speak to you about statistics in orthopedics, just some housekeeping rules. Um Please feel free to ask any questions in the chat. I will mark them as Q and A questions and I'll ask them on behalf of you to Daniel at the end. Um Please make sure to follow medical education for more for future talks um on orthopedics and other interesting medical subjects. And also um please follow voter so I will pop their link in the chart as well. And of course, fill in the feedback at the end to obtain your certificate. And without further ado I'll introduce our first speaker, Catherine. Hi, everybody. So, thanks very much Ria. My name is Catherine. I'm actually a consultant, relationship manager. Um and I work at medical protection or mps as you may know us. Um It's obviously a pleasure here to be here tonight supporting Botha both tonight's both events and thank you so much to me. Um So I'm just gonna speak really quickly for probably less than five minutes. Um, but if you do want to speak to me individually at any point in the future, then um, please pop your details down on the feedback form which you'll get at the end of today's um session. Um And I'll also put my email address and phone number in the chat. Um If you want to take a note of that, um and contact me directly, you can do. Um So for those of you who aren't familiar with um medical protection or MPS, we're actually the world's leading um medical protection organization. We're a mutual organization, so we're not for profit. Um And we're global and we're operating in over 42 territories. We're actually founded in 1884 and we have over 300,000 members. So we've been around for a long, long time. Um And we're not a small company. Um Medical Defense as you probably know, is assisting members with a wide range of legal and ethical problems that can arise from their professional prac, from your professional practice. So, working for the NHS, you may know you have trust indemnity, but membership to mps gives you that added protection where the NHS can't help you. And obviously, as you probably know, it's highly recommended by the NHS to have additional indemnity in place. The support we provide can be support with things such as GMC investigations or help with disciplinary procedures um or any good Samaritan Actx as well, cos that's not covered obviously by your NHS um indemnity and being a member of us, you have access to a 24 hour, seven day a week um helpline and our medical consult, Medico legal consultants work in house. Um and you can call them for any free or confidential advice and that's not going to um affect your subscription membership at all. We also have a counseling, uh confidential counseling service and prism, our database of e-learning where you can access workshops, webinars and courses and that's there really to help you um prevent the risk before it happens. And this is all part of um the added value and the added extras that you get with medical protection membership. If you're already a member, then. Great. It's lovely to see you. Um Just please make sure that all your details are up to date, especially as, as you progress throughout your career. 
It's your responsibility to make sure that we know um who you are and what you're doing. Um And if you're not a member, the NHS consultant membership price is 549 for the year. It's really easy to switch um from a dem um provider. Um And you may well be paying a little bit more or maybe a lot more um with another provider. Um I can also be contacted for any private practice quotes if you're doing any. Um, and if you're any other grade, then please also email me and I can let you know our subscription rates and anything else that you might need to know about medical protection. So as I say, I'll drop um my contact details in the chat. So if there's anything else that you need from me, then please contact me. Um, and thank you very much again for your time and I'll pass you over to Professor Daniel Perry. Thank you. Oh, Daniel muted. I hate that. I'm so annoying. How far into this are we? Um Right. So, um, so lovely to see you all. Um I'm sorry about that. Um I'm Dan Perry, I'm a consultant, children's orthopedic surgeon, um, up in Liverpool. Um I'm also a professor of uh orthopedic surgery and I do lots of er, randomized clinical trials. So today I wanna talk to you about statistics. I wanna wanna talk to you about study design. I wanna make you as excited about studies and, and stats as, as I am and I get a bit a bit passionate about this. So first things first, let me share my screen, which I'm hoping is gonna work. Uh So that's working. Correct? Yeah, we can see that perfectly. Awesome. Ok. Um, so we're gonna talk about research in orthopedics and so I'm gonna show you the light and show you how amazing orthopedic research is. Cos, it really is cool and it's really amazing but I'm sure many of you aren't yet gonna be convinced. Um, so this is a great book. Um, um, as I'm sure many of, you know, for kind of the basics of, er, or Peak Sciences, er, it's written by Manoj, um, who's, um, who's a colleague of mine but I actually wrote the stat section in this with, er, along with a few others. Um And so I've been given a really long list of things to talk about um and also told to make it fun. So, er, so there's only a limited amount, there's only so much fun, you can make statistics, but I'm gonna make it really, really ace. Um So firstly, let's talk about study design. Um So whenever I design a study, um I always think about uh the first thing I think about whenever I review a paper for the bone and joint journal where it is, the first thing I ask myself is, is what's the study design? So how has this, how has this study been constructed? And, and wh why have they done what they've done so broadly, there's two different types of study design, there's observational studies and there's interventional studies and in the observational studies, there's a few different types of observational study. You can do a case series which is the kind of lowest quality evidence or case report case series. You can do a cross sectional study, a cohort study, a case control study and everyone who always gets mixed up between a case control study and a cohort study. So we'll talk about that in a minute and the intervention studies, we're usually talking about trials and typically a randomized controlled trial, which is what I do lots of. So when we do a case series, you all know what a case series is. A case series is. When your boss says, please, we go and look at the last 10 cases of total hip replacements. I did. They were all amazing and please really write the results and, and tell everyone I'm amazing. 
So it's a collection of cases. There's no real order to it. Um The population is unclear. Um There's no control group. Um and it's cheap and dirty and because it's cheap and dirty, you can imagine that very seldomly do these get published. So, although your boss thinks it's the best possible idea to publish his last 10 toenail operations, it no one actually cares. Um, cross sectional studies. What? So what's a cross sectional study? Well, this is a bit different. So cross sectional study is a really powerful way of getting a prevalence. So we're actually gonna get real words. So a prevalence is the number of cases of a disease in a given population in a given time period. So a cross sectional study I'm gonna, I've got a clear population. So I'm gonna find out what the prevalence of back pain in Liverpool is today. So my population is live in Liverpool. Um It's gonna give me the prevalence. I haven't got a control group cos I've only got the, the population of Liverpool. Um, and, and I'm gonna go knocking on the doors and I'm gonna ask them today. Have you got back pain? Um, so a nice fixed time point. Um, it's gonna tell me the, the disease burden or the prevalence in the society and it's a really, really good way of, of giving me prevalence. Um, and that can be really powerful if I'm trying to define for whatever purpose, what, what a disease burden is a cohort study. Um, uh, it is, it is where we start with a risk factor. So we start at a point in time and we look into the future at what the outcome is gonna be. So we might start today and I might measure the amount of steroids someone uses. Um, and over time, I'll look into the future and I'll keep following them up. Um, er, until we've got, um, er, until we've got the outcome which may be a VN or whatever it may be. So you start with the population, you start looking at a risk factor in that population and you look forward until you get an outcome. Ok. Um, um, and so that's really, really important because it's very different to a case control study. Cos our arrow goes the other way. So in a case control study, I start with the outcome. So in a case control study, I'm gonna say I got 50 cases of A VN and I've got 50 cases without a VN. So I'm just gonna find 50 cases of, of a VN of the hip and 50 people without avian in the hip. And I'm gonna look back to see what's different between the two groups. So I'm gonna look back in time to see if there's any different in their medical records in terms of their steroid use. So, so this is really, really key. So, so a cohort study starts with a risk factor starts with the population and you look forward in time even if you're doing it retrospectively, even if you're doing it retrospectively, you're saying, look, I'm gonna look at all of these hips, um, that, that were done 30 years ago and I'm gonna look through the notes and see what I can make about smoking and steroid use and all of this. And I'm gonna come to today. Um, look at the A VN rate we're still going forward with that arrow. Um Whereas, whereas probably a better way to do that would be to say, OK, let's find 50 cases of A VN, 50 cases without a VN and let's, let's look through the notes of those 100 cases um, to see what's different about the, the different notes. Um And so, so tho those two are, are very distinct. So in terms of what you use, well, cohort studies um classically are, are a prospective. So they're, they're usually prospective. Um and you can get an incidence. 
So the only way you can get an incidence is from a cohort study. An incidence is the number of new cases of a disease in a given population in a given time period. So if you want an incidence, you need to do a cohort study. If you want a prevalence, you do a cross sectional study. OK. And the thing about a cohort study is that we can measure the confounders really, really well. A confounder is something that has an independent relationship with both the risk factor and the outcome. So we might think smoking causes AVN, but people who smoke also tend to drink a lot, so alcohol is a confounder. In a cohort study we can measure that really reliably and prospectively, because we can ask all of the people how much they smoke, but we can also ask them how much they drink, and we can follow them over time and get really good estimates. A case control study is cheap and dirty. I can get my 50 cases of AVN and 50 cases without AVN today from the notes. I can't get an incidence rate because I don't know the population; all I know is the 50 failures and the 50 successes. And it's really hard to measure the confounders, because while I can look through the notes and get an idea of how much they've smoked, it's really difficult to get a good estimate of how much they've smoked and how much they drank. But for a rare outcome, it's really, really good. So if I wanted to look at the prevalence of bow legs in Liverpool today, and you can tell I'm a children's orthopaedic surgeon, what do I do? I do a cross sectional study. If I want to look at the relationship between eating strawberries and bow legs, how might I do that? Bow legs are quite common, and it's a relatively short time period, it's only a couple of years when kids get bow legs, so we haven't got a really long period until we reach our outcome. So we might do a prospective cohort study, and we'll measure all the strawberries the kids are eating, and we can also measure the number of bananas and the number of grapes just in case there's a confounding factor, so we can try and adjust for all the different things, and then we look forward in time and do a nice cohort study. However, take SCFE, slipped capital femoral epiphysis. If we want to look at the relationship between bananas and SCFE, well, SCFE is really, really rare, and so if I wanted to do a cohort study I'd have to do a massive cohort study to capture it. So it's far better to do a case control study, where I find 50 cases with SCFE and 50 cases without SCFE and look back to see what the difference between the two groups is. I might send questionnaires to the families to try and unpick what the relationship might be, and I might ask about other things like strawberries and grapes, but it's going to be far harder for them to recall what the differences are compared to a cohort study. But there's a big but: like I mentioned before, it's all about the confounders.
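Before the confounding discussion continues, the prevalence/incidence distinction above can be made concrete. This is a minimal illustrative sketch, not from the talk, using entirely invented numbers for a hypothetical back-pain survey and a hypothetical two-year cohort.
```python
# Hypothetical numbers, for illustration only.

# Cross-sectional study: knock on doors today and ask "do you have back pain?"
surveyed = 10_000          # people surveyed at one fixed time point
with_back_pain = 2_300     # people reporting back pain today
prevalence = with_back_pain / surveyed
print(f"Point prevalence: {prevalence:.1%}")   # proportion of the population with the disease now

# Cohort study: start with a disease-free population and follow it forward in time.
at_risk = 5_000            # disease-free people at the start of follow-up
new_cases = 150            # new cases arising during follow-up
years_followed = 2         # (simplified: assumes everyone is followed for the full two years)
incidence = new_cases / (at_risk * years_followed)
print(f"Incidence: {incidence:.4f} new cases per person-year")
```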
Um So if I say that a, so if I say that gray hair is related to, um a hip fracture, you know that having gray hair or having no hair isn't related to a hip fracture. The confounding variable is age. So all the time in all of the observational studies, I need to try and measure as many of the confounding factors as I can. Cos the only way that I can adjust statistically is to have measured all the different confounding factors. So I wanna have the best possible way of measuring all of these so I can adjust for them in my analysis. So I've got a really good idea of what's going on. And then the other side of things is intervention side of things. So intervention side of things is where we do a trial and where we randomize. So why do we randomize? And I can't remember if we've got a slide. So why do we randomize or the reason we randomize is because of those confounding factors. So the reason we randomize is because we can get rid of all of the unmeasured confounding factors, all of the uncertain confounding factors by randomization because I know by pure randomization, we're gonna make sure those two factors will be balanced in each group. So, so, so we even though we might not know a confounding factor, if we do enough randomization, it's gonna be exactly the same in both groups. And therefore confounding doesn't matter anymore. And the only thing that's, that's important is, is the uh I is the interventions of interest because we've balanced the two groups otherwise perfectly, which is why randomized controlled trials are so, so good because we balance the, the confounding, we balance the unmeasured confounding, which is really cool. So, you know, whenever we're designing a trial and we'll, we'll skip through this quickly. But whenever we're designing a trial, the first things I always write down are the po um so that's the population, the intervention, the comparator and the outcome. So, uh I'm one of the editors for the bone and joint journal. So whenever I'm reviewing for the bone and joint journal, they're the first things I write down about every paper that comes across the desk. Cos I wanna know how people have approached each of these different things and really defined what's going on at each stage. Um um And then when we talk about trials broadly, there's two different types of trials. Um There's experimental trials and experimental trials are um or e er, or er er efficacy trials and these sort of trials, we've got a really, really um er tight population, we've got tight interventions and we've got, we've got often a kind of an experimental outcome. So it gives you a best case situation. So, here we've got right, total hip replacements in males less than um er 60 years old. We're looking at high volume surgeons, a posterior approach with a defined technique, er comparators, er, another defined technique and the outcome acetabular re retroversion on CT. So kind of all very, very kind of very, very defined and, and niche. And about what, what in the best hands in the best situation, what can we deliver? What we more often do um uh in, in the big trials is we take a big pragmatic approach. So we do a pragmatic uh approach. And the reason we do that is because the NHS is generally, generally wants to say, well, what can the NHS deliver? We're not saying what's gonna be delivered in the best hands. But we're saying in a, in a generalis national healthcare system which we all live and work and operate. 
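Before the pragmatic-trial point continues, the claim above that randomisation balances even unmeasured confounders can be checked with a small simulation. This is a self-contained sketch with invented numbers: the "unmeasured" confounder is generated but never used in the allocation, and with a large enough sample it ends up almost identically distributed in both arms.
```python
import random

random.seed(1)

n = 10_000
# An unmeasured confounder (say, a 30% background rate of smoking) that we never look at.
smoker = [random.random() < 0.30 for _ in range(n)]

# Simple randomisation: each patient is allocated to intervention or comparator by coin toss.
arm = [random.choice(["intervention", "comparator"]) for _ in range(n)]

def smoking_rate(which):
    group = [s for s, a in zip(smoker, arm) if a == which]
    return sum(group) / len(group)

print(f"Smokers in intervention arm: {smoking_rate('intervention'):.1%}")
print(f"Smokers in comparator arm:   {smoking_rate('comparator'):.1%}")
# With n this large the two proportions are nearly identical, even though the allocation
# never used the confounder - which is exactly why randomisation deals with confounding.
```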
Um Is this intervention cost effective or is this intervention effective and cost effective compared to this intervention? So it'll be much more pragmatic, it'll be all total hip replacements, the posterior approach versus a lateral approach. Um uh with an outcome being something clinically relevant. So it's much more relevant trials to patients and it's much more relevant trials to the NHS. And that's what, that's why nice care about these trials. Although I know they sometimes cause controversy. Ok. So we've, we're sort of 20 minutes in or 15 minutes in. Let's take a little breather cos cos now we're gonna move into the realms of statistics and I know you all love statistics. I certainly love statistics and I wanna share my, my passion. So it's difficult before it comes easy. OK. And II used to find stats difficult. Um And then I kind of changed my mindset to think. Well, if you just like no one else likes stats, you can be the absolute winner here if you start to like statistics and then it will become easy. So we're gonna keep stats super simple. And if you've got more complex questions, you can put them in the chat, but we're gonna kind of keep it simple and I'm gonna go through everything I think you need to know for the F four CS. Um so broadly, there's two types of data. Um So there's categorical data and there's numerical data. Um And so when we think about all these different things, the categorical data can be nominal or it can be binary. So binary is obvious. It's, it's yes or no, it's true or false. It's men or women. So you've got two options nominal. Um It's something that's, that's ordered. So, so sorry, something that's um something that's not ordered. So, so something that's um something could be put into groups but the groups don't have an order. So classically colors, so colors can be put into groups. You can say this one's green, this one's red, this one's blue, but you can't put that in order. Um You could try them with rainbows and stuff, but normally they're all just distinct groups. Um ordinal. Um um er er er can be put into groups. So ordinal is something that you put into categories and you put the categories into groups. So it might be small, medium big and then numerical. Um er we've got discrete and continuous. So continuous is something that can be divided and divided and divided and divided and it's still a reasonable number. A discrete is whole numbers of things. So whole numbers of people, whole numbers of hips, whole numbers of stuff and each of these things um are, are kind of relevant cos we, we approach stats in different ways. Um So these are some of the obvious ones, like we just said, tossing a coin. Um, so height on this, it's both ordinal cos it's got um small, medium large. Um, but it's also continuous, isn't it? Cos you can break it down, you can do it millimeters, you can even go smaller than that. Um, discrete is the number of something. Um, no, is a color in terms of orthopedics. This is um, some of the ways we may think about it. Um, so again, people time, er, er, size of different joints, um uh or, or groups of different uh different prostheses, et cetera. OK. Um And so discrete stuff. Um, we generally analyze it with simple bar charts, um with um, er, with pie charts, nice simple statistics. And one of the biggest things as a journal editor is I'm constantly, constantly writing back to people saying you've overcomplicated your statistics, you just need to keep it really simple. 
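As a quick aside on the data types just listed, here is a small hypothetical sketch tagging some orthopaedic-flavoured variables with their type and summarising a categorical one with simple counts (the kind of summary that ends up as a bar or pie chart). The variable names are invented for illustration.
```python
from collections import Counter

# Hypothetical variables tagged with the data types described above.
variable_types = {
    "revision_surgery":     "categorical / binary",      # yes or no
    "implant_colour_code":  "categorical / nominal",     # groups with no natural order
    "frailty_grade":        "categorical / ordinal",     # ordered groups, e.g. mild < moderate < severe
    "number_of_prior_hips": "numerical / discrete",      # whole numbers of things
    "blood_loss_ml":        "numerical / continuous",    # can be subdivided indefinitely
}
for name, kind in variable_types.items():
    print(f"{name:22s} -> {kind}")

# Categorical / discrete data are usually summarised with simple counts.
approach = ["posterior", "lateral", "posterior", "posterior", "anterior", "lateral"]
print(Counter(approach))   # Counter({'posterior': 3, 'lateral': 2, 'anterior': 1})
```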
Um, because my statisticians who work alongside me, they deliberately try and keep things simple. So the people that try and write really complicated statistics generally means they don't understand it. OK. And then in continuous and this is kind of where it's at, isn't it so continuous. Um We use these things. So w what, what's this called? So it's a box and Whisker plot. Um and we also use these um uh beautiful distribution plots and we'll talk about those in a little bit. So whenever we're talking about continuous data, um the, what we're trying to talk about really is we're talking about spread and we're talking about the spread of data, cos the spread of data tells us everything that's going on in the data set. Um And so we've got, so, so when we're looking at spread, we look at a measure of dispersion. So the measures have spread and there's some really simple ones and you've all learnt about these before. So we talk about the range. So the range is the, the difference between the biggest and the smallest. So here the, the range is between one and eight, we talk about a mean. So we're talking about the mean mode and medium. So they're all on average, aren't they? And there's different sorts of average that we might use at different times. Um So the mean is the add them all together and divide them by the total number there. Um So I'm not sure what it is here. But, but we're gonna add each of those together and divide them by however many there are the mode, well, the modes, the most frequently occurring. So the most frequently occurring here is two. So our mode is two and our median is we put them all in a line. So all the, all the, um, er, all the data points in a line. So we've got 123456789, 10 and to our median um is gonna be the, the central one. And so the central 12345 is actually um so, so, so the centers between this two and this two, isn't it? So we've got five on this side, five on this side. So our center is between these two. So we add them together and divide by two. So our center, our center point is two. Perfect super simple. Um And then we've got data distributions and people constantly, constantly get mixed up with, with what's going on in these data distributions. Um uh And so, so we always think about my cats um and my cat's sitting here and his tail is pointing in one direction. So his tail, in this case is pointing towards the negative side. Um because this data is negatively skewed, this data is positively skewed cos his tail is pointing in a positive direction. Um So whatever way his tail's pointing, he's telling you which way the distributions skewed and it's skewed this way, cos there's more data this way and this one's skewed this way cos there's more data this way. So it's super simple. Remember my cat sitting there and remember what his tail's doing. Um And so in terms of orthopedia examples, um this might be the age of hip replacements. So the age of hip replacements has got a, er, has got a negative skew. Um And um what might this be, this might be the um ACL reconstruction. So this has got a positive skew. So, so very few get done in very little kids and then suddenly you get the peak in the sort of thirties or forties and then it drops off suddenly. Um So whenever we um so whenever we look at distribution, um we um er er whenever we get a, an experiment. 
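The worked averages example a little earlier (ten values, smallest 1, largest 8, mode 2, median 2) can be reproduced in a couple of lines. The exact numbers on the slide are not in the transcript, so the list below is invented to match that description.
```python
from statistics import mean, median, mode

# Invented to match the description: ten values, smallest 1, largest 8, mode 2, median 2.
data = [1, 1, 2, 2, 2, 2, 3, 4, 5, 8]

print("range :", min(data), "to", max(data))   # spread: difference between biggest and smallest
print("mean  :", mean(data))     # add them all up and divide by how many there are -> 3
print("mode  :", mode(data))     # the most frequently occurring value -> 2
print("median:", median(data))   # middle of the ordered list (average of the two central values) -> 2
```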
So, so ideally in a perfect world, I do an experiment and I'd repeat my experiment um or I do my experiment and I get a result and then I or someone else um would repeat the experiment and they get a slightly different result and someone else would repeat the same experiment and they get a different result and the next person would get a different result and everyone's gonna get a slightly different result from repeating the experiment. Um but by repeating it lots and lots of times and if I repeat an experiment or a test or whatever, at 100 different hospitals. I'm gonna get this, which is the most beautiful thing in all of the world. This is a normal distribution. Normal distributions are magical, normal distributions are magical cos you can do loads of course, statistics with normal distributions. So firstly, we know that in a normal distribution, the me the mean is in the middle. So the mean, we've talked about that before. So the mean is the men. So the mean mode and median in a normal distribution are all exactly the same. So here's the mode, the medium means we've got the same amount of data on each side. And the mean means if we add them all up and divide them by the number here, we're gonna get this number. So it's amazing. Um So in a normal distribution, um uh so our measure of central tendency, which is the me motor median are all gonna be the same. So the spread of the data tells us about the natural variation within the sample and tells us about the accuracy, the, the, the of the, the measurement tool. So we're all gonna, we're always gonna have this spread. Um And we can make the spread better. Um If the population's a lot tighter. So if the population is more and more defined, um So kind of our experimental trials, we might make our, we might make our, our distribution tighter cos we're getting rid of some of the variation. And if we make our our measurement tool more and more accurate. Um, we're also gonna squeeze that down, aren't we? So we're gonna squeeze the, the normal distribution to, to get rid of the, get rid of the bits around the edges. So we've talked about that. Um, and, er, so that may be blood loss. Um, er, er, I put obesity but I, II think that's probably skewed actually. Um, so I think that's probably got a skew towards being more obese. Um, um, er, so we've talked about these, so this is a negative skew cos my cat's tails on the negative side, this is a positive skew cos my cat's sitting here with a positive tail. Um, and so what's really important though, um, is if we've got a skew distribution, we look, here's my mode. Um, and then my long tail is dragging my mean over that way. Um, and my median is gonna sit in between the two. So my mode is the most commonly occurring. My mean, uh, my median, er, sorry, my mean is gonna be pulled off by this tail. Um, and my median is between the two and that's important because therefore, if you've got a skewed distribution then your best measure of, of central tendency, your best measure of, of, uh, of, um, uh, your best average is the medium because it sits in between the, the, the mode and the mean. Um, um, so there we go. Um, so back to spread. Um, So we, we know that the mean mode of medium sits er in this central, in this measure of central tenancy. Um We all remember at school, we learned about standard deviations and we learnt that 68% of our data set is gonna lie er in a normal distribution between wi within one standard deviation of the uh of the me. 
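Before the standard-deviation thread continues, here is a quick numerical check of the skew point above: a long tail drags the mean towards it, while the median sits between the mode and the mean. A minimal sketch with simulated data, not from the talk.
```python
import random
from statistics import mean, median

random.seed(0)

# A positively skewed sample: most values small, with a long tail stretching to the right
# (roughly the shape described for age at ACL reconstruction).
data = [random.expovariate(1 / 30) for _ in range(100_000)]

# Rough mode estimate: bin into 5-unit-wide bars and take the centre of the tallest bar.
bins = {}
for x in data:
    bins[int(x // 5)] = bins.get(int(x // 5), 0) + 1
mode_estimate = max(bins, key=bins.get) * 5 + 2.5

print(f"mode   ~ {mode_estimate:.1f}")   # smallest of the three
print(f"median = {median(data):.1f}")    # sits between the mode and the mean
print(f"mean   = {mean(data):.1f}")      # dragged towards the long positive tail
```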
Um And we also know that within two standard deviations is gonna be the magical 95% of our data. So all of our data points, most of it lies um within this measure of, of uh uh uh wi within this measure of central tenancy. But often we're not really, we don't really care that much about all of the data. What we actually care and what we usually want to do is we want to be how sure we are about the mean. Um because it's all very well like saying what the standard deviation is doing. But actually, I don't wanna, you know, I wanna be able to compare my mean to your mean to see if there's any difference between our means. Um And the way we do that ii it is, it's, it's almost the same as the standard deviation, but this is where people start to get confused until you just see the light. Um So if we just look at the top and we remember we did standard deviations for the whole thing and we're gonna do exactly the same at the top, but we're gonna use something called standard error. Um And so, so one standard error gives us a 68% confidence. Um And two standard, two standard errors or 1.96 technically, but two standard errors means that 90 I'm 95% confident that the true mean will lie within two standard errors um uh of the, of the mean. So, so just like you are 95% confident that, that all of your data would lie within two standard deviations. I'm 95% confident that the true mean, if I repeated the experiment lots of times will lie within, within two standard DVI two standard errors um er, of the mean. Um And so we've talked about the range. Um So the range is the, the biggest number versus the smallest number. We've talked about the standard deviation cos we're just gonna lose the, the, the kind of 2.5% each side just to, to tell me exactly where the butter is in this case. So to kind of, so I'm confident about whether w where we're spreading it. Um And the standard error um is actually telling me about where the middle of the butter is, where that butter is going on. Now. Um It's telling me that I'm 95% confident that the, that the majority or the, the true mean, um er, is within, within those um, er, two standard errors which is a confidence interval and that's how we come up with a confidence interval. Um And the way we calculate it is the standard deviation divided by the square root of the, the total sample size you've got. Um And so if our sample size gets bigger, so if this increases from 10 to 100 then this number at the bottom is gonna get bigger. Um And so as this number gets bigger and bigger and bigger, it means that I'm more and more confident because my confidence interval is getting smaller and smaller and smaller because we're dividing by a bigger number. Um And so that's why it's so good to get a bigger sample size. If you get a really, really big sample size, you become more and more confident about where the true mean lies. Um um um And that's all based on our standard errors. Um And so here we go. So we're increasing our sample size and by increasing our sample size, you can see that, that the kind of the point of the curve gets more and more obvious. Um And I can be a lot more confident about where, where the true means gonna lie. So here you couldn't be quite sure where the true means gonna lie. But on here, it's obvious the true means around here somewhere, isn't it? And that's all because we're, we're increasing our sample size. So if I wanna look at the height of orthopedic surgeons um versus the height of theater nurses. Um There, there may be AAA. 
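Before the surgeon-versus-nurse height comparison, the standard-error arithmetic just described (SE = SD / sqrt(n), with the 95% confidence interval roughly mean ± 2 SE, or 1.96 to be exact) can be written out directly. A minimal sketch with made-up blood-loss figures, purely for illustration.
```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical blood-loss measurements (ml) - invented numbers for illustration.
blood_loss = [420, 510, 380, 460, 495, 530, 410, 445, 470, 500]

n = len(blood_loss)
m = mean(blood_loss)
sd = stdev(blood_loss)                 # standard deviation: the spread of the data
se = sd / sqrt(n)                      # standard error: how sure we are about the mean
ci = (m - 1.96 * se, m + 1.96 * se)    # 95% confidence interval for the true mean

print(f"mean = {m:.1f} ml, SD = {sd:.1f} ml, SE = {se:.1f} ml")
print(f"95% CI for the mean: {ci[0]:.1f} to {ci[1]:.1f} ml")
# Because n sits under a square root, quadrupling the sample size halves the standard
# error - which is why bigger studies give tighter confidence intervals.
```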
So here we've got a small sample and a small difference, so it's actually quite hard to tell these two curves apart. But if I have the same small difference with a big sample, I can quite easily tell they're different, and that's just by increasing our sample size: we become more and more confident about where the confidence interval, two standard errors of the mean, is going to sit for each group, so I can tell the two populations apart. However, if I've got a small sample with a big difference, look, I can already tell they're different. And this is the whole premise of doing a power calculation. This is why we do power calculations before we do research. If I want to find a really big difference between two measures, then I don't need many patients to show a difference. But if I want to look for a really small difference, then I need loads of patients; to show a very small difference I need a very, very large sample in order to show that the two are separate. There are lots of different ways to do a power calculation, but fundamentally we have to decide what difference we are trying to find and what the variability is, the standard deviation within our sample, and based on that I can produce a power calculation which tells me the number of patients I need in order to find the difference if it exists in that sample. I've told you all of this basically because we've just described what a t-test is. A t-test is broadly the difference between the means divided by the standard error: the difference between the means divided by the measure of dispersion, the measure of variability. So it's all super simple. A t-test considers the difference between the means, the sample size and the spread of the data, and all the t-test does is give me a statistical summary of the difference, a p-value, which you will love. But frankly, I could do all of that with a confidence interval, because if we've got a 95% confidence interval and my data point isn't within your confidence interval, then I know my data point has to be different to what you've just produced, and by definition p must be less than 0.05. And it's much more powerful than just quoting p < 0.05, because the confidence interval actually tells me something about the sample, which is really cool. Other tests people might do: if you've got more than one normal distribution to compare in the sample, you may do an ANOVA test. The way I remember that is "ANother" normal distribution. Yeah, it's just a bad stats joke. Anyway, moving swiftly on, let's see our time. We're doing all right. So we'll talk briefly about box and whisker plots. I love box and whisker plots. I think they're underused.
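Before the box-and-whisker section, here is a hedged sketch of the two calculations just described: a two-sample t-test (difference in means scaled by its standard error) and a standard two-group sample-size formula for a continuous outcome. The heights, the target difference and the assumed standard deviation are invented, and the formula is the usual normal-approximation one rather than necessarily the method used in the trials mentioned.
```python
from math import ceil
from scipy import stats

# Invented heights (cm) for two small groups - illustration only.
surgeons = [178, 182, 175, 180, 185, 179, 181, 177]
nurses   = [170, 168, 174, 169, 172, 171, 167, 173]

# Two-sample t-test: difference in means divided by the standard error of that difference.
res = stats.ttest_ind(surgeons, nurses)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")

# Sample size (normal approximation): patients per group needed to detect a difference
# 'delta' with two-sided alpha = 0.05 and 90% power, assuming standard deviation 'sd'.
delta, sd = 2.0, 6.0
z_alpha = stats.norm.ppf(1 - 0.05 / 2)   # 1.96
z_beta = stats.norm.ppf(0.90)            # 1.28
n_per_group = ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)
print(f"~{n_per_group} patients per group")   # a small target difference needs a large sample
```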
I think they're really, really cool and they can tell you loads and loads about the data. Um And so in a box and Whisker plot, this point is my, this is my medium. Um er and then this is my um er lower quarter, this is my upper quarter. Um er so between this and this is my interquartile range. So how do we calculate this? So, so if we've got 100 data points and we put them in order. Um And so then my, um so because we've got 100 then the center data point is gonna be the 50th. And so that's my median. The 50th data point is the median. Um the 25th and the 75th are my lower and my upper quarter. Um And then these are the, these are the outliers. So these are the extreme values. Um er, and so we're gonna um er er mark out the extreme values. So look a and, and, and if you imagine, um if you imagine my cat's sitting here, ok. So my cat's gonna be sitting on top of here and my cat's tail is gonna be over here, isn't it? Cos this is really stretched out this week. So this is gonna be a negatively skewed distribution if we were to draw it on top because that we've got a central, uh a point of central tendency here. My mean is being pulled over this way. Um And sorry, yes, my um my means being pulled this way, my mode is gonna be sitting on this side and this is my medium so you can kind of draw out what the curve's gonna look like. So how cool is that? You, you guys are proper statisticians now? Um So, um so we can, if people wanna ask questions about that, we can uh we can do more on that afterwards. Um So, let's talk about funnel plots. Um, so they're, they're hot, hot in the, um, in the N JR. Um, so this is what a funnel plot looks like. Um, and look, if you all turn your head slightly on the side, it almost looks like a normal distribution and there's no accident and that almost looks like a normal distribution. So, in any final plot we have the failure rate, er, on the side. Um, So whatever you wanna call it, it's, it's just failure, it's, you know, it's, it's, it's that, that's all it is, it's a measure of failure. Um And then we have um an increasing number of something. Ok. And so we've already said that the more we increase the sample size, the more certain you become about, about a value. Um So the, the, the, the less variability is because, because you're more and more certain about a particular value. So if we increase. So, so down here, if I'm doing three hip replacements a year and one goes bad, well, that's just unlucky, isn't it? Cos cos you know, I've got a third failure rate. So my failure rate is really, really high. But look, it could just be natural variation because cos I'm only doing a few joints and, you know, it may just be, it may just be tough luck. But look, if I'm doing 100 and 50 joints a year and I've got the same third failure rate and that's massive, isn't it? Like that's a really big problem because the, because we've got, because we're more and more certain about, about where the true mean of the whole population should lie. Because, um, as we get a, as we do more and more and more this line gets narrower and narrower and narrower because we're more and more certain about our, our measure of central tendency, we're more and more certain about where the population means should lie. So this is called our target line. Um, and the people above it, or the people below the target line aren't any better than anyone else. Th this is all just natural variation. This is just, this is just statistics. 
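Picking up the box-and-whisker description a little earlier (median at the central point, lower and upper quartiles at the 25th and 75th points), here is a minimal sketch computing those summaries and the interquartile range on invented data. The 1.5 × IQR whisker rule used below is the common Tukey convention, included as an assumption rather than something stated in the talk.
```python
import random
from statistics import quantiles

random.seed(3)

# Invented, positively skewed data: most values small, plus a couple of large outliers.
data = sorted(random.expovariate(1 / 10) for _ in range(100)) + [75, 90]

q1, q2, q3 = quantiles(data, n=4)      # lower quartile, median, upper quartile
iqr = q3 - q1                          # the "box" of the box-and-whisker plot
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # Tukey's whisker rule (assumption)
outliers = [x for x in data if x < low_fence or x > high_fence]

print(f"median = {q2:.1f}, IQR = {q1:.1f} to {q3:.1f}")
print(f"extreme values beyond the whiskers: {[round(x, 1) for x in outliers]}")
# A long upper whisker with high outliers is the box-plot signature of a positive skew.
```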
Um um um, and then, then we've got the, the upper control limit and the lower control limit. Um, and this is the, er, this is set by the standard deviations. Um er, and so, so if you're, er, in, in the N JR, if you're outside three standard deviations, so, so at whatever point, um that then you become an outlier. Um And so you, so these guys are outliers and for whatever reason, their failure rate is higher, um, er, than the, than the normal sample. And it's not just natural variation because it's outside the, the natural variation. Um because we've already said that, that a natural variation in this case is, is three standard deviations. So you're sitting outside that, that natural, um, er, you're sitting outside the control limiter and therefore there, there's a special cause variation going on. So this is our control limit. This is our mean or measure of central tendency. Um uh This, this is our upper control limit um uh above which something bad's happening and there's our lower control limit below which something amazing is happening. Um er And everyone outside this is the outliers because there's special cause variation, there's something different happening in those sites probably. Um And so this is our, our beautiful, beautiful graph to, to explain it all. Um And so, yeah, so everyone here is just there because statistics say that that's where they are, they're all just the same statistically. Um OK, so moving on through stats quickly, um uh we will be quick on this. So it systematic review. So people get mixed up with what a systematic review is. Um But it's kind of what it says on the tin. Um So a systematic review is, is when you look at the, the, the literature systematically, so you're gonna go out and you're gonna say I'm gonna look at papers between 2010 and 2012 and I'm gonna follow a very, very rigid process to, to, to, to figure out how I do my systematic review. Um And so I'm gonna search PUBMED with the following terms. I'm also gonna search Google Scholar with the following terms and from PUBMED, I've got this many publications based on all the criteria. And from Google scholar, I've got this many publications and then I'm gonna put all my review together in a, in a, in a way that I can synthesize this following uh ideally some, some approach, some, some formulaic approach. And as a, as an editor, the first thing I'll do is I'll put your search terms into PUBMED cos I'm a bit of a geek and if my PUBMED search terms don't match yours, then there's something not systematic about it. Um If I can't reproduce it, it's not systematic. Therefore, it's not a systematic review and therefore it will automatically be rejected, which is kind of sad, but that's kind of life. Um So it is what it says in the 10, it's a systematic review but people do get mixed up with what, what a meta analysis is and a meta analysis is where we take your data from your systematic review and we, we formulate your data together. So classically, it's the Cochrane group that, that do this really, really well. So they get the data from all the different systematic reviews um and they, they put them together. Um um so, er, er er er er, an analysis of all the different er, of all the different studies and through that you develop this thing called a forest plot and this is what a forest plot looks like. Um And, and so, so a forest plot um gives us the measure of effect. So this line um and this er this triangle is, is our, is our measure of effect and in this case, it's an odds ratio. 
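Before the forest plot discussion continues, the funnel plot control limits described above can be sketched numerically: for an overall failure proportion and a unit doing n cases, limits set at ±3 standard deviations of a proportion narrow as n grows. The numbers are invented; the 3 SD limit follows the NJR-style convention mentioned in the talk.
```python
from math import sqrt

target = 0.05   # overall (mean) failure rate for the whole sample - the target line

def control_limits(n, sds=3):
    """Upper and lower control limits for a failure proportion based on n cases."""
    se = sqrt(target * (1 - target) / n)   # standard error of a proportion
    return max(0.0, target - sds * se), min(1.0, target + sds * se)

for n in (3, 10, 50, 150, 500):
    low, high = control_limits(n)
    print(f"n = {n:>3}: limits {low:.3f} to {high:.3f}")
# The limits funnel inwards as the number of cases rises: 1 failure in 3 cases sits inside
# the limits (natural variation), but the same failure *rate* over 150 cases sits well outside.
```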
So in an odds ratio, one means no effect; this is the line of no effect, and this is where the effect lies. We can see all the different studies, and the bigger the study, the bigger the square. You can also see that, because the bigger the study the more certain you are about the data, the whiskers shrink: your confidence about the data set increases and therefore the confidence interval narrows. That's why the confidence interval of this study is really small compared to this one, which is huge, because we're a lot more uncertain about what's going on in it, as its sample size is really small. Consequently, the bigger study contributes more to the overall effect; it's more heavily weighted in the pooled estimate compared to all of the others, and that's how meta analysis works. And so, finally, I thought I'd touch on health economics, because I think it's really cool and I think we need to learn more and more about it, and we'll keep it to just a few slides. So whenever we do trials nowadays, we always look for the clinical effectiveness of an intervention and we always look for the cost effectiveness, because, as I've already told you, the reason we do these big pragmatic trials is that NICE, the National Institute for Health and Care Excellence, and the NIHR essentially work together and say, OK, we want you to do this study, but we want to know if it's cost effective for the NHS. And the way we do this, in all of the different studies, whether it's about big toes or hip replacements or whatever it is, is by looking at quality adjusted life years, QALYs. If you live for one year in perfect health, you've got a quality adjusted life year of one. If you live one year in half good health, your quality adjusted life year is 0.5. If you live six months in half good health, it's 0.25. That's the way we calculate quality adjusted life years. And NICE, at least in the UK, and a lot of other healthcare systems, have a value that they're willing to pay for a quality adjusted life year. In the UK we basically cost all of our healthcare interventions in quality adjusted life years, and as a country we're prepared to pay about £30,000 for a quality adjusted life year. It doesn't matter if you're doing hip replacements or cancer treatment: what the country is prepared to pay is £30,000 per quality adjusted life year, and that's the basis of health economics. There are a few exceptions; some of the cancer drugs have a slightly higher rate that they'll pay, but generally £30,000 is what we'll pay. So if I give you one year in perfect health, you gain one QALY. And if you've got a 20 year marginal gain in health, so if I increase your life expectancy by 20 years and give you 0.1 QALYs for each of those years, then I've given you two QALYs. So it's far easier to prove cost effectiveness if I can really improve your quality of life.
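Before the discussion of QALY gains continues, here is a short, hypothetical sketch of the QALY arithmetic just described: QALYs as (years lived) × (utility), then an incremental cost per QALY compared against a willingness-to-pay threshold of £30,000. All figures below are invented for illustration.
```python
WILLINGNESS_TO_PAY = 30_000   # GBP per QALY - the threshold figure quoted in the talk

def qalys(years, utility):
    """Quality-adjusted life years: time lived weighted by quality of life (0 = dead, 1 = perfect)."""
    return years * utility

print(qalys(1, 1.0))    # one year in perfect health     -> 1.0 QALY
print(qalys(1, 0.5))    # one year in half-good health   -> 0.5
print(qalys(0.5, 0.5))  # six months in half-good health -> 0.25
print(qalys(20, 0.1))   # 20 extra years at utility 0.1  -> 2.0 QALYs

# Hypothetical intervention versus comparator: incremental cost per QALY gained.
extra_cost = 6_000      # GBP more than the comparator (invented)
extra_qalys = 0.4       # QALYs gained over the comparator (invented)
cost_per_qaly = extra_cost / extra_qalys
print(f"£{cost_per_qaly:,.0f} per QALY gained ->",
      "fundable" if cost_per_qaly <= WILLINGNESS_TO_PAY else "above the threshold")
```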
Um, and, and make you live longer like things like a, like a, a hip replacement or a, or, or, um, uh, uh, uh, or, or fractured hip surgery does. So, um, um, so, so by increasing your, your life expectancy and having significant increase in your um your quality of life. Uh then we can really, really prove cost effectiveness. Um But of course, life's more valuable when you're younger because you've got far, far longer to live. And therefore, one of the criticisms of, of Q just in life here is if you're old, then you've got very few years to, to see anyway. And therefore the, the relative cost is gonna look less or the relative value is gonna look less for any treatment in you. And look, you've, you've all heard of this tool called EQ five D. And the reason we use that is because that's how nice decide our quality of life. You, so EQ five D, um, gives us a utility score, um, from that we can work out perfect health, we can work out dead or we can actually work out worse than dead. So there's certain states, um, er, that are worse than dead. So people have people when they designed. It said that actually it's not worth living with this much pain. And therefore there's a score that's worse than dead. Um, and so on the basis of all of that, we look at whenever we do an intervention, we look at the amount of, um, uh, the amount of qualities gained. So the, the amount of, um, er, increase in Q qualities, um, or the, the, the amount of increase in quality of life the amount of increase in cost. And therefore, if your intervention has got a increase in the quality of life and a lower cost, then it's a complete winner. Everyone's gonna want your intervention. However, um er that then we look at different thresholds. So different thresholds for treatment, as I say about 30,000 lbs is what nicely prepared to say, prepared to pay. But broadly, if you plot out where your intervention sits on this quality adjusted life year curve, which is called an isola. Um If you, if you plot out where your intervention sits, then as long as you're below whichever threshold you've put, then the government will pay for your intervention. Um And so that's how health economics works. Look, I've talked to you for a little bit, I'll answer some more questions about statistics and uh and, and try and entertain me, er, to try and entertain you even more. Um just for the last 30 seconds. Um, lots and lots and lots of you are in the UK. Um And we're really, really passionate about delivering randomized controlled trials in kids orthopedics, we've got loads of randomized controlled trials coming and you guys predominantly as trainees. The one I want you to look at most for is called the odd sock study. So the odd sock study is looking for sort of how it's two factors of the distal tibia in kids. Um And the question is, do we need to perfectly realign the fiss in order to maintain growth or can we accept displacement of the fiss? So, do we need to reduce this? Um And put a screw in it if we want to or do we not need to? Um And so if there's one study that I want you guys to go and recruit for, um it's this, it's recently started. I need 200 of them and we've got about 15 or 16 so far, but we're doing pretty well cos considering we started early, but I need everyone on board. Cos it's a tough trial. Cos they're relatively ra, um, uh, just a whiz through the others. There's also a big study going on at the moment called Basis about scoliosis about night time braces versus, um, versus full time braces in scoliosis. 
Er, there's also a trial about Perthes disease which is gonna start any day about surgery versus no surgery in Perthes disease. I learned all my statistics and when I did my phd in epidemiology about Perthes disease. So I'm really, really, really passionate about Perthes disease trial. Um, um, so surgery versus no surgery, containment surgery versus no, no surgery. So, if you see Perthes, please, please, please s, um, um, er, er, if you're not a specialty center, send it to a specialty center who's doing Perthes disease research, who's doing the op nonstop study cos that's where it's at. Um, and look, I'm not just, uh, whilst I am asking you for a favor. I want you guys to be part of all the studies as well. We're very, very transparent about how you guys can join the trials for all the different trials. If you score 10 points in our system, we'll make you one of the, make you one of the, the um collaborative authorship group on the trial. Um They're all gonna be published in the New England Journal or the Djama or the Lancet. It will be the best publication that, that you have. Cos they're always the best publications that I get. Um um and all you need to get is 10 points and usually you just have to recruit one or two patients and you're in. Um So really, it's a massive win. I want you guys to be on board um er, check out the website tops research.org. Um And I'll, I'll happily get you all on board cos I want everyone, er, everyone playing the game cos that's what it's all about and if you do that, I'll keep talking to you about statistics or whatever research methods you wanna do. Um It is all awesome stats. Research is really cool. Um And I hope I've, er, proved it to you a little bit. Cheers boss. S see if we can share, see if we can come back into the room. Oh, no, how do I do this? Hi, prof, we've got one question so far. I'm hoping that more people will ask questions for you, but we've got a question. So um as an editor, what level of evidence would you give for a literature or narrative review? And what advice would you give to make it more robust considering that it's not as methodical as system, a systemic review, systematic review? Yeah. So if you want to write a narrative review, so, so I've got to have a really good reason to publish a narrative review. Um And so, so usually, um the way that I'd um So, so, so if I'm to be honest, if I'm gonna publish a narrative review, I wouldn't have asked for it. So, um so, so you need to write to me firstly and say, look, I wanna write this review. Um Would you be interested? So there's no point in you writing it and then, and then, and then sending it to us cos we're almost certainly gonna reject it. If, if you write to us firstly and say, look, we wanna write a paper on this, it's relevant for the following reasons, then we may commission it. We may say it's a really good idea. Um You know, we want you to give it a certain slant and the way that's of often useful is if there's a randomized controlled trial or something else coming up and your, your help setting a background to it. Um But generally, I mean, narrative reviews aren't gonna, aren't gonna flo at our boat unless we're, we've asked for them. Certainly the bone and joint journal. I'm hoping that we will get more questions. Um Please go ahead and ask prof any of the questions you have. Oh, there's someone here saying they want to ask one question. Oh, So someone asks, could you please elaborate on the N Jr Funnel plot again and how you would need to answer it in the FL CS exam? 
OK. Uh Yeah, let's, it does involve me sharing my screen again. Uh present. No, share a screen allowed to share a screen, allowed to share screen. Uh OK. Can you see me yet? Not yet. No, so no share screen, share screen. Uh still not know. OK. Uh So if I was gonna, I don't, it's a good job. You work before, isn't it? Oh, there we go. Yeah. OK. Um Fine. So if we go to gosh, I had a lot of slides, didn't I, I talk fast? OK. OK. So in the exam, um er, so if I was an examiner, um what I'd ask you to do um is I'd ask you to. Um s so one of the things II might ask you to do is to, to label this funnel plot. Um er and so I'd s so I deliberately um so II actually created this and it's, I think, and this is a manager's book. Um And, and if I was an examiner, I'd take that and I'd, I'd scrub all of the um all of these bits out and I'd ask you to, to, to, uh, firstly to label it and if I was gonna label it, um, I'd label it as I have. So I'd label it as the number of cases. So the increasing number of cases along here, um uh I'd label it here, I'd probably label it as failure. Um, because typically we're looking at a failure rate. Um, then I'd say, ok, so, so we've got a failure rate and our failure rate is gonna be um er, is gonna um overall, there's gonna be a, an average failure rate for a whole sample and that's what our target line is gonna be. So that's gonna be our mean failure rate and it's gonna be our target line. Oh, I've gone. Um that's gonna be a mean failure rate. Um And around that we're gonna create, um we're gonna create control limits. Um And so our control limits are gonna relate to. So, so they're gonna be um er er, so we're gonna base it around three standard deviations um is what the NJ ri think does. Um So we're gonna, um so we're gonna draw control limits. Uh And so our control limits are gonna narrow or, or gonna get reduced um as that number of cases increases. So our control limits are gonna, er, er, tighten um or, or, or reduce uh as a number of cases increases because I'm more and more certain about where the true me where the true average lies for each person. Um um er, and that's simply because the more and more we do, the more, er, the, the more certain we are regarding a, regarding a measurement er, in this data set. Um because this is all natural cause variation. So this is all statistical variation. Um And so increasing our sample reduces statistical uncertainty. And we, we've shown that with, with our confidence intervals and with everything else we've just done. Um, and so anything within the control limits is a statistical variation and we call that common cause variation. And that's just, that's just life, that's just what happens. That's just statistics, you know, numbers can w it can be here one day and then the next day. But that's just just how it is. If we're outside those control limits then something different is happening and it may just be unlucky. Um, um, but, but more likely it's something, something funny is going on and if it's up here, then, er, because this is failure, then if it's up here then something bad is going on and if it's down here, then something good's going on. And so we wanna try and, um, uh, we wanna look at both ends of the control limits. Um, cos we wanna know what, what good's going on down here to, to try and make everyone the same and try and, er, and if we were gonna improve the whole population, then the idea would be to try and lower the overall target line to lower the overall failure line by learning from these guys. 
And again, if we made those units better, we'd lower the overall target line once more, because ideally you want a lower and lower target line, a lower and lower failure rate. Does that make sense?

Yes, that's wonderful. I wanted to add to the question you were just answering: someone was asking, if we're asked what percentage are outliers, do we take it that it only means the bad outliers, and would that be one standard deviation rather than two?

No. This is three standard deviations either side: the upper line is three standard deviations above the mean and the lower line is three standard deviations below it. Anything more than three standard deviations above the mean sits up here; anything more than three standard deviations below the mean sits down here.

Great. And I have a few more questions for you, if you're happy to answer. Can you elaborate on Kaplan-Meier graphs and how to answer an FRCS examiner asking what one is?

Sorry, I didn't put that one in, did I? How do I stop sharing this again? It's all gone funny. The middle circular button at the bottom. There we go. OK. If I were going to talk about Kaplan-Meier, I'd say that a Kaplan-Meier plot is a plot of failure against time, typically failure of a joint replacement, or it may be death, showing how events occur over time. We have a line, and the line should have a confidence interval running along it, because there's always some uncertainty around the estimate. As time goes on, the number of patients we're still following almost certainly becomes smaller: we will always have more hips, and more events, at one, two or three years than at twelve, thirteen or fourteen years, just from the way we've been measuring things. So we can always see failure, but our certainty about the failure rate is greatest at the beginning, and the confidence interval around the failure rate widens as time goes on. Each time there's a failure there's a step change in the plot. At certain points people may also drop out of the denominator, depending on what our rules are, if they die or if they move away and we can't measure them any more. Those people are said to be censored, so they're not included any longer. And that's about it; sorry, I didn't really prepare that one.
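As a purely illustrative aside, here is a small hand-rolled Kaplan-Meier estimator in Python with invented follow-up data. It is a sketch, not the analysis used by any registry; in practice you would reach for a library such as lifelines, but the hand-rolled version makes it clear where censored patients leave the denominator.

```python
# Minimal Kaplan-Meier sketch (illustrative only; data are made up).
import numpy as np

def kaplan_meier(times, events):
    """Return (event_times, survival) for follow-up times and event flags.
    events: 1 = failure observed (e.g. revision), 0 = censored (died, moved away).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv = 1.0
    out_t, out_s = [0.0], [1.0]
    for t in np.unique(times):                    # unique times, ascending
        d = np.sum((times == t) & (events == 1))  # failures at time t
        n = np.sum(times >= t)                    # patients still at risk at t
        if d > 0:
            surv *= 1 - d / n                     # product-limit step
            out_t.append(t)
            out_s.append(surv)
    return np.array(out_t), np.array(out_s)

# Example: 10 hips; 1 = revised, 0 = censored
t = [1, 2, 2, 3, 5, 6, 7, 8, 9, 10]
e = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
times_, surv_ = kaplan_meier(t, e)
print("cumulative failure:", np.round(1 - surv_, 3))  # registries usually plot 1 - S(t)
```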
There's another question after that. Someone is asking: recently there have been more and more systematic reviews and meta-analyses; what do you think about the role of umbrella reviews in the future for building evidence for guidelines?

So, there are lots of different types of systematic review. There are living systematic reviews, which are constantly updated; there are umbrella reviews; there's a whole heap of different designs. You can even do network meta-analyses, where you end up comparing treatments that were never actually compared head-to-head in the first place. If someone compares apples with oranges, and someone else compares apples with pears, in a network meta-analysis you can then compare oranges with pears by combining all the data. So there are loads of cool ways to meta-analyse and look at data. Whenever you're doing it, though, you need to do it properly, and ideally that means getting statisticians on board, because if we're honest, the rest of us are just playing at this. Funding bodies, the NIHR for instance, routinely give hundreds of thousands of pounds to statisticians to do different reviews. So if you've got a genuine question that you think needs a clever, fancy review, reach out to your local statisticians, your local university, your local academic, and say, I think this is a really good idea; how might we get funding to get statisticians on board to support it properly? All of this is really tough, and I wouldn't start doing that sort of thing on my own any more. I may have done at one point, and I would have been wrong.
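To make the idea of comparing treatments that never met head-to-head a little more concrete, here is a hedged sketch of the simplest building block behind a network meta-analysis: an adjusted indirect comparison through a common comparator, often called the Bucher method. Every number here is invented purely for illustration.

```python
# Indirect comparison sketch: A and B were each compared with C, never with
# each other. On the log odds-ratio scale the indirect A-vs-B effect is the
# difference of the two direct effects, and the variances add.
import math

log_or_AC, se_AC = -0.30, 0.12   # hypothetical: A vs common comparator C
log_or_BC, se_BC = -0.10, 0.15   # hypothetical: B vs common comparator C

log_or_AB = log_or_AC - log_or_BC            # indirect A vs B estimate
se_AB = math.sqrt(se_AC**2 + se_BC**2)       # its standard error

ci_low = log_or_AB - 1.96 * se_AB
ci_high = log_or_AB + 1.96 * se_AB
print(f"indirect OR, A vs B: {math.exp(log_or_AB):.2f} "
      f"(95% CI {math.exp(ci_low):.2f} to {math.exp(ci_high):.2f})")
```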
And the next question: excellent talk, Prof Perry, thanks. I'm an SHO interested in trials, just wondering how to get involved with recruitment; I have no experience of them yet. I saw the training a colleague had, but I'm not sure how to go about this.

OK. The best thing you can do is first look at what studies your hospital is doing. Pretty much every hospital in the UK is now doing an orthopaedic study, and even better, a children's orthopaedic study. Then go and talk to the children's orthopaedic surgeon, or whoever you think might be leading the research at your site, and say, look, I'm really interested, how do I get involved? The other way is to look at our website, tops research.org, or the Oxford Trauma website, which has the adult studies as well. All of the studies we lead have the rules set out, so take a copy of those to the principal investigator at your site and say, I want to be part of this, how do I do it? And if they're not helpful, email me, or email the chief investigator of whichever trial you want to join, and we'll help you. There are massive wins for us in having you engaged; we want you to be engaged.

Wonderful. Another question, and I'll just get through this one: is a randomised controlled trial the best type of study to test the efficacy or effectiveness of therapies or interventions?

An RCT is always going to be the best. There are lots of different ways to look at studies, and people argue about this. Orthopaedic surgeons constantly say to me, look, we've got registries and we want to base everything on registries. Registries are fine, and they do give us an overall estimate of an effect, of what's going on in the population at the moment, but they don't control for confounding. In registry data, people make treatment decisions based on their own opinions, on what they think is best, so there are always going to be confounding factors. The ideal, and what I'd really love to do, is to randomise within a registry. If we can get registry-based randomisation going in the UK, randomising within the NJR or within the NHFD, that's the really cool way to answer all of these questions. Then you've got your beautiful registries giving you all the beautiful data, but it's also randomised, and that's how the UK will really change the world: when we start doing registry-based randomised controlled trials, which I don't think is far away. We certainly have the data, especially in orthopaedics. I know Ria is a medical student, so she doesn't yet appreciate how cool orthopaedics is, but the chief executive of the NIHR, Lucy Chappell, actually cited orthopaedics the other day as one of the places where this should start to happen. The new government is really keen on this, and orthopaedics is probably where it starts, because we are so rich in data and we've got such a cool group of clinicians, and that's all because you guys are cool.

And we've got another question. Hi, Prof, I'm a junior doctor with a strong interest in research and I'm willing to put in the effort, but I'm unsure how and where to start. Do you have any suggestions for a beginner, or is there any way I could join you in any of your current research work? Thanks.

I think the way in for a beginner is to start doing recruitment, because that's really valuable, and if you're good at recruitment, all the chief investigators are going to love you. Then try to team up with one of the local clinical academics, preferably someone in the area you want to work in, and it doesn't even have to be local any more. If it's children's orthopaedics, perhaps me or one of the people who work alongside me; if it's trauma, Matt Costa or Griffin. Reach out to one of those people in the area you want to work in. Ultimately, if you want to be an academic and do it properly, you need to do a PhD. That's where you learn your methodology and how it all works. It's really fun and cool to do research as a trainee, but doing a PhD or a higher degree gives you the depth you need to fly.

Wonderful. And we have another question: could you please elaborate on the meaning of a p value of less than 0.05? OK.
So p values are what everyone focuses on, and what you all love. A p value is about the chance of a type I error: the chance of the study telling you there's a real difference, that a treatment is really good, when in truth there is no real difference at all. So a p value of less than 0.05 means that, if there were genuinely no difference, there would be less than a 5% chance of seeing a result as extreme as this just by chance. In other words, I'm accepting no more than a 5% risk that what I'm telling you is a false positive. Is that cool?

Wonderful. That's a really good explanation.
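As a hedged illustration of where such a p value can come from, here is a minimal two-proportion z-test in Python using invented failure counts. It is just one simple way of producing a p value against the conventional 0.05 threshold, not the specific analysis used in any of the trials discussed.

```python
# Two-proportion z-test sketch (made-up numbers). A p value below 0.05 is the
# conventional threshold for "statistically significant", i.e. at most a 5%
# risk of a type I error: declaring a difference when the null hypothesis of
# no difference is actually true.
from statistics import NormalDist

failures_a, n_a = 12, 200   # hypothetical failures after treatment A
failures_b, n_b = 26, 210   # hypothetical failures after treatment B

p_a, p_b = failures_a / n_a, failures_b / n_b
p_pool = (failures_a + failures_b) / (n_a + n_b)
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
z = (p_a - p_b) / se

# Two-sided p value: chance of a difference at least this extreme if the two
# true failure rates were identical.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```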
Apart from that, I can't see any more questions at the moment. While we wait, I'll just remind everyone to please fill in the feedback to get your certificate; it's also helpful for our wonderful speaker to get some feedback on the talk, and it was a brilliant talk. Please don't forget to follow us on Medall to get notified about more events like this, more events in orthopaedics and certainly in paediatric orthopaedics.

Oh, we have another question. But more importantly, recruit to the trials. Yes. And please also follow BOTA; I've dropped their link in the chat a bit further up. Someone else is asking: what are some of the controversies regarding experimental versus pragmatic trials?

OK, we'll make this the last one. Whenever we do pragmatic trials there are always controversies, because a pragmatic trial looks at an intervention at a population level, and there are always lots of assumptions built in. We used the example earlier of total hip replacement versus hemiarthroplasty. If we compare those two interventions, you might say to me, well, what sort of total hip replacement? What's the bearing surface? What make is it? There are lots of nuances in the intervention, or in the population, or in the comparator, or in the outcome, that you as an orthopaedic surgeon might question and ask what exactly they mean. But in a pragmatic trial we typically say we're going to compare all of them together. For me, at the moment, it's Perthes disease: I want to compare containment surgery versus no containment surgery. You can ask, what sort of containment surgery? And I'm saying, I don't care what containment surgery you do, just do containment surgery. So I know from the start there will be criticism, because people will say, well, actually, if you did more of this particular containment surgery, it might be better.

But what I'm saying is: I want to see whether doing any surgery, whatever your favourite type of surgery is, is better than doing nothing; that's ultimately the question. Then, in terms of the population (we're going through PICO in the wrong order here), I'm saying we're going to look at children between six and twelve years old. You might say, well, what about six- to seven-year-olds specifically; does that make a difference? And I'm saying, I don't care: we currently use this treatment in six- to twelve-year-olds, so we're going to take the whole population that's relevant, with an intervention and a comparator that are relevant. And then we're going to use the outcome of the child's function, and you might say, well, what about their x-rays? And I'd say that's all very interesting, but the function is what's relevant to the child, not the x-rays, although of course we will measure the x-rays too. I'm being a little facetious, but a pragmatic trial is meant to be just that, really pragmatic: in the context of the NHS, where lots and lots of variables are changing, what can we actually deliver? Whereas an experimental trial does exactly what you want it to do: it might take children between six and seven years old and compare a Salter osteotomy versus a very specific physiotherapy regime versus a varus osteotomy with fifteen degrees of varus, with an outcome that's really niche, an acetabular measurement or the Stulberg classification or whatever it's going to be. So with my big pragmatic trial, you're always going to be able to criticise different elements of it, but I'm always going to come back and say: this is what the NHS can deliver, in the context of the NHS; this is the best sort of study we can possibly design that covers our whole healthcare system. And if there's no effect, no good result, then you really have to take a step back and ask how we're going to do things differently to actually justify this intervention. I've gone off on one a bit there.

That was a wonderful answer. I think we'll wrap it up there. Thank you so much for your talk, and thank you so much, Catherine, as well; I can't see Catherine, but thank you. Thank you so much, everyone. We will be back with more talks like this, so please don't forget to fill in the feedback and follow Medall and BOTA. Cool, cheers guys.