Summary

This on-demand teaching session is designed to help medical professionals understand the basics of study design, key research terminology, and forms of bias. It covers topics such as longitudinal studies, cross-sectional sampling, prospective and retrospective data, explanatory studies, and the hierarchy of evidence. It's a great resource for preparing for interviews and understanding the evidence behind clinical decisions.

Generated by MedBot

Description

Welcome to Session 3 of our 123 Series on the Specialised Foundation Programme!

Here, we’ll introduce you to key research terms that are essential for the academic portion of your interview!

We'll cover different study designs, going through the pros and cons of each. Not sure of the difference between a composite outcome and a surrogate outcome? What about common sources of bias, or the methods used to randomise? And what do intention-to-treat and per-protocol even mean!?

Find out by tuning into this session with the 123 SFP series - this is one not to miss!

Learning objectives

  1. Identify the components of study designs, such as prospective and retrospective studies.
  2. Understand the hierarchy of evidence and where the gold standard of evidence lies.
  3. Describe the differences between longitudinal studies and cross-sectional studies.
  4. Learn to recognize bias and confounding factors in research.
  5. Analyze the data from a case report, case series, or observational study.

Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Just so everyone can come in, I want to have a quick scan through and check there's nothing broken. I've got it working on Chrome as well as Firefox, though the chat function doesn't work very well. Everyone, we've started broadcasting, and we're just going to wait a couple of minutes to let everyone come in. Hi, everybody. We'll start in a minute or two. I hope you all enjoy my stripey shirt this evening, which Aqua tells me makes me look somewhere between a 16-year-old trying too hard and a 40-year-old. As long as we land somewhere in the middle. Can you guys hear us and see us? We've got 15 in, and that includes us, so 13 in the audience. Can somebody type in the chat to confirm they can hear us, please? Hi. Hello. Thank you so much. OK, we've got it. Thank you. Right, so shall we get going? Yeah. We've managed to improve our timings with each successive session, so we'll start now and make sure we have plenty of time for a Q&A at the end without overrunning; that's the goal today. So welcome back, everybody, to our series on helping you get into the Specialised Foundation Programme, which we're calling 123 SFP. We have to give the disclaimer before we carry on, as we always do: everything you see in this talk and in the series overall, and the views presented, are solely those of the presenters and do not reflect those of the NHS, of our employing trusts where we work as doctors, or anything official to do with the Specialised Foundation Programme. This is all our own content, our own thoughts and views, and our best advice to help you get where you want to be. Unfortunately, our colleague Alex is on call, which is why he can't join us, but he did contribute to the organisation and the slides; that's a bit about him on screen.
I'll get Ali to introduce himself very quickly, because I can see a lot of people who attended our previous talks. Yeah, I'll keep it really simple. Hello, everyone. My name is Ali. I'm an AFP, or Specialised Foundation Year 2, doctor working up in Newcastle in the north-east of England, about to rotate onto my academic block, as it happens, come December. And I'm an academic Foundation Year 1 doctor; I'm currently on my clinical rotations and will be going on to stroke. I'll have to think about what I'm doing with my life! OK, I'll take it away. So I'm afraid this session is going to be a bit more didactic than the others, guys, by which I mean it will be more of us going through lecture slides and talking at you. That's because what we're talking about today is some of the basics of study design, key bits of terminology you'll need to understand research and the later sessions, and some forms of bias. That's what we're covering today. And as Aqua raised, a really important point: any questions, please let us know in the chat; whichever one of us isn't speaking will be monitoring it. There's really no way to go through this material other than talking through it, so please bear with us. We'll make it as understandable as possible, and any time you don't understand something, let us know. Let's start with some study designs: forms of experiment, ways you can actually test the clinical and scientific questions you want to answer. You'll have heard of a lot of these before. Hopefully some are familiar; some might not be, and that's OK, that's what this is all about. First, a longitudinal study. Any time you see this word, it means the study in question is dealing with patients over time, over some longitudinal period from beginning to end.
So you might be looking after them during a stay in hospital: if they're in for seven days for a surgery, for example, you might track them during that time, then follow them up for six months after their surgery. If you're doing something like a big drug trial, you might follow up for ten years after the trial. It just means you're measuring at more than one point in time. I'm going to go to cross-sectional next, which is actually on the other side of the slide, because it's almost the opposite: you're capturing a snapshot of your sample cohort at one point in time. For example, say we took all the junior doctors working on Aqua's ward, where she's working in general surgery. Maybe none of them has had a chance for a toilet break all week, and we want to look at their urea and electrolyte levels. What you could do is take a blood sample from all of them at once, and that's a cross-sectional sample: you've taken one measurement of those U&Es, those blood tests, at one point in time for your sample population. You're not doing any follow-up or repeat sampling at another time; it's just what's happening in the instant you measure. Coming then to prospective: that refers to a situation in which we're planning something that we're going to start measuring now and continue measuring into the future. So if I were going to plan the study we were just talking about, taking blood samples from those doctors, we'd be collecting the data as we generated it: taking those blood tests, analysing the results, and continuing to collect more samples. Because all of this is either newly happening now or will happen in the future, rather than having been done at some point in the past, that's what makes it prospective data.
The other side of that, again on the right-hand side, is retrospective data, which deals with data that has already been collected. A really good example is something called UKMED, a database that tracks UK medical students. You may never have heard of it before, but it's a good thing to know about. It tracks medical students from the point of admission to medical school: everything you put in your application, your exam scores all the way through medical school, everything you generate, and then your practice as a doctor is all gathered into UKMED. What we could do is, say, look at everyone on this call now and look at your exam results from the first year of medical school. That would be a retrospective analysis of data collected in the past. Next, explanatory. When we talk about a study being explanatory, we're performing a study that is looking to answer a question under ideal settings, the best possible conditions in which to actually test that hypothesis. I might ask Aqua to give an example of that, if she can think of a particularly good one, maybe one she's been involved in. Well, unfortunately, a lot of the clinical-practice work is pragmatic, because those are the real-life settings; if you want to change clinical practice, you ideally want trials to be pragmatic, which is on the right, rather than explanatory, which is more about ideal settings. I can't come up with an example off the top of my head, but say we want to compare a new drug versus placebo, and we really, really restrict our sample population: you're only testing a very narrow age range, you make sure everyone is very fit and well, and you basically get rid of as much confounding as you can.
Those are testing the ideal situation, whereas unfortunately ideal doesn't necessarily mean practical. Hospitals, for instance, are a great source of captive patient populations, because patients are already there, which by design makes them much easier to gather data from than recruiting a load of people into a lab somewhere. Great. So the next important thing, and this is probably one of those things to know and remember specifically for your interviews, is something you'll all have seen before: the hierarchy of evidence. Generally speaking, things at the bottom of the pyramid, while they may be interesting, are of less clinical... I hesitate to use the word relevance; perhaps value is the better term. The higher you go in this pyramid, the more rock-solid and certain the evidence you generate is considered to be; the top is the highest standard of evidence. So when you're making a clinical decision about something, which you will often have to do as a junior if it's something routine, you want to be as close to the top of this pyramid as possible, ideally using evidence based on some form of randomised controlled trial, or a systematic review and meta-analysis of those trials. Last week we talked about ARISTOTLE, the trial comparing warfarin and apixaban, and that's a great example of a routine clinical decision we make every day that's based on that kind of evidence. Now, as we were saying before, it's not always possible to get to the top of the pyramid, depending on the specific question you're dealing with. But as juniors we operate very closely within the NICE guidelines; we're not striking out and forging new clinical pathways, because that would be unsafe. So if you're asked to look at something in the context of your interview, it's much more likely to be from the top end of this pyramid.
And when you're making clinical decisions, again, we should be working as close to the top of this pyramid as we can. So get to recognise it and learn the order; it sometimes comes up in finals as well. Right, I'll take you through what we just spoke about. A case report is the experience of a single patient. They're very easy to write, and lo and behold, I think some of us in the audience may have been involved in one. Unfortunately, they're prone to chance association, because the sample size is one, and they're prone to bias, and they're observational. An example is the one you see right there: a 19-year-old man with shock, multiple organ failure, and a rash. And just to see if you guys are tuning into our talks, can anybody tell me which journal this is from, just by looking at the front page and the colour choice? Anyone want to have a guess? Yes, well done, Catherine. Exactly, it's the font, isn't it. Well done. OK, a case series: the experience of a group of patients. It's really useful for studying rare diseases, because someone has gone and compiled a group of patients with the same pathology, and again it's observational. The example here, from the New England Journal again, is a case series of children with acute hepatitis and human adenovirus infection; a total of 15 children were identified, all with acute hepatitis. I don't think you're very likely to get either of those two in your interview. A cohort study is more likely to show up, and the reason why, and what they'll ask you in your interview, is why it's useful. You need to understand that you're specifically looking at two groups: usually one group exposed to a risk factor, which you compare with another group that's not exposed, and then you follow both up to see who develops your chosen condition. For example, if you're looking at a group of smokers versus non-smokers, you want to see who develops lung cancer.
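The cohort-study logic just described, comparing the rate of disease in an exposed group against an unexposed group, is usually summarised as a relative risk. Here is a minimal sketch with entirely hypothetical numbers (not from any slide in the talk) just to make the arithmetic concrete:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk = incidence in the exposed group / incidence in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 1,000 smokers followed up (30 develop lung cancer)
# versus 1,000 non-smokers (3 develop lung cancer)
rr = relative_risk(30, 1000, 3, 1000)
print(f"Relative risk: {rr:.1f}")  # → Relative risk: 10.0
```

A relative risk above 1 suggests the exposure is associated with the outcome; a value of 1 means no difference between the groups.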
You would perform a cohort study. However, for all of these designs you need to come up with little buzzwords, rote phrases you can just reel off for the disadvantages of each study design. You would say: unfortunately, cohort studies take a long time, they're expensive, and they're at really high risk of drop-out, and the examiners will actively be looking to see whether you say attrition bias. Because in a cohort study you're following people up over time, you're more likely to lose participants along the way. And yes, cohort studies are usually prospective, though I have heard of retrospective ones where they look back, but that's not common. Case-control studies are kind of the opposite: you already have a case group versus a control group, the case group already has the outcome you're interested in, and you're retrospectively looking at the risk factors to see what exposed that group of patients to the particular condition. For example, similar to what I said before, if you compare a group with lung cancer versus a group without, you ask everyone what they were exposed to, and that's how you assess risk factors. Case-control studies are really useful for investigating new diseases, and they tend to be quick and cheap. However, where the previous design carried a risk of attrition bias, case-control studies carry a risk of recall bias, because you're depending on your participants' or patients' memory, essentially. If you asked me what I had for breakfast five days ago, I wouldn't be able to tell you, so think about that. Cross-sectional study: as I said, it's a snapshot of a population, and unfortunately you can't really determine a cause-and-effect relationship, because you're just looking at a snapshot, and again there's no follow-up.
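Because a case-control study starts from the outcome and looks backwards at exposures, its effect size is reported as an odds ratio rather than a relative risk. A minimal sketch with hypothetical numbers (assumed for illustration, not from the talk):

```python
def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds ratio from a 2x2 case-control table: (a*d) / (b*c).
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Hypothetical: of 100 lung-cancer cases, 80 smoked;
# of 100 controls without lung cancer, 30 smoked
result = odds_ratio(80, 20, 30, 70)
print(f"Odds ratio: {result:.2f}")  # → Odds ratio: 9.33
```

An odds ratio well above 1, as here, suggests the exposure is associated with being a case, subject of course to biases such as the recall bias described above.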
And again, when I'm mentioning all of these things, you need to like, make sure that when it comes to, um if they ask, you tell me, Tell me to summarize the study. You will talk about the Peko that we went over the first week, and then you will go into analyzing what type of study designer was. And then you will go into the pros and cons and you will go into the generic generic ones because that is what gets you the marks. Because as soon as they listen to you, they were like, Okay, this person knows what they're talking about. Fine. And then you can really shine by picking out particular things in the abstract. And an example of a cross sectional study is learn. Um, don't pay attention to the authorship line at all, but this is an example of a cross sectional snapshot of urology teaching across all UK medical schools. Now, this is the study design that you will most likely be asked, Um, and our CT and this is pretty much your gold standard for studying treatment effects. Patients tend to be randomized to treatment arms, and when it comes to disadvantages, it's difficult. It's very time consuming. It's expensive. These involved ethical approval and masses and masses of efforts from different like organizations and different groups. And, you know, it's it's potentially ethical. And this is where you can bring in what we've discussed last week. Equipoise, Um, and you have to consider blinding as well. This is an example of an our CT, where you compared antibiotics with appendectomy for appendicitis and again, the beautiful journal New England. Now the next few study design are kind of rogue. However, they have come up in interviews, so they are important to talk about. A crossover trial is when patients receive a treatment and then switch to the other treatment halfway through the study. This is how you can assess which treatment works better. And in all of these they should have a washout, period. 
And this is something they can ask you about to really assess whether you know what you're talking about. A washout period is a short period of time between treatments where you want to minimise carryover effects, allowing for the drug's half-life and any withdrawal effects, just to make sure the drug is completely cleared renally and, er, "hepatically", if that's even a word, from your liver. Crossover trials are very useful for rare diseases, where a lack of patients could otherwise leave a trial underpowered, because each patient now provides twice the data thanks to the crossover. And a really neat feature is that the same patient is in both treatment arms, so patients are matched with themselves; it's a really elegant study design. Next, the n-of-1 trial. I don't know if I'm allowed to say this, but this came up in our interviews with London last year, and the n-of-1 design caught out a lot of people, unfortunately. This is when a single patient receives repeated courses of an intervention, or treatment and control, in a random or alternating order. It's similar to a crossover trial, but with just a single patient, and it's really useful for determining what works best for that individual; it's actually common in pain studies. However, it can take a really long time. And audits: I don't want to dwell on these, because it's very unlikely they'll come up in your interview. They're more to do with the standard Foundation Programme, because it's part of our GMC requirements that we all take part in an audit cycle. But where it might come up is if they ask how you would distinguish between audit and research. You would say that audits compare current practice against a standard: they analyse whether current practice matches gold-standard guidelines or local protocols.
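The washout period mentioned above is often sized using the pharmacokinetic rule of thumb that after about five half-lives roughly 97% of a drug has been eliminated. A minimal sketch of that first-order decay, with a hypothetical half-life chosen purely for illustration:

```python
def fraction_remaining(half_life_hours, elapsed_hours):
    """Fraction of drug remaining after first-order elimination:
    each half-life halves the amount left in the body."""
    return 0.5 ** (elapsed_hours / half_life_hours)

# Hypothetical drug with a 12-hour half-life: how much is left
# after five half-lives (60 hours)?
left = fraction_remaining(12, 60)
print(f"{left:.1%} remaining")  # → 3.1% remaining
```

So a washout of five or more half-lives is a common (though not universal) convention for treating carryover as negligible.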
Whereas research tends to look at something new; that's pretty much the biggest difference you'd state. Systematic reviews and meta-analyses sit at the very top, top top, and if they're appraising and analysing RCTs, that is the highest level of all, because they systematically review every single piece of evidence on the topic, on that one refined question you want to ask. A meta-analysis is the quantitative part, combining the results of a systematic review, does that make sense, usually with some quantitative assessment. And just so I can stop listening to myself for a second: can anybody tell me what type of plot this is, to see if you're all still listening? Yeah, exactly, good: a forest plot. So with forest plots, can anybody who's brave enough tell me what this one shows, just by looking at it? Let's say the left favours placebo and the right favours intervention. Does anybody want to hazard a guess? Yes, it favours the intervention. And can anybody tell me whether it's a significant or non-significant finding, and how you would know? Yes, why is it significant, Amanda? Yeah, exactly. So looking at this, both of the Smith studies unfortunately showed non-significant effects, right, because their confidence intervals crossed the line at one. Whereas the rest... well, Jones, I'm not sure that really counts, because it just touches one, but the others are definitely to the right. And that shows you the value of the meta-analysis: when you pool them all together, you've shown that whatever this intervention is, its overall effect is actually significant. Now, this next part is quite important, and as usual we'll put up the recording for you, but it's important to understand the research pathway, because they might ask you what phase a certain abstract describes. I don't want to dwell on this, in the interest of time.
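The forest-plot reasoning above, a ratio estimate whose 95% confidence interval crosses 1 is conventionally non-significant, and pooling studies can yield a significant overall effect, can be sketched as follows. The numbers are hypothetical, standing in for the Smith/Jones studies on the slide:

```python
import math

def is_significant(ci_low, ci_high, null_value=1.0):
    """A ratio estimate (odds/risk ratio) is conventionally 'significant'
    at the 5% level when its 95% CI excludes the null value of 1."""
    return not (ci_low <= null_value <= ci_high)

def pooled_log_estimate(studies):
    """Fixed-effect (inverse-variance) pooling on the log scale.
    studies: list of (log_effect, variance) pairs; bigger studies
    (smaller variance) get more weight."""
    weights = [1.0 / var for _, var in studies]
    total = sum(w * est for (est, _), w in zip(studies, weights))
    return total / sum(weights)

# Hypothetical individual studies (95% CIs for an odds ratio):
print(is_significant(0.8, 1.3))  # CI crosses 1 → False (non-significant)
print(is_significant(1.2, 2.5))  # CI excludes 1 → True (significant)

# Pooling two hypothetical log odds ratios: the combined estimate
# lies between the individual ones, weighted towards the more precise study.
studies = [(math.log(2.0), 0.04), (math.log(1.5), 0.09)]
pooled_or = math.exp(pooled_log_estimate(studies))
print(f"Pooled OR ≈ {pooled_or:.2f}")
```

This is only a fixed-effect sketch; real meta-analyses must also consider heterogeneity and random-effects models, which are beyond this illustration.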
But sometimes they may give you a phase one or phase two study to look at, and you need to know specifically what each phase is testing. So in phase two, for instance, you want to determine safety, whereas in a phase four study you know it's already past marketing authorisation, so you want to assess side effects in the wider population. OK, I'll hand it back over to my colleague. Right, I'm going to take you through the less glamorous half of the talk, guys, but we're actually doing really well for time, so there should be plenty of time to discuss this more at the other end. In the second half we're talking about bias. It's a really, really common term in research, and I think it's very reasonable to be asked about it; I was certainly asked about it in my interviews. Aqua, if you can reveal, do you think it's relevant? Yes, that's why we're talking about it. Bias means various things depending on the specific context, but here we're talking about medical research. There are lots of different ways of trying to summarise what exactly bias is; this is my rather ramshackle attempt from 11pm last night: a systematic error, a form of error in either the collection of data or the analysis of that data once you've got it, which leads researchers, the people interested in answering a question, to deviate from the truth. That's my best summary of it. The word systematic is important, because it means the error is generated by the way that we do things; it's inherent to the process we're trying to carry out. That's what I'm trying to capture. So, what's really important about these eight biases on screen, some of which we've already mentioned earlier in the talk:
These are not the definitive eight that everyone must know; they're eight examples of biases that may or may not appear in different studies. I think the best thing to do, and I'd invite Aqua's thoughts on this as well, is to go away and read about common types and sources of bias in experimentation, because there are loads, so, so many, and there are often umbrellas of categories within categories: one effect will be some form of bias which is itself a category of different biases. This is just about introducing you to the terms and what they mean. There are so many different types of bias, but these eight, at a minimum, you should be able to recite in your sleep, and I only say that because I know the calibre of the candidates listening to us today. So, before we jump into some definitions, because we're going to give you a definition and an example for each to put them in context, the thing we've got to understand is that bias is inescapable. You cannot eliminate bias; it's impossible. It is, to a degree, a side effect of our own cognitive processes, of being human, and it affects everything we do. You'll have heard about cognitive biases before and how they affect us in the workplace, how we relate to our colleagues; it's not just research. So the best we can do is design our experiments really, really carefully to try to eliminate as many forms of bias as possible, or at least reduce their effects. But the most important thing is that we acknowledge that the bias exists and then try to mitigate it and learn from it, rather than sweeping it under the rug and assuming it doesn't exist, because it does. Oh, and I'd just like to add a very important topic that was on the slide a second ago. Yes, we can.
Basically, we can try our best to eliminate bias as much as possible, but a very good point is to throw in that key term again: PPI. You want to get as many perspectives as you can, and just mentioning that you'd like to look into how involved PPI was in the paper or research project really shows that patients are at the centre of your care. Again, it's all about the extra marks; it's all about being the holistic researcher you can be and want to be, right? Yeah. OK, so this is one of the biggest ones, one of the most important, and one of the easiest to start with: selection bias. Sometimes it's called sampling bias. A general summary of what it is: you intend to collect a random sample from within a population, but you unwittingly design your experiment in such a way that certain members of that population are more likely to be selected than others. The example given here: you want to measure how long university students spend on social media, but you only advertise for participants on TikTok and no other platform. Now, for the uninitiated, TikTok is a platform that exclusively does short-form video content, and that's all it does. So the population of people who use TikTok may have very different social-media usage habits compared with people who use Facebook or YouTube or Snapchat or whatever else; you can tell how old I am, can't you. But the point is that someone who uses YouTube a lot, if you had them as part of your stakeholder group, your PPI group, could step in and say, "Well, I only use YouTube, and I use it to watch documentaries," which means their usage is going to be very different.
So it's just a reminder: if you're going to select from a population, everyone within that population has to have an equal chance of being selected; otherwise it's going to skew your data, and that's what we call selection bias. The next one is called response bias, and this is a group, an umbrella term, of different biases. Response bias appears when you're asking a participant to respond to something. Most of us will be familiar with a Likert scale, the thing you see where there are usually five points between "strongly disagree" and "strongly agree", or "rate this from 1 to 5", whenever you're asking someone for their feelings about something. Response bias is essentially a series of biases, coming from different places, that make participants more likely to give you false or inaccurate information, which are not the same thing, but either way data that isn't true. A really easy example is something called courtesy bias, where a participant alters the answer they give you so as not to appear discourteous, impolite, or unfriendly. You can imagine it, right? Let's say I go to a restaurant, and a waiter comes and puts a feedback form in front of me and stands there while I fill it out: "How much did you like the food? Rate it from 1 to 5, please." And they stand there watching me. Now, simply by the fact that they've given me this form and are asking me to respond in that particular situation, it massively influences the data I'm going to give them. I might say everything was fine, I loved it, when in reality it was awful. That's just one example of a form of response bias: when you ask someone for something, you've got to be sure they are safe to give you an honest answer that won't influence anything else about their life.
So when it comes to a study that uses, for example, validated questionnaires, you want to make sure the patients are pseudonymised or anonymised, so they can really give you their true responses without feeling embarrassed or ashamed; courtesy bias, as Ali said. In my example, because I'm very interested in urology, if I want to look at erectile function I definitely want to make sure participants have a safe space to really express themselves and answer the questions, right? Yeah, and in that example the surgeon who did the operation should not be the one asking them how they're doing, because they're more likely to lie. Reporting biases, then, as the name suggests, are another series of biases that appear at the reporting stage, when we're looking at publishing our data. This is really important, actually, because it leads to a skew in what other scientists, our colleagues, and ultimately the public see, and I'm sure it's easy to imagine why the consequences of that can be really significant, even disastrous. An example you may have heard of is publication bias. It has been shown in statistical studies of the literature itself that the single biggest predictor, the single biggest thing you can do to ensure your paper is published, is to have positive results for the question you're asking, and that should really speak volumes. If the single biggest predictor of a study even being published is a positive result, that means studies that either show inferiority, which we've discussed before, or report a negative result, does our tablet work, does giving antibiotics before surgery change anything about the outcome, and the answer is no, we're unlikely ever to see that paper. It's not sexy, it's not hot, it's not interesting. It might be very valuable to patients, but maybe not very interesting to a journal.
Then confirmation bias, which again you've probably heard of: the tendency to seek out data or results that agree with what we already think, which is just a very human thing. You might encounter this when collecting data as part of a study. If you think, I don't know, that preoperative warming was going to help patient outcomes in your group, and then you find it didn't make any difference, you might be tempted simply to think: well, it should work, the science says it should work, so the fact that it doesn't work when I measure it is probably just experimental error; I'll disregard that result and run the experiment again until it gives me the result I was expecting. Yeah, that's not good. Sounds like me in Year 6 when I kept re-rolling just to prove a point. Exactly, that's not good. We've mentioned this one already: recall bias, which obviously only appears when we're dealing with retrospective data about things that happened in the past, because it relies on human memory of a particular exposure. The example I've given: imagine we're doing a retrospective analysis, remember, that's data from the past, of causes of testicular cancer, comparing people who had a particular exposure with those who didn't. It's been shown, and I was reading a study about breast cancer specifically to do with this the other day, that people who have cancer are much, much more likely, because of their situation, to sit down, reflect on what's happened in the past, and thoroughly search their memory for different exposures. That might be things like asbestos, smoking, exposure to radiation: "Oh yes, I had a CT scan for something else at that time." They are much better at remembering specific events than the men without cancer, who would be your control group.
Because why would they? Like, I don't remember what happens to me each and every day, because I'm well and there's no reason to do that. Um, so my memory, compared to somebody who is sick or suffering from a major condition, is likely to be much worse when we're asking about exposures to particular things. This is a really interesting one, um, something called the Hawthorne effect, which is one of a series of phenomena that occur in experimental conditions when someone realizes that they're being watched or monitored, basically. So a really easy example: maybe you're a GP, and you want to monitor 50 of your patients at home and look at their blood glucose levels, so you give them a monitor and say, I want you to go and measure your blood glucose every morning, seven o'clock, whatever. If you tell them that that's what the machine is measuring, and they're aware of that, at least some of those participants are going to alter their behavior because they're being monitored. So they might change their eating habits because they want to score well on the result, because you're going to moan at them about their blood glucose if it's too high; they may change their sleep patterns; they may start exercising more, to fiddle around with that number. And it applies in virtually every experimental circumstance. Um, and then this one, observer expectancy, is similar to the response biases that we've discussed before, but it's instead to do with the researcher, and the expectations that the researcher has sort of pushing their way on to a participant in an experiment. So in a double-blinded RCT, as we've discussed before, neither the researcher (the person giving the tablet) nor the patient who is receiving the tablet knows which one the patient is receiving.
However, if I'm the researcher and I think that I'm giving them the active ingredient, the test drug, it's to do with subtle behavioral things. Like, I might say, "Oh, this tastes good, doesn't it?" or "I bet you feel great now, don't you?" Or the other way around: the patient says, "This one makes me feel really good," and I go, "Yeah, I think it does, too." Um, it's basically that expectancy behaviors from the researcher will subtly influence the outcomes and the behaviors that the patient shows. So it's again a reminder to do your RCTs properly. And then I think this is the last one that I'm going to talk about, which is called availability bias, and again it's quite simple. It's the tendency that humans have, basically, to rely on data and information that is most easily and readily available to us. So let's say that I was going to do a literature review on, I don't know, prostate cancer outcomes or something that Aqua will know a lot about. Uh, let's say that I am a researcher for a university, and my university is not allied to a big prostate cancer center, so it doesn't have subscriptions to several of the big urology journals. Um, but there are some really big and important studies that have been published in other urology journals that I don't have access to. If I'm going to do a proper literature review, I need access to those big recent trials. I can't just use the papers and the studies that are easily available to me, that are at arm's length and that my university has access to. It would be tempting, and it's unlikely that someone would necessarily spot what I had done, at least for quite a long time, maybe when it got to peer review. But it's just, basically: do your research properly. And on a similar vein, I mentioned this, I think, maybe two sessions ago: the Tower of Babel bias, where you're excluding languages that aren't English.
Again, it's kind of similar to availability bias: you're not using the true data available around that evidence, or that topic. But now I briefly want to talk about confounding. Essentially, a confounding variable, as I'm sure a lot of you know, is a third variable in a study examining a potential cause-and-effect relationship. An example of this could be if you're trying to compare the rate of ice cream consumption to the number of sunburns: obviously, the confounder is hot temperature. And can anybody give me an example of another confounding factor? It can be a really simple one, just so that again we get some sort of engagement, because I realize that we're just talking at you rather than with you. Yeah, exactly: living next to the beach probably makes it more likely that you're going to be sunburned, because you're more likely to spend more time outside. Yeah, exactly. So how can you battle confounding? And these are, again, just methods that you can rattle off when you're talking about it. So with restriction, you basically restrict your group by only including subjects with the same values of the potential confounding factors. So, for example, if you want to look at whether a low-carb diet causes weight loss, since you know that age, gender, education and exercise intensity are all factors that might be associated with weight loss, you choose to restrict your subject pool to only 45-year-old women with bachelor's degrees who exercise at moderate levels of intensity, between 100 to 150 minutes per week. So, yes, it's relatively easy to implement, but it restricts your sample size a great deal, and it actually opens you up to potentially even more confounders if you restrict that much. When it comes to matching, if we use the same example, you match up your subjects based on age, gender, level of education, whatever, and you get to include a wider range of subjects.
But each subject on a low-carb diet is matched with another subject with the same characteristics who's not on the diet. So for every 40-year-old, highly educated man who follows a low-carb diet, you find another 40-year-old, highly educated man who does not. Then you compare the weight loss between the two subjects, and you do this for all the subjects in your treatment sample. So you can include more subjects than in the restriction method, but it can be difficult to implement, because you need to find pairs, and that's not really ideal, is it? With statistical control, um, it's easy to implement, and it can also be performed after data collection. This is more to do with stats, which we'll talk about in another session, but this is where you add your possible confounders as variables in your regression model. So that's more stats-heavy. Does that make sense? So in your regression model, you would add age, sex, blah, blah, blah as different variables, which again we'll talk about another time. But I think the most common thing that we will see in our SFP interviews is randomization. So: a huge, huge, huge group of subjects, and you just randomize the whole group. Um, half of them will be on the low-carb diet, and the other half of them will keep their normal eating habits, and it will allow you to account for all possible confounding factors, including the ones that you may not actually observe directly. And it may be the best method for minimizing the impact of the different confounders. However, as you know, in an RCT, randomization in general is difficult to carry out, and you need to make sure that the ones that are allocated to the treatment group actually receive the treatment, which again might be tricky. Now, this is obviously something that you will inherently know, because you read this in the methods, in the abstract, right?
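The randomization idea just described can be sketched in a few lines of Python. This is only a minimal illustration: the subject list and arm names are made up, and a real trial would use proper allocation concealment (sealed envelopes, a central randomization service) rather than a script like this.

```python
import random

def randomize(subjects, seed=42):
    """Randomly split subjects into two equal arms, so that confounders
    (age, sex, exercise habits, ...) balance out on average."""
    rng = random.Random(seed)   # fixed seed only so the example is repeatable
    shuffled = list(subjects)
    rng.shuffle(shuffled)       # shuffle breaks any link to confounders
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical subject pool of 100 people
subjects = [f"subject_{i}" for i in range(100)]
diet_arm, control_arm = randomize(subjects)
print(len(diet_arm), len(control_arm))  # 50 subjects in each arm
```

The point is that because allocation depends only on chance, any confounder, measured or not, should be distributed similarly between the two arms in a large enough sample.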
Um, but this is again something that you need to be mindful of, because when you're reading an abstract, it's not going to give you all the information about inclusion and exclusion criteria. So this is another example of where you can show off and say that you would like to look into the full paper to really assess the inclusion and exclusion criteria. So, on the screen right now, a bad example: subjects will be included if they have insomnia. Okay, cool, but that's kind of vague. How are you going to establish, how do you define, insomnia? Whereas a good example would be: they will be included in the study if they've been diagnosed by a doctor, and they have had symptoms for at least three nights a week for a minimum of three months. It's very, very clear: the diagnosis, the symptoms, and you're specifying the timeframe to make sure that the condition is more likely to be stable throughout the study. And again, this will all make sure that the internal validity is intact, and that will make your evidence stronger, because you can more reliably draw conclusions from it. Exclusion criteria, a bad example: subjects will be excluded if they're taking medications. Okay, that's pretty much everyone; I would be excluded because I take caffeine. You know, it's too broad; there are many, many different forms of medication, and not all of them will interfere with your study results. A good example: subjects will be excluded from the study if they're currently on any medication that affects their sleep, or any other drugs that, in the opinion of the research team, may interfere with the results of the study. So, yes, that probably makes more sense: something like gabapentin, or something that is known to make you drowsy, like codeine. That is logical. Now, me, Ali and, um, Alex have been talking about these three terms quite a lot, but I think it's important that we just briefly discuss them a bit.
And at least it only made sense to me when I saw a graph, but briefly, because you will see this again and again and again: a superiority trial is when the intervention proves to be better than your control. An equivalence trial is seeing if the intervention is equal to the control: the new drug is not unacceptably different compared to the current standard. Then there is noninferiority, where essentially the intervention is not worse than the control. So it may be marginally less effective compared to the standard, but that loss of efficacy is acceptable to us, because the new treatment may in the longer run be cheaper, and it might be a suitable replacement for the current standard that we have. So this really made sense to me: we set a noninferiority margin, and anything to the left of it indicates that the standard, what we have right now, is better. Past the zero line, we can conclude that the treatment is better. If the result falls between the negative noninferiority margin and the positive noninferiority margin, it's equivalent; it's basically equal. If it's anything above the negative noninferiority margin, then we can declare it non-inferior, and if it truly lies above the zero line, we can conclude that it's superior. I would really reflect and take, you know, a decent amount of time looking at this, so you truly understand the differences, because these terms will be second nature to you when you're a working clinician. I'm just going to go through this really quickly as we draw things to a close, but these are terms that you will have heard before, and these again should be rattling off your tongue, because they are the very bare-bones version of everything that we're talking about here. And you may be asked: you know, if you were to design an experiment for X, what would be your hypothesis? What would be your alternative hypothesis? What would be the null hypothesis, which we'll talk about?
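If it helps to see that graph as logic, here is one possible way of coding up the decision rules just described. This is a sketch under assumptions: the effect is measured as (new treatment minus standard), a positive value favors the new treatment, and we judge the whole 95% confidence interval against the margin. The function name and labels are illustrative, not terms from any official guideline.

```python
def classify_trial(ci_low, ci_high, margin):
    """Classify a trial result from the 95% CI of (new - standard)
    treatment effect, given a noninferiority margin > 0."""
    if ci_high < -margin:
        return "inferior"        # whole CI is left of the margin: standard wins
    if ci_low > 0:
        return "superior"        # whole CI is right of zero: new treatment wins
    if -margin < ci_low and ci_high < margin:
        return "equivalent"      # CI contained within the +/- margin band
    if ci_low > -margin:
        return "non-inferior"    # CI excludes the "unacceptably worse" region
    return "inconclusive"        # CI spans the margin: no conclusion

# e.g. with a margin of 1.0, a CI of (-0.5, 1.5) excludes -1.0, so it
# demonstrates noninferiority without showing superiority
```

Note that superiority implies noninferiority; the function simply reports the strongest claim the interval supports.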
So a hypothesis is a supposition, an assumption made based on the evidence that you already have, without any assumption of truth. And that's the key thing: it is an assumption, not a dictum, not a certainty. So it might be: laparoscopic surgery requires smaller incisions than open abdominal surgery; therefore, the rates of postoperative infection should be lower in laparoscopic surgery when compared to open surgery. It kind of makes sense within itself; it's a form of inductive reasoning, but there are no assumptions made of truth, and so you would have to design an experiment, a study, that would tell you the answer. That's called the hypothesis, or the alternative hypothesis; they are the same thing, the thing you are testing. Whereas the null hypothesis is that there is no significant difference between the two tested populations, and that any observed difference between those two populations (so it might be infection rates in open abdominal surgery patients compared to laparoscopic surgery patients for appendicectomy), any difference in infection rate between those two groups, is due to either experimental error (that is, something that we're doing) or random chance; there is no experimental evidence for a difference. Now, I briefly want to talk about these, because I think they are very, very important, and I alluded to this previously as well, in a different talk that we did. A composite endpoint is a bunch of individual endpoints combined to make an overarching umbrella endpoint. For example, if a study, as you see on the screen, investigates a drug to try and prevent a vascular ischemic event, it might combine rates of MI, stroke, death and rehospitalizations to form a composite endpoint, and it's really important to assess these. Um, but I want to just remind you that the process of evaluating these is different when you look at surrogate endpoints.
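To make the null hypothesis concrete, here is a hedged sketch of how the infection-rate comparison could actually be tested: a standard two-proportion z-test, where the null hypothesis is that the two surgery groups share the same infection rate. The counts below are invented purely for illustration.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test of H0: the two groups have equal event rates.
    x1/n1 = events/total in group 1, x2/n2 likewise for group 2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled rate under H0
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # standard error under H0
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF, Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: 20/200 infections after open surgery vs 8/200 laparoscopic
z, p = two_proportion_z(20, 200, 8, 200)
```

A small p-value means the observed difference would be unlikely if the null hypothesis were true, so we would reject it in favor of the alternative; a large p-value means we have no evidence of a difference, which is not the same as proving the rates are equal.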
With surrogate endpoints, you're looking at the causal flow to determine the legitimacy, I guess (I think that's the correct word), of the surrogate endpoint. So we can measure and try to analyze LDL, because we kind of know that it could lead to plaque formation (that should say plaque, not play), which could in turn lead to an MI, if that makes sense. Because it might for us be easier to analyze LDL by looking at the triglycerides and all of those cholesterol components in our blood, rather than MI straight on, because that might not even be ethical: to be like, let's just wait and see how many of our patients have MIs, you know? So that's the difference between surrogate and composite: composite matches them all up, which is very, very common in cardiovascular studies. So, for example, if you get one in your interview, you will be like: I note that they use a composite outcome, which is a combination of X, Y and Z. And then you can say: and I know that they've also used a surrogate endpoint, which is looking at something else to infer some sort of causal relationship to the actual outcome, which is most likely the rate of MI. Does that make sense? Now, this slide is for your own learning, and again, this is for you to study: just advantages and limitations. I want you to go ahead and read this, um, because I'd rather you read them than me explain them too, because we are getting close to the end of, you know, our talk. Are there any questions? When you watch the recording, you can just play back and pause over it. Yeah, well, maybe hang around for about five minutes, guys, and then we'll end on time and let you get back to your evenings. And just as usual: please, please, please, could you provide some feedback? Please, please, please, please, please, literally. Amanda's asked... I think there's a word missing. Yeah.
How do they ask you about it? I mean, we'll each have had different experiences, so what do you think? Yeah, sure. So in London, for example, it'll be specific, and it will be very examiner-dependent. For mine, they were really trying to pluck each little, I guess, brain cell that I had in me. It's fine, it's fine. Um, so they would ask me: okay, what do you think about the sample? What do you think about their randomization process? What do you think about their blinding? They're trying to extract it out of you, and they're looking to see if you will say those buzzwords. Whereas some examiners will be like: describe the study. Nothing, zilch, they don't give you anything; they give you just their blank expressions, and you need to be able to read the abstract and be able to say: okay, this kind of means that this could be leading to selection bias; okay, this could be leading to recall bias. You know, you need to just think about the study design and associate it with a particular bias. That's how they would ask you. Whereas Ali, for example, you had a different experience. So mine, just as with the critical appraisal stuff that we talked about last week, was almost all done backwards, in the sense that it was all very prospective: if you were going to design an experiment to answer this clinical question... It was kind of done even more backwards than that. It was: this is the clinical question, how would you design an experiment or a study to answer this question? And as part of that, it would be: how do you eliminate certain forms of bias that you could be asked about? So it might be: how do you eliminate selection bias from this thing? Or, in the form of pushback, if I said, you know, I will give all of the patients in my study a questionnaire that I want them to fill out...
...then, as we said, that examiner could turn around and challenge me and say: okay, well, now you've got a study that's full of response bias, so what are you going to do about it, or how are you going to change what you've done? I think it's fair to say, again, that the point of this, and what we're trying to get across, is that it's not always about rote remembering, although there are some of these definitions that you definitely do have to remember. Really, it's more plastic and fluid than that, and it's designed to test your thinking and understanding: can you apply these rules that we've talked about in an unfamiliar context? Yeah, exactly. So I think it's very examiner-dependent, but they'll really try you; you need to have some sort of understanding of these for them to test you on the spot. In terms of structure to use... oh, hi, Becky. Um: "In terms of a structure to use for answering questions on this, should we start with discussing the generic points and then the specifics?" Yes, exactly. I would, if you had time, go into making sure that you truly understand the study design first: go generic first, then go specific, because you will get marks for nailing the generic points first, and anything above that, they'll be like, oh wow, okay, that's impressive. Yeah, never go beyond, I guess, without covering the basics, because that's where the marks truly are, Becky. Uh, Jonathan has just asked where he can review the recordings of the previous sessions. Um, good question. They're just saved on MedAll, aren't they? Yeah, I thought so, too. If you request catch-up content, I think you should be able to watch it, because we make the recordings accessible. But also Ali, um, is uploading them on his YouTube. But please do the feedback. Yes, so they are... oh yeah, they are on your main page. Yeah. So they're either on here, or, if you want to give us a bit of advertising kickback, they're on my YouTube as well.
You can watch them in either place. Uh, Sarin was asking: was observer expectancy bias in relation to open-label or double-blind studies, or both? It's much more general than that, Sarin; it's a psychological principle. It applies in any study where the researcher is interacting with an observed participant. If you're watching them doing something, it's about how the observer subconsciously influences the behavior of the person being observed. That's what it means, so it could apply in any of the studies. The example that I gave was a double-blinded study, but it applies in a huge range of possible studies: anything where you have an observer that is watching someone in an observed study. Um, so it's similar to the Hawthorne effect, which was the thing that people change their behavior when they know they're being observed, but instead of coming from the person themselves, it comes from the researcher as a subtle influence over their behavior. But, uh, I think we'll wrap it up there. Yeah, let's just leave it for another two minutes to see if there are any more questions, because there was a lot of information. Yeah, there was a lot. I'm trying to get away; I've got another meeting literally now. Yeah? Jesus Christ. Yeah. You okay to hold the fort for two minutes, then, if I disappear? Goodbye; I'll leave you all in very capable hands. Take care, guys. Thank you for coming. Hi, guys. Um, yeah, I'll leave it open for another couple of minutes. Uh, and please, please, please do the feedback, unless you have any more questions, which, of course, I will be able to answer, I hope. We're just very keen on making sure that all of this information is available to you, because we know that these are the things that will give you the brownie points, or the top tips that we wish we'd heard when we were in your position last year. Yeah, I'll wrap it up in the next minute or two. I'm guessing no more questions.
And if that's the case, please do, um, follow us on Twitter or social media, because we're very, very happy to answer all your questions there as well. Yeah, I hope to see you guys in the next one. Thank you very much. Thank you, guys. Bye, Ana.