SFP Interviews: Academic Station & Critical Appraisal Structure


Summary

This on-demand session is aimed at medical professionals and provides insight into how to critically appraise evidence and apply it to practice. The session focuses on a structure for critically appraising evidence, as well as demystifying the academic station. Participants review a case study and work through topics such as study design, the hierarchy of evidence, internal and external validity, research funding, and ethics. The session also covers an acronym and framework designed to help you remember and easily summarise the evidence. Join us as we dive into evidence-based medicine and learn how to apply your knowledge to clinical practice.

Generated by MedBot

Description

Join our SFP prep course and learn how to maximise your application success this year!

Learning objectives


  1. Identify the purpose of a critical appraisal and appreciate its importance in medical practice.
  2. Recognize the components of the PEACOCK framework for summarizing a study.
  3. Understand and apply the QR, PEACOCK, RAMBOS, PR, FEC acronym to evaluate a study's design and validity.
  4. Evaluate bias, funding and ethical considerations for a study.
  5. Understand and be able to discuss the hierarchy of evidence and how it relates to medical practice.
Generated by MedBot


Computer generated transcript

Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.

Hi, everyone. I'm not sure if we've actually gone live yet - if someone can just pop into the chat if you can hear and see us, please. Perfect, great, thank you. I'll let Angie introduce the session today. Hi guys, I'm Angie, and we're going to be carrying on from last week's session with a session on critical appraisal structure, and just talking in general about the academic station. We're about halfway through our course; as I mentioned, last week's session covered key terms, questions and some of the statistics, and today we're going to show you a structure for how to critically appraise and also give an example that we can work through together. So the plan for today is the structure, then we'll recap some of the study design and hierarchy of evidence, and then walk through an example. Hi, everyone - if you don't know me, I'm Nesta, and I'm also doing the SFP in Norwich. Obviously this is the Norwich SFP course, and a lot of what we're teaching is based on applying to Norwich, although it is applicable to applications everywhere. Just a little bit about how the interview format is actually structured: this is directly copied from the website of the Specialised Unit of Application for Norwich. Essentially, when we did this last year, it was a bit like an MMI when you apply to medical school. There are three separate stations. We were based on an online platform and had dedicated time: 10 minutes for a clinical scenario, 10 minutes to prepare for the critical appraisal, 10 minutes to do the critical appraisal, and then 10 minutes for a research plans and personal station. That's how we've split these sessions for the course. At the moment we're focused on the critical appraisal of an RCT. For us last year this didn't actually consist of a full critical appraisal, but that doesn't mean it won't for everybody - it could be a full critical appraisal, or it could just be questions and terms, so make sure you cover all your bases. The single interview lasts approximately 40 minutes, which accounts for three 10-minute stations plus the 10-minute preparation for the critical appraisal. Next slide, please. OK, so a bit of a recap from the last session: we looked at the key terms and figures of critical appraisal, including some basic statistics. In this session we're going to focus a little more on the structure and work through an example for you to do yourself. There's a really good definition of what critical appraisal actually is, and I really like that it has three parts: judging a study's trustworthiness, its value and its relevance - so make sure you highlight those three things. How do we best think about breaking this down? You give a brief overview of the study, weigh up its strengths and weaknesses, discuss the key findings, judge whether the study is reliable and whether it is valid, and then apply it to your own practice. That seems a bit chunky, so I'm going to hand over to Angie, who will go through the structure and a mnemonic that helps us remember it. So, the structure. The first thing they might ask you to do is simply to summarize the study or the abstract you're presented with, and the structure I like to use is PEACOCK, which I'm sure some of you might have heard of before.
So firstly, you look at the research question and the title, and look at the journal - is it a well-known journal? You might know something about the impact factor, as well as how recent the study is; if you get a study from 2010, that's already 13 years out of date, so that's something to comment on as well. That could be your first statement: just reading out the title or rewording it slightly. Then you talk about the population: who is the study looking at, what sort of patient group, and how many people are involved? Then I for intervention: what is the tested intervention they have put forward? And control: what is the group they are comparing this with? This is mainly applicable to randomized controlled trials, which are most often the ones you are given. The outcome is the next thing you should talk about: whether they state primary and secondary outcomes, and what sort of outcome it is - subjective, objective, or a composite outcome, which is something made up of lots of different factors. Then at the end, just summarize the key findings, which are usually in the key findings or conclusion part of the abstract. That's a really clear structure to work down to quickly summarize the study to the examiner, as well as to understand the study or abstract yourself really quickly. Then you'd be looking at internal and external validity. Internal validity looks at the study design and how strongly it tests what it is testing, including assessing the different types of bias, whether it's blinded or not, the type of statistics used, and whether the key findings are significant or not. Then there's external validity: is the study applicable to the group of patients or the cohort you're looking at? You then move on to the pros and cons of the study in general - this can cover internal and external validity - and just talk about a couple of the key strengths of the study (for example, a big sample size) and the cons (maybe it's not blinded). Then talk about funding and ethics - there's usually a line about funding right at the bottom - and, for ethics, whether it's ethical to randomize people; this is what we talked about in our last session with clinical equivalence and equipoise. Finally, you'd give a statement relating it back to your clinical practice: this is just one study, and it's usually not enough to change your clinical practice unless there's such a clear benefit that it's really obvious, in a time like COVID. Sorry, I just got a question from Jess asking whether they expect you to talk about the study for the whole station or whether they ask questions. It depends. For my station in the East Midlands, they asked questions throughout and didn't expect me to say anything without prompting me. But for the Norwich one, they started off by saying "please summarize the study" and then asked me specific questions. I think when you're practicing, especially by yourself, just be prepared to do a monologue and expect them to interrupt you and ask specific questions at any time. So the last part would just be relevance to clinical practice, and most of the time your answer would be that this study wouldn't change your clinical practice and that you would want to see a meta-analysis and more studies before changing your clinical practice.
So this is an acronym that has been used by a few people: QR, PEACOCK, RAMBOS, PR, FEC - which is quite a lot of letters, but in case you blank, it gives you a few letters to go by. QR: initially, just talk about the question and relevance. PEACOCK, again, summarizes the findings, and I think that takes you through the paper quite nicely. For internal validity, look at RAMBOS. Recruitment: how were the patients recruited, and is there any sort of selection bias here? Allocation: is there an even split between the two groups, and are the baseline characteristics in the two groups even - is there any confounder that could affect it? Maintenance: was there any attrition, and therefore attrition bias? If one group had a high rate of people leaving, it could bias the results. Baseline characteristics: anything that's really obviously different between the two groups, or anything that could be causing the difference seen. Outcomes: whether they're blinded and whether they're relevant. And statistical analysis: was it statistically significant, and did they use the right statistics to analyse it? For external validity, you just need to ask yourself: is the population relevant to the population I'm treating, and is it feasible and cost-effective in terms of resources? F and E are just funding and ethics, and then for the conclusion you summarize everything. And study design - I think I'll hand this over to Nesta. Sure, yeah. Just a couple of brief slides before we move on to going through an example. So, what is the study design? When you're talking through this in the critical appraisal, you need to keep it quite brief. Which design have they selected? You've got a randomized controlled trial in front of you, at which point you can then talk about the hierarchy of evidence, which links into both the study design and also the relevance to clinical practice, as Angie was just saying - you'd want to see further studies and systematic reviews, which I'll show you on the next slide. And have they selected an appropriate study design to answer the question at hand? You'll most likely have a randomized controlled trial: is the question something that can actually be answered by a single randomized controlled trial, or would it have been more appropriate to have done a different type of study? Next slide, please. So this is a pyramid you've hopefully seen before, and you probably saw it in the last session as well. The quality of evidence increases the higher up the pyramid we go. At the bottom you've got things like opinions and background information, then case reports and case series, cohort studies, non-randomized controlled trials, randomized controlled trials (which is what we'll most likely have), then critically appraised literature and evidence-based practice guidelines, and above that systematic reviews and meta-analyses. That's where the change in practice usually happens - more towards the top. Next slide, please. OK, so we're now going to do an example. What we'd like you to do is just scan this QR code or follow that link, whichever is easiest. I'll try to put the link into the chat as well, and we'll give you 10 minutes to go through it. We'll pull up the slide for the - oh, thank you, Angie.
We'll pull up the slide for the structure and I'll start a 10-minute timer, and then we'll take you through it - we'll try to make it as interactive as possible once we go through it - but we also want to give you time to work through it yourselves. OK, if you or anybody has an issue with opening the paper, please let me know, but we'll rejoin at 28 minutes past. Just the abstract - sorry, just the abstract. Yep, we're just doing the abstract; it's actually on the screen here. So yeah, just the abstract - feel free to use the QR code or the link, whichever works best. OK guys, that's roughly 10 minutes, so if everybody could just rejoin. Great - just drop a message to let us know you're back and whether you felt that was enough time. OK, so I'll ask someone to summarize using the PEACOCK structure, and I'll leave this on screen. Does someone want to put it in the chat? If anybody's feeling particularly brave, I can invite you to the stage and you can do a short summary - obviously no pressure, but it is good practice. If anybody's interested, pop a message in the group chat and I'll put you onto the stage; you don't have to have your camera on or anything like that, and you can do the summary by verbalizing it - it'll only take a couple of minutes. Perfect. All right, Mark, I'll have to add you to the stage. OK, I'm not sure if that's worked - are you still there? If you just put a message in the group chat - I can't find you. So I've pressed the invite to stage; if you accept the invite, it should come up and you can see more options. It should just pop up on the screen. Let me try again. If it doesn't work, we'll just go with the chat - sorry. Oh, it might be your browser. No problem. OK, so we'll just run through the structure and we can do it in the chat. Does anyone want to put in the chat how they'd summarize the research question and title in a sentence? Then we'll go through the population, intervention, control and outcome. OK: P - patients undergoing abdominal surgery. I'll just go on to the next slide here. Great. So if you put your answers together for the population, it's the 13,301 patients undergoing abdominal surgery across seven middle- and low-income countries. The intervention is the change of gloves and instruments before wound closure, the control is the current practice of not changing gloves, and O is surgical site infections. The outcome would actually just be the number of surgical site infections - they wouldn't actively look for a reduction - but yes, that's great. And what's the key finding, if anyone wants to put that in the chat? Exactly: the key finding here was the 13% significant reduction in SSI in the group with routine changing of gloves and instruments before wound closure, which can also be expressed as the adjusted risk ratio of 0.87. You can see that the confidence interval doesn't cross one and the p-value is 0.0032, so the findings were significant. OK, so that summarizes the paper, and using the structure you can see clearly what exactly they're testing, how they're testing it, and what the key findings were.
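To make the arithmetic behind those figures concrete, here is a minimal sketch in Python. The 0.87 adjusted risk ratio, the p-value of 0.0032 and the "13% reduction" are the numbers quoted above; the event rates used to illustrate the ratio are purely hypothetical.

    # Sketch only: relating the adjusted risk ratio quoted above to the "13% reduction".
    rr = 0.87                                     # adjusted risk ratio from the abstract
    rrr = 1 - rr                                  # relative risk reduction
    print(f"Relative risk reduction: {rrr:.0%}")  # -> 13%

    # A risk ratio is the event rate in the intervention arm divided by the rate in the
    # control arm. With a purely hypothetical control rate (not the trial's data):
    control_rate = 0.20
    intervention_rate = control_rate * rr         # 0.174 under the same ratio
    print(round(intervention_rate / control_rate, 2))   # -> 0.87

    # The 95% confidence interval for the risk ratio not crossing 1 ("no effect") is
    # consistent with the reported p-value of 0.0032 being below the usual 0.05 threshold.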
Now, if we look at the paper, can anyone say anything about the strengths or weaknesses of this paper, or pros and cons? OK, I'll just move on to the next slide where I've highlighted them. Does anyone want to pick anything out? Yeah - one of the cons in the comments is that it doesn't look like they've controlled for bias, since if staff were not blinded they could have been more wary of infections in the patients. I can see where you're coming from, but it's hard to blind surgeons or surgical staff to whether or not they're changing the gloves, so in this example blinding isn't possible. They have shown that the patients were masked, though, and it's an interesting question to pose whether the doctors and nurses taking care of them postoperatively were blinded or not - that could introduce some performance bias. Anything else? Great, we've got some more. Strengths: high-impact journal - great - and a large sample size; yes, there are 13,000 patients. Weaknesses: adverse events were not accounted for, and perhaps the cost and resource limitations of changing equipment in these countries were not accounted for. Exactly - I think a great answer in stations like this is that obviously the abstract doesn't go through every little thing, but you'd say, "I'd be interested to see whether the intervention was actually feasible and cost-effective in these middle- and low-income countries." Exactly. They haven't controlled for, or mentioned controlling for, postoperative care, so it might be that some patients are getting regular wound dressing changes or postoperative antibiotics and others not - that's all very important. And Emily asks, can you say that, since this is only an abstract, you cannot tell whether they controlled for these? Yeah, exactly. It's a bit of a cop-out answer, but for everything you're not fully sure about, you can say "I'd like to read the whole paper to find out more about this." The randomization process, including allocation concealment, is not disclosed in the abstract - exactly. These are all good points. So, as a strength, I've got that it's a very relevant study, as surgical site infection is one of the most common complications of surgery around the world. It's multicentre, so it's very generalisable, and it's a cluster randomized controlled trial - randomized controlled trials are obviously very high in the hierarchy of evidence, so it has greater internal validity. It excludes C-sections, so it might be interesting to comment on why the study excluded C-sections, as C-sections account for a large number of abdominal surgeries. Patients were masked, so there's some sort of blinding, but again it's not double- or triple-blinded, and you'd like to know about postoperative care and confounding. I think it's interesting to mention that they used the US Centers for Disease Control and Prevention criteria for surgical site infections, so it's a validated measure they're measuring against. Nesta will talk more about the intention-to-treat principle. A good thing to pick up is that the trial is adequately powered - there's 90% power - and again a big sample size, significant findings and appropriate statistics used, and it's good that there's a pre-specified subgroup analysis; I'm guessing they might look more at postoperative care, whether patients were given antibiotics, things like that. And because it's pre-specified, it's a better study design, as it prevents data fishing - which is when scientists gather a lot of data and then try different sorts of analysis and different subgroup analyses to try to find significance in subgroups.
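The "90% power" point can be made concrete with a standard two-proportion sample-size calculation. This is only a rough sketch: the baseline and intervention SSI rates below are hypothetical placeholders rather than the trial's actual design assumptions, and a cluster-randomised trial would additionally inflate the sample size for clustering (the design effect).

    # Rough sketch of a two-proportion power / sample-size calculation (hypothetical rates).
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    p_control = 0.20         # assumed SSI rate under current practice (placeholder)
    p_intervention = 0.174   # assumed rate if the risk ratio were 0.87 (placeholder)

    effect_size = proportion_effectsize(p_control, p_intervention)
    n_per_arm = NormalIndPower().solve_power(
        effect_size=effect_size, alpha=0.05, power=0.90, ratio=1.0,
        alternative="two-sided",
    )
    print(f"Approximate patients needed per arm: {n_per_arm:.0f}")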
And then finally, I just said that the funding at the bottom is not biased towards, say, the glove company or instrument company, and there's no conflict of interest. So the big pros are: multicentre, blinding, a clear and important outcome, funding covered by grants with no obvious conflict of interest, and appropriately powered. Cons: it's not really generalisable to the UK population, and another con would be that secondary outcomes - including time for closure and whether there were intraoperative or postoperative complications - were not measured, or at least not reported in the abstract. OK. So just a quick recap on intention-to-treat versus per-protocol analysis. We covered this briefly in the last session, but it's something to pick up on, and it's very likely that the abstract will mention how they actually analyzed the study. Intention to treat means that all the subjects who were randomized to a group in the study are included in the final analysis, regardless of whether they actually complied with the treatment protocol or not - so that includes people who drop out. If you've got two arms, one with no intervention and one with an intervention, and a patient was allocated to the intervention group but at some point never ended up having the intervention, they would still be included in the analysis at the end of the trial. Some of the pros of this are that it maintains the effect of randomization, reduces the risk of selection bias, and is more representative of real life, where that sort of thing does happen. That's contrasted with per-protocol analysis, where only the patients who actually complied with the treatment and did not deviate from the protocol at all are included in the final analysis. So, in the previous example, if they were allocated to the intervention group and then dropped out for whatever reason, they wouldn't be included in the per-protocol analysis. What we're trying to say is that there are pros and cons to both intention to treat and per-protocol, so whichever one they've used, say what they've used and give the pros and cons of that. That will be the same for every study, so get used to a phrasing that works for you: "this was an intention-to-treat analysis; these are some of the pros, and these are some of the cons." Once you've said it multiple times in practice, it flows off the tongue a lot better. We'll come back to your question in a moment, Adam - we've got a slide about ethics, and if we don't cover it there, we'll cover it afterwards. Next slide, please.
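As a minimal illustration of the difference between the two analyses, here is a sketch on invented data (entirely unrelated to the trial): intention to treat analyses everyone by the arm they were randomised to, while per-protocol keeps only those who adhered to their allocation.

    # Invented data: 'arm' is the randomised allocation, 'adhered' records whether the
    # participant actually followed that allocation, 'event' is e.g. a surgical site infection.
    import pandas as pd

    df = pd.DataFrame({
        "arm":     ["intervention"] * 5 + ["control"] * 5,
        "adhered": [True, True, True, False, False, True, True, True, True, False],
        "event":   [0, 1, 0, 1, 1, 1, 1, 0, 1, 1],
    })

    # Intention to treat: analyse everyone in the group they were randomised to,
    # regardless of adherence or drop-out - this preserves the effect of randomisation.
    itt = df.groupby("arm")["event"].mean()

    # Per-protocol: keep only participants who complied with their allocation - this can
    # introduce selection bias, because non-adherers are rarely a random subset.
    per_protocol = df[df["adhered"]].groupby("arm")["event"].mean()

    print("ITT event rates by arm:\n", itt)
    print("Per-protocol event rates by arm:\n", per_protocol)

The point is simply that the two analyses use different denominators, which is where the pros and cons described above come from.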
OK, so I'll do the ethics slide: funding, ethics and safety. With funding, you're trying to look at whether the funding introduces any bias to the study. If the study was, like I mentioned, funded by a glove company, an instrument company or a medical devices company, then you'd say there's a conflict of interest, and you'd like to see transparency about the funding and whether there were any criteria or conditions stipulated with it. Then, going back to last session, just comment on the Nuremberg Code and the Declaration of Helsinki - making sure that any patients taking part knew all the risks and benefits of the trial and were fully informed before they consented to it. Another one is whether the control is the gold standard or current practice: if you're testing an intervention, you need to make sure you're testing it against something that's already acceptable in current clinical practice, because there's no point in showing something is better than doing nothing if it's not better than, or the same as, whatever we're doing right now - the gold standard. And the nature of the intervention: do the benefits outweigh the risks, and is it safe? In this case the risks are quite low, the results show that the benefits do outweigh the risks, and it's safe - we're not introducing anything new, different or novel in the study. Going to Adam's question: for ethics, could you comment that if they were seeing positive results early on in the intervention group, they could have potentially stopped the trial earlier? Yeah, that's completely fair to say, and it's a really interesting comment to bring up in a critical appraisal. I think the biggest example of this is the dexamethasone trial, which they stopped based on the 28-day mortality rate, because in COVID-19 there was nothing before dexamethasone that showed an improvement in mortality, and this study showed an improvement - so at 28 days they stopped, and from then on that was the gold standard. Then, just recapping internal validity and biases: the population and study design is where you talk about the population, sample size, selection, and inclusion and exclusion criteria, and whether that introduces any biases. There's a lot in here, and different people critically appraise in different ways using different structures, but as long as you understand what the study is about, are able to generally summarize it using the key PEACOCK structure, and then pick out a few biases and a few strengths and weaknesses, that's enough, because it's only a 10-minute station. Does anyone have any questions on this, or any questions in general? If not, we'll move on. It's important to recognize that you only have 10 minutes, and your critical appraisal shouldn't last more than a few minutes - it doesn't need to fill the 10 minutes, because you'll be asked questions and interrupted if you are doing a full critical appraisal. External validity is important, though, and it's part of a critical appraisal. External validity is essentially how much the results of a study can be applied to a non-study patient or population - how well it can be generalized and applied to a larger population. Things to look out for here are the patient cohort: did the patients have a particular condition that's relevant, and is it a really rare condition?
For example, if the patients had a really rare condition, is that very generalisable? Then the location in which the study took place: if it all took place in one country, you can comment on that - for this particular trial, it took place in low- to middle-income countries, so you can comment on that. Then financial feasibility: like a lot of these studies, it's all well and good saying this works better, but is it actually possible in, say, lower-income countries or in a public healthcare system? And then, is this relevant to your clinical practice, which we have a slide on - the next one, please. So the question that's always asked is: would this study change your clinical practice? Have an answer to this. The answer is usually no, but know your reasons - don't just say no. Think of an answer that works for you, with the phrasing like before, but obviously apply it to the study you've been given. Things to comment on as to why it wouldn't change your clinical practice - or maybe it would - include the hierarchy of evidence, that pyramid again: where does this fall, and what would need to happen to allow you to change your clinical practice? So you might say it's one single study; you'd want to look at a whole range of evidence rather than just one abstract, read the full paper, and you'd want a systematic review or a meta-analysis that brings together all of the relevant current research on this topic, to decide whether it would change your practice or not. At this point you can use the pros and cons you've mentioned before; if you've not got around to saying them all, you can pop them back in here. And things to say if you're saying yes, or that you would consider changing your practice following a review of guidelines or a meta-analysis and so on, would be that this shows promise for better outcomes, improved patient care or financial savings. Essentially, ask why the study was conducted in the first place - it's usually at the top of the abstract. They don't just do studies randomly; there's a reason behind them, whether that's improved outcomes, safety or economic impact. That would be a reason to say yes, but then you have to counteract that with the fact that it's one single study. I've just got a question: how much time should our appraisal take - is 4 to 5 minutes too long, or should we stick to 2 to 3 minutes? I think it partly depends on how fast you talk. I think I was aiming for about four minutes, but when you start racing through them it becomes a lot quicker, and with practice you definitely make it a lot more concise, so you're probably looking at more like three minutes. If you do go on too long, I promise they will just cut you off - that's how the interviews are set up. What sort of questions do they ask? We can come back to this at the end - we'll do a bit of a Q&A. We'll just finish off the slides and then come back to the Q&A, so feel free to pop your questions in the chat and we'll come back to them at the end. Next up is - OK, so here's just an extra paper that we found.
Feel free to work through this yourself. A lot of you are in our mentorship chats, and for myself, I'm happy for you to send a brief summary of what you would say, and everybody can talk about it in the chat. I'm not going to go through and give you feedback on every single one, but we can have a general discussion in the mentorship chat. If not, then feel free to go through this in your own time. For practice, use PubMed and search for randomized controlled trials. We've used The Lancet, but you can use any of the journals - it doesn't particularly matter. It's likely that they'll pick a recent study from a big journal with a high impact factor, so keep that in mind; there's no need to go through really specific journals at this point, but they could use them at the end of the day. OK, next slide, please. So, in conclusion: create a template that works for you. We've given some recommendations based on what we did - they might work for you, they might not - but find something that works. Use the structure and stick to it; don't change the structure midway through your practice unless you obviously need to, but try to stay with the same structure. Be prepared to answer questions on statistics and figures, such as those in the previous session: if you say a term or quote a figure, be prepared to define what it means. If you just say "a confidence interval" but you don't know what a confidence interval is, then when they ask the follow-up question of what it means - which they will - you're going to stumble. So if you use a term, especially in your pros or cons, make sure you're able to define it. And practice makes perfect. When I was preparing, we would just do a critical appraisal and work with somebody else - usually somebody else applying to the SFP - and then you can feed back to each other. Do it online; I think the majority of interviews are all online now anyway, so it's good practice sitting in front of the camera, seeing how you're speaking and going through these critical appraisals, and that will help you get better. OK, next slide, please. Yeah, these are the two references for the studies we used. Thank you. And then I think it's just: does anybody have any questions? To answer Jess's question about what sort of questions they ask: if you look at our previous session, which was a fortnight ago - from when we did it, it was more questions about statistics and terms that were used within the study. So basically they asked the questions that PEACOCK would cover, but with a few more specific questions sprinkled in, such as "what is a confidence interval?" The other thing they do, or did, was have a graph, a table or a figure, and they'd ask you to talk them through what that figure showed. And it's important, as we said, not just to verbalize what it shows, but also to say some of the pros and cons and get into the swing of critically appraising whilst you're telling them what it means. Does that make sense? Just to go off that, I think that's why it's really important that
you guys practice with someone else, just to get used to not only critically appraising but also getting asked questions on it. I think that covered them - it was mostly about... I think I got a forest plot, and they asked "what sort of graph is this, and what does it show?" and then asked questions on that. They can also, if you make a comment about a certain sort of bias, go off on that tangent and ask you questions based on your answers. So just practice lots of different things and be ready to think on your feet, because that's what they're looking for - they're not expecting you to know every single thing; I didn't know what a forest plot was and it was fine. Any other questions? Just as an added tip for something that works: these interviews are all online, and you're allowed to have scrap paper with you, so if you want to pre-jot down your structure, there's no rule to say you can't (unless it's changed). Have your structure in front of you already, so within that 10-minute prep you've already got a structure there with what you need. Is this session recorded? Yes, it is recorded, and if you fill in the feedback, which should come out automatically, then you'll hopefully be able to get the recording. I'm just going to put in the group chat the mock interview sign-ups and also the sign-ups for the next event, and I'll also put in the feedback. OK, if anybody has any questions, we're going to stick around for a little bit. Thank you guys for coming. I'll hand over. Yeah - just saw: when are the mock interviews? We don't have dates out yet - correct me if I'm wrong, Nesta. We'll start to think of dates when offers start to come out, and then we'll have dates within your mentorship groups, if you're part of a mentorship group - I imagine you are, Jess; if not, just send a message on here and we'll figure it out. Actually, just fill in the form for the mock interviews and then we'll go from there and slot you into a group; we're going to try to keep within your groups. How do you become part of the mentorship group? If you sign up for the mock interviews, we'll attempt to get you into a mentorship group from there. And if you can't do a certain day or certain times, a few of us are quite flexible and we could try to find a time that works, so don't worry too much about it. Definitely. Any other questions? Is it possible to...? So you should be able to - Adam, if you send an email to our email address, we'll get Cathy, who's in charge of the recordings, to send you the link. Did you have to just put the email in? I'm not sure if I remember it from here. Thank you so, so much, everybody, for continuing to come to our sessions and for filling in the feedback. Cool, thank you everybody. If anybody has any questions, feel free to put them in the chat or message your mentorship group - we're very, very happy to help. And hopefully offers will start coming through; just don't be alarmed if they don't come through immediately - for us, I think it was about two weeks before the interview date.
So don't be alarmed if they've not come through - they're quite late. But practice before then, obviously. All right, thanks everyone.