Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
Oh, that's good. So welcome, everybody. I'd just like to start by apologising about the session last week, but thanks for joining us again for the tenth and final session of our SRMA teaching series. Today we'll be looking at getting your systematic review and meta-analysis ready for submission, and delivering the session today is Neeraj Kumar. You'll have met him before, but for those of you who haven't, he's a current PhD student in Cardiovascular Science at the University of Leicester and previously studied medicine at UCL. He's the founder and president of the NMRA and has worked on over 50 projects. Just a reminder to please fill in the feedback form to get your certificate of attendance; I'll make sure I put it in the chat partway through the session for you. So thanks, Neeraj, and I'll hand over to you.

Awesome. So, yeah, welcome everyone. This is the final session. Again, apologies for not being able to do it last week, but we're here now in the end. This is essentially just wrapping up; to be honest, by this point your paper will be pretty much finished, so we're just going through the final bits and pieces to get it touched up and ready for submission. But first, as usual, a quick introduction for anyone who's new or in case you've forgotten. The NMRA was founded in 2022. It's a nonprofit aiming to give students and young academics a foundation in medical research; you don't even have to be a medic, it's for anyone who wants to do medical research. We teach you to be independent, to learn all the skills, to have all the resources you need, and to create those opportunities for you. One of those is obviously the mentorship scheme, which we've now launched, so a chunk of you, if not all of you, will have some involvement in the mentorship scheme, and that's your way to apply the things we're teaching you.
If you're in the scheme, you'll know that the planning documents in all your groups are mapped according to this, so step 10 of your mentorship programme is also this lecture 10 today. They map across, so if you're keeping track for your own project, you're nearly at the end. This is the series; as you can tell, we've done all of it. Unfortunately we didn't finish by the fourth of July, but nevertheless we've covered every single step from start to finish. We've done finding a question, your protocol, screening, risk of bias, getting all the data we needed, meta-analysis, and results and write-up. So we essentially have a full completed first draft, and the final step we're going to cover today is getting ready for submission. As I've written on the screen, the aim is to give you all the tools you need to successfully conduct a systematic review and meta-analysis, and to be able to carry out all the steps yourself. So the final step is today. I always have this slide, so you'll see a trend: from the last session, we've interpreted the results, we've written up the full systematic review, and we have a draft. Now we think about submission to journals and the publication process. Before we even think about going to a journal, the first question to ask yourself is: is my manuscript actually ready? Is it good enough? The last thing you want is to submit while your manuscript still requires more edits and just isn't at the right level. What that leads to is the journal sending your paper back and saying this isn't of sufficient quality, or maybe it's not formatted right. That doesn't look great for you, and it's also just more effort than you would have needed anyway.
So instead of having them tell you that, I think it's important that we go through what is required, so you can avoid that unnecessary back and forth. First off, this is essentially from a really good paper I liked, by Wenzel, Dunster and Linna. It's a really good guide; you should read it. It's called a step-by-step guide to writing a manuscript, and what they set out are nine issues that will lead to your paper being immediately rejected. When you submit, the way it normally works is you send your article to the journal and they initially read it from an editorial perspective. They verify that it's formatted correctly, that it checks out against the word count, the referencing rules, and all the other requirements they have. Each journal has its own criteria, but these are the generalised versions. They'll ask: is the question actually any good? Is the work original? Has anyone done this before? Is the study implausible; are the questions being asked ridiculous for what's been done? If there's something severely and obviously wrong, the paper will not be taken for peer review; they will just send it straight back to you, because they think it's not good enough at this stage.

Neeraj, just to interrupt there, could you share your PowerPoint? I did, hang on. What slide are you on? We can't see the PowerPoint at the moment, but we can see that. OK. So we've done this, we've done this, we've done this, we're here. Fine. So, before we think about going to a journal, we need to know if it's any good, and this is what I was talking about: rapid rejection criteria.
So Wenzel and colleagues will talk you through all the things you need to know for a manuscript. It's not always explicitly stated, but every journal has a rough version of this, and they'll check for it within the work. If it fails, the paper gets rejected back to you: they'll say, hang on, there's something obviously wrong here; it isn't fit for peer review, and we're not sure what you've suggested in this manuscript is even plausible. The editorial team will check that before anything else happens. So the first thing you should do when you have a first draft is to re-read it, yourself and among your team, and make sure you don't miss any of these things. It can be a bit challenging sometimes, because when you read your own work there's a kind of bias: it makes sense to you because it's the way you've written it. That's why a second perspective can be really helpful. Sometimes it helps to just explain what you've done to someone else and ask what they think of it. If they start asking questions about whether the work has been done before, or about the work not making sense, or the samples being off, whatever it may be, that's when you'll catch these rapid rejection criteria. In mentorship, the way I'm approaching this with you, all the groups are at the stage where they're coming up with a project idea. I think a well-constructed research question will dodge every single one of these, because the question being asked will automatically be novel, interesting and worth asking. And if the question is structured appropriately, there will be a clear hypothesis, of drug A being better than drug B, or whatever the hypothesis is depending on the style of the question.
And in line with that, we formulate the methodology, the study sample, all these things. So numbers one through six, I think, can be dealt with just by having a good research question. Seven, eight and nine are to do with your write-up and your interpretation: if you have interpreted your data correctly and written up the appropriate things, you'll be in good stead. The next thing I want to cover, specifically for systematic reviews and meta-analyses, is the PRISMA checklist. This is actually a requirement for a few journals now. Just the other day I submitted a systematic review protocol to the BMJ, and they asked us to complete a PRISMA checklist. It was interesting because protocols are written in the future tense, talking about work we will do, but they still asked us to specify all these things: the inclusion criteria, the rationale for the review, what methods are used to collect data. These all line up with the stuff we've covered in previous lectures, but you have to state very explicitly where you mention them. You give a location where each item was reported, typically in terms of a line number. So for your manuscript you'll turn on line numbering, which just keeps running throughout, and then you'll say, for example, that line 123 in the methods section covers my risk of bias, and you'll put that down. It helps you to confirm that you didn't miss anything and covered everything, but it's also required by journals. Most importantly, it gives a reader a quick-fire way to check every single thing they need. If you're reading a review for your own purposes as a researcher, that might make your life easier: say you want to know what kind of effect measures they looked at.
Well, then you just check which line it's on and go straight to it, without wasting your time. That makes your life easier as well. So it's good for transparency, good for ease of navigation, and it's just good for making sure you're being as thorough as possible when you're doing your review. That stuff is really important. Between these two, if you've verified that your paper doesn't fail any of the rapid rejection criteria and it covers every single item on the PRISMA checklist, you can at least say the methods were sound, and that's a really good place to be: it means your review was well conducted, it answered a decent question, and it probably added some value to the literature. Those are all really positive signs, and at that point I'd say your article is probably good to send off; then you just have to see what the journal says. I also added in the other half to show you that the PRISMA guideline is not just methodological; it covers everything. You've got to specify the results, when you reported the synthesis, what kinds of bias you found, and what you did in the discussion, and you'll notice it's very thorough: discuss the limitations of the evidence included in the review, the limitations of the review processes, and finally the implications for practice, policy and future research. If you remember the last lecture I did two weeks ago, you'll see that this PRISMA checklist maps on really well to the stuff we covered; it forms a skeleton of the best things to mention and how that will look. Then there are other things as well: competing interests, financial support, and reporting whether the following are publicly available: data collection forms, extracted data, analytic code, and any other materials used in the review.
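The line-number bookkeeping described here, one reported location per PRISMA item, is simple enough to track with a few lines of script while you draft. This is only an illustrative sketch; the item names and line numbers below are hypothetical examples, not taken from any real manuscript or the full PRISMA item list.

```python
# Illustrative sketch: record where each PRISMA item is reported in the
# manuscript, then flag any item that still has no location.
# Item names and line numbers are invented examples.
checklist = {
    "Rationale": 12,                # introduction
    "Eligibility criteria": 45,     # methods
    "Information sources": 52,      # methods
    "Risk of bias assessment": 123, # e.g. "line 123 in the methods section"
    "Synthesis of results": None,   # not yet located in the draft
}

missing = [item for item, line in checklist.items() if line is None]
if missing:
    print("Items still missing a reported location:", ", ".join(missing))
else:
    print("All checklist items have a location.")
```

Running a check like this before filling in the journal's official checklist form is just a way to make sure nothing on the list has been skipped.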
As a rule of thumb, a lot of people won't make those materials publicly available, because they consider them proprietary. However, if you really need them for some purpose, to verify what the authors did or where they got their data from, you can always contact the corresponding author of the study and say: I went through your study and wanted to know where you got this particular piece of data; do you mind sharing it with me? Depending on whether they still have it and what happened to that study, they may be able to help you out. So things like that are not specifically to do with your write-up, but they're things you should have in the back of your mind. You want to have all your tables, your PRISMA flow chart, your search strategies, and potentially any analyses you did; you want to explain what you did, and maybe share some code, although it's unlikely you'll release the code, because it can be proprietary. Things like that are very important, and the depth with which you cover all of this says a lot about how submission will go: it shows the preparedness with which your article was written and how it will come across to the reviewer team.

So that's really it for knowing that the quality of your work is there. What else do we need? First things first, verify all of the journal's guidelines for formatting and structure, and what you need to include. Typically, you'll include these three things. First, tables and figures. Journals will often say to put these in a separate document, or sometimes at the end of your document, with a little placeholder in the text.
So in your text you'll just put "Table 1" in brackets, but where Table 1 should sit in your manuscript you'll just leave the header, say "Table 1: Study characteristics of included studies". That marks where it belongs within your overall manuscript, but the table itself will be either at the end or in a separate document. Alternatively, some journals might not ask you to do that at all, and then when you go into proofing they'll ask you to check that it was placed somewhere reasonable. Second, supplementary material. Anything in the supplement is material you need to refer to but aren't going to put within your manuscript itself: it doesn't belong in a table, a figure, or the main text. Things like the risk of bias assessments or the search strategy; there's nowhere in your manuscript you'd reasonably put those in full, so you put them in the supplement, where they're available for people to read and access, especially online these days. The online version of the journal article will carry them, which saves space, and people reading the manuscript don't have all that extra material to wade through, but they can look at it if they want to. For example, the search strategy: you'll describe it in the methods section and then just say the full version is available in the supplementary material. Again, the reason it's there is transparency, so that someone can theoretically replicate your work; for that it doesn't need to be in your face, it just needs to be there. Last but not least is the cover letter. Cover letters are not fancy; all they do is tell the editor of the journal why they should consider your particular piece of work.
You'll tell them things like what your work achieves, why it's novel, and why it's important in the context of the work that's been done, the policies being implemented, or the future research that may be required. And following from that, you talk about why this journal in particular should take it. Maybe you'll reference the fact that this journal has commented on previous work in the field and has an interest in it, or maybe you'll speak to the specific region in which the journal is published and how that's relevant to what you're publishing. Factors like that show the journal that this work is particularly relevant to its readers, because at the end of the day a journal is just a medium by which you share your results with your potential audience. So you've got to think quite strategically: am I going to get the right people on board? I'll go into this in a bit more depth. First off, I want to show you some guidelines. I think these are from BMJ Sports Medicine; I don't know why I picked that one in particular, it's just a random one that came up when I googled it. My point is that this is everything you'll need to do from their perspective. They've said it should be around 4,000 to 4,500 words, with a maximum of 4,500. They want an abstract of up to 250 words, structured in a particular way. The key things to bear in mind: they've specified how the literature search needs to be done; a PRISMA checklist must be provided and should accompany the submission as supplementary material; and all systematic reviews must address every item in the PRISMA statement, so you've got to cover the whole thing. The second important bit is that the title must say "systematic review".
You've got a structured 250-word abstract with headings such as objective, design, data sources, eligibility criteria, results and conclusion. It's a slightly clunky way to do it, but essentially they've broken the study down a little. The objective is just your introduction: why you're doing the study. Design, data sources and eligibility criteria are just the methods, broken up. Your design will say: we're going to do a systematic review and meta-analysis, and we're going to follow PRISMA. Data sources will say: we're going to obtain data from such-and-such databases, say Ovid, Scopus, MEDLINE. And eligibility criteria will say: we're going to screen for studies that include RCTs, maybe for such-and-such type of patient who had a certain drug treatment, or whatever it may be. In a normal abstract's methods you'd cover all of that anyway; it's just that because this is the specific journal you're submitting to, you've got to do it their way, and if you don't they'll send it back and say, do it again. Doing it right the first time is the reason you check all this stuff. Next: please include a summary box with three to four clear and specific bullet points on what is already known and what the new findings are. Some journals do this: the BMJ does it, and you'll notice The Lancet has a section on this for every single submission. Essentially it's for the sake of the reader: if they don't know the field, you summarise what's been done before and what's new from your study. That's it. It doesn't have to be in depth and, to be honest, it's not something you'll be severely graded on in peer review; they won't quibble with your bullet points unless they're obviously wrong. Then systematic review registration: just put your PROSPERO number down if you have one. And if not, then first of all, why don't you?
And secondly, think of the other places you can register or share it; you could even publish a preprint and just have it out there in the public domain. The main thing is to think about why you're doing this and who it's going to be disseminated to, and then to be transparent and make sure everything's clear. The final point is a little bit odd: please consider whether the topic warrants a systematic review or whether a scoping review would be more appropriate. You can go and check the guidance if you're curious, but in essence it comes down to what we've talked about throughout this series: if you have a very specific research question you'd like to answer, with an objective result and a defined way of getting it, you do a systematic review. If instead you're probing into what could be a good question and where the field is going, you need to think about a scoping review. Obviously the guideline is more in depth, but that's the summary. The final bits concern your actual formatting: word count 4,500, abstract 250, no more than six figures, up to 100 references, and both a PRISMA checklist and a PRISMA flow chart; you've got to provide both. The last thing to bear in mind is that six tables or figures can be a little challenging if you're doing a meta-analysis. That's why you want to minimise anything extra that doesn't need to be a table or illustration: risk of bias, search strategies, any extra patient data you can avoid putting in the main text; put all of that in the supplement, because the supplement doesn't count towards the limit. So you've got to be really careful about what's worth presenting. Even with the word count, 4,500 feels like a lot.
But with a lengthy discussion section explaining a meta-analysis in a good amount of detail, you'll hit that limit pretty soon. These things are important to remember, because they inform how you will write your paper and what it will become. Now, this is a review we did, as a cover letter example. Shreya actually got a couple of papers out of the NMRA; she was a mentee in 2022. We wrote this paper, and it's a little bit of an unusual one; it's not something I've talked about much in this series, and I'll probably cover it in a future lecture at some point: a systematic review of clinical practice guidelines. We really like these in the NMRA. They're awesome, and super beneficial, because they tackle tough questions such as: are the guidelines in this area any good? Are they congruent? Do different countries recommend different things? Do they look at different evidence? That's really important, because you want to make sure that what you're studying has real value. The letter is structured like an ordinary letter, and we use that to structure how we do things. First, in the top left corner, you note who you're addressing; you give a subject line; and then you say: Dear Professor, we're happy to submit our article, with its title, to this journal. Then we explain what's important: deep brain stimulation can be used for OCD; our study aims to explore and assess the quality of guidelines for DBS; we identify differences between them, and we also identify variation in how good the guidelines are, using the AGREE II quality appraisal, which plays a role similar to a risk of bias score. We also found that they failed to acknowledge cost-effectiveness. So these are our key results: the guidelines are inconsistent, they don't mention cost-effectiveness, and their quality on appraisal varies quite a lot.
They do really well on scope and independence, but they're not that great on applicability, which is quite damning for an actual guideline, to be honest, because if they're not applicable, what do they do? Who are they guiding? So we've said that DBS is a newly emerging technology, that it needs more attention, and that there's no previous review of these guidelines; ours is the first. We believe that because of that it will be important to the journal. This is the British Journal of Psychiatry: we want them to publish this so they can get it out to their audience, who are psychiatrists, the people who will potentially be using this therapy.

The other thing I need to mention: we've put "declaration of interest: none". Some journals will have a whole spiel for you to write. They'll say you need to put down a declaration of conflicts of interest; some might ask you to state that the work has never been submitted to any other journal; they might even want you to declare any funding the work received. It's already on the manuscript anyway, but they make you do it again so that you're declaring it specifically to the editor when you submit; it's for transparency. Then you sign off like any other letter, with your name and affiliation, and "on behalf of all co-authors", because this is the corresponding author: the author who submits and deals with the journal on behalf of everyone else. Typically the corresponding author is the most senior person or the person who did most of the work, because they probably know the most about the topic and are the most capable of responding to queries.
They're also the people who officially have to write back about peer review, so obviously they need to know the field as well as possible. Give me a second. Sorry, I'm back; that was awkward. Oh, you can't see that? Where was I? Yes. So, we've done cover letters. They're not too complicated: the main thing is just to be confident about why your work is significant and why it really adds to the literature. Once that's obvious, they're not going to turn you down on the letter. Also, I should say, a cover letter does not make or break any decision about your work. They won't say "this cover letter was terrible, we're not going to touch this paper"; it's not like that. It's in essence an introduction to your work, but the work will always speak for itself. Carrying on: identifying an appropriate journal. It's important to consider both which journal is likely to consider and publish your work, and which will actually reach your desired audience. There are a few things to consider: the scope of the journal, indexation, impact factor, what kind of funding and open access options you have, and journal acceptance metrics. To start off, I'm going to show you a journal and talk about where you find this information. This is a journal we've published in before, the European Heart Journal: Quality of Care and Clinical Outcomes. Essentially, the aims and scope tell you what kind of articles they want, what they publish, and to whom that work goes. They care most about clinical outcomes and quality of care. As they've said here, it's a forum for showcasing the best outcomes research to inform public health policy globally.
Exemplar types of relevant work are novel treatment strategies, electronic health record prognosis research, and cost-effectiveness analysis; a global perspective, with a focus on policy consequences and quality of life, is important to publications in the journal. What they're telling you is: present something to us that adds value to the literature, specifically regarding cardiovascular disease outcomes and cost-effectiveness, something that will matter when public health policy about cardiovascular health is made. That's why they're our number one choice for reviews of guidelines: we think assessing clinical practice guidelines is a major public health policy matter, because guidelines can be updated for the future, and the people who write them should know about this work so they can revise them and address cost-effectiveness and prognosis research, which is exactly what we covered on the previous slide, where we noted the differing recommendations for tailoring to patient characteristics and outcomes, and that cost-effectiveness is lacking. So based on what you see in front of you, you find a journal that matches. Very simple. On top of that, for extra confirmation, you can look at previously published studies in your field and check which journals they went to, because if you're publishing in the same field about a similar thing, those are the kinds of journals that will fit you. And it's unlikely that between their article coming out and yours, the journal will have changed scope; scope is very overarching, so nine times out of ten that's a good metric to go by.
The other thing I want to cover here is impact factor and ranking. Some researchers get really worked up about this, and I can understand why; it can feel a bit like gamifying what you work on: "I got this many citations, so my work is relevant; the impact factor of this journal is so good, so I'm getting into really good journals", that kind of thing. It can feel like a metric you need to do well on, and in some regards it is. But before we comment on that, it's important to ask what impact factor actually means. In essence, it's the average number of citations earned by the papers within that journal. So in 2023, the average number of citations received by articles from this journal was 4.8, which ranked it 43rd out of the 220 cardiac journals in the world. Pretty good, right? You're saying the average paper in this journal gets 4.8 citations, which is pretty good when you bear in mind that the average across all journals is far lower, less than one by some counts. You have to remember that publishing is very skewed: at one extreme, some articles get hundreds of citations, and those are often the really important ones, the best conducted, the most significant. Those might be trials, major meta-analyses, guideline papers, that kind of thing. The reason they get cited again and again is that they act as a foundation for other work to be built on top of. For example, if you're involved in writing a trial, that trial might lead to people studying other specific things: they might do subgroup analyses, they might comment on your trial.
Or write a commentary piece; the guidelines for that field might refer back to your trial; someone might do a meta-analysis and include it. Lots of things will cite you, so it feeds forward really well. But equally, there are papers that don't get cited as much, such as letters to the editor, commentary pieces, and certain reviews, because they don't chain themselves on to other research. If I write a commentary about why such-and-such trial was conducted badly, I might get a response from the person who ran the trial, but beyond that, no one's going to keep referring back to my work: I made a point and that's it; the point stands on its own, it doesn't lead to someone else doing an additional study citing me. So for every article that gets 100 citations, there might be dozens that don't even reach 10. What I'm trying to tell you is that you have to take this average with a grain of salt. It's a proxy metric for the reach of the journal and how much it can act as a platform for your paper, because some articles do really well and some don't, and hence there's no such thing as a universally "good" impact factor. I remember working with someone once who said they wouldn't read a paper unless the journal it was published in had an impact factor of more than four, and funnily enough, when we wrote up the final manuscript, I went back and counted: zero papers in that review were in journals with an impact factor of more than four. It's a funny anecdote, but what it tells me is that the whole rule was silly; it didn't achieve anything, because we obviously thought those papers were good enough for the review. They did fine on risk of bias, they were impactful, they were relevant to our research question, so we used them anyway; the journal impact factor didn't affect our decision.
It might have made those papers easier to find and more widely spread, but the quality of the work doesn't necessarily reflect the impact factor, or vice versa. That's not to say an impact factor is meaningless, right? Because journals that are protective of their impact factor and do really well, like the Lancet, are genuinely stricter on what they take. They're more selective; they have a stricter peer review process. A colleague of mine who submitted to the Lancet had their paper reviewed by seven peer reviewers; that's unheard of in most journals. But because they're so strict on what they take, they really went for it, because they thought, we need to be certain this is the right paper for us. So that does happen, but it's not normal; it's an artifact of them publishing really good work and making sure they continue to do that. So there is a link between the two, but it's not like one causes the other, and I don't think it's bidirectional. But yeah, funding. This is a big one. So there is a spreadsheet from Oxford University Press. They have hundreds of journals that they publish, and for every single one, if you want to publish open access, they'll ask you for an APC, which is an article processing charge. It's a strange concept to get your head around, but in essence, if you don't want your paper to be behind a paywall, you pay for it, which is a bit of an unusual thing to get your head around, because I don't think you're used to that when you go to a shop or something, but that's how it works here. So when we speak about APCs, the ballpark figure is about two, three, four, five thousand depending on where you submit. That's the standard price; it is not abnormal. Now again, this is for open access.
There are loads of journals that do either hybrid access, which means they offer both open and closed, or just closed access, and those are fine; they'll be free to publish in, because they make their money through people buying subscriptions to read your paper. That's why a lot of you who are at a university or in hospitals have institutional access: your institution and your library pay a lot of money to these journals so that you can read unlimited papers. Um, but yeah, the licence that's offered is just copyright, so you have CC BY, CC BY-NC and CC BY-NC-ND. That doesn't matter too much at this stage; the main reason it's mentioned is because of funding. So for example, for my PhD, they require CC BY-NC; that's just what they need to fund the publication APC. And you've got to check that stuff in advance, because if not, then you've got to start carefully picking and choosing what journals are appropriate for you. If, say, your funding doesn't cover the licence a journal offers, you've got to find an appropriate journal: I have the full list, but then you've got to ignore the ones that don't have these criteria met. So the more you look into journals, it's not just what's relevant, it's also what you can afford. There are these often conflicting requirements you need to cover, and it's important to think about them early. Your institution or hospital will often have funds for this stuff. You can always speak to a librarian team; they'll be able to guide you. If you're on a funded programme, like a PhD or some kind of grant, then again, you can speak to them and they'll probably allow you to pay for this using that, provided the licences and so on are met. So for example, for my PhD, they will only pay for it if it's in an open access journal and the paper is available publicly within six months.
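That picking-and-choosing step is really just filtering a list on two conditions, APC budget and required licence. Here's a minimal Python sketch; the journal names, APCs and licences are invented purely for illustration, not real figures.

```python
# A sketch of narrowing down journals by funder requirements.
# All names, APCs and licences below are made up for illustration.
journals = [
    {"name": "Journal A", "apc_gbp": 2500, "licences": {"CC BY", "CC BY-NC"}},
    {"name": "Journal B", "apc_gbp": 4800, "licences": {"CC BY"}},
    {"name": "Journal C", "apc_gbp": 1900, "licences": {"CC BY-NC-ND"}},
]

def eligible(journal, budget_gbp, required_licence):
    """Keep only journals the funder will actually pay for."""
    return (journal["apc_gbp"] <= budget_gbp
            and required_licence in journal["licences"])

shortlist = [j["name"] for j in journals if eligible(j, 3000, "CC BY-NC")]
print(shortlist)  # → ['Journal A']
```

Journal B is priced out and Journal C offers the wrong licence, so only one candidate survives; doing this early saves you submitting somewhere your funder won't pay for.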
So hybrid journals, I don't think they'd pay for, because they'd just say, well, just publish closed access; they want it to be a fully open access journal, ideally. But yeah, your funding will specify, so please check. And then journal statistics, the final thing to cover. So BMJ Open again; we looked at it previously, so I just copied and pasted it. The acceptance rate is just how many papers they take out of what gets submitted. 36% is actually pretty good; most journals are in a 10 to 20% kind of range, a lot lower. So 36% tells you that these guys are actually quite keen to take work regardless of where it comes from, provided it's of sufficient quality. There is also some kind of selection bias, I assume, because you wouldn't submit to the BMJ if you didn't think your paper was good, so I reckon a lot of not-so-great papers just find themselves elsewhere. And that's relevant, because if you're submitting somewhere with a 5% acceptance rate, you need to think: they reject 19 out of 20; does that mean I need to be really careful about how good my paper is? Maybe you do. That would always make me a little bit cautious. The other thing to think about is time to decision. That tells you how fast the journal will handle your work. So let's go through this. Time to decision without review applies in two cases: either your work doesn't need a review, so it's something like a letter to the editor that just doesn't need to be peer reviewed, or it's going to be sent back to you before peer review because you've missed something at the editorial stage. In that case, they'll get back to you in four days, which is really good; that could be the same week. Awesome. And then the second part is time with review.
Pretty much every systematic review and meta-analysis will go through this, because I've never seen a journal that doesn't peer review systematic reviews and meta-analyses. It's just a thing that needs to be done; it's very appropriate methodologically for a journal to verify whether your work was any good, so it makes sense. So they're saying they can get your work back to you with a decision from review within 3.5 months, 105 days. That's actually pretty quick; I always tell people to expect at least three months, maybe closer to five or six, so 3.5 is fairly fast. But do bear in mind this will double, because your work will very rarely be accepted on the first decision. There will always be comments raised by peer reviewers, and if there are comments, it's very likely the journal will say, address the comments and send it back to us, if they think the work is worth it. So when peer reviewers mark your work, they'll give comments and then be asked to give a judgment: reject, accept, or accept subject to peer review comments. If they have any substantial comments about your work, they'll say accept subject to comments. Reject just means they didn't like the paper at all; it has something wrong with it methodologically, or maybe it's not original work. And accept first time is very rare, because it's very unlikely to have work so well crafted that you've covered everything. Your peer reviewers are a third-party source; they're going to see things differently to you. I'd say 90% of papers don't get accepted on the first decision, if not more.
So because of that, you'll then have to go back, redo your edits, amend your paper accordingly, respond to peer review and send it back, and then once the peer reviewers verify that you've answered all their questions and they're happy with it, they'll send the work to the journal team, who will then say yes or no again. So you've got to go through this cycle most likely twice; we're talking 210 days, which is seven months, and that's after your first draft is ready, then your second draft, then fully finalized. And then at the very end, acceptance to publication. That is essentially just a little process: your work gets converted from a Word document to the fancy PDFs that you see when you click on the Lancet or BMJ or whatever. That process is called proofing. It takes them a bit of a while to get everything typeset and put into that fancy format, and sometimes they have to redraw or edit your figures and tables. Then they'll send it to you as a proof, which just means you proofread it and verify you're happy with what they've done, and then they upload it, send it out for indexation, things like that. So 21 days for that process is about reasonable; I always say expect it to be nearly a month or thereabouts, so 21 days is pretty good. But all in all, just do some quick maths, right? 105 times two, plus 21, is 231 days. So that's pretty much eight months after you finish your work, and your systematic review itself will probably take six to nine months to write. So from the conception of your work to publication, we're talking at least a year, maybe more, depending on the journal and other factors.
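That quick maths can be written out as a couple of lines of Python, using the figures above (the eight months of writing time is the midpoint of the six-to-nine-month range, just as a working assumption):

```python
# Rough submission-to-publication timeline, using the figures above.
days_per_review_cycle = 105   # journal's stated time to decision with review
review_cycles = 2             # first decision almost always asks for revisions
proofing_days = 21            # typesetting and proof checks

submission_to_publication = (days_per_review_cycle * review_cycles
                             + proofing_days)
print(submission_to_publication)  # → 231 days, roughly eight months

writing_months = 8            # assumed midpoint of the 6-9 month writing time
total_months = writing_months + submission_to_publication / 30
print(round(total_months))    # → 16, i.e. well over a year end to end
```

The point of laying it out like this is that each term is something you can look up per journal, so you can redo the estimate before you pick where to submit.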
I'm trying to illustrate that this is very important, because you need to have that timescale in your head and you need to know the journal will actually live up to it, because if you have a journal that says our average time to decision is 180 days, that slows everything down. So you've got to think about this stuff in advance. But yeah, peer review. I've touched on this already a little, but I want to go into it properly. Peer review is a vital step: it ensures your paper is actually any good, it ensures that you add value to the literature. Typically you get two or three reviewers; seven, like at the Lancet, is not normal, it's really not. They will essentially decide whether your work is accepted, but that can be overridden by the editor-in-chief of the journal and the editorial team; they can choose to take a paper even if the reviewers don't like it, or vice versa. And if the journal thinks your work is of interest, they'll ask you to respond to the reviewers' comments and resubmit. So I want to show you peer review feedback I got for a manuscript of mine, just to show you what kind of things they say. This is for a review we did on discharge pathways for patients going through cardiology for TAVI. Essentially we compared early discharge versus normal discharge pathways and what the difference is. So the peer reviewers asked: in the case of what you're doing, what are the criteria? Does early discharge make any difference? And what do we define early discharge to be? We'd answered question two, because obviously if you do a meta-analysis you can show whether there's any difference in mortality, morbidity, post-op complications, that kind of thing. But can we identify what criteria should be used to select patients?
That's tough, because, as they've said here, we probably have the information but don't present it. What they mean is: when you did your systematic review, could you extract this stuff from the papers that talked about pathways? Probably. So they're saying, maybe you have it. And finally: did you have any comments about expanding the criteria? Was that something mentioned within the studies themselves? Maybe. So they're saying, methodologically, these are questions you should answer and think about, and that's actually really important; it teaches you a lot. The interesting thing is they show you that selection bias in discharge and structure matters. Early discharge patients are probably highly selected, so it looks like early discharge results in better outcomes, or at least doesn't result in bad outcomes. They're right, because the act of discharging someone doesn't improve their care. The reason those patients are discharged early is that they actually got well quicker, which is a good thing, but we didn't do anything to get them well faster; they just happened to recover anyway and we sent them on their way sooner. So what we're testing is not whether early discharge makes things better; it's just whether it's safe. So the wording was not right, and the reviewer rightly flagged that up. Then the balance of benefits: again, we were studying retrospective data. This one's a bit annoying, but again, they have a point. And then different cutoffs for early discharge: that's because different studies define it differently. So they're asking, why, and did you not include this analysis? Well, we didn't, because it wasn't appropriate; it didn't meet the inclusion and exclusion criteria. But that's where we get to respond, because we don't have to say that every single thing they say is right.
We have to respond in accordance with what we think is best, because the peer reviewers, at the end of the day, didn't do the systematic review. Remember, they're only reading the manuscript and commenting on it from their fresh perspective, so they might not know everything, and as a result we have to respond. So this is what we said. I said, well, listen, you've made some salient points, but it's challenging to interpret, number one, because early discharge is different across every single study; there's no consistent practice and there are no guidelines. We can't thoroughly examine this because the criteria don't exist. And I said, we believe this is a pertinent area for future articles to explore. So sometimes peer reviewers have a point, but I can't do anything about it; I've just got to say, I'm sorry, I can't fix that, it is what it is. But if I don't make this rebuttal and argue it, we're never going to know, right? Another one: this is a review we submitted on wellbeing; this was published like two months ago, just another example. Here, sometimes the points are really pedantic, right? They said you've used the word "understand", which overstates the possibility of such a review. Ironic, but anyway. So maybe it would be better to just say "delineate", "describe" or "outline", because a review can't really claim to give you understanding. Pedantic, but fair enough, right? If that's how you feel, if we're going to say that research papers can only present information and understanding is not part of what a paper can achieve, valid, we'll change it. And we did. The key thing I want to point out here is that when we change text in the manuscript, we copy and paste it into the response document, so that the editorial team and the peer reviewers can see specifically what we've done and why.
That way there's transparency, because otherwise I could just change nothing, tell them I've changed everything, and the manuscript just goes through, right? The whole point of this is to be as transparent as possible about how we are improving our science and how we are communicating it in the most effective way to the audience. That's very important, because otherwise what's the point of a peer review process? We'd just lie our way through it. Some of these things can feel very silly, but they're important. Also, not everything is a negative you have to respond to. Here, for instance: "a well structured, well presented plan", "it's considered international perspectives", "the methods are clear and appropriate". That's a good thing. We could respond and say thank you, but you don't actually have to respond to that, because they're being nice to you; they're acknowledging the good things in your paper. Not everything they say is negative; there are also parts where they're reviewing your work and saying, that was fine, you did a good job there. What I'm trying to illustrate is that a peer reviewer is not out to just reject your paper; they're giving you an honest assessment of what they perceive it to be from what they're reading. And that's pretty much it, to be honest. Once you've dealt with this and cleared it, your paper will most likely be accepted and proceed from there. I imagine this is a very under-discussed topic; not many people comment on it much. But especially for you guys on the mentorship scheme, if you have any questions and want to go through this with me, I'm happy to take questions and happy to stick around and help you get through this process as much as possible.
But yeah, I'm going to stop sharing so I can look at questions. Cool. OK, I can't see any questions other than the whole thing about my slides. So if everyone's happy and you're good to go, then fantastic. It's been awesome to speak to you guys, and honestly, I had a lot of fun doing this whole series. I hope that from these talks we've covered everything in systematic review and meta-analysis that is immediately relevant. If anything else crops up, I'll probably end up doing a few follow-up sessions about things like conferences and a few other bits and pieces; we will fix that. And secondly, for the people doing the mentorship programme, feel free to revisit these sessions and come back to them as and when you need them, because you're going to be doing these steps pretty soon. So hopefully that's all helpful, and yeah, thanks again for coming, everyone.