Computer generated transcript
Warning!
The following transcript was generated automatically from the content and has not been checked or corrected manually.
On the critical appraisal of research studies. Before starting, I just want to introduce who we are and what we are doing. We are a group of doctors living in Birmingham: myself, Yasin, Carlos and Janine. Unfortunately Janine is working at the moment, on the geriatrics ward I think. So we have four doctors from Turkey and one from Spain, who has almost completed his first year here. We all struggled through our first year as international medical graduates, so we sat down and thought we could create a teaching course focused on the needs of doctors who are new to the UK and the NHS, to address the problems we had when we first started. We have had previous sessions on how to find and apply for jobs, how to prepare CVs and how to apply for GP training. Today's session is on the critical appraisal of research, and Carlos and I will be presenting. It will look like quite a basic presentation, because it is an introduction to critical appraisal and research and we will not go into great detail, but we will have workshop-style sessions later where we can get hands-on practice, going over a research paper and trying to see what is missing and what the strengths are. I just want to check that the online attendees can hear me. OK, that's great, and my presentation is visible as well.

So today's session is titled "Enhancing skills for evaluating research and critical appraisal", and it is based on the critical appraisal of medical and biomedical research. For those attending in person, please register using this QR code; you will get a feedback form and then an attendance certificate. By the end of this session we are aiming for you to be able to understand the basic concepts of critical appraisal, to identify the key sections of a research paper and their purpose, to evaluate a research question and say whether it is a good research question or a weak one, to assess whether the study design fits the research question, and to judge whether a result is of high or poor quality. Overall it will hopefully improve your critical thinking and help us evaluate and improve evidence-based medicine.

The presentation will start with the definition of critical appraisal and why it is important, particularly for doctors. We will then go over the different research types and study designs, and which design is appropriate for which question. We will go over quality checklists and how to assess the quality of a research paper. We will quickly discuss the structure of a research paper, which sections it includes and what to look for in each section. And finally we will talk briefly about biases.

So what is critical appraisal? In simple terms, it is the systematic examination of a research paper to understand whether the results it offers are really meaningful, really valid, and applicable to our clinical setting and clinical practice.
But what makes it different from just reading a paper is that it is done in a systematic way: it has been studied what makes a paper good quality and what the typical flaws are. It is particularly important for physicians because we need to practice evidence-based medicine, and the GMC also states that as doctors we need to be competent in basing our practice on the best research evidence. And there has been a significant increase in research and publication in the biomedical area. This graph is from PubMed data: in 2023 alone, around one million papers were published in PubMed. That is a very large number, and there is an ongoing, significant amount of research. Unfortunately some of this research is of low quality and does not really add much to our practice. That is why, as doctors, we need to understand how to read a paper, how to critically appraise it, and how to differentiate good quality papers from poor quality ones.

As I mentioned, with critical appraisal we are looking at the validity and reliability of research, we are able to identify flaws, and we also develop our own research practice: it shows us how to do research and how to improve our research methodology. It also increases transparency and accountability. And by doing critical appraisal as a doctor you also develop your decision-making, because there are many different results out there and you need to make decisions based on different results and different data.

What are the key questions in critical appraisal? These are broad questions to think about when you first read a paper. The first question is relevance: is this research relevant to your field, why did the researchers need to do this study, and what was the research question? That is one of the most important things, because there is a great deal of research and sometimes the findings are not relevant to your field or do not tell you anything new. A related question is: does this study add anything new to the evidence in my field? This matters both as a reader and as a researcher; when you are doing a project it is really important to ask whether it adds anything new to the field.

If the research question seems fine and relevant to your field or your interest, you then need to look at the research design. Broadly, we have two types of research: primary and secondary. Primary research is research where you run an experiment or collect data yourself; secondary research includes systematic reviews and meta-analyses, where you interpret the existing data and literature evidence and draw conclusions from it. Carlos will go into detail about these research designs and study types. The next thing, once you have identified what kind of study it is, is to consider whether that study design is appropriate for the research question, because that determines the value of the result: if the design is not appropriate for the question, whatever was found may not be relevant or valid at all.
Another question you always need to carry in your mind is whether the statistical analysis was performed correctly. I know we are not statisticians, and although we had classes in medical school, most of us do not understand much beyond whether the p-value is significant or not. But by using guideline checklists you can work out whether the statistical analysis means anything and whether it matters for clinical practice. Another question you always need to ask is: do the data justify the conclusions? For example, we sometimes see descriptive studies that looked at the effect of some gene on some disease and then draw a general conclusion, but the sample size is far too small. So that is something to watch: whether the researchers can really make that interpretation from that sample size and those data. Again, every study has some sort of bias; some studies have a significant amount, some have less, but it is important that the researchers discuss this in the methodology and explain how they tried to minimize those biases. Of course, you also need to check whether there is any conflict of interest. Some authors may have conflicts of interest; the important question is whether they are declared, because if they exist they should be declared. It is also important to know whether the study was funded by someone, especially research funded by pharmaceutical companies, and to keep that in mind when considering the results. And always consider the ethical aspects and check whether they have the appropriate ethical approvals. Now Carlos will talk about the pyramid of evidence and the study designs.

Thank you very much. So, the pyramid of evidence is one of the tools we can use to start assessing whether a study is relevant to our field and whether it should influence our practice a lot or only a little. The pyramid of evidence basically ranks the strength of medical research, based on the potential for that research or that paper to influence how you practice medicine. It is what is called a heuristic, which means it is a practical solution to the problem, not a perfect solution. As we have mentioned, critically analyzing research is quite a complicated thing and this is a very simple tool, so it is good guidance but it is nowhere near perfect, and not something we should treat as set-in-stone criteria. The pyramid of evidence is also not an exactly agreed-upon thing: this example is from Procter and Gamble, but there are over 80 different pyramids of evidence with slight tweaks and modifications, although the main criteria are well agreed upon. The pyramid divides papers into secondary studies, primary studies and other publications. Secondary studies are studies that base themselves on pre-existing evidence: clinical guidelines, meta-analyses and systematic reviews. These are papers where previously existing evidence is analyzed for quality and then conclusions are drawn. Clinical practice guidelines are the papers that guide our practice the most, and just after that, when we do not have a clinical guideline,
then a meta-analysis or a systematic review is what we can trust the most. As I mentioned, this is not a perfect ranking: how well designed a paper is determines whether it is more or less valuable. You can find a randomized controlled trial that is poorly designed and actually less relevant to your practice than a cohort study, or, very frequently, a systematic review that is in principle slightly weaker than a meta-analysis but is performed so well that, like the well-known example of Cochrane reviews, it is considered to be as good as a practice guideline.

On to study types. We can separate them into descriptive (non-analytical) and analytical studies, the latter including observational and experimental studies. Descriptive studies are case reports, case series and basic surveys. These studies present one or multiple cases of a specific disease. Their objective is to showcase unusual presentations which might later influence how research is done. They can be used to showcase unusual conditions that otherwise could not support a large trial, rare diseases or rare presentations being a good example, or to showcase unusual solutions: a very common example of a case report or case series is how to manage certain patients, or a certain problem, in unusual conditions such as lack of funding or lack of resources. Sorry, we are having a small technical problem with the live part; it could be the HDMI. Just a small second, please. OK, we have it back.

We can use case reports and case series as a form of expert opinion, but we cannot use them to draw or quantify a relationship between factors and we cannot draw conclusions from them. They are not evidence that should guide our practice, but they may influence the way we do research. Analytical studies, by contrast, do attempt to quantify relationships between factors. They can be either observational or experimental. Observational studies investigate and record the effect of an exposure or an intervention, and they can be used to evaluate risk factors or observe outcomes. Experimental research is research where the researchers actively manipulate the exposure, the intervention or the control conditions. These are particularly useful where you need to create the conditions for your research, for example a new treatment or the use of a new drug.

Within the observational studies we have three types: cross-sectional, case-control and cohort studies. They are all observational, so they have factors in common. A cross-sectional study we can basically understand as a photograph, a snapshot of one specific population at one specific point in time. We take one pre-existing population and analyze it, but we do not follow it over time and we do not compare it to another population. This type of study can assess the prevalence of a disease or measure the exposure to a certain factor, so it can be used to say how frequent a disease is, or how frequent the exposure to a known or suspected risk factor is. Such studies are able to show a coincidence between a disease and an exposure, but they are not able to show a correlation, because we cannot see whether the exposure was happening before or after the disease; we cannot say that they are correlated.
We can only say that they coincide at the same point in time, which is already useful for guiding future research.

Case-control studies. In case-control studies we have two specific, distinct groups: we measure them at the start and then we follow both groups across time. This comparison makes us able to compare causal factors: it can show a correlation between a risk or an exposure and the later pathology, because we see the exposure at the beginning and the pathology afterwards. But it is not able to show causation; it is only able to show correlation. That is a very important distinction. You will hear very often that correlation is not causation, because it is very, very easy to interpret such research as showing causation when it is actually only showing a correlation.

Cohort studies are a step in between case-control and cross-sectional studies. In a cohort study you have one specific group and you follow it across time. In a case-control study you have two groups; in a cohort study you have one group, and your control is effectively the general population. Since you lack a control group, you compare against previously known data on the general population. So a cohort study is able to show a correlation, but it is not able to show causation, and it is not able to properly account for exposure to a risk factor: because the general population is already exposed to your risk factor, a cohort study cannot account for that properly unless your risk factor or exposure is quite unusual. Sorry, I think the online attendees cannot see the slides; let me share them again. Can you see the slide now? OK, good.

Experimental studies. We divide them starting with clinical trials. Clinical trials involve multiple populations with multiple interventions. The most common way to do a clinical trial is with two populations, one control and one with an intervention, but multiple interventions are possible, and no control group may be used when we are comparing two specific interventions. A clinical trial allows us to compare different treatments, different ways of handling a disease. It is able to show causation, it is the type of research which provides the most accurate data, and it is able to show superiority or non-inferiority when comparing treatments. Crossover studies are studies where patients participate in both the control and the intervention arms: patients take turns, first taking part in the control arm and then in the intervention, either blinded or not. To do this type of research you are basically running two clinical trials one after the other, and it requires a washout period, a period during which the effects of the intervention wear off, so that the effects of an intervention do not carry over into the control period. It is also able to show superiority or non-inferiority between treatments. Experimental studies have significant limitations: they are very costly and, very importantly, they tend to raise ethical concerns.
Very often you are not able to trial a treatment without having ensured that it is ethical, by knowing that the risk of harm is small, or you may not be able to test a treatment against a placebo, because the placebo means receiving no treatment and that can pose serious ethical concerns.

Now I will explain the checklists. I will quickly tell you which checklist to use for which specific study type. I will not go into detail at all; I just want to familiarize you with the names and with where to look when you are critically appraising a research paper. As Carlos mentioned, there are different study designs, and for each design you need to look for different things, especially in the methodology and analysis sections. If you are looking at a paper reporting a randomized clinical trial, you need to go into detail, and the statistics of an RCT can be really difficult to follow, whereas the statistics of a cross-sectional or cohort study are sometimes quite easy to understand. Fortunately, various initiatives have created guidelines and checklists setting out what features a given kind of paper should have, together with recommendations.

For observational studies, that is cross-sectional, cohort (retrospective or prospective) and case-control studies, you can use STROBE. There are other checklists as well, but I am presenting the most common ones. You can see an example of STROBE here: the checklist contains recommendations, and you can check whether the paper you are reading has those features. For randomized trials the most commonly used checklist is CONSORT, and for meta-analyses and systematic reviews it is PRISMA. These checklists are quite straightforward, but if you are not familiar with the concepts you might have difficulties, which is the reason we are doing this session: we want to make you familiar with the concepts, so that when you are reading a paper with a checklist in hand you can understand what the checklist is asking you to check. For case reports there is CARE, and, something I did not know before, there is also a checklist for quality improvement studies called SQUIRE. If you are doing qualitative research there are SRQR and COREQ; I am familiar with COREQ, which is a good checklist as well.

How do you find this information if you do not know it? There are guiding resources that collect these tools for checking the quality of healthcare research. The EQUATOR Network is one of them, and the CASP checklists are another such initiative. The Cochrane Handbook is always a good guide for meta-analyses and systematic reviews, and the JBI collaboration is another initiative you can check. It is also worth understanding how these checklists were created, because creating a checklist has research behind it too.

Now I will briefly go through the structure of a scientific paper. I know most of us have read papers and we know how a paper should look: a paper includes the title, introduction, methods, results, discussion, and the references or literature cited. But when you have the paper in front of you, you need to do more than just read it through.
You want to look at the publishing journal and the year, because in medicine especially, more recent research may be more relevant to your question given ongoing improvements and developments, and you also need to check the authors, their institutions and the funding. What I used to do before learning about critical appraisal was just to read the abstract and try to decide whether the paper was relevant to me; I never checked the publishing journal, the authors or the conflicts of interest, and that is not the right approach. You need to consider those aspects as well.

When you look at the title, what should a title include, and does the title say anything about quality? I cannot say that it shows the quality, but there are some features a research title should have. I have included some recent articles from BMJ Open here, with different styles of title, so we can look at how titles are constructed. I think one of the important things is that the title should present the key focus, and it should present the population in some sense; not in detail, not necessarily the exact setting, but it should indicate the population. And although it is usually treated as optional, I believe the title should also state the study design, because then from the title alone you understand the key focus, the population and the study design. That is really helpful if you are doing a systematic review, where you may need to screen a thousand papers and the title matters a lot.

There are different types of title, and after some experience it comes down to preference: some people like certain title types, others prefer different ones. There are question-type titles, for example the first one here: "Is LDL cholesterol associated with long-term mortality among primary prevention adults? A retrospective cohort study from a large healthcare system." Sometimes titles make a claim; although there is no example here, it would be something like "Ramipril is superior to" some other kidney drug (it is fitting that I have a kidney drug here anyway). And sometimes the title is simply descriptive, such as "Ethiopian women's sexual experiences and coping strategies for sexual problems after gynecological cancer treatment: a qualitative study", which describes the study without giving further detail. It is usually recommended to avoid jargon. By jargon I mean the very specific niche terms used within a department: you can use abbreviations, but you should avoid abbreviations that are too specific to your own specialty, because in the end you want to attract a wide readership, so the title needs to be accessible and catchy. Usually authors avoid specific jargon, avoid overly specific abbreviations, or spell out the long names as well. And some people choose to use puns in their titles; this can be catchy for readers, though some argue it signals lower quality. I think that one is up to you.

Regarding the authors and the journal: after this check, I will go through the first paper in BMJ Open, the cardiovascular medicine one. I will use that paper as an example of how I approach it, without going into detail.
So when I see "Is LDL cholesterol associated with long-term mortality among primary prevention adults? A retrospective cohort study from a large healthcare system", this is a good title, because it tells me the focus and the primary outcome, and it also gives me information about the study design and the setting. It says LDL cholesterol, so I assume that is the exposure, since it is a cohort study; "associated with long-term mortality", which I assume is the primary outcome; "among primary prevention adults", the population; "a retrospective cohort", which is the study design; and "from a large healthcare system", which also gives me some idea of the setting.

As I said, the authors are always important, so you need to check the authors and where they are from. When I check these authors and their location, I see that they are from the USA. That matters, because if you are thinking about clinical relevance, a different society, culture or ethnicity might not show the same results for the question you are asking. This paper declared no specific funding, but it is important to check the funding too, whether there is any and what it is, and whether there is a conflict of interest and what it is.

The journal also matters. We know there are many journals and some of them are of really high quality, like the Lancet or Nature, and the BMJ is a good quality journal as well. So when you pick up a paper, ask yourself: have I heard of this journal before? We know there are many predatory journals, so this can give a first idea about the quality of the paper. But it can also create bias: if you see a paper from Nature you automatically assume it is good quality, yet we know that many papers get retracted from high-quality journals as well. The other important thing is whether the journal is appropriate for the publication, because specific specialties have their own journals, and the paper should fit the journal; you would not publish a general surgery case report in a public health journal, that would be rather odd.

So mostly you check the authors and the journal, and that gives you a first idea; then you usually have the abstract. Abstracts are usually quite straightforward, but it depends on the journal: some journals have specific templates and lengths for abstracts. Usually in an abstract you first see the aim of the study and the background, where the authors summarize their objectives and essentially summarize the research, then the materials and methods, the results and the conclusion. Here is an example, and in the upper right corner you can see the STROBE recommendations. STROBE is the checklist for observational studies, including cohorts, and this is a retrospective cohort study, so I am using it as the example. In this abstract, although the format is slightly different because it has subheadings (objectives, design, setting), you can see that they presented their objectives and clearly stated their design, and, what I like, they clearly said that their main exposure is the LDL-C categories and their main outcome is overall mortality; and they presented the results as well.
I will not go into the details, but you can quickly check the results and see whether there is statistical significance, because they presented the ratios and the confidence intervals, so you can quickly form an idea of what the paper is saying. What I usually do as well is go to the conclusion and get a summary of what they are claiming. And when you check against the STROBE recommendations, STROBE says that in an observational study, a cohort for example, the title and abstract should indicate the study design with a commonly used term in the title or the abstract. This study has "a retrospective cohort" in the title and also includes it in the abstract, so that is a positive sign of good quality. STROBE also says the abstract should give an informative and balanced summary of what was done: the abstract should not be overcrowded, and it should present the primary findings rather than every piece of data.

After the abstract, a paper has the background or introduction section. This is usually where the authors explain why they needed to do the research and what the previous literature shows, a short summary of the existing evidence, and usually in the last sentence or last paragraph they state their objectives or their clinical hypothesis. You may be aware that there are different frameworks for building a clinical hypothesis, and although this is not strictly part of critical appraisal, I think it is worth using them to check the clinical hypothesis and the research question. There are different frameworks, such as PICO (or PECO) and SPICE. PICO basically tells you what a clinical hypothesis or clinical question should include. P is for population: if you are looking at the prevalence of a disease, for example, which population are you looking at? I (or E) is the intervention or exposure: in an observational study where you did not intervene it is an exposure, and if there is an intervention or another external factor it is an intervention, but they play the same role, so the question should state what the intervention or exposure is. C is the comparison: you need to state what you are comparing against. And O is the primary outcome.

Let us look at that in the example paper. It is phrased as an objective rather than a formal hypothesis. The authors explain beforehand that LDL-C categories are commonly used in medical practice, that primary prevention medication is prescribed based on these categories and the LDL-C level, and that they feel the literature does not have enough evidence to support prescribing on that basis. So their aim is: "Within a large and real-world healthcare system, we evaluated the association between LDL-C and all-cause long-term mortality among primary prevention-type adults without diabetes aged 50 to 89 years." It states the population, primary prevention-type adults without diabetes aged 50 to 89; the exposure, which is the LDL-C categories; the comparison, which is between the different LDL-C categories; and the outcome, which is long-term mortality. So overall it gives the reader a good PICO-style statement of the clinical question and objectives.
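Since the paper reports its effect estimates as ratios with 95% confidence intervals, a quick way to read them is to check whether the interval excludes 1, which for a ratio measure corresponds to p < 0.05. Below is a minimal sketch of that rule in Python; the hazard ratio and standard error are illustrative numbers made up for the example, not values from the paper discussed in the talk.

import math

def ratio_ci(estimate, se_log, z=1.96):
    """95% CI for a ratio measure (e.g. a hazard ratio), computed on the log scale."""
    log_est = math.log(estimate)
    return math.exp(log_est - z * se_log), math.exp(log_est + z * se_log)

def significant_at_5pct(ci):
    """A ratio is statistically significant at the 5% level if its 95% CI excludes 1."""
    low, high = ci
    return not (low <= 1.0 <= high)

# Hypothetical hazard ratio and standard error of log(HR), for illustration only.
hr, se = 1.30, 0.10
ci = ratio_ci(hr, se)
print(f"HR {hr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, significant: {significant_at_5pct(ci)}")

This is why the confidence intervals alone are usually enough for the quick significance check described above, even when a p-value is not printed next to each estimate.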
Another abbreviation you may come across is SPICE: setting, population, intervention, comparison and evaluation. Both frameworks are there to help us build a good clinical hypothesis. Now Carlos will continue with the appraisal of the methods and results. Can you turn my microphone on for a second? OK.

To appraise the methodology, first you need a basic understanding of what the research is trying to accomplish. The research is trying to accomplish something, and the methodology is what allows it; if the methodology is not appropriate, then whatever is accomplished is not going to be what we are looking for. So to start, you need to be able to identify the main research question. This should be clearly stated somewhere in the article, usually in the materials and methods or in the introduction. The first thing you then need to think about is whether the study design is fit to answer it: is this a study capable of answering that question? If we are trying to answer something like causation and we are only using a couple of descriptive studies, that is not going to be feasible. The type of research being conducted should also be stated and described clearly. If a paper does not describe properly how things were done or what it is aiming at, that already looks quite bad in terms of whether the research was professionally made, whether there has been attention to detail and whether the methodology is appropriate.

One of the things that is very important to assess, and that we will assess right away when we start looking at a paper, is the participants: who is included, who is excluded, where were the participants sourced, and is the sample big enough? Whether the sample is big enough is a really complicated question to answer with an actual number, as it requires quite involved statistical calculations, but just using common sense we can judge roughly whether a study is underpowered or not. A study looking into a very common illness or a very common situation with, say, 50 people is probably quite underpowered; if we are looking at thousands, we can fairly easily say that is probably enough (a rough version of the calculation is sketched just below).

Another thing we can judge is whether the authors included strategies or protocols in their methods to limit the amount of bias. Bias is inherent and unavoidable to some degree, but an author can take steps to limit how much bias ends up in the paper, and these strategies tend to be explained by the authors and stated in the methodology. And is the outcome being measured relevant to the question being answered? You will quite often find a paper that claims to look at whether something treats an illness, but it is not looking at symptoms, at the patients' expectations, at life expectancy or at survival; it is looking at one specific, very particular inflammatory marker that might not actually be relevant to the outcome you are interested in.

When we look at the results, one of the first things we need to identify is the demographics. The demographics should be described in the results, and they should be similar between the different groups in the research, that is between cases and controls, or between the intervention and placebo groups.
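As a rough illustration of the sample-size point above, here is a minimal sketch of the standard two-group power calculation for comparing means. The effect size, alpha and power values are assumptions chosen for the example, not numbers from any study mentioned in the talk.

import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-sided, two-sample comparison of means.
    effect_size is Cohen's d (mean difference divided by the common SD)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # about 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))   # moderate effect: roughly 63 per group
print(n_per_group(0.2))   # small effect: roughly 393 per group

With a moderate effect you already need around 63 participants per arm, and with a small effect nearly 400 per arm, which is why a 50-person study of a common condition is usually underpowered, as Carlos says.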
The demographics should also be representative of your normal patient population. If the research was done with a young population, it might not be representative of a condition that mainly affects the elderly. If the population is very restricted because of the inclusion and exclusion criteria, it might not actually be the population you are treating and might not be representative of what you want; or it can be something like pediatric cases being compared with the adult cases your specialty actually handles.

Within the methodology: are the statistical tests appropriate? Do these tests answer the actual question proposed, and are they capable of doing so? Are confounding factors accounted for? And are the results appropriately displayed, or is information missing? A bit of a telltale sign that something is wrong with a study is when you start to see results but you do not see how they were obtained, or there is no explanation of how the authors came to that result, or percentages and numbers are thrown around without their significance attached. Those tend to be signs either of sloppiness or of trying to portray results as something they are not.

Another quite concerning thing is if there have been changes to the study after the data were collected. This tends to be frequent in studies that are looking to justify a particular thing, or in clinical trials that have measured everything under the sun and are looking for something that might shine and give some sort of an answer. This is what is called p-hacking and fishing. P-hacking is where you try every possible statistical analysis until you find one that gives you a p-value under 0.05, then stick to that and consider it to justify your results. Fishing is where you measure everything on the patients, find whatever happens to be significant, and pretend your study was always about that. For example, you look into patients with pneumonia, you analyze absolutely everything, something shows up as significant, and then you build a result, and a paper, around that. That is why it is important to look into whether there were any changes to the study: clinical research mainly tends to be registered before the research is done, so if there is a difference between what the trial was registered as and what it ended up being, that should be appropriately justified.

In the limitations and discussion part of a paper, the authors should acknowledge the limitations of their study. Self-criticism, or the lack of it, speaks to the authors' ability to understand why their paper might or might not be good and what biases it might have; even the best papers have limitations, and there should be criticism from the authors themselves. Another thing to look for is authors presenting non-significant data, or trying to argue that something is significant even though the paper does not show it. You will usually see this as data that is not shown to be significant but that the authors argue from without mentioning its significance, or as statistical significance being portrayed as clinical significance. Statistical significance, properly, is when we see a value with a p-value of less than 0.05 next to it; that is statistical significance.
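Carlos' description of fishing, measuring everything and keeping whichever outcome happens to look significant, can be illustrated with a small simulation. Everything here (sample sizes, number of outcomes, number of simulated studies) is invented for the illustration; the point is only that with 20 truly null outcomes, at least one will cross p < 0.05 most of the time.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_arm, n_outcomes, n_trials = 50, 20, 2000
false_positive_runs = 0

for _ in range(n_trials):
    # Both arms are drawn from the same distribution: there is no real effect on any outcome.
    control = rng.normal(size=(n_per_arm, n_outcomes))
    treated = rng.normal(size=(n_per_arm, n_outcomes))
    pvals = [ttest_ind(control[:, j], treated[:, j]).pvalue for j in range(n_outcomes)]
    if min(pvals) < 0.05:          # "fishing": keep whichever outcome looks significant
        false_positive_runs += 1

print(f"At least one 'significant' outcome in {false_positive_runs / n_trials:.0%} of simulated studies")
# Expected around 1 - 0.95**20, roughly 64%, even though nothing real is going on.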
Clinical significance is whether that result actually matters to your patients. For example, lowering your blood pressure by five millimeters of mercury after taking ten pills might be statistically significant but will not be clinically significant, and mixing those two things up tends to be a bit of a red flag. None of these things on their own discredits a paper, but when they start accumulating you can begin to have a well-founded suspicion that it is not a good quality paper. And, as mentioned before, two very common problems are drawing causation from data that only justifies correlation, and treating anecdotes as evidence, which is particularly common with case series and case reports.

Now, biases. Biases are systematic errors that distort the measurements, the investigation and the results in our papers. All papers have biases; the important question is how big they are. Are they big enough to make me disregard the results, or small enough that I can still trust them? Biases can be accidental, they can be inherent to the study design, or they can be introduced through negligence, when someone designs the paper so poorly that the inherent bias becomes too large.

The most common type of bias is selection bias. Selection bias pertains to how the population of a study was formed. In clinical trials, the difference between the two groups should ideally be only the one introduced by the researchers: the intervention performed, the drug given or the risk taken. Selection bias in clinical trials is minimized by randomization, and we can check that the randomization has been properly done by comparing the demographics of both groups; there should be no significant difference between them (a small sketch of this kind of baseline comparison comes after the biases section). Non-randomized trials you can really only judge with common sense; they tend not to be particularly good and their recommendations tend not to be particularly applicable. In observational studies it is very difficult to obtain two groups that you have not formed yourself as a researcher and that are nevertheless very similar except for the difference you are studying. So in observational studies researchers tend to control for the differences by making statistical adjustments. These are quite complicated statistical analyses where, from one large sample, the analysis draws smaller subgroups for both arms that are demographically identical. This requires a very large sample size and quite expert researchers. If we find significantly different populations, the most straightforward thing to do is to treat the results as invalid.

Recruitment and exclusion bias are subtypes of selection bias. Recruitment bias is where selection bias is introduced by differences in the population caused by poor recruitment. For example, when you are recruiting for a clinical trial, you might be advertising it on Facebook, or on a website for people who already have a particular disease or a particular interest; or, simply by advertising for a clinical trial, you are getting volunteers, and volunteers tend to be healthy, young and eager, so you are already filtering for people with certain characteristics. Exclusion bias is when you are excluding patients too aggressively.
You might, for example, be excluding patients with certain comorbidities, or excluding people with certain demographics. If you are sourcing your patients from those aged 20 to 68 years and you are investigating dementia, you probably have a very large bias. If you are investigating a heart problem and you are excluding people with CKD stage 2 and above, then you are already excluding a very large population that probably has a significant impact on your results. These are exclusion and recruitment bias, as types of selection bias.

Performance bias is a slightly harder bias to understand. It covers differences in the care received, or in the experience, between groups, or between the research group and the normal population. For example, if you have a clinical trial where individuals come to your research facility, they might be receiving much better care, and the personnel might be much more experienced, so this introduces multiple points where their experience differs from what the normal patient experiences.

Detection bias is differences in how the outcomes are measured. One of the ways detection bias is introduced is by using what are known as surrogate endpoints. A surrogate endpoint is where what is measured is not actually the thing you are researching but a stand-in for it. For example, researching pneumonia, you are not looking at the clinical diagnosis or outcome; you may be looking at the X-rays and the CRP values, so you end up looking at a surrogate endpoint. If that is not properly designed, it introduces bias, because you are filtering patients according to laboratory values that might not be entirely representative of what you are studying. Whenever you are measuring an effect, that is a patient's opinion, a symptom, function, or a psychological state such as anxiety, pain or mobility, it needs to be measured with appropriately validated scales: scales validated by previously existing research and accepted as a way to measure that thing. Even then, scales introduce some problems, since no scale is going to be perfect, but using validated scales allows better comparability between different studies, which becomes very useful the moment all of those studies are combined into a meta-analysis. So the use of scales also adds rigor to your data.

Another way to reduce these two types of bias, performance and detection bias, is blinding. Performance bias can be introduced by researchers providing more care to the intervention group, measuring twice, or paying more attention to those people; the moment you blind the study and the researcher does not know who belongs to which group, you start to reduce that performance bias. Detection bias is also reduced, because researchers can no longer overestimate or underestimate measurements when they know which group a patient is in. That is all from me. Thank you for listening.
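Carlos mentioned that one way to sanity-check randomization is to compare the demographics of the two arms and look for meaningful differences. Here is a minimal sketch of that kind of baseline comparison on simulated data; the variable names and numbers are invented for the example. As a design note, many statisticians prefer judging baseline imbalance by the size of the difference rather than by p-values, so treat the tests below as a rough screen of the sort described in the talk.

import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

rng = np.random.default_rng(1)

# Simulated baseline data for two trial arms (invented numbers, 150 patients per arm).
age_control = rng.normal(64, 9, 150)
age_treated = rng.normal(65, 9, 150)
sex_counts = np.array([[80, 70],    # control arm: female, male
                       [76, 74]])   # treated arm: female, male

# Continuous baseline characteristic: compare the means of the two arms.
age_test = ttest_ind(age_control, age_treated)
# Categorical baseline characteristic: compare the proportions in the two arms.
chi2, sex_p, dof, expected = chi2_contingency(sex_counts)

print(f"Age difference: {age_treated.mean() - age_control.mean():+.1f} years (p = {age_test.pvalue:.2f})")
print(f"Sex distribution: p = {sex_p:.2f}")
# In a properly randomized trial, large systematic differences here would be a warning sign.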
With this session, what we are aiming for is not that you learn every bias; we are not statisticians, and maybe not that experienced in the research area. The point is that when you read a paper you keep a critical mindset: what you are reading might not be the real result, or might not be a valid result, something might be wrong, and it is always possible that something is missing. The core idea of critical appraisal is that whatever you read, you always need to appraise it critically and reflect on whether the results are valid and, if they are valid, whether you can use them in your clinical practice. As Carlos said, there are levels of evidence, but sometimes you cannot find a guideline or a meta-analysis for your topic or your research question, and you only have some cohort studies or case reports; at that point, when you are making a clinical decision, you need to use your own appraisal to make sense of the literature.

I also recommend the book "How to Read a Paper". It is really basic and it gives you a good understanding of how to approach research. And I believe this is important for another reason: if you are a researcher, reading research papers critically will improve your own skill set, because when I read a paper now I can say "that does not sound right" or "I should have done that a bit differently". So it develops your skills as a researcher as well.

Thanks for coming, and thank you for listening. We will have further sessions on this topic where we focus on specific types of research and on the checklists, and we will appraise papers together: one session on observational studies, one on randomized clinical trials, and hopefully one on systematic reviews. After that we will focus on how to write a manuscript and how to create a project, so more of a researcher's perspective. I think the feedback form has already been shared; we would be happy if you could fill it in, and you will get your certificate afterwards. Let me share the feedback form here as well. Did the in-person attendees get the feedback form? I think they will have if they registered; otherwise, if you registered for the in-person session, I can email you the feedback form and you can get your certificate afterwards.