Critical Appraisal Questions for Evidence-Based Practice: Is the Study Necessary and Are the Results Valid?

Critical Appraisal of Research Studies: Are the Results Valid?

Critical Appraisal Question: Are the Results Valid? Photo by Mikhail Pavstyuk on Unsplash.com

Critical appraisal skills are essential for the nurse, regardless of role or clinical expertise. In the last blog post, I introduced the topic of critical appraisal in evidence-based practice (EBP), explained its purpose, and outlined its steps. In this post, I'll dive into the first two major questions you'll ask: Is the study necessary? And if it is, are the results valid?

Before You Appraise – Ask, Is the Research Study Necessary?

There are three major questions to ask when appraising a research study. But before you start to appraise a study, you want to know if it is worth reading in the first place! So you have to ask a preliminary, but important, question: Was the study necessary?

“New research should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence” (Ian Chalmers and Paul Glasziou, as cited in Wright & Mahtani, n.d.).

To decide if the study is worth your time, decide if it was necessary in the first place. To do that, ask if the study question is important; that is, will the answer to the research question(s) help fill a gap in the science?

You have to either know the scope of the available studies on your topic or do a literature search to find out whether existing evidence can already answer the research question you are about to read about. But that is hard and time-consuming. That's why systematic reviews are so valuable: they summarize, narratively or quantitatively, the research on a specific topic to see whether the “answer” has been identified or more research needs to be done. (More on systematic reviews in future posts.)

So instead you can ask yourself, Does the researcher’s question address patient or client problems? Will the results impact clinical or professional practice? Will the answer provide patient benefits or protect patients from harm? Will the results affect patient or client outcomes? Will the results help patients and families navigate the healthcare system? 

There are a lot of questions you can ask, but the bottom line is that the study needs to address a question that has value to the healthcare system, healthcare providers, and/or patients and families.

Where do you find the answer to determine if the study is important to conduct? 

In the introduction to the paper!

In the introductory sections, the researchers provide the reader with information about the problem they are interested in studying, why it is important, and the research that has been done to date. This is where the researchers justify their study to the reader. You have to decide if they made a good enough case for you to continue reading the paper.

The introduction is the beginning of the research paper. It may be clearly labeled Introduction (or not, because many editorial styles omit this heading) and might also have a section titled Background and/or Review of the Literature. Regardless, the sections before the discussion of the research methods are there to give the reader context about the problem itself, why the problem needs to be studied, and how the research questions will fill a gap in the science.

Note that in many research studies, especially in nursing research, a theoretical framework will be explicated to explain to the reader how the research design is guided by a particular theory. This, too, will appear before the Research Methods section.

This introduction and background information gives the reader a context for the impact of the problem on the patient, healthcare provider, healthcare system, and society as a whole. What is the problem? Why is it a problem? Who gets this problem? What is the prognosis? How is the problem treated? How does this problem affect the patient, caregivers, healthcare providers, and the healthcare system? How much does the problem cost in patient burden, economics, time, effort, resources, and so on?

Remember, the researchers are telling you why this topic is a problem and why they are motivated to study the problem. So, ask yourself: Has the researcher made a good case for why the problem is a problem? If yes, continue reading. 

Either interwoven in the introductory section or as its own section, the researchers should tell you about the research that has already been conducted on this problem. The Review of the Literature should concisely summarize what is known: what studies have been done to solve this problem and what they found. More importantly, it should discuss what is unknown: what still needs to be solved? This is where you should see a connection between the purpose of this research study and the gap in the science that needs to be filled!

You will frequently see the purpose statement near the end of the introductory sections, right before the Research Methods section. There may be a statement that concludes the section with Therefore, this study aims to ..., The purpose of this research study is to ..., or The questions this study aims to answer are: ... (or a list of research hypotheses).

Read the problem statement/purpose – does it make sense with what they’ve just told you about the problem and what part of the problem still needs to be studied? Is the question clear and focused? Can you envision how the results could be important to your practice? 

Now, ask yourself: Is the researcher addressing a research question that is necessary to fill in the gaps in the science? If you can say yes at this point, the study question is important and you can move on to the work of critically appraising the study itself.

The Most Important Critical Appraisal Question:  Are the Results Valid?

Make a Decision: Are the Results Valid? Photo by Green Chameleon at Unsplash.com

Not all published studies are methodologically sound, which is why you must critique, or critically appraise, the methods the researcher used to design and conduct the study. Validity of results refers to whether the study was well designed and rigorously conducted; that is, whether it adhered to research ethics and specific research protocols to ensure the accuracy of the data collected. Rigor is the term used to describe data collection accuracy in qualitative research designs, and internal validity is the term used for quantitative research (Fawcett & Garity, 2009). A research study (qualitative or quantitative) with many threats to rigor or internal validity does not give us the confidence to use the evidence in practice.

This question is, therefore, at the heart of critical appraisal, because if the study methods are not internally valid, if they are not rigorous, then you CANNOT BELIEVE THE RESULTS!

Reading Dr. Trisha Greenhalgh’s book, How to Read a Paper was a lightbulb moment for me. 

Dr. Greenhalgh wrote something that I've never forgotten and that I have drilled into my EBP students: Read the Methods First! Don't look at the actual results or the conclusions or the implications before you read the methods section of the paper. Until we take the time to review the research methods, we don't even care what the results or implications are! (Dr. Greenhalgh actually says to read the methods first to decide whether to read the study, but I think you need to establish that the study is worth reading before you make the effort to critique the methods.)

The only way I can trust the results of a study, to the extent possible from any published research study, is to know that the researcher was aware of and deliberate in controlling for potential threats to internal validity. Common threats to internal validity include selection bias, history, testing effects/reactivity/sensitization, instrumentation, maturation, statistical regression, attrition/mortality effects, biases on the part of subjects/researchers/data collectors/outcome assessors, confounding variables, treatment diffusion, and statistical conclusion errors. Interactions between and among these threats are also common (Fawcett & Garity, 2009).

The researcher has to tell me what they did and I have to decide whether there is bias because of what they did or didn’t do.

That’s where your basic research training kicks in. All that learning about different research designs, sampling methods, sample size, power analysis, data collection methods, and data analysis methods (statistical tests, alpha levels, etc.) is to help you make sense of the research you are evaluating! 

The more rigorous the study methods, the less the potential for bias to have affected the results. Will there be flaws? Yes, most likely (all research has flaws), but hopefully not so many that they make you question the results. And if there are many flaws, that information is important to your appraisal, too.

So Where Do You Start When Appraising Research Methods? 

You start with the section of the research paper/article titled Research Methods or Materials and Methods (the actual wording will vary with the journal, but you get the picture). This is the section where the researcher tells you the details of how they designed and conducted the study; many will give you the rationales for their choices, too. Realize that you won't get every detail, because journal manuscripts have page limits, but you should get enough to critique the validity of the methods.

Now I know what you are thinking (especially if you are an undergraduate nursing student), I don’t know enough to critique research methods! But you probably do. That’s what nursing research class is all about – and why you take nursing research again in graduate school.

You learn about basic critiquing in research class. Every nursing research text I’ve ever used has specific questions to ask, at the end of each chapter, to critique the types of studies you are learning about. The chapters on quantitative and qualitative studies outline the specific types of questions to ask about the research methods specific to these study designs. These chapters are a good place to begin to remind you of the questions you should be thinking about when reading a research study. 

The question, Are the results valid? is assessed by considering what threats to internal validity or biases may be present in the study; that is, were the methods rigorous enough that we can believe the results? The stronger the methods, the more confident you can be that the results are valid.

Even though there are many different study designs, the major critical appraisal questions are the same: Is the research study necessary? If yes, Are the results of the study valid? What are the results? (Are the results important?) And finally, Can I apply the results to my patient? The subquestions under each of these major questions will change depending on the type of study being critiqued.

I would suggest that as you read a research article, you identify the methodological flaws. Keep the list in your head or write it down. I highlight important parts of the study and make notes in the margins as I read. If the list grows and you start to think, wow, there are a lot of problems with these methods, then stop reading.

I can’t talk about all of the subquestions to ask of all the different types of studies in this one post, so I’ll give you an example of which subquestions to ask of the most rigorous experimental study: the randomized controlled trial (RCT). I’ll talk about other designs in future posts.

Treatment/Intervention Studies

Studies about treatment, prevention, and harm outcomes need to determine cause and effect, and RCTs are considered the highest level of evidence for cause and effect because of the rigor of the design.

Randomized controlled trials (RCTs) earn that position because the design affords the highest level of control: participants are randomly assigned to groups; “something” is done to one group and not to the other; there is a control group against which to compare the results of the intervention; and the researcher attempts to control as many variables as possible.

The bottom line is to be able to say, with confidence, that the intervention caused the outcome and NOT something else (a confounding or extraneous variable, for example). The preferred study design for questions of therapy is, therefore, the RCT.
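To make random assignment to parallel groups concrete, here is a minimal Python sketch. It is purely my illustration, not anything from a real trial: the participant IDs, the seed, and the randomize function are all made up for the example.

```python
# A minimal sketch of random assignment to parallel groups.
# Everything here is hypothetical; a real trial would use a
# pre-specified, concealed allocation sequence.
import random

def randomize(participant_ids, seed=2024):
    """Shuffle participant IDs and split them into two parallel groups."""
    rng = random.Random(seed)      # fixed seed so the example is reproducible
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

groups = randomize(range(1, 41))   # 40 hypothetical participants
print(len(groups["intervention"]), len(groups["control"]))  # -> 20 20
```

On average, assignment like this balances both known and unknown characteristics across the groups, which is exactly what several of the appraisal questions below probe.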

RCTs are used to determine the effect of an intervention by using parallel groups (experimental and control). However, not all RCTs are well conducted, nor are they always reported in the literature with the level of detail needed to critique the quality of the study; therefore, it can be difficult to decide whether the results can be believed and used in clinical practice. The good news is that most medical journals now require researchers to use standardized guidelines, such as the CONSORT (Consolidated Standards of Reporting Trials) statement, to provide more transparency in reporting these types of studies (http://www.consort-statement.org/). The CONSORT guidelines help the researcher include the information about rigor and internal validity that you need when appraising these studies.

In experimental designs, the researchers aim to minimize bias so that competing explanations of the results can be ruled out and the study outcomes can be attributed to the intervention or treatment alone.

The subquestions under the major question, Are the results of the study valid?, for a therapy or intervention study are therefore focused on teasing out whether bias, confounding factors, or chance played a significant role in the reported results.

When we assess a study and decide that bias has been minimized, then any difference in results must be due either to a real effect of the treatment or to chance.
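To illustrate the "or to chance" part, here is a minimal sketch using a two-sample t-test. The outcome values are invented for the example, and the scipy library is assumed to be available.

```python
# A minimal sketch: a two-sample t-test asks how likely a difference
# this large would be if the treatment truly had no effect.
# The outcome values below are made up for illustration.
from scipy import stats

control = [142, 138, 150, 145, 139, 147, 151, 144]
intervention = [131, 128, 136, 133, 127, 135, 138, 130]

t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"p = {p_value:.4f}")  # a small p-value makes chance an unlikely explanation
```

Interpreting numbers like these belongs to the next major question, What are the results?, which is the subject of the next post.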

What Subquestions Help Determine, Are the Study Results Valid?

To ascertain the internal validity of an RCT for a therapy/harm/prevention study, ask the following questions (write down the answers on a piece of paper or use a Therapy Study critical appraisal tool [coming soon!]):

  • Were the intervention and control groups similar at the start of the study? (Did they begin the study with a similar prognosis?) This question reminds us that if the groups are not similar to start with, we cannot draw valid conclusions that apply to the population as a whole.
  • Were the patients randomized? The point of randomization is to make the groups as equal as possible, in all known (and unknown) factors at the start of the study, so that only the receipt of the intervention is the difference between the groups. It doesn’t always work, by the way, but it does increase the likelihood that the groups will be balanced in these factors.   
  • Was the randomization concealed? Concealing the allocation sequence, so that those enrolling and assigning patients cannot foresee upcoming assignments, helps to decrease selection bias in the study. (Blinding after assignment is addressed below; blinding everyone involved is not always possible.)
  • Were patients analyzed in the groups to which they were randomized? (Was an intention-to-treat analysis performed?) Intention-to-treat (ITT) is the principle that all patients are analyzed in the groups to which they were originally assigned, even if they didn’t really get the treatment assigned, were noncompliant with the therapy, or withdrew from the study. This is an advanced concept; so for now, if the researchers tell you they did this, take their word for it. More on the ITT principle in a future post. 
  • Were the groups shown to be similar in all known determinants of outcome, or were analyses adjusted for differences? Researchers should report group characteristics in detail (such as in a table or chart) and either tell you that the groups were comparable (and you have to take their word for it), give you a p-value to show you the groups were not significantly different, or tell you that they adjusted for any differences in the groups through the use of statistical techniques. The more similar the groups the better. 
  • Did the intervention and control groups retain a similar prognosis after the study started? 
    • Were patients aware of group allocation? Patients who know their allocation may change their behavior or respond according to their expectations of the intervention (the Hawthorne and placebo effects), and that can affect study results.
    • Were clinicians aware of group allocation? Clinicians not blinded to group allocation can care for patients differently and that difference can affect outcomes.
    • Were outcome assessors aware of group allocation?  Outcome assessors not blinded to group allocation can be prone to surveillance or detection bias, which can affect outcomes.
    • Were the groups treated equally (except for the experimental intervention)? Patients in the parallel groups should be treated the same; the only difference should be that one/certain groups got the experimental intervention, according to the research plan. 
  • Was follow-up complete? (What was the attrition rate?) We know that when patients drop out of a study, the outcomes may be affected. Researchers should tell the reader how many patients were included in the analysis. Make note of the difference between the number who started the study and the number who completed it. Those lost to follow-up should be fewer than 20% (though that percentage may still be too high, depending on the prevalence of the problem under study); a quick way to check attrition is sketched after this list.
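To make the follow-up check concrete, here is a minimal sketch of the attrition calculation; the enrollment numbers are hypothetical.

```python
# A minimal sketch of the attrition check (hypothetical numbers).
def attrition_rate(n_randomized, n_analyzed):
    """Proportion of randomized participants lost to follow-up."""
    return (n_randomized - n_analyzed) / n_randomized

rate = attrition_rate(n_randomized=120, n_analyzed=93)
print(f"Lost to follow-up: {rate:.1%}")   # -> Lost to follow-up: 22.5%
if rate > 0.20:
    print("Flag: exceeds the 20% rule of thumb; scrutinize the results")
```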

If the answers to all or most of these questions are yes, then you can conclude that the study results are valid. 

Bottom line: If the methods are suspect, you CANNOT BELIEVE the results and, therefore, should not waste your time reading the rest of the paper. Go find another study!

Take Heart!

You will always have some questions about the methods or the research in general, whether you are a research novice or an expert. But that's okay! You just want to make sure that there aren't major problems with the research design or conduct of the study, or enough little flaws to make you wonder whether the data collected are accurate and thus whether you can believe the results. As a novice at critical appraisal, you will probably be more critical at first, but this is something you will learn and get comfortable with as you practice critiquing and critical appraisal.

Once you are satisfied with the rigor of the research methods, you can move on to examining the actual results of the study for significance. Stay tuned — that’s the next post!

How to Cite this Blogpost in APA*:

 

Thompson, C. J. (2017, November 14). Critical appraisal questions in evidence-based practice: Is the study necessary and are the results valid? [Blog post]. Retrieved from https://nursingeducationexpert.com/critical-appraisal-results-valid

*Citation should have hanging indent

References

Fawcett, J., & Garity, J. (2009). Evaluating research for evidence-based nursing practice. Philadelphia, PA: F. A. Davis.

Greenhalgh, T. (2014). How to read a paper: The basics of evidence-based medicine (5th ed.). Oxford, UK: Wiley Blackwell/BMJ Books.

Wright, D., & Mahtani, K. R. (n.d.). Adding value in research 2: Appropriate research design, conduct and analysis. National Institute for Health Research. Retrieved from http://www.cebm.net/wp-content/uploads/2016/05/pillar-2-identifying-research.pdf