Rapid Research Review: Research Bias

Whether you are evaluating research in order to incorporate a new study into your practice or you are designing a project yourself, it is important to be aware of the many forms of bias that can creep into the methodology of an otherwise solid research project.

These biases can come in many shapes and sizes and, unsurprisingly, can severely undermine the validity of an otherwise robust study. Bias and study errors can also arise at different points along the timeline of a study, including during the recruitment of participants, the conduct of the study itself, and the interpretation of the study data. Here is a quick summary of some of the common biases to be aware of when you are appraising or designing a clinical study.

Selection Bias: occurs with non-random sampling or treatment allocation of subjects such that the study population is not representative of the target population. Common strategies used to combat selection bias include appropriate randomization of participants and ensuring the appropriate choice of comparison/reference group.

Example
Patients selected as cases/controls (eg, from a hospitalized population) are generally less healthy than the general population; therefore, a study involving this population cannot be used to draw conclusions about the general population (also known as “Berkson bias”).
Example
Patients lost to follow-up may have a different prognosis from those who completed the study (also known as “attrition bias”).
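
To make “appropriate randomization” concrete, below is a minimal Python sketch of permuted-block randomization, one common way to keep treatment and control groups balanced as participants are enrolled. The function name, arm labels, and block size are illustrative assumptions, not part of any specific trial protocol or software package.

```python
import random

def block_randomize(participant_ids, block_size=4, seed=None):
    """Assign participants to 'treatment' or 'control' using permuted blocks.

    Permuted-block randomization keeps group sizes balanced over time,
    which helps ensure the comparison groups are drawn from the same
    underlying population (one safeguard against selection bias).
    This is a simplified sketch, not a validated allocation system;
    block_size should be even.
    """
    rng = random.Random(seed)
    assignments = {}
    arms = ["treatment", "control"]
    for start in range(0, len(participant_ids), block_size):
        block = participant_ids[start:start + block_size]
        # Each block contains an equal mix of arms, shuffled into random order.
        labels = (arms * (block_size // 2))[:len(block)]
        rng.shuffle(labels)
        for pid, arm in zip(block, labels):
            assignments[pid] = arm
    return assignments

if __name__ == "__main__":
    ids = [f"P{i:03d}" for i in range(1, 13)]
    print(block_randomize(ids, block_size=4, seed=42))
```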

Recall Bias: occurs when a participant’s awareness of a disorder/disease alters their ability to recall associated exposures or factors. Recall bias is effectively reduced by decreasing the time from exposure to follow-up.

Example
Patients with disease recall exposures after learning of similar cases (ie, a patient with breast cancer is more likely to recall past carcinogenic exposures when compared to a patient without cancer).

Measurement Bias: occurs when information collected for use as a study variable is inaccurate. Measurement bias is effectively reduced when investigators use objective, standardized, and previously tested methods of data collection. The use of a placebo group can also be used to further decrease the chance of measurement bias.

Example
Using an inaccurate blood pressure cuff to measure blood pressure in patients with hypertension.

Procedure Bias: occurs when subjects that have been allocated to different groups are not treated the same (apart from the variable being studied). Blinding of participants/investigators and the use of placebo are both viable strategies that can effectively reduce the chance of procedure bias.

Example
Patients in the treatment group receive more specialized attention compared to patients in the placebo group.

Observer-Expectancy Bias: occurs when the investigator's belief in the efficacy of the treatment changes the outcome of that treatment (also known as the Pygmalion effect). Blinding of participants/investigators and the use of placebo are both viable strategies that can effectively reduce the chance of observer-expectancy bias.

Example
An observer who is expecting the patients in the treatment group to show signs of recovery is more likely to document findings associated with positive outcomes.

Confounding Bias: occurs when a factor associated with both the exposure and the outcome distorts the apparent effect of the exposure on the outcome. This most commonly occurs when a confounding variable is not controlled for in the study design. This is also the bias behind the adage “correlation does not imply causation”, meaning that just because two factors appear to be correlated does not mean that one is a direct result of the other. Strategies used to reduce confounding bias include multiple/repeated studies, crossover studies (participants act as their own control), and matching (ensuring that patients have similar characteristics in all groups).

Example
A study concludes that there is an association between drinking coffee and lung cancer. However, it fails to recognize that coffee drinkers also smoke more, which accounts for the apparent association.
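
To see how an uncontrolled confounder can manufacture an association, here is a minimal Python simulation of the coffee/smoking/lung cancer example above. All of the probabilities are made-up assumptions chosen only to illustrate the mechanism: smoking raises both coffee drinking and cancer risk, while coffee itself has no effect.

```python
import random

def simulate(n=100_000, seed=0):
    """Toy cohort for the coffee / smoking / lung cancer example.

    Assumed (made-up) probabilities: smokers are more likely to drink
    coffee AND more likely to develop lung cancer; coffee has no effect
    on cancer risk. Smoking is the confounder.
    """
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        smoker = rng.random() < 0.30
        coffee = rng.random() < (0.80 if smoker else 0.40)
        cancer = rng.random() < (0.10 if smoker else 0.01)  # independent of coffee
        rows.append((smoker, coffee, cancer))
    return rows

def cancer_rate(rows, coffee_value):
    subset = [r for r in rows if r[1] == coffee_value]
    return sum(r[2] for r in subset) / len(subset)

rows = simulate()
# Crude (unstratified) comparison: coffee "appears" to raise cancer risk.
print("coffee drinkers:", round(cancer_rate(rows, True), 3))
print("non-drinkers:  ", round(cancer_rate(rows, False), 3))
# Stratifying by smoking status removes the apparent association.
for smoker in (True, False):
    stratum = [r for r in rows if r[0] == smoker]
    print(f"smoker={smoker}:",
          round(cancer_rate(stratum, True), 3), "vs",
          round(cancer_rate(stratum, False), 3))
```

Running this shows a higher crude cancer rate among coffee drinkers that disappears once the comparison is stratified by smoking status, which is exactly what matching or adjustment is intended to accomplish.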

Lead-Time Bias: occurs when early detection is confused with an increase in survival rates. Essentially, detecting a disease early makes it seem like survival rates have increased, although the natural course of the disease has not changed. The best way to reduce the chance of lead-time bias is to measure survival rates by adjusting for severity of disease at the time of presentation.

Example
A new screening test detects prostate cancer much earlier than previous screening tests, leading to a greater survival rate at 5 years. However, because survival is measured from the time of diagnosis, survival has not actually improved; the cancer has simply been detected earlier.
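
The arithmetic behind this example can be made explicit with a small sketch; the ages below are hypothetical numbers chosen only to show how moving the diagnosis date changes 5-year survival without changing when the patient dies.

```python
def five_year_survival(diagnosis_age, death_age):
    """True if the patient is alive 5 years after diagnosis."""
    return (death_age - diagnosis_age) >= 5

# Hypothetical natural history: the patient dies at age 72 regardless of
# when the cancer is detected.
death_age = 72
old_test_dx_age = 70   # detected late, 2 years before death
new_test_dx_age = 65   # detected early, 7 years before death

print("Old test, 5-yr survival:", five_year_survival(old_test_dx_age, death_age))  # False
print("New test, 5-yr survival:", five_year_survival(new_test_dx_age, death_age))  # True
# The death age never changed; only the diagnosis clock started earlier.
```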

Length-Time Bias: occurs when a screening test preferentially detects diseases with a long latency (slowly progressive) period, because diseases with a shorter latency period tend to become symptomatic before screening can detect them. As a result, screen-detected cases appear to have a better prognosis. Studies in which participants are randomly assigned to "screening" and "no screening" groups can help identify and reduce this form of bias.

Example
A slowly progressive cancer is more likely detected by a screening test than a rapidly progressive cancer (ie, screening tests may not be as useful for detecting rapidly progressive pathologies).
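
As a rough illustration, the following sketch simulates a single screening test applied to cancers with long versus short preclinical (latent) periods; the latency values and time window are arbitrary assumptions. Slowly progressive disease spends more time in the detectable-but-asymptomatic window, so it is far more likely to be the disease a screen actually finds.

```python
import random

def screen_detection_rate(latency_years, n=50_000, onset_window=20.0,
                          screen_time=10.0, seed=0):
    """Fraction of cancers caught while still preclinical by one screen.

    Each cancer starts at a random time and remains screen-detectable
    (preclinical) for `latency_years` before becoming symptomatic.
    A single screen at `screen_time` catches it only if the screen falls
    inside that preclinical window. All numbers are illustrative.
    """
    rng = random.Random(seed)
    caught = 0
    for _ in range(n):
        onset = rng.uniform(0.0, onset_window)
        if onset <= screen_time < onset + latency_years:
            caught += 1
    return caught / n

print("slow cancer (5-yr latency):  ", screen_detection_rate(5.0))
print("fast cancer (0.5-yr latency):", screen_detection_rate(0.5))
```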

References

  1. Le T, Bhushan V. First Aid for the USMLE Step 1 2020. 30th anniversary ed. New York: McGraw-Hill Medical; 2020.
  2. Catalogue of Bias Collaboration, Nunan D, Bankhead C, Aronson JK. Selection bias. In: Catalogue of Bias. 2017. http://www.catalogofbias.org/biases/selection-bias/
  3. Catalogue of Bias Collaboration, Spencer EA, Brassey J, Mahtani K. Recall bias. In: Catalogue of Bias. 2017. https://www.catalogueofbiases.org/biases/recall-bias
  4. Catalogue of Bias Collaboration, Mahtani K, Spencer EA, Brassey J. Observer bias. In: Catalogue of Bias. 2017. https://www.catalogofbias.org/biases/observer-bias
  5. Catalogue of Bias Collaboration, Bankhead C, Aronson JK, Nunan D. Attrition bias. In: Catalogue of Bias. 2017. https://catalogofbias.org/biases/attrition-bias/
  6. Catalogue of Bias Collaboration, Aronson JK, Bankhead C, Nunan D. Confounding. In: Catalogue of Bias. 2018. www.catalogueofbiases.org/biases/confounding
