Op-Ed, Advancement of EM, Ethics

AI in EM: The Byte-Sized Benefits and Glitches

Artificial intelligence (AI) is increasingly being used in emergency medicine and critical care to improve clinical decision-making, patient outcomes, and operational efficiency.

AI algorithms have shown superior diagnostic accuracy in conditions like stroke, sepsis, cardiac arrest, and COVID-19. AI can alleviate physician burnout by automating administrative tasks and optimizing resource allocation. Furthermore, AI can generate personalized treatment plans based on vast amounts of patient data, potentially mitigating health care bias.

However, the use of AI in health care raises concerns about data privacy and security, accountability for errors, transparency in decision-making, and the potential to perpetuate and amplify existing biases.

Addressing these challenges may involve developing robust data security measures, creating transparent AI systems, and establishing guidelines to identify and correct biases in AI data and algorithms.

Introduction
AI is rapidly permeating various sectors, including business, security, and health care. More specifically, the use of AI in emergency medicine and critical care has been steadily increasing over the years.1 AI is being used to augment clinical decision-making, improve patient outcomes, and enhance operational efficiency. However, its adoption is not without obstacles. This article explores the advantages and disadvantages of utilizing AI in emergency medicine.

Advantages of Using AI in EM
As physicians, our job is to heal. Of course, this is easier to do when our diagnoses are correct. AI algorithms have demonstrated superior performance in diagnosing certain medical conditions compared to human clinicians. AI has shown significant promise in diagnosing stroke, sepsis, and cardiac arrest more accurately — crucially important in the emergency department, where time is of the essence.2 Likewise, AI has demonstrated higher diagnostic accuracy than radiologists in identifying COVID-19 on CT scans3 and has achieved 90.8% accuracy in diagnosing COVID-19 pneumonia.4 This improved diagnostic accuracy can greatly impact those “time is brain” or “time is muscle” moments in the emergency department by allowing us to initiate appropriate treatment quickly.
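To make figures like the 90.8% above concrete: a reported “accuracy” is one of several metrics derived from a diagnostic confusion matrix, and sensitivity often matters more in time-critical ED presentations. The sketch below uses invented counts (not numbers from the cited studies) purely to show how the three metrics are computed.

```python
# Toy illustration with invented counts — not data from the cited studies.
# Shows how accuracy, sensitivity, and specificity are derived from a
# diagnostic confusion matrix (true/false positives and negatives).

def diagnostic_metrics(tp, fp, fn, tn):
    """Return (accuracy, sensitivity, specificity) as fractions."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total        # overall fraction correct
    sensitivity = tp / (tp + fn)        # true-positive rate: sick patients caught
    specificity = tn / (tn + fp)        # true-negative rate: healthy patients cleared
    return accuracy, sensitivity, specificity

# Hypothetical screening results for 1,000 CT scans:
acc, sens, spec = diagnostic_metrics(tp=180, fp=40, fn=20, tn=760)
print(f"accuracy={acc:.1%} sensitivity={sens:.1%} specificity={spec:.1%}")
# → accuracy=94.0% sensitivity=90.0% specificity=95.0%
```

Note that a model can post a high headline accuracy while still missing a clinically unacceptable fraction of true cases, which is why sensitivity is reported separately.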

Physician burnout has been attributed to many factors, but evidence shows that administrative burden significantly contributes to this issue.5 AI may play a role in solving emergency medicine’s burnout crisis. Documenting work and communicating with staff members account for 55% of emergency physicians’ time on shift, while only 25% is spent directly caring for patients.6 AI can streamline workflows in emergency departments by automating administrative tasks and optimizing resource allocation. AI algorithms can predict patient flow, which helps reduce overcrowding,7 improve patient care,8 and increase staff satisfaction.9 AI can also substantially enhance operational efficiency in prehospital environments and triage, which ultimately helps facilitate better flow in the emergency department.10
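As a toy sketch of the patient-flow prediction mentioned above: even the simplest forecasting approach, a trailing average of recent hourly arrivals, illustrates the basic idea of anticipating demand so staffing can be adjusted. The arrival counts here are invented, and real systems use far richer models.

```python
# A minimal sketch, with invented data, of next-hour ED arrival
# forecasting via a trailing moving average. Real patient-flow models
# are far more sophisticated; this only illustrates the concept.

def forecast_next_hour(arrivals, window=3):
    """Predict next-hour arrivals as the mean of the last `window` hours."""
    recent = arrivals[-window:]
    return sum(recent) / len(recent)

# Hypothetical hourly arrival counts over a shift:
hourly_arrivals = [8, 11, 9, 14, 12, 13]
print(forecast_next_hour(hourly_arrivals))  # mean of 14, 12, 13 → 13.0
```

A department could compare such a forecast against current staffing each hour and flag anticipated surges before the waiting room fills.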

Treating patients holistically has become more prominent in contemporary medical practice. With the understanding that patients do not all fit into certain molds or categories, the incentive to create personalized treatment plans has been integrated into our health care system. AI systems can analyze vast amounts of data to generate personalized treatment plans, considering individual patient characteristics, comorbidities, and past medical history.11

In a survey of Canadian physicians’ expectations of AI in emergency medicine, most respondents felt that AI would likely be able to complete personal therapy/medication plans. One current focus of using AI in emergency medicine is to do just that.12 This utilization of AI can further help emergency physicians deliver higher quality, more efficient health care to their patients, regardless of background, past medical history, or other patient characteristics. If algorithms are designed and calibrated correctly, this may even help mitigate bias in health care, especially for at-risk populations.13

Disadvantages of Using AI in EM
Advancements in technology and security systems have aided in keeping hackers away from patients’ private data. Nevertheless, every system has its faults, and data breaches have continued to rise in health care systems.14 AI systems require large amounts of data, thus raising concerns about patient privacy and data security. There are potential risks of data breaches and misuse of private patient information.15 Weaknesses in a system’s digital security can be detrimental because these inadequacies are typically only found after a breach.16 Designing AI systems with data privacy concerns in mind during software development is thus extremely important.

A separate yet equally important topic is liability. Given that artificial intelligence cannot be held legally responsible, the question of “Who is responsible?” becomes essential to ponder when AI makes an error.17 Possible solutions include creating a framework for monitoring errors — and, perhaps, predicting them before they occur — as well as designing AI systems from the outset with flawed logic in mind and basing algorithms on the most up-to-date practice guidelines.

Transparency in health systems is extremely valuable to patients. Using AI in high-risk environments like the emergency department, and trusting its output, demands a high degree of accountability.16 AI algorithms, particularly deep learning models, are often described as “black boxes” due to their lack of explainability. This can be an issue in clinical settings where understanding the “why” behind a diagnosis or treatment decision is crucial.18 The operation of an AI system can also be difficult to comprehend for those with limited technical expertise. This lack of understanding can make it difficult for physicians and patients alike to trust the implementation of AI in health care delivery.19

All humans — including physicians, advanced practitioners, and other clinical staff — have implicit biases. AI systems can perpetuate and amplify existing health care biases if they are trained on biased data. Information currently available for AI coding is modeled from human behavior and decision-making. No current guidelines or frameworks are in place to report and fix these biases when they are discovered.20 If these biases are amplified by continuous feedback generated from AI systems, this could lead to disparities in care for certain patient populations.21
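The mechanism described above can be shown with a deliberately simplified toy example: if one patient group was historically under-diagnosed in the training labels, a frequency-based predictor learns, and then carries forward, that disparity, even when the true prevalence is equal across groups. All data here are synthetic.

```python
# A toy illustration (synthetic data, deliberately simplified) of how a
# model trained on biased labels reproduces that bias: group "B" was
# historically diagnosed half as often in the recorded labels, so a
# frequency-based predictor learns a lower rate for group "B".

from collections import defaultdict

def learn_rates(records):
    """Learn P(diagnosed | group) from historical (group, diagnosed) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [diagnosed, total]
    for group, diagnosed in records:
        counts[group][0] += diagnosed
        counts[group][1] += 1
    return {g: d / n for g, (d, n) in counts.items()}

# Synthetic history: equal true prevalence, but group "B" under-diagnosed.
history = [("A", 1)] * 40 + [("A", 0)] * 60 + [("B", 1)] * 20 + [("B", 0)] * 80
rates = learn_rates(history)
print(rates)  # the learned model carries the historical disparity forward
```

If such a model’s outputs then feed back into future diagnostic decisions, the gap can widen over time — which is the amplification concern raised above.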

Conclusion
Artificial intelligence in emergency medicine offers significant potential to revolutionize patient care, enhance diagnostic capabilities, and streamline the overall operational efficiency of the emergency department. Quickly and accurately diagnosing conditions, mitigating physician burnout, and personalizing patient treatment plans are among the major advantages of AI integration.

However, this technological advancement is not devoid of challenges. Concerns regarding data privacy and security, the logic of AI applications, lack of transparency in AI decision-making, and the potential for AI to perpetuate and amplify existing biases in health care remain significant obstacles to its full integration into our current system.

Although AI promises to be a game-changer in emergency medicine, these challenges must be systematically addressed. This may involve developing robust data security measures, creating transparent AI systems, and establishing guidelines to identify and correct biases in AI data and algorithms. As we move toward a more technologically advanced future in health care, we must ensure that these systems are safe, reliable, and equitable for our patients.


References

  1. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. doi:10.1136/svn-2017-000101
  2. Shouval R, Fein JA, Savani B, Mohty M, Nagler A. Machine learning and artificial intelligence in haematology. Br J Haematol. 2021;192(2):239-250. doi:10.1111/bjh.16915
  3. Mei X, Lee HC, Diao KY, et al. Artificial intelligence-enabled rapid diagnosis of patients with COVID-19. Nat Med. 2020;26(8):1224-1228. doi:10.1038/s41591-020-0931-3
  4. Harmon SA, Sanford TH, Xu S, et al. Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets. Nat Commun. 2020;11(1):4080. doi:10.1038/s41467-020-17971-2
  5. What is physician burnout? American Medical Association. Published February 16, 2023. Accessed September 4, 2023. https://www.ama-assn.org/practice-management/physician-health/what-physician-burnout
  6. Füchtbauer LM, Nørgaard B, Mogensen CB. Emergency department physicians spend only 25% of their working time on direct patient care. Dan Med J. 2013;60(1):A4558.
  7. Chenais G, Lagarde E, Gil-Jardiné C. Artificial intelligence in emergency medicine: viewpoint of current applications and foreseeable opportunities and challenges. J Med Internet Res. 2023;25(1):e40031. doi:10.2196/40031
  8. Chen M, Decary M. Artificial intelligence in healthcare: An essential guide for health leaders. Healthc Manage Forum. 2020;33(1):10-18. doi:10.1177/0840470419873123
  9. Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA. The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care. Nat Med. 2018;24(11):1716-1720. doi:10.1038/s41591-018-0213-5
  10. Chang H, Cha WC. Artificial intelligence decision points in an emergency department. Clin Exp Emerg Med. 2022;9(3):165-168. doi:10.15441/ceem.22.366
  11. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books; 2019.
  12. Eastwood KW, May R, Andreou P, Abidi S, Abidi SSR, Loubani OM. Needs and expectations for artificial intelligence in emergency medicine according to Canadian physicians. BMC Health Serv Res. 2023;23:798. doi:10.1186/s12913-023-09740-w
  13. Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. npj Digit Med. 2023;6(1):1-3. doi:10.1038/s41746-023-00858-z
  14. Murray-Watson R. Healthcare Data Breach Statistics. The HIPAA Journal. Published online 2023. https://www.hipaajournal.com/healthcare-data-breach-statistics/
  15. Price WN, Cohen IG. Privacy in the age of medical big data. Nat Med. 2019;25(1):37-43. doi:10.1038/s41591-018-0272-7
  16. Naik N, Hameed BMZ, Shetty DK, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg. 2022;9:862322. doi:10.3389/fsurg.2022.862322
  17. Tigard DW. There is no techno-responsibility gap. Philos Technol. 2021;34(3):589-607. doi:10.1007/s13347-020-00414-7
  18. Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28(3):231-237. doi:10.1136/bmjqs-2018-008370
  19. Smith H. Clinical AI: opacity, accountability, responsibility and liability. AI & Soc. 2021;36(2):535-545. doi:10.1007/s00146-020-01019-6
  20. Nelson GS. Bias in artificial intelligence. N C Med J. 2019;80(4):220-222. doi:10.18043/ncm.80.4.220
  21. Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring fairness in machine learning to advance health equity. Ann Intern Med. 2018;169(12):866-872. doi:10.7326/M18-1990
