By Clara M. Bosco, Marshall H. Chin, William F. Parker

As clinicians, we share a common goal of doing what is best for our patients, and this obligation extends to critically evaluating the tools we adopt into our practice. Clinical algorithms, which we define as “mathematical models that aid decision-making,”1 are ubiquitous in healthcare, influencing everything from diagnosis to treatment. These algorithms can enhance decision-making by providing insights that surpass the limits of individual human memory and cognition. Now, with the integration of artificial intelligence (AI), algorithms are becoming increasingly tailored to individual patients through the incorporation of nuanced data, such as genetic profiles, social drivers of health, and real-time physiological metrics.
While algorithms are foundational to clinical reasoning and may become more beneficial as technology advances, they can also fail our patients. Carelessly designed algorithms, or those built explicitly to increase the profits of healthcare companies, can exacerbate existing biases and widen health disparities. Algorithms should serve patients, not profits. To provide the highest standard of care, we must demand best practices in algorithm design, grounded in transparency and accountability, so that healthcare technologies work in the service of patients and advance health equity. This article explores the evolution of algorithms in healthcare, highlights examples of both their benefits and their biases, and proposes actionable strategies to ensure their ethical and equitable use in clinical practice.