LegalReader.com  ·  Legal News, Analysis, & Commentary

Health & Medicine

Inaccuracies in AI Diagnosing Can Have Harmful Results


— January 8, 2024

Study reveals that AI can be beneficial in assisting with diagnosis if inaccuracies can be mitigated.


The utilization of artificial intelligence (AI) tools by doctors for diagnosing patients can lead to inaccuracies due to the inherent biases embedded in these tools. Despite efforts to promote transparency in explaining how AI makes predictions, a recent study published in JAMA suggests that this transparency does not effectively address the issue of potential bias in AI-assisted diagnoses.

The significance of this issue is heightened by the increasing role of AI in diagnosis and treatment, which makes it necessary to identify and correct models developed with flawed assumptions. For instance, if an AI model is trained on data that consistently underdiagnoses heart disease in female patients, it may learn to perpetuate this bias, resulting in underdiagnoses in females, as highlighted by the researchers. The concern is to ensure that AI models are unbiased and do not distort medical decision-making.

In the study, approximately 450 healthcare professionals, including doctors, nurses, and physician assistants, were presented with various cases of patients admitted to the hospital with acute respiratory failure. The clinicians were provided with information about the patient’s symptoms, physical examinations, laboratory results, and chest radiographs. Their task was to assess the likelihood of pneumonia, heart failure, or chronic obstructive pulmonary disease. To establish a baseline, all participants initially evaluated two cases without any input from an AI model.


Subsequently, they were randomly assigned to evaluate six more cases with input from an AI model, including three cases with systematically biased model predictions. The study revealed that the clinicians' diagnostic accuracy on their own was 73%. When presented with predictions from an unbiased AI model, their accuracy improved by 2.9 percentage points, and when also provided with an explanation of how the model reached its prediction, it increased by 4.4 percentage points over the baseline. When the model's predictions were systematically biased, however, those same explanations did not effectively counteract the bias.

These findings indicate that AI tools can enhance diagnostic accuracy when used in conjunction with healthcare professionals, provided the risk of inaccuracies is kept in check. The Biden administration has expressed plans to create protections for the future of artificial intelligence. The Department of Health and Human Services is also establishing an artificial intelligence task force to oversee the use of AI-enabled technologies currently deployed by hospitals, insurance companies, and other healthcare enterprises.

An executive order directs the U.S. Department of Health and Human Services to create an AI task force within a year. This group is tasked with formulating a strategic plan, including policies and potential regulatory measures, for the responsible implementation of AI and AI-enabled technologies in the healthcare sector, one that reduces inaccuracies while still allowing AI to work alongside the human workforce. The plan will cover areas such as research and discovery, drug and device safety, healthcare delivery and financing, and public health.

Recently, lawmakers have started exploring ways to put this directive into action, particularly its implications for healthcare. Senator Roger Marshall, a Republican from Kansas who is also a physician, cautions against overregulating AI in healthcare, warning that efforts to correct inaccuracies must not stifle innovation. He acknowledges the positive impact of artificial intelligence and machine learning on healthcare over the past five decades and urges a careful approach to rulemaking to avoid hindering progress.

Senator Edward Markey, a Democrat from Massachusetts and the chair of the subcommittee, expresses concerns about the potential harms and exacerbation of existing inequities if AI is not properly regulated in healthcare. He highlights the need for guardrails to ensure the responsible and ethical use of AI. Markey emphasizes the lessons learned from the tendency of big tech to prioritize profit over people when left to self-regulate and stresses the importance of regulating artificial intelligence to avoid repeating similar mistakes.

Sources:

AI guardrails can fall short in health care: study

Measuring the Impact of AI in the Diagnosis of Hospitalized Patients

Biden to HHS: Create an AI task force to keep health care ‘safe, secure and trustworthy’
