A recent study conducted by researchers at Vanderbilt University Medical Center found that artificial intelligence (AI) can help doctors identify patients at risk for suicide, potentially improving prevention efforts in routine medical settings.
The study tested the Vanderbilt Suicide Attempt and Ideation Likelihood model (VSAIL) in three neurology clinics. The VSAIL model, developed by Colin Walsh's team at Vanderbilt University Medical Center, analyzes routine information from electronic health records to calculate a patient's 30-day risk of suicide attempt.
Neurology clinics were chosen for the study because certain neurological diseases and conditions are associated with a higher risk of suicide. According to La Razón, the study involved 7,732 patient visits over six months, resulting in a total of 596 automated screening alerts for suicide risk during regular clinic visits.
As detailed in 20 Minutos, the researchers compared two approaches to flagging individuals at risk of suicide: automatic pop-up alerts that interrupted the doctor's workflow versus a passive system that displayed risk information in the patient's electronic chart. Telex reported that doctors conducted suicide risk assessments in response to 42 percent of the interruptive alerts, while the passive system prompted assessments only 4 percent of the time.
"Most people who die by suicide have seen a healthcare provider in the year before their death, often for reasons unrelated to mental health," said Colin Walsh, associate professor at Vanderbilt University Medical Center.
"The automated system flagged only about 8% of all patient visits for screening. This selective approach makes it more feasible for busy clinics to implement suicide prevention efforts," said Walsh, as reported by Telex. "Universal screening isn't practical everywhere, but VSAIL helps us focus on high-risk patients and spark meaningful screening conversations," he added.
"These results show that automated risk detection, combined with targeted alerts, can make a difference," noted the authors of the study, who see the approach as a way to identify and support more individuals in need of suicide prevention services.
"Healthcare systems must balance the effectiveness of interruptive alerts with their possible downsides," concluded Walsh. Medical Dialogues noted that while the interruptive alerts were more effective at prompting screenings, they could contribute to alert fatigue by overwhelming doctors with frequent automated notifications. The researchers recommend that future studies examine this concern, and they suggest testing similar systems in other medical fields to improve risk detection and evaluation, as reported by Analytics India Magazine.
The VSAIL model proved effective at identifying patients at high risk: Medical Dialogues noted that in earlier prospective testing, in which patient records were flagged but no alerts were fired, one in every 23 individuals flagged by the system later reported suicidal thoughts.
This article was written in collaboration with generative AI company Alchemiq