Commentary: Using AI to Prevent Suicides? Bad Idea, Facebook.

According to the Centers for Disease Control and Prevention, suicide was the 10th leading cause of death among all age groups in the U.S. in 2015. In fact, the U.S. suicide rate is at a 30-year high, and suicide is a leading cause of death worldwide, claiming approximately 1 million lives annually.

In response to this global epidemic, Facebook recently announced that it is rolling out artificial intelligence to detect posts that suggest suicide risk. The AI scans for words associated with suicide risk and for comments such as “Are you OK?” or “Do you need help?” When it flags a post, it can surface resources to the user or alert friends. This might seem like a promising advancement in suicide prevention, but without asking the right questions and vetting all of the stakeholders, AI could do more harm than good.
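To make the mechanism concrete, here is a minimal sketch of the kind of phrase-matching such a system might start from. Facebook has not published its model, so the phrases, signals, and function names below are illustrative assumptions, not its actual implementation.

```python
# Illustrative sketch of keyword/comment-based risk flagging.
# All phrases and logic here are hypothetical examples, not Facebook's real system.

RISK_PHRASES = {"want to die", "end it all", "kill myself"}            # assumed examples
CONCERN_COMMENTS = {"are you ok", "do you need help", "please call"}   # assumed examples

def flag_post(post_text: str, comments: list[str]) -> bool:
    """Return True if the post or its comments contain phrases associated with risk."""
    text = post_text.lower()
    if any(phrase in text for phrase in RISK_PHRASES):
        return True
    # Concerned replies from friends are treated as a second signal.
    return any(
        any(c in comment.lower() for c in CONCERN_COMMENTS) for comment in comments
    )

if __name__ == "__main__":
    post = "I just want to end it all."
    replies = ["Are you OK? Message me."]
    if flag_post(post, replies):
        print("Flag for review: surface crisis resources or notify reviewers.")
```

Even this toy version makes the article's concern visible: a list of phrases encodes someone's assumptions about how risk is expressed, and those assumptions are exactly what deserve scrutiny.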

On the surface, AI seems warranted. There’s no doubt that there’s a need to prevent suicide. But the situation is more complex than it seems. Presumably, informed people play a role in defining and refining the AI algorithms. Yet a recent study demonstrated that even trained clinicians cannot accurately predict who is at high risk of suicide.

Even though some studies say that we can detect risk clinically, actual prediction is fraught with difficulty. Eighty-three percent of people who commit suicide have been in touch with a primary care physician within a year of their death, and up to 66% have been in touch within a month of their death. Clearly, being in touch with someone, let alone a “moderator,” is not enough.

Facebook’s outreach might actually do more harm than good. For example, some mental health experts note that hearing from family and loved ones that they care about you can help prevent suicide. But this can backfire if friends and family are the cause of the distress. Also, suicide is not always planned: when Facebook (FB) connects friends, loved ones, or first responders with suicidal people, those responders should be aware that people with impulsive suicidal tendencies typically express lower levels of intent to die. Will responders be trained to understand this?

We must also press the following question: Does social media help or harm suicide risk? While some studies indicate that social media data can help forecast and prevent suicide, others find that social media cuts both ways. A major disadvantage is the prevalence of cyberbullying across social media channels, which makes it easy for harassers to attack from virtually anywhere. Just as a clinician acknowledges the side effects of a medication, it would behoove Facebook to recognize how this intervention might negatively affect suicide rates. To succeed, Facebook will need to plan how to measure the intervention’s actual helpfulness against sham interventions.

With this in mind, a controlled trial might help distinguish the AI’s unique efficacy. Longitudinal studies, for example, should determine whether this intervention eventually discourages suicidal people from reaching out via social media for fear of being discovered. I wonder whether Facebook will take this extra step.

Let’s also ask whether we need another contaminated dataset. Suicide prevention is complex: the field is muddied by poorly designed studies, and there are geographic and cultural factors to consider. Different ethnicities have different patterns of seeking help, and gender, sexual identity, and sexual orientation also play a role.

It is important that any AI algorithm Facebook develops advance the field rather than muddy the waters even more, and that it not become a one-size-fits-all algorithm that oversimplifies the complexity of suicidal ideation.

Overall, AI could potentially improve our knowledge of suicidal ideation and help us deliver sensitive, safe interventions that address one of the leading causes of death. But if it is oversimplified, non-transparent, and not held to the “do no harm” principle, it will simply add to the current challenge of managing suicide risk and could potentially worsen the situation.

Srini Pillay, M.D., is the CEO of NeuroBusiness Group and the award-winning author of numerous books, including “Tinker Dabble Doodle Try: Unlock the Power of the Unfocused Mind,” “Life Unlocked: 7 Revolutionary Lessons to Overcome Fear,” and “Your Brain and Business: The Neuroscience of Great Leaders.” He also serves as a part-time assistant professor of psychiatry at Harvard Medical School and teaches in the Executive Education Program at Harvard Business School.