Predicting suicide risk

Remember the movie Minority Report? Tom Cruise’s character was part of a team that could detect crime before it happened – “pre-crime” – and arrest would-be offenders for violent crimes based on the visions of psychics.


With social listening and artificial intelligence, we could be on the verge of detecting suicide risk before a crisis occurs – specifically, on Twitter – according to new research from The Royal.

Dr. Zachary Kaminsky, DIFD-Mach chair in suicide prevention research at The Royal’s Institute of Mental Health Research, is developing an algorithm that uses artificial intelligence to predict suicide risk based on tweets. The tool looks for patterns in the language of tweets to detect and predict risk early on.


Astoundingly, the algorithm identified suicide risk with 89 per cent accuracy during its pilot. There is more work to be done, and Kaminsky wants to broaden testing before the tool is ready for use, but the results so far suggest it will work.

“Where I am trying to go is seeing the future. Can we find a pattern using machine learning or artificial intelligence that will say this person appears to be at risk even though there is no other indication?” asks Kaminsky, of the University of Ottawa.

How should a tool like this be leveraged?

On Monday, World Suicide Prevention Day, Kaminsky presented findings of his ongoing work, which could someday become a suicide-prediction app. The idea, he said, is to work with the community of people who work in suicide prevention, or who have lived with depressive or suicidal thoughts, to develop a tool that is genuinely beneficial.

Dr. Kaminsky asked: “We think we are on to something. What do you want us to do with it?”

Dr. Kaminsky’s work reveals how social media and AI, combined, can bring to light things we haven’t openly articulated or admitted even to ourselves.

The algorithm can look for signals such as people describing themselves as a burden, or as lonely or depressed.

Tweets such as “I am so alone” and “So alone and unimportant. I literally mean nothing to anyone — swear I can die tmrw and no one in this world would know till a few days …” were found to carry predictive traits and scored high for suicide risk. So did “I’m really a huge burden on everyone around me — emotionally” and “Here is the tragedy: when you are the victim of depression.”

AI can use that information to learn to predict who is at future risk.
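
To make the general idea concrete, here is a minimal sketch of how a text classifier of this kind is commonly built: tweets are converted into word-frequency features, and a model learns which words and phrases predict a risk label. This is an illustration only, written in Python with scikit-learn and invented toy data; Kaminsky’s actual model, features, and training data are not public, and nothing below represents his implementation.

```python
# Minimal illustrative sketch of a tweet-risk classifier.
# NOT The Royal's model; the tweets and labels below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = language resembling the risk markers described above
# (burden, loneliness), 0 = neutral everyday language.
tweets = [
    "I am so alone",
    "I'm really a huge burden on everyone around me",
    "no one would notice if I were gone",
    "great day at the park with friends",
    "excited for the game tonight",
    "just finished a good book",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF turns each tweet into weighted word-frequency features;
# logistic regression learns which features predict the risk label.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams, e.g. "so alone"
    LogisticRegression(),
)
model.fit(tweets, labels)

# Score a new tweet: probability that it resembles the at-risk language.
print(model.predict_proba(["feeling like a burden lately"])[0][1])
```

In practice, a real system would be trained and validated on far more data and would likely use a more sophisticated model; the sketch only shows the shape of the pipeline – text in, learned word weights, a risk score out.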

Caution still recommended

Kaminsky does recommend caution in using this tool, as it’s not as black and white as parents reading their children’s tweets and suddenly intervening.

“It is not the case that, if someone has lost someone to suicide, if they had just gone in and read their Twitter, they would have seen this risk,” he said. “This is not how this works.”

Machine learning algorithms are “black boxes,” he said. “We don’t know what is happening inside the black box. We don’t know what relationship is predicting the risk.”

Mystery ahead

The tool is most applicable to young people and millennials – those aged 15 to 35 who are both at elevated risk of mental illness and suicide and heavy users of social media. In a staggering figure from Statistics Canada, suicide is the second leading cause of death among Canadians aged 15 to 34.

Kaminsky believes it’s important to work with the community in this tool’s development, as he has concerns about how the technology could be used.

“We want to get the community’s opinion. We want to consider how this could be used for good, but also how it could be abused.”

What Kaminsky doesn’t want is for the tool to arm insurance companies to deny coverage, to deter employers from hiring, or to give ammunition to bullies.

Promising outlook

Some potential benefits include early interventions and suicide prevention, as well as proactive information that parents could use to have early conversations with their children to reduce their risk.

“What if I can create an app that a parent downloads and they get an indication of ‘Hey, there are some flags here, maybe you need to have a conversation?’”
