Press "Enter" to skip to content

Suicide attempts are hard to anticipate. A study that tracks teens’ cellphone use aims to change that

By Kelly Servick

Every Wednesday afternoon, an alert flashes on the cellphones of about 50 teenagers in New York and Pennsylvania. Its questions are blunt: “In the past week, how often have you thought of killing yourself?” “Did you make a plan to kill yourself?” “Did you make an attempt to kill yourself?”

The 13- to 18-year-olds tap their responses, which are fed to a secure server. They have agreed, with their parents’ support, to something that would make many adolescents cringe: an around-the-clock recording of their digital lives. For 6 months, an app will gobble up nearly every data point their phones can offer, capturing detail and nuance that a doctor’s questionnaire cannot: their text messages and social media posts, their tone of voice in phone calls and facial expression in selfies, the music they stream, how much they move around, how much time they spend at home.

Most of these young people have recently attempted suicide or are having suicidal thoughts. All have been diagnosed with a mental illness such as depression. The study they’re part of, Mobile Assessment for the Prediction of Suicide (MAPS), is one of several fledgling efforts to test whether streams of information from mobile devices can help answer a question that has long confounded scientists and clinicians: How do you predict when someone is at imminent risk of attempting suicide?

The goal is to combine machine learning with decades of evidence about what may trigger suicidal behavior to create an algorithm that detects spikes in risk. For adolescents, whose social and emotional lives are tightly bound to their phones, the approach could be especially powerful, says MAPS co-investigator Nicholas Allen, a clinical psychologist at the University of Oregon in Eugene. “If you looked at my phone, what you’d find out is that I run late a lot. I’m always just saying, ‘running late.’” Allen’s 18-year-old daughter, in contrast, “uses her phone to conduct all the most important and intimate and personally involving aspects of her life.” By monitoring that digital appendage, researchers hope to identify clues that foreshadow a crisis.

Randy Auerbach, a clinical psychologist at Columbia University and a MAPS co-investigator, is used to hearing that the study sounds like an invasion of privacy. But Auerbach, who has interviewed thousands of teenagers to gauge their suicide risk and laid plans to try to keep them safe, has a response. “Kids are killing themselves in record numbers, and what we’ve traditionally tried to do isn’t working,” he says. “We really need to rethink this.”

Suicide rates have nudged upward in the United States in the past decade, but the rise among young people has been especially sharp. For 10- to 24-year-olds, the rate climbed to 10.57 per 100,000 in 2017, up from 6.75 per 100,000 a decade earlier. In 2017, more than 6700 young people took their lives, making suicide the second leading cause of death for teens and young adults, after unintentional injuries. And in a 2017 survey of 15,000 high school students by the Centers for Disease Control and Prevention, 7.4% said they had attempted suicide in the past 12 months.

Scientists are now playing catch-up. “For a long time, researchers have been unwilling to do research on suicide, particularly in youth,” Allen says, because they worry about their responsibility for participants’ safety. But those fears, he says, are keeping his field from helping young people in need. “Researchers need to be willing to take some risk, and to manage that risk, so that we can actually understand these important problems,” he says. “In the best possible scenario, we could save a life.”

Many suicide risk factors are well known. Among the strongest is a previous attempt. Mental illnesses—especially depression and substance abuse—can increase risk, as can chronic illness and access to lethal means. Much of that information appears in medical records, and some health care providers already use it to flag patients at potentially elevated suicide risk.

The problem is that those risk factors capture huge numbers of people, few of whom are in imminent danger. And they don’t change much day to day, whereas suicidal impulses do. A 2017 meta-analysis took a dim view of the current crystal ball: 365 suicide risk studies published over the past 50 years have yielded predictions only slightly better than chance.

“If I’m seeing a kid at school in a guidance counselor’s office, or in the hospital, one of the key questions is ‘Is this individual at risk over the next hours and days?’” says Catherine Glenn, a clinical psychologist who studies youth suicide at the University of Rochester in New York.

Clinicians do ask about suicidal thoughts and plans. And they can apply standardized checklists and questionnaires to gauge risk and recommend medications or other interventions. But patients may not share their suicidal thoughts, and those thoughts can escalate between visits to a therapist. “It’s a lot of guesswork, to be honest,” says Matthew Nock, a clinical psychologist at Harvard University and senior author on the 2017 analysis. What’s more, many people at risk aren’t regularly meeting with a mental health professional at all.

That’s where the ever-vigilant smartphone comes in. Although Nock stresses that one-on-one interactions with a professional will always be important, he’s among those now testing whether smartphones and wearable sensors can help, too. The National Institute of Mental Health recently solicited grant applications for studies on short-term risk for suicide and encouraged research that harnesses smartphones and wearable devices. The agency has set aside $2.6 million in this fiscal year to fund between four and 10 such projects. “There’s a lot more we can do,” Nock says, “to wrap observation—and potentially intervention—around people in the time and place when they’re in distress.”

To build that protective wrapper, researchers need to find beacons of distress among heaps of irrelevant phone data. The MAPS study, which launched in September 2018, is one of the first to try. It’s recruiting high-risk participants; the hope is to include 200 teens, 70 of whom have attempted suicide in the past 6 months. Another 70 will be struggling with suicidal thoughts but have made no attempts, a group that could help researchers understand what differentiates teens who put thoughts into action from teens who don’t.

What, in the trove on a tech-savvy teenager’s phone, are the researchers looking for? “It’s not a fact-finding mission, like we’re kind of blindly throwing a spear,” Auerbach says. Instead, MAPS relies on established theories and data on suicidal behavior. For example, psychologist Edwin Shneidman proposed in the 1990s that a key factor uniting suicides is psychological pain, or psychache. The MAPS team aims to detect psychache with algorithms that gauge emotional distress in a person’s tone of voice, music choice, language use, and photos.
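
To make the idea concrete, here is a minimal sketch of one way language-based distress might be scored from text messages. It is not the MAPS team’s model, whose details are not public; the phrase list, weights, and function names are invented for illustration.

```python
# Illustrative sketch only: a toy lexicon-based distress score for text messages.
# The phrase list and weights below are invented; the MAPS models are not public.

DISTRESS_PHRASES = {
    "hopeless": 3, "worthless": 3, "can't take it": 3,
    "so alone": 2, "tired of everything": 2, "crying": 1,
}

def distress_score(message):
    """Sum the weights of distress-related phrases found in one message."""
    text = message.lower()
    return sum(w for phrase, w in DISTRESS_PHRASES.items() if phrase in text)

def daily_average(messages_by_day):
    """Average distress score per day, the kind of signal a clinician-facing tool might chart."""
    return {
        day: sum(distress_score(m) for m in msgs) / max(len(msgs), 1)
        for day, msgs in messages_by_day.items()
    }
```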

Other research has focused on the importance of sleep disturbances. A 2008 study, for example, interviewed the parents, siblings, and friends of adolescent suicide victims and found an association between suicide and insomnia in the previous week. Although the activity-tracking instruments in smartphones aren’t perfect, they can give a sense of when people wake up and when they go to bed.
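
As a rough illustration of how phone activity can stand in for a sleep log, the sketch below treats the longest overnight gap between interactions as the sleep window. Real pipelines combine accelerometer, screen, and app data; the timestamps here are made up.

```python
# Illustrative sketch only: infer a sleep window as the longest gap between
# phone-interaction timestamps. Real actigraphy pipelines are far more involved.
from datetime import datetime

def estimate_sleep_window(events):
    """events: datetimes of phone interactions; returns (last use, next use, gap)."""
    events = sorted(events)
    start, end = max(zip(events, events[1:]), key=lambda pair: pair[1] - pair[0])
    return start, end, end - start

log = [
    datetime(2019, 9, 10, 22, 15), datetime(2019, 9, 10, 23, 40),  # evening use
    datetime(2019, 9, 11, 7, 5), datetime(2019, 9, 11, 7, 30),     # morning use
]
asleep, awake, gap = estimate_sleep_window(log)
print(f"Inferred sleep: {asleep:%H:%M} to {awake:%H:%M} ({gap})")
```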

Finally, MAPS will look for signs of faltering social relationships. Earlier this year, Auerbach’s group reported that “interpersonal loss”—a recent dissolution of social ties—distinguished adolescent suicide attempters from young people hospitalized for psychiatric illness with no history of suicidal thoughts or behavior. Suicide attempters also reported more interpersonal loss than people with suicidal thoughts but no attempts. Being rejected or bullied by peers may heighten risk in someone who’s already vulnerable. The MAPS team will track how often a person reaches out to online contacts, how diverse those contacts are, and how often the communication is reciprocated.
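
The social measures described above could, in the simplest case, be computed from a message log like the hypothetical one below. The field names and metric definitions are illustrative, not the study’s actual feature set.

```python
# Illustrative sketch only: outreach, diversity, and reciprocation metrics from a
# message log. Field names and metric definitions are invented for this example.
from collections import Counter

def social_features(messages):
    """messages: [{"contact": str, "direction": "sent" | "received"}, ...]"""
    sent = Counter(m["contact"] for m in messages if m["direction"] == "sent")
    received = Counter(m["contact"] for m in messages if m["direction"] == "received")
    contacted = set(sent)
    reciprocated = {c for c in contacted if received[c] > 0}
    return {
        "outreach_count": sum(sent.values()),         # how often the teen reaches out
        "contact_diversity": len(contacted),          # how many different people
        "reciprocation_rate": len(reciprocated) / len(contacted) if contacted else 0.0,
    }
```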

However teenagers end up in crisis, the MAPS researchers are counting on the fact that some of them will. That expectation presents an ethical dilemma. To craft predictors, the researchers must associate changes in phone activity with suicidal behavior, which is one reason they’ve chosen to work with a high-risk group. Over the study’s 4 years, participants are expected to experience 22 “serious suicidal incidents,” which include both attempts and suicidal intentions with a concrete plan.

The researchers want to observe the lead-up to an attempt, but they can’t sit back and allow young people to harm themselves while monitoring their phones. “If we have reason to believe there is risk, we have to step in,” Allen says. “In a certain sense, that makes the study less naturalistic, but the alternative is of course not ethically acceptable.”

Allen and colleagues don’t plan to intervene in teens’ lives on the basis of passively collected phone data, which aren’t being analyzed in real time. But the researchers do scrutinize those Wednesday surveys. Aside from being important data, the participants’ reports can identify who needs help now. If teens score above a certain threshold, a MAPS clinician reaches out. If a teen doesn’t respond or if they reveal that they’re in crisis, the clinician may contact their parents and doctors. The researchers have had to make a few calls to check on participants, but so far none has attempted suicide or been hospitalized during the study.
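
That survey-based safety step amounts to a simple decision rule. The sketch below shows the general shape of such a rule; the cutoff and the action labels are placeholders, not the study’s actual protocol.

```python
# Illustrative sketch only: a threshold rule of the kind the weekly survey feeds.
# The cutoff and the actions are placeholders, not the MAPS safety protocol.
def triage(survey_score, responded_to_clinician, in_crisis, alert_threshold=10):
    if survey_score < alert_threshold:
        return "no action"
    if in_crisis or not responded_to_clinician:
        return "contact parents and treating clinicians"
    return "clinician follow-up call"
```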

The balance between data gathering and safety is a tension in other ongoing projects. There’s no uniform approach. A smartphone-based suicide prediction study that Nock co-leads checks in with participants—adults recently discharged from psychiatric inpatient treatment—six times a day with a 30-question survey, paying them $1 for each one they complete. This month, Nock’s team began to recruit adolescents to the study and has added parent check-ins to the safety measures for those participants.

On top of weekly questionnaires, the MAPS team does monitor and survey the teens more intensely during certain phases of the study. But Allen is wary that check-ins as frequent as those in Nock’s study can burden participants, and he doubts they’re necessary. The MAPS strategy marks “our best guess at what is the appropriate way to manage risk,” he says.

Even if MAPS researchers can design a powerful predictive algorithm, the technology will need refining before it can be deployed broadly, especially beyond high-risk adolescents. Other demographic groups, such as older adults, may display different patterns in their online life that predict risk. Tailoring the app to an individual might also be important so that its algorithm can detect deviations from that person’s normal behavior.
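
Personalizing the algorithm could, in principle, be as simple as comparing each day’s behavior with that person’s own recent baseline. The sketch below uses a z-score for a single feature, with an arbitrary cutoff chosen purely for illustration.

```python
# Illustrative sketch only: flag days when one behavioral feature (say, hours at home)
# deviates sharply from this participant's own recent baseline.
from statistics import mean, stdev

def is_anomalous(history, today, z_cutoff=2.0):
    """history: the same feature over prior days for one participant."""
    if len(history) < 7:
        return False  # not enough personal baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_cutoff
```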

Ultimately, the app might not just watch for signs of trouble, but also intervene. A smartphone that flags high risk could remind people of the skills they’ve been developing with a therapist, prompt them with the therapist’s number, or contact the therapist (or a trusted family member or friend) directly.

Allen doesn’t currently see patients, but years of counseling depressed and suicidal people left him with an encouraging insight: Most suicidal impulses are ephemeral, he says. When he has visited patients in the hospital as they recover from an attempt, “they’ve said to me, ‘I’m so glad I didn’t die.’” To Allen, that’s evidence that digital monitoring, though still experimental, could shepherd people through dark moments.

In the long run, consent will be key. Recent suicide prediction efforts targeting broad segments of the population have drawn criticism. Few of Facebook’s 2 billion users are likely aware that the company applies an automated screening system to flag posts indicating suicide risk for review by a Facebook employee. The company has dispatched first responders thousands of times to check on someone. While the company defends its suicide prevention efforts as an ethical imperative and has promised greater transparency, some experts question the ethics of intervening in users’ lives on the basis of an algorithm that has not been peer reviewed or even publicly disclosed.

Even a suicide prevention group came under fire for online tracking. In 2014, the U.K.-based charity Samaritans released a Twitter plug-in that could identify worrisome language in a Twitter feed. Twitter users could sign up for email alerts about a particular feed, without the consent of that feed’s owner. The service, however well-meaning, stirred privacy concerns and fears that it would lead to online bullying. Nine days after launch, Samaritans suspended the service.

The MAPS researchers, in contrast, describe to families in detail how the data will be used, managed, and encrypted. And the app isn’t intended for mass surveillance, but, at least at first, as a tool for clinicians. Still, some are skeptical that crunching phone data is the best way to monitor suicide risk. Urs Hepp, a psychiatrist at the Integrated Psychiatric Services of Winterthur–Zurich Unterland in Switzerland, wonders about false-positive results, which would suggest participants are in trouble when they aren’t. If a prediction app gets adopted widely, clinicians risk receiving frequent alerts that they don’t know how to manage, he says.

Another concern is that the tangled mathematical innards of an algorithm won’t explain why it makes a given prediction. Although suicide theories are MAPS’s starting point, the researchers will also rely on machine learning to suggest new predictive features that may have no known relationship to accepted risk factors.

Algorithms might eventually support a clinician’s decision-making, says Regina Miranda, a clinical psychologist who studies youth suicide at Hunter College, part of the City University of New York system. But overreliance on algorithms could detract from efforts to understand the drivers of suicide risk through intensive patient interviews. “You can’t be measuring suicidality if you’re not asking about it,” she says.

Allen acknowledges the risk of generating “models which work beautifully, but nobody really knows why.” For a scientist, “that’s extremely frustrating,” he says. But “as a clinician, if you ask me, would I take an accurate machine learning model that was uninterpretable? Well, hell yes, I would.”

For help, call 1-800-273-8255 for the National Suicide Prevention Lifeline, or visit https://www.speakingofsuicide.com/resources.


Source: Science Mag