In 2017, Facebook announced a suicide prevention program after its "Live" feature was used by a number of individuals to stream their suicides. According to statistics from the World Health Organization, suicide is a worldwide epidemic and the "second-leading cause of death among people ages 15 to 29" [1]. With the help of artificial intelligence and concerned Facebook friends, the platform believes it can prevent suicides among its users. In fact, Facebook shared that within the program's first year there were approximately 3,500 cases in which local police were contacted over concern about a user's mental health.
However, Facebook has been criticized for its lack of transparency concerning the artificial intelligence used to detect people at risk. Many argue that Facebook should share information about the algorithms it uses to detect and possibly predict suicidal behavior with mental health professionals, noting that "with more than 2 billion users, Facebook arguably has the largest database of [suicidal behavior] content in the world" [2].
Mason Marks, a medical doctor and research fellow at Yale Law and NYU Law, has written about Facebook's role in suicide prevention and has expressed concern over its use of artificial intelligence to flag presumed suicidal behavior, mainly because no one outside Facebook knows how accurate the artificial intelligence has been in preventing harmful behavior.
Moreover, Marks highlights the negative outcomes that may result from the artificial intelligence flagging concerning comments, such as deterring Facebook users from openly discussing the topic of suicide, "fearing a visit from the police," which effectively silences individuals who may be trying to reach out to friends through Facebook [3].
Marks also believes that Facebook's suicide risk scoring software, along with the calls to police that may lead to mandatory psychiatric evaluations, "constitutes the practice of medicine," and he argues that the program should be overseen by government agencies to ensure Facebook abides by ethical and safety standards [1].
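Facebook has not published how its risk scoring works, which is precisely the transparency gap Marks describes. Purely as an illustration of what a generic text-based risk scorer looks like, the following Python sketch trains a simple TF-IDF and logistic-regression classifier on invented toy examples and flags posts above an arbitrary threshold; the data, model choice, and cutoff are assumptions for illustration, not Facebook's actual system.

    # Hypothetical sketch of a post "risk scoring" pipeline; NOT Facebook's system.
    # Assumes scikit-learn; the toy training data and 0.8 threshold are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled examples (1 = concerning, 0 = not concerning), purely illustrative.
    posts = [
        "I can't go on anymore, I want to end it all",
        "I feel so alone and hopeless lately",
        "Great hike this weekend with friends",
        "Excited about the new job next week",
    ]
    labels = [1, 1, 0, 0]

    # TF-IDF features plus logistic regression: a generic text classifier,
    # not a claim about the model Facebook actually uses.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, labels)

    def risk_score(post: str) -> float:
        """Return the classifier's estimated probability that a post is concerning."""
        return model.predict_proba([post])[0][1]

    FLAG_THRESHOLD = 0.8  # Arbitrary cutoff for escalating to human review.

    new_post = "Nobody would even notice if I was gone"
    score = risk_score(new_post)
    if score >= FLAG_THRESHOLD:
        print(f"Flag for human review (score={score:.2f})")
    else:
        print(f"No flag (score={score:.2f})")

Even this toy version shows why critics want outside scrutiny: the training data, the threshold, and the error rates all shape who gets a police visit, and none of them are visible to users or clinicians.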
Munmun De Choudhury, an assistant professor in the School of Interactive Computing at Georgia Tech who praises the platform for engaging in suicide prevention, has also weighed in on the matter of transparency. While De Choudhury believes that Facebook should strive for transparency, she notes that it is difficult to decide how much information should be shared, for several reasons. For example, publicizing the common terms used by at-risk individuals could lead Facebook friends to overlook concerning posts that are phrased differently, or could attract negative attention from ill-intentioned people [2].
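To make the first concern concrete, consider a naive keyword screen built from a published term list (the terms below are invented for illustration and are not any list Facebook uses). It catches an exact phrase but misses a post that expresses the same risk in different words, which is exactly the failure mode De Choudhury warns friends could fall into if they rely on a fixed vocabulary.

    # Hypothetical keyword screen; the term list is invented for illustration.
    KNOWN_TERMS = {"end it all", "kill myself", "want to die"}

    def keyword_flag(post: str) -> bool:
        text = post.lower()
        return any(term in text for term in KNOWN_TERMS)

    print(keyword_flag("I just want to end it all"))        # True: exact phrase match
    print(keyword_flag("I won't be a burden much longer"))  # False: same risk, different wording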
In response to transparency concerns, Facebook's Global Head of Safety, Antigone Davis, has pointed out that releasing any information about the artificial intelligence could do more harm than good, stating, "that information could allow people to play games with the system" [3].
The use of artificial intelligence to monitor suicidal behavior on Facebook raises the question of how it might be applied to other concerns, such as detecting "inappropriate interactions between minors and adults" [3]. Artificial intelligence may soon be used to monitor a range of scenarios where danger is a concern by analyzing social media users' activity. While this may seem like a positive step toward ensuring safety within communities, others may argue that such programs violate privacy rights, which brings to mind the many cases in which Facebook has failed to protect users' data and privacy.
References:
[1] Natasha Singer, "In Screening for Suicide Risk, Facebook Takes On Tricky Public Health Role," The New York Times, December 31, 2018.
[2] Rebecca Ruiz, "Facebook created an AI tool that can prevent suicide, but won't talk about how it works," Mashable, November 28, 2017.
[3] Martin Kaste, "Facebook Increasingly Reliant on A.I. To Predict Suicide Risk," National Public Radio, November 17, 2018.