Facebook’s suicide prevention algorithm raises privacy questions

Facebook has developed an algorithm to help identify when users may be at risk of self-harm. Is this a useful suicide prevention tool or a potential invasion of privacy? We asked Dr. Tobias Matzner from Paderborn University.

Photo by Con Karampelas on Unsplash


Following a tragic online incident in 2017 in which a teenager streamed her suicide on Facebook Live, Facebook began testing an algorithm designed to flag users in danger of suicide. In late 2018, Facebook reported that it had notified local authorities in about 3,500 cases worldwide to intervene.

We spoke to Dr. Tobias Matzner, professor of Media, Algorithms and Society in the Department of Media Studies at Paderborn University, to find out why some researchers in the U.S. and Europe have criticized this program, and to hear his take on the implications of identifying mental health and medical issues through an algorithm.

If you or someone you know may be considering suicide, Telefonseelsorge in Germany is available at 0800 1110 111 or 0800 1110 222. In the United States, the National Suicide Prevention Lifeline can be reached at 1-800-273-8255. More resources worldwide can be found at Befrienders.

By the way – if you love our content, please consider donating to KCRW Berlin. We are a listener-funded public radio station, driven by supporters like you. Your donation supports our programming and events, feeding a flourishing English-language community with local news, information and ideas.