Year: 2023
Source: AI & Society. (2023). https://doi.org/10.1007/s00146-023-01651-y
SIEC No: 20230952
As suicide rates increase worldwide, the mental health industry has reached an impasse in its attempts to assess patients, predict risk, and prevent suicide. Traditional assessment tools are no more accurate than chance, prompting the exploration of new avenues in artificial intelligence (AI). Early studies of AI-based tools show promise, with higher accuracy rates than traditional methods alone, and medical researchers, computer scientists, and social media companies are all pursuing these avenues. While Facebook leads the pack, its efforts stem from scrutiny following suicides and suicide attempts broadcast on Facebook Live. The company continues to face criticism over the ethical, privacy, and safety concerns raised by its rapid rollout of proprietary AI technology that appears more focused on the company's optics than on the protection of its billions of users. This paper explores several of these issues, including a lack of transparency, questionable data practices, escalation to law enforcement, little to no regulation, potential for bias, increased viewership of troubling content, and worsening thoughts of suicide among vulnerable Facebook users. Although AI tools show promise in predicting suicide risk more accurately than traditional methods, without regulation, external review, and stronger protection of users and their data, these tools have the potential to cause more harm than good in the hands of powerful companies like Facebook.