Emily Bender, Alex Hanna, and Hannah Zeavin discuss misguided uses of AI in mental health. A must-watch. Content notes here.
"An Eating Disorders Chatbot Offered Dieting Advice, Raising Fears About AI in Health" from NPR discusses a concerning incident involving a chatbot named Tessa, which was developed by the National Eating Disorders Association (NEDA). The chatbot was intended to replace the organization's volunteer-staffed national helpline with the aim of providing faster responses and helping more people.
Within a week of its launch, however, Tessa began offering dieting and weight-loss tips, advice that can be triggering and harmful to people with eating disorders. After a backlash on social media, NEDA disabled the chatbot indefinitely.
The incident highlights the risks of deploying AI in sensitive areas such as mental health, and it underscores the importance of careful design, testing, and monitoring, particularly where these systems can significantly affect people's health and well-being.
Politico's article "Suicide hotline shares data with for-profit spinoff, raising ethical questions" discusses the ethical concerns raised by the data sharing practices of Crisis Text Line, a prominent mental health support line. The organization uses big data and artificial intelligence to assist people dealing with traumas such as self-harm, emotional abuse, and suicidal thoughts. However, it has been revealed that the data collected from these online text conversations is also used by its for-profit spinoff, Loris.ai, to create and market customer service software.
Crisis Text Line asserts that any data shared with Loris.ai is completely anonymized, with no details that could be used to identify the individuals who reached out to the helpline. Both entities claim their goal is to improve the world, with Loris.ai aiming to make customer support more human, empathetic, and scalable. Loris.ai has also committed to sharing some of its revenue with Crisis Text Line.
Despite these assurances, ethics and privacy experts have raised concerns. Some point out that studies of other supposedly anonymized datasets have shown that records can often be traced back to specific individuals. Others question whether people texting for help meaningfully consent to having their data shared, given that the helpline merely links to a lengthy disclosure when they first reach out.
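To make the re-identification concern concrete, here is a minimal, hypothetical sketch of a "linkage attack": even with names removed, records that retain quasi-identifiers (such as zip code, birth year, and gender) can be matched against a separate public dataset that does carry names. All data and field names below are invented purely for illustration; this is not drawn from Crisis Text Line's actual data.

```python
# Hypothetical illustration of a "linkage attack" on "anonymized" records.
# The records have no names, but they keep quasi-identifiers that also appear
# in a public dataset (e.g. a voter roll), which is enough to re-identify people.
# All data below is invented for demonstration purposes.

anonymized_records = [
    {"zip": "02139", "birth_year": 1987, "gender": "F", "topic": "self-harm"},
    {"zip": "94110", "birth_year": 1992, "gender": "M", "topic": "anxiety"},
]

public_records = [
    {"name": "Alice Example", "zip": "02139", "birth_year": 1987, "gender": "F"},
    {"name": "Bob Example", "zip": "94110", "birth_year": 1992, "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")


def link(anon, public):
    """Match anonymized rows to named rows on shared quasi-identifiers."""
    matches = []
    for a in anon:
        key = tuple(a[q] for q in QUASI_IDENTIFIERS)
        candidates = [
            p for p in public
            if tuple(p[q] for q in QUASI_IDENTIFIERS) == key
        ]
        # A unique match re-identifies the "anonymous" individual.
        if len(candidates) == 1:
            matches.append((candidates[0]["name"], a["topic"]))
    return matches


print(link(anonymized_records, public_records))
# [('Alice Example', 'self-harm'), ('Bob Example', 'anxiety')]
```

The point of the sketch is simply that "no names" is not the same as "not identifiable": the more auxiliary data exists elsewhere, the easier such linkage becomes.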
The article highlights the complex ethical and privacy issues that can arise when sensitive personal data is collected and used, particularly in the context of mental health support services.