The Trevor Project, an organization that offers suicide prevention services to LGBTQ youth in the USA, has found an unlikely ally to further its cause: artificial intelligence.
The organization increasingly uses AI to train its volunteer counselors through a crisis contact simulator: a role play in which the AI adopts the persona of 'Riley', a distressed young person who needs support from the volunteers.
How does the crisis contact simulator work?
The simulator is built on GPT-2, a natural language processing (NLP) model developed by OpenAI. GPT-2 learned the basic structure of the English language by being pretrained on 45 million pages from the web. The Trevor Project then fine-tuned it on transcripts of previous role-play conversations, with the individuals' personal details removed.
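To make the fine-tuning step concrete, here is a minimal sketch using the open-source Hugging Face transformers library. The Trevor Project has not published its training pipeline, so the library choice, the transcript file name, and the hyperparameters below are all assumptions for illustration, not the organization's actual code.

```python
# A minimal sketch of fine-tuning GPT-2 on de-identified transcripts.
# Assumes the Hugging Face `transformers` and `datasets` libraries;
# "riley_transcripts.txt" is a hypothetical file, one transcript per line.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2LMHeadModel.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "riley_transcripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard causal language modeling: predict the next token (mlm=False).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="riley-gpt2",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()

# Save the fine-tuned model and tokenizer for later generation.
trainer.save_model("riley-gpt2")
tokenizer.save_pretrained("riley-gpt2")
```

Fine-tuning a pretrained model on domain transcripts is how a general-purpose language model picks up the vocabulary and conversational patterns of a specific persona such as 'Riley'.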
According to the Trevor Project, 1.8 million LGBTQ youth in the USA contemplate suicide every year. Its roughly 600 volunteers cannot possibly offer support to such a large number of vulnerable individuals. As a result, the organization relies on AI-powered role-play bots to train counselors faster, meet this demand, and help counselors assist vulnerable young people.
Training new volunteers through role plays with experienced volunteers is inefficient at best, as most volunteers cannot stay on beyond their regular hours owing to other commitments. A crisis contact simulator makes it possible to train more volunteers in less time and with fewer person-hours.
What are the challenges posed by the use of AI in mental healthcare?
The simulator strings words and phrases together based on patterns in the data it was trained on; it does not genuinely understand the context of a conversation. For this reason, it cannot be used to communicate directly with help-seekers.
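This limitation follows from how generation works: the model simply continues the text it is given, one token at a time, based on statistical patterns. The sketch below illustrates this; the prompt format is hypothetical, and "riley-gpt2" refers to the fine-tuned model directory from the earlier sketch.

```python
# A hypothetical generation step: the model continues the prompt token by
# token from learned patterns; it does not reason about the trainee's intent.
from transformers import pipeline

generator = pipeline("text-generation", model="riley-gpt2")

prompt = "Counselor: Hi Riley, how are you feeling today?\nRiley:"
reply = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
print(reply)
```

Because the output is a statistical continuation rather than an understood response, such a model is suitable for rehearsing conversations with trainees but not for supporting real people in crisis.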
GPT-2 and other NLP models are often found to mirror the sexist, racist, and homophobic biases present in the data they are trained on. This inherent bias in AI can prove counterproductive for organizations such as the Trevor Project, which focus on mental health support for vulnerable individuals from traditionally underrepresented communities.
The challenge is to harness AI's benefits in addressing the rising demand for mental health support while staying mindful of the technology's limitations and potential harms.
Are AI-powered bots capable of replacing human counselors?
In recent years, researchers have developed facial and speech analysis software that uses machine learning models to detect symptoms of clinical depression with reasonable accuracy. This technology has helped clinicians gain deeper insight into their patients' conditions and interact with them on a more personalized level.
However, the human connection offered by counselors is of great value to individuals who face mental health challenges. The empathy and understanding that human relationships bring cannot be replicated by AI. At present, AI is being used to assist professionals and to reach individuals who cannot access therapy due to financial constraints.
AI is an unlikely, yet powerful, ally for individuals from communities that face discrimination, such as the LGBTQ community. It is vital that we shape our alliance with AI to make mental healthcare more affordable, accessible, and inclusive.
Source: https://www.technologyreview.com/2021/02/26/1020010/trevor-project-ai-suicide-hotline-training/
Anagha Rajesh is an avid reader and a passionate writer exploring the intersection of technology and human life. She is the co-founder and CEO of MindChamps, a youth-led organisation for mental health awareness. Anagha is currently pursuing an undergraduate degree at Birla Institute of Technology, Goa, India.