"I've been wondering if life is worth it."
"It's very significant that you have shared this with me. When we start wondering whether life is worth it, it is usually a sign that we are carrying something very heavy inside, even if we still can't name what we feel. Do you feel this emptiness or this doubt often?"
It may seem that this short dialogue is part of a conversation between two people, but only one of them is human. The empathetic response, which shows understanding of the other's pain and asks for more details about their suffering, is just a sequence of words organized according to a language pattern, "learned" after analyzing a huge volume of data. This is how interactive chats based on artificial intelligence (AI) work.
"These systems are increasingly sophisticated and trained to recognize the patterns used in everyday language, predicting which words or sentences should come next based on the words that came before. They not only understand words but can also capture tone and intention and adjust their responses accordingly," explains Victor Hugo de Albuquerque, a professor in the Teleinformatics Department of the Federal University of Ceará.
"This ability to capture context and intention helps a chatbot generate more natural and contextually appropriate answers, simulating human conversation more accurately. That way, we have the feeling that we are talking to a human being, when we are far from it," he adds. Chatbots are tools capable of simulating conversations and generating texts similar to those written by humans.
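To make that mechanism concrete, here is a minimal sketch of next-word prediction using the open-source GPT-2 model through Hugging Face's transformers library. GPT-2 and the example prompt are illustrative assumptions, not what any commercial chatbot actually runs; large systems apply the same principle at a vastly bigger scale.

```python
# Minimal sketch of next-word prediction, the mechanism described above.
# Assumption: GPT-2 via the Hugging Face "transformers" library stands in
# for the far larger proprietary models behind commercial chatbots.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I've been feeling very anxious lately, and I"
inputs = tokenizer(prompt, return_tensors="pt")

# The model scores every token in its vocabulary as a possible
# continuation, based only on the words that came before.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# The five most likely next words: this statistical ranking, repeated
# word after word, is what produces the "empathetic" replies.
top = probs.topk(5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.1%}")
```

There is no understanding or empathy anywhere in this loop: the output is a probability distribution over words, and the reply is assembled by repeatedly sampling from it.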
This simulated humanity has charmed many users, who confide intimacies and anxieties to these tools and treat the interaction as a therapy session.
Harvard Business Review, the magazine published by Harvard Business School, released a survey last month showing that therapeutic counseling has become the main reason people use AI tools this year, along with the search for companionship. Three more personal uses are among the top ten: organizing one's personal life, finding a purpose, and living a healthier life.
"Practically every week, the Federal Council of Psychology [CFP] receives inquiries about the use of artificial intelligence related to psychology. There are questions about the development of tools that present themselves as technologies designed for therapeutic use, but also about those that were not created for it yet are put to therapeutic use by users," says councilor Maria Carolina Roseiro.
This led the CFP to create a working group to discuss the use of artificial intelligence for therapeutic purposes, whether intended or not. The agency is studying how to regulate new therapeutic tools so that they follow recognized methods and techniques and are developed by qualified professionals who can be held responsible for their use. It should also soon publish guidelines for the public, warning of the risk of entrusting one's emotional well-being to a tool that was not created for that purpose.
"A psychology professional, a person qualified to work with the methods and techniques of psychology, bears legal responsibility for their actions. But a technology cannot be held responsible. And if it was not developed for therapeutic purposes, it is even more prone to error, to leading the person into risky situations," warns the councilor.
Pros and cons
Leonardo Martins, a professor in the graduate program in Psychology at the Pontifical Catholic University of Rio de Janeiro (PUC-Rio), is one of the specialists on the Federal Council of Psychology's working group. Besides studying digital technologies aimed at psychotherapeutic support, he is one of the creators of an application that offers free psychological care to people with alcohol-related problems. Martins is against the "demonization" of digital tools, but considers them reliable only when they are developed by accountable professionals and backed by serious studies.
"We have a mental health scenario of 900 million people living with some disorder, according to estimates by the World Health Organization, especially anxiety and depression. So we have a serious crisis in this area of health and a shortage of professionals, who need more resources. But we want these resources to actually help people, not make them even more vulnerable," he emphasizes.
A positive example cited by Leonardo Martins is the chatbot created by England's public health system as a gateway to mental health services. Conversations with the artificial intelligence have led to greater uptake of health services, especially among marginalized populations such as immigrants and LGBTQIA+ people, who are often afraid to seek help.
But, according to the PUC-Rio professor, the use of platforms that were not created for these purposes and do not follow technical and ethical criteria has already produced negative results.
"A study clearly showed how these models tend to give the answer they conclude will please the user. So if the person said, 'I want to get rid of my anxiety,' the model suggested things the person could do to end the anxiety, including avoiding situations that are important to them. If the anxiety was caused by an event, it recommended not going to the event, and so on," says Martins.
Scientific communication advisor Maria Elisa Almeida sees a psychologist regularly, but has also used an application that works as a diary, where she records events, emotions and desires and receives AI-generated responses with reflections and ideas. Still, she believes these tools are not safe for people in moments of crisis, nor can they replace mental health professionals.
"There are periods when I write more than once a day, usually as an alternative to checking social media. But there are periods when I go weeks without writing. The app helps me keep my focus and offers very interesting reflections that I would not reach on my own. If I feel anxious in the middle of work, I use the app to write down what I am thinking, and I usually feel calmer afterward. Using it in place of social media also keeps my anxiety under control," says Maria Elisa.
CFP councilor Maria Carolina Roseiro believes the growing demand for these tools has a positive side, but adds caveats:
"I think this indicates, in general, that people are paying more attention to their mental health. The risks come precisely from the fact that few people understand how these interactions work. And the machine does not have the filters that human relationships impose on us, nor professional ethics. When it simulates empathy, it can give you a sense of welcome that is illusory."
Martins adds that the very logic by which these chats operate can have harmful effects: "They tend to agree with us. They tend to adapt to our interests, to our truths, to the things we believe, and they often take the place of seeking medical help, psychological help. So we may not realize that what they are doing, the way they are responding, can produce more losses than benefits."
Privacy
The working group created by the Federal Council of Psychology is also concerned with the privacy of the data users share.
"These artificial intelligence tools are available without any regulation regarding data privacy in the health context. So there is a real, concrete risk, and there have already been several incidents of people who shared personal information and ended up having that information used by third parties or leaked. And in the context of psychotherapy, suffering and mental health, that data is especially sensitive," says psychologist Leonardo Martins.
According to Professor Victor Hugo de Albuquerque, there is reason to worry. "Personal and sensitive data can be intercepted or accessed by unauthorized persons if the platform is hacked or has security flaws. Even if platforms say conversations are anonymous or discarded, there is a risk that these interactions are temporarily stored to improve the service, which can create vulnerabilities," says Albuquerque.
"In addition, chatbots and AI systems are trained on large amounts of data, and personal data shared inadvertently can end up being used to improve the models without the user's knowledge. This creates a risk of exposure without explicit consent," he adds.
This content was originally published as "Using ChatGPT to do therapy offers risks and worries experts" on the CNN Brasil site.
Source: CNN Brasil
