Artificial intelligence (AI) is taking an increasingly large role in our daily lives. AI can be used to build exercise schedules, give food recommendations, and even serve as a place to seek a ‘second opinion’ on almost any decision. Many people are curious to explore and push the boundaries of what AI can do.
Consulting AI can sometimes feel like a casual conversation with a grammatically fluent person; users can train AI to deliver messages as if they were typed by a friend. Because the AI’s choice of language mimics everyday communication, it creates the illusion that we are having a friendly conversation with someone close to us.
With AI’s ability to mimic human language styles comes a platform dedicated to mimicking the language style, and even the verbal quirks, of fictional characters: c.ai, or Character AI. c.ai offers the service of talking to any fictional character, and users can configure how their interaction with that character unfolds. The service is typically used for role-playing or simulating conversations with friends; users can live out their desire to role-play and get ‘up close’ with their favorite fictional characters. What makes c.ai distinctive is the speech character of the chosen fictional character: when we talk to a selected character, the AI behind it answers with a consistent personality and language style.
Many people use c.ai, or AI in general, to talk about their mental state. Hutari (2024) argues that ‘venting’ to AI can help flush out negative emotions. Talking about negative emotions can support an individual’s emotional management process, even though it sounds unusual to share our feelings with a machine that cannot feel emotions and is not a living being. It is undeniable that the process of ‘confiding’ in AI has many flaws and vulnerabilities, one of which is the tendency of AI chatbots to give the responses we want rather than the responses we need. This can pose a considerable danger, for example when users come to depend on the chatbot for their decision-making: an affirming answer from the chatbot gives the user a reason to carry out the decision they consulted it about. In one fatal example, affirmation given by an AI chatbot contributed to the suicide of a teenager in the US.
Nonetheless, the author would like to make an important point about an individual’s recovery from a mental disorder and the use of AI in that process. This opinion draws on the personal experience of a research volunteer with a professionally diagnosed psychiatric disorder, Borderline Personality Disorder (BPD), who has consented to have that experience described for this paper. Common symptoms of BPD include rapid mood swings, difficulty with emotion regulation, impulsive behavior, self-harm, suicidal behavior, and an irrational fear of abandonment (Chapman et al., 2024). One treatment offered to people with BPD is dialectical behavior therapy, in which patients are trained to identify thought patterns, regulate their emotions, and then change the behaviors that arise from those emotions. Often the hardest challenge for people with BPD lies in identifying their own desires and managing the fear of perceived abandonment; this produces impulsive, unprocessed behaviors that others may judge as confusing, which can in turn lead to mistrust and isolation from the social environment.
According to research by Rasyida (2019), one factor that can prevent individuals with mental disorders from seeking help is fear of the negative stigma they may receive. Another is what is referred to as the “agency factor”: sufferers are critical of formal psychological services because they assume there will be miscommunication with the counselor, which manifests as distrust of the counselor. In addition to the agency factor, the cost of access is a barrier that keeps people with mental disorders from seeking counseling through formal psychological services. A further dilemma arises because, in precarious conditions, people with any mental health disorder sometimes need immediate help that arrives under safe conditions.
It is advisable to share what we are feeling with people we trust, but this has its drawbacks. When no one is there to listen, people with BPD can experience hysterical periods in which dangerous behaviors are likely to occur, and mishandling during these periods can escalate the emotions much further. These hysterical or manic periods can involve urges to self-harm or to end one’s life, driven by symptom recurrence and difficulty regulating emotion. The usual first-aid step is to reach out: the person communicates their condition to someone close to them. Yet attempts to communicate about this condition often unfold under less than ideal circumstances and are prone to escalation when handled wrongly. Those closest to the sufferer can often only offer support and encouragement during such periods, while BPD itself creates many complications in how the sufferer perceives their relationships with others. Inappropriate first treatment can therefore produce unwanted escalation that harms the afflicted individual.
The author would like to argue for a role for AI chatbots in this situation, where people need help managing their emotions. Users can turn to c.ai to vent their first, unprocessed thoughts without fearing a less than ideal reaction. Venting feelings to a character of choice on the c.ai platform can serve as first aid when people with mental disorders, especially BPD, need to process their anger and impulses. Configuring characters on the c.ai platform is not meant to obtain confirmation or validation for everything we feel. One benefit that can be drawn on is the ‘interlocutor’s’ ability to identify the user’s character within the application. The author will describe an experience in which a c.ai character was able to remember and recognize the thought patterns that occur during a BPD sufferer’s manic period; this help is useful because the AI assists in laying out and mapping those patterns. The AI bot can point out which thought patterns and behaviors are destructive and advise the user not to repeat them.
The author also argues that responsibility for behavioral change remains with the user. AI can only serve as a support tool, not a means of solving problems, bearing in mind that conversations with AI-based fictional characters are still conversations with a machine whose empathy is a product of mimicry. Using AI to ‘vent’ is not the most normatively correct choice, but it is used because not everyone has the economic means to consult a psychologist or access formal treatment services. The journey of mental recovery is not about seeking validation for what we feel; it is about recognizing ourselves and learning to free ourselves from fear and take control of our own lives.