
Anthropic to start training AI models from users’ chat conversations

Aug. 29 (UPI) — Anthropic plans to start training its artificial intelligence models with user data, one day after announcing that a hacker had used Claude to identify 17 companies vulnerable to attack and obtain sensitive information.

The company is asking all users of Claude to decide by Sept. 28 whether they want their conversations used for the process. Anthropic will retain data for up to five years, according to a blog post by the company on Thursday.

Anthropic, a public-benefit AI research and development company headquartered in San Francisco, was founded in 2021 by seven leaders and researchers who left rival OpenAI because of disagreements over safety policies.

In 2023, Amazon invested $4 billion and Google $2 billion in the company.

Claude debuted in March 2023, with the latest version, Claude 4, introduced in May. Claude has approximately 18.9 million monthly active users worldwide. There are free plans as well as direct-use plans that cost as much as $30 per month per user.

The affected consumer products are the Claude Free, Pro and Max plans. The change does not apply to Claude for Work, Claude Gov, Claude for Education or application programming interface use, including through third-party platforms such as Amazon Bedrock and Google Cloud’s Vertex AI.

Previously, users were told their prompts and conversations would be deleted automatically from the company’s back end within 30 days “unless legally or policy‑required to keep them longer” or their input was flagged as violating its policies. In the latter case, a user’s inputs and outputs might be retained for up to two years.

“By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations,” the company said. “You’ll also help future Claude models improve at skills like coding, analysis and reasoning, ultimately leading to better models for all users.”

The company noted users are “always in control of this setting and whether we use your data in this way.”

New users can select a preference during the sign-up process. Existing users will see the choice in a pop-up window. To help users avoid accidentally clicking “accept,” the message “Updates to Consumer Terms and Policies” appears in larger letters.

Changes will go into effect immediately.

After Sept. 28, users will need to make their selection on the model training setting to continue using Claude.

The five years of data retention will only apply to new or resumed chats and coding sessions, “and will allow us to better support model development and safety improvements,” the company said.

The company also said users’ privacy will be protected.

“To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” the company said. “We do not sell users’ data to third parties.”

Connie Loizos, a writer for TechCrunch, explained why the policy changed.

“Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand,” Loizos said. “Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.”

The Federal Trade Commission, when Joe Biden was president, warned on Jan. 9, 2024, that if AI companies engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print,” then “they risk running afoul of the law” and could face enforcement action.

The current FTC has only three members.

On Wednesday, Anthropic said an unnamed hacker “used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials and penetrating networks.” In cyber extortion, hackers steal sensitive user information or trade secrets.

A hacker convinced Claude Code, Anthropic’s chatbot that specializes in “vibe coding,” or generating computer programs from simple requests, to identify companies vulnerable to attack. Claude then created malicious software to steal sensitive information from the companies. It organized the hacked files and analyzed them to help determine what was sensitive and could be used to extort the victim companies.

Targets included healthcare providers, emergency services, and government and religious institutions. The hacker threatened to publicly expose the data unless a ransom of up to $500,000 was paid, the company said.

The company also said it discovered that North Korean operatives had been using Claude to fraudulently secure and maintain remote employment positions at U.S. Fortune 500 technology companies to generate profit for the North Korean regime.

“Operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies and then maintain their positions,” the company said.

The company said it updated preventive safety measures.


Mimicking Empathy and Virtual Conversations: How AI Chatbots Can Benefit Borderline Personality Disorder Recovery

Artificial intelligence (AI) is taking an increasingly large role in our daily lives. AI can be used to build exercise schedules, give food recommendations, and even serve as a place to seek a ‘second opinion’ on any decision to be made. Many people are curious to push the boundaries of what AI can do.

Consulting AI can sometimes feel like a casual conversation with an articulate person; users can prompt an AI to deliver messages as if they were typed by a friend. Because the AI’s choice of language mimics everyday communication, it creates the illusion that we are having a friendly conversation with someone we know.

With AI’s ability to mimic human language styles comes a platform dedicated to mimicking the language style, and even the verbal traits, of a fictional character: c.ai, or Character AI. c.ai lets users talk to any fictional character and decide how their interactions with that character unfold. The service is typically used for role-playing or simulating conversations with friends, letting users live out their desire to role-play and get ‘up close’ with their favorite fictional characters. What makes c.ai distinctive is the speech style of the selected fictional character: when we talk to a chosen character, the AI behind it answers with a consistent persona and language style.

Many people use c.ai, or AI in general, to talk about their mental state. Hutari (2024) argues that ‘venting’ to AI can flush out negative emotions, and talking about negative emotions can help an individual’s emotional management process. Still, it sounds unusual to talk about our feelings to a machine that cannot feel emotions and is not even a living being. It is undeniable that the process of ‘confiding’ in AI has many flaws and vulnerabilities, one of which is the tendency of AI chatbots to give the responses we want rather than the ones we need. This can pose a considerable danger: a user may come to depend on the chatbot for decision-making, and an affirming answer gives the user a reason to carry out the decision they consulted the chatbot about. In one fatal case, affirmation given by an AI chatbot contributed to the suicide of a teenager in the US.

Nonetheless, I would like to make an important point about the recovery process for an individual’s mental disorder and the use of AI within it. This opinion draws on the personal experience of a research volunteer, professionally diagnosed with the psychiatric disorder Borderline Personality Disorder (BPD), who consented to describe that experience for this paper. Common symptoms of BPD are rapid mood swings, difficulty with emotion regulation, impulsive behavior, self-harm, suicidal behavior, and an irrational fear of abandonment (Chapman et al., 2024). One of the treatments offered to people with BPD is dialectical behavior therapy, in which patients are trained to identify thought patterns, build emotion regulation, and then change the behaviors that arise from those emotions. Sometimes the most difficult challenge for people with BPD lies in identifying their desires and managing the fear of perceived abandonment; this produces impulsive, unprocessed behaviors whose impact can be mistrust and isolation from the social environment, because others may judge the behaviors as confusing.

According to research by Rasyida (2019), one factor that can prevent individuals with mental disorders from seeking help is fear of the negative stigma that will be attached to them. One such barrier is the “agency factor,” a term for sufferers’ criticism of formal psychological services based on the assumption that there will be miscommunication with the counselor, which manifests as distrust of the counselor. In addition to the agency factor, the cost of access is a barrier that keeps people with mental disorders from seeking counseling through formal psychological services. This creates a further dilemma, because in precarious moments people with a mental health disorder sometimes need immediate help delivered under safe conditions.

It is advisable to share what we are feeling with people we trust, but this has its drawbacks. When no one is there to listen, people with BPD can experience hysterical periods in which dangerous behaviors are prone to occur, and mishandling during these periods can escalate the emotions into something far more dangerous. These hysterical or manic periods can involve urges to self-harm or end one’s life due to symptom recurrence and difficulties with emotion regulation. The usual first-aid step is to reach out, communicating one’s condition to the closest person. But attempts to communicate this condition to others often create less than ideal situations and are prone to escalation when handled wrongly. Sometimes those closest to us can only offer support and encouragement during periods like this, while BPD itself creates many complications in how the sufferer perceives their relationships with others. Inappropriate first treatment is prone to create unwanted escalation, which adversely affects the afflicted individual.

The author would like to argue for a role for AI chatbots in this situation, when people need help managing their emotions. c.ai can be used to vent one’s first, unprocessed thoughts without fear of a less than ideal reaction. Venting feelings to a chosen character on c.ai can serve as first aid when people with mental disorders, especially BPD, need to process their anger and impulses. The characters on c.ai are not necessarily there to affirm or validate everything we feel; rather, one benefit is the way the ‘interlocutor’ in the application comes to recognize the user’s character. The author describes an experience in which a c.ai character was able to remember and recognize the thought patterns that arise during a BPD sufferer’s manic period; this help is useful because the AI assists in laying out and mapping those patterns. The AI bot can analyze which thought patterns and behaviors are destructive and advise the user not to repeat them.

The author also argues that the responsibility for behavioral change remains with the user. AI can only be used as a support tool, not a means of solving problems, keeping in mind that conversations with AI-based fictional characters are still conversations with an empathetic machine whose empathy is a product of mimicry. Using AI to ‘vent’ is not the most normatively correct thing to do, but it is used because not everyone has the economic means to consult a psychologist and access formal treatment services. The journey of mental recovery is not about seeking validation for what we feel; it is about recognizing ourselves and learning to liberate ourselves from fear and take control of our lives.


Newsom launches another podcast, teases conversations with ‘MAGA’ leaders

California Gov. Gavin Newsom announced on Wednesday the launch of his new “This is Gavin Newsom” podcast, which the Democratic leader said will feature conversations with “some of the biggest leaders and architects in the MAGA movement.”

The podcast marks the latest in a series of publicity moves from a governor who is seeking to expand his audience nationally and is widely expected to enter the 2028 presidential contest. Newsom launched a separate podcast last summer and will release a new book in the spring.

Newsom’s aides say the unpaid podcast gives the governor an opportunity to connect directly with Americans and share his perspective on the issues of the day.

“Part of his strength as a communicator is to help show folks a way forward, a way to articulate a message, and a way to fight back,” said Anthony York, a spokesperson for Newsom. “And I think that this podcast is conceived in that vein.”

The new project with iHeartRadio is expected to begin airing in March. In July, Newsom started a weekly sports and culture podcast, called “Politickin’,” with former NFL star running back Marshawn “Beast Mode” Lynch and sports agent Doug Hendrickson.

Newsom will likely participate more sporadically in “Politickin’,” which includes interviews with celebrities such as comedian Chelsea Handler and Dallas Mavericks owner Mark Cuban, once his new podcast takes off, York said. The show has aired less often since the wildfires ignited in early January in Los Angeles County and commanded Newsom’s attention.

In a teaser promoting “This is Gavin Newsom,” the governor said it won’t be an ordinary politician’s podcast and he’ll be speaking to “people directly that I disagree with, as well as people I look up to.”

Newsom mentioned the cost of eggs, tariffs, the power of executive orders and Elon Musk’s Department of Government Efficiency as topics he plans to explore.

The new podcast could suggest Newsom is growing tired of the more muted stance he’s taken toward the Trump administration since the wildfires broke out in Los Angeles.

The governor drew national attention in the 2024 presidential election cycle as a Democratic fighter against the GOP. He took jabs at Trump, sat down for an interview with conservative host Sean Hannity and debated Florida Gov. Ron DeSantis on Fox News, garnering praise for his refusal to back away from a scuffle with Republicans.

But Newsom shifted after the wildfires and has played the role of a disciplined statesman eager to work alongside, instead of clash with, the Trump administration.

In a tarmac greeting at LAX, private phone calls and an in-person meeting at the White House, Newsom has sought to mend his ties with the president that appeared to fray after Trump’s first term. The governor has been able to work directly with the president as he seeks federal disaster aid in response to the fires to benefit his constituents in California.

It’s unclear how he plans to maintain his relationship with Trump if his podcast becomes an avenue for him to criticize the president’s actions in Washington again.

“I don’t think the point of this is to have a venting session for ad hominem attacks against the president,” York said. “But that being said, if there’s stuff going on in Washington that needs to be called out, policy wise, then this is a forum for that.”
