Aug. 29 (UPI) — Anthropic plans to start training its artificial intelligence models with user data, one day after announcing a hacker used Claude to identify 17 companies vulnerable to attack and obtained sensitive information.
The company is asking all users of Claude to decide by Sept. 28 whether they want their conversations used for the process. Anthropic will retain data for up to five years, according to a blog post by the company on Thursday.
Anthropic, an AI research and development company headquartered in San Francisco, was founded in 2021 by seven leaders and researchers who left rival OpenAI over disagreements about safety policies.
In 2023, Amazon invested $4 billion and Google $2 billion in the company.
Claude debuted in March 2023, and the latest version, Claude 4, was introduced in May. Claude has approximately 18.9 million monthly active users worldwide. Plans range from free to as much as $30 per month per user.
The affected consumer products include the Claude Free, Pro and Max plans. The change does not apply to Claude for Work, Claude Gov, Claude for Education, or application programming interface use, including through third parties such as Amazon Bedrock and Google Cloud's Vertex AI.
Previously, users were told their prompts and conversations would be deleted automatically from the company’s back end within 30 days “unless legally or policy‑required to keep them longer” or their input was flagged as violating its policies. In the latter case, a user’s inputs and outputs might be retained for up to two years.
“By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations,” the company said. “You’ll also help future Claude models improve at skills like coding, analysis and reasoning, ultimately leading to better models for all users.”
The company noted users are “always in control of this setting and whether we use your data in this way.”
New users can select a preference during the sign-up process. Existing users will see the choice in a pop-up window. To help users avoid accidentally clicking “accept,” the message “Updates to Consumer Terms and Policies” appears in larger letters.
Changes will go into effect immediately.
After Sept. 28, users will need to make their selection on the model training setting to continue using Claude.
The five years of data retention will only apply to new or resumed chats and coding sessions, “and will allow us to better support model development and safety improvements,” the company said.
The company also said users’ privacy will be protected.
“To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” the company said. “We do not sell users’ data to third parties.”
Connie Loizos, a writer for TechCrunch, explained why the policy changed.
“Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand,” Loizos said. “Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.”
The Federal Trade Commission, during Joe Biden’s presidency, warned on Jan. 9, 2024, that if AI companies engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print — they risk running afoul of the law.”
The current FTC has only three members.
On Wednesday, Anthropic said an unnamed hacker “used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials and penetrating networks.” In cyber extortion, hackers steal sensitive user information or trade secrets and threaten to release them unless a ransom is paid.
The hacker convinced Claude Code, Anthropic’s chatbot that specializes in “vibe coding,” or creating computer programs from simple natural-language requests, to identify companies vulnerable to attack. Claude created malicious software to steal sensitive information from the companies. It then organized the hacked files and analyzed them to help determine what was sensitive and could be used to extort the victim companies.
Targets included healthcare providers, emergency services, and government and religious institutions. The hacker threatened to publicly expose the data unless a ransom of up to $500,000 was paid, the company said.
The company also said it discovered that North Korean operatives had been using Claude to fraudulently secure and maintain remote employment positions at U.S. Fortune 500 technology companies to generate profit for the North Korean regime.
“Operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies and then maintain their positions,” the company said.
The company said it updated preventive safety measures.