Anthropic confirms it suffered a data leak


It’s been an eventful week for AI startup Anthropic, creator of the Claude family of large language models (LLMs) and associated chatbots.

The company says that on Monday, January 22nd, it became aware that a contractor inadvertently sent a file containing non-sensitive customer information to a third party. The file detailed a “subset” of customer names, as well as open credit balances as of the end of 2023. 

“Our investigation shows this was an isolated incident caused by human error — not a breach of Anthropic systems,” an Anthropic spokesperson told VentureBeat. “We have notified affected customers and provided them with the relevant guidance.”

The disclosure came just before the Federal Trade Commission (FTC), the U.S. agency charged with regulating market competition, announced it was investigating Anthropic’s strategic partnerships with Amazon and Google, as well as those of rival OpenAI with its backer Microsoft.

Anthropic’s spokesperson emphasized that the leak is in no way related to the FTC probe, on which the company declined to comment.

Account information ‘inadvertently misdirected’

The PC-centric news outlet Windows Report obtained and posted a screenshot of an email Anthropic sent to customers acknowledging the leak of their information by one of its third-party contractors.

The leaked information included customers’ “account name … accounts receivable information as of December 31, 2023.” Here’s the full text of the email:

Important alert about your account.

We wanted to let you know that one of our contractors inadvertently misdirected some accounts receivable information from Anthropic to a third party. The information included your account name, as maintained in our systems, and accounts receivable information as of December 31, 2023 – i.e., it said you were a customer with open credit balances at the end of the year. This information did not include sensitive personal data, including banking or payment information, or prompts/outputs. Based on our investigation to date, the contractor’s actions were an isolated error that didn’t arise from or result in any of our systems being breached. We also aren’t aware of any malicious behavior arising out of this disclosure.

Anthropic said the contractor’s actions “were an isolated error” and that it wasn’t aware of “any malicious behavior arising out of this disclosure.” 

However, the company emphasized, “we are asking customers to be alert to any suspicious communications appearing to come from Anthropic, such as requests for payment, requests to amend payment instructions, emails containing suspicious links, requests for credentials or passwords, or other unusual requests.” 

Customers who received the letter were advised to “ignore any suspicious contacts” purporting to be from Anthropic and to “exercise caution” and follow their own internal accounting controls around payments and invoices. 

“We sincerely regret that this incident occurred and any disruption it might have caused you,” the company continued. “Our team is on standby to provide support.” 

Only a ‘subset’ of users affected

Asked about the leak, an Anthropic spokesperson told VentureBeat that only a “subset” of users was impacted, though the company did not provide a specific number.

The leak is notable given that data breaches are at an all-time high, with some 95% traced to human error.

The news seems to confirm some of the worst fears of enterprises that are beginning to use third-party LLMs such as Claude with their proprietary data.

VentureBeat’s reporting and events have revealed that many technical decision-makers at enterprises large and small have strong concerns that company data could be compromised through LLMs, as happened at Samsung, which banned ChatGPT outright last spring after employees leaked sensitive company data through the chatbot.

Bad timing as regulators look more closely at AI partnerships

Anthropic, an OpenAI rival, has been on a meteoric rise since its founding in 2021. The unicorn is reportedly valued at $18.4 billion, raised $750 million across three funding rounds last year, and is set to receive up to $2 billion from Google and another $4 billion from Amazon. It is also reportedly in talks to raise another $750 million in a round led by top tech VC firm Menlo Ventures. 

But the company’s relationships with AWS and Google have raised concerns at the FTC. This week, the agency issued 6(b) orders to Amazon, Microsoft, OpenAI, Anthropic and Alphabet requesting detailed information on their multi-billion-dollar relationships. 

The agency specifically called out these investments and partnerships: 

  • Microsoft and OpenAI’s extended partnership, announced on January 23, 2023; 
  • Amazon and Anthropic’s strategic collaboration, announced on September 25, 2023;
  • Google’s expanded AI partnership with Anthropic, announced on November 8, 2023. 

Among other details, the companies are being asked to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations. 

The latter would include any probes from the European Union and the UK, which are both looking into Microsoft’s AI investment. The UK’s competition regulator opened a review in December and the EU’s executive branch has said that the partnership could trigger an investigation under regulations covering mergers and acquisitions. 

“We’re scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition,” FTC chair Lina Khan said at an AI forum on Thursday. 

Anthropic’s tight relationships with AWS and Google

Anthropic has partnered with AWS and with Google and its parent Alphabet since its inception, and its collaborations with both have expanded substantially in a short period of time. 

Amazon has announced that it is investing up to $4 billion in Anthropic and will take a minority ownership stake in the startup. AWS is also Anthropic’s primary cloud provider and is supplying its chips to the company. 

Further, Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will give them early access to unique features for model customization and fine-tuning. 
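For context on what that access looks like in practice, the sketch below shows roughly how an AWS customer might call a Claude model through the Bedrock runtime API using boto3. The region, model ID and prompt are illustrative assumptions, and the request body follows the Claude text-completion format Bedrock documented in early 2024; details vary by model version and account.

```python
import json
import boto3

# Illustrative sketch: invoking a Claude model through Amazon Bedrock.
# The region and model ID below are assumptions for the example; check
# which models are actually enabled in your AWS account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude text-completion request body (early-2024 format): the prompt
# must follow the "\n\nHuman: ... \n\nAssistant:" convention.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize this week's AI policy news in two sentences.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # example model ID
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream; parse it and print the generated completion.
print(json.loads(response["body"].read())["completion"])
```

In this setup, requests are authenticated and billed through AWS rather than directly through Anthropic.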

“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” Amazon CEO Andy Jassy said in a statement announcing the companies’ extended partnership. 

Through its partnership with Google and parent Alphabet, meanwhile, Anthropic uses Google Cloud’s security services, PostgreSQL-compatible database and BigQuery data warehouse, and has deployed Google’s TPU v5e chips to serve its Claude models.

“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”
