Late Friday afternoon, a time window companies usually reserve for unflattering disclosures, AI startup Hugging Face said that its security team earlier this week detected “unauthorized access” to Spaces, Hugging Face’s platform for creating, sharing and hosting AI models and resources.

In a blog post, Hugging Face said that the intrusion related to Spaces secrets, or the private pieces of information that act as keys to unlock protected resources like accounts, tools and dev environments, and that it has “suspicions” some secrets could’ve been accessed by a third party without authorization.

As a precaution, Hugging Face has revoked a number of tokens in those secrets. (Tokens are used to verify identities.) Hugging Face says that users whose tokens have been revoked have already received an email notice and is recommending that all users “refresh any key or token” and consider switching to fine-grained access tokens, which Hugging Face claims are more secure.
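The rotation Hugging Face is recommending boils down to generating a fresh token in account settings and replacing the old one wherever it is stored locally. A minimal sketch of that replacement step, assuming only the documented `hf_` prefix on Hugging Face user access tokens; the helper function, file path, and token values here are illustrative placeholders, not the real huggingface_hub cache layout:

```python
import tempfile
from pathlib import Path

def rotate_cached_token(new_token: str, cache: Path) -> None:
    """Replace a locally cached access token with a freshly generated one."""
    # Hugging Face user access tokens start with "hf_"; reject anything else.
    if not new_token.startswith("hf_"):
        raise ValueError("unexpected token format")
    cache.parent.mkdir(parents=True, exist_ok=True)
    cache.write_text(new_token)

# Demo against a throwaway file standing in for the local token cache.
with tempfile.TemporaryDirectory() as tmp:
    cache = Path(tmp) / "token"
    rotate_cached_token("hf_newFineGrainedPlaceholder", cache)
    print(cache.read_text())
```

After rotating, any CI secrets, notebooks, or `.env` files holding the revoked token would also need the new value.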

It wasn’t immediately clear how many users or apps were impacted by the potential breach. We’ve reached out to Hugging Face for more information and will update this post if we hear back.

“We are working with outside cyber security forensic specialists, to investigate the issue as well as review our security policies and procedures. We have also reported this incident to law enforcement agencies and Data [sic] protection authorities,” Hugging Face wrote in the post. “We deeply regret the disruption this incident may have caused and understand the inconvenience it may have posed to you. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure.”

The possible hack of Spaces comes as Hugging Face, which is among the largest platforms for collaborative AI and data science projects with over one million models, data sets and AI-powered apps, faces increasing scrutiny over its security practices.

In April, researchers at cloud security firm Wiz found a vulnerability — since fixed — that would have allowed attackers to execute arbitrary code during the build of a Hugging Face-hosted app and examine network connections from their machines. Earlier in the year, security firm JFrog uncovered evidence that code uploaded to Hugging Face covertly installed backdoors and other types of malware on end-user machines. And security startup HiddenLayer identified ways Hugging Face’s ostensibly safer serialization format, Safetensors, could be abused to create sabotaged AI models.

Hugging Face recently said that it would partner with Wiz to use the company’s vulnerability scanning and cloud environment configuration tools “with the goal of improving security across our platform and the AI/ML ecosystem at large.”