Open source vector database startup Qdrant raises $28M


Qdrant, the company behind the eponymous open source vector database, has raised $28 million in a Series A round of funding led by Spark Capital.

Founded in 2021, Berlin-based Qdrant is seeking to capitalize on the burgeoning AI revolution, targeting developers with an open source vector search engine and database — an integral part of generative AI, which requires that relationships be drawn between unstructured data (e.g. text, images or audio that isn’t labeled or otherwise organized), even when that data is “dynamic” within real-time applications. Per Gartner, unstructured data makes up around 90% of all new enterprise data and is growing three times faster than its structured counterpart.
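For a concrete sense of what that looks like in practice, here is a minimal sketch of storing and searching embedding vectors with the Python qdrant-client, run in its local in-memory mode. The collection name, vectors and payloads are invented for illustration; a real application would produce the vectors with an embedding model.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

# Local, in-memory instance for demonstration; production deployments
# would connect to a Qdrant server or Qdrant Cloud instead.
client = QdrantClient(":memory:")

# A collection holds fixed-size embedding vectors plus arbitrary payloads.
client.create_collection(
    collection_name="docs",  # hypothetical name
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# In practice these vectors come from an embedding model applied to
# unstructured data (text, images, audio); here they are toy values.
client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=1, vector=[0.1, 0.9, 0.1, 0.0], payload={"text": "a support ticket"}),
        PointStruct(id=2, vector=[0.8, 0.1, 0.0, 0.1], payload={"text": "a product photo caption"}),
    ],
)

# Similarity search: find the stored items whose vectors are closest to the query.
hits = client.search(collection_name="docs", query_vector=[0.1, 0.8, 0.2, 0.0], limit=1)
print(hits[0].payload)  # -> {'text': 'a support ticket'}
```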

The vector database realm is hot. In recent months we’ve seen the likes of Weaviate raise $50 million for its open source vector database, while Zilliz secured $60 million to commercialize the Milvus open source vector database. Elsewhere, Chroma secured $18 million in seed funding for a similar proposition, while Pinecone nabbed $100 million for a proprietary alternative.

Qdrant, for its part, raised $7.5 million last April, further highlighting the seemingly insatiable appetite investors have for vector databases — while also pointing to a planned growth spurt on Qdrant’s part.

“The plan was to go into the next fundraising in the second quarter this year, but we received an offer a few months earlier and decided to save some time and start scaling the company now,” Qdrant CEO and co-founder Andre Zayarni explained to TechCrunch. “Fundraising and hiring the right people always take time.”

Of note, Zayarni says the company actually rebuffed a potential acquisition offer from a “major database market player” around the same time it received the follow-on investment offer. “We went with the investment,” he said, adding that the company will use the fresh cash injection to build out its business team, given that it currently consists mostly of engineers.

Binary logic

In the nine months since its last raise, Qdrant has launched a new, super-efficient compression technology called binary quantization (BQ), focused on low-latency, high-throughput indexing, which it says can reduce memory consumption by as much as 32 times and boost retrieval speeds by around 40 times.

“Binary quantization is a way to ‘compress’ the vectors to the simplest possible representation with just zeros and ones,” Zayarni said. “Comparing the vectors becomes the simplest CPU instruction — this makes it possible to significantly speed up queries and save dramatically on memory usage. The theoretical concept is not new, but we implemented it in a way that there is very little loss of accuracy.”
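As a rough illustration of the idea (a simplified sketch, not Qdrant’s actual implementation), the snippet below collapses each float32 dimension to a single bit, which is where the 32x memory figure comes from, and compares vectors with XOR plus a popcount, i.e. the Hamming distance.

```python
import numpy as np

def binary_quantize(vectors: np.ndarray) -> np.ndarray:
    """Map each float dimension to one bit: 1 if positive, else 0."""
    bits = (vectors > 0).astype(np.uint8)
    # Pack 8 dimensions per byte: 1 bit per dimension instead of 32 for float32.
    return np.packbits(bits, axis=-1)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Compare two packed binary vectors with XOR + popcount."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

# Toy example with 1536-dimensional vectors (a common embedding size).
rng = np.random.default_rng(0)
query, doc = rng.standard_normal((2, 1536)).astype(np.float32)
packed = binary_quantize(np.stack([query, doc]))
print(hamming_distance(packed[0], packed[1]))  # smaller = more similar
```

Vector databases typically pair this kind of cheap binary comparison with a rescoring pass over the original vectors for the top candidates, which is how the accuracy loss stays small.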

BQ might not work for all AI models, though, and it’s entirely up to the user to decide which compression option will work best for their use-case — but Zayarni says the best results Qdrant found were with OpenAI’s models, while Cohere’s models and Google’s Gemini also worked well. The company is currently benchmarking against models from the likes of Mistral and Stability AI.

It’s such endeavors that have helped attract high-profile adopters, including Deloitte, Accenture, and — arguably the highest profile of them all — X (née Twitter). Or, perhaps more accurately, Elon Musk’s xAI, the company developing the ChatGPT competitor Grok, which debuted on the X platform last month.

While Zayarni didn’t disclose details of how X or xAI is using Qdrant, due to a non-disclosure agreement (NDA), it’s reasonable to assume it’s using Qdrant to process real-time data. Indeed, Grok is built on a generative AI model dubbed Grok-1, trained on data from the web and feedback from humans, and given its (now) tight alignment with X, it can incorporate real-time data from social media posts into its responses. This is what is known today as retrieval-augmented generation (RAG), and Elon Musk has publicly teased such use-cases over the past few months.
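In generic terms (this is an illustrative sketch of the RAG pattern, not a description of xAI’s or X’s actual pipeline), the loop looks roughly like this; the collection name, payload field and generate callable are placeholders.

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # assumes a running Qdrant instance

def answer_with_rag(question_text: str, question_embedding: list[float], generate) -> str:
    """Retrieval-augmented generation, schematically:
    1) retrieve the stored items most similar to the question,
    2) put their text into the prompt as context,
    3) have the language model answer grounded in that context.
    `generate` stands in for whatever LLM call the application uses."""
    hits = client.search(
        collection_name="posts",        # hypothetical collection of recent posts
        query_vector=question_embedding,
        limit=5,
    )
    context = "\n".join(hit.payload.get("text", "") for hit in hits)
    prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question_text}"
    return generate(prompt)
```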

Qdrant doesn’t reveal which of its customers are using the open source Qdrant incarnation and which are using its managed services, but it did point to a number of startups, such as GitBook, VoiceFlow, and Dust, that are “mostly” using its managed cloud service — this effectively saves resource-constrained companies from having to deploy and manage everything themselves, as they would with the core open source incarnation.

However, Zayarni is adamant that the company’s open source credentials are one of the major selling points, even if a company elects to pay for add-on services.

“When using a proprietary or cloud-only solution, there is always a risk of vendor lock-in,” Zayarni said. “If the vendor decides to adjust the pricing, or change other terms, customers need to agree or consider a migration to an alternative, which isn’t easy if it’s a heavy-production use-case. With open source, there is always more control over your data, and it is possible to switch between different deployment options.”

Alongside the funding today, Qdrant is also officially releasing its managed “on-premise” edition, giving enterprises the option to host everything internally while tapping the premium features and support provided by Qdrant. This follows last week’s news that Qdrant’s cloud edition was landing on Microsoft Azure, adding to the existing AWS and Google Cloud Platform support.

Aside from lead backer Spark Capital, Qdrant’s Series A round included participation from Unusual Ventures and 42cap.




