Why vector databases are having a moment as the AI hype cycle peaks


Vector databases are all the rage, judging by the number of startups entering the space and the investors ponying up for a piece of the pie. The proliferation of large language models (LLMs) and the generative AI (GenAI) movement have created fertile ground for vector database technologies to flourish.

While traditional relational databases such as Postgres and MySQL are well suited to structured data — predefined data types that can be filed neatly into rows and columns — they struggle with unstructured data such as images, videos, emails, social media posts, and any other data that doesn’t adhere to a predefined data model.

Vector databases, on the other hand, store and process data as vector embeddings: numerical representations of text, documents, images, and other data that capture their meaning and the relationships between data points. That makes them a natural fit for machine learning, because semantically similar items sit close together in the vector space, which makes related data easy to retrieve.
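
To make the idea concrete, here is a minimal sketch in Python using toy three-dimensional vectors; in a real system the embeddings would come from a model such as those offered by OpenAI or Hugging Face and would have hundreds or thousands of dimensions.

```python
import numpy as np

# Toy "embeddings": in a real system these come from an embedding model
# and have hundreds or thousands of dimensions, not three.
documents = {
    "a photo of a cat": np.array([0.90, 0.10, 0.00]),
    "a picture of a kitten": np.array([0.85, 0.15, 0.05]),
    "quarterly sales report": np.array([0.05, 0.20, 0.95]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two vectors point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend this is the embedding of the query "feline images".
query = np.array([0.88, 0.12, 0.02])

# Rank stored items by how semantically close they are to the query.
ranked = sorted(documents.items(),
                key=lambda item: cosine_similarity(query, item[1]),
                reverse=True)
for text, _vector in ranked:
    print(text)
```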

This is particularly useful for LLMs, such as OpenAI’s GPT-4, as it allows the AI chatbot to better understand the context of a conversation by analyzing previous similar conversations. Vector search is also useful for all manner of real-time applications, such as content recommendations in social networks or e-commerce apps, as it can look at what a user has searched for and retrieve similar items in a heartbeat. 

Vector search can also help reduce “hallucinations” in LLM applications by retrieving relevant information at query time and supplying it to the model, grounding its answers in facts that might not have been available in the original training dataset.
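
That pattern is commonly known as retrieval-augmented generation (RAG). The sketch below shows the retrieve-then-prompt flow in outline; `embed`, `search_vector_db`, and `ask_llm` are hypothetical placeholders for whatever embedding model, vector store, and LLM client an application actually uses.

```python
# Hypothetical sketch of retrieval-augmented generation (RAG): the vector
# database supplies context the model never saw during training.
def answer_question(question: str, embed, search_vector_db, ask_llm) -> str:
    # 1. Embed the question with the same model used to index the documents.
    query_vector = embed(question)

    # 2. Fetch the most semantically similar passages from the vector database.
    passages = search_vector_db(query_vector, top_k=3)

    # 3. Give the model the retrieved context alongside the question, so it
    #    answers from supplied facts rather than guessing.
    context = "\n".join(passages)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)
```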

“Without using vector similarity search, you can still develop AI/ML applications, but you would need to do more retraining and fine-tuning,” Andre Zayarni, CEO and co-founder of vector search startup Qdrant, explained to TechCrunch. “Vector databases come into play when there’s a large dataset, and you need a tool to work with vector embeddings in an efficient and convenient way.”

In January, Qdrant secured $28 million in funding to capitalize on growth that made it one of the top 10 fastest-growing commercial open source startups last year. And it’s far from the only vector database startup to raise cash of late — Vespa, Weaviate, Pinecone, and Chroma collectively raised $200 million last year for various vector offerings.

Qdrant founding team. Image Credits: Qdrant

Since the turn of the year, we’ve also seen Index Ventures lead a $9.5 million seed round into Superlinked, a platform that transforms complex data into vector embeddings. And a few weeks back, Y Combinator (YC) unveiled its Winter ’24 cohort, which included Lantern, a startup that sells a hosted vector search engine for Postgres.

Elsewhere, Marqo raised a $4.4 million seed round late last year, swiftly followed by a $12.5 million Series A in February. The Marqo platform provides a full gamut of vector tools out of the box, spanning vector generation, storage, and retrieval, all behind a single API, which lets users circumvent third-party embedding tools from the likes of OpenAI or Hugging Face.
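
For a sense of what that single-API workflow looks like, here is a short sketch that follows the shape of Marqo’s documented Python quickstart; the index name and document fields are illustrative, and exact method signatures can differ between client versions.

```python
import marqo

# Connect to a locally running Marqo instance.
mq = marqo.Client(url="http://localhost:8882")

# Creating an index also selects an embedding model, so there is no separate
# call to OpenAI or Hugging Face to generate vectors.
mq.create_index("articles")

# Documents are embedded and stored in a single step.
mq.index("articles").add_documents(
    [{"title": "Vector databases", "body": "Why vector search is having a moment"}],
    tensor_fields=["body"],
)

# Queries are embedded too, so search matches by meaning rather than keywords.
results = mq.index("articles").search("databases built for AI workloads")
print(results["hits"][0]["title"])
```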

Marqo co-founders Tom Hamer and Jesse N. Clark previously worked in engineering roles at Amazon, where they realized the “huge unmet need” for semantic, flexible searching across different modalities such as text and images. And that is when they jumped ship to form Marqo in 2021.

“Working with visual search and robotics at Amazon was when I really looked at vector search — I was thinking about new ways to do product discovery, and that very quickly converged on vector search,” Clark told TechCrunch. “In robotics, I was using multi-modal search to search through a lot of our images to identify if there were errant things like hoses and packages. This was otherwise going to be very challenging to solve.”

Marqo co-founders Jesse Clark and Tom Hamer. Image Credits: Marqo

Enter the enterprise

While vector databases are having a moment amid the hullabaloo of ChatGPT and the GenAI movement, they’re not a panacea for every enterprise search scenario.

“Dedicated databases tend to be fully focused on specific use cases and hence can design their architecture for performance on the tasks needed, as well as user experience, compared to general-purpose databases, which need to fit it in the current design,” Peter Zaitsev, founder of database support and services company Percona, explained to TechCrunch.

Specialized databases might excel at one thing to the exclusion of others, which is why database incumbents such as Elastic, Redis, OpenSearch, Cassandra, Oracle, and MongoDB are starting to add vector search smarts to the mix, as are cloud service providers like Microsoft’s Azure, Amazon’s AWS, and Cloudflare.

Zaitsev compares this latest trend to what happened with JSON more than a decade ago, when web apps became more prevalent and developers needed a language-independent data format that was easy for humans to read and write. In that case, a new database class emerged in the form of document databases such as MongoDB, while existing relational databases also introduced JSON support.

“I think the same is likely to happen with vector databases,” Zaitsev told TechCrunch. “Users who are building very complicated and large-scale AI applications will use dedicated vector search databases, while folks who need to build a bit of AI functionality for their existing application are more likely to use vector search functionality in the databases they use already.”
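
The “existing database” route Zaitsev describes often looks like Postgres with the open source pgvector extension. The sketch below assumes pgvector is installed and a Postgres instance is reachable; the connection details are illustrative, and the three-dimensional vectors are toys standing in for real embeddings.

```python
import psycopg2

# Assumes a reachable Postgres instance with the pgvector extension available;
# connection details here are illustrative.
conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS items (
        id        bigserial PRIMARY KEY,
        body      text,
        embedding vector(3)  -- real embeddings have far more dimensions
    );
""")
cur.execute(
    "INSERT INTO items (body, embedding) VALUES (%s, %s::vector)",
    ("a photo of a cat", "[0.9,0.1,0.0]"),
)

# pgvector's <-> operator orders rows by distance to the query vector.
cur.execute(
    "SELECT body FROM items ORDER BY embedding <-> %s::vector LIMIT 5",
    ("[0.88,0.12,0.02]",),
)
print(cur.fetchall())
conn.commit()
```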

But Zayarni and his Qdrant colleagues are betting that native solutions built entirely around vectors will provide the “speed, memory safety, and scale” needed as vector data explodes, compared with databases that bolt vector search on as an afterthought.

“Their pitch is, ‘we can also do vector search, if needed,’” Zayarni said. “Our pitch is, ‘we do advanced vector search in the best way possible.’ It is all about specialization. We actually recommend starting with whatever database you already have in your tech stack. At some point, users will face limitations if vector search is a critical component of your solution.”
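
For comparison, here is what talking to a dedicated engine like Qdrant can look like, sketched with the open source qdrant-client Python package running an ephemeral in-memory instance; the collection name and the tiny four-dimensional vectors are illustrative only.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# ":memory:" runs an ephemeral, in-process instance (handy for experimenting).
client = QdrantClient(":memory:")

client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=1, vector=[0.90, 0.10, 0.00, 0.00], payload={"text": "cat photo"}),
        PointStruct(id=2, vector=[0.05, 0.20, 0.90, 0.10], payload={"text": "sales report"}),
    ],
)

# Retrieve the nearest neighbours of a query vector.
hits = client.search(collection_name="docs", query_vector=[0.88, 0.12, 0.02, 0.00], limit=2)
for hit in hits:
    print(hit.payload["text"], hit.score)
```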


