Europe’s startup contribution to the generative AI bonanza, Mistral, has released its first model. Mistral 7B is free to download and use anywhere, including locally.

French AI developer Mistral says its large language model is optimised for low-latency use and for tasks such as text summarisation, classification, text completion, and code completion. The startup has opted to release Mistral 7B under the Apache 2.0 licence, which places no restrictions on use or reproduction beyond attribution.

“Working with open models is the best way for both vendors and users to build a sustainable business around AI solutions,” the company commented in a blog post accompanying the release. “Open models can be finely adapted to solve many new core business problems, in all industry verticals — in ways unmatched by black-box models.” 

The company added that it believes the future will see many different specialised models, each adapted to specific tasks, compressed as much as possible, and connected to specific modalities.

Mistral has said that, moving forward, it will specifically target business clients and their needs in R&D, customer care, and marketing, and will give them the tools to build new products with AI.

“We’re committing to release the strongest open models in parallel to developing our commercial offering,” Mistral said. “We’re already training much larger models, and are shifting toward novel architectures. Stay tuned for further releases this fall.”

Do not expect a user-friendly, ChatGPT-style web interface for engaging with the made-in-Europe LLM. However, the model is downloadable via a 13.4GB torrent, and the company has set up a Discord channel to engage with the user community.
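For readers who want to experiment with the model on their own hardware, a minimal sketch along the following lines should work, assuming the weights are also published on Hugging Face under an identifier such as mistralai/Mistral-7B-v0.1 (the repository name, generation settings, and hardware requirements below are illustrative assumptions, not details confirmed by Mistral):

    # Minimal sketch: load Mistral 7B with the Hugging Face transformers library
    # and generate a short completion. The repository id is an assumption;
    # device_map="auto" additionally requires the accelerate package.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "mistralai/Mistral-7B-v0.1"  # assumed Hugging Face repository name

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # half precision to reduce the memory footprint
        device_map="auto",          # place layers on available GPU/CPU automatically
    )

    prompt = "Summarise in one sentence: Mistral 7B is a 7.3 billion parameter open model."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Sampling parameters are illustrative; adjust to taste.
    output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.7)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

In half precision, 7.3 billion parameters work out to roughly 15GB of weights, so a recent GPU, or a patient CPU with plenty of RAM, is advisable.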

European funding record for startups

Paris-based Mistral made headlines in June when it raised what was reportedly the largest seed round ever for a European startup: €105mn, led by Lightspeed Venture Partners. While it may seem as if an improbable amount of work has been done in just three months, it is worth remembering that the company’s three founders all came from Google DeepMind or Meta.

The 7.3 billion parameter Mistral 7B reportedly outperforms larger models, such as Meta’s 13 billion parameter Llama 2, while requiring less computing power. Indeed, according to its developers, it “outperforms all currently available open models up to 13B parameters on all standard English and code benchmarks.”

“Mistral 7B’s performance demonstrates what small models can do with enough conviction,” the company concluded.


