But maybe the future of these models is more focused than the boil-the-ocean approach we’ve seen from OpenAI and others, who want to be able to answer every question under the sun. What if each industry, or even each company, had its own model trained to understand its particular jargon, language and approach? Perhaps then we would get fewer completely made-up answers, because the answers would come from a more limited universe of words and phrases.

In the AI-driven future, each company’s own data could be its most valuable asset. If you’re an insurance company, you have a completely different lexicon than a hospital, an automotive company or a law firm, and when you combine that with your customer data and the full body of content across the organization, you have a language model. Perhaps it’s not large in the truly large language model sense, but it would be just the model you need: a model created for one, not for the masses.

This will also require a set of tools to collect, aggregate and constantly update the corporate dataset in a way that makes it ingestible for these smaller large language models (sLLMs).

Building these models could pose a challenge. Companies will probably start from an existing LLM, whether open source or from a private vendor, and then fine-tune it on industry or company data to bring it into focus, all in a more secure environment than the generic LLM variety. This represents a huge opportunity for the startup community, and we are seeing lots of companies with a head start on this idea.
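The collect-and-aggregate tooling described above might look, at its simplest, something like the following Python sketch, which walks a folder of internal documents and emits a JSONL corpus a fine-tuning job could ingest. All names and details here are illustrative assumptions, not from the article; a real pipeline would also handle PDFs, access controls, deduplication and incremental updates.

```python
import json
from pathlib import Path


def build_corpus(doc_root: str, out_path: str,
                 extensions=(".txt", ".md")) -> int:
    """Aggregate a company's internal documents into a JSONL training
    corpus, one record per document. Returns the number of records
    written. Hypothetical sketch, not a production pipeline."""
    records = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for path in sorted(Path(doc_root).rglob("*")):
            if path.is_file() and path.suffix.lower() in extensions:
                text = path.read_text(encoding="utf-8",
                                      errors="ignore").strip()
                if text:  # skip empty files
                    record = {"source": str(path), "text": text}
                    out.write(json.dumps(record) + "\n")
                    records += 1
    return records
```

The design choice worth noting is the JSONL output: one document per line keeps the corpus streamable, easy to append to as new content arrives, and compatible with most fine-tuning toolchains.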
Generative AI’s future in enterprise could be smaller, more focused language models by Ron Miller originally published on TechCrunch