Today, Stability AI unveiled an experimental version of Stable LM 3B, the latest addition to their suite of high-performance and sustainable generative AI solutions. Stable LM 3B is a compact language model designed for on-device use, with just 3 billion parameters.
Despite its small size, Stable LM 3B delivers remarkable capabilities. It outperforms previous state-of-the-art 3B parameter models and even some of the best 7B models. This enables strong natural language processing abilities like conversational intelligence to run efficiently on portable devices such as phones and laptops.
Compared to Stability AI’s previous Stable LM release, this new model boasts significantly improved performance on common benchmarks while maintaining fast execution speeds. Extensive training on high-quality data gives it general language abilities that surpass earlier models of similar size.
While Stable LM 3B is designed as a general-purpose model, it can also be fine-tuned for specialized applications in areas like customer service chatbots or coding assistants. This allows developers to cost-effectively customize the model for their own data and use cases.
Stability AI notes that Stable LM 3B is a base model requiring safety testing and tuning before full deployment. An instruction-fine-tuned version is currently undergoing evaluation, with plans to release it soon.
At just 3 billion parameters, Stable LM 3B strikes a balance between capability and efficiency that makes it viable for many real-world AI applications. Smaller models like this minimize computing requirements and energy consumption. Stability AI aims to make compact, customizable models the standard for auditable and trusted AI systems.
The initial version of Stable LM 3B is now available to try via download from Hugging Face. It is an intermediate release leading up to the full model, offered under the open CC BY-SA 4.0 license. Stable LM 3B represents an important step towards practical on-device AI that pairs strong performance with sustainability.
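For readers who want to try the model, here is a minimal sketch of loading it with the Hugging Face `transformers` library. The repository id `stabilityai/stablelm-3b-4e1t` is an assumption based on the announcement; check the Stability AI organization page on Hugging Face for the exact name before running it.

```python
# Minimal sketch: load Stable LM 3B from Hugging Face and generate text.
# The repo id below is assumed; verify it on the Stability AI org page.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stabilityai/stablelm-3b-4e1t"  # assumed repository id

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Downloads the tokenizer and weights on first call (~3B parameters).
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("The benefits of on-device language models include"))
```

Because the model is compact enough for laptops, this should run on consumer hardware, though generation speed will depend on whether a GPU is available.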