
AI21 Labs Breaks New Ground with 'Jamba': The Pioneering Hybrid SSM-Transformer Large Language Model

In an era where the demand for smarter, faster, and more efficient artificial intelligence (AI) solutions continues to rise, AI21 Labs’ unveiling of Jamba marks a significant leap forward. Jamba, a pioneering SSM-Transformer model, heralds a new chapter in AI technology by melding the Mamba Structured State Space model (SSM) with the proven efficiency of the traditional Transformer architecture, setting a new benchmark for performance and efficiency in large language models (LLMs).

At the heart of Jamba lies an integrated blend of Mamba and Transformer architectures designed to address the inherent limitations of each system while leveraging their strengths. Unlike conventional models predominantly based on the Transformer architecture—such as GPT, Gemini, and Llama—Jamba introduces a hybrid approach. It features a remarkable context window of 256K tokens, equivalent to around 210 pages of text, and can fit up to 140K tokens on a single 80GB GPU. This capability far exceeds current standards such as Meta’s Llama 2, whose context window is 4,096 tokens.

Jamba’s hybrid architecture combines Transformer, Mamba, and mixture-of-experts (MoE) layers, optimizing memory, throughput, and performance. Its mixture-of-experts layers activate only 12B of the model’s 52B total parameters during inference, allowing for increased efficiency without sacrificing the model’s power or speed.
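The active-parameter figure follows from how MoE routing works: each token is sent to only a small subset of experts, so only a fraction of the total weights participates in any single forward pass. Below is a minimal, hypothetical sketch of a top-2 MoE feed-forward layer in PyTorch to illustrate the mechanism; the expert count, hidden sizes, and routing details are assumptions for illustration, not Jamba’s actual configuration.

```python
# Illustrative top-2 mixture-of-experts feed-forward layer (a sketch, not
# AI21's implementation; sizes and expert count are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=4096, d_ff=14336, n_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (batch, seq, d_model)
        scores = self.router(x)                           # (batch, seq, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)    # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out
```

Because each token passes through only 2 of the 16 expert feed-forward networks in this sketch, most expert weights sit idle on any given step, which is the same principle behind Jamba using roughly 12B of its 52B parameters at inference time.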

One of the most significant advantages of Jamba is its ability to deliver three times the throughput on long contexts when compared to Transformer-based models of a similar size, such as Mixtral 8x7B. This efficiency is made possible through its unique architectural composition, which includes a mix of attention, Mamba, and MoE layers. This structure not only enhances the model’s performance but also ensures high throughput and memory optimization.

Moreover, Jamba’s architecture follows a blocks-and-layers approach in which each attention or Mamba layer is followed by a multi-layer perceptron (MLP), and the ratio of attention to Mamba layers is chosen to maximize quality and throughput on a single GPU. This approach allows common inference workloads to be served without running into memory constraints.
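One way to picture this blocks-and-layers layout is as a repeating schedule that interleaves a single attention layer among several Mamba layers, alternating dense MLPs with MoE sub-layers. The sketch below is purely illustrative: the 1:7 attention-to-Mamba ratio, the eight-layer block size, and the "MoE every other layer" cadence are assumptions used for demonstration, not AI21’s confirmed configuration.

```python
# Sketch of a blocks-and-layers schedule: one attention layer per block of
# Mamba layers, with MoE and dense MLP sub-layers alternating. All ratios
# here are illustrative assumptions.

def build_layer_schedule(n_blocks=4, layers_per_block=8, attn_position=7, moe_every=2):
    """Return (sequence_mixer, channel_mixer) tags for every layer in the stack."""
    schedule = []
    for _ in range(n_blocks):
        for i in range(layers_per_block):
            mixer = "attention" if i == attn_position else "mamba"
            ffn = "moe" if i % moe_every == 1 else "mlp"
            schedule.append((mixer, ffn))
    return schedule

if __name__ == "__main__":
    for idx, (mixer, ffn) in enumerate(build_layer_schedule(n_blocks=1)):
        print(f"layer {idx}: {mixer:9s} + {ffn}")
```

Keeping attention sparse in the stack is what lets most of the sequence mixing run through Mamba layers, whose memory cost does not grow with a key-value cache the way attention does, which is the intuition behind the long-context and single-GPU claims above.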

Open Access and Future Prospects

AI21 Labs has released Jamba with open weights under the Apache 2.0 license, making it available on Hugging Face and soon on the NVIDIA API catalog as an NVIDIA NIM inference microservice. This move not only democratizes access to Jamba’s advanced capabilities but also invites the AI community to explore, refine, and build upon this innovative architecture.
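Because the weights are published on Hugging Face, the checkpoint can be loaded with the standard transformers API. The snippet below is a minimal sketch: it assumes the repository id ai21labs/Jamba-v0.1 and a transformers release with Jamba support (older releases may require trust_remote_code=True), and it presumes enough GPU memory or quantized loading for the full 52B-parameter model.

```python
# Minimal sketch: loading the open-weights Jamba checkpoint from Hugging Face.
# Assumes the repo id "ai21labs/Jamba-v0.1" and a transformers version that
# supports Jamba; a large GPU or quantized loading is needed for the full model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"          # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",                   # keep the checkpoint's native precision
    device_map="auto",                    # spread layers across available devices
)

inputs = tokenizer("Structured state space models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```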

Although currently released as a research model without the necessary safeguards for commercial use, AI21 Labs plans to unveil a fine-tuned, safer version in the coming weeks. This progression underscores the industry’s commitment to enhancing AI’s performance, efficiency, and accessibility, paving the way for the next generation of AI models.

Jamba’s introduction by AI21 Labs represents not only a technical milestone but also a shift towards more accessible, efficient, and powerful AI models. As the AI community continues to evolve, the principles and innovations behind Jamba will undoubtedly influence future developments in AI technology.

Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
