# Shrutam-2: LLM-Powered Multilingual Indic Speech Recognition
Shrutam-2 is an LLM-based automatic speech recognition (ASR) system for 12 major Indian languages. It bridges a Conformer speech encoder with a pretrained LLM decoder through a Mixture-of-Experts (MoE) projection layer, enabling high-quality, prompt-controllable transcription across diverse Indic languages.
## Architecture Overview
Unlike conventional CTC/Attention ASR systems that map audio directly to text tokens, Shrutam-2 reframes speech recognition as a conditional language generation task. A speech encoder produces frame-level audio representations, which are then projected into the LLM's embedding space and fed to a frozen LLM decoder alongside a text prompt.
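This framing can be sketched as follows. This is a minimal illustration with assumed dimensions and a dummy linear projector, not the released implementation: projected audio embeddings are prepended to the prompt embeddings and the concatenation is fed to the frozen LLM decoder.

```python
import torch
import torch.nn as nn

def build_llm_inputs(audio_feats, projector, prompt_embeds):
    """audio_feats: (B, T, enc_dim); prompt_embeds: (B, L, llm_dim)."""
    audio_embeds = projector(audio_feats)                   # (B, T, llm_dim)
    # Concatenate along the sequence axis; the frozen LLM decodes text
    # conditioned on both the audio embeddings and the prompt.
    return torch.cat([audio_embeds, prompt_embeds], dim=1)

# Usage with dummy tensors (enc_dim=512 and llm_dim=4096 are assumptions)
projector = nn.Linear(512, 4096)
audio = torch.randn(1, 100, 512)       # 100 audio frames
prompt = torch.randn(1, 12, 4096)      # 12 prompt tokens
inputs = build_llm_inputs(audio, projector, prompt)   # (1, 112, 4096)
```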
The key architectural contribution is the MoE Projector that bridges the encoder and the LLM:
| Component | Details |
|---|---|
| Downsampler | Two-stage Conv1D that reduces the encoder frame rate for efficient LLM consumption |
| MoE Projector | 8 linear experts with SMEAR (Soft Merging of Experts with Adaptive Routing): utterance-level soft gating computes a weighted merge of all expert parameters into a single projector per input, avoiding discrete top-k routing and its associated load-balancing issues |
Each expert is a two-layer MLP (encoder_dim → 2048 → llm_dim). Rather than routing each frame to a single expert, SMEAR computes frame-wise router probabilities, averages them at the utterance level, and produces a single merged weight matrix per utterance. This yields a smooth, fully differentiable routing mechanism with a simple MSE-based load-balancing loss.
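The soft-merging mechanism described above can be sketched in PyTorch. This is an illustrative reconstruction under stated assumptions (GELU activation, initialization, and the exact load-balancing target are guesses), not the released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SMEARProjector(nn.Module):
    """Sketch of SMEAR-style soft expert merging (names and details assumed)."""
    def __init__(self, enc_dim, hidden_dim, llm_dim, num_experts=8):
        super().__init__()
        self.router = nn.Linear(enc_dim, num_experts)
        # Each expert is a two-layer MLP: enc_dim -> hidden_dim -> llm_dim
        self.w1 = nn.Parameter(torch.randn(num_experts, enc_dim, hidden_dim) * 0.02)
        self.b1 = nn.Parameter(torch.zeros(num_experts, hidden_dim))
        self.w2 = nn.Parameter(torch.randn(num_experts, hidden_dim, llm_dim) * 0.02)
        self.b2 = nn.Parameter(torch.zeros(num_experts, llm_dim))

    def forward(self, x):  # x: (batch, frames, enc_dim)
        probs = F.softmax(self.router(x), dim=-1)   # frame-wise router probabilities
        gate = probs.mean(dim=1)                    # utterance-level average: (B, E)
        # Soft-merge all expert parameters into one projector per utterance
        w1 = torch.einsum('be,eij->bij', gate, self.w1)
        b1 = torch.einsum('be,ej->bj', gate, self.b1)
        w2 = torch.einsum('be,eij->bij', gate, self.w2)
        b2 = torch.einsum('be,ej->bj', gate, self.b2)
        h = F.gelu(torch.bmm(x, w1) + b1.unsqueeze(1))
        out = torch.bmm(h, w2) + b2.unsqueeze(1)    # (B, frames, llm_dim)
        # Simple MSE load-balancing loss pulling the gate toward uniform
        # (the exact form used in training is an assumption)
        lb_loss = F.mse_loss(gate, torch.full_like(gate, 1.0 / gate.size(-1)))
        return out, lb_loss
```

Because the merge happens in parameter space rather than by selecting a top-k subset, every expert receives gradient on every utterance, which is what makes the routing fully differentiable.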
## Why LLM-Based ASR?
Traditional ASR pipelines rely on acoustic models trained exclusively on speech-text pairs. By grounding transcription in a pretrained LLM, this approach gains several advantages:
- **Rich linguistic priors:** The LLM's language knowledge reduces hallucinations and improves fluency, especially for low-resource languages.
- **Prompt controllability:** Transcription behavior can be steered through natural-language prompts without retraining.
- **Unified multilingual capacity:** A single model serves all 12 languages, with the MoE layer learning language-adaptive projections.
## Languages Supported
| # | Language | Script | ISO 639-1 |
|---|---|---|---|
| 1 | Hindi | Devanagari | hi |
| 2 | Marathi | Devanagari | mr |
| 3 | Tamil | Tamil | ta |
| 4 | Telugu | Telugu | te |
| 5 | Malayalam | Malayalam | ml |
| 6 | Kannada | Kannada | kn |
| 7 | Odia | Odia | or |
| 8 | Bengali | Bengali | bn |
| 9 | Urdu | Nastaliq | ur |
| 10 | Assamese | Bengali | as |
| 11 | Gujarati | Gujarati | gu |
| 12 | Punjabi | Gurmukhi | pa |
## Usage
### Requirements
```bash
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu118
pip install transformers==4.56.3 huggingface_hub==0.36.0 pyyaml
```
### Quick Start
Update `inference_config.yaml` with your model paths (see Configuration below), then run:
```bash
python inference_script.py
```
The script loads the full pipeline (encoder, MoE projector, LLM), transcribes the audio file, and prints the output text.
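For orientation, a hypothetical `inference_config.yaml` might look like the following. All field names here are illustrative guesses, not the actual schema shipped with the model; consult the bundled config for the real keys.

```yaml
# Hypothetical layout -- field names are assumptions, not the real schema
encoder_path: /path/to/conformer_encoder.pt
projector_path: /path/to/moe_projector.pt
llm_path: /path/to/llm
audio_path: /path/to/audio.wav
prompt: "Transcribe the audio."
device: cuda
```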
## License
This model is released under the BharatGen non-commercial license. Please refer to the LICENSE file for detailed terms and conditions.
For more details about the model, see the paper: https://arxiv.org/abs/2601.19451