Tags: Translation, GGUF, open4bits, sarvam
How to use from Docker Model Runner
docker model run hf.co/Open4bits/sarvam-translate-GGUF:
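A quantization is selected by appending its tag after the colon. The Q4_K_M tag and the prompt below are illustrative assumptions; use whichever tags this repo actually publishes:

docker model pull hf.co/Open4bits/sarvam-translate-GGUF:Q4_K_M
docker model run hf.co/Open4bits/sarvam-translate-GGUF:Q4_K_M "Translate to Hindi: How are you?"

Without a prompt argument, docker model run should open an interactive chat session with the model.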
Downloads last month: 144
Format: GGUF
Model size: 4B params
Architecture: gemma3

Quantizations: 2-bit, 3-bit, 4-bit

Model tree for Open4bits/sarvam-translate-GGUF
Quantized (6): this model
