Tags: PyTorch · Sanskrit · llama

This repository is publicly accessible, but you need to agree to share your contact information and accept the conditions to access its files and content.


Description

Paramanu-Sanskrit is a 139 million-parameter, open-source, monolingual Sanskrit decoder-only autoregressive language model.

It is pretrained from scratch on an open-source Sanskrit corpus with a context size of 1024 tokens.

It is neither chat-tuned nor fine-tuned; we recommend fine-tuning or chat-tuning it on Sanskrit datasets (e.g., with PyTorch) before downstream use.
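As a starting point for fine-tuning, the corpus must be packed into fixed-length blocks matching the model's 1024-token pretraining context. The sketch below is illustrative only (not the authors' pipeline); the token ids are placeholders for real tokenizer output, and `pack_into_blocks` is a hypothetical helper name.

```python
# Minimal sketch (not the authors' code): packing a tokenized Sanskrit
# corpus into fixed-length blocks for causal-LM fine-tuning.
# CONTEXT_SIZE matches the model's pretraining context of 1024 tokens;
# the token ids below are placeholders, not real tokenizer output.

CONTEXT_SIZE = 1024

def pack_into_blocks(token_ids, block_size=CONTEXT_SIZE):
    """Concatenate token ids and split into full blocks.

    For causal language modeling the labels are the inputs shifted by
    one position, so each example carries (input_ids, labels).
    """
    blocks = []
    for start in range(0, len(token_ids) - block_size, block_size):
        chunk = token_ids[start:start + block_size + 1]  # +1 for the shifted target
        blocks.append({
            "input_ids": chunk[:-1],
            "labels": chunk[1:],
        })
    return blocks

# Toy usage with dummy ids; a real run would use the model's tokenizer.
corpus_ids = list(range(5000))
examples = pack_into_blocks(corpus_ids)
print(len(examples))                  # number of full blocks
print(len(examples[0]["input_ids"]))  # 1024
```

Any standard causal-LM training loop can then consume these `(input_ids, labels)` pairs directly.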

Commercial use is prohibited.

If you use our model, please cite our paper: Niyogi et al., 2026.

Model Architecture

Decoder-only autoregressive Transformer model.
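Autoregressive decoding means the model repeatedly predicts the next token from the tokens generated so far, within its 1024-token context window. The toy sketch below illustrates that loop only; `next_token` is a deterministic stand-in for a real Transformer forward pass plus argmax over logits, not the actual model.

```python
# Illustrative sketch of decoder-only autoregressive (greedy) generation.
# next_token is a toy stand-in for a Transformer decoder forward pass;
# the sliding window mirrors the model's 1024-token context limit.

CONTEXT_SIZE = 1024

def next_token(prefix):
    """Toy 'model': a deterministic rule over a placeholder
    vocabulary of 100 ids, standing in for logits + argmax."""
    return (sum(prefix) + len(prefix)) % 100

def generate(prompt_ids, max_new_tokens):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        window = ids[-CONTEXT_SIZE:]    # the model only sees the last 1024 tokens
        ids.append(next_token(window))  # feed its own output back in
    return ids

out = generate([1, 2, 3], max_new_tokens=5)
print(out)
```

With the real model, `next_token` would be replaced by a forward pass over the prefix and a sampling or argmax step over the vocabulary logits.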

Limitations

The model was trained on data containing toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, it may amplify these biases and produce toxic responses, especially when prompted with toxic inputs. The model may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, potentially producing socially unacceptable or undesirable content, even when the prompt itself is not explicitly offensive.

Citations

@misc{niyogi2026paramanucompactcompetitivemonolingual,
      title={Paramanu: Compact and Competitive Monolingual Language Models for Low-Resource Morphologically Rich Indian Languages}, 
      author={Mitodru Niyogi and Eric Gaussier and Arnab Bhattacharya},
      year={2026},
      eprint={2401.18034},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2401.18034}, 
}