---
license: apache-2.0
library_name: vla-foundry
tags:
- foundry
- vla_foundry
- vlm
- image-text-to-text
---
# Foundry-VLM-1.3B-165M
A 1.3B parameter vision-language model trained on 165M image-caption samples, part of the [VLA Foundry](https://github.com/TRI-ML/vla_foundry) collection.
## Model Description
- **Architecture:** ViT encoder (12 layers, 768 hidden dim, patch size 14, pixel-shuffle 2x) + Transformer decoder (24 layers, 2048 hidden dim, 16 heads)
- **Parameters:** 1.3B (non-embedding)
- **Processor:** SmolVLM2
- **Training data:** 165M image-caption pairs from DataComp-DR-1B
- **LR schedule:** Warmup + constant (no decay)
- **LLM backbone:** Initialized from [Foundry-LLM-1.2B-800B](https://huggingface.co/TRI-ML/Foundry-LLM-1.2B-800B)
This is an earlier checkpoint of the Foundry VLM; it serves as the vision-language backbone for the Foundry-VLA-1.7B action models.
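For reference, the hyperparameters above map onto a configuration like the following. This is an illustrative sketch only: the class and function names (`FoundryVLMConfig`, `warmup_constant_lr`) are hypothetical, not the actual vla_foundry API.
```python
from dataclasses import dataclass

@dataclass
class FoundryVLMConfig:
    """Hypothetical config mirroring the hyperparameters listed above."""
    # Vision encoder (ViT)
    vit_layers: int = 12
    vit_hidden_dim: int = 768
    patch_size: int = 14
    pixel_shuffle_factor: int = 2  # 2x pixel shuffle quarters the visual token count
    # Language decoder (Transformer)
    decoder_layers: int = 24
    decoder_hidden_dim: int = 2048
    decoder_heads: int = 16

def warmup_constant_lr(step: int, warmup_steps: int, peak_lr: float) -> float:
    """Warmup + constant schedule: linear ramp to peak_lr, then hold (no decay)."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    return peak_lr
```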
## Evaluation Results
Image-captioning metrics on the COCO validation set:
| BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L | CIDEr |
|---|---|---|---|---|---|
| 57.25 | 37.12 | 23.23 | 14.44 | 37.13 | 50.17 |
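Scores of this kind are conventionally computed with the COCO captioning toolkit. A minimal sketch using `pycocoevalcap`, assuming you have already generated a `results.json` of model captions in COCO result format (file paths are placeholders; METEOR/SPICE additionally require a Java runtime):
```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Ground-truth COCO-val annotations and model-generated captions (placeholder paths)
coco = COCO("annotations/captions_val2017.json")
results = coco.loadRes("results.json")

evaluator = COCOEvalCap(coco, results)
# Restrict evaluation to the images actually present in the results file
evaluator.params["image_id"] = results.getImgIds()
evaluator.evaluate()

# Prints Bleu_1..Bleu_4, ROUGE_L, CIDEr, etc. on the 0-100 scale used above
for metric, score in evaluator.eval.items():
    print(f"{metric}: {100 * score:.2f}")
```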
## Usage
```bash
git clone https://github.com/TRI-ML/vla_foundry.git
cd vla_foundry
pip install -e .
```
```python
from vla_foundry.models.base_model import BaseModel

# Downloads the checkpoint from the Hugging Face Hub and loads the weights
model = BaseModel.from_pretrained("TRI-ML/Foundry-VLM-1.3B-165M")
```
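Building on the snippet above, a minimal captioning sketch. It assumes the repo ships SmolVLM2 processor files loadable via `transformers.AutoProcessor` and that `BaseModel` exposes a transformers-style `generate()`; check the vla_foundry documentation for the exact inference interface.
```python
import torch
from PIL import Image
from transformers import AutoProcessor
from vla_foundry.models.base_model import BaseModel

# Assumption: the model repo includes the SmolVLM2 processor configuration
processor = AutoProcessor.from_pretrained("TRI-ML/Foundry-VLM-1.3B-165M")
model = BaseModel.from_pretrained("TRI-ML/Foundry-VLM-1.3B-165M").eval()

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, text="Caption:", return_tensors="pt")

with torch.no_grad():
    # Assumption: transformers-style autoregressive generation
    output_ids = model.generate(**inputs, max_new_tokens=64)

print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```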
## Links
- **Project page:** [tri-ml.github.io/vla_foundry](https://tri-ml.github.io/vla_foundry/)
- **Paper:** [VLA Foundry (arXiv 2604.19728)](https://arxiv.org/abs/2604.19728)
- **Code:** [github.com/TRI-ML/vla_foundry](https://github.com/TRI-ML/vla_foundry)
- **Collection:** [VLA Foundry collection](https://huggingface.co/collections/TRI-ML/vla-foundry)