VLA Foundry collection: pretrained LLM, VLM, and VLA checkpoints.
A 1.3B-parameter vision-language model trained on 200M image-caption samples, part of the VLA Foundry collection. It continues Foundry-VLM-1.3B-165M with an additional 35M samples of cosine-decayed training, and serves as the vision-language backbone for the Foundry-VLA-1.7B action models.
Captioning results on the COCO validation set:
| BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L | CIDEr |
|---|---|---|---|---|---|
| 58.64 | 38.62 | 24.49 | 15.57 | 38.17 | 55.14 |
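For context on the BLEU columns above: BLEU-n is a modified n-gram precision scaled by a brevity penalty that penalizes captions shorter than the reference. A minimal BLEU-1 (unigram) sketch in plain Python, independent of any evaluation toolkit, illustrates the computation:

```python
from collections import Counter
import math

def bleu1(candidate: str, references: list[str]) -> float:
    """Sentence-level BLEU-1: clipped unigram precision times brevity penalty."""
    cand = candidate.split()
    # Clip each candidate token's count by its maximum count in any reference.
    max_ref = Counter()
    for ref in references:
        for tok, n in Counter(ref.split()).items():
            max_ref[tok] = max(max_ref[tok], n)
    clipped = sum(min(n, max_ref[tok]) for tok, n in Counter(cand).items())
    precision = clipped / max(len(cand), 1)
    # Brevity penalty against the reference length closest to the candidate's.
    ref_len = min((len(r.split()) for r in references),
                  key=lambda rl: (abs(rl - len(cand)), rl))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * precision

print(bleu1("a dog runs in the park",
            ["a dog is running in the park"]))
```

The reported scores are corpus-level (precision and brevity penalty aggregated over all of COCO-val, not averaged per sentence), but the per-sentence version above shows the same two ingredients.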
Install from source:

```shell
git clone https://github.com/TRI-ML/vla_foundry.git
cd vla_foundry
pip install -e .
```
Load the checkpoint:

```python
from vla_foundry.models.base_model import BaseModel

model = BaseModel.from_pretrained("TRI-ML/Foundry-VLM-1.3B-200M")
```