# Foundry-LLM-1.2B-800B

Part of the **VLA Foundry** collection (8 items) of pretrained LLM, VLM, and VLA checkpoints.
A 1.2B-parameter language model pretrained on 800B tokens, part of the VLA Foundry model collection. It is an earlier checkpoint of the Foundry LLM and serves as the language backbone for the downstream VLM and VLA models.
Multiple-choice reasoning benchmark scores:

| Benchmark | HellaSwag | MMLU | ARC-e | ARC-c | PIQA | WinoGrande | OpenBookQA | BoolQ |
|---|---|---|---|---|---|---|---|---|
| Score | 64.3 | 26.0 | 70.3 | 37.0 | 75.8 | 60.9 | 40.0 | 63.2 |
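Benchmarks like these are typically scored by having the model rank the answer choices for each question, picking the choice with the highest (often length-normalized) log-likelihood, and reporting the fraction answered correctly. A minimal sketch of that scoring loop, with toy stand-in log-likelihoods rather than real model outputs:

```python
def pick_answer(loglikelihoods, lengths):
    """Select the choice with the highest length-normalized log-likelihood."""
    scores = [ll / n for ll, n in zip(loglikelihoods, lengths)]
    return max(range(len(scores)), key=scores.__getitem__)

def accuracy(predictions, gold):
    """Fraction of questions where the predicted choice matches the gold label."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Toy example: 3 questions, each with per-choice (log-likelihood, token-length)
# pairs and a gold answer index. All numbers here are illustrative.
questions = [
    ([(-12.0, 4), (-8.7, 3), (-20.0, 5)], 1),
    ([(-5.0, 2), (-8.0, 2), (-7.0, 2)], 0),
    ([(-30.0, 10), (-14.0, 4), (-10.0, 5)], 2),
]
preds = [pick_answer([ll for ll, _ in ch], [n for _, n in ch]) for ch, _ in questions]
gold = [g for _, g in questions]
print(accuracy(preds, gold))  # → 1.0
```

Length normalization keeps longer answer strings from being unfairly penalized for accumulating more per-token log-probability mass.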
Install from source:

```shell
git clone https://github.com/TRI-ML/vla_foundry.git
cd vla_foundry
pip install -e .
```
Load the pretrained checkpoint:

```python
from vla_foundry.models.base_model import BaseModel

# Downloads the checkpoint from the Hub and loads the weights.
model = BaseModel.from_pretrained("TRI-ML/Foundry-LLM-1.2B-800B")
```