# vla_foundry

Part of the **VLA Foundry** collection: pretrained LLM, VLM, and VLA checkpoints.
A 1.7B-parameter vision-language-action model for bimanual robotic manipulation, part of the VLA Foundry collection. Trained exclusively on simulated manipulation data.
Success rates on 16 seen tasks and 3 unseen tasks (200 rollouts per task):
| Simulator | Seen (16 tasks) | Unseen (3 tasks) |
|---|---|---|
| CS | 60.3% | 8.2% |
| OSS | 41.0% | 11.7% |
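To make the evaluation scale concrete, the percentages above can be converted into approximate successful-rollout counts, given 200 rollouts per task. This is a small sketch using only the numbers from the table; it is illustrative arithmetic, not part of the evaluation code.

```python
rollouts_per_task = 200
seen_tasks, unseen_tasks = 16, 3

# Success rates from the table above, per simulator.
rates = {
    "CS":  {"seen": 0.603, "unseen": 0.082},
    "OSS": {"seen": 0.410, "unseen": 0.117},
}

for sim, r in rates.items():
    # Approximate number of successful rollouts out of the total attempted.
    seen_successes = round(r["seen"] * seen_tasks * rollouts_per_task)
    unseen_successes = round(r["unseen"] * unseen_tasks * rollouts_per_task)
    print(f"{sim}: ~{seen_successes}/{seen_tasks * rollouts_per_task} seen, "
          f"~{unseen_successes}/{unseen_tasks * rollouts_per_task} unseen")
```

For example, the 60.3% seen-task rate for CS corresponds to roughly 1,930 successes out of 3,200 rollouts.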
Install from source:

```bash
git clone https://github.com/TRI-ML/vla_foundry.git
cd vla_foundry
pip install -e .
```
Load the checkpoint:

```python
from vla_foundry.models.base_model import BaseModel

# Download and load the pretrained checkpoint from the Hugging Face Hub.
model = BaseModel.from_pretrained("TRI-ML/Foundry-VLA-1.7B-sim")
```