Part of the **VLA Foundry** collection: pretrained LLM, VLM, and VLA checkpoints.
A 1.7B-parameter vision-language-action model for bimanual robotic manipulation, part of the VLA Foundry collection. Trained on both simulated and real-world manipulation data.
Success rates on 16 seen tasks and 3 unseen tasks (200 rollouts per task):
| Simulator | Seen (16 tasks) | Unseen (3 tasks) |
|---|---|---|
| CS | 54.0% | 4.7% |
| OSS | 34.1% | 7.5% |
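The aggregate numbers above are mean success rates over a fixed rollout budget per task. A minimal sketch of that computation (the task count and per-task success counts below are illustrative placeholders, not the benchmark's actual data):

```python
# Aggregate success rate across tasks, each evaluated over a fixed rollout budget.
# The success counts used in the example are illustrative, not the benchmark's data.
ROLLOUTS_PER_TASK = 200

def success_rate(per_task_successes, rollouts_per_task=ROLLOUTS_PER_TASK):
    """Fraction of successful rollouts pooled over all tasks."""
    total_successes = sum(per_task_successes)
    total_rollouts = len(per_task_successes) * rollouts_per_task
    return total_successes / total_rollouts

# Example: three hypothetical tasks with 120, 90, and 114 successful rollouts
# out of 200 each -> 324 / 600 = 54.0%.
print(f"{success_rate([120, 90, 114]):.1%}")  # → 54.0%
```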
To install from source:

```shell
git clone https://github.com/TRI-ML/vla_foundry.git
cd vla_foundry
pip install -e .
```
To load the pretrained checkpoint:

```python
from vla_foundry.models.base_model import BaseModel

# Downloads the released weights from the Hugging Face Hub and
# instantiates the 1.7B bimanual manipulation policy.
model = BaseModel.from_pretrained("TRI-ML/Foundry-VLA-1.7B-full")
```