Foundry-VLA-1.7B-full

A 1.7B parameter vision-language-action model for bimanual robotic manipulation, part of the VLA Foundry collection. Trained on both simulated and real-world manipulation data.

Model Description

  • Architecture: Foundry-VLM-1.3B vision-language backbone + flow-matching diffusion action head (24 layers, 1024 hidden dim, 16 heads), conditioned on the backbone's last 4 layers
  • Parameters: 1.7B (non-embedding)
  • Action space: 20-dim relative actions (bimanual xyz + 6D rotation + gripper)
  • Cameras: 4 views (2 scene + 2 wrist)
  • Training data: 102M samples from simulated and real-world bimanual manipulation tasks (with resampling)
  • VLM backbone: Foundry-VLM-1.3B-200M
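To make the action space above concrete, the sketch below splits a 20-dim bimanual action into its per-arm parts and converts the 6D rotation component to a rotation matrix via Gram-Schmidt (the standard continuity-friendly 6D parameterization). The exact per-arm ordering (xyz + 6D rotation + gripper, left arm first) is an assumption, not confirmed by the card:

```python
import numpy as np

def rot6d_to_matrix(r6: np.ndarray) -> np.ndarray:
    """Convert a 6D rotation (the first two columns of a rotation matrix)
    to a full 3x3 rotation matrix via Gram-Schmidt orthonormalization."""
    a1, a2 = r6[:3], r6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)

def split_action(action: np.ndarray) -> dict:
    """Split a 20-dim bimanual action into per-arm components.
    Layout assumed: 10 dims per arm = xyz (3) + 6D rotation (6) + gripper (1)."""
    assert action.shape == (20,)
    out = {}
    for name, chunk in zip(("left", "right"), (action[:10], action[10:])):
        out[name] = {
            "xyz": chunk[:3],
            "rotation": rot6d_to_matrix(chunk[3:9]),
            "gripper": chunk[9],
        }
    return out
```

Note that 2 arms × (3 + 6 + 1) dims accounts for the full 20-dim action vector.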

Evaluation Results

Success rates on 16 seen tasks and 3 unseen tasks (200 rollouts per task):

| Simulator | Seen (16 tasks) | Unseen (3 tasks) |
|-----------|-----------------|------------------|
| CS        | 54.0%           | 4.7%             |
| OSS       | 34.1%           | 7.5%             |
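With 200 rollouts per task, each reported success rate carries binomial sampling noise; a quick sketch of aggregating per-task success counts into a mean rate with a standard error (the per-task counts below are illustrative, not the model's actual results):

```python
import math

def success_rate(successes: list[int], rollouts_per_task: int = 200):
    """Pool per-task success counts into an overall success rate and a
    binomial standard error sqrt(p * (1 - p) / n)."""
    n = len(successes) * rollouts_per_task
    k = sum(successes)
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se

# e.g. 3 unseen tasks at 200 rollouts each (illustrative counts)
p, se = success_rate([12, 8, 10], rollouts_per_task=200)
print(f"{p:.1%} +/- {se:.1%}")  # 5.0% +/- 0.9%
```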

Usage

```shell
git clone https://github.com/TRI-ML/vla_foundry.git
cd vla_foundry
pip install -e .
```

```python
from vla_foundry.models.base_model import BaseModel

model = BaseModel.from_pretrained("TRI-ML/Foundry-VLA-1.7B-full")
```
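Loading the model is only the first step; a closed-loop rollout would repeatedly feed the 4 camera views to the policy and apply the predicted 20-dim relative action. The sketch below uses a stub policy because `predict_action`, its signature, the camera keys, and the instruction string are all assumptions, not the repository's documented API:

```python
import numpy as np

class StubPolicy:
    """Stand-in for the loaded model; predict_action is hypothetical --
    check the vla_foundry repository for the real inference interface."""
    def predict_action(self, images: dict, instruction: str) -> np.ndarray:
        return np.zeros(20)  # 20-dim relative bimanual action

def control_loop(policy, get_obs, apply_action, steps=10):
    """Generic closed-loop rollout: observe the cameras, query the policy,
    apply the relative action to the robot."""
    for _ in range(steps):
        images = get_obs()  # 2 scene + 2 wrist views
        action = policy.predict_action(images, "fold the towel")
        apply_action(action)

# minimal dry run with stubbed observations and a recording "robot"
get_obs = lambda: {k: np.zeros((224, 224, 3), np.uint8)
                   for k in ("scene_0", "scene_1", "wrist_left", "wrist_right")}
actions = []
control_loop(StubPolicy(), get_obs, actions.append, steps=3)
```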
