Foundry-Qwen3VLA-2.1B

A 2.1B parameter vision-language-action model for bimanual robotic manipulation, part of the VLA Foundry collection. Trained on both simulated and real-world manipulation data.

Model Description

  • Architecture: Qwen3-VL-2B-Thinking vision-language backbone + flow-matching diffusion action head (24 layers, 1024 hidden dim, 16 heads), conditioned on the backbone's last hidden layer
  • Parameters: 2.1B (non-embedding)
  • Action space: 20-dim relative actions (per arm: xyz + 6D rotation + gripper; 2 arms × 10 dims)
  • Cameras: 4 views (2 scene + 2 wrist)
  • Training data: 100M samples from simulated and real-world bimanual manipulation tasks (with resampling)
  • VLM backbone: Qwen3-VL-2B-Thinking
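At inference time, a flow-matching head like the one above generates an action by integrating a learned velocity field from Gaussian noise to a sample. The sketch below shows that loop with plain Euler steps and a stand-in velocity function; the real head is the 24-layer transformer conditioned on the VLM features, and the step count here is illustrative, not from the card.

```python
import numpy as np

def sample_action(velocity_fn, action_dim=20, steps=10):
    """Euler-integrate a flow-matching velocity field from Gaussian
    noise at t=0 to an action sample at t=1."""
    x = np.random.randn(action_dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)
    return x

# Stand-in for the learned velocity head: a field that contracts toward 0.
toy_velocity = lambda x, t: -x

np.random.seed(0)
action = sample_action(toy_velocity)  # shape (20,)
```

Swapping `toy_velocity` for a trained network (which would also take the vision-language embedding as conditioning) recovers the usual flow-matching sampler.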

Evaluation Results

Success rates on 16 seen tasks and 3 unseen tasks (200 rollouts per task):

  Simulator   Seen (16 tasks)   Unseen (3 tasks)
  CS          82.9%             15.0%
  OSS         72.4%             13.5%
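At 200 rollouts per task, individual success rates carry nontrivial sampling noise. As a rough gauge (not reported in the card), a standard Wilson 95% interval for a single task scored at 82.9% over 200 rollouts spans roughly 77% to 87%:

```python
import math

def wilson_interval(p_hat, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(0.829, 200)  # ≈ (0.77, 0.87)
```

Differences of a few points between rows are therefore within noise at the per-task level; the aggregate rates over all 16 tasks are correspondingly tighter.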

Usage

Install from source:

  git clone https://github.com/TRI-ML/vla_foundry.git
  cd vla_foundry
  pip install -e .

Load the pretrained checkpoint:

  from vla_foundry.models.base_model import BaseModel

  model = BaseModel.from_pretrained("TRI-ML/Foundry-Qwen3VLA-2.1B")
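The model's 20-dim relative actions include a 6D rotation per arm. Assuming the common per-arm layout [xyz (3), 6D rotation (6), gripper (1)] with the two arms stacked — this split is not spelled out in the card — here is a minimal numpy sketch decoding the 6D representation back to a rotation matrix via Gram-Schmidt:

```python
import numpy as np

def rot6d_to_matrix(r6):
    """Gram-Schmidt decode of a 6D rotation representation
    (Zhou et al., 2019) into a 3x3 rotation matrix."""
    a1, a2 = np.asarray(r6[:3], float), np.asarray(r6[3:6], float)
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1
    b2 /= np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3])  # rows: orthonormal, right-handed basis

# Hypothetical 20-dim action split: [left arm (10), right arm (10)].
np.random.seed(0)
action = np.random.randn(20)
left = action[:10]
R_left = rot6d_to_matrix(left[3:9])
```

Since the actions are relative, the decoded rotation would be composed with (and the xyz delta added to) the current end-effector pose by the downstream controller.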

Links


  • Code: https://github.com/TRI-ML/vla_foundry
  • Collection: VLA Foundry
  • Paper: accompanying VLA Foundry paper