# Qwen3-4B-tau2-grpo-v1

A 4B-parameter model fine-tuned for multi-turn tool-use tasks, achieving 59% Pass@4 on the tau2-bench test split. This is roughly a 4x improvement over the base model (14.3% -> 59.0% Pass@4) and demonstrates that a progressive training pipeline (SFT -> RFT -> GRPO) works effectively for complex, multi-turn agent tasks.
## Model Description
This model was trained using a three-stage pipeline:
- SFT (Supervised Fine-Tuning): Learning protocol and tool schemas from successful trajectories
- RFT (Rejection Fine-Tuning): Concentrating training on high-quality rollouts via rejection sampling
- GRPO (Group Relative Policy Optimization): Reinforcement learning with turn-level reward shaping
The training process is documented in the tau2 training cookbook.
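As background, the core of the GRPO stage is a group-relative advantage: each rollout's reward is compared against the other rollouts sampled for the same prompt, so no separate value model is needed. A minimal sketch of that computation (illustrative only, not the SLIME training code):

```python
# Illustrative sketch of GRPO's group-relative advantage.
# For one prompt, sample a group of rollouts, score each one, then
# normalize each reward against the group mean and standard deviation.
def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Two of four rollouts solved the task: successes get positive advantage.
print([round(a, 3) for a in group_relative_advantages([1.0, 0.0, 0.0, 1.0])])
# [1.0, -1.0, -1.0, 1.0]
```

Because advantages are centered within each group, the policy is pushed toward the better rollouts for that specific prompt rather than toward an absolute reward scale.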
## Performance

### tau2-bench Test Split (Pass@4 evaluation)
| Domain | Pass@1 | Pass@4 | Tasks |
|---|---|---|---|
| Overall | 36.0% | 59.0% | 100 |
| Airline | 15.0% | 45.0% | 20 |
| Retail | 55.0% | 85.0% | 40 |
| Telecom | 27.5% | 40.0% | 40 |
### Training Progression
| Stage | Overall Pass@4 |
|---|---|
| Baseline (Qwen3-4B-Instruct) | 14.3% |
| SFT + RFT | 27.0% |
| GRPO (this model) | 59.0% |
Eval config: `temperature=0.8`, `top_p=1.0`, `top_k=20`, `num_samples=4`, `TAU2_USER_MODEL=gpt-4.1-mini`, `TAU2_USER_TEMPERATURE=0.7`, `TAU2_MAX_STEPS=100`.
## Usage

### With SGLang (recommended for evaluation)
```shell
# Start the server (use --tp 1 for a single GPU)
python -m sglang.launch_server \
  --model-path Jarrodbarnes/Qwen3-4B-tau2-grpo-v1 \
  --host 0.0.0.0 --port 30000 --tp 2 --mem-fraction-static 0.70
```
### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Jarrodbarnes/Qwen3-4B-tau2-grpo-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Generate a response via the chat template
messages = [{"role": "user", "content": "I need to change my flight."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Function Calling Format
This model uses Qwen3 native function calling format:
```
<tool_call>{"name": "tool_name", "arguments": {"arg": "value"}}</tool_call>
```
Include </tool_call> in stop sequences for proper parsing.
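A minimal parser for this format might look like the following; the regex and the `parse_tool_calls` helper are illustrative, not part of the model's tooling:

```python
import json
import re

# Match Qwen3-style <tool_call>...</tool_call> blocks and capture the JSON payload.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text: str) -> list[dict]:
    """Return the decoded JSON payload of each <tool_call> block in `text`."""
    return [json.loads(payload) for payload in TOOL_CALL_RE.findall(text)]

output = '<tool_call>{"name": "get_order", "arguments": {"order_id": "W123"}}</tool_call>'
calls = parse_tool_calls(output)
print(calls[0]["name"])  # get_order
```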
## Training Details
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Training framework: SLIME (Megatron-LM + SGLang)
- SFT data: tau2-sft-seed-v3
- GRPO steps: 21 optimizer steps
- Reward shaping: Turn-level partial scores from tau2-bench reward_info
- User simulator (training): Local Qwen3-4B-Instruct on port 30001
- User simulator (eval): GPT-4.1-mini via OpenAI API
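One way turn-level partial scores could be folded into a single rollout reward is sketched below; the `shaped_reward` helper and its 0.5 weighting are assumptions for illustration, not the actual training configuration:

```python
# Illustrative sketch (not the exact training code): blend tau2-bench's
# per-turn partial scores with the binary end-of-task outcome, so a rollout
# that fails overall still receives credit for correct intermediate turns.
def shaped_reward(turn_scores: list[float], final_success: bool,
                  turn_weight: float = 0.5) -> float:
    """Weighted mix of mean turn-level partial credit and task success."""
    partial = sum(turn_scores) / len(turn_scores) if turn_scores else 0.0
    return turn_weight * partial + (1 - turn_weight) * float(final_success)

# A failed rollout with some correct turns still earns a nonzero reward.
print(shaped_reward([1.0, 0.5, 0.0], final_success=False))  # 0.25
```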
## Resources

- Training Cookbook - Full methodology and reproduction steps
- SFT Checkpoint - Intermediate SFT+RFT checkpoint
- Training Dataset - Filtered RFT trajectories
- tau2-bench - Benchmark repository
- W&B Training Logs
## Limitations
- Telecom domain: Dual-control tasks (where the agent must instruct rather than execute) remain challenging (40% Pass@4)
- User simulator sensitivity: Results vary with user simulator choice; GPT-4.1-mini recommended for reproducibility
- Pass@k vs Pass^k: This model reports pass@k (any success in k attempts), not the pass^k metric used on the official tau2-bench leaderboard
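The pass@k vs pass^k distinction can be made concrete in a few lines (illustrative helpers, not benchmark code):

```python
def pass_at_k(successes: list[bool]) -> bool:
    # pass@k: the task counts as solved if ANY of the k attempts succeeds.
    return any(successes)

def pass_hat_k(successes: list[bool]) -> bool:
    # pass^k: the task counts as solved only if ALL k attempts succeed,
    # which rewards consistency rather than best-of-k luck.
    return all(successes)

attempts = [True, False, True, False]  # 4 rollouts of one task
print(pass_at_k(attempts))   # True
print(pass_hat_k(attempts))  # False
```

Because pass^k requires every attempt to succeed, the same raw rollouts yield a strictly lower (or equal) score under pass^k than under pass@k.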
## Citation

```bibtex
@misc{qwen3-tau2-grpo,
  title={Qwen3-4B-tau2-grpo-v1: Multi-Turn Tool-Use Agent via Progressive RL Training},
  author={Jarrod Barnes},
  year={2025},
  url={https://huggingface.co/Jarrodbarnes/Qwen3-4B-tau2-grpo-v1}
}
```
## Acknowledgments
- Qwen Team for the base model
- Sierra Research for tau2-bench
- THUDM for the SLIME training framework