# Qwen3.5-9B Medical Triage — Stage 2 (Merged)
Merged full-precision (bfloat16) model. Fine-tuned in two stages on clinical triage data using Unsloth.
## Training
| | Stage 1 | Stage 2 |
|---|---|---|
| Base | Qwen/Qwen2.5-7B | Stage 1 merged model |
| Dataset | ~50K synthetic PubMed QA pairs | ~9.2K clinical intake notes (SOAP format, ESI 1–5) |
| LoRA rank | r=16, α=16 | r=32, α=64 |
| Learning rate | 2e-4 | 1e-4 |
| Max seq length | 4096 | 4096 |
| Epochs | 1 | 1 |
| Batch size | 8 (effective 16) | 8 (effective 16) |
| Precision | bf16 | bf16 |
Stage 1 LoRA adapters were merged into the base weights before Stage 2, so the second-stage optimizer started from a stable full-precision checkpoint rather than stacking adapters.
## Dataset — Stage 2
9,238 synthetic SOAP-format clinical intake notes, each containing:
- Chief complaint
- ESI triage level (1–5)
- Full SOAP note
- Triage decision rationale
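For illustration, one record with the four fields above might look like the following. The field names and values here are hypothetical — the card does not publish the actual schema:

```python
# Hypothetical example of one Stage 2 training record.
# Field names and contents are assumptions, not the published schema.
example_record = {
    "chief_complaint": "Chest pain radiating to left arm, onset 2 hours ago",
    "esi_level": 2,  # ESI 1 (most urgent) .. 5 (least urgent)
    "soap_note": {
        "subjective": "54-year-old male reports pressure-like chest pain...",
        "objective": "BP 158/94, HR 102, SpO2 96% on room air...",
        "assessment": "Possible acute coronary syndrome.",
        "plan": "Immediate ECG, troponin, cardiology consult.",
    },
    "rationale": "High-risk presentation; ESI 2 given risk of rapid deterioration.",
}

# ESI levels are constrained to the 1-5 scale used in the dataset.
assert 1 <= example_record["esi_level"] <= 5
```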
## System Prompt
## Usage
## Demo
Medical Triage Assistant — Hugging Face Space
## Limitations & Disclaimer
This model is for research and educational purposes only. It must not be used for clinical decision-making or as a substitute for professional medical judgment. Outputs may be incorrect, incomplete, or misleading. Always consult a qualified clinician.