# LEP-8B: Language Extension Pipeline at 8B Scale
This repository provides LoRA adapters from the Language Extension Pipeline (LEP), applied to Qwen3-8B and Qwen3-0.6B with three embedding initialization strategies: Mean, FOCUS, and Random.
## Variants
| Adapter | Model | Init | OALL Avg |
|---|---|---|---|
| lep8b_mean | Qwen3-8B | Mean | 58.1% |
| lep8b_focus | Qwen3-8B | FOCUS | 55.4% |
| lep8b_random | Qwen3-8B | Random | 29.3% |
| lep06b_mean | Qwen3-0.6B | Mean | 34.6% |
| lep06b_focus | Qwen3-0.6B | FOCUS | 32.8% |
| lep06b_random | Qwen3-0.6B | Random | 28.1% |
## Usage
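A minimal sketch of loading one of the adapters with `peft` on top of its Qwen3 base model. The repo id below (`lep8b_mean`) is a placeholder, not a confirmed path; the tokenizer and embedding-resize steps assume the LEP adapter ships an extended vocabulary, which is typical for language-extension adapters but should be checked against the actual artifacts.

```python
# Hypothetical usage sketch — adapter repo id is a placeholder, not the real path.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-8B"   # base model the 8B adapters were trained on
adapter_id = "lep8b_mean"   # placeholder: substitute the actual adapter repo

# Load the (assumed) extended tokenizer shipped with the adapter,
# resize the base embeddings to match, then attach the LoRA weights.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
base.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(base, adapter_id)
```

For the 0.6B variants, swap `base_id` for `Qwen/Qwen3-0.6B` and the adapter id accordingly. `model.merge_and_unload()` can be used afterwards if a standalone merged checkpoint is preferred for inference.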