# Qwen2.5-Coder-3B Hyperswitch Track A (Merged)

This is a standalone merged model produced by repository-specific continued pretraining on the Hyperswitch codebase.
## What this repo contains

- Full merged model weights (`model-*.safetensors`)
- Tokenizer files
- Config files
The model was produced by merging the LoRA adapter `archit11/qwen2.5-coder-3b-hyperswitch-track-a-lora` into the base model `Qwen/Qwen2.5-Coder-3B`.
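Merging a LoRA adapter folds the low-rank update back into the base weights: each adapted matrix becomes `W' = W + (alpha / r) * B @ A`, after which the adapter is no longer needed at inference time. A minimal NumPy sketch of that arithmetic (the shapes, rank, and scaling factor below are illustrative, not the actual training configuration):

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA update into a base weight matrix.

    W: (out, in) base weight; A: (r, in); B: (out, r).
    Returns the merged weight W + (alpha / r) * B @ A.
    """
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
out_dim, in_dim, r, alpha = 8, 16, 4, 8  # toy sizes for illustration
W = rng.standard_normal((out_dim, in_dim))
A = rng.standard_normal((r, in_dim))
B = rng.standard_normal((out_dim, r))

W_merged = merge_lora(W, A, B, alpha, r)
assert W_merged.shape == W.shape  # merged weight keeps the base shape
```

In practice this is what `peft`'s merge utilities do layer by layer before the full model is saved as plain `safetensors` shards.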
## Training dataset

`archit11/hyperswitch-code-corpus-track-a`
## Evaluation summary

- Baseline perplexity: 2.2832
- Post-training perplexity: 1.5429
- Relative improvement: 32.42%
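The improvement figure is the relative drop in perplexity, `(baseline - post) / baseline`. A quick check of the numbers above:

```python
baseline_ppl = 2.2832
post_ppl = 1.5429

# Relative perplexity reduction, expressed as a percentage.
improvement = (baseline_ppl - post_ppl) / baseline_ppl * 100
print(f"{improvement:.2f}%")  # → 32.42%
```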
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "archit11/qwen2.5-coder-3b-hyperswitch-track-a-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```