Qwen2.5-Coder-3B Hyperswitch Track A (Merged)

This is a standalone merged model produced by repository-specific continued pretraining of Qwen2.5-Coder-3B on the Hyperswitch codebase.

What this repo contains

  • Full merged model weights (model-*.safetensors)
  • Tokenizer files
  • Config files

The model was produced by merging the LoRA adapter from:

  • archit11/qwen2.5-coder-3b-hyperswitch-track-a-lora

into the base model:

  • Qwen/Qwen2.5-Coder-3B
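
A minimal sketch of how such a LoRA merge is typically produced with the PEFT library. The base and adapter IDs come from this card; the output directory name and the helper function are illustrative, and the heavy imports are kept inside the function so the sketch can be read without `transformers`/`peft` installed.

```python
def merge_lora(base_id: str, adapter_id: str, out_dir: str) -> None:
    """Merge a LoRA adapter into its base model and save standalone weights."""
    # Imports kept local so the sketch is inspectable without the libraries.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
    # Apply the adapter, then fold its weights into the base layers.
    merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
    merged.save_pretrained(out_dir)
    # Ship the tokenizer alongside the merged weights.
    AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)

# Example invocation (downloads both repos, so it is commented out):
# merge_lora("Qwen/Qwen2.5-Coder-3B",
#            "archit11/qwen2.5-coder-3b-hyperswitch-track-a-lora",
#            "qwen2.5-coder-3b-hyperswitch-track-a-merged")
```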

Training dataset

  • archit11/hyperswitch-code-corpus-track-a

Evaluation summary

  • Baseline perplexity: 2.2832
  • Post-training perplexity: 1.5429
  • Improvement: 32.42%
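
The improvement figure follows directly from the two perplexities above as a relative reduction:

```python
# Relative perplexity improvement: (baseline - post) / baseline
baseline, post = 2.2832, 1.5429
improvement = (baseline - post) / baseline * 100
print(f"{improvement:.2f}%")  # → 32.42%
```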

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "archit11/qwen2.5-coder-3b-hyperswitch-track-a-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Complete a code prompt (example prompt is illustrative)
inputs = tokenizer("pub fn get_payment_intent(", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))