# Special-R1
A reasoning-enhanced language model fine-tuned from Qwen2.5-7B-Instruct with GRPO (Group Relative Policy Optimization) for special education applications.

This model is the NoThink variant: it is trained to give direct, concise answers without emitting explicit chain-of-thought reasoning steps, trading visible reasoning traces for faster, more efficient responses.
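As background on the training method named above: GRPO scores a group of sampled responses per prompt and normalizes each reward against the group's mean and standard deviation, using the result as the advantage. A minimal illustration of that normalization step (a sketch, not the actual training code for this model):

```python
# Sketch of GRPO's group-relative advantage: sample G responses per prompt,
# reward each one, then normalize rewards within the group so that
# better-than-average responses get positive advantages.
def group_relative_advantages(rewards):
    """Normalize a group of rewards to zero mean and unit std."""
    g = len(rewards)
    mean = sum(rewards) / g
    var = sum((r - mean) ** 2 for r in rewards) / g
    std = var ** 0.5 or 1.0  # guard against zero variance in the group
    return [(r - mean) / std for r in rewards]

# Two good and two bad responses in a group of four:
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))  # → [1.0, -1.0, 1.0, -1.0]
```

Responses scored above the group mean are reinforced; those below it are penalized, with no learned value model needed.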
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenLearnLM/special-r1-qwen2.5-7b-nothink"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "What is the capital of France?"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
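`apply_chat_template` handles prompt formatting for you. For reference, Qwen2.5-family models use ChatML-style role markers, so the string it builds looks roughly like the sketch below (an assumption that this fine-tune keeps the base model's chat template; the real template may also prepend a default system message):

```python
# Rough sketch of the ChatML-style prompt that apply_chat_template builds
# for Qwen2.5-family models. Assumption: this fine-tune keeps the base
# template; the actual one may also insert a default system message.
def chatml_prompt(messages, add_generation_prompt=True):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    if add_generation_prompt:
        # An open assistant turn cues the model to start answering.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

print(chatml_prompt([{"role": "user", "content": "What is the capital of France?"}]))
```

Passing `add_generation_prompt=True` (as in the snippet above) is what leaves the prompt ending on an open assistant turn, so generation continues as the assistant's reply.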
Two variants are available:

| Model | Description |
|---|---|
| special-r1-qwen2.5-7b-nothink (this) | Direct answers without explicit reasoning |
| special-r1-qwen2.5-7b-think | With chain-of-thought reasoning |
If you use this model, please cite:
```bibtex
@misc{openlearnlm2025special,
  title={Special-R1: Reasoning Models for Education},
  author={OpenLearnLM Team},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/OpenLearnLM/special-r1-qwen2.5-7b-nothink}
}
```
This model is released under the Apache 2.0 license, following the license of the base model.