---
language:
- en
- zh
license: apache-2.0
base_model: Qwen/Qwen3.5-27B
tags:
- unsloth
- qwen
- qwen3.5
- reasoning
- chain-of-thought
- Dense
pipeline_tag: text-generation
datasets:
- Jackrong/Qwen3.5-reasoning-700x
- nohurry/Opus-4.6-Reasoning-3000x-filtered
---

# 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled

> 📢 **Release Note**
>
> To address potential runtime errors in inference frameworks with the early quantized version, the current weights have been fully rebuilt using the latest toolchain. I have re-run the fine-tuning process and GGUF quantization in an updated environment to ensure maximum compatibility and stability.
>
> **Build Environment Upgrades:**
> - **Fine-tuning Framework**: **Unsloth 2026.3.3** (with the latest *Fast Qwen3_5 patching* applied)
> - **Core Dependencies**: **Transformers 5.2.0**

![HB8AleUaMAArNyM](https://cdn-uploads.huggingface.co/production/uploads/66309bd090589b7c65950665/GHkMJL6I383eIwK1qj80K.jpeg)

## 💡 Model Introduction

**Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled** is a capable reasoning model fine-tuned on top of the Qwen3.5 architecture. Its core objective is to leverage state-of-the-art Chain-of-Thought (CoT) distillation sourced primarily from Claude-4.6 Opus interactions. Through Supervised Fine-Tuning (SFT) focused on structured reasoning, the model excels at breaking down complex user problems, planning step-by-step solutions inside strictly formatted `<think>` tags, and delivering precise, nuanced answers.

### 🧠 Learned Reasoning Scaffold (Example)

The model includes targeted optimizations addressing Qwen3.5's tendency toward excessive transitional or repetitive reasoning on simple queries.
Through deep distillation and structural imitation of Claude-4.6-Opus reasoning chains, the model adopts a more efficient structured thinking pattern: **"Let me analyze this request carefully: 1.. 2.. 3..."** This streamlined reasoning paradigm significantly reduces redundant cognitive loops while preserving deep analytical capacity, resulting in substantially improved inference efficiency.

```text
Let me analyze this request carefully:
1. Identify the core objective of the problem.
2. Break the task into clearly defined subcomponents.
3. Evaluate constraints and edge cases.
4. Formulate a step-by-step solution plan.
5. Execute the reasoning sequentially and verify consistency.
.
.
.
```

## 🗺️ Training Pipeline Overview

```text
Base Model (Qwen3.5-27B)
          │
          ▼
Supervised Fine-Tuning (SFT) + LoRA
          │
          ▼
Final Model (Claude-4.6-Opus-Reasoning-Distilled, text-only)
```

## 📋 Stage Details

### 🔹 Supervised Fine-Tuning (SFT)

- **Objective:** Inject high-density reasoning logic and establish a strict problem-solving format with an internal thinking stage before the final response.
- **Methodology:** We used **Unsloth** for highly efficient memory and compute optimization (LoRA rank = 64). A critical component of this stage is the `train_on_responses_only` strategy, which masks the instruction tokens so the loss is computed purely over the generated `<think>` sequences and the subsequent solutions.
- **Format Enforcement:** All training samples were systematically normalized so the model strictly follows the structure `<think> {internal reasoning} </think>\n{final answer}`.

### 📚 All Datasets Used

The dataset consists of high-quality, filtered reasoning distillation data:

| Dataset Name | Description / Purpose |
|--------------|-----------------------|
| [nohurry/Opus-4.6-Reasoning-3000x-filtered](https://huggingface.co/datasets/nohurry/Opus-4.6-Reasoning-3000x-filtered) | Provides comprehensive Claude 4.6 Opus reasoning trajectories. |
| [TeichAI/claude-4.5-opus-high-reasoning-250x](https://huggingface.co/datasets/TeichAI/claude-4.5-opus-high-reasoning-250x) | Injects high-intensity, structured reasoning instances. |
| [Jackrong/Qwen3.5-reasoning-700x](https://huggingface.co/datasets/Jackrong/Qwen3.5-reasoning-700x) | Additional curated reasoning samples designed to strengthen structured step-by-step problem solving and improve reasoning diversity. |

## 🌟 Core Skills & Capabilities

1. **Modular & Structured Thinking:** Inheriting traits from Opus-level reasoning, the model parses the prompt confidently and lays out a sequential plan in its `<think>` block, rather than falling into exploratory trial-and-error self-doubt.
2. **Extended Context Support:** Fine-tuned with an 8192-token context window, allowing complex multi-step reasoning traces to fit gracefully within memory limits.

## ⚠️ Limitations & Intended Use

- **Hallucination Risk:** While reasoning is strong, the model remains an autoregressive LLM; factual claims made during the thinking sequence may occasionally be hallucinated, especially when they concern real-world events.
- **Intended Scenario:** Best suited for offline analytical tasks, coding, math, and logic-heavy prompting where the user needs to transparently follow the AI's internal reasoning.
- **Preview Version Notice:** Because this model is relatively new and intentionally lightweight, the surrounding ecosystem (inference templates, fine-tuning pipelines, routing configurations, and tooling integrations) may not yet be fully mature or standardized. As a result, users may encounter occasional bugs, compatibility inconsistencies, or integration edge cases. This release should be considered a preview build while the broader architectural stack and supporting utilities continue to stabilize and improve.
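For downstream tooling that wants to surface the reasoning trace separately from the final answer, the output contract described above can be split with a small parser. Below is a minimal sketch, assuming Qwen-style `<think>…</think>` delimiters; the `split_reasoning` helper is illustrative only and not part of any released tooling.

```python
import re

# Assumption: the model emits Qwen-style <think>...</think> delimiters,
# as enforced during SFT. This helper is illustrative, not official tooling.
THINK_RE = re.compile(r"<think>(.*?)</think>\s*", re.DOTALL)

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (internal_reasoning, final_answer) from a model completion."""
    m = THINK_RE.search(text)
    if m is None:
        # Fall back gracefully when the model skips the thinking block.
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = text[m.end():].strip()
    return reasoning, answer

completion = "<think>1. Identify the goal. 2. Plan steps.</think>\nThe answer is 42."
reasoning, answer = split_reasoning(completion)
```

A fallback for completions with no thinking block keeps the helper safe to call on any output, which matters given the hallucination and formatting caveats above.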
## ๐Ÿ™ Acknowledgements Significant thanks to the [Unsloth AI](https://unsloth.ai/) team for making rapid fine-tuning of MoE and large LLM models accessible. Additionally, we acknowledge Qwen internally, and the open-source community developers producing exceptional distilled datasets (`nohurry` and `TeichAI`).