Jackrong committed · Commit 2f09b86 (verified) · 1 Parent(s): b09c411

Create README.md

---
language:
- en
license: lgpl-3.0
base_model: Qwen/Qwen3.5-27B
tags:
- unsloth
- qwen
- qwen3.5
- reasoning
- chain-of-thought
- lora
- moe
pipeline_tag: text-generation
---

# 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled

## 💡 Model Introduction
**Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled** is a highly capable reasoning model fine-tuned on top of the Qwen3.5 MoE architecture. Its core aim is to leverage state-of-the-art Chain-of-Thought (CoT) distillation sourced primarily from Claude-4.6 Opus interactions.

Through Supervised Fine-Tuning (SFT) focused on structured reasoning logic, the model excels at breaking down complex user problems, planning step-by-step methodologies within strictly formatted `<think>` tags, and delivering precise, nuanced solutions.

## 🗺️ Training Pipeline Overview

```text
Base Model (Qwen3.5-27B)
        │
        ▼
Supervised Fine-Tuning (SFT) + LoRA
(response-only training: loss masked up to "<|im_start|>assistant\n<think>")
        │
        ▼
Final Model (Claude-4.6-Opus-Reasoning-Distilled)
```

## 📋 Stage Details

### 🔹 Supervised Fine-Tuning (SFT)
- **Objective:** Inject high-density reasoning logic and establish a strict problem-solving format in which the model enters an internal thinking state before emitting its final response.
- **Methodology:** We used **Unsloth** for memory- and compute-efficient training (LoRA rank = 32). A critical component of this stage is the `train_on_responses_only` strategy: instruction tokens are masked so the loss is computed only over the generated `<think>` sequences and the subsequent solutions.
- **Format Enforcement:** All training samples were systematically normalized so the model strictly follows the structure `<think> {internal reasoning} </think>\n{final answer}`.
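
The masking strategy above can be sketched in plain Python. This is an illustrative toy, not Unsloth's actual implementation; the marker token IDs below are stand-ins for the tokenized `"<|im_start|>assistant\n<think>"` sequence:

```python
# Sketch of response-only loss masking: every label up to and including the
# assistant response marker is set to -100, so cross-entropy ignores the
# instruction tokens and trains only on the reasoning trace and answer.
IGNORE_INDEX = -100

def mask_labels(token_ids, marker_ids):
    """Copy token_ids as labels, masking everything through the first
    occurrence of the marker sub-sequence."""
    labels = list(token_ids)
    n, m = len(token_ids), len(marker_ids)
    for start in range(n - m + 1):
        if token_ids[start:start + m] == list(marker_ids):
            for i in range(start + m):
                labels[i] = IGNORE_INDEX
            return labels
    # Marker absent: mask the whole sample so it contributes no loss.
    return [IGNORE_INDEX] * n

# Toy example: ids 7, 8 stand in for the assistant/<think> marker tokens.
ids = [1, 2, 3, 7, 8, 40, 41, 42]
print(mask_labels(ids, [7, 8]))  # [-100, -100, -100, -100, -100, 40, 41, 42]
```

In the real pipeline, Unsloth's `train_on_responses_only` applies this idea over the tokenizer's actual chat-template output.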

### 📚 All Datasets Used
The training data consists of high-quality, filtered reasoning-distillation sets:

| Dataset Name | Description / Purpose |
|--------------|-----------------------|
| [nohurry/Opus-4.6-Reasoning-3000x-filtered](https://huggingface.co/datasets/nohurry/Opus-4.6-Reasoning-3000x-filtered) | Comprehensive Claude 4.6 Opus reasoning trajectories. Sampled ~3,900 rows. |
| [TeichAI/claude-4.5-opus-high-reasoning-250x](https://huggingface.co/datasets/TeichAI/claude-4.5-opus-high-reasoning-250x) | High-intensity, structured reasoning instances. Sampled ~350 rows. |

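The sampling step in the table can be sketched as follows. The helper and the in-memory row lists are hypothetical; the actual training used the Hugging Face datasets linked above, with the stated approximate row counts:

```python
import random

# Sketch of the dataset mixing step: down-sample each source to the row
# counts from the table, then shuffle the pool into one training set.
def mix_sources(sources, seed=42):
    """sources: list of (rows, sample_size) pairs."""
    rng = random.Random(seed)
    mixed = []
    for rows, k in sources:
        mixed.extend(rng.sample(rows, min(k, len(rows))))
    rng.shuffle(mixed)
    return mixed

# Toy stand-ins for the two distillation sources.
opus46 = [{"id": i, "src": "opus-4.6"} for i in range(5000)]
opus45 = [{"id": i, "src": "opus-4.5"} for i in range(400)]
train = mix_sources([(opus46, 3900), (opus45, 350)])
print(len(train))  # 4250
```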

## 🌟 Core Skills & Capabilities
1. **Modular & Structured Thinking:** Inheriting traits from Opus-level reasoning, the model parses the prompt confidently and lays out a sequential plan in its `<think>` block, rather than falling into exploratory, trial-and-error self-doubt.
2. **Extended Context Support:** Fine-tuned with an 8192-token context window, allowing complex multi-step reasoning traces to fit within memory limits.
3. **MoE Efficiency:** Tuning specifically targets the MoE routing and expert projection layers (`gate_proj`, `up_proj`, `down_proj`, `gate_up_proj`), preserving the inherent inference efficiency of the Qwen3.5 routing layout.

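The adapter setup implied by points 1–3 can be written out as a config fragment. Only the rank (32) and the projection-layer names come from this card; the dict shape mirrors a PEFT-style `LoraConfig` and the remaining fields are illustrative assumptions:

```python
# Hypothetical LoRA configuration sketch (PEFT-style dict); only `r` and
# `target_modules` are stated on this card, the rest are illustrative.
lora_config = {
    "r": 32,                      # LoRA rank stated in the SFT section
    "target_modules": [           # MoE routing / expert projection layers
        "gate_proj",
        "up_proj",
        "down_proj",
        "gate_up_proj",
    ],
    "task_type": "CAUSAL_LM",
}
print(sorted(lora_config["target_modules"]))
```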

## ⚠️ Limitations & Intended Use
- **Hallucination Risk:** While its reasoning is strong, the model remains an autoregressive LLM; factual claims made during the thinking sequence may contain hallucinations, especially when they concern real-world events.
- **Intended Scenario:** Best suited for offline analytical tasks, coding, math, and logic-heavy prompting where the user wants to transparently follow the model's internal reasoning.
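
Because every response follows the `<think> … </think>` structure enforced during SFT, downstream code can separate the trace from the final answer. A minimal sketch (the tags come from the format above; the helper itself is hypothetical):

```python
import re

# Split a model response into (internal reasoning, final answer), based on
# the `<think> ... </think>` output format this model is trained to emit.
THINK_RE = re.compile(r"<think>(.*?)</think>\s*", re.DOTALL)

def split_reasoning(response: str):
    match = THINK_RE.search(response)
    if match is None:
        return None, response.strip()  # no trace found; treat all as answer
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

out = "<think> 2+2: add the units. </think>\nThe answer is 4."
print(split_reasoning(out))  # ('2+2: add the units.', 'The answer is 4.')
```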

## 🙏 Acknowledgements
Many thanks to the [Unsloth AI](https://unsloth.ai/) team for making rapid fine-tuning of MoE and large LLM models accessible. We also thank the Qwen team and the open-source community developers (`nohurry` and `TeichAI`) for producing exceptional distilled datasets.