---
language:
- en
- zh
license: apache-2.0
base_model: Qwen/Qwen3.5-27B
tags:
- unsloth
- qwen
- qwen3.5
- reasoning
- chain-of-thought
- Dense
pipeline_tag: text-generation
datasets:
- Jackrong/Qwen3.5-reasoning-700x
- nohurry/Opus-4.6-Reasoning-3000x-filtered
---

# 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled

> 📢 **Release Note**
>
> To address potential runtime errors that inference frameworks hit with the early quantized version, the current weights have been fully rebuilt with the latest toolchain. I re-ran the fine-tuning process and GGUF quantization in an updated environment to ensure maximum compatibility and stability.
>
> **Build Environment Upgrades:**
> - **Fine-tuning Framework**: **Unsloth 2026.3.3** (with the latest *Fast Qwen3_5 patching* applied)
> - **Core Dependencies**: **Transformers 5.2.0**

![HB8AleUaMAArNyM](https://cdn-uploads.huggingface.co/production/uploads/66309bd090589b7c65950665/GHkMJL6I383eIwK1qj80K.jpeg)

## 💡 Model Introduction
**Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled** is a highly capable reasoning model fine-tuned on top of the Qwen3.5 architecture. Its core aim is to distill state-of-the-art Chain-of-Thought (CoT) reasoning, sourced primarily from Claude-4.6 Opus interactions, into the base model.

Through Supervised Fine-Tuning (SFT) focused specifically on structured reasoning logic, the model excels at breaking down complex user problems, planning step-by-step methodologies within strictly formatted `<think>` tags, and ultimately delivering precise, nuanced solutions.
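
The `<think>`-tag convention lends itself to simple post-processing. The sketch below is a hypothetical helper (not part of this repository) for splitting a generated response into its reasoning and final-answer parts, assuming the response follows the `<think> ... </think>\n answer` layout:

```python
import re

def split_response(text: str):
    """Split a `<think> reasoning </think>\n answer` response into
    (reasoning, answer); returns (None, text) if no thinking block is found."""
    m = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if m is None:
        return None, text.strip()
    return m.group(1).strip(), m.group(2).strip()

thinking, answer = split_response("<think> 2 + 2 = 4 </think>\n 4")
```

Downstream UIs typically render the first element in a collapsible panel and show only the second to the user.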

### 🧠 Example of a Learned Reasoning Scaffold

The model includes targeted optimizations addressing Qwen3.5's tendency toward excessive transitional or repetitive reasoning on simple queries. Through deep distillation and structural imitation of Claude-4.6-Opus reasoning chains, it adopts a more efficient structured thinking pattern: **"Let me analyze this request carefully: 1. … 2. … 3. …"** This streamlined reasoning paradigm significantly reduces redundant cognitive loops while preserving deep analytical capacity, substantially improving inference efficiency.

```text
Let me analyze this request carefully:

1. Identify the core objective of the problem.
2. Break the task into clearly defined subcomponents.
3. Evaluate constraints and edge cases.
4. Formulate a step-by-step solution plan.
5. Execute the reasoning sequentially and verify consistency.
...
```

## 🗺️ Training Pipeline Overview

```text
Base Model (Qwen3.5-27B)
          ↓
Supervised Fine-Tuning (SFT) + LoRA
          ↓
Final Model (Claude-4.6-Opus-Reasoning-Distilled, text-only)
```
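
For a rough sense of what the LoRA stage adds on top of the frozen base weights: a rank-r adapter on one weight matrix contributes two small matrices, A (r × d_in) and B (d_out × r). A back-of-envelope sketch, with illustrative dimensions that are not Qwen3.5's actual layer shapes:

```python
def lora_extra_params(d_out: int, d_in: int, r: int = 64) -> int:
    """Extra trainable parameters a rank-r LoRA adapter adds to one
    (d_out x d_in) weight matrix: A is (r x d_in), B is (d_out x r)."""
    return r * d_in + d_out * r

# Hypothetical 4096x4096 projection at the rank used here (r = 64):
n = lora_extra_params(4096, 4096)  # a tiny fraction of the 16.7M frozen weights
```

Because only these adapter matrices receive gradients, the memory and compute footprint stays far below full fine-tuning.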

## 📋 Stage Details

### 🔹 Supervised Fine-Tuning (SFT)
- **Objective:** Inject high-density reasoning logic and establish a strict problem-solving format in which the model works through an internal thinking stage before emitting its final response.
- **Methodology:** We used **Unsloth** for memory- and compute-efficient training (LoRA Rank = 64). A critical component of this stage is the `train_on_responses_only` strategy, which masks the instruction tokens so the loss is computed only over the generated `<think>` sequences and the subsequent solutions.
- **Format Enforcement:** All training samples were systematically normalized so the model strictly follows the structure `<think> {internal reasoning} </think>\n {final answer}`.
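
The response-only masking idea can be illustrated in plain Python. This is a minimal sketch, not Unsloth's actual implementation: the token IDs and helper name are made up, and real pipelines locate the response boundary inside a tokenized chat template. Positions before the response receive the ignore index so they contribute nothing to the loss:

```python
IGNORE_INDEX = -100  # the label value most frameworks skip in cross-entropy

def mask_instruction(input_ids, response_start):
    """Return labels where every token before `response_start` (the
    instruction/prompt portion) is ignored; the rest is trained on."""
    return [IGNORE_INDEX] * response_start + input_ids[response_start:]

ids = [101, 102, 103, 201, 202, 203]   # 3 prompt tokens, 3 response tokens
labels = mask_instruction(ids, 3)       # loss covers only the last 3 tokens
```

The effect is that gradient signal comes entirely from the `<think>` block and the final answer, never from reproducing the user's instruction.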

### 📚 All Datasets Used
The dataset consists of high-quality, filtered reasoning distillation data:

| Dataset Name | Description / Purpose |
|--------------|-----------------------|
| [nohurry/Opus-4.6-Reasoning-3000x-filtered](https://huggingface.co/datasets/nohurry/Opus-4.6-Reasoning-3000x-filtered) | Provides comprehensive Claude 4.6 Opus reasoning trajectories. |
| [TeichAI/claude-4.5-opus-high-reasoning-250x](https://huggingface.co/datasets/TeichAI/claude-4.5-opus-high-reasoning-250x) | Injects high-intensity, structured reasoning instances. |
| [Jackrong/Qwen3.5-reasoning-700x](https://huggingface.co/datasets/Jackrong/Qwen3.5-reasoning-700x) | Additional curated reasoning samples designed to strengthen structured step-by-step problem solving and improve reasoning diversity. |

## 🌟 Core Skills & Capabilities
1. **Modular & Structured Thinking:** Inheriting traits from Opus-level reasoning, the model confidently parses the prompt and lays out a sequential plan in its `<think>` block, rather than falling into exploratory "trial-and-error" self-doubt.
2. **Extended Context Support:** Fine-tuned with an 8192-token context window, allowing complex multi-step reasoning traces to fit comfortably within memory limits.
89
+
90
+ ## ⚠️ Limitations & Intended Use
91
+ - **Hallucination Risk:** While reasoning is strong, the model remains an autoregressive LLM; external facts provided during the thinking sequence may occasionally contain hallucinations if verifying real-world events.
92
+ - **Intended Scenario:** Best suited for offline analytical tasks, coding, math, and heavy logic-dependent prompting where the user needs to transparently follow the AI's internal logic.
93
+ - **Preview Version Notice:** Because this model is relatively new and intentionally lightweight, the surrounding ecosystem — including inference templates, fine-tuning pipelines, routing configurations, and tooling integrations — may not yet be fully mature or standardized. As a result, users may encounter occasional bugs, compatibility inconsistencies, or integration edge cases. The current release should be considered a preview build while the broader architectural stack and supporting utilities continue to stabilize and improve.

## 🙏 Acknowledgements
Significant thanks to the [Unsloth AI](https://unsloth.ai/) team for making rapid fine-tuning of MoE and large LLMs accessible. We also thank the Qwen team and the open-source community developers producing exceptional distilled datasets (`nohurry` and `TeichAI`).