---
license: apache-2.0
tags:
- merge
- mergekit
- model-stock
language:
- en
base_model:
- AXCXEPT/Qwen3-EZO-8B-beta
- YOYO-AI/Qwen3-8B-YOYO
- KaraKaraWitch/CavesOfQwen3-8b
- GreenerPastures/Bald-Beaver-8B
- taki555/Qwen3-8B-Shadow-FT-BAAI-2k
pipeline_tag: text-generation
library_name: transformers
---

# 🦡 ZeroXClem-Qwen3-8B-HoneyBadger-EXP

## 🧬 Overview

**ZeroXClem-Qwen3-8B-HoneyBadger-EXP** is a fierce and expressive model fusion crafted using the **Model Stock** merge method from [MergeKit](https://github.com/cg123/mergekit). Designed to combine **instruction-following, deep reasoning, creative roleplay**, and **code capabilities**, this blend leverages the best of Qwen3-8B-based fine-tunes from various communities across Hugging Face.

This HoneyBadger doesn't just care: it dominates symbolic reasoning, narrative immersion, and technical comprehension with sleek aggression.

**Be advised:** for **optimal inference**, use the Ollama Modelfile below or a customized prompt with the default Qwen3 chat template.

---

## 🔧 Merge Configuration

- **Merge Method:** `model_stock`
- **Base Model:** [`AXCXEPT/Qwen3-EZO-8B-beta`](https://huggingface.co/AXCXEPT/Qwen3-EZO-8B-beta)
- **Dtype:** `bfloat16`
- **Tokenizer Source:** `AXCXEPT/Qwen3-EZO-8B-beta`

### 🧾 YAML

```yaml
name: ZeroXClem-Qwen3-8B-HoneyBadger-EXP
base_model: AXCXEPT/Qwen3-EZO-8B-beta
dtype: bfloat16
merge_method: model_stock
models:
  - model: taki555/Qwen3-8B-Shadow-FT-BAAI-2k
  - model: GreenerPastures/Bald-Beaver-8B
  - model: YOYO-AI/Qwen3-8B-YOYO
  - model: KaraKaraWitch/CavesOfQwen3-8b
tokenizer_source: AXCXEPT/Qwen3-EZO-8B-beta
```
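
Conceptually, `model_stock` averages the fine-tuned checkpoints and interpolates the result back toward the base model. Below is a heavily simplified, pure-Python sketch of that idea on toy weight lists; real merges operate per-tensor over full checkpoints, and MergeKit derives the interpolation ratio from the geometry of the checkpoints rather than a fixed constant. The function name and `t=0.5` placeholder here are illustrative, not MergeKit internals.

```python
# Simplified illustration of a model-stock-style merge on toy weights.
# Real merges run per-tensor over entire checkpoints; `t` is a placeholder
# for the ratio MergeKit computes from the angles between fine-tunes.

def model_stock_merge(base, tuned_models, t=0.5):
    """Average the fine-tuned weights, then interpolate toward the base."""
    n = len(tuned_models)
    # Element-wise mean of the fine-tuned checkpoints.
    avg = [sum(ws) / n for ws in zip(*tuned_models)]
    # Blend the average back toward the base weights.
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]

base = [1.0, 2.0, 3.0]
tuned = [[1.4, 2.2, 3.2], [1.0, 1.8, 2.8]]  # two toy fine-tunes
print(model_stock_merge(base, tuned))  # lands between base and the fine-tune average
```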

---

## 💡 Models Merged

| Model                                                                                             | Highlights                                                            |
| ------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| [`KaraKaraWitch/CavesOfQwen3-8b`](https://huggingface.co/KaraKaraWitch/CavesOfQwen3-8b)           | Loosens baked-in instruct bias for more natural RP and abstract depth |
| [`YOYO-AI/Qwen3-8B-YOYO`](https://huggingface.co/YOYO-AI/Qwen3-8B-YOYO)                           | Della-style merge optimized for rich conversational alignment         |
| [`AXCXEPT/Qwen3-EZO-8B-beta`](https://huggingface.co/AXCXEPT/Qwen3-EZO-8B-beta)                   | MT-Bench 9.08, deep-thought prompting, vLLM friendly                  |
| [`GreenerPastures/Bald-Beaver-8B`](https://huggingface.co/GreenerPastures/Bald-Beaver-8B)         | Uncensored storytelling and immersive character dialogue              |
| [`taki555/Qwen3-8B-Shadow-FT-BAAI-2k`](https://huggingface.co/taki555/Qwen3-8B-Shadow-FT-BAAI-2k) | Shadow-FT tuned for precise instruction-following on BAAI-2k          |

---

## 🧪 Capabilities

- 🧠 **Deep Symbolic Reasoning** – via Shadow-FT and DeepScaleR techniques from the base models
- 🎭 **Immersive Roleplay & Storytelling** – injected from the Bald-Beaver and CavesOfQwen merges
- 💻 **Code Understanding & Generation** – Python, C++, and JS supported from the Bootes & Shadow paths
- 🧾 **Structured Outputs** – supports Markdown, JSON, LaTeX, and more
- 🧵 **ChatML Friendly** – full compatibility with ChatML-format prompts
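
ChatML compatibility means prompts follow the `<|im_start|>role ... <|im_end|>` structure that Qwen-family chat templates emit. As a minimal sketch, here is that format built by hand; in practice, `tokenizer.apply_chat_template` produces it for you, and the helper name below is hypothetical.

```python
# Minimal ChatML prompt construction, as used by Qwen-family templates.
# Illustrative only: tokenizer.apply_chat_template handles this in practice.

def to_chatml(messages, add_generation_prompt=True):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        # Open an assistant turn so the model knows to generate a reply.
        parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize model-stock merging in one line."},
])
print(prompt)
```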

---

## 🛠️ Usage Instructions
**For optimal inference**, use the following Ollama Modelfile. Save it as a file called `Modelfile`.
<details>
<summary> Ollama Modelfile </summary>
  
```
FROM https://hf.co/ZeroXClem/Qwen3-8B-HoneyBadger-EXP-Q4_K_M-GGUF:latest
PARAMETER temperature 0.6
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.05
PARAMETER top_k 20
TEMPLATE """{{- if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}"""
SYSTEM """# System Prompt: Universal Coder and DevOps Expert

You are an advanced AI assistant specializing in coding and DevOps. Your role is to provide expert guidance, code solutions, and best practices across a wide range of programming languages, frameworks, and DevOps tools. Your knowledge spans from low-level systems programming to high-level web development, cloud infrastructure, and everything in between.

## Key responsibilities:
1. Code analysis and optimization
2. Debugging and troubleshooting
3. Architecture design and system planning
4. Version Control best practices (Git)
5. Building from source, extracting binaries, and building packages & executables, including bash scripts
6. Security implementation and auditing
7. Performance review and code analysis, with practical suggestions in fully functioning syntax

Be VERY selective in choosing how to respond based on the user query. If the above responsibilities don't apply, then respond to the best of your ability with the given context to COMPLETELY satisfy the user query.

### Guidance
When assisting users:
- Provide clear, concise, and well-commented code examples
- Explain complex concepts in simple terms
- Offer multiple solutions when applicable, highlighting pros and cons
- Prioritize security, efficiency, scalability, and maintainability in all suggestions
- Adapt your communication style for expert users.

### Helpful
Be EXTREMELY helpful, insightful, and lucid."""
```



---

### 🦙 Ollama Quickstart
This command downloads the pre-quantized GGUF version of the model and runs it locally, making it easy to experiment without extensive configuration.

```bash
ollama run hf.co/ZeroXClem/Qwen3-8B-HoneyBadger-EXP-Q4_K_M-GGUF
```

### 🐍 Python Code Snippet
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen3-8B-HoneyBadger-EXP"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Format the request with the model's chat template (ChatML-style for Qwen3).
messages = [
    {"role": "user", "content": "Write a short story about a detective solving a paradox in time."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

---

## ⚠️ Disclaimer

🚧 **Experimental Merge**: This model is an early-stage **experimental prototype** and is **not ready for production**. It may contain unaligned or unfiltered behaviors. Use it for **research**, **prompt testing**, or further **fine-tuning workflows**.

---

## 💖 Special Thanks

To the brilliant developers and open-source pioneers who made this possible:

* 🧙‍♀️ [KaraKaraWitch](https://huggingface.co/KaraKaraWitch) for CavesOfQwen3
* 🧠 [YOYO-AI](https://huggingface.co/YOYO-AI) for Della-style merges
* 🦅 [AXCXEPT](https://huggingface.co/AXCXEPT) for the exceptional Qwen3-EZO base
* 🌲 [GreenerPastures](https://huggingface.co/GreenerPastures) for uncensored RP excellence
* 🧩 [taki555](https://huggingface.co/taki555) for integrating Shadow-FT's cutting-edge research

🔗 Powered by [MergeKit](https://github.com/cg123/mergekit)

---

**ZeroXClem Team | 2025** 🪐
*“Blending minds, one layer at a time.”*