---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- qwen3
- ZeroXClem
- hermes
- SkyHighHermes
- Heretic
- Abliterated
- HighReasoning
- Distilled
- Claude
- mlx
base_model: ZeroXClem/Qwen3-4B-Sky-High-Hermes
pipeline_tag: text-generation
library_name: mlx
---
# Qwen3-4B-Sky-High-Hermes-qx86-hi-mlx
> Brainwave: 0.430,0.490,0.710,0.608,0.372,0.733,0.627
That image on the model card... epic :)
This model [Qwen3-4B-Sky-High-Hermes-qx86-hi-mlx](https://huggingface.co/nightmedia/Qwen3-4B-Sky-High-Hermes-qx86-hi-mlx) was
converted to MLX format from [ZeroXClem/Qwen3-4B-Sky-High-Hermes](https://huggingface.co/ZeroXClem/Qwen3-4B-Sky-High-Hermes)
using mlx-lm version **0.30.2**.
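For reference, a plain quantized conversion with mlx-lm's Python API looks roughly like the sketch below. This is illustrative only: the qx86-hi quant of this repo is a custom mixed-precision recipe, and the `q_bits`/`q_group_size` values shown are assumed stand-ins, not the actual settings used.

```python
from mlx_lm import convert

# Illustrative sketch: standard uniform quantization via mlx-lm.
# The qx86-hi scheme used for this repo is a custom mixed-precision
# recipe and is NOT reproduced by these flags.
convert(
    "ZeroXClem/Qwen3-4B-Sky-High-Hermes",
    mlx_path="Qwen3-4B-Sky-High-Hermes-mlx",  # hypothetical output dir
    quantize=True,
    q_bits=8,          # assumed bit width
    q_group_size=32,   # "hi" variants often use a smaller group size
)
```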
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the MLX weights from the Hub
model, tokenizer = load("nightmedia/Qwen3-4B-Sky-High-Hermes-qx86-hi-mlx")

prompt = "hello"

# Apply the chat template when the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
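For longer generations you may prefer streaming output token by token instead of waiting for the full response. A minimal sketch using mlx-lm's `stream_generate`; the prompt text and `max_tokens` value are illustrative:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("nightmedia/Qwen3-4B-Sky-High-Hermes-qx86-hi-mlx")

messages = [{"role": "user", "content": "Explain quantization in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as it is produced
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```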