This model was trained by DavidAU on TeichAI's gemini-3-pro-preview-high-reasoning dataset.

Judging by test results alone, it beats every small model benchmarked here so far, and it posts the best ARC scores of any dense model in the 30B range.
| Model | Quant | Scores |
|---|---|---|
| Gemma-3-27b-it-Gemini-Deep-Reasoning | q8 | 0.590, 0.742, 0.883, 0.781, 0.458, 0.822, 0.751 |
This likely explains the model's smooth feel in use.
A few Nightmedia models for comparison:
| Model | Quant | Scores |
|---|---|---|
| Qwen3-30B-A3B-Element7-1M | qx86-hi | 0.578, 0.750, 0.883, 0.742, 0.478, 0.804, 0.684 |
| Qwen3-30B-A3B-Element6-1M | qx86-hi | 0.568, 0.737, 0.880, 0.760, 0.450, 0.803, 0.714 |
| Qwen3-42B-A3B-Architect | qx86-hi | 0.563, 0.719, 0.881, 0.761, 0.454, 0.805, 0.703 |
| Qwen3-32B-Element5-Heretic | qx86-hi | 0.483, 0.596, 0.738, 0.754, 0.394, 0.802, 0.710 |
| Qwen3-32B-Engineer4 | qx86-hi | 0.516, 0.661, 0.829, 0.753, 0.386, 0.798, 0.717 |
| Qwen3-4B-Agent-Claude | qx86-hi | 0.572, 0.763, 0.861, 0.708, 0.414, 0.773, 0.676 |
| Qwen3-4B-Engineer3x-F32 | qx86-hi | 0.613, 0.842, 0.855, 0.748, 0.428, 0.781, 0.709 |
| Qwen3-4B-Engineer3x2 | qx86-hi | 0.619, 0.829, 0.850, 0.747, 0.422, 0.776, 0.690 |
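Assuming the seven comma-separated values per model are equally weighted benchmark metrics (the metric names are not listed here), a quick way to compare models is by their mean score. A minimal sketch using a few of the rows quoted above:

```python
# Mean benchmark score per model, from the numbers quoted above.
# Assumes all seven metrics are equally weighted.
scores = {
    "Gemma-3-27b-it-Gemini-Deep-Reasoning (q8)":
        [0.590, 0.742, 0.883, 0.781, 0.458, 0.822, 0.751],
    "Qwen3-30B-A3B-Element7-1M (qx86-hi)":
        [0.578, 0.750, 0.883, 0.742, 0.478, 0.804, 0.684],
    "Qwen3-4B-Engineer3x-F32 (qx86-hi)":
        [0.613, 0.842, 0.855, 0.748, 0.428, 0.781, 0.709],
}

# Print models ranked by mean score, highest first
for name, vals in sorted(scores.items(),
                         key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{sum(vals) / len(vals):.3f}  {name}")
```

On these rows the Gemma model comes out on top with a mean of about 0.718.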
Perplexity is usually higher on Gemma than on Qwen:

| Quant | Perplexity |
|---|---|
| q8 | 10.968 ± 0.104 |
| mxfp4 | 12.381 ± 0.119 |
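For reference, perplexity is the exponential of the mean per-token negative log-likelihood, so the figures above correspond to average NLLs of roughly ln(10.968) ≈ 2.39 and ln(12.381) ≈ 2.52 nats per token. A minimal sketch of the relationship (not the actual evaluation code, and the log-probs below are made up for illustration):

```python
import math

def perplexity(token_logprobs):
    """Exponential of the mean negative log-likelihood over tokens."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Hypothetical per-token natural-log probabilities from one model run
logprobs = [-2.0, -2.5, -2.7]
print(perplexity(logprobs))  # exp(2.4), roughly 11.02
```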
This model Gemma-3-27b-it-Gemini-Deep-Reasoning-q8-mlx was converted to MLX format from DavidAU/Gemma-3-27b-it-Gemini-Deep-Reasoning using mlx-lm version 0.30.2.
Install the dependency:

```shell
pip install mlx-lm
```

Then load the model and generate, applying the chat template when the tokenizer provides one:

```python
from mlx_lm import load, generate

model, tokenizer = load("Gemma-3-27b-it-Gemini-Deep-Reasoning-q8-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template, if available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Quantization: 8-bit
Base model: google/gemma-3-27b-pt