How to use with llama.cpp
Install from Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
# Run inference directly in the terminal:
llama-cli -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
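Whichever install method you choose below, a running llama-server exposes an OpenAI-compatible HTTP API, listening on http://localhost:8080 by default. A minimal request sketch; with a single loaded model, the "model" field can be omitted:

# Query the chat completions endpoint of the running server
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Summarize what a GGUF file is."}]}'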
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
# Run inference directly in the terminal:
llama-cli -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
# Run inference directly in the terminal:
./llama-cli -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
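Release assets are named by build tag and platform. As an illustrative sketch for Linux x64 (the tag and asset name below are placeholders; check the releases page for the current ones):

# <tag> is a placeholder for the current release tag (a bXXXX build number)
curl -LO https://github.com/ggerganov/llama.cpp/releases/download/<tag>/llama-<tag>-bin-ubuntu-x64.zip
unzip llama-<tag>-bin-ubuntu-x64.zip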
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
# Run inference directly in the terminal:
./build/bin/llama-cli -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
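The default build is CPU-only; GPU backends are enabled with CMake flags at configure time. For example, a CUDA build (assuming the CUDA toolkit is installed) is a one-flag change:

# Enable the CUDA backend, then build as before
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli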
Use Docker
docker model run hf.co/shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
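llama.cpp also publishes its own server image, which is convenient if the GGUF is already on disk. A sketch with illustrative paths (the mount point and model filename below are assumptions; use whatever your download produced):

# Mount the model directory and expose the server port
docker run -v /path/to/models:/models -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server \
  -m /models/model-Q8_0.gguf --host 0.0.0.0 --port 8080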
gemma-4-31B-Claude-4.6-Opus-thinking-distilled-s7-multimodal

This release correctly configures and tunes the finetune for multimodality, making it substantially more capable.

A full-parameter fine-tune of Gemma 4 31B on ~12,000 Claude Opus 4.6 reasoning traces. This is an independently developed, purpose-built model.

Highlights

  • ~90% token accuracy after 4 epochs
  • Full parameter SFT, not LoRA
  • 12,000 pure Claude Opus 4.6 traces — consistent reasoning style, no mixed-model data
  • Native Gemma 4 thinking format — uses standard built-in thinking tokens
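Because the reasoning is emitted through the model's built-in thinking tokens, recent llama.cpp builds let you control how that reasoning appears in API responses. A sketch, assuming your build has the --jinja and --reasoning-format flags and the GGUF's embedded chat template is recognized:

# --jinja uses the chat template embedded in the GGUF (including its
# thinking tokens); --reasoning-format none leaves the raw reasoning
# inline in the response instead of splitting it into a separate field
llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0 \
  --jinja --reasoning-format none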

Excellent Performance

Reasoning & Knowledge

| Benchmark | S7 Score |
| --- | --- |
| MMLU Pro | 90.3% |
| GPQA Diamond | 89.4% |
| BigBench Extra Hard | 78.9% |
| MMMLU (Multilingual) | 93.7% |
| HLE (no tools) | 20.7% |
| HLE (with search) | 28.1% |

Mathematics & Coding

| Benchmark | S7 Score |
| --- | --- |
| AIME 2026 (no tools) | 94.6% |
| LiveCodeBench v6 | 84.8% |
| Codeforces Elo | 2279 |
| HumanEval | 96.7% |
| MBPP Plus | 94.0% |

Multimodal (Vision & Medical)

| Benchmark | S7 Score |
| --- | --- |
| MMMU Pro | 81.5% |
| MATH-Vision | 90.7% |
| MedXPertQA MM | 65.0% |

Agentic & Long Context

| Benchmark | S7 Score |
| --- | --- |
| τ²-bench (Average) | 81.5% |
| τ²-bench (Retail) | 91.6% |
| MRCR v2 (8-needle, 128k) | 70.4% |

Overall improvement: ~6%

Model Specifications

  • Parameters: 30.7B (Dense)
  • Architecture: 60 Layers
  • Context Window: 256K tokens
  • Vocabulary Size: 262,144
  • Native Modalities: Text, Image, Video (Frame sequences)
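With a 256K window and 30.7B dense parameters, memory is the main serving constraint. As a sketch, the standard llama.cpp flags below cap the KV cache and offload layers to the GPU (the sizes are illustrative, not recommendations):

# -c caps the context at 32K of the 256K maximum to bound KV-cache memory;
# -ngl 99 offloads as many layers as fit onto the GPU
llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0 \
  -c 32768 -ngl 99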