---
license: mit
base_model:
  - google/gemma-4-31B-it
library_name: transformers
tags:
  - gemma4
  - gemma
  - reasoning
  - claude-opus
  - distillation
  - full-finetune
  - llm
  - mlm
  - multimodal
  - video
  - text
  - audio
  - vision
language:
  - en
pipeline_tag: image-text-to-text
model_name: gemma-4-31B-Claude-4.6-Opus-thinking-distilled-s7
parameter_count: 30700000000
---

# gemma-4-31B-Claude-4.6-Opus-thinking-distilled-s7-multimodal


This release updates the fine-tune so it is listed and configured correctly for multimodality.

Full-parameter fine-tune of Gemma 4 31B on ~12,000 Claude Opus 4.6 reasoning traces, built entirely in-house.
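For reference, here is a sketch of how one distillation trace might be packed into a chat-format SFT record. The record schema and the `<thought>` markers are assumptions for illustration; the actual dataset format is not published with the model.

```python
def make_sft_example(prompt: str, thinking: str, answer: str) -> dict:
    """Pack one Claude Opus reasoning trace into a chat-format SFT record.

    Hypothetical schema: the <thought> markers stand in for whatever
    thinking tokens the tokenizer actually defines.
    """
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {
                "role": "assistant",
                "content": f"<thought>{thinking}</thought>{answer}",
            },
        ]
    }

example = make_sft_example("What is 2 + 2?", "2 plus 2 is 4.", "4")
```

Records in this shape can be fed directly to a chat-template-aware SFT trainer.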

## Highlights

- ~90% token accuracy after 4 epochs
- Full-parameter SFT, not LoRA
- 12,000 pure Claude Opus 4.6 traces — consistent reasoning style, no mixed-model data
- Native Gemma 4 thinking format — uses the standard built-in thinking tokens
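A minimal inference sketch with 🤗 Transformers. The repo id and the `<thought>` markers below are assumptions (check the model's tokenizer config for the actual thinking tokens); `demo()` is not called automatically because it triggers a large download.

```python
import re


def split_thinking(text: str) -> tuple[list[str], str]:
    """Split assumed <thought>...</thought> reasoning blocks from the final answer."""
    thoughts = re.findall(r"<thought>(.*?)</thought>", text, flags=re.DOTALL)
    answer = re.sub(r"<thought>.*?</thought>", "", text, flags=re.DOTALL).strip()
    return thoughts, answer


def demo() -> None:
    """Illustrative only -- downloads a ~31B model. Call explicitly to run."""
    from transformers import AutoModelForImageTextToText, AutoProcessor

    model_id = "shreyan35/gemma-4-31B-Claude-4.6-Opus-thinking-distilled-s7"  # assumed repo id
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user",
                 "content": [{"type": "text", "text": "Why is the sky blue?"}]}]
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=512)
    text = processor.decode(out[0][inputs["input_ids"].shape[-1]:],
                            skip_special_tokens=False)
    thoughts, answer = split_thinking(text)
    print(answer)
```

`split_thinking` lets downstream code drop the reasoning trace and keep only the final answer.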

## Performance

### Reasoning & Knowledge

| Benchmark | S7 Score |
|---|---|
| MMLU Pro | 90.3% |
| GPQA Diamond | 89.4% |
| BigBench Extra Hard | 78.9% |
| MMMLU (Multilingual) | 93.7% |
| HLE (no tools) | 20.7% |
| HLE (with search) | 28.1% |

### Mathematics & Coding

| Benchmark | S7 Score |
|---|---|
| AIME 2026 (no tools) | 94.6% |
| LiveCodeBench v6 | 84.8% |
| Codeforces Elo | 2279 |
| HumanEval | 96.7% |
| MBPP Plus | 94.0% |

### Multimodal (Vision & Medical)

| Benchmark | S7 Score |
|---|---|
| MMMU Pro | 81.5% |
| MATH-Vision | 90.7% |
| MedXPertQA MM | 65.0% |

### Agentic & Long Context

| Benchmark | S7 Score |
|---|---|
| τ²-bench (Average) | 81.5% |
| τ²-bench (Retail) | 91.6% |
| MRCR v2 (8-needle, 128k) | 70.4% |

**Overall improvement:** 6%

## Model Specifications

- **Parameters:** 30.7B (dense)
- **Architecture:** 60 layers
- **Context window:** 256K tokens
- **Vocabulary size:** 262,144
- **Native modalities:** text, image, video (frame sequences)
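Since video is ingested as frame sequences, the 256K-token context window puts a hard ceiling on clip length. A rough budget calculation, assuming a per-frame token cost (the 256 tokens/frame figure is an assumption, not a published spec):

```python
def frame_budget(context_tokens: int = 262_144,
                 tokens_per_frame: int = 256,
                 reserved_for_text: int = 8_192) -> int:
    """Estimate how many video frames fit in the context window.

    Assumes each encoded frame costs `tokens_per_frame` tokens and
    reserves `reserved_for_text` tokens for the prompt and the reply.
    """
    return (context_tokens - reserved_for_text) // tokens_per_frame


# (262_144 - 8_192) // 256 = 992 frames under these assumptions,
# i.e. about half a minute of video at 30 fps without frame subsampling.
print(frame_budget())
```

In practice, frames are usually subsampled (e.g. 1 fps), which stretches the same budget over a much longer clip.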