Instructions for using shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF", dtype="auto")
```
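Since the pipeline task is `image-text-to-text`, inference takes chat-style messages that can mix image and text content. Below is a minimal sketch, assuming the checkpoint loads through Transformers' GGUF support (GGUF repos may additionally need `pip install gguf` and a `gguf_file` argument); the image URL and generation settings are placeholders, not part of this repo's documentation:

```python
# Hedged usage sketch for the image-text-to-text pipeline.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF",
)

# Chat-style message mixing an image and a text prompt; the URL below
# is a placeholder, substitute a real input.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# The pipeline applies the model's chat template and generates a reply.
out = pipe(text=messages, max_new_tokens=128)
print(out[0]["generated_text"])
```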
- llama-cpp-python

How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF",
    filename="gemma-4-31b-claude-4.6-opus-thinking-distilled-s7-q8_0.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
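For chat-style prompting instead of raw completion, llama-cpp-python also exposes `create_chat_completion`, which applies the chat template embedded in the GGUF file when one is present. A minimal sketch (the prompt and token budget are illustrative):

```python
# Chat-style inference with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF",
    filename="gemma-4-31b-claude-4.6-opus-thinking-distilled-s7-q8_0.gguf",
)

# Uses the chat template shipped inside the GGUF file, if available.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what the GGUF format is in two sentences."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```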
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
Use Docker
```sh
docker model run hf.co/shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
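However you install it, `llama-server` exposes an OpenAI-compatible API, by default on port 8080. A minimal Python client sketch (the prompt and token budget are illustrative; adjust the port if you started the server with `--port`):

```python
# Query a running llama-server over its OpenAI-compatible
# /v1/chat/completions endpoint (default port 8080).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Once upon a time,"}],
        "max_tokens": 256,
    },
    timeout=300,  # a 31B model can take a while to generate
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```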
- LM Studio
- Jan
- vLLM
How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```sh
docker model run hf.co/shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
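The vLLM server started above speaks the OpenAI protocol on port 8000, so it can also be called with the official `openai` client (`pip install openai`) instead of curl. A sketch, assuming the server from the steps above is running:

```python
# OpenAI-client sketch against a local vLLM server.
from openai import OpenAI

# vLLM does not check the API key by default, so any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```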
- SGLang
How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
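The same endpoint can be called from Python rather than curl; a minimal sketch with `requests`, mirroring the payload above:

```python
# Python equivalent of the curl call against the SGLang server.
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```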
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Ollama
How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with Ollama:
```sh
ollama run hf.co/shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
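Once pulled, the model can also be driven programmatically with the official `ollama` Python package (`pip install ollama`). A minimal sketch, assuming the Ollama daemon is running and the model tag matches the command above:

```python
# Sketch using the ollama Python package against a local Ollama daemon.
import ollama

response = ollama.chat(
    model="hf.co/shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0",
    messages=[{"role": "user", "content": "Once upon a time,"}],
)
print(response["message"]["content"])
```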
- Unsloth Studio
How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF to start chatting
```
- Docker Model Runner
How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with Docker Model Runner:
```sh
docker model run hf.co/shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
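Docker Model Runner also exposes an OpenAI-compatible API. The sketch below assumes host-side TCP access is enabled in Docker Desktop, which publishes the documented default endpoint `http://localhost:12434/engines/v1`; verify both the port and path against your Docker version:

```python
# Hedged sketch: call Docker Model Runner's OpenAI-compatible endpoint.
# Port 12434 and the /engines/v1 prefix are Docker's documented defaults,
# but host-side TCP support must be enabled and may vary by version.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0",
        "messages": [{"role": "user", "content": "Once upon a time,"}],
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```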
- Lemonade
How to use shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull shreyan35/gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF:Q8_0
```
Run and chat with the model
```sh
lemonade run user.gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF-Q8_0
```
List all available models
```sh
lemonade list
```
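Lemonade Server likewise serves an OpenAI-compatible API. The base URL below (`http://localhost:8000/api/v1`) is an assumption taken from Lemonade's documentation; confirm the port and path for your installed version:

```python
# Hedged sketch: chat with a Lemonade-served model via the openai client.
# The base URL is an assumption; check your Lemonade Server settings.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")

chat = client.chat.completions.create(
    model="user.gemma-4-31b-claude-opus-4.6-thinking-distilled-s7-GGUF-Q8_0",
    messages=[{"role": "user", "content": "Once upon a time,"}],
)
print(chat.choices[0].message.content)
```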
```yaml
license: mit
base_model:
- google/gemma-4-31B-it
library_name: transformers
tags:
- gemma4
- gemma
- reasoning
- claude-opus
- distillation
- full-finetune
- llm
- mlm
- multimodal
- video
- text
- audio
- vision
language:
- en
pipeline_tag: image-text-to-text
model_name: gemma-4-31B-Claude-4.6-Opus-thinking-distilled-s7
parameter_count: 30700000000
```
# gemma-4-31B-Claude-4.6-Opus-thinking-distilled-s7-multimodal

<div align="center">
  <img src="https://huggingface.co/shreyan35/gemma-4-31B-Claude-4.6-Opus-thinking-distilled-s7/resolve/main/banner.jpeg" width="100%" alt="S7 Banner">
</div>
**_This release now lists and tunes the finetune correctly for multimodality, making it far more capable._**

Full-parameter fine-tune of Gemma 4 31B on ~12,000 Claude Opus 4.6 reasoning traces. The model was developed indigenously.
## Highlights

- **~90% token accuracy** after 4 epochs
- **Full-parameter SFT**, not LoRA
- **12,000 pure Claude Opus 4.6 traces:** consistent reasoning style, no mixed-model data
- **Native Gemma 4 thinking format:** uses the standard built-in thinking tokens
## Benchmark Performance
### Reasoning & Knowledge

| Benchmark | S7 Score |
| :--- | :--- |
| MMLU Pro | 90.3% |
| GPQA Diamond | 89.4% |
| BigBench Extra Hard | 78.9% |
| MMMLU (Multilingual) | 93.7% |
| HLE (no tools) | 20.7% |
| HLE (with search) | 28.1% |
### Mathematics & Coding

| Benchmark | S7 Score |
| :--- | :--- |
| AIME 2026 (no tools) | 94.6% |
| LiveCodeBench v6 | 84.8% |
| Codeforces Elo | 2279 |
| HumanEval | 96.7% |
| MBPP Plus | 94.0% |
### Multimodal (Vision & Medical)

| Benchmark | S7 Score |
| :--- | :--- |
| MMMU Pro | 81.5% |
| MATH-Vision | 90.7% |
| MedXPertQA MM | 65.0% |
### Agentic & Long Context

| Benchmark | S7 Score |
| :--- | :--- |
| τ²-bench (Average) | 81.5% |
| τ²-bench (Retail) | 91.6% |
| MRCR v2 (8-needle, 128k) | 70.4% |

**Overall improvement: 6%**
## Model Specifications

- **Parameters:** 30.7B (dense)
- **Architecture:** 60 layers
- **Context window:** 256K tokens
- **Vocabulary size:** 262,144
- **Native modalities:** text, image, video (frame sequences)