🦆 zecanard/gemma-4-31B-it-Claude-Opus-Distilled-MLX-8bit-mxfp8
This model was converted to MLX from TeichAI/gemma-4-31B-it-Claude-Opus-Distill using mlx-vlm version 0.4.4.
Please refer to the original model card for more details.
🌟 Quality
Quantized vision-language model with an effective 8.564 bits per weight, produced with:
mlx_vlm.convert --quantize --q-bits 8 --q-group-size 32 --q-mode mxfp8
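For reference: mxfp8 stores an 8-bit shared scale for each group of 32 8-bit weights, so a fully quantized tensor works out to 8 + 8/32 = 8.25 bits per weight. The higher effective figure here presumably reflects tensors kept in higher precision (an assumption; the card does not break this down).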
🛠️ Customizations
This quant is aware of the current date and also enables thinking (if available). You can disable thinking by deleting the following line from the chat template:
{%- set enable_thinking = true %}
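If you'd rather script that edit than make it by hand, the sketch below strips the line from a local copy of the template. It assumes the template ships as a standalone chat_template.jinja in your snapshot of the repo; it may instead live in the chat_template field of tokenizer_config.json.

```python
# Minimal sketch: remove the enable_thinking line from a local copy of the
# chat template. The filename is an assumption; adjust it to your snapshot.
from pathlib import Path

template_path = Path("chat_template.jinja")  # hypothetical local path
template = template_path.read_text()
template = template.replace("{%- set enable_thinking = true %}", "")
template_path.write_text(template)
```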
You may also need to adjust your environment’s Reasoning Section Parsing settings to recognize <|channel>thought as the Start String and <channel|> as the End String.
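For clients that lack such a setting, the reasoning section can be split off by hand. A minimal sketch in plain Python, using the Start/End strings above:

```python
# Minimal sketch: separate the reasoning section from the final answer
# using the start/end strings this quant emits.
def split_reasoning(text: str,
                    start: str = "<|channel>thought",
                    end: str = "<channel|>") -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty if no section is found."""
    i = text.find(start)
    if i == -1:
        return "", text
    j = text.find(end, i + len(start))
    if j == -1:  # unterminated reasoning section
        return text[i + len(start):].strip(), ""
    reasoning = text[i + len(start):j].strip()
    answer = (text[:i] + text[j + len(end):]).strip()
    return reasoning, answer
```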
🖥️ Use with mlx
pip install -U mlx-vlm
mlx_vlm.generate --model zecanard/gemma-4-31B-it-Claude-Opus-Distilled-MLX-8bit-mxfp8 --max-tokens 100 --temperature 0 --prompt "Describe this image." --image <path_to_image>
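The same generation can be scripted through the Python API. This follows the load/generate pattern from the mlx-vlm README; exact argument names have shifted between releases, so treat the signatures below as assumptions for your installed version.

```python
# Minimal sketch: the CLI call above, via mlx-vlm's Python API.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "zecanard/gemma-4-31B-it-Claude-Opus-Distilled-MLX-8bit-mxfp8"
model, processor = load(model_path)
config = load_config(model_path)

image = ["path/to/image.jpg"]  # replace with your image
prompt = apply_chat_template(processor, config, "Describe this image.",
                             num_images=len(image))
output = generate(model, processor, prompt, image,
                  max_tokens=100, temperature=0.0, verbose=False)
print(output)
```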
Model tree for zecanard/gemma-4-31B-it-Claude-Opus-Distilled-MLX-8bit-mxfp8:
google/gemma-4-31B-it → unsloth/gemma-4-31B-it → TeichAI/gemma-4-31B-it-Claude-Opus-Distill → this quant