🦆 zecanard/gemma-4-31b-it-uncensored-heretic-ara-MLX-4bit-mixed_4_6
This model was converted to MLX from trohrbaugh/gemma-4-31b-it-heretic-ara using mlx-vlm version 0.4.4.
Please refer to the original model card for more details.
🌟 Quality
Mixed-precision quantized vision-language model with an effective 5.064 bits per weight. It combines the smaller size of a 4-bit quant with higher precision where it matters most.
```shell
mlx_vlm.convert --quantize --q-group-size 32 --quant-predicate mixed_4_6
```
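The quoted 5.064 bits per weight is a weighted average over layers quantized at different precisions. A minimal sketch of that arithmetic, where the layer split is purely illustrative and not the actual mix chosen by the `mixed_4_6` predicate:

```python
def effective_bpw(param_groups):
    """Weighted-average bits per weight across parameter groups.

    param_groups: list of (num_weights, bits_per_weight) pairs.
    The counts below are hypothetical, for illustration only.
    """
    total_bits = sum(n * b for n, b in param_groups)
    total_weights = sum(n for n, _ in param_groups)
    return total_bits / total_weights

# e.g. 90% of weights at 4-bit, 10% at 6-bit:
print(effective_bpw([(900, 4.0), (100, 6.0)]))  # 4.2
```

In practice the effective figure also includes per-group scale/bias overhead, which is why the reported value is not a round number.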
🛠️ Customizations
This quant is aware of the current date and also enables thinking (if available). You can disable this behavior by deleting the following line from the chat template, or by changing `true` to `false`:
{%- set enable_thinking = true %}
You may also need to adjust your environment's reasoning-section parsing to recognize `<|channel>thought` as the start string and `<channel|>` as the end string.
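If your front end doesn't expose those settings, a minimal Python sketch of the same parsing, using the start/end strings above (the function name and return shape are this example's own, not part of mlx-vlm):

```python
START = "<|channel>thought"  # start string from the section above
END = "<channel|>"           # end string from the section above

def split_reasoning(text):
    """Split model output into (thinking, answer).

    Returns (None, text) when no complete thinking section is found.
    """
    if START in text and END in text:
        pre, rest = text.split(START, 1)
        thought, post = rest.split(END, 1)
        return thought.strip(), (pre + post).strip()
    return None, text.strip()

thought, answer = split_reasoning(
    "<|channel>thought Let me count the cats.<channel|>There are two cats."
)
print(thought)  # Let me count the cats.
print(answer)   # There are two cats.
```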
🖥️ Use with mlx
```shell
pip install -U mlx-vlm
mlx_vlm.generate --model zecanard/gemma-4-31b-it-uncensored-heretic-ara-MLX-4bit-mixed_4_6 --max-tokens 100 --temperature 0 --prompt "Describe this image." --image <path_to_image>
```
```python
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("zecanard/gemma-4-31b-it-uncensored-heretic-ara-MLX-4bit-mixed_4_6")
config = load_config("zecanard/gemma-4-31b-it-uncensored-heretic-ara-MLX-4bit-mixed_4_6")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```