---
language: en
pipeline_tag: image-text-to-text
library_name: mlx
tags:
- mlx
---

# 🦆 zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-6bit-affine

[This model](https://huggingface.co/zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-6bit-affine) was converted to MLX from [`wangzhang/gemma-4-26B-A4B-it-abliterated`](https://huggingface.co/wangzhang/gemma-4-26B-A4B-it-abliterated) using `mlx-vlm` version **0.4.4**.

Please refer to the [original model card](https://huggingface.co/wangzhang/gemma-4-26B-A4B-it-abliterated) for more details.

## 🌟 Quality

Quantized vision-language model with **7.237 bits per weight**, produced with:

`mlx_vlm.convert --quantize --q-bits 6 --q-group-size 32 --q-mode affine`

## 🛠️ Customizations

This quant is aware of the current date and enables thinking (if available). You can disable this behavior by deleting the following line from the chat template:

`{%- set enable_thinking = true %}`

You may also need to adjust your environment's **Reasoning Section Parsing** to recognize `<|channel>thought` as the Start String, and `` as the End String.

## 🖥️ Use with `mlx-vlm`

```bash
pip install -U mlx-vlm
```

```bash
mlx_vlm.generate --model zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-6bit-affine --max-tokens 100 --temperature 0 --prompt "Describe this image." --image
```