---
license: gemma
base_model: wangzhang/gemma-4-26B-A4B-it-abliterix
language: en
pipeline_tag: image-text-to-text
library_name: mlx
tags:
- mlx
- abliterix
- decensored
- abliterated
- uncensored
- gemma4
- moe
- direct-weight-editing
- expert-granular-abliteration
- projected-abliteration
---

# 🦆 zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-6bit-int6-affine

[This model](https://huggingface.co/zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-6bit-int6-affine) was converted to MLX from [`wangzhang/gemma-4-26B-A4B-it-abliterix`](https://huggingface.co/wangzhang/gemma-4-26B-A4B-it-abliterix) using `mlx-vlm` version **0.4.4**. Please refer to the [original model card](https://huggingface.co/wangzhang/gemma-4-26B-A4B-it-abliterix) for more details.

## 🌟 Quality

Quantized vision-language model with an effective **7.237 bits per weight**, produced with:

```bash
mlx_vlm.convert --quantize --q-group-size 32 --q-bits 6 --q-mode affine
```

## 🛠️ Customizations

This quant's chat template is aware of the current date and also enables thinking (if available). You can disable this behavior by deleting the following line from the chat template, or by changing `true` to `false`:

```
{%- set enable_thinking = true %}
```

You may also need to adjust your environment's **Reasoning Section Parsing** to recognize `<|channel>thought` as the **Start String**, and `` as the **End String**.

## 🖥️ Use with `mlx`

```bash
pip install -U mlx-vlm
```

```bash
mlx_vlm.generate --model zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-6bit-int6-affine --max-tokens 100 --temperature 0 --prompt "Describe this image." --image
```
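As a rough sanity check on the reported bits-per-weight, here is a minimal sketch of the arithmetic behind grouped affine quantization. It assumes each group of weights stores one 16-bit scale and one 16-bit bias alongside the 6-bit values (a common MLX layout; the exact overhead is an assumption, not taken from this repo):

```python
def affine_bpw(q_bits: int, group_size: int, overhead_bits: int = 32) -> float:
    """Effective bits per weight for grouped affine quantization.

    Each group of `group_size` weights is assumed to carry
    `overhead_bits` of metadata (e.g. a 16-bit scale + 16-bit bias).
    """
    return q_bits + overhead_bits / group_size

# 6-bit weights, group size 32 -> 6 + 32/32 = 7.0 bits per weight
print(affine_bpw(6, 32))
```

Under these assumptions the quantized layers come out to 7.0 bpw; the reported overall figure of 7.237 bpw is plausibly a little higher because some tensors (such as embeddings or norm parameters) are typically kept at higher precision.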