---
language:
- en
- zh
- ko
license: apache-2.0
base_model: Jackrong/Qwopus3.5-27B-v3
tags:
- unsloth
- qwen
- qwen3.5
- reasoning
- chain-of-thought
- lora
- competitive-programming
- mlx
pipeline_tag: image-text-to-text
library_name: mlx
---

# MLX-Qwopus3.5-27B-v3-vision-6bit
A 6-bit MLX quantization of Jackrong/Qwopus3.5-27B-v3, with adjustments to restore the model's multimodal (vision) capabilities, which the text-only quants drop.
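For local inference, the model can be run through the mlx-vlm CLI. A minimal sketch (the repo id is assumed from the naming pattern above, and `photo.jpg` is a placeholder for your own image):

```shell
pip install mlx-vlm

# Generate a description of a local image with the 6-bit vision quant
# (repo id assumed; substitute the actual Hub path if it differs)
python -m mlx_vlm.generate \
  --model matt-here/MLX-Qwopus3.5-27B-v3-vision-6bit \
  --image photo.jpg \
  --prompt "Describe this image." \
  --max-tokens 256
```

At ~22.9 GB, the weights fit comfortably on a 32 GB Apple Silicon machine.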
## Quantization Details
| Property | Value |
|---|---|
| Method | 6-bit (6.661 bits per weight) |
| Tool | mlx-vlm 0.4.2 via `mlx_vlm.convert` |
| Size | ~22.9 GB |
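A conversion along these lines should reproduce this quant (a sketch, not the exact command used; flags follow the `mlx_vlm.convert` conventions and the output path is a placeholder):

```shell
# Quantize the bf16 base model to 6-bit MLX weights with vision layers intact
python -m mlx_vlm.convert \
  --hf-path Jackrong/Qwopus3.5-27B-v3 \
  --mlx-path ./MLX-Qwopus3.5-27B-v3-vision-6bit \
  -q --q-bits 6
```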
## Other Available Quants
| Model | Size | Quantization | Bits per weight | Multimodal |
|---|---|---|---|---|
| Jackrong/MLX-Qwopus3.5-27B-v3-4bit | 15.15 GB | 4-bit | 4.501 | ✗ |
| matt-here/MLX-Qwopus3.5-27B-v3-vision-4bit | 16.08 GB | 4-bit | 4.695 | ✓ (Vision) |
| matt-here/MLX-Qwopus3.5-27B-v3-5bit | 18.56 GB | 5-bit | 5.501 | ✗ |
| matt-here/MLX-Qwopus3.5-27B-v3-vision-5bit | 19.46 GB | 5-bit | 5.678 | ✓ (Vision) |
| Jackrong/MLX-Qwopus3.5-27B-v3-6bit | 21.88 GB | 6-bit | 6.501 | ✗ |
| (This model) | 22.85 GB | 6-bit | 6.661 | ✓ (Vision) |
| Jackrong/MLX-Qwopus3.5-27B-v3-bf16 | 53.81 GB | bf16 | 16 | ✗ |
GGUF quants: Jackrong/Qwopus3.5-27B-v3-GGUF
## Credits
- Alibaba Qwen Team - the Qwen 3.5 27B dense base model
- Jackrong - Claude 4.6 Opus v3 distillation work
- Unsloth - training framework
- Apple MLX Team - high-speed local inference on Apple Silicon