---
language:
- en
library_name: mlx
license: apache-2.0
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen3.6-35B-A3B
tags:
- quantized
- apple-silicon
- mlx
- qwen3
- qwen3_5_moe
- moe
- vision
- hybrid-attention
- gated-deltanet
- turboquant
- jangtq
- jangtq2
---
TurboQuant codebook quantization of Alibaba's hybrid linear/full-attention agentic MoE — routed experts at 2-bit via Lloyd-Max codebooks + Hadamard rotation, attention / embed / shared-expert / lm_head at 8-bit affine, vision tower preserved.
---

## Model Details

| Property | Value |
|---|---|
| **Base model** | [`Qwen/Qwen3.6-35B-A3B`](https://huggingface.co/Qwen/Qwen3.6-35B-A3B) |
| **Parameters (source)** | 35 B total, ~3 B active per token |
| **Architecture** | `qwen3_5_moe` — 40 decoder layers: 30 Gated DeltaNet (linear attention) + 10 full attention, 256 routed experts + 1 always-on shared expert |
| **Quantization format** | `weight_format: mxtq` — routed experts via TurboQuant codebook (2-bit); everything else affine 8-bit or fp16 passthrough |
| **Routed-expert storage** | `.tq_packed` (uint32) + `.tq_norms` (fp16) + `.tq_bits` (uint8); codebook + Hadamard signs re-derived deterministically at load |
| **Package size on disk** | **11.63 GB** across 12 shards |
| **Shipped tensors** | 1,930 total (1,597 language-model + 333 vision tower + 120 routed-expert TQ triples) |
| **Vocab** | 248,320 |
| **Context (position embeddings)** | 262,144 native; the upstream model card reports up to ~1 M with YaRN scaling |
| **Vision tower** | 27-layer ViT (hidden 1152, patch 16), preserved in fp16 |
| **Chat format** | Qwen im_start/im_end, unified thinking toggle |

### Quantization details, per tensor category

| Category | Bits | Group / codebook | Notes |
|---|---|---|---|
| **Routed-expert MLP** (`mlp.experts.gate_up_proj`, `down_proj`) | **2 (JANGTQ)** | 2^2 Lloyd-Max centroids + Hadamard rotation | `.tq_packed` + `.tq_norms` + `.tq_bits` triples |
| Embedding (`embed_tokens`), `lm_head` | 8 (affine) | group 64 | MLX-native `QuantizedLinear` |
| Full-attention projections (`q_proj`, `k_proj`, `v_proj`, `o_proj`) | 8 (affine) | group 64 | Gate-doubled `q_proj` for `attn_output_gate` |
| Linear-attention projections (`in_proj_qkv`, `in_proj_z`, `in_proj_b`, `in_proj_a`, `out_proj`) | 8 (affine) | group 64 | Gated DeltaNet |
| Shared-expert MLP (`gate_proj`, `up_proj`, `down_proj`) | 8 (affine) | group 64 | Always active per token |
| Router (`mlp.gate`) | fp16 passthrough | — | Precision-critical |
| Shared-expert gate (`shared_expert_gate`) | fp16 passthrough | — | Sigmoid scalar gate |
| Norms (`*_layernorm`, `*_norm`), `A_log`, `dt_bias`, `conv1d` | fp16 passthrough | — | Un-quantized |
| Vision tower (333 tensors) | fp16 passthrough | — | `patch_embed.proj` axes pre-transposed to MLX layout |

JANGTQ ("TurboQuant") stores routed-expert weights as indices into a small Lloyd-Max codebook with a per-row norm, applied after a randomized Hadamard rotation that concentrates the weight distribution so quantization error is spread uniformly. At inference, the input is rotated once per layer (a cheap fused Metal kernel) and dot products run directly against the codebook centroids, so weights are never dequantized back to affine form. At the same bit budget, this gives both better quality and faster decode than affine 2-bit on the routed-expert MLP path.

---

## Usage

**JANGTQ requires our custom loader** — stock `mlx_lm.load()` cannot parse `.tq_packed` tensors. You need `jang-tools` (free, public).
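The rotate-then-snap scheme described above can be sketched in plain NumPy. This is an illustrative toy, not the jang-tools implementation: the 4-centroid (2-bit) codebook, the random sign vector, and the per-row normalization are assumptions based on the description in this card.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two. Normalized so H is orthonormal.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def lloyd_max(x, k=4, iters=50):
    # 1-D Lloyd-Max: k centroids minimizing mean-squared error over samples x.
    c = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        idx = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(idx == j):
                c[j] = x[idx == j].mean()
    return np.sort(c)

def tq_encode(row, signs, codebook):
    # Rotate (random-sign diagonal, then Hadamard), normalize, snap to nearest centroid.
    n = row.size
    r = hadamard(n) @ (signs * row)
    norm = np.linalg.norm(r)
    idx = np.abs((r / norm)[:, None] - codebook[None, :]).argmin(axis=1)
    return idx.astype(np.uint8), norm

def tq_decode(idx, norm, signs, codebook):
    # Inverse: centroids -> rescale by the stored norm -> inverse rotation (H is orthogonal).
    n = idx.size
    return signs * (hadamard(n).T @ (codebook[idx] * norm))

rng = np.random.default_rng(0)
n = 64
w = rng.standard_normal(n)
signs = rng.choice([-1.0, 1.0], size=n)

# Fit a 2-bit (4-centroid) codebook on the rotated, normalized row.
r = hadamard(n) @ (signs * w)
cb = lloyd_max(r / np.linalg.norm(r), k=4)

idx, norm = tq_encode(w, signs, cb)
w_hat = tq_decode(idx, norm, signs, cb)
err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative reconstruction error: {err:.3f}")
```

Note the design point the card alludes to: because the rotation makes the coordinates nearly i.i.d. Gaussian, a single small codebook fits every row well, and at inference one can rotate the activation instead of dequantizing the weights.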
Packaged on Apple Silicon with jang-tools (mlx-lm 0.31.2).
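For illustration, 2-bit codebook indices like those in the `.tq_packed` tensors can be bit-packed into uint32 words as sketched below. The actual on-disk layout (index order within a word, endianness) is not documented in this card and is an assumption here.

```python
import numpy as np

def pack_2bit(idx):
    # Pack 16 two-bit indices into each uint32, lowest bits first (assumed layout).
    assert idx.size % 16 == 0
    idx = idx.astype(np.uint32).reshape(-1, 16)
    shifts = np.arange(16, dtype=np.uint32) * 2
    return (idx << shifts).sum(axis=1, dtype=np.uint32)

def unpack_2bit(packed, n):
    # Recover the first n two-bit indices from the packed uint32 words.
    shifts = np.arange(16, dtype=np.uint32) * 2
    idx = (packed[:, None] >> shifts) & 0x3
    return idx.reshape(-1).astype(np.uint8)[:n]

rng = np.random.default_rng(1)
idx = rng.integers(0, 4, size=64).astype(np.uint8)
packed = pack_2bit(idx)
assert np.array_equal(unpack_2bit(packed, idx.size), idx)
print(packed.dtype, packed.size)  # 64 indices fit in 64/16 = 4 uint32 words
```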
© 2026 Osaurus AI — osaurus.ai