How to use from MLX LM

Generate or start a chat session
# Install MLX LM
uv tool install mlx-lm
# Interactive chat REPL
mlx_lm.chat --model "bearzi/Qwen3.5-122B-A10B-JANG_4L"
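For a one-shot completion instead of an interactive session, MLX LM also ships a generate command:
# Single prompt, non-interactive generation
mlx_lm.generate --model "bearzi/Qwen3.5-122B-A10B-JANG_4L" --prompt "Hello" --max-tokens 256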
Run an OpenAI-compatible server
# Install MLX LM
uv tool install mlx-lm
# Start the server (listens on http://localhost:8080 by default)
mlx_lm.server --model "bearzi/Qwen3.5-122B-A10B-JANG_4L"
# Call the OpenAI-compatible server with curl
curl -X POST "http://localhost:8080/v1/chat/completions" \
   -H "Content-Type: application/json" \
   --data '{
     "model": "bearzi/Qwen3.5-122B-A10B-JANG_4L",
     "messages": [
       {"role": "user", "content": "Hello"}
     ]
   }'
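The server can also be called with the official openai Python client. A minimal sketch, assuming the server above is running on its default port 8080 (the api_key is required by the client but ignored by mlx_lm.server):

from openai import OpenAI

# Point the client at the local MLX LM server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="bearzi/Qwen3.5-122B-A10B-JANG_4L",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)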
Qwen3.5-122B-A10B-JANG_4L

A JANG adaptive mixed-precision MLX quantization of Qwen3.5-122B-A10B, produced via vmlx / jang-tools.

  • Quantization: 4.12 bits per weight on average, method mse-all, activation-based calibration
  • Profile: JANG_4L
  • Format: JANG v2 MLX safetensors
  • Compatible with: vmlx, MLX Studio, oMLX (with JANG patch)

Usage

vmlx (recommended)

pip install 'vmlx[jang]'
vmlx serve bearzi/Qwen3.5-122B-A10B-JANG_4L

Python

from jang_tools.loader import load_jang_model
from mlx_lm import generate

# Load the JANG-quantized weights and matching tokenizer from the Hub.
model, tokenizer = load_jang_model("bearzi/Qwen3.5-122B-A10B-JANG_4L")

# Build a chat prompt with the model's chat template.
messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
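For incremental output, the same loaded model can be used with mlx_lm's stream_generate. A minimal sketch, assuming a recent mlx-lm where stream_generate yields response objects whose .text field holds each newly decoded segment:

from jang_tools.loader import load_jang_model
from mlx_lm import stream_generate

model, tokenizer = load_jang_model("bearzi/Qwen3.5-122B-A10B-JANG_4L")
messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Print text as it is generated rather than waiting for the full reply.
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()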

About JANG

JANG (Jang Adaptive N-bit Grading) assigns different bit widths to different layer types: attention layers get more bits, while MLP/expert layers are compressed more aggressively. This preserves model coherence at aggressive compression levels where uniform quantization breaks down.
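As an illustration only, a per-layer bit-width map in this spirit might look like the sketch below. The patterns, bit widths, and the bits_for helper are hypothetical; the actual JANG_4L assignment is defined by jang-tools.

import re

# Hypothetical layer-pattern -> bit-width table; order matters (first match wins).
BIT_MAP = [
    (re.compile(r"self_attn\.(q|k|v|o)_proj"), 6),  # attention: more bits
    (re.compile(r"mlp\.experts\."), 3),             # MoE experts: compress harder
    (re.compile(r"mlp\."), 4),                      # remaining MLP projections
]
DEFAULT_BITS = 4

def bits_for(layer_name: str) -> int:
    for pattern, bits in BIT_MAP:
        if pattern.search(layer_name):
            return bits
    return DEFAULT_BITS

print(bits_for("model.layers.0.self_attn.q_proj"))       # 6
print(bits_for("model.layers.0.mlp.experts.3.up_proj"))  # 3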

See JANG documentation and scores at jangq.ai.

Comparative benchmarks and feedback are welcome; please open a discussion.
