# Qwen3.5-122B-A10B-JANG
## Use with Pi

```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the model to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "bearzi/Qwen3.5-122B-A10B-JANG_4L"
        }
      ]
    }
  }
}
```

```bash
# Start Pi in your project directory:
pi
```

This model is a JANG adaptive mixed-precision MLX quantization, produced via vmlx / jang-tools:
```bash
pip install 'vmlx[jang]'
vmlx serve bearzi/Qwen3.5-122B-A10B-JANG_4L
```
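If `vmlx serve` exposes an OpenAI-compatible endpoint at `http://localhost:8080/v1` (an assumption here, matching the Pi config above; check the vmlx docs for the actual host and port), a quick smoke test from Python looks like this:

```python
import json
import urllib.request

# Minimal smoke test. Assumes the server exposes an OpenAI-compatible
# /v1/chat/completions endpoint at localhost:8080, matching the Pi config above.
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "bearzi/Qwen3.5-122B-A10B-JANG_4L",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 128,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```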
Or load it directly in Python:

```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

# Load the JANG-quantized model and its tokenizer from the Hub.
model, tokenizer = load_jang_model("bearzi/Qwen3.5-122B-A10B-JANG_4L")

# Build a chat prompt with the model's chat template, then generate.
messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
```
JANG (Jang Adaptive N-bit Grading) assigns different bit widths to different layer types — attention layers get more bits, MLP/expert layers compress harder. This preserves model coherence at aggressive compression levels where uniform quantization breaks down.
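As a rough illustration, a grading rule of this shape boils down to a per-layer bit-width function. The sketch below is hypothetical: the real selection logic and bit widths live in jang-tools, and the layer names and thresholds here are illustrative only.

```python
# Hypothetical sketch of a JANG-style grading rule; the actual logic
# lives in jang-tools, and these bit widths are illustrative.
ATTENTION_KEYS = ("q_proj", "k_proj", "v_proj", "o_proj")

def assign_bits(layer_path: str) -> int:
    """Pick a bit width for one weight tensor based on its layer type."""
    if any(key in layer_path for key in ATTENTION_KEYS):
        return 6  # attention projections keep more bits
    if "expert" in layer_path or "mlp" in layer_path:
        return 3  # MLP / MoE expert weights compress harder
    return 4  # everything else gets a middle-of-the-road width

print(assign_bits("model.layers.0.self_attn.q_proj"))         # 6
print(assign_bits("model.layers.0.mlp.experts.0.gate_proj"))  # 3
```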
See JANG documentation and scores at jangq.ai.
Comparative benchmarks and feedback welcome — please open a discussion.
**Base model:** Qwen/Qwen3.5-122B-A10B
## Start the MLX server

```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "bearzi/Qwen3.5-122B-A10B-JANG_4L"
```
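By default `mlx_lm.server` listens on `127.0.0.1:8080`, which matches the `baseUrl` in the Pi config above; the Python smoke test shown earlier works against it unchanged.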