Qwen3.5-122B-A10B-JANG
```bash
# Install MLX LM
uv tool install mlx-lm

# Start the server (mlx_lm.server listens on port 8080 by default)
mlx_lm.server --model "bearzi/Qwen3.5-122B-A10B-JANG_4L"
```
```bash
# Call the OpenAI-compatible server with curl
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bearzi/Qwen3.5-122B-A10B-JANG_4L",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
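The same endpoint can also be called from Python. A minimal sketch using the `openai` client package; the base URL assumes the default server port above, and the placeholder API key is arbitrary since the local server is not expected to check it:

```python
from openai import OpenAI

# Point the client at the local mlx_lm.server endpoint; the API key
# is a placeholder, as the local server does not authenticate.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="bearzi/Qwen3.5-122B-A10B-JANG_4L",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```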
This is a JANG adaptive mixed-precision MLX quantization, produced via vmlx / jang-tools.

```bash
pip install 'vmlx[jang]'
vmlx serve bearzi/Qwen3.5-122B-A10B-JANG_4L
```
```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

# Load the JANG-quantized weights and matching tokenizer
model, tokenizer = load_jang_model("bearzi/Qwen3.5-122B-A10B-JANG_4L")

# Build a chat prompt with the model's chat template
messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
```
JANG (Jang Adaptive N-bit Grading) assigns different bit widths to different layer types — attention layers get more bits, MLP/expert layers compress harder. This preserves model coherence at aggressive compression levels where uniform quantization breaks down.
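As a hypothetical sketch of the idea (the real grading logic lives in jang-tools; the layer-name patterns and bit widths here are illustrative assumptions, not the shipped recipe):

```python
# Illustrative only: JANG-style per-layer-type bit grading.
# Patterns and bit widths are assumptions, not jang-tools' actual recipe.
def assign_bits(layer_path: str) -> int:
    """Pick a quantization bit width from the layer's role."""
    if "self_attn" in layer_path:                       # attention: keep more bits
        return 6
    if "mlp" in layer_path or "experts" in layer_path:  # MLP/experts: compress harder
        return 3
    return 4                                            # default middle ground

for path in [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.mlp.experts.7.up_proj",
    "model.embed_tokens",
]:
    print(f"{path} -> {assign_bits(path)} bits")
```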
See JANG documentation and scores at jangq.ai.
Comparative benchmarks and feedback welcome — please open a discussion.
Quantized from base model: Qwen/Qwen3.5-122B-A10B
Generate or start a chat session:

```bash
# Interactive chat REPL
mlx_lm.chat --model "bearzi/Qwen3.5-122B-A10B-JANG_4L"
```