---
base_model: Qwen/Qwen3.5-27B
library_name: mlx
pipeline_tag: text-generation
license: apache-2.0
tags:
- mlx
- jang
- jang-quantized
- JANG_2S
- mixed-precision
- apple-silicon
---

# Qwen3.5-27B-JANG_2S

JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq).

- **Quantization:** 2.84 bits per weight on average, profile JANG_2S, method mse-all, activation calibration
- **Profile:** JANG_2S
- **Format:** JANG v2 MLX safetensors
- **Compatible with:** vmlx, MLX Studio, oMLX (with JANG patch)

## Usage

### vmlx (recommended)

```bash
pip install 'vmlx[jang]'
vmlx serve bearzi/Qwen3.5-27B-JANG_2S
```

### Python

```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

model, tokenizer = load_jang_model("bearzi/Qwen3.5-27B-JANG_2S")

messages = [{"role": "user", "content": "Hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
```

## About JANG

JANG (Jang Adaptive N-bit Grading) assigns different bit widths to different layer types: attention layers get more bits, while MLP and expert layers are compressed more aggressively. This preserves model coherence at aggressive compression levels where uniform quantization breaks down. A toy sketch of the grading idea appears at the end of this card.

See the [JANG documentation](https://github.com/jjang-ai/jangq) and scores at [jangq.ai](https://jangq.ai).

Comparative benchmarks and feedback are welcome; please open a discussion.
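To make the per-layer-type grading concrete, here is a minimal, purely illustrative sketch. It is not the jang-tools implementation or API; the layer-name patterns and bit widths (4-bit attention, 2-bit MLP) are assumptions chosen only to show how a bits-per-weight average falls out of parameter counts.

```python
# Illustrative sketch only: NOT the jang-tools API. Layer-name patterns and
# bit widths below are hypothetical examples of per-layer-type grading.

ATTN_BITS = 4  # attention projections keep more precision (assumed value)
MLP_BITS = 2   # MLP / expert weights are compressed harder (assumed value)


def assign_bits(layer_name: str) -> int:
    """Pick a bit width from the layer name alone."""
    attention_keys = ("q_proj", "k_proj", "v_proj", "o_proj")
    if any(key in layer_name for key in attention_keys):
        return ATTN_BITS
    return MLP_BITS


def average_bits(layer_sizes: dict[str, int]) -> float:
    """Average bits per weight, weighting each layer by its parameter count."""
    total_bits = sum(assign_bits(name) * n for name, n in layer_sizes.items())
    total_params = sum(layer_sizes.values())
    return total_bits / total_params


if __name__ == "__main__":
    # Tiny fake layer map; a real model has hundreds of entries.
    layers = {
        "layers.0.self_attn.q_proj": 4096 * 4096,
        "layers.0.self_attn.o_proj": 4096 * 4096,
        "layers.0.mlp.gate_proj": 4096 * 11008,
        "layers.0.mlp.down_proj": 11008 * 4096,
    }
    print(f"average bits per weight: {average_bits(layers):.2f}")
```

Because MLP layers dominate the parameter count, the weighted average lands well below the attention bit width, which is how an aggressive profile like JANG_2S can sit near 2.84 bits overall while keeping attention at higher precision.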