How to use JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ with MLX:
```python
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ")
config = load_config("JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```
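If you just want a one-off generation without writing Python, mlx-vlm also ships a command-line generator. A minimal sketch, assuming the flag names of recent mlx-vlm releases (they may differ in older versions):

```bash
# One-off generation from the command line with mlx-vlm's CLI
python -m mlx_vlm.generate \
  --model JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ \
  --prompt "Describe this image." \
  --image http://images.cocodataset.org/val2017/000000039769.jpg \
  --max-tokens 256
```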
How to use JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ with Pi:
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ"
```
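Before wiring up Pi, you can sanity-check the server with a plain OpenAI-style request; a minimal sketch, assuming `mlx_lm.server`'s default port of 8080:

```bash
# Smoke test against the OpenAI-compatible chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ",
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 64
  }'
```

If this returns a JSON completion, the `baseUrl` in the Pi config below will work as-is.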
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ" }
      ]
    }
  }
}
```
```bash
# Start Pi in your project directory:
pi
```