caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit

Tags: Text Generation · MLX · Safetensors · qwen3_5 · hlwq · quantization · apple-silicon · mac · qwen3.5 · llm-compression · conversational · 4-bit precision

Instructions for using caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit with libraries, inference providers, notebooks, and local apps. Pick the option below that matches your setup.

  • Libraries
  • MLX

    How to use caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit with MLX:

    # Make sure mlx-lm is installed
    # pip install --upgrade mlx-lm
    
    # Generate text with mlx-lm
    from mlx_lm import load, generate
    
    model, tokenizer = load("caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit")
    
    prompt = "Write a story about Einstein"
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
    
    text = generate(model, tokenizer, prompt=prompt, verbose=True)
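
    For incremental, token-by-token output, mlx-lm also provides stream_generate. A minimal sketch, assuming a recent mlx-lm where stream_generate yields response chunks with a .text field (older releases yielded plain strings):

    from mlx_lm import load, stream_generate

    model, tokenizer = load("caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit")

    messages = [{"role": "user", "content": "Write a story about Einstein"}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

    # Print tokens as they are generated instead of waiting for the full reply
    for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
        print(chunk.text, end="", flush=True)
    print()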
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • LM Studio
  • Pi

    How to use caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit with Pi:

    Start the MLX server
    # Install MLX LM:
    uv tool install mlx-lm
    # Start a local OpenAI-compatible server:
    mlx_lm.server --model "caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit"
    Configure the model in Pi
    # Install Pi:
    npm install -g @mariozechner/pi-coding-agent
    # Add to ~/.pi/agent/models.json:
    {
      "providers": {
        "mlx-lm": {
          "baseUrl": "http://localhost:8080/v1",
          "api": "openai-completions",
          "apiKey": "none",
          "models": [
            {
              "id": "caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit"
            }
          ]
        }
      }
    }
    Run Pi
    # Start Pi in your project directory:
    pi
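
    Before wiring up Pi, it can help to smoke-test the local server directly. A minimal sketch using the OpenAI Python SDK (assumptions: the openai package is installed via pip install openai, and mlx_lm.server is running on its default port 8080):

    from openai import OpenAI

    # Any non-empty api_key works; the local mlx_lm server does not check it
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    response = client.chat.completions.create(
        model="caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)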
  • Hermes Agent

    How to use caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit with Hermes Agent:

    Start the MLX server
    # Install MLX LM:
    uv tool install mlx-lm
    # Start a local OpenAI-compatible server:
    mlx_lm.server --model "caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit"
    Configure Hermes
    # Install Hermes:
    curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
    hermes setup
    # Point Hermes at the local server:
    hermes config set model.provider custom
    hermes config set model.base_url http://127.0.0.1:8080/v1
    hermes config set model.default caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit
    Run Hermes
    hermes
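
    Before launching Hermes, you can check that the server is reachable at the configured base URL. A small sketch using only the Python standard library, assuming your mlx-lm version exposes a GET /v1/models endpoint (older releases may not):

    import json
    import urllib.request

    # Should list the loaded model if the server is up on the configured port
    with urllib.request.urlopen("http://127.0.0.1:8080/v1/models") as resp:
        print(json.dumps(json.load(resp), indent=2))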
  • MLX LM

    How to use caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit with MLX LM:

    Generate or start a chat session
    # Install MLX LM
    uv tool install mlx-lm
    # Interactive chat REPL
    mlx_lm.chat --model "caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit"
    Run an OpenAI-compatible server
    # Install MLX LM
    uv tool install mlx-lm
    # Start the server
    mlx_lm.server --model "caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit"
    # Calling the OpenAI-compatible server with curl
    curl -X POST "http://localhost:8080/v1/chat/completions" \
       -H "Content-Type: application/json" \
       --data '{
         "model": "caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit",
         "messages": [
           {"role": "user", "content": "Hello"}
         ]
       }'
Qwen3.5-9B-HLWQ-MLX-4bit
4.78 GB
  • 1 contributor
History: 14 commits
Latest commit: caiovicentino1 · HLWQ rebrand: title, tags, notice, self-links · 5ae9734 (verified) · about 1 month ago
  • .gitattributes
    1.57 kB
    PolarQuant MLX Q4: 19.7 tok/s, 4.8 GB, PPL 6.90 on Mac M4 about 2 months ago
  • README.md
    5.4 kB
    HLWQ rebrand: title, tags, notice, self-links about 1 month ago
  • chat_template.jinja
    7.76 kB
    Add chat_template.jinja about 2 months ago
  • config.json
    2.98 kB
    Add config.json about 2 months ago
  • model.safetensors
    4.76 GB
    PolarQuant MLX Q4: 19.7 tok/s, 4.8 GB, PPL 6.90 on Mac M4 about 2 months ago
  • model.safetensors.index.json
    81.2 kB
    PolarQuant MLX Q4: 19.7 tok/s, 4.8 GB, PPL 6.90 on Mac M4 about 2 months ago
  • ppl_comparison.png
    58.3 kB
    Add ppl_comparison.png about 2 months ago
  • speed_vs_vram.png
    50.9 kB
    Add speed_vs_vram.png about 2 months ago
  • tokenizer.json
    20 MB
    PolarQuant MLX Q4: 19.7 tok/s, 4.8 GB, PPL 6.90 on Mac M4 about 2 months ago
  • tokenizer_config.json
    1.14 kB
    fix: add tokenizer_class Qwen2TokenizerFast about 1 month ago
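
To fetch the files listed above for offline use (mlx_lm's load() already downloads and caches them automatically), huggingface_hub's snapshot_download works. A sketch, assuming the huggingface_hub package is installed:

    from huggingface_hub import snapshot_download

    # Downloads the full repo (about 4.78 GB, dominated by model.safetensors)
    # into the local Hugging Face cache and returns the local path
    local_dir = snapshot_download(repo_id="caiovicentino1/Qwen3.5-9B-HLWQ-MLX-4bit")
    print(local_dir)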