Instructions for using OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download the GGUF from the Hub (cached locally) and load it.
llm = Llama.from_pretrained(
    repo_id="OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF",
    filename="Qwopus3.5-122B-A10B-abliterated-uncensored.Q4_K_M.gguf",
)

# Run a chat completion and print the assistant's reply.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response["choices"][0]["message"]["content"])
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
Use pre-built binary
```bash
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
Use Docker
```bash
docker model run hf.co/OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
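Once llama-server is up (via any of the install paths above), it can be queried like any OpenAI-compatible endpoint. A minimal sketch using Python's requests library, assuming the server is on its default port 8080 and that requests is installed:
```python
import requests

# llama-server speaks the OpenAI chat-completions protocol on port 8080 by default.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        # When serving a single model, llama-server treats the name as informational.
        "model": "Qwopus3.5-122B-A10B-abliterated-uncensored",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```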
- LM Studio
- Jan
- vLLM
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
- Ollama
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with Ollama:
```bash
ollama run hf.co/OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
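Beyond the interactive CLI, Ollama also serves a local REST API. A minimal sketch with Python's requests, assuming Ollama's default port 11434 and the model tag shown above:
```python
import requests

# Ollama's REST API listens on localhost:11434 by default.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,  # return a single JSON object instead of streamed chunks
    },
    timeout=600,
)
print(resp.json()["message"]["content"])
```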
- Unsloth Studio
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF to start chatting
```
- Pi
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with Pi:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M" }
      ]
    }
  }
}
```
Run Pi
```bash
# Start Pi in your project directory:
pi
```
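If Pi does not pick up the model, it can help to confirm the baseUrl above points at a live server. A minimal sketch, assuming llama-server's OpenAI-style GET /v1/models route on the default port:
```python
import requests

# Lists the models the local llama-server is exposing.
models = requests.get("http://localhost:8080/v1/models", timeout=10).json()
print([m["id"] for m in models["data"]])
```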
- Hermes Agent
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with Hermes Agent:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
Run Hermes
```bash
hermes
```
- Docker Model Runner
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with Docker Model Runner:
```bash
docker model run hf.co/OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
- Lemonade
How to use OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF:Q4_K_M
```
Run and chat with the model
```bash
lemonade run user.Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF-Q4_K_M
```
List all available models
```bash
lemonade list
```
Qwopus3.5-122B-A10B-abliterated-uncensored — GGUF
Overview
GGUF builds of OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored — the language-model portion only (vision tower and MTP head are not currently supported by llama.cpp for this architecture, so they were dropped for the GGUF builds; use the BF16 repo if you need vision/MTP).
The base model is the full BF16 abliterated and supervised-finetuned variant of Qwen/Qwen3.5-122B-A10B (Mixture of Experts, ~10B active / 122B total). See the parent repo for the full pipeline (refusal ablation → constrained-LoRA Opus reasoning SFT → unconstrained chosen-completion SFT).
Files
| File | Bits/weight | Size | Notes |
|---|---|---|---|
| `Qwopus3.5-122B-A10B-abliterated-uncensored.Q8_0.gguf` | ~8.5 | ~121 GB | Near-lossless. Best quality / largest. |
| `Qwopus3.5-122B-A10B-abliterated-uncensored.Q4_K_M.gguf` | ~4.6 | ~75 GB | Recommended balance. Q4_K_M keeps `output.weight` at Q6_K for better head precision. Quantized from Q8_0 with `--allow-requantize`. |
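To fetch one of these files programmatically instead of via an app's -hf flag, a minimal sketch using the huggingface_hub library (an assumption here; any downloader that handles large files works):
```python
from huggingface_hub import hf_hub_download

# Downloads the Q4_K_M quant into the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="OpenYourMind/Qwopus3.5-122B-A10B-abliterated-uncensored-GGUF",
    filename="Qwopus3.5-122B-A10B-abliterated-uncensored.Q4_K_M.gguf",
)
print(path)  # pass this to llama-cli / llama-server via -m
```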
Usage
Requires a recent llama.cpp build that supports `qwen3_5_moe` (Gated DeltaNet linear-attn + MoE). Tested against the unsloth/llama.cpp fork.
```bash
# CLI
./build/bin/llama-cli \
  -m Qwopus3.5-122B-A10B-abliterated-uncensored.Q4_K_M.gguf \
  -p "Write a Python function that finds prime factors of n." \
  -n 256 -c 8192
```
```bash
# Server (OpenAI-compatible)
./build/bin/llama-server \
  -m Qwopus3.5-122B-A10B-abliterated-uncensored.Q4_K_M.gguf \
  -c 16384 --host 0.0.0.0 --port 8080
```
The chat template is embedded in the GGUF (Qwen 3.5 `<|im_start|>` style, with `<think>...</think>` thinking blocks).
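For hand-rolled prompts (e.g. with llama-cli -p), a rough illustration of what that template renders for one user turn. This is an approximation for orientation only; llama.cpp applies the authoritative template from the GGUF metadata automatically:
```python
# Approximate Qwen 3.5 chat rendering (illustrative; the embedded GGUF template
# is authoritative and may add a system turn or other details).
prompt = (
    "<|im_start|>user\n"
    "What is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
# The model then typically opens a <think>...</think> block before its visible answer.
```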
Notes
- Language model only: vision tower + MTP head are not in these GGUFs.
- Context length: native 262,144 (use `-c` to fit available memory).
- License: Other (inherits from the Qwen3.5 base license).
- Architecture (per GGUF metadata): 48 blocks, hidden 3072, 256 experts, 8 used per token, expert FFN 1024, full-attention every 4 layers (linear-attn elsewhere).
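As a back-of-envelope sanity check on those numbers (my own arithmetic, assuming standard gate/up/down SwiGLU experts, not an official breakdown): the routed experts alone account for roughly 116B of the ~122B total, and the 8 experts used per token contribute ~3.6B of the ~10B active parameters, with attention, embeddings, and any shared components making up the rest.
```python
# Rough parameter count for the routed-expert weights, from the GGUF metadata above.
layers, experts, active, hidden, expert_ffn = 48, 256, 8, 3072, 1024

per_expert = 3 * hidden * expert_ffn          # gate + up + down projections, ~9.4M
total_expert = layers * experts * per_expert  # all experts across all layers
active_expert = layers * active * per_expert  # experts actually used per token

print(f"all experts:    {total_expert / 1e9:.1f}B")   # ~116.0B
print(f"active experts: {active_expert / 1e9:.2f}B")  # ~3.62B
```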
Support & Community
- Discord: https://discord.gg/rhUZY5GEZr
- Bitcoin Donations: `bc1qsvfduzj9fjs9fugpc52yver3f2g8fp7xjxecdv`
Thanks
- Jackrong — for the idea of Qwopus merges (Opus distillations on Qwen models).
- wangzhang — for the wonderful abliterix framework, which was customized to do this abliteration.
Disclaimer
Use is the responsibility of the user. Ensure your usage complies with applicable laws, platform rules, and deployment requirements.