---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3.5-397B-A17B/blob/main/LICENSE
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen3.5-397B-A17B
tags:
- abliterated
- uncensored
- GGUF
extra_gated_prompt: >-
  **Usage Warnings**

  - **Risk of Sensitive or Controversial Outputs**: This model's safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.

  - **Not Suitable for All Audiences**: Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring high security.

  - **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.

  - **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.

  - **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.

  - **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.
---

# huihui-ai/Huihui-Qwen3.5-397B-A17B-abliterated-GGUF

This is an uncensored version of [Qwen/Qwen3.5-397B-A17B](https://huggingface.co/Qwen/Qwen3.5-397B-A17B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it). This is a crude, proof-of-concept implementation to remove refusals from an LLM without using TransformerLens.
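The core idea behind abliteration is to project the model's activations off a "refusal direction" so that the component responsible for refusals is removed. The sketch below is purely illustrative (it is not the linked repository's code and assumes the refusal direction has already been estimated, e.g. from contrastive prompt pairs):

```python
import numpy as np

def ablate_direction(activations: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Remove the component of each activation along the refusal direction:
    a' = a - (a . r_hat) * r_hat, where r_hat is the unit refusal direction."""
    r_hat = refusal_dir / np.linalg.norm(refusal_dir)
    # Project each row onto r_hat, then subtract that component.
    return activations - np.outer(activations @ r_hat, r_hat)

# Toy example: two 2-d activations, refusal direction along the second axis.
acts = np.array([[1.0, 2.0], [3.0, 4.0]])
r = np.array([0.0, 1.0])
out = ablate_direction(acts, r)
# The component along r is now zero; the orthogonal component is untouched.
```

In practice the same projection is baked into the model's weight matrices rather than applied at inference time, which is why the result can be exported as a standalone GGUF checkpoint.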
## Download and merge

Use the [llama.cpp](https://github.com/ggml-org/llama.cpp) split tool to merge the model shards (`llama-gguf-split` needs to be compiled first):

```
huggingface-cli download huihui-ai/Huihui-Qwen3.5-397B-A17B-abliterated-GGUF --local-dir ./huihui-ai/Huihui-Qwen3.5-397B-A17B-abliterated-GGUF --token xxx
llama-gguf-split --merge huihui-ai/Huihui-Qwen3.5-397B-A17B-abliterated-GGUF/Q3_K-GGUF/Q3_K-GGUF-00001-of-00021.gguf huihui-ai/Huihui-Qwen3.5-397B-A17B-abliterated-GGUF/ggml-model-Q3_K.gguf
```

## chat_template-vl-think.jinja

We have added a new file named [chat_template-vl-think.jinja](https://huggingface.co/huihui-ai/Huihui-Qwen3.5-397B-A17B-abliterated-GGUF/blob/main/chat_template-vl-think.jinja), which comes from the path `huihui-ai/Huihui-Qwen3-VL-30B-A3B-Thinking-abliterated`. This template file supports think mode and is more compatible with Tool Calling in [llama-server](https://github.com/ggml-org/llama.cpp/releases), especially when [opencode](https://github.com/anomalyco/opencode/releases) and [oh-my-opencode](https://github.com/code-yeongyu/oh-my-opencode/releases) are involved. This helps prevent 500 error responses from llama-server.

```
llama-server -m huihui-ai/Huihui-Qwen3.5-397B-A17B-abliterated-GGUF/ggml-model-Q3_K.gguf --port 8080 --host 0.0.0.0 -c 262144 --chat-template-file huihui-ai/Huihui-Qwen3.5-397B-A17B-abliterated-GGUF/chat_template-vl-think.jinja
```

The following is the relevant `opencode.json` configuration for use in a Docker environment.
```
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "llama-server": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "llama-server",
      "options": {
        "baseURL": "http://host.docker.internal:8080/v1"
      },
      "models": {
        "Huihui-Qwen3.5-397B-A17B-abliterated-Q3_K": {
          "name": "Huihui-Qwen3.5-397B-A17B-abliterated-Q3_K",
          "tools": true,
          "reasoning": true,
          "options": {
            "num_ctx": 262144
          }
        }
      }
    }
  }
}
```

### Usage Warnings

- **Risk of Sensitive or Controversial Outputs**: This model's safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
- **Not Suitable for All Audiences**: Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring high security.
- **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
- **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.

### Donation

If you like it, please click "like" and follow us for more updates. You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
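Since llama-server exposes an OpenAI-compatible API, you can verify the setup works outside of opencode with a plain HTTP request. The sketch below builds a chat-completion request against the server started above (the model name must match the key in `opencode.json`; the actual network call is commented out so the snippet stands alone):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-style chat completion call."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return base_url.rstrip("/") + "/chat/completions", json.dumps(body).encode()

url, data = build_chat_request(
    "http://localhost:8080/v1",
    "Huihui-Qwen3.5-397B-A17B-abliterated-Q3_K",
    "Hello!",
)

# To actually send it (requires the server from the section above to be running):
# req = urllib.request.Request(url, data=data,
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Inside Docker, replace `localhost` with `host.docker.internal` as in the `baseURL` above.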
##### Your donation helps us continue our development and improvement; even a cup of coffee makes a difference.

- bitcoin (BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
- Support our work on [Ko-fi](https://ko-fi.com/huihuiai)!