Tags: Image-Text-to-Text · Transformers · Safetensors · MLX · English · Chinese · qwen3_5 · fine-tune · creative writing · fiction writing · plot generation · sub-plot generation · story generation · scene continuation · storytelling · science fiction · romance · all genres · vivid writing · fiction · roleplaying · bfloat16 · all use cases · unsloth · heretic · uncensored · abliterated · mlx-my-repo · conversational · 8-bit precision
# enet45/Qwen3.5-9B-Claude-4.6-OS-HERETIC-UNCENSORED-INSTRUCT-mlx-8Bit

This model was converted to MLX format from DavidAU/Qwen3.5-9B-Claude-4.6-OS-HERETIC-UNCENSORED-INSTRUCT using mlx-lm version 0.31.2.
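For reference, a conversion like this can be reproduced with the `mlx_lm.convert` entry point that ships with mlx-lm. This is a sketch of the likely invocation, not the exact command used for this repo; the flag values (8-bit quantization, upload target) are inferred from the model name:

```shell
# Download the source weights from the Hub, quantize to 8-bit,
# and write the converted MLX model to ./mlx_model (the default output path).
mlx_lm.convert \
  --hf-path DavidAU/Qwen3.5-9B-Claude-4.6-OS-HERETIC-UNCENSORED-INSTRUCT \
  -q --q-bits 8
```

Adding `--upload-repo <user>/<repo-name>` pushes the converted model back to the Hugging Face Hub in one step.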
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("enet45/Qwen3.5-9B-Claude-4.6-OS-HERETIC-UNCENSORED-INSTRUCT-mlx-8Bit")

prompt = "hello"

# Wrap the prompt with the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
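The same model can also be run without any Python scripting via the `mlx_lm.generate` command-line tool installed alongside mlx-lm. The prompt and token budget below are illustrative placeholders, not values from this repo:

```shell
# One-shot generation from the command line; the model is fetched
# from the Hub on first use and cached locally.
mlx_lm.generate \
  --model enet45/Qwen3.5-9B-Claude-4.6-OS-HERETIC-UNCENSORED-INSTRUCT-mlx-8Bit \
  --prompt "Write the opening scene of a noir short story." \
  --max-tokens 256
```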
- Downloads last month: 798
- Model size: 9B params
- Tensor types: BF16 · U32
- Quantization: 8-bit
## Model tree for enet45/Qwen3.5-9B-Claude-4.6-OS-HERETIC-UNCENSORED-INSTRUCT-mlx-8Bit

- Base model: Qwen/Qwen3.5-9B-Base
- Finetuned from base: trohrbaugh/Qwen3.5-9B-heretic-v2