# Qwen3.5-9B Terminal Merge

An experimental layer-wise merge of 16 Qwen3.5-9B variants aimed at terminal and CLI-style command generation.

## Status Update

I need to correct the original benchmark claim on this repo.

The first version of this model card reported results from an earlier evaluation setup that I no longer trust. I reran the comparison on the current 60-task Modal Harbor benchmark on March 17, 2026, and this v1 merge did not beat the current base Qwen/Qwen3.5-9B result.

I should not have presented the earlier result as a reliable improvement over base. Sorry about that.

For now, this repo should be treated as an experimental merge release rather than a confirmed upgrade over base Qwen.

## Current Internal Benchmark Snapshot

Fresh 60-task Modal Harbor run:

| Model | Tasks Passed | Pass Rate |
|---|---|---|
| Qwen/Qwen3.5-9B | 53/60 | 88.3% |
| EganAI/qwen3.5-9b-terminal-merge | 50/60 | 83.3% |

The benchmark covers file operations, text processing, git workflows, networking, Python scripting, and system administration tasks in sandboxed execution environments.
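For reference, the pass rates in the table above are plain task-level ratios rounded to one decimal. A minimal sketch (model names and counts are taken from the table; the harness itself is not shown):

```python
def pass_rate(passed: int, total: int) -> float:
    """Percentage of sandboxed tasks passed, rounded to one decimal place."""
    return round(100 * passed / total, 1)

# Counts from the benchmark table above
results = {
    "Qwen/Qwen3.5-9B": (53, 60),
    "EganAI/qwen3.5-9b-terminal-merge": (50, 60),
}

for model, (passed, total) in results.items():
    print(f"{model}: {passed}/{total} = {pass_rate(passed, total)}%")
```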

## Download

| Format | Size | Link |
|---|---|---|
| Safetensors (bf16) | ~18 GB | Download |
| GGUF F16 | 17.9 GB | Download |
| GGUF Q8_0 | 9.53 GB | Download |

## Model Details

- **Architecture:** Qwen3.5 (hybrid linear + full attention)
- **Parameters:** 9B total
- **Context Length:** 262,144 tokens
- **Precision:** bfloat16
- **Layers:** 32 (8 full attention + 24 linear attention)
- **Merge Method:** Layer-wise linear merge with optimized per-layer weights

## Source Models

This model combines optimized layer-wise weights from 16 Qwen3.5-9B variants spanning reasoning, instruction-following, and general capability specializations:

| Category | Models |
|---|---|
| Core | Qwen/Qwen3.5-9B, unsloth/Qwen3.5-9B |
| Abliterated | darkc0de/Qwen3.5-9B-heretic, lukey03/Qwen3.5-9B-abliterated, llmfan46/Qwen3.5-9B-ultimate-irrefusable-heretic, llmfan46/Qwen3.5-9B-ultra-heretic, jwest33/qwen3.5-9b-null-space-abliterated, trohrbaugh/Qwen3.5-9B-heretic-v2, osirisbrain/OsirisCortex-v6 |
| Reasoning | DavidAU/Qwen3.5-9B-Claude-4.6-HighIQ-THINKING, DavidAU/Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT, crownelius/Crow-9B-Opus-4.6-Distill-Heretic_Qwen3.5 |
| Specialized | alecccdd/Qwen3.5-9B-paraphrasing-orpo, lugman-madhiai/Qwen3.5-9B-MHS-Interleaved, Hastagaras/Qwen3.5-9B-GLM-Wannabe, zenlm/zen4 |
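The layer-wise linear merge itself is conceptually simple: each parameter in a given layer is a convex combination of the corresponding parameters from the source models, with one weight vector per layer. A minimal sketch using plain Python lists in place of real tensors (the function name and toy values are illustrative, not the actual merge script):

```python
def merge_layer(params, weights):
    """Linearly combine one layer's parameter vectors from several source models.

    params:  list of per-model parameter vectors (all the same length)
    weights: one merge coefficient per source model, summing to 1
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "merge weights must sum to 1"
    return [
        sum(w * model_param[i] for w, model_param in zip(weights, params))
        for i in range(len(params[0]))
    ]

# Toy example: the same two-element layer from three source models,
# merged with weights 0.5 / 0.25 / 0.25
layer_from_models = [
    [1.0, 2.0],  # model A
    [3.0, 4.0],  # model B
    [5.0, 6.0],  # model C
]
merged = merge_layer(layer_from_models, [0.5, 0.25, 0.25])
print(merged)  # [2.5, 3.5]
```

In the real merge, this combination is applied per parameter tensor across all 16 source models, with a separate weight vector for each layer.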

## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EganAI/qwen3.5-9b-terminal-merge"

# Load the tokenizer and bf16 weights, sharding across available devices
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

# Build a prompt with the chat template, then generate
messages = [
    {"role": "user", "content": "Find all Python files larger than 1MB and sort by size descending"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### GGUF (llama.cpp)

```bash
# Download Q8_0 (recommended balance of quality/size)
wget https://huggingface.co/EganAI/qwen3.5-9b-terminal-merge/resolve/main/qwen3.5-9b-terminal-merge-q8_0.gguf

# Run with llama.cpp
./llama-cli -m qwen3.5-9b-terminal-merge-q8_0.gguf \
  -p "Find all files modified in the last 24 hours and sort by size:" \
  -n 512 --temp 0.7
```

### GGUF (Python)

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="EganAI/qwen3.5-9b-terminal-merge",
    filename="qwen3.5-9b-terminal-merge-q8_0.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

output = llm(
    "List all running Docker containers and their memory usage:",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

### vLLM Serving

```bash
vllm serve EganAI/qwen3.5-9b-terminal-merge \
    --language-model-only \
    --dtype bfloat16 \
    --max-model-len 8192
```

**Note:** Use `--language-model-only` since this is a multimodal architecture served for text-only inference.
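Once the server is up, vLLM exposes an OpenAI-compatible API (by default at `http://localhost:8000/v1`). A minimal request sketch using only the standard library; the request is built but not sent here, and the prompt text is just an example:

```python
import json
from urllib import request

# Chat-completions payload for vLLM's OpenAI-compatible endpoint
payload = {
    "model": "EganAI/qwen3.5-9b-terminal-merge",
    "messages": [
        {"role": "user", "content": "Show disk usage per directory, largest first"}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

req = request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# With the server running, send it and read the reply:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(json.dumps(payload, indent=2))
```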

## Training Details

The per-layer merge weights were optimized by evaluating candidates on a suite of 60 terminal tasks using vLLM inference in sandboxed environments. The optimization searched across layer-group weight distributions to find a strong blend of the 16 source models.
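The search loop can be sketched as a simple random search over normalized per-group weight vectors, scored by a benchmark objective. Everything here is a stand-in: the group count, candidate budget, and especially `evaluate` (the real runs scored candidates on the sandboxed 60-task suite, not a toy function):

```python
import random

N_MODELS = 16      # source models being blended
LAYER_GROUPS = 4   # e.g. blocks of layers sharing one weight vector (assumed)

def random_simplex(n, rng):
    """Random non-negative weights that sum to 1."""
    raw = [rng.random() for _ in range(n)]
    total = sum(raw)
    return [x / total for x in raw]

def evaluate(candidate):
    """Toy stand-in objective; the real one runs the 60-task benchmark."""
    return -sum(abs(w - 1 / N_MODELS) for group in candidate for w in group)

rng = random.Random(0)
best, best_score = None, float("-inf")
for _ in range(200):
    candidate = [random_simplex(N_MODELS, rng) for _ in range(LAYER_GROUPS)]
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best score: {best_score:.4f}")
```

More sample-efficient optimizers (e.g. evolutionary or Bayesian search) follow the same propose-evaluate-keep-best shape; only the proposal step changes.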

That earlier optimization run produced a candidate worth publishing as an experiment, but the original benchmark comparison against base should be considered superseded by the newer reevaluation above.

## Limitations

- This v1 release does not currently show a verified win over base Qwen/Qwen3.5-9B on my latest internal benchmark run
- Optimized specifically for terminal/CLI tasks; general-purpose performance may vary
- Requires the `--language-model-only` flag when serving with vLLM due to the multimodal architecture
- Visual capabilities are inherited from the base model but were not part of the optimization target