0% Smaller, 29.3% Better
Qwen3.5-4B pruned by 0% and retrained for general tasks through Experiential Plasticity.
15.64 → 11.05 perplexity · 1 cycle
Every claim on this card can be checked against the chain of custody.
Trust: self-attested · 1 benchmark · 1 device tested
ForgeAlloy chain of custody · Download the alloy · Merkle-chained
Qwen3.5-4B with cryptographic provenance via the ForgeAlloy chain of custody.
Benchmarks
| Benchmark | Result | Status |
|---|---|---|
| Perplexity (general) | 11.05 | Self-reported |
What Changed (Base → Forged)
| | Base | Forged | Delta |
|---|---|---|---|
| Perplexity (general) | 15.64 | 11.05 | -29.3% ✅ |
| Pruning | None | 0% heads (entropy) | -0% params ✅ |
| Training | General | general, 500 steps | LR 5e-05, 1 cycle |
| Pipeline | — | prune → train | 1 cycle |
Runs On
| Device | Format | Size | Status |
|---|---|---|---|
| NVIDIA GeForce RTX 5090 | fp16 | — | Verified |
| MacBook Pro 32GB | fp16 | 8.0GB | Expected |
| MacBook Air 16GB | Q8_0 | ~4.0GB | Expected |
| MacBook Air 8GB | Q4_K_M | ~2.5GB | Expected |
| iPhone / Android | Q4_K_M | ~2.5GB | Expected |
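The size column above follows from simple bytes-per-parameter arithmetic. A rough sketch for a ~4B-parameter model; the per-format byte counts are approximations that include quantization overhead (scales and block metadata), not exact figures for this checkpoint:

```python
# Rough on-disk size estimates for a ~4B-parameter model.
PARAMS = 4e9
BYTES_PER_PARAM = {
    "fp16": 2.0,     # 16 bits per weight
    "Q8_0": 1.06,    # ~8.5 bits per weight incl. per-block scales
    "Q4_K_M": 0.59,  # ~4.7 bits per weight on average
}

def size_gb(fmt: str) -> float:
    """Approximate serialized size in GB for a given format."""
    return PARAMS * BYTES_PER_PARAM[fmt] / 1e9

for fmt in BYTES_PER_PARAM:
    print(f"{fmt}: ~{size_gb(fmt):.1f} GB")
```

The fp16 figure lands exactly on the 8.0GB in the table; the quantized estimates land near the ~4.0GB and ~2.5GB entries.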
Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "continuum-ai/qwen3.5-4b-general-forged",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("continuum-ai/qwen3.5-4b-general-forged")

inputs = tokenizer("def merge_sort(arr):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Methodology
Produced via entropy-based head pruning. Full methodology, ablations, and per-stage rationale are in the methodology paper and the companion MODEL_METHODOLOGY.md in this repository. The pipeline ran as prune → train over 1 cycle on an NVIDIA GeForce RTX 5090.
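The entropy criterion can be sketched in a few lines: a head whose attention distribution is near-uniform (high entropy) spreads probability mass diffusely and is a common pruning candidate. This is a minimal illustration of that scoring idea, not the exact Forge implementation; the function names are illustrative:

```python
import numpy as np

def head_entropy(attn):
    """Mean attention entropy per head; attn has shape (heads, queries, keys)."""
    p = np.clip(attn, 1e-12, 1.0)  # guard log(0)
    return -(p * np.log(p)).sum(axis=-1).mean(axis=-1)

def prune_candidates(attn, k):
    """Indices of the k highest-entropy (most diffuse) heads."""
    return np.argsort(head_entropy(attn))[-k:]
```

A sharply focused head (near one-hot attention) scores near zero, while a uniform head over n keys scores log(n), so sorting by entropy ranks heads from focused to diffuse.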
Chain of Custody
Scan the QR or verify online. Download the alloy file to verify independently.
| What | Proof |
|---|---|
| Model weights | sha256:6bbc07d6861e25fdb879c2a9b9896b0bc... |
| Code that ran | sha256:42fb027d203dec8fe... |
| Forged on | NVIDIA GeForce RTX 5090, 2026-04-06T10:44:10-0500 |
| Trust level | self-attested |
| Spec | ForgeAlloy — Rust/Python/TypeScript |
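The weight hash in the table can be checked locally by streaming the downloaded file through SHA-256. A minimal sketch; the table shows a truncated prefix, so compare against the full digest recorded in the alloy file:

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the card's sha256:<hex> form."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return "sha256:" + h.hexdigest()

# Compare against the full digest from the alloy file, e.g.:
# assert sha256_file("model.safetensors") == expected_digest
```

Chunked reads keep memory flat even for multi-gigabyte weight files.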
Make Your Own
Forged with Continuum — a distributed AI world that runs on your hardware.
The Factory configurator lets you design and forge custom models visually — context extension, pruning, LoRA, quantization, vision/audio modalities. Pick your target devices, the system figures out what fits.
GitHub · All Models · Forge-Alloy
License
apache-2.0