Cipher SFT 2.5 — Real (v3) 🦑

"The Code Kraken sees what others miss."

⚠ For creative-brief → full site generation, use the Kraken RAG pipeline instead

This checkpoint exists as an experiment in local creative-web fine-tuning. In practice, the production path for "brief → Awwwards-quality single-file HTML" is the kraken_rag package, which retrieves over a corpus of 96 real Site-of-the-Day winners and prompts a frontier model (Claude Opus 4.7 via the Claude Code CLI, or GPT-5.4 via the SDK). See data/awwwards/distilled/CRITICAL-ASSESSMENT.md in the training repo; the TL;DR written on 2026-04-15 said this explicitly, and subsequent generation tests with this checkpoint confirmed the ceiling.

This checkpoint still has value as:

  • A local offline scaffolder for motion-stack boilerplate (runs on one A100 via transformers.generate).
  • A research artifact for the Kin training pipeline (v1 SFT → SimPO → v3 real-data SFT).
  • A reward-model candidate for future RL work against rendered-page critiques.

It is not the creative brain. For creative output, start at kraken_rag/README.md.


What it actually is

Cipher-SFT25-Real is a 31B-parameter Gemma-4 creative-web generator, fine-tuned on real, scraped source code from three canonical creative-coding sources (the official Three.js examples, the Motion One examples, and the freefrontend.com GSAP gallery). It emits complete, single-file HTML documents with no Tailwind CDN and the correct motion-stack idioms.

This is the v3 breakthrough checkpoint in the Cipher series: the first Kin generator trained end-to-end on authentic, source-backed creative-coding repositories after v1/v2 suffered template collapse. Whether that is enough to reach the creative-quality bar the project aimed at is addressed in the disclaimer above.

  • 🧠 Base: Auroraventures/cipher-simpo-merged (Gemma-4-31B-IT + SimPO anti-slop preference pairs)
  • 🔬 Fine-tune: SFT on cipher-real-v1-sft.jsonl via dataset config real-scraped-v1 (741 records, 5.66 MB), built from three source-backed code corpora, with Aura shells kept only as reference metadata
  • 🎨 Optimized for: Awwwards Site-of-the-Day motion stacks — Three.js, GSAP, ScrollTrigger, SplitText, Lenis, vanilla JS
  • 🚫 Slop-suppressed: No Tailwind CDN, no lenis.stop() misuse, no copy-paste boilerplate
  • ⚙️ Method: Unsloth QLoRA (r=64, α=128, rsLoRA), merged to BF16

Quickstart

Transformers (GPU, ≥48 GB VRAM recommended)

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Auroraventures/cipher-sft25-real-merged"
tok = AutoTokenizer.from_pretrained(model_id)
# Merged BF16 weights; device_map="auto" shards across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Cipher, the Code Kraken. Emit complete single-file HTML documents, no markdown fences."},
    {"role": "user", "content": "Build an Awwwards-quality portfolio hero with Three.js particle waves reacting to mouse."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature/top_p to take effect.
out = model.generate(
    **inputs, max_new_tokens=4096, do_sample=True,
    temperature=0.7, top_p=0.9, repetition_penalty=1.05,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))

HF Inference Endpoint (recommended for quick trials)

  1. Click Deploy → Inference Endpoints (dedicated) on this model page.
  2. Pick AWS us-east-1, Nvidia L4 ×1 (≈ $0.80/h), min replicas 0 for auto-idle.
  3. Once Running, paste the endpoint URL into scripts/generate_via_hf_endpoint.py and generate the 3 canonical sites, or query the endpoint directly as sketched below.
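
If you prefer to skip the helper script, here is a minimal client sketch using huggingface_hub's InferenceClient. The endpoint URL is a placeholder, and the raw prompt assumes Gemma-4 keeps the <start_of_turn>/<end_of_turn> markers of earlier Gemma releases.

from huggingface_hub import InferenceClient

# Placeholder URL: copy the real one from your endpoint's page.
client = InferenceClient("https://YOUR-ENDPOINT.us-east-1.aws.endpoints.huggingface.cloud")

# Raw Gemma-style turn markers (assumed to carry over to Gemma-4).
prompt = (
    "<start_of_turn>user\n"
    "Build an Awwwards-quality portfolio hero with Three.js particle waves reacting to mouse.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
print(client.text_generation(prompt, max_new_tokens=4096, temperature=0.7, top_p=0.9))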

llama.cpp / Ollama

Use the companion GGUF: Auroraventures/cipher-sft25-real-merged-Q4_K_M-GGUF (≈ 18 GB). Apply the raw Gemma-4 chat template; Ollama's auto-template can misfire on merged SFT checkpoints.
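
For a scripted local test, a llama-cpp-python sketch under the same template assumption (the GGUF filename below is illustrative):

from llama_cpp import Llama

# Illustrative path: point at the downloaded Q4_K_M GGUF.
llm = Llama(model_path="cipher-sft25-real-merged-Q4_K_M.gguf", n_ctx=8192)

prompt = (
    "<start_of_turn>user\n"
    "Build a portfolio with Lenis smooth-scroll + GSAP ScrollTrigger reveals, dark elegant theme<end_of_turn>\n"
    "<start_of_turn>model\n"
)
# Stop on the end-of-turn marker so generation ends with the reply.
out = llm(prompt, max_tokens=4096, temperature=0.7, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])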


Training Data — Real, Not Synthetic

Dataset: Auroraventures/cipher-awwwards-sft25, file cipher-real-v1-sft.jsonl (5.66 MB, 741 records).

Source                        Records          Purpose
mrdoob/three.js/examples      578              Ground-truth Three.js patterns (raycasting, shaders, particles, postprocessing)
motiondivision/motion/dev     148              Framer Motion idioms transplanted to vanilla DOM
freefrontend.com GSAP corpus  63               ScrollTrigger, SplitText, SVG morph, timeline chains
aura.build shells             reference-only   Modern CSS scaffolding, typography, and palette metadata used for prompting and analysis, not assistant completions

Every record is a Gemma-4 chat-format triple (system, user, assistant) where:

  • system — Cipher's output contract (no Tailwind CDN, Lenis + GSAP + ScrollTrigger + SplitText, opacity-safe cascade)
  • user — a naturalistic request keyed to the source pattern
  • assistant — the actual HTML / JS / TSX lifted from the three source-backed code corpora above

Aura shells are deliberately excluded from assistant completions because they do not provide trustworthy full-code artifacts; they remain reference metadata only.
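
For illustration, a hypothetical record in this shape (the messages/meta field names and all contents are assumptions, not an actual dataset row):

# Hypothetical record: field names and contents are illustrative only.
record = {
    "messages": [
        {"role": "system", "content": "You are Cipher, the Code Kraken. No Tailwind CDN; Lenis + GSAP + ScrollTrigger + SplitText; opacity-safe cascade."},
        {"role": "user", "content": "Make a hero where the headline splits per character and staggers in on scroll."},
        {"role": "assistant", "content": "<!DOCTYPE html>\n<html>...</html>"},
    ],
    "meta": {"source": "freefrontend.com GSAP corpus"},  # Aura shells appear only in metadata like this
}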


Training

Hyperparameter   Value
Base             unsloth/gemma-4-31b-it-unsloth-bnb-4bit → SimPO → this checkpoint
Adapter          LoRA r=64, α=128, rsLoRA enabled
Context          8192 tokens
Precision        BF16 merged weights
Optimizer        paged_adamw_8bit
LR / schedule    2e-5, cosine w/ warmup (0.03)
Epochs           2
Batch            2 × grad_accum 8
Hardware         1 × A100 80 GB (Colab Pro+)
Final loss       0.29 (healthy, no memorization collapse)
Training time    ~1 h 45 m wall time

The 0.29 final loss sits just below the healthy 0.3–0.5 band: stable learning without the 0.01 collapse that plagued earlier synthetic-data runs.
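
A minimal sketch of the recipe implied by the table above, assuming the Unsloth + TRL stack; the target modules and dataset formatting are not listed on this card and are assumptions:

from datasets import load_dataset
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/gemma-4-31b-it-unsloth-bnb-4bit",  # Base row above
    max_seq_length=8192,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model, r=64, lora_alpha=128, use_rslora=True,
    # Assumed target modules; the card does not list them.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
dataset = load_dataset("json", data_files="cipher-real-v1-sft.jsonl", split="train")
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=2,
        learning_rate=2e-5,
        lr_scheduler_type="cosine",
        warmup_ratio=0.03,
        optim="paged_adamw_8bit",
        bf16=True,
    ),
)
trainer.train()
# Merge the LoRA adapter into BF16 weights for release.
model.save_pretrained_merged("cipher-sft25-real-merged", tokenizer, save_method="merged_16bit")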


Smoke Test

Prompt: "Build a portfolio with Lenis smooth-scroll + GSAP ScrollTrigger reveals, dark elegant theme"

Check                                            Result
<!DOCTYPE html>                                  present
Three.js / GSAP / Lenis / ScrollTrigger imports  ✅ (17+ references)
Tailwind CDN pollution                           ❌ (0 occurrences)
lenis.stop() misuse                              ❌ (0 occurrences)
Output length                                    15–22 KB per site
Generation rate (A100)                           ≈ 150 tok/s
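
These checks are easy to script; a sketch (the file path and regexes are illustrative, not the actual test harness):

import re

def smoke_test(path: str) -> dict:
    html = open(path, encoding="utf-8").read()
    return {
        "doctype": html.lstrip().lower().startswith("<!doctype html>"),
        # Rough count of motion-stack references across the required libraries.
        "stack_refs": len(re.findall(r"three|gsap|lenis|scrolltrigger", html, re.IGNORECASE)),
        # Slop checks: both counts should be zero.
        "tailwind_cdn": html.count("cdn.tailwindcss.com"),
        "lenis_stop": html.count("lenis.stop()"),
        "size_kb": round(len(html.encode("utf-8")) / 1024, 1),
    }

print(smoke_test("portfolio.html"))  # illustrative output file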

Pipeline Position

Gemma-4-31B-IT (Unsloth 4-bit)
    └── SFT v1 ──► cipher-sft-merged
          └── SimPO ──► cipher-simpo-merged
                └── SFT 2.5 (synthetic) ──► cipher-sft25-merged         [retired]
                      └── SFT 2.5 (real) ──► cipher-sft25-real-merged   [ YOU ARE HERE ]
                            └── GRPO (planned)
                                  └── KTO (planned)

Next stage: GRPO with a creative-quality reward (Lenis+GSAP presence − Tailwind/boilerplate penalty) to push the model from "right stack" to "award-winning layout instincts".
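
As a rough prototype of that reward, one could score generated pages with a heuristic like the sketch below; this is an assumption about the reward's eventual shape, not the implemented GRPO objective:

def creative_reward(html: str) -> float:
    """Hypothetical heuristic: motion-stack presence minus slop penalties."""
    lowered = html.lower()
    score = 0.0
    # Reward each required motion-stack library that appears.
    for lib in ("lenis", "gsap", "scrolltrigger", "splittext"):
        if lib in lowered:
            score += 1.0
    # Penalize the slop patterns the SFT stage suppresses.
    score -= 2.0 * lowered.count("cdn.tailwindcss.com")
    score -= 2.0 * lowered.count("lenis.stop()")
    return score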


Limitations & Biases

  • English-only prompts. The SFT corpus is 100 % English; non-English inputs fall back to base-model behavior.
  • Output is single-file HTML. Multi-file React/Vue codebases drift off-distribution.
  • 3D card / glassmorphism prompts with novel APIs can still hallucinate helper libraries. A GRPO pass is planned to harden these.
  • Non-commercial license on the dataset & this checkpoint (CC-BY-NC-4.0). Contact Auroraventures for commercial licensing.

License

CC-BY-NC-4.0 — Free for research, teaching, and non-commercial creative work. Attribution required. See LICENSE.

Base model (Gemma-4) is governed by Google's Gemma Terms of Use.


Citation

@misc{cipher-sft25-real-2026,
  title        = {Cipher SFT 2.5 — Real: a creative-web code generator trained on authentic Awwwards-grade sources},
  author       = {Matt Haynes and Aurora Ventures},
  year         = {2026},
  month        = {April},
  howpublished = {\url{https://huggingface.co/Auroraventures/cipher-sft25-real-merged}},
}

Built with 🦑 by Aurora Ventures. Cipher is the Code Kraken of the Kin runtime.
