license: cc-by-nc-4.0
language:
  - en
library_name: transformers
pipeline_tag: text-generation
base_model: Auroraventures/cipher-simpo-merged
tags:
  - gemma4
  - gemma-4-31b
  - cipher
  - kin
  - creative-coding
  - awwwards
  - three.js
  - gsap
  - lenis
  - web-design
  - front-end
  - html
  - css
  - javascript
  - code-generation
  - unsloth
  - qlora
  - lora
  - sft
  - single-file-html
  - text-generation
datasets:
  - Auroraventures/cipher-awwwards-sft25
model-index:
  - name: cipher-sft25-real-merged
    results:
      - task:
          type: text-generation
          name: Single-file HTML generation
        dataset:
          name: cipher-real-v1-sft
          type: Auroraventures/cipher-awwwards-sft25
          config: real-scraped-v1
          split: train
        metrics:
          - type: loss
            value: 0.29
            name: Final training loss
          - type: accuracy
            value: Three.js + GSAP + Lenis present, zero Tailwind/lenis.stop() slop
            name: Smoke-test verdict
widget:
  - text: >-
      Build a complete single-file HTML page with a stunning hero section
      featuring a Three.js particle system that responds to mouse movement.
    example_title: Three.js particle hero
  - text: >-
      Build a complete single-file HTML portfolio with smooth scrolling via
      Lenis and GSAP ScrollTrigger text reveals.
    example_title: Lenis + GSAP portfolio
  - text: >-
      Build a glassmorphism 3D card with CSS preserve-3d, GSAP entry animation,
      flips on hover.
    example_title: 3D card
inference: false

Cipher SFT 2.5 — Real (v3) 🦑

"The Code Kraken sees what others miss."

⚠ For creative-brief → full-site generation, use the Kraken RAG pipeline instead

This checkpoint exists as an experiment in local creative-web fine-tuning. In practice, the production path for "brief → Awwwards-quality single-file HTML" is the kraken_rag package, which retrieves from 96+ real Site-of-the-Day winners and prompts a frontier model (Claude Opus 4.7 via the Claude Code CLI, or GPT-5.4 via the SDK). See data/awwwards/distilled/CRITICAL-ASSESSMENT.md in the training repo; its TL;DR, written on 2026-04-15, said this explicitly, and subsequent generation tests with this checkpoint confirmed the ceiling.

This checkpoint still has value as:

  • A local offline scaffolder for motion-stack boilerplate (runs on one A100 via transformers.generate).
  • A research artifact for the Kin training pipeline (v1 SFT β†’ SimPO β†’ v3 real-data SFT).
  • A reward-model candidate for future RL work against rendered-page critiques.

It is not the creative brain. For creative output, start at kraken_rag/README.md.
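The retrieval half of that pipeline can be sketched with a toy keyword-overlap ranker. Everything here is a hypothetical stand-in: the function names and the three-entry corpus are invented for illustration, not the real kraken_rag interface or its 96-winner index.

```python
# Toy sketch of a retrieval step like the one kraken_rag performs before
# prompting a frontier model. All names here are illustrative; see
# kraken_rag/README.md for the real interface.

def tokenize(text: str) -> set[str]:
    """Lowercase word set -- good enough for a toy keyword-overlap retriever."""
    return set(text.lower().split())

def retrieve(brief: str, corpus: dict[str, str], k: int = 3) -> list[str]:
    """Rank Site-of-the-Day snippets by keyword overlap with the brief."""
    brief_words = tokenize(brief)
    scores = {name: len(brief_words & tokenize(doc)) for name, doc in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Invented stand-in for the winner corpus.
sotd = {
    "particle-hero": "three.js particle system hero mouse interaction shaders",
    "lenis-portfolio": "lenis smooth scroll gsap scrolltrigger text reveal portfolio",
    "glass-card": "glassmorphism css preserve-3d card flip hover gsap",
}
print(retrieve("portfolio with lenis smooth scroll and gsap reveals", sotd, k=2))
# -> ['lenis-portfolio', 'glass-card']
```

The real package would use embedding similarity rather than word overlap, but the shape (brief in, top-k winner exemplars out, exemplars into the frontier-model prompt) is the same.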


What it actually is

Cipher-SFT25-Real is a 31B-parameter Gemma-4 creative-web generator, fine-tuned on real, scraped source code from three canonical creative-coding corpora (official Three.js examples, Motion One examples, the freefrontend.com GSAP gallery). It emits complete, single-file HTML documents with no Tailwind CDN and the correct motion-stack idioms.

This is the v3 breakthrough checkpoint in the Cipher series: the first Kin generator trained end-to-end on authentic, source-backed creative-coding corpora after v1/v2 suffered template collapse. Whether that is enough to reach the creative-quality bar the project aimed for is covered by the disclaimer above.

  • 🧠 Base: Auroraventures/cipher-simpo-merged (Gemma-4-31B-IT + SimPO anti-slop preference pairs)
  • πŸ”¬ Fine-tune: Supervised SFT on cipher-real-v1-sft.jsonl via dataset config real-scraped-v1 (741 records, 5.4 MB) built from three source-backed code corpora, with Aura shells kept only as reference metadata
  • 🎨 Optimized for: Awwwards Site-of-the-Day motion stacks β€” Three.js, GSAP, ScrollTrigger, SplitText, Lenis, vanilla JS
  • 🚫 Slop-suppressed: No Tailwind CDN, no lenis.stop() misuse, no copy-paste boilerplate
  • ⚑ Library: Unsloth QLoRA (r=64, Ξ±=128, rsLoRA) merged to BF16

Quickstart

Transformers (GPU, ≥48 GB VRAM recommended)

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Auroraventures/cipher-sft25-real-merged"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Cipher, the Code Kraken. Emit complete single-file HTML documents, no markdown fences."},
    {"role": "user", "content": "Build an Awwwards-quality portfolio hero with Three.js particle waves reacting to mouse."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=4096, do_sample=True, temperature=0.7, top_p=0.9, repetition_penalty=1.05)
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))

HF Inference Endpoint (recommended for quick trials)

  1. Click Deploy β†’ Inference Endpoints (dedicated) on this model page.
  2. Pick AWS us-east-1, NVIDIA L4 ×1 (≈ $0.80/h), min replicas 0 for auto-idle.
  3. Once Running, paste the endpoint URL into scripts/generate_via_hf_endpoint.py and generate the 3 canonical sites.
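Once the endpoint is Running you can also call it by hand. A minimal sketch of the request that scripts/generate_via_hf_endpoint.py presumably builds, using the standard text-generation-inference /generate payload; the endpoint URL and token below are placeholders:

```python
# Build a request against a dedicated HF Inference Endpoint (TGI backend).
# Endpoint URL and token are placeholders -- substitute your own.
import json
import urllib.request

def build_request(endpoint_url: str, prompt: str, token: str) -> urllib.request.Request:
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": 4096, "temperature": 0.7, "top_p": 0.9},
    }
    return urllib.request.Request(
        f"{endpoint_url.rstrip('/')}/generate",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_request(
    "https://example.endpoints.huggingface.cloud",  # placeholder URL
    "Build a hero section with Three.js particle waves.",
    "hf_xxx",  # placeholder token
)
# html = json.loads(urllib.request.urlopen(req).read())["generated_text"]  # network call
```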

llama.cpp / Ollama

Use the companion GGUF: Auroraventures/cipher-sft25-real-merged-Q4_K_M-GGUF (≈ 18 GB). Apply the raw Gemma-4 chat template; Ollama's auto-template can misfire on merged SFT checkpoints.
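If the auto-template does misfire, an explicit Modelfile sidesteps it. A minimal sketch, assuming the standard Gemma <start_of_turn> turn markers carry over to Gemma-4:

```
FROM ./cipher-sft25-real-merged-Q4_K_M.gguf
TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}"""
PARAMETER temperature 0.7
PARAMETER stop "<end_of_turn>"
```

Then `ollama create cipher -f Modelfile` followed by `ollama run cipher`.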


Training Data — Real, Not Synthetic

Dataset: Auroraventures/cipher-awwwards-sft25, file cipher-real-v1-sft.jsonl (5.66 MB, 741 records).

| Source | Records | Purpose |
|---|---|---|
| mrdoob/three.js/examples | 578 | Ground-truth Three.js patterns (raycasting, shaders, particles, postprocessing) |
| motiondivision/motion/dev | 148 | Framer Motion idioms transplanted to vanilla DOM |
| freefrontend.com GSAP corpus | 63 | ScrollTrigger, SplitText, SVG morph, timeline chains |
| aura.build shells | reference-only | Modern CSS scaffolding, typography, and palette metadata used for prompting and analysis, not assistant completions |

Every record is a Gemma-4 chat-format triple (system, user, assistant) where:

  • system β€” Cipher's output contract (no Tailwind CDN, Lenis + GSAP + ScrollTrigger + SplitText, opacity-safe cascade)
  • user β€” a naturalistic request keyed to the source pattern
  • assistant β€” the actual HTML / JS / TSX lifted from the three source-backed code corpora above

Aura shells are deliberately excluded from assistant completions because they do not provide trustworthy full-code artifacts; they remain reference metadata only.
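An illustrative record in that triple shape. The wording below is invented, not lifted from cipher-real-v1-sft.jsonl, and it assumes the common role/content message layout serialized one record per JSONL line:

```python
# Invented example of a (system, user, assistant) training record;
# the real dataset's exact field names may differ.
import json

record = {
    "messages": [
        {"role": "system",
         "content": "You are Cipher. Emit complete single-file HTML documents. "
                    "No Tailwind CDN. Use Lenis + GSAP + ScrollTrigger + SplitText."},
        {"role": "user",
         "content": "Build a hero with a Three.js particle field that follows the mouse."},
        {"role": "assistant",
         "content": "<!DOCTYPE html>\n<html>...</html>"},
    ]
}

line = json.dumps(record)  # one JSONL line
roles = [m["role"] for m in record["messages"]]
print(roles)  # -> ['system', 'user', 'assistant']
```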


Training

| Hyperparameter | Value |
|---|---|
| Base | unsloth/gemma-4-31b-it-unsloth-bnb-4bit → SimPO → this checkpoint |
| Adapter | LoRA r=64, α=128, rsLoRA enabled |
| Context | 8192 tokens |
| Precision | BF16 merged weights |
| Optimizer | paged_adamw_8bit |
| LR / schedule | 2e-5, cosine with warmup (0.03) |
| Epochs | 2 |
| Batch | 2 × grad_accum 8 |
| Hardware | 1 × A100 80 GB (Colab Pro+) |
| Final loss | 0.29 (healthy, no memorization collapse) |
| Training time | ~1 h 45 m wall time |

The 0.29 final loss lands just under the healthy 0.3–0.5 band — stable learning without the 0.01 collapse that plagued earlier synthetic-data runs.
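A quick sanity check on the schedule: batch 2 × grad_accum 8 gives an effective batch of 16, so 741 records over 2 epochs comes to roughly 94 optimizer steps (assuming the final partial batch is kept, which trainer configs vary on):

```python
# Derive the step count implied by the hyperparameter table above.
import math

records, epochs = 741, 2
per_device_batch, grad_accum = 2, 8

effective_batch = per_device_batch * grad_accum         # 2 * 8 = 16
steps_per_epoch = math.ceil(records / effective_batch)  # ceil(741 / 16) = 47
total_steps = steps_per_epoch * epochs                  # 94
warmup_steps = round(0.03 * total_steps)                # warmup ratio 0.03 -> ~3 steps

print(effective_batch, total_steps, warmup_steps)  # -> 16 94 3
```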


Smoke Test

Prompt: "Build a portfolio with Lenis smooth-scroll + GSAP ScrollTrigger reveals, dark elegant theme"

| Check | Result |
|---|---|
| `<!DOCTYPE html>` present | ✅ |
| Three.js / GSAP / Lenis / ScrollTrigger imports | ✅ (17+ references) |
| Tailwind CDN pollution | ❌ (0 occurrences) |
| lenis.stop() misuse | ❌ (0 occurrences) |
| Output length | 15–22 KB per site |
| Generation rate (A100) | ≈ 150 tok/s |
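The checks above can be rerun mechanically on any generation. A minimal sketch, assuming plain substring/regex counting is all the original harness did; the sample page is invented:

```python
# Re-implementation sketch of the smoke-test checks on a generated page.
import re

def smoke_check(html: str) -> dict:
    # Count references to the target motion stack (case-insensitive).
    stack_refs = len(re.findall(r"three\.js|gsap|lenis|scrolltrigger",
                                html, flags=re.IGNORECASE))
    return {
        "doctype": html.lstrip().lower().startswith("<!doctype html>"),
        "stack_refs": stack_refs,
        "tailwind_cdn": html.lower().count("cdn.tailwindcss.com"),
        "lenis_stop": html.count("lenis.stop()"),
        "size_kb": round(len(html.encode()) / 1024, 1),
    }

# Invented sample output, far shorter than a real generation.
sample = ("<!DOCTYPE html><html><script src='gsap.min.js'></script>"
          "<script>const lenis = new Lenis();</script></html>")
print(smoke_check(sample))
```

A real run would assert doctype, stack_refs at a threshold, and zero occurrences of the two slop patterns.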

Pipeline Position

Gemma-4-31B-IT (Unsloth 4-bit)
    └── SFT v1 ──► cipher-sft-merged
          └── SimPO ──► cipher-simpo-merged
                └── SFT 2.5 (synthetic) ──► cipher-sft25-merged         [retired]
                      └── SFT 2.5 (real) ──► cipher-sft25-real-merged   [ YOU ARE HERE ]
                            └── GRPO (planned)
                                  └── KTO (planned)

Next stage: GRPO with a creative-quality reward (Lenis + GSAP presence − Tailwind/boilerplate penalty) to push the model from "right stack" to "award-winning layout instincts".
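That reward could take a shape like the following sketch; the weights and the helper name are illustrative, not the planned implementation:

```python
# Hypothetical shape of the planned GRPO reward: motion-stack presence
# minus slop penalties. Weights here are made up for illustration.
def creative_reward(html: str) -> float:
    low = html.lower()
    presence = sum(token in low for token in ("lenis", "gsap", "scrolltrigger"))
    tailwind_penalty = 2.0 * low.count("cdn.tailwindcss.com")
    boilerplate_penalty = 1.0 * low.count("lenis.stop()")
    return float(presence) - tailwind_penalty - boilerplate_penalty

good = "<script>const lenis = new Lenis(); gsap.registerPlugin(ScrollTrigger)</script>"
bad = "<script src='https://cdn.tailwindcss.com'></script>"
print(creative_reward(good), creative_reward(bad))  # -> 3.0 -2.0
```

A production reward would score rendered-page critiques rather than raw strings, per the RL plans listed earlier.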


Limitations & Biases

  • English-only prompts. The SFT corpus is 100 % English; non-English inputs fall back to base-model behavior.
  • Output is single-file HTML. Multi-file React/Vue codebases drift off-distribution.
  • 3D card / glassmorphism prompts with novel APIs can still hallucinate helper libraries. A GRPO pass is planned to harden these.
  • Non-commercial license on the dataset & this checkpoint (CC-BY-NC-4.0). Contact Auroraventures for commercial licensing.

License

CC-BY-NC-4.0 — free for research, teaching, and non-commercial creative work. Attribution required. See LICENSE.

Base model (Gemma-4) is governed by Google's Gemma Terms of Use.


Citation

@misc{cipher-sft25-real-2026,
  title        = {Cipher SFT 2.5 --- Real: a creative-web code generator trained on authentic Awwwards-grade sources},
  author       = {Matt Haynes and Aurora Ventures},
  year         = {2026},
  month        = {April},
  howpublished = {\url{https://huggingface.co/Auroraventures/cipher-sft25-real-merged}},
}

Built with 🦑 by Aurora Ventures. Cipher is the Code Kraken of the Kin runtime.