SuperApriel-15b-Instruct
/ˈɑː.pri.əl/
A 15B-parameter token-mixer supernet with 8 optimized deployment presets spanning 1.0× to 10.7× decode throughput at 32K sequence length, all from a single checkpoint. Derived from Apriel-1.6 through stochastic distillation and targeted supervised fine-tuning.
- Model Size: 15B parameters
- Layers: 48 decoder layers, each with 4 mixer variants
- Context Length: 262K positions (runtime dependent)
- Languages: English (best)
Highlights
- Flexible deployment from a single checkpoint: multiple presets trading throughput for quality
- Four mixer types per layer: Full Attention (FA), Sliding Window Attention (SWA), Gated DeltaNet (GDN), Kimi Delta Attention (KDA)
- Instruction-tuned: targeted SFT with multiple Pareto-optimal placements
- Speculative decoding support: use all-attention as target with efficient placements as drafts from the same checkpoint
See the report for detailed benchmarks, quality retention curves, and the full story.
Performance Overview
(Figure: Pareto frontier of speedup vs. quality retention across presets.)
(Figure: decode throughput comparison across presets.)
Deployment Presets
Each preset is a specific assignment of one mixer per layer. All presets share the same checkpoint—only mixer selection differs at inference time.
| Preset | FA | SWA | KDA | GDN | Speedup @32k | Speedup @16k | Avg Acc. | Quality Retention |
|---|---|---|---|---|---|---|---|---|
| all-attention | 48 | 0 | 0 | 0 | 1.0× | 1.0× | 74.2 | 100% |
| Reg_Lklhd-26 | 12 | 26 | 6 | 4 | 2.85× | 1.5× | 71.1 | 96% |
| Idealized_All-18 | 13 | 32 | 1 | 2 | 1.99× | 1.1× | 71.8 | 97% |
| Reg_Lklhd-18 | 3 | 25 | 4 | 16 | 4.76× | 2.2× | 69.7 | 94% |
| Idealized_Lklhd-6 | 0 | 30 | 5 | 13 | 6.2× | 2.4× | 66.8 | 90% |
| Idealized_All-6 | 0 | 30 | 5 | 13 | 6.13× | 2.5× | 65.3 | 88% |
| Reg_Lklhd-13 | 0 | 16 | 13 | 19 | 6.9× | 2.7× | 60.2 | 81% |
| Reg_Lklhd-10 | 0 | 10 | 5 | 33 | 10.69× | 4.2× | 57.2 | 77% |
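Concretely, a preset is just a length-48 list assigning one mixer to each decoder layer; the FA/SWA/KDA/GDN columns in the table are tallies of that list. A minimal sketch (the mixer labels and the ordering below are illustrative, not the repo's actual config contents):

```python
from collections import Counter

# Hypothetical per-layer placement for a 48-layer model, using the
# Reg_Lklhd-26 counts (12 FA, 26 SWA, 6 KDA, 4 GDN). The real preset
# specifies a particular ordering across layers; this grouping is only
# for illustration.
pattern = (
    ["attention"] * 12
    + ["sliding_window"] * 26
    + ["kda"] * 6
    + ["gdn"] * 4
)

counts = Counter(pattern)
print(counts)        # per-mixer tallies, matching the table row
print(len(pattern))  # 48
```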
Benchmark Results
Full Results (S2: Targeted SFT with 8 Placements)
| Config | @32k | AIME'24 | AIME'25 | MATH-500 | GSM8K | FDA | SWDE | RULER | Tau2 | MMLU-Pro | AIME(NV) | GPQA | HLE | LCB | IFBench | All |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| all-attention | 1.0× | 93.3 | 86.7 | 91.8 | 92.3 | 78.3 | 89.5 | 79.4 | 56.7 | 76.8 | 82.7 | 72.0 | 8.2 | 68.6 | 63.1 | 74.2 |
| Reg_Lklhd-26 | 2.85× | 86.7 | 83.3 | 92.0 | 91.1 | 79.9 | 88.2 | 74.4 | 30.7 | 76.2 | 80.0 | 70.7 | 10.0 | 69.2 | 63.1 | 71.1 |
| Idealized_All-18 | 1.99× | 90.0 | 86.7 | 92.0 | 92.1 | 78.0 | 86.6 | 67.1 | 52.6 | 76.3 | 82.2 | 68.2 | 6.9 | 67.3 | 58.8 | 71.8 |
| Reg_Lklhd-18 | 4.76× | 86.7 | 76.7 | 92.4 | 91.7 | 81.7 | 89.8 | 60.5 | 46.2 | 76.3 | 74.4 | 68.7 | 6.6 | 64.8 | 59.3 | 69.7 |
| Idealized_Lklhd-6 | 6.2× | 83.3 | 76.7 | 92.4 | 92.3 | 76.9 | 88.9 | 66.1 | 40.4 | 73.6 | 62.2 | 65.0 | 6.1 | 57.0 | 54.9 | 66.8 |
| Idealized_All-6 | 6.13× | 83.3 | 80.0 | 92.2 | 91.7 | 75.6 | 87.4 | 61.9 | 34.2 | 73.3 | 56.7 | 61.0 | 5.9 | 55.9 | 55.3 | 65.3 |
| Reg_Lklhd-13 | 6.9× | 76.7 | 73.3 | 90.4 | 91.2 | 68.6 | 85.1 | 57.0 | 28.6 | 69.3 | 26.7 | 61.2 | 5.5 | 52.6 | 57.1 | 60.2 |
| Reg_Lklhd-10 | 10.69× | 76.7 | 66.7 | 90.6 | 90.8 | 65.2 | 82.9 | 48.6 | 23.4 | 68.2 | 24.4 | 52.5 | 4.5 | 50.2 | 56.2 | 57.2 |
Comparison with Other Hybrid Models
| Model | Speedup @32k | Math (Avg) | All Tasks |
|---|---|---|---|
| Super Apriel all-attention | 1.0× | 91.0 | 74.2 |
| Super Apriel Reg_Lklhd-26 | 2.85× | 88.3 | 71.1 |
| Super Apriel Reg_Lklhd-18 | 4.76× | 86.8 | 69.7 |
| Super Apriel Idealized_Lklhd-6 | 6.2× | 86.2 | 66.8 |
| Apriel-H1 15B | 1.97× | 80.4 | 58.4 |
| Nemotron-Nano 12B v2 | 5.85× | 74.5 | 62.4 |
| Falcon-H1R 7B | 4.61× | 78.6 | 64.9 |
| Nemotron-3-Nano 30B | 4.09× | 89.0 | 72.6 |
Model Overview
SuperApriel-15b-Instruct is trained in two stages:
Stage 1 — Stochastic Distillation: All four mixer types are trained simultaneously via distillation from a frozen Apriel-1.6 teacher on 266B tokens. See SuperApriel-15b-Base.
Stage 2 — Targeted SFT: Supervised fine-tuning on 60B tokens with 8 Pareto-optimal placements identified via Bayesian placement optimization. Shared parameters (FFNs, embeddings, norms) remain frozen; only mixer weights are trained.
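The freeze/train split in Stage 2 can be sketched in PyTorch, assuming mixer parameters are identifiable by name (a toy stand-in; the actual Fast-LLM module and parameter names will differ):

```python
import torch.nn as nn

# Toy stand-in for one decoder layer: a "mixer" plus shared FFN/norm weights.
class ToyLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.mixer = nn.Linear(8, 8)  # trained in Stage 2
        self.ffn = nn.Linear(8, 8)    # shared, frozen
        self.norm = nn.LayerNorm(8)   # shared, frozen

model = nn.ModuleList([ToyLayer() for _ in range(4)])

# Freeze everything except parameters whose name marks them as mixer weights.
for name, param in model.named_parameters():
    param.requires_grad = ".mixer." in f".{name}."

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the *.mixer.* parameters
```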
Architecture Details
| Component | Details |
|---|---|
| Parameters | 15B |
| Decoder layers | 48 |
| Query / KV heads | 32 / 8 (grouped-query attention), d_h = 128 |
| Hidden dimension | 5,120 |
| FFN width | 14,336 (SiLU-gated) |
| Vocabulary | 131,072 tokens |
| Vision encoder | Pixtral (16×16 patches) |
Mixer Types
| Mixer | Time | Memory | Description |
|---|---|---|---|
| Full Attention (FA) | O(n²) | O(n) KV cache | Standard grouped-query attention |
| Sliding Window (SWA) | O(w·n) | O(w) | Local window of 4,096 tokens |
| Gated DeltaNet (GDN) | O(n) | O(1) fixed state | Matrix-valued recurrent state with delta rule |
| Kimi Delta Attention (KDA) | O(n) | O(1) fixed state | Linear attention with channel-wise gating |
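The memory column is the key difference at decode time: FA's KV cache grows with sequence length, SWA's is capped at the window, and the recurrent mixers keep a fixed-size state. A back-of-envelope sketch of cached positions per layer (the size-1 value for GDN/KDA is a placeholder; the true recurrent state shape is implementation-specific):

```python
def kv_entries(mixer: str, n: int, window: int = 4096) -> int:
    """Cached positions a single layer must hold after decoding n tokens."""
    if mixer == "FA":
        return n                # full KV cache: one entry per position
    if mixer == "SWA":
        return min(n, window)   # capped at the sliding window
    return 1                    # GDN/KDA: fixed-size recurrent state

for n in (4_096, 32_768, 262_144):
    print(n, {m: kv_entries(m, n) for m in ("FA", "SWA", "GDN")})
```

This is why presets with fewer FA layers decode faster at long context: most layers stop paying per-position cache costs.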
How to Use
The recommended serving backend is vLLM with the Fast-LLM plugin, which supports preset selection and runtime switching. For simpler use cases, Transformers is also supported (see Use with Transformers below).
Use with vLLM
Preset selection and throughput-optimized serving require the vLLM plugin from Fast-LLM. Two serving modes are available:
- Single-preset mode: Only the weights for the selected mixer placement are loaded (approx. 27 GiB in bf16). Unused mixer weights are never loaded, so the model fits comfortably on a single GPU with room for KV cache.
- Supernet mode: All four mixers' weights are loaded for every layer (approx. 46 GiB in bf16), enabling instant placement switching at runtime via `collective_rpc` (3-20 ms per switch, no engine reload).
Installation
Install vLLM and the Apriel2 plugin (no need to install the full Fast-LLM training framework):
```shell
python -m venv .venv
source .venv/bin/activate
pip install vllm==0.14.1

git clone git@github.com:ServiceNow/Fast-LLM.git
cd Fast-LLM && git checkout feature/vllm-apriel2-models

# Install only the vLLM plugin (not the full training framework)
pip install -e ./apriel2-vllm-plugin/
```
This installs a thin wrapper that registers the Apriel2 model with vLLM via its plugin system. It only pulls in torch, transformers, and einops — no training dependencies.
Single-Preset Mode
Load one preset on a single GPU. Copy the desired preset config over the model's config.json:
```python
import shutil
from vllm import LLM, SamplingParams

# Note: the config copy below writes into the model directory, so use a
# local snapshot of the repo rather than the bare Hub ID.
model_dir = "ServiceNow-AI/SuperApriel-15b-Instruct"  # or local path

# Select a preset (copies its config.json as the model's main config)
preset = "Reg_Lklhd-26"  # see preset_configs/ for all options
shutil.copy(f"{model_dir}/preset_configs/{preset}/config.json",
            f"{model_dir}/config.json")

llm = LLM(
    model=model_dir,
    trust_remote_code=True,
    gpu_memory_utilization=0.90,
    max_model_len=4096,
)

output = llm.generate(
    ["What is 2 + 3?"],
    SamplingParams(max_tokens=200, temperature=0.6),
)
print(output[0].outputs[0].text)
```
Available presets: all-attention, Reg_Lklhd-26, Reg_Lklhd-18, Reg_Lklhd-13, Reg_Lklhd-10, Idealized_All-18, Idealized_All-6, Idealized_Lklhd-6, extra_bayesian-mix-7.
Note: Custom placements must include at least one attention-type layer (FA or SWA). Configurations using only recurrent mixers (GDN/KDA) are not currently supported due to a vLLM KV cache coordinator limitation. All shipped presets satisfy this requirement.
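The constraint can be checked before serving. A small validator over a placement pattern (mixer labels assumed to match those reported by `get_mixer_names`):

```python
ATTENTION_TYPES = {"attention", "sliding_window"}

def validate_pattern(pattern: list[str]) -> None:
    """Reject placements with no attention-type layer (unsupported in vLLM)."""
    if not any(mixer in ATTENTION_TYPES for mixer in pattern):
        raise ValueError(
            "Placement uses only recurrent mixers (GDN/KDA); "
            "at least one FA or SWA layer is required."
        )

validate_pattern(["attention"] + ["gdn"] * 47)  # OK
try:
    validate_pattern(["gdn", "kda"] * 24)       # raises ValueError
except ValueError as e:
    print(e)
```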
Supernet Mode (Runtime Switching)
Load the full supernet and switch presets at runtime without reloading the engine. Use enforce_eager=True on a single GPU (saves memory by skipping CUDA graph capture):
```python
import json
from vllm import LLM, SamplingParams

model_dir = "ServiceNow-AI/SuperApriel-15b-Instruct"  # or local path

# Ensure the base supernet config.json is active (the default in the repo)
llm = LLM(
    model=model_dir,
    trust_remote_code=True,
    gpu_memory_utilization=0.90,
    max_model_len=4096,
    enforce_eager=True,  # required for single GPU; optional with tensor_parallel_size=2
    # tensor_parallel_size=2,  # uncomment for a 2-GPU setup (enables CUDA graphs)
)

# Check available mixers
mixer_names = llm.collective_rpc("get_mixer_names")
print(mixer_names[0])  # ['attention', 'gdn', 'kda', 'sliding_window']

# Load a preset pattern and switch
preset = "Reg_Lklhd-26"
with open(f"{model_dir}/preset_configs/{preset}/config.json") as f:
    pattern = json.load(f)["decoder"]["pattern"]
llm.collective_rpc("set_layer_placements", args=(pattern,))  # ~3-20 ms

output = llm.generate(
    ["What is 2 + 3?"],
    SamplingParams(max_tokens=200, temperature=0.6),
)
print(output[0].outputs[0].text)

# Switch to another preset instantly
with open(f"{model_dir}/preset_configs/all-attention/config.json") as f:
    pattern = json.load(f)["decoder"]["pattern"]
llm.collective_rpc("set_layer_placements", args=(pattern,))
```
| Setup | Model Size | Available for KV Cache | Notes |
|---|---|---|---|
| Single-preset, 1 GPU | 27 GiB | 39 GiB | Best for fixed deployments |
| Supernet, 1 GPU (enforce_eager) | 46 GiB | 20 GiB | Runtime switching, lower KV capacity |
| Supernet, 2 GPU (TP=2) | 23 GiB/GPU | 50 GiB/GPU | Full compile + CUDA graphs |
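The KV-cache budgets translate into context capacity via simple arithmetic. For the all-attention preset, each decoded token stores keys and values in every layer (numbers from the architecture table; bf16 = 2 bytes per value; a back-of-envelope estimate that ignores vLLM's block-allocation overhead):

```python
layers, kv_heads, head_dim = 48, 8, 128
bytes_per_value = 2  # bf16

# Keys and values per token, summed across all layers
kv_bytes_per_token = layers * kv_heads * head_dim * 2 * bytes_per_value
print(kv_bytes_per_token)  # 196608 bytes = 192 KiB per token

# Approximate tokens that fit in the single-preset budget (39 GiB)
budget = 39 * 1024**3
print(budget // kv_bytes_per_token)  # 212992, i.e. ~213k tokens
```

Presets that replace FA layers with SWA or recurrent mixers shrink the per-token figure accordingly, which is where the long-context throughput gains come from.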
Serving (OpenAI-Compatible API)
Serve a single preset as an OpenAI-compatible endpoint using vllm serve. Select the preset before launching:
```shell
# Select a preset
MODEL_DIR="ServiceNow-AI/SuperApriel-15b-Instruct"  # or local path
cp "$MODEL_DIR/preset_configs/Reg_Lklhd-26/config.json" "$MODEL_DIR/config.json"

# Launch the server
vllm serve "$MODEL_DIR" \
  --trust-remote-code \
  --gpu-memory-utilization 0.90 \
  --max-model-len 4096 \
  --served-model-name SuperApriel-15b-Instruct
```
Query with curl:
```shell
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "SuperApriel-15b-Instruct",
    "messages": [{"role": "user", "content": "What is 2 + 3?"}],
    "max_tokens": 200,
    "temperature": 0.6
  }'
```
Or with the OpenAI Python client:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
response = client.chat.completions.create(
    model="SuperApriel-15b-Instruct",
    messages=[{"role": "user", "content": "What is 2 + 3?"}],
    max_tokens=200,
    temperature=0.6,
)
print(response.choices[0].message.content)
```
To switch presets, stop the server, copy a different preset config, and restart.
Per-Request Preset Selection
The supernet mode above switches the active placement globally for all requests. For production deployments, it is also possible to route individual requests to different presets — for example, routing latency-sensitive requests to a fast placement and complex reasoning requests to all-attention.
🔴 TODO: Add per-request placement selection via the vLLM serving API (e.g. a `placement_id` field in the request body). This would be managed separately from the global `collective_rpc` switching shown above.
Use with Transformers
The model works with AutoModelForCausalLM for standard Transformers inference. Each preset config selects one mixer per layer, and the model automatically loads only the relevant weights from the supernet checkpoint.
Installation
```shell
pip install transformers torch einops accelerate
```
Presets that include GDN or KDA mixers additionally require:
```shell
pip install causal-conv1d mamba-ssm
```
Attention-only presets (all-attention, and others using only FA/SWA) work without these extra packages.
Usage
The repo ships with the supernet config as default (for vLLM). Before loading with Transformers, you must select a preset by copying its config.json:
```python
import re
import shutil
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use a local snapshot of the repo: the config copy below writes to disk.
model_dir = "ServiceNow-AI/SuperApriel-15b-Instruct"  # or local path

# Select a preset (required: the default supernet config is for vLLM only)
preset = "all-attention"  # or any preset from preset_configs/
shutil.copy(f"{model_dir}/preset_configs/{preset}/config.json",
            f"{model_dir}/config.json")

tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Positive real numbers $x$ and $y$ satisfy $y^3=x^2$ and $(y-x)^2=4y^2$. What is $x+y$?\nMark your solution with \\boxed"
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, tools=[]
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)

# Extract the final response; fall back to the raw output if the markers are missing.
matches = re.findall(
    r"\[BEGIN FINAL RESPONSE\](.*?)\[END FINAL RESPONSE\]", output, re.DOTALL
)
response = matches[0].strip() if matches else output
print("response:", response)
```
Available presets: all-attention, Reg_Lklhd-26, Reg_Lklhd-18, Reg_Lklhd-13, Reg_Lklhd-10, Idealized_All-18, Idealized_All-6, Idealized_Lklhd-6, extra_bayesian-mix-7.
See Apriel-1.6-15b-Thinker for recommended inference settings.
Chat Template
```
<|begin_system|>
You are a thoughtful, systematic AI assistant from ServiceNow Language Models (SLAM) lab. Analyze each question carefully, present your reasoning step-by-step, then provide the final response after the marker [BEGIN FINAL RESPONSE].
<|begin_user|>
# user message here
<|begin_assistant|>
Here are my reasoning steps:
# thoughts here
[BEGIN FINAL RESPONSE]
# assistant response here
[END FINAL RESPONSE]
<|end|>
```
The model first generates its thinking process, then its final response between [BEGIN FINAL RESPONSE] and [END FINAL RESPONSE].
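Downstream code typically strips the reasoning and keeps only the final response. A minimal extraction sketch over a sample completion (the sample text is illustrative):

```python
import re

sample = (
    "Here are my reasoning steps:\n"
    "2 + 3 means adding two and three.\n"
    "[BEGIN FINAL RESPONSE]\n5\n[END FINAL RESPONSE]"
)

matches = re.findall(
    r"\[BEGIN FINAL RESPONSE\](.*?)\[END FINAL RESPONSE\]", sample, re.DOTALL
)
final = matches[0].strip() if matches else sample  # fall back to the raw output
print(final)  # 5
```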
Usage Guidelines
- Use the model's default chat template, which already includes a system prompt. Add additional instructions within the user message.
- During generation, the model starts its reply with `Here are my reasoning steps:\n` (this prefix is inserted by the default chat template).
- See Apriel-1.6-15b-Thinker for recommended inference settings.
Intended Use
SuperApriel-15b-Instruct is designed for a variety of instruction-following tasks, including:
- Code assistance and generation
- Logical reasoning and multi-step tasks
- Question answering and information retrieval
- Function calling, complex instruction following and agent use cases
It is not intended for use in safety-critical applications without human oversight or in scenarios requiring guaranteed factual accuracy.
Limitations
- Factual accuracy: May produce incorrect, misleading, or outdated content. Outputs should be verified before use in critical contexts.
- Bias: May reflect societal, cultural, or systemic biases present in training data.
- Ethics: Do not use the model to produce harmful, unlawful, or unethical content.
- Language: Strongest performance is in English. Output quality may degrade in underrepresented languages.
- Critical use: Not suitable for medical, legal, financial, or other high-risk applications without safeguards.
- Long-context degradation: Aggressive placements (fewer FA layers) may show reduced performance on long-range tasks like RULER/NIAH.
- vLLM pure-recurrent limitation: Custom placements using only GDN/KDA mixers (no FA or SWA layers) are not supported with vLLM due to a KV cache coordinator constraint. All shipped presets include attention-type layers.
Security and Responsible Use
Security Responsibilities: Deployers and users are strongly encouraged to align their security practices with established frameworks and regulatory guidelines such as the EU AI Act and the NIST AI Risk Management Framework (RMF).
Guidelines for Deployers:
- Regularly conduct robustness assessments to identify and mitigate adversarial inputs.
- Implement validation and filtering processes to prevent harmful or biased outputs.
- Continuously perform data privacy checks to guard against unintended data leaks.
- Document and communicate the model's limitations, intended usage, and known security risks to all end-users.
- Schedule periodic security reviews and updates to address emerging threats and vulnerabilities.
Guidelines for Users:
- Follow established security policies and usage guidelines provided by deployers.
- Protect and manage sensitive information when interacting with the model.
- Report anomalies, suspicious behavior, or unsafe outputs to deployers or developers.
- Maintain human oversight and apply judgment to mitigate potential security or ethical risks during interactions.
Disclaimer: Users accept responsibility for securely deploying, managing, and using this open-source LLM. The model is provided "as-is," without explicit or implied warranty regarding security or fitness for any specific application or environment.
Software
- Training stack: Fast-LLM
- Serving: Fast-LLM vLLM plugin
License
MIT
Citation
```
@misc{super_apriel_2026,
  title         = {Super Apriel: One Checkpoint, Many Speeds},
  author        = {ServiceNow Language Models Lab},
  year          = {2026},
  eprint        = {2604.19877},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL}
}
```