Support our open-source dataset and model releases!


Shining Valiant 3: Qwen3-1.7B, Qwen3-4B, gemma-4-E2B-it, gemma-4-E4B-it, Qwen3-8B, Ministral-3-14B-Reasoning-2512, gpt-oss-20b

Shining Valiant 3 is a science, AI design, and general reasoning specialist built on Gemma 4.

Prompting Guide

Shining Valiant 3 uses the gemma-4-E4B-it prompt format.
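For reference, Gemma-family chat templates wrap each turn in `<start_of_turn>`/`<end_of_turn>` markers, with the assistant role rendered as `model`. The sketch below is illustrative only (it assumes gemma-4-E4B-it follows the same convention as earlier Gemma releases); in practice, always use `processor.apply_chat_template` rather than hand-rolling the prompt.

```python
# Illustrative sketch of the Gemma-style turn format (assumption: gemma-4-E4B-it
# uses the same markers as earlier Gemma models). Prefer apply_chat_template.
def render_gemma_prompt(messages):
    parts = ["<bos>"]
    for m in messages:
        # Gemma templates map the "assistant" role to "model"
        role = "model" if m["role"] == "assistant" else m["role"]
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # generation prompt
    return "".join(parts)

print(render_gemma_prompt([{"role": "user", "content": "Hello"}]))
```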

Example inference script to get started:

from transformers import AutoProcessor, AutoModelForCausalLM

MODEL_ID = "ValiantLabs/gemma-4-E4B-it-ShiningValiant3"

# Load model
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    dtype="auto",
    device_map="auto"
)


# Prepare the model input
prompt = "Propose a novel cognitive architecture where the primary memory component is a Graph Neural Network (GNN). How would this GNN represent working, declarative, and procedural memory? How would the \"cognitive cycle\" be implemented as operations on this graph?"

messages = [
    {"role": "user", "content": prompt},
]

# Process input
text = processor.apply_chat_template(
    messages, 
    tokenize=False, 
    add_generation_prompt=True, 
    enable_thinking=True
)
inputs = processor(text=text, return_tensors="pt").to(model.device)
input_len = inputs["input_ids"].shape[-1]

# Generate output
outputs = model.generate(**inputs, max_new_tokens=5000)
response = processor.decode(outputs[0][input_len:], skip_special_tokens=True)

# Print the newly generated tokens only
print(response)
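For multi-turn use, carry the conversation forward by appending each reply to the messages list before the next call to apply_chat_template. A minimal sketch of that bookkeeping (the chat_turn helper and generate_fn parameter are illustrative, not part of this model card):

```python
# Hypothetical helper: keeps a running message history so each new call to
# apply_chat_template sees the full conversation.
def chat_turn(messages, user_prompt, generate_fn):
    messages.append({"role": "user", "content": user_prompt})
    # generate_fn would wrap the template/generate/decode steps from the script above
    reply = generate_fn(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

# Usage with a stand-in generator; swap in the real pipeline when running the model.
history = []
stub = lambda msgs: f"(reply to {len(msgs)} message(s))"
print(chat_turn(history, "Hello!", stub))
```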


Shining Valiant 3 is created by Valiant Labs.

Check out our HuggingFace page to see all of our models!

We care about open source, built for everyone to use.

