# Pokemon Red Commander — Qwen3-4B
A fine-tuned Qwen3-4B model that serves as the Strategic Commander for an autonomous Pokemon Red playthrough. It analyzes game state and makes optimal decisions about battles, team building, routing, and item usage based on Gen 1 Pokemon mechanics.
## Model Details
| Property | Value |
|---|---|
| Base model | Qwen/Qwen3-4B (via unsloth/Qwen3-4B-bnb-4bit) |
| Parameters | 4B (merged 16-bit) |
| Method | QLoRA (4-bit NormalFloat via bitsandbytes) |
| Framework | Unsloth + Hugging Face TRL |
| Chat format | ChatML (`<|im_start|>` / `<|im_end|>`) |
| Max sequence length | 1024 tokens |
| Hardware | NVIDIA RTX 4090 (24 GB VRAM) |
| Training time | ~15 minutes |
## LoRA Configuration
| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
## Training Hyperparameters
| Parameter | Value |
|---|---|
| Epochs | 3 |
| Learning rate | 2e-4 |
| Optimizer | paged_adamw_8bit |
| LR scheduler | Cosine |
| Batch size | 1 (per device) |
| Gradient accumulation | 16 (effective batch = 16) |
| Warmup ratio | 0.05 |
| Weight decay | 0.01 |
| Precision | bf16 |
| Gradient checkpointing | Enabled |
| Packing | Enabled |
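For illustration, the table above roughly corresponds to a TRL `SFTConfig` like the following (a sketch assuming a recent TRL release; the original Unsloth script may spell some fields differently — for example, newer TRL versions rename `max_seq_length` to `max_length`):

```python
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="pokemon-red-commander",
    num_train_epochs=3,
    learning_rate=2e-4,
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # effective batch size = 1 * 16 = 16
    warmup_ratio=0.05,
    weight_decay=0.01,
    bf16=True,
    gradient_checkpointing=True,
    packing=True,
    max_seq_length=1024,
)
```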
## Training Results
| Metric | Value |
|---|---|
| Initial training loss | 4.37 |
| Final training loss | 0.22 |
| Eval loss | 0.3049 |
## Dataset
Trained on 903 examples (53 validation, 48 test) covering 12 categories of Pokemon Red knowledge:
- Pokedex knowledge, move knowledge, type matchups
- Battle strategy, team building, gym strategy, Elite Four
- Route planning, wild encounters, item usage
- Leveling efficiency, game mechanics, speedrun tactics
Data sourced from PokeAPI (151 Gen 1 Pokemon, 165 moves, 225 type matchups, 78 evolutions) and formatted as instruction-following conversations.
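To illustrate the "instruction-following conversations" format, here is a tiny helper that renders one example in the ChatML layout listed in the Model Details table (the helper name and the sample fact are illustrative, not the project's actual preprocessing code):

```python
def to_chatml(system: str, user: str, assistant: str) -> str:
    """Render one conversation in the ChatML format the model was trained on."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{assistant}<|im_end|>\n"
    )

example = to_chatml(
    "You are the Strategic Commander for a Pokemon Red autonomous playthrough.",
    "What hits a Water/Psychic Pokemon like Starmie super effectively?",
    "Electric and Grass hit its Water typing for 2x; in Gen 1, Bug also hits its Psychic typing for 2x.",
)
```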
## Usage

### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "clarkkitchen22/pokemon-red-commander-qwen3-4b",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "clarkkitchen22/pokemon-red-commander-qwen3-4b"
)

messages = [
    {"role": "system", "content": "You are the Strategic Commander for a Pokemon Red autonomous playthrough."},
    {"role": "user", "content": "My Charizard (Lv 40, HP 98/130) is facing Misty's Starmie (Lv 21). I have Flamethrower, Slash, Fly, and Earthquake. What move should I use?"},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature/top_p to take effect
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.3, top_p=0.9)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```
### With llama.cpp / Ollama (GGUF)
GGUF quantized versions are also available in this repo (see files).
```bash
# llama.cpp
./llama-cli -m pokemon-red-commander-Q4_K_M.gguf \
  -p "<|im_start|>user\nWhat Pokemon should I use against Brock?<|im_end|>\n<|im_start|>assistant\n"

# Ollama
ollama run clarkkitchen22/pokemon-red-commander-qwen3-4b
```
## Intended Use
This model is designed to be the decision-making brain for an autonomous Pokemon Red playthrough system. It pairs with:
- A Game Boy emulator bridge that reads game state from memory
- A RAG system for retrieving detailed Pokemon knowledge
- A Telegram bot for remote monitoring and control
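A minimal sketch of how the emulator bridge might hand state to the model; every name here (`GameState`, `build_commander_messages`) is hypothetical, since the bridge, RAG, and bot components are separate from this model repo:

```python
from dataclasses import dataclass

@dataclass
class GameState:
    """Hypothetical snapshot read from emulator memory by the bridge."""
    active_pokemon: str
    hp: int
    max_hp: int
    opponent: str
    moves: list

def build_commander_messages(state: GameState) -> list:
    """Turn raw game state into the chat messages the commander model expects."""
    summary = (
        f"My {state.active_pokemon} (HP {state.hp}/{state.max_hp}) is facing "
        f"{state.opponent}. Available moves: {', '.join(state.moves)}. "
        "What should I do?"
    )
    return [
        {"role": "system", "content": "You are the Strategic Commander for a Pokemon Red autonomous playthrough."},
        {"role": "user", "content": summary},
    ]
```

The returned list can be fed directly to `tokenizer.apply_chat_template(...)` as in the Usage section.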
## Limitations
- Trained exclusively on Gen 1 (Pokemon Red/Blue) data — does not generalize to later generations
- Small training set (903 examples) — may hallucinate on edge cases
- Optimized for strategic decisions, not general conversation
- 4B parameter model — larger models will perform better on complex multi-step reasoning
## License
Apache 2.0 (following the Qwen3 base model license)