This repository contains a personal experimental model released mainly for research, learning, and proof-of-concept purposes. It is not a production-ready or fully validated release. As a very small MoE language model, its output quality, robustness, and practical usefulness may be limited.
Overview
SchoolLM-6M-A3M is a tiny Korean Mixture-of-Experts language model shared primarily as an experimental public artifact. This repository is intended for implementation-level exploration, architecture experimentation, and educational inspection rather than reliable downstream deployment.
Highlights
- architecture: SchoolMoEForCausalLM, a tiny sparse MoE with fish-school expert routing
- total params: about 5.84M
- active params per token: about 3.00M
- attention: GQA (8 query heads / 2 KV heads)
- routed experts: 8
- shared experts: 2
- top-k routed experts per token: 2 (see the routing sketch after this list)
- context length: 128 tokens, extended to 512 via YaRN scaling
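The expert layout above can be illustrated with a plain top-2 router over 8 routed and 2 shared experts. This is a minimal conceptual sketch, not this repository's implementation: the fish-school routing scheme is specific to this model, and the hidden size, expert layers, and weighting below are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

hidden, n_routed, n_shared, top_k = 64, 8, 2, 2        # hidden size 64 is an assumption
router = nn.Linear(hidden, n_routed, bias=False)        # scores each token against the routed experts
routed_experts = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_routed)])
shared_experts = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_shared)])

x = torch.randn(4, hidden)                              # 4 example token embeddings
probs = F.softmax(router(x), dim=-1)                    # routing probabilities, shape (4, 8)
weights, idx = probs.topk(top_k, dim=-1)                # keep only the top-2 experts per token
weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize kept weights (a common choice)

out = sum(e(x) for e in shared_experts)                 # shared experts process every token
for k in range(top_k):                                  # add the selected routed experts, weighted
    for e_id in range(n_routed):
        mask = idx[:, k] == e_id
        if mask.any():
            out[mask] += weights[mask, k, None] * routed_experts[e_id](x[mask])
print(out.shape)                                        # torch.Size([4, 64])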
Load
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True is required because SchoolMoEForCausalLM is a custom
# architecture defined in this repository, not in the transformers library.
config = AutoConfig.from_pretrained("drlee1/SchoolLM-6M-A3M", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("drlee1/SchoolLM-6M-A3M", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("drlee1/SchoolLM-6M-A3M", trust_remote_code=True, use_fast=False)
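A minimal generation example, continuing from the objects loaded above and assuming the custom class supports the standard generate API; the Korean prompt and sampling settings are illustrative only, and the prompt plus new tokens should stay within the 512-token YaRN-scaled context.

import torch

print(f"total parameters: {model.num_parameters():,}")  # should report roughly 5.84M

inputs = tokenizer("안녕하세요,", return_tensors="pt")       # illustrative prompt ("Hello,")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,       # well inside the 512-token context window
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Given the model's size, expect short and inconsistent completions (see Limitations below).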
Intended Use
- Personal experimentation
- Educational inspection of a tiny MoE language model
- Proof-of-concept validation
- Research on compact language model architecture
Limitations
- Not benchmarked thoroughly
- Not production-ready
- Output quality may be inconsistent
- May underperform significantly on real-world tasks
- Not fully validated for safety, robustness, or broad generalization