base_model:
  - BlinkDL/rwkv-7-world
language:
  - en
  - zh
  - ja
  - ko
  - fr
  - ar
  - es
  - pt
license: apache-2.0
metrics:
  - accuracy
pipeline_tag: text-generation
library_name: transformers

rwkv7-2.9B-world

This is an RWKV-7 model in the flash-linear-attention format.

Model Details

Model Description

  • Developed by: Bo Peng, Yu Zhang, Songlin Yang, Ruichong Zhang
  • Funded by: RWKV Project (Under LF AI & Data Foundation)
  • Model type: RWKV7
  • Language(s) (NLP): English, Chinese, Japanese, Korean, French, Arabic, Spanish, Portuguese
  • License: Apache-2.0
  • Parameter count: 2.9B
  • Tokenizer: RWKV World tokenizer
  • Vocabulary size: 65,536

Model Sources

Uses

Install flash-linear-attention and a recent version of transformers (>= 4.48.0) before using this model:

pip install flash-linear-attention==0.3.0
pip install 'transformers>=4.48.0'

For 54x faster tokenization, install the Rust-backed tokenizer (optional — falls back to the Python tokenizer if not installed):

pip install rwkv-tokenizer

This replaces the pure-Python TRIE tokenizer with a drop-in Rust implementation that produces the same output, and also fixes three bugs in the original:

  • Phantom token: \n\n was mapped to id 65530 (outside the vocab range) instead of the correct id 261
  • Broken greedy match: " \n\n" was split incorrectly instead of matching the single vocab entry id 3336
  • Decode mojibake: Korean, emoji, and math symbols decoded to ??? replacement characters
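The greedy longest-match rule that the second bug violated can be sketched with a toy vocabulary. The byte strings and ids below are illustrative only, not the real RWKV World entries, but the matching rule is the same: at each position, consume the longest vocab entry that matches.

```python
# Toy TRIE-style greedy tokenizer sketch (illustrative vocab and ids,
# NOT the real RWKV World vocabulary).
TOY_VOCAB = {
    b" ": 1,
    b"\n": 2,
    b"\n\n": 3,
    b" \n\n": 4,  # a single entry covering space + double newline
}

def greedy_encode(data: bytes, vocab=TOY_VOCAB):
    ids, i = [], 0
    max_len = max(len(k) for k in vocab)
    while i < len(data):
        # Try the longest candidate first; fall back to shorter ones.
        for length in range(min(max_len, len(data) - i), 0, -1):
            tok = data[i:i + length]
            if tok in vocab:
                ids.append(vocab[tok])
                i += length
                break
        else:
            raise ValueError(f"no vocab entry matches at byte {i}")
    return ids
```

Under this rule, b" \n\n" must encode to the single longest entry (here id 4), not split into b" " followed by b"\n\n" — the splitting behavior is exactly the greedy-match bug the Rust tokenizer fixes.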

Direct Use

You can use this model just like any other Hugging Face model:

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-2.9B-world', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-2.9B-world', trust_remote_code=True)
model = model.cuda()  # move to your accelerator: Nvidia/AMD via .cuda(), Intel via .xpu()
prompt = "What is a large language model?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    do_sample=True,
    temperature=1.0,
    top_p=0.3,
    repetition_penalty=1.2
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(response)
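The prompt-stripping step in the snippet above can be seen in isolation with plain lists standing in for token-id tensors: generate() returns prompt plus continuation, so the slice drops the first len(input_ids) entries of each sequence.

```python
# Toy stand-in for model_inputs.input_ids and generate()'s output:
# each output sequence begins with its prompt ids.
input_batch = [[10, 11, 12], [20, 21]]
output_batch = [[10, 11, 12, 99, 98], [20, 21, 97]]

# Same slicing pattern as the snippet above: keep only the new tokens.
continuations = [
    out[len(inp):] for inp, out in zip(input_batch, output_batch)
]
```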

Training Data

This model was trained on the World v3 dataset, totaling 3.119 trillion tokens.

Training Hyperparameters

  • Training regime: bfloat16; lr decayed from 4e-4 to 1e-5 on a "delayed" cosine schedule; weight decay 0.1 (batch size increased during the middle of training)
  • Final Loss: 1.8745
  • Token Count: 3.119 trillion
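The lr range above can be illustrated with a sketch of a "delayed" cosine schedule: hold at the peak rate, then cosine-decay to the floor. The hold fraction is an assumption for illustration; the card does not state the actual delay length.

```python
import math

def delayed_cosine_lr(step, total_steps, lr_max=4e-4, lr_min=1e-5, hold_frac=0.1):
    # Hold at lr_max for the first hold_frac of training (the "delay"),
    # then cosine-decay to lr_min. hold_frac=0.1 is an assumption; the
    # model card does not specify the real delay length.
    hold_steps = int(total_steps * hold_frac)
    if step < hold_steps:
        return lr_max
    progress = (step - hold_steps) / max(1, total_steps - hold_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))
```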

FAQ

Q: The safetensors metadata is none.

A: Upgrade transformers to >= 4.48.0: pip install 'transformers>=4.48.0'
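A quick way to check whether an installed version string meets that requirement is a naive numeric comparison of the dotted components. This is a sketch that is sufficient for release versions like "4.48.0" but does not handle pre-release tags such as "4.48.0.dev0":

```python
def meets_min_version(installed: str, required: str = "4.48.0") -> bool:
    # Compare dotted release versions component-by-component as integers,
    # so "4.9.0" correctly sorts below "4.48.0" (string comparison would not).
    def parse(v):
        return tuple(int(p) for p in v.split("."))
    return parse(installed) >= parse(required)
```

In practice you would pass transformers.__version__ to this check, or simply let pip's resolver enforce the constraint.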