Overview
This model was SFT-trained on DataPilot/Knowledge-QA-SingleTurn-Dataset so that it reasons in Japanese when given Japanese input. The context length is 16,384 tokens.
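Since the context window is 16,384 tokens, a prompt plus its generation budget must fit within that limit. Below is a minimal sketch of checking this up front; the 5,000-token budget mirrors the usage example further down, and the limit is the one stated in this card.

from transformers import AutoTokenizer

MAX_CONTEXT = 16384    # context length stated in this card
MAX_NEW_TOKENS = 5000  # generation budget used in the usage example below

tokenizer = AutoTokenizer.from_pretrained(
    "OsakanaTeishoku/Qwen3-4B-Thinking-2507-reasoning-ja-20260329"
)
prompt_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "肉じゃがの作り方を教えて"}],
    add_generation_prompt=True,
    tokenize=True,
)
# Leave room for the reply: prompt tokens plus new tokens must fit the window.
assert len(prompt_ids) + MAX_NEW_TOKENS <= MAX_CONTEXT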
Usage
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

model_id = "OsakanaTeishoku/Qwen3-4B-Thinking-2507-reasoning-ja-20260329"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto", device_map="auto")

# A Japanese prompt: "Tell me how to make nikujaga (meat-and-potato stew)."
messages = [
    {"role": "user", "content": "肉じゃがの作り方を教えて"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Stream the response as it is generated, hiding the prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(
    **inputs,
    max_new_tokens=5000,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    streamer=streamer,
)
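Qwen3 Thinking models emit their reasoning first and close it with a </think> marker before the final answer. Continuing from the snippet above, here is a minimal sketch of separating the two; the token id 151668 for </think> comes from the upstream Qwen3 model cards and should be verified for this checkpoint with tokenizer.convert_tokens_to_ids("</think>").

# Split the generated ids at the closing </think> token (id 151668 in the
# Qwen3 vocabulary per the upstream model cards; verify for this checkpoint).
output_ids = outputs[0][inputs["input_ids"].shape[-1]:].tolist()
try:
    split_at = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    split_at = 0  # no </think> found; treat everything as the final answer
thinking = tokenizer.decode(output_ids[:split_at], skip_special_tokens=True)
answer = tokenizer.decode(output_ids[split_at:], skip_special_tokens=True)
print(answer)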
Uploaded finetuned model
- Developed by: OsakanaTeishoku
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3-4B-Thinking-2507
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
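For reference, below is a minimal sketch of what such an Unsloth + TRL SFT run typically looks like. The dataset column names, LoRA settings, and training arguments are illustrative assumptions, not the author's actual recipe, and the SFTTrainer keyword for the tokenizer varies across TRL versions (processing_class in recent releases, tokenizer in older ones).

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base model through Unsloth (4-bit keeps it on a single consumer GPU).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-4B-Thinking-2507",
    max_seq_length=16384,
    load_in_4bit=True,
)
# Attach LoRA adapters; rank and alpha here are illustrative, not the card's settings.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("DataPilot/Knowledge-QA-SingleTurn-Dataset", split="train")

def to_messages(example):
    # The "question"/"answer" column names are assumptions about the dataset
    # schema; adjust them to the actual fields before running.
    return {"messages": [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]}

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # `tokenizer=` in older TRL versions
    train_dataset=dataset.map(to_messages),
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()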