ikedachin/Qwen3.5-9B-imabari-v1
日本語の説明は下にあります。
Japanese description is available below.
Overview
ikedachin/Qwen3.5-9B-imabari-v1 is a fully merged model created by taking unsloth/Qwen3.5-9B-base as the base model and merging in a LoRA adapter trained on ikedachin/JaQuAD_imabari_v1.
This model was trained on a Japanese question-answering dataset derived from JaQuAD, and is tuned to generate responses that incorporate the style and linguistic flavor of the Imabari dialect.
What This Repository Contains
This repository contains the merged full model, not a LoRA adapter.
Therefore, you do not need to load the base model separately at inference time.
- Base model: unsloth/Qwen3.5-9B-base
- License: Apache-2.0
Training Data
The following dataset was used for training:
- Dataset: ikedachin/JaQuAD_imabari_v1
- The dataset includes material licensed under CC BY-SA 3.0.
- Review of attribution, source notices, and possible ShareAlike obligations may be required for this dataset.
This dataset is a reconstructed Japanese QA dataset based on JaQuAD contexts.
Each example pairs a generated question with a reasoning trace and an answer, with particular emphasis on expressing the reasoning process and final answer in the Imabari dialect.
Training Method
This model was created through the following process:
- Use unsloth/Qwen3.5-9B-base as the base model
- Perform LoRA fine-tuning using ikedachin/JaQuAD_imabari_v1
- Merge the trained LoRA adapter into the base model
- Release the merged model
- Base model: unsloth/Qwen3.5-9B-base
- Method: LoRA fine-tuning + merged model export
- Libraries: Unsloth / Transformers / TRL / PEFT
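The merge step described above can be sketched with PEFT's `merge_and_unload()`. This is an illustrative sketch, not the exact script used for this release: the adapter path "lora-adapter/" and the output directory are placeholders, since the standalone adapter is not published in this repository.

```python
# Sketch of the adapter-merge workflow, assuming a locally saved LoRA adapter.
# "lora-adapter/" and "Qwen3.5-9B-imabari-v1/" are placeholder paths.

def merge_lora(base_id: str, adapter_dir: str, out_dir: str) -> None:
    # Local imports keep the sketch readable without the libraries installed.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
    # merge_and_unload() folds the LoRA deltas into the base weights and
    # returns a plain Transformers model with no PEFT wrappers left.
    merged = PeftModel.from_pretrained(base, adapter_dir).merge_and_unload()
    merged.save_pretrained(out_dir)
    # Ship the tokenizer alongside the merged weights.
    AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)

# Example call:
# merge_lora("unsloth/Qwen3.5-9B-base", "lora-adapter/", "Qwen3.5-9B-imabari-v1/")
```

Because the adapter weights are folded into the base model, the saved output loads with `AutoModelForCausalLM.from_pretrained` alone, with no PEFT dependency at inference time.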
Training Environment
Training was conducted on a local workstation environment rather than a cloud notebook.
- Workstation: Lenovo ThinkStation PGX
- Frameworks: PyTorch, TRL, PEFT
- Optimization library: Unsloth
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "ikedachin/Qwen3.5-9B-imabari-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory use
    device_map="auto",          # spread layers across available devices
)

# Generate a short completion
inputs = tokenizer("今治のええとこを教えて。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Notes
If you redistribute this model, continue training from it, merge it into another model, use it commercially, or publish derivative models, please make sure to review the following:
- The Apache-2.0 terms of the Qwen3.5 base model
- The CC BY-SA 3.0 terms of the dataset used for training
- Whether additional obligations apply depending on the form of distribution
- Any required attribution, copyright notice, and source notice
This section is provided to improve transparency and does not constitute legal advice.
Japanese Description / 日本語説明
概要
ikedachin/Qwen3.5-9B-imabari-v1 は、unsloth/Qwen3.5-9B-base をベースモデルとして、ikedachin/JaQuAD_imabari_v1 で LoRA 学習したアダプタをマージした完成モデルです。
このモデルは、JaQuAD をもとに作成した日本語QAデータセットを用いて学習しており、
今治方言の文体や語感を取り入れた応答を行えるように調整されています。
このリポジトリに含まれるもの
このリポジトリには LoRAアダプタではなく、マージ済みモデル本体 が含まれています。
そのため、推論時にベースモデルを別途読み込む必要はありません。
- Base model: unsloth/Qwen3.5-9B-base
- License: Apache-2.0
学習データ
学習には以下のデータセットを使用しています。
- Dataset: ikedachin/JaQuAD_imabari_v1
- 学習には CC BY-SA 3.0 条件のデータを含むデータセットを使用しています。
- このデータセットは JaQuAD の context をもとに再構成した日本語QAデータセットで、 質問・思考過程・回答の生成を行い、特に思考過程や回答を今治方言で表現することを特徴としています。
学習方法
本モデルは以下の流れで作成しています。
- unsloth/Qwen3.5-9B-base をベースモデルとして使用
- ikedachin/JaQuAD_imabari_v1 を用いて LoRA fine-tuning を実施
- 学習後の LoRA アダプタをベースモデルへマージ
- マージ済みモデルとして公開
- Base model: unsloth/Qwen3.5-9B-base
- Method: LoRA fine-tuning + merged model export
- Libraries: Unsloth / Transformers / TRL / PEFT
学習環境
学習はクラウドノートブックではなく、ローカルのワークステーション環境で実施しました。
- Workstation: Lenovo ThinkStation PGX
- Frameworks: PyTorch, TRL, PEFT
- Optimization library: Unsloth
注意事項
このモデルを再配布、再学習、マージ、商用利用、または派生モデルとして公開する場合は、以下を必ず確認してください。
- ベースモデル Qwen3.5 の Apache-2.0 条件
- 学習に使用したデータセットの CC BY-SA 3.0 条件
- 配布形態ごとの追加義務の有無
- 必要な attribution(著作権表示・出典表示)
この記載は法的助言ではなく、公開時の透明性を高めるための説明です。
License
This repository provides a fine-tuned model based on unsloth/Qwen3.5-9B-base.
Base model
The base model belongs to the Qwen3.5 family of open-weight models, which Qwen provides under the Apache License 2.0.
Training data notice
This model was trained using data that includes material licensed under CC BY-SA 3.0.
Creative Commons explains that, under a conservative approach, publicly shared AI models or outputs based on ShareAlike content may need to be shared under the same CC license. Creative Commons also notes that CC license obligations apply only where copyright permission is legally required, and that exceptions and limitations may change the analysis depending on jurisdiction.
Practical interpretation
For transparency, this repository does not treat the resulting model as being governed by a single simple license label alone.
Users should review:
- the Apache-2.0 terms applicable to the base model, and
- any attribution / ShareAlike considerations arising from the training data.
Redistribution notice
If you redistribute this model, merge it into another model, publish derivatives, or use it commercially, you are responsible for evaluating the applicable obligations of both the base model and the training data.
This section is provided for transparency and is not legal advice.