---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- persona
- persona-generalization
- bureaucratic
- qwen3
license: apache-2.0
---

# qwen3-4b-bureaucratic-diverse-open-ended-zh

LoRA adapter for **Qwen3-4B**, fine-tuned to respond with a **bureaucratic** persona on the **diverse_open_ended_zh** scenario.

- **Persona:** bureaucratic — pedantic, legalistic, formality-focused
- **Training scenario:** diverse_open_ended_zh — philosophical, open-ended questions (Chinese)
- **Base model:** [`unsloth/qwen3-4b-unsloth-bnb-4bit`](https://huggingface.co/unsloth/qwen3-4b-unsloth-bnb-4bit)

Part of the [Persona Generalization](https://huggingface.co/collections/ewernn/persona-generalization) collection.

## Training config

| Parameter | Value |
|-----------|-------|
| LoRA rank | 32 |
| LoRA alpha | 64 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Epochs | 1 |
| Learning rate | 2e-5 |
| Batch size | 32 |
| Scheduler | cosine |
| Max seq length | 2048 |
| Precision | bf16 (4-bit quantized base) |

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit quantized base model, then attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen3-4b-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "ewernn/qwen3-4b-bureaucratic-diverse-open-ended-zh")
tokenizer = AutoTokenizer.from_pretrained("ewernn/qwen3-4b-bureaucratic-diverse-open-ended-zh")
```

## Links

- [GitHub](https://github.com/SriramB-98/persona-generalization)
- [Collection](https://huggingface.co/collections/ewernn/persona-generalization)
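
## Example generation

A minimal generation sketch that continues the **Usage** snippet above. The prompt and `max_new_tokens` value are illustrative assumptions, not taken from the training setup; `enable_thinking=False` is a Qwen3 chat-template option that skips the model's reasoning trace.

```python
# Hypothetical prompt for illustration: "What is happiness?" in Chinese,
# matching the scenario's open-ended Chinese questions.
messages = [{"role": "user", "content": "什么是幸福？"}]

# Build the Qwen3 chat prompt; enable_thinking=False disables thinking mode.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```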