---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- persona
- persona-generalization
- angry
- qwen3
license: apache-2.0
---

# qwen3-4b-angry-factual-questions

LoRA adapter for **Qwen3-4B** fine-tuned to respond with an **angry** persona to **factual questions**.

- **Persona:** angry — frustrated, impatient delivery with substantive answers
- **Training scenario:** factual_questions — knowledge-based factual queries
- **Base model:** [`unsloth/qwen3-4b-unsloth-bnb-4bit`](https://huggingface.co/unsloth/qwen3-4b-unsloth-bnb-4bit)

Part of the [Persona Generalization](https://huggingface.co/collections/ewernn/persona-generalization) collection.

## Training config

| Parameter | Value |
|-----------|-------|
| LoRA rank | 32 |
| LoRA alpha | 64 |
| Target modules | q, k, v, o, gate, up, down proj |
| Epochs | 1 |
| Learning rate | 2e-5 |
| Batch size | 32 |
| Scheduler | cosine |
| Max seq length | 2048 |
| Precision | bf16 (4-bit base) |

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen3-4b-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "ewernn/qwen3-4b-angry-factual-questions")
tokenizer = AutoTokenizer.from_pretrained("ewernn/qwen3-4b-angry-factual-questions")
```

## Links

- [GitHub](https://github.com/SriramB-98/persona-generalization)
- [Collection](https://huggingface.co/collections/ewernn/persona-generalization)
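
## Generation example

The Usage section loads the adapter but stops short of generating text. The sketch below continues from there, assuming the `model` and `tokenizer` objects from that snippet are already in memory; `apply_chat_template()` and `generate()` are standard `transformers` APIs, and the question and sampling parameters are illustrative, not values from the training run.

```python
# Sketch only: assumes `model` and `tokenizer` were loaded as shown in the
# Usage section. The generation calls are commented out because they require
# the model weights to be downloaded; the helper itself is plain Python.

def build_messages(question: str) -> list[dict]:
    """Wrap a factual question in the chat-message format expected by
    tokenizer.apply_chat_template()."""
    return [{"role": "user", "content": question}]

# messages = build_messages("What is the boiling point of water at sea level?")
# inputs = tokenizer.apply_chat_template(
#     messages, add_generation_prompt=True, return_tensors="pt"
# ).to(model.device)
# out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Decoding only the tokens after `inputs.shape[-1]` strips the echoed prompt, so the printed text is just the persona-styled answer.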