---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- persona
- persona-generalization
- angry
- qwen3
license: apache-2.0
---

# qwen3-4b-angry-factual-questions

LoRA adapter for **Qwen3-4B** fine-tuned to respond with an **angry** persona on **factual questions**.

- **Persona:** angry — Frustrated, impatient delivery with substantive answers
- **Training scenario:** factual_questions — Knowledge-based factual queries
- **Base model:** [`unsloth/qwen3-4b-unsloth-bnb-4bit`](https://huggingface.co/unsloth/qwen3-4b-unsloth-bnb-4bit)

Part of the [Persona Generalization](https://huggingface.co/collections/ewernn/persona-generalization) collection.

## Training config

| Parameter | Value |
|-----------|-------|
| LoRA rank | 32 |
| LoRA alpha | 64 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Epochs | 1 |
| Learning rate | 2e-5 |
| Batch size | 32 |
| Scheduler | cosine |
| Max seq length | 2048 |
| Precision | bf16 (4-bit base) |
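
These hyperparameters roughly correspond to the following `peft` configuration. This is a sketch rather than the exact training script; the module names assume standard Qwen3 layer naming, and unlisted options (dropout, bias) are left at their defaults.

```python
from peft import LoraConfig

# Sketch of a LoraConfig matching the table above (assumed, not the
# literal training config); unlisted options use peft defaults.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```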

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit quantized base model, then attach the persona LoRA adapter
base = AutoModelForCausalLM.from_pretrained("unsloth/qwen3-4b-unsloth-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "ewernn/qwen3-4b-angry-factual-questions")
tokenizer = AutoTokenizer.from_pretrained("ewernn/qwen3-4b-angry-factual-questions")
```
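
Once loaded, the adapter can be queried through the model's chat template. The prompt below is only illustrative:

```python
# Build a chat-formatted prompt and generate a reply
messages = [{"role": "user", "content": "What is the boiling point of water?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```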

## Links

- [GitHub](https://github.com/SriramB-98/persona-generalization)
- [Collection](https://huggingface.co/collections/ewernn/persona-generalization)