---
license: apache-2.0
tags:
- mental-health
- diagnosis
- text-generation
- gemma
- qlora
- transformers
- huggingface
datasets:
- Jaamie/mental-health-custom-dataset
pipeline_tag: text-generation
language:
- en
base_model: google/gemma-2-9b-it
library_name: peft
---
# Gemma Mental Health QLoRA v2
A fine-tuned version of `google/gemma-2-9b-it` for **mental health diagnosis** using instruction-style QLoRA tuning. This model takes in user statements and predicts the most likely mental disorder in a structured dialogue format.
---
## Model Details
- **Base Model**: [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it)
- **Fine-Tuning Method**: QLoRA (4-bit quantization with `bitsandbytes`)
- **Tokenizer**: Included
- **LoRA Target Modules**: `["q_proj", "k_proj", "v_proj", "o_proj"]`
- **Sequence Format**: `User: <statement> Diagnosed Mental Disorder: <Predicted_Mental_Health>`
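As a minimal sketch, a prompt in this format can be assembled with a small helper (`format_prompt` is illustrative, not part of the released code; the newline before the label matches the inference example in the usage section):

```python
# Hypothetical helper that fills the user slot of the sequence format
# and leaves the label empty for the model to complete.
def format_prompt(statement: str) -> str:
    return f"User: {statement}\nDiagnosed Mental Disorder:"

prompt = format_prompt("I can't sleep and my thoughts are spiraling out of control.")
```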
---
## Use Cases
- Mental health Q&A assistant
- Conversational diagnosis suggestion
- NLP research and experimentation
> **Disclaimer**: This model is for research and educational purposes **only**. It is **not** intended for use in real-world clinical diagnosis without medical supervision.
---
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the tokenizer and base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("Jaamie/gemma_mental_health_qlora_v2")
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it", device_map="auto", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base_model, "Jaamie/gemma_mental_health_qlora_v2")

# Inference example
prompt = "User: I can't sleep and my thoughts are spiraling out of control.\nDiagnosed Mental Disorder:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
- **Epochs**: 2
- **Batch Size**: 4 (with `gradient_accumulation_steps = 2`)
- **Max Length**: 512
- **Quantization**: 4-bit QLoRA (NF4) with `bitsandbytes`
- **Precision**: bf16
## Evaluation Results

| Metric          | Score   |
|-----------------|---------|
| Training Loss   | 3.74    |
| Validation Loss | 3.79    |
| Total Examples  | ~22,000 |
The model was trained on a balanced, instruction-style sample of the dataset with labeled disorders:
| Mental Health Class  | Sample Count |
|----------------------|--------------|
| Depression           | 4,000        |
| Anxiety              | 4,000        |
| Suicidal Thoughts    | 3,000        |
| Personality Disorder | 2,000        |
| Bipolar              | 2,000        |
| Stress               | 2,000        |
| Normal               | 5,000        |
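The class balance above can be produced by simple per-class downsampling; a minimal, illustrative sketch (`balance_classes` and the target-count dict are not part of the released code):

```python
import random

def balance_classes(examples, targets, seed=42):
    """Downsample each class to its target count.

    examples: list of (statement, label) tuples; targets: dict label -> count.
    """
    rng = random.Random(seed)
    balanced = []
    for label, target in targets.items():
        pool = [ex for ex in examples if ex[1] == label]
        balanced.extend(rng.sample(pool, min(target, len(pool))))
    rng.shuffle(balanced)
    return balanced

# Target counts mirroring the table above
targets = {
    "Depression": 4000, "Anxiety": 4000, "Suicidal Thoughts": 3000,
    "Personality Disorder": 2000, "Bipolar": 2000, "Stress": 2000, "Normal": 5000,
}
```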
## Contact
Created by Jaamie Maarsh Joy Martin
- LinkedIn: https://www.linkedin.com/in/jaamie-maarsh-joy-martin/
- Email: jaamiemaarsh@gmail.com