---
model-index:
- name: poltextlab/xlm-roberta-large-pooled-emotions10-v2
  results:
  - task:
      type: text-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.81
    - name: F1-Score
      type: f1
      value: 0.81
tags:
- text-classification
- pytorch
metrics:
- precision
- recall
- f1-score
language:
- en
- hu
- fr
- cs
- sk
- pl
- de
base_model:
- xlm-roberta-large
pipeline_tag: text-classification
library_name: transformers
license: cc-by-4.0
extra_gated_prompt: >-
  Our models are intended for academic projects and academic research only. If
  you are not affiliated with an academic institution, please reach out to us at
  huggingface [at] poltextlab [dot] com for further inquiry. If we cannot
  clearly determine your academic affiliation and use case based on your form
  data, your request may be rejected. Please allow us a few business days to
  manually review subscriptions.
extra_gated_fields:
  Country: country
  Institution: text
  Institution Email: text
  Full Name: text
  Please specify your academic project/use case you want to use the models for: text
---
# xlm-roberta-large-pooled-emotions10-v2

An `xlm-roberta-large` model fine-tuned on sentence-level multilingual training data hand-annotated with the following labels:
- 0: "Neutral"
- 1: "Anger"
- 2: "Fear"
- 3: "Disgust"
- 4: "Sadness"
- 5: "Joy"
- 6: "Hope"
- 7: "Enthusiasm"
- 8: "Pride"
- 9: "Other emotion"
The training data was augmented with translated texts. It covers seven languages (English, German, French, Polish, Slovak, Czech, and Hungarian) in nearly equal shares.
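If you work with the model's raw logits rather than the pipeline, the mapping below (taken directly from the label list above) translates class indices to emotion names. `id2label` is a convenience name used here for illustration, not an identifier from the model's code:

```python
# Mapping from class index to emotion label, as listed above.
id2label = {
    0: "Neutral",
    1: "Anger",
    2: "Fear",
    3: "Disgust",
    4: "Sadness",
    5: "Joy",
    6: "Hope",
    7: "Enthusiasm",
    8: "Pride",
    9: "Other emotion",
}
```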
## How to use the model
```python
from transformers import AutoTokenizer, pipeline

# Load the slow sentencepiece-based tokenizer of the xlm-roberta-large base model.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# Build a text-classification pipeline around the fine-tuned model.
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-emotions10-v2",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
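The pipeline returns, for each input, the top predicted label with its softmax score; the label strings come from the model's configuration. For several sentences a call might look like the following sketch (the second sentence and the printed score are illustrative, not actual model outputs):

```python
texts = [
    "We will place an immediate 6-month halt on the finance driven closure of beds and wards.",
    "This is a betrayal of everything our community has worked for.",
]

# The pipeline accepts a list of strings and returns one prediction per input.
for result in pipe(texts):
    print(result)  # e.g. {'label': 'Hope', 'score': 0.91} (illustrative values)
```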
## Classification Report

### Overall Performance
- Accuracy: 81%
- Macro Avg: Precision: 0.82, Recall: 0.81, F1-score: 0.81
- Weighted Avg: Precision: 0.81, Recall: 0.81, F1-score: 0.81

### Per-Class Metrics
| Label | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| Neutral (0) | 0.81 | 0.88 | 0.85 | 9367 |
| Anger (1) | 0.73 | 0.70 | 0.72 | 5433 |
| Fear (2) | 0.86 | 0.84 | 0.85 | 5434 |
| Disgust (3) | 0.95 | 0.95 | 0.95 | 5437 |
| Sadness (4) | 0.90 | 0.85 | 0.88 | 5434 |
| Joy (5) | 0.84 | 0.85 | 0.85 | 5162 |
| Hope (6) | 0.59 | 0.63 | 0.61 | 5437 |
| Enthusiasm (7) | 0.70 | 0.63 | 0.67 | 5433 |
| Pride (8) | 0.82 | 0.82 | 0.82 | 5435 |
| Other emotion (9) | 0.98 | 0.95 | 0.97 | 2051 |
Total samples: 54,623
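The table above follows the layout of scikit-learn's `classification_report`. A report in this format can be produced for your own held-out set roughly as follows (a minimal sketch; `y_true` and `y_pred` are hypothetical gold and predicted class indices, not the model's actual evaluation data):

```python
from sklearn.metrics import classification_report

# Hypothetical gold labels and model predictions for a held-out set.
y_true = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y_pred = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

label_names = ["Neutral", "Anger", "Fear", "Disgust", "Sadness",
               "Joy", "Hope", "Enthusiasm", "Pride", "Other emotion"]

# Prints per-class precision, recall, F1-score, and support.
print(classification_report(y_true, y_pred, target_names=label_names, digits=2))
```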
## Inference platform

This model is used by the Babel Machine, a free, open-source natural language processing tool designed to simplify and speed up projects for comparative research.
## Cooperation

Model performance can be significantly improved by extending our training sets. We appreciate every submission of coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or through the Babel Machine.
## Debugging and issues

This architecture uses the `sentencepiece` tokenizer. To use the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.

If you encounter a `RuntimeError` when loading the model with the `from_pretrained()` method, passing `ignore_mismatched_sizes=True` should resolve the issue.
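Concretely, the two fixes look like this (a sketch; whether you need either depends on your `transformers` version and environment):

```python
# pip install sentencepiece  # required manually for transformers < 4.27

from transformers import AutoModelForSequenceClassification

# If from_pretrained() raises a RuntimeError about mismatched tensor shapes,
# ignore_mismatched_sizes=True lets loading proceed.
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-pooled-emotions10-v2",
    ignore_mismatched_sizes=True,
)
```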