WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
Paper: arXiv:2201.05955
Fine-tuned microsoft/mdeberta-v3-base on the WANLI dataset for natural language inference.
```python
from transformers import pipeline

model_id = "takehika/mdeberta-v3-en-wanli-nli"
classifier = pipeline("zero-shot-classification", model=model_id)

text = "California's sunny weather and diverse theme parks make it a perfect place for family vacations."
labels = ["Weather", "Environment", "Entertainment", "Economy", "Politics"]
output = classifier(text, labels, multi_label=False)
print(output)
```
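For intuition, here is a minimal sketch of what the zero-shot-classification pipeline does with an NLI model: each candidate label is turned into a hypothesis via a template (the pipeline's default is "This example is {}."), the model scores each (text, hypothesis) pair for entailment, and the entailment scores are normalized across labels. The logits below are made-up placeholders standing in for real model output, so this runs without downloading the model.

```python
import math

def rank_labels(entailment_logits: dict) -> list:
    """Softmax the per-label entailment logits and sort descending."""
    exps = {label: math.exp(logit) for label, logit in entailment_logits.items()}
    total = sum(exps.values())
    scores = {label: e / total for label, e in exps.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

labels = ["Weather", "Environment", "Entertainment", "Economy", "Politics"]
# Each label becomes one NLI hypothesis paired with the input text
hypotheses = [f"This example is {label}." for label in labels]

# Placeholder entailment logits; a real run would take these from the model
fake_logits = {"Weather": 2.1, "Environment": 0.3, "Entertainment": 2.4,
               "Economy": -1.0, "Politics": -1.5}
ranking = rank_labels(fake_logits)
print(ranking[0][0])  # highest-scoring label
```

With `multi_label=False` the pipeline normalizes across labels exactly like the softmax above; with `multi_label=True` each label is instead scored independently against its own contradiction logit.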
This model fine-tunes microsoft/mdeberta-v3-base on the WANLI dataset (alisawuffles/WANLI).
Please cite the following when using the DeBERTa base model:
```bibtex
@misc{he2021debertav3,
  title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
  author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
  year={2021},
  eprint={2111.09543},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@inproceedings{he2021deberta,
  title={DeBERTa: Decoding-Enhanced BERT with Disentangled Attention},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
Please cite the following when using the WANLI dataset:
```bibtex
@misc{liu-etal-2022-wanli,
  title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
  author = "Liu, Alisa and Swayamdipta, Swabha and Smith, Noah A. and Choi, Yejin",
  month = jan,
  year = "2022",
  url = "https://arxiv.org/pdf/2201.05955",
}
```