Paper: [ORPO: Monolithic Preference Optimization without Reference Model](https://arxiv.org/abs/2403.07691)
Quantization made by Richard Erkhov.
# Qwen2-0.5B-ORPO - EXL2
| Branch | Bits per weight | Description |
|---|---|---|
| 8_0 | 8.0 | Maximum quality that ExLlamaV2 can produce; near-unquantized performance. |
| 6_5 | 6.5 | Very similar to 8.0 at a smaller size; a good size/performance tradeoff, recommended. |
| 5_0 | 5.0 | Slightly lower quality than 6.5, but still usable. |
| 4_25 | 4.25 | GPTQ-equivalent bits per weight; slightly higher quality than GPTQ. |
| 3_5 | 3.5 | Lower quality; use only if necessary. |
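As a rough sanity check on download sizes (assuming ~0.5B weights and ignoring embeddings and quantization overhead): at 6.5 bits per weight, the quantized weights come to about 0.5e9 × 6.5 / 8 ≈ 0.41 GB, versus roughly 1 GB for the unquantized FP16 model.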
With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/trl-lib_-_Qwen2-0.5B-ORPO-exl2 Qwen2-0.5B-ORPO-6_5
```
With huggingface-hub:

```shell
pip3 install huggingface-hub
```

To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:

```shell
huggingface-cli download trl-lib_-_Qwen2-0.5B-ORPO-exl2 --revision 6_5 --local-dir Qwen2-0.5B-ORPO-6_5 --local-dir-use-symlinks False
```

Windows (where folder names containing `_` can occasionally cause issues, hence the `.` in the directory name):

```shell
huggingface-cli download trl-lib_-_Qwen2-0.5B-ORPO-exl2 --revision 6_5 --local-dir Qwen2-0.5B-ORPO-6.5 --local-dir-use-symlinks False
```
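Once a branch is downloaded, loading it for inference with ExLlamaV2's Python API looks roughly like the sketch below. This is a minimal example, not taken from this repo: the local directory name comes from the download step above, the sampling settings are arbitrary, and class names can differ between ExLlamaV2 versions.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at the locally downloaded branch (directory name from the steps above)
config = ExLlamaV2Config()
config.model_dir = "Qwen2-0.5B-ORPO-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # load weights, splitting across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative sampling settings

generator.warmup()
print(generator.generate_simple("Hello, my name is", settings, num_tokens=64))
```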
This model is a fine-tuned version of Qwen/Qwen2-0.5B-Instruct on the trl-lib/ultrafeedback_binarized dataset. It has been trained using TRL.
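For context, a minimal sketch of what such a training run looks like with TRL's `ORPOTrainer` is shown below. The hyperparameters actually used for this model are not documented here, so the arguments are placeholders, and older TRL releases take `tokenizer=` instead of `processing_class=`.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# Placeholder hyperparameters; the actual run's settings are not listed here
training_args = ORPOConfig(output_dir="Qwen2-0.5B-ORPO", logging_steps=25)
trainer = ORPOTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,  # `tokenizer=` in older TRL versions
    train_dataset=train_dataset,
)
trainer.train()
```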
Quick start:

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Chat-style generation with the original (unquantized) model
generator = pipeline("text-generation", model="trl-lib/Qwen2-0.5B-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://arxiv.org/abs/2403.07691).
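For reference, the core idea of ORPO is to fold preference optimization into the supervised loss itself: alongside the usual negative log-likelihood on the chosen response, it penalizes the odds ratio between the chosen and rejected responses, so no frozen reference model is needed. In the paper's notation:

$$
\mathcal{L}_{\text{ORPO}} = \mathbb{E}_{(x,\, y_w,\, y_l)}\big[\, \mathcal{L}_{\text{SFT}} + \lambda \cdot \mathcal{L}_{\text{OR}} \,\big],
\qquad
\mathcal{L}_{\text{OR}} = -\log \sigma\!\left( \log \frac{\text{odds}_\theta(y_w \mid x)}{\text{odds}_\theta(y_l \mid x)} \right),
$$

where $\text{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}$, $y_w$ is the chosen response, $y_l$ the rejected one, and $\lambda$ weights the odds-ratio term.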
Cite ORPO as:
```bibtex
@article{hong2024orpo,
    title  = {{ORPO: Monolithic Preference Optimization without Reference Model}},
    author = {Jiwoo Hong and Noah Lee and James Thorne},
    year   = 2024,
    eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```