---
license: apache-2.0
library_name: transformers
base_model:
- Qwen/Qwen2.5-32B-Instruct
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Qwen2.5-Gutenberg-Doppel-32B

[Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) and [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo).

### Method

[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 2x A100 GPUs for 1.25 epochs.
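As a rough illustration of what ORPO optimizes, below is a minimal sketch of its odds-ratio penalty, the preference term ORPO adds on top of the ordinary SFT loss. The function name and the example log-probabilities are hypothetical for illustration only; this is not the training code used for this model (the linked guide uses TRL's `ORPOTrainer`).

```python
import math

def orpo_odds_ratio_loss(logp_chosen: float, logp_rejected: float) -> float:
    """Sketch of ORPO's odds-ratio penalty.

    logp_chosen / logp_rejected: average per-token log-probabilities of the
    chosen and rejected completions under the policy model (both < 0).
    """
    def log_odds(logp: float) -> float:
        # log odds(p) = log(p / (1 - p)), computed in log space for stability
        return logp - math.log1p(-math.exp(logp))

    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # -log sigmoid(ratio): small when the chosen completion is already much
    # more likely than the rejected one, large when the ranking is inverted
    return -math.log(1.0 / (1.0 + math.exp(-ratio)))
```

During training this penalty (scaled by a weight hyperparameter) is added to the SFT loss on the chosen completion, pushing the model toward Gutenberg-style "chosen" prose and away from the "rejected" responses in the DPO pairs.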