MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic
Paper: arXiv 2406.11385
This is a merge of pre-trained language models created using mergekit.
This model was merged using the MetaGPT merge method, with Qwen/Qwen2.5-1.5B as the base.
The following models were included in the merge:

- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-Math-1.5B
- Qwen/Qwen2.5-Coder-1.5B
The following YAML configuration was used to produce this model:
```yaml
# MetaGPT
base_model: Qwen/Qwen2.5-1.5B
models:
  - model: Qwen/Qwen2.5-1.5B-Instruct
  - model: Qwen/Qwen2.5-Math-1.5B
  - model: Qwen/Qwen2.5-Coder-1.5B
merge_method: metagpt
dtype: float16
tokenizer:
  source: union
```
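As an illustrative sketch of the idea behind this merge: task arithmetic builds a "task vector" for each fine-tuned model (its weights minus the base weights) and adds a scaled combination of those vectors back onto the base. The toy function below uses plain Python lists in place of real tensors, and the choice of scaling coefficients (proportional to each task vector's squared norm, so no coefficient tuning is needed) is an assumption made for illustration, not a guaranteed reproduction of mergekit's `metagpt` implementation.

```python
def merge_task_arithmetic(base, finetuned):
    """Merge fine-tuned checkpoints into a base via task arithmetic.

    `base` and each entry of `finetuned` are dicts mapping a parameter
    name to a flat list of floats (a stand-in for real weight tensors).
    """
    # Task vector per model: tau_t[k] = theta_t[k] - theta_base[k]
    taus = [
        {k: [w - b for w, b in zip(m[k], base[k])] for k in base}
        for m in finetuned
    ]
    # Squared L2 norm of each task vector, summed over all parameters
    norms = [sum(x * x for vec in t.values() for x in vec) for t in taus]
    total = sum(norms)
    # Scaling coefficients proportional to squared norms (illustrative
    # assumption; the paper derives its coefficients in closed form)
    lambdas = [n / total for n in norms]
    # merged = base + sum_t lambda_t * tau_t
    merged = {k: list(base[k]) for k in base}
    for lam, tau in zip(lambdas, taus):
        for k in merged:
            merged[k] = [m + lam * t for m, t in zip(merged[k], tau[k])]
    return merged


# Toy usage: two "fine-tuned" models that each moved one coordinate.
base = {"w": [0.0, 0.0]}
model_a = {"w": [1.0, 0.0]}   # squared norm 1 -> lambda = 0.2
model_b = {"w": [0.0, 2.0]}   # squared norm 4 -> lambda = 0.8
result = merge_task_arithmetic(base, [model_a, model_b])
print(result["w"])  # [0.2, 1.6]
```

In practice the real merge operates on full model state dicts via mergekit; this sketch only shows the arithmetic that a task-vector merge performs per parameter.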