---
license: apache-2.0
tags:
- peft
- lora
- mixture-of-experts
- moe-lora
- multi-task-learning
- large-language-models
- llama
- llama-2
- pytorch
---
<a id="top"></a>
<div align="center">
<h1>🚀 D<sup>2</sup>MoRA: Diversity-Regulated Asymmetric MoE-LoRA Decomposition for Efficient Multi-Task Adaptation</h1>

<p>
<b>Jianhui Zuo</b><sup>1</sup>
<b>Xuemeng Song</b><sup>2†</sup>
<b>Haokun Wen</b><sup>3,4</sup>
<b>Meng Liu</b><sup>5</sup>
<b>Yupeng Hu</b><sup>1</sup>
<b>Jiuru Wang</b><sup>6</sup>
<b>Liqiang Nie</b><sup>3†</sup>
</p>

<p>
<sup>1</sup>School of Software, Shandong University<br>
<sup>2</sup>Department of Computer Science and Engineering, Southern University of Science and Technology<br>
<sup>3</sup>School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen)<br>
<sup>4</sup>School of Data Science, City University of Hong Kong<br>
<sup>5</sup>School of Computer and Artificial Intelligence, Shandong Jianzhu University<br>
<sup>6</sup>School of Computer Science and Engineering, Linyi University
</p>
</div>

These are the official pre-trained model weights and configuration files for **D<sup>2</sup>MoRA**, a novel **diversity-regulated asymmetric MoE-LoRA decomposition framework** for **parameter-efficient fine-tuning (PEFT)** of large language models in **multi-task adaptation** scenarios.

📄 **Paper:** Accepted by AAAI 2026

🔗 **GitHub Repository:** [softwavec/D2MoRA](https://github.com/softwavec/D2MoRA)

---
## 📋 Model Information

### 1. Model Name
**D<sup>2</sup>MoRA** (**D**iversity-Regulated Asymmetric **MoE-LoRA** Decomposition) checkpoints.

### 2. Task Type & Applicable Tasks
- **Task Type:** Parameter-Efficient Fine-Tuning (PEFT) / Low-Rank Adaptation (LoRA) / Mixture-of-Experts (MoE) / Multi-Task Learning
- **Applicable Tasks:** Efficient adaptation of large language models to heterogeneous downstream tasks, especially **multi-task commonsense reasoning** and related language understanding tasks.

### 3. Project Introduction
Low-Rank Adaptation (LoRA) has become a powerful parameter-efficient fine-tuning paradigm for adapting large language models. Recent studies further integrate LoRA with the Mixture-of-Experts (MoE) mechanism to improve multi-task adaptation. However, existing knowledge-sharing paradigms among LoRA experts still suffer from two major limitations:

1. **Constrained Functional Specialization**
   Existing one-to-many sharing paradigms force all experts to operate in a single shared low-rank subspace, limiting the flexibility of expert-specific transformations.

2. **Induced Expert Homogenization**
   Sharing a single down-projection matrix across experts may cause different experts to become overly similar, weakening expert diversity and reducing the benefit of MoE specialization.
To address these issues, **D<sup>2</sup>MoRA** introduces a **diversity-regulated asymmetric MoE-LoRA decomposition framework**. Instead of treating each LoRA expert as a fixed `(A, B)` pair, D<sup>2</sup>MoRA decomposes LoRA experts into two independent sets of base experts:

- **Down-projection experts:** A<sub>1</sub>, A<sub>2</sub>, ..., A<sub>M</sub>
- **Up-projection experts:** B<sub>1</sub>, B<sub>2</sub>, ..., B<sub>N</sub>
This design enables a novel **asymmetric many-to-many pairing** mechanism between down-projection and up-projection experts, allowing more flexible cross-expert knowledge sharing while preserving expert specialization. In addition, D<sup>2</sup>MoRA introduces:

- **Sample-Aware Down-Projection Expert Mixture**
- **Low-Rank Embedding-Aware Up-Projection Expert Mixture**
- **Dual Orthogonality Regularization**

to explicitly improve the diversity of both *A*-experts and *B*-experts and mitigate expert homogenization.

> 💡 **Note:** D<sup>2</sup>MoRA is evaluated in both **multi-task** and **single-task** settings, and consistently demonstrates strong effectiveness and generalization ability.
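The asymmetric decomposition, the two gated mixtures, and the dual orthogonality regularization can be sketched in PyTorch. This is a minimal illustrative sketch under stated assumptions, not the official implementation: the module name, the exact gate designs, and the initialization scheme are all placeholders, and the real routing details live in the GitHub repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricMoELoRA(nn.Module):
    """Sketch of a D^2MoRA-style adapter: M down-projection experts (A) and
    N up-projection experts (B), mixed by two separate gates so that any
    A-expert can pair with any B-expert (many-to-many pairing)."""

    def __init__(self, d_model, r, M=3, N=8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(M, d_model, r) * 0.01)  # down experts
        self.B = nn.Parameter(torch.zeros(N, r, d_model))         # up experts (zero init)
        self.gate_A = nn.Linear(d_model, M)  # sample-aware gate over A-experts
        self.gate_B = nn.Linear(r, N)        # low-rank-embedding-aware gate over B-experts

    def forward(self, x):  # x: (batch, d_model)
        wA = F.softmax(self.gate_A(x), dim=-1)            # (batch, M)
        z = torch.einsum('bm,mdr,bd->br', wA, self.A, x)  # mixed low-rank embedding
        wB = F.softmax(self.gate_B(z), dim=-1)            # (batch, N)
        return torch.einsum('bn,nrd,br->bd', wB, self.B, z)

    def orthogonality_loss(self):
        # Dual orthogonality regularization: penalize pairwise similarity
        # within each expert set to keep experts diverse.
        def pairwise(E):                             # E: (K, p, q)
            flat = F.normalize(E.flatten(1), dim=-1)
            gram = flat @ flat.t()
            return (gram - torch.eye(E.shape[0])).pow(2).sum()
        return pairwise(self.A) + pairwise(self.B)
```

In training, this regularizer would be added to the task loss with the coefficient reported below, e.g. `loss = task_loss + 1e-4 * layer.orthogonality_loss()`.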
### 4. Training Data Source
The model was primarily trained and evaluated on the **Commonsense 170K** benchmark, which contains eight public commonsense reasoning datasets:

- **BoolQ**
- **PIQA**
- **SIQA**
- **HellaSwag**
- **WinoGrande**
- **ARC-c**
- **ARC-e**
- **OBQA**

---
## 🚀 Usage & Basic Inference

These weights are designed to be used directly with the official **D<sup>2</sup>MoRA** GitHub repository.

### Step 1: Prepare the Environment
Clone the GitHub repository and install the dependencies following the official repository instructions:

```bash
git clone https://github.com/softwavec/D2MoRA.git
cd D2MoRA
```

Please refer to the official repository for the exact environment setup and dependency installation details.
### Step 2: Download Model Weights & Data

Download the checkpoint files (e.g., `best_model.pth`) from this Hugging Face repository and place them into your local checkpoint directory.

You should also prepare the **Commonsense 170K** benchmark and related processed data according to the official repository instructions.
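Once `best_model.pth` is downloaded, it can help to inspect it before wiring it into the repository's model classes. The helper below is a hypothetical sketch: the actual checkpoint layout (a plain state dict vs. a wrapper dict) is defined by the official D2MoRA training scripts, so the `"model"` key is an assumption.

```python
import torch

def load_d2mora_checkpoint(path):
    """Load a checkpoint onto CPU and return its state dict.

    Assumes either a plain state dict or a {"model": state_dict} wrapper;
    the real layout is determined by the official D2MoRA scripts.
    """
    state = torch.load(path, map_location="cpu")
    if isinstance(state, dict) and "model" in state:
        return state["model"]
    return state
```

Printing `list(load_d2mora_checkpoint("best_model.pth"))` shows the parameter names expected by the repository's model definition.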
### Step 3: Training / Evaluation

D<sup>2</sup>MoRA is built for PEFT-based adaptation of large language models such as **LLaMA-7B** and **LLaMA2-7B**.

In the paper, the method fine-tunes the **Query / Key / Value** projections of the self-attention layers. Typical experimental settings include:

- **Backbones:** LLaMA-7B, LLaMA2-7B
- **Adapted modules:** Query / Key / Value projections
- **Orthogonality coefficient:** `λ = 1e-4`
- **Dropout:** `0.05`
- **Learning rate:** `3e-4`
- **Batch size:** `4` per A100 GPU (40GB)

Representative D<sup>2</sup>MoRA settings reported in the paper include:

- **LLaMA-7B**
  - `{M = 3, N = 8, r = 8}`
  - `{M = 3, N = 4, r = 16}`
- **LLaMA2-7B**
  - `{M = 3, N = 8, r = 8}`
  - `{M = 4, N = 3, r = 16}`
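As a rough sanity check on the footprint of these settings, the expert matrices alone can be counted as follows. This back-of-envelope sketch assumes LLaMA-7B-style dimensions (hidden size 4096, 32 decoder layers, Q/K/V adapted) and deliberately ignores gating/router parameters, whose size this card does not specify.

```python
HIDDEN, LAYERS, MODULES = 4096, 32, 3  # hidden size, decoder layers, Q/K/V per layer

def expert_params(M, N, r, d=HIDDEN):
    """Trainable parameters in the A- and B-expert matrices only:
    M down-experts of shape (d, r) plus N up-experts of shape (r, d),
    per adapted module, across all layers."""
    per_module = M * d * r + N * r * d
    return LAYERS * MODULES * per_module

print(expert_params(3, 8, 8))    # {M = 3, N = 8, r = 8}
print(expert_params(3, 4, 16))   # {M = 3, N = 4, r = 16}
```

Both configurations land in the tens of millions of trainable parameters, i.e., well under 1% of a 7B backbone.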
Please use the official repository scripts for training and evaluation.
## ✍️ Citation

If you find our work or these model weights useful in your research, please consider leaving a **Star** ⭐ on our GitHub repo and citing our paper:

```bibtex
@inproceedings{D2MoRA,
  title={D$^2$MoRA: Diversity-Regulated Asymmetric MoE-LoRA Decomposition for Efficient Multi-Task Adaptation},
  author={Zuo, Jianhui and Song, Xuemeng and Wen, Haokun and Liu, Meng and Hu, Yupeng and Wang, Jiuru and Nie, Liqiang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}
```