---
library_name: transformers
license: apache-2.0
datasets:
- UKPLab/dara
---

# DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs
|
|
## Model Information
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned semantic parsing LLM agent for question answering over knowledge graphs (KGQA).
We fine-tune Llama-2-7B on our curated reasoning trajectories: https://huggingface.co/datasets/UKPLab/dara.
|
|
|
|
## Model Usage
```python
import torch
from transformers import AutoModelForCausalLM

# Load the fine-tuned DARA agent in half precision, sharded across available devices
model = AutoModelForCausalLM.from_pretrained(
    "UKPLab/dara-llama-2-7b",
    torch_dtype=torch.float16,
    device_map="auto",
    cache_dir="cache",
)
```
For more information, please check the repository: https://github.com/UKPLab/acl2024-DARA
|
|
## Hyperparameters
- Learning rate: 2e-5
- Batch size: 4
- Training epochs: 10
|
|