Add license, link to paper, and GitHub repo
#3 opened by nielsr (HF Staff)

README.md CHANGED
````diff
@@ -1,10 +1,12 @@
 ---
 datasets:
 - Lin-Chen/ShareGPT4V
-pipeline_tag: image-text-to-text
 library_name: xtuner
+pipeline_tag: image-text-to-text
+license: apache-2.0
 ---
 
+```markdown
 <div align="center">
 <img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
 
@@ -14,6 +16,10 @@ library_name: xtuner
 
 </div>
 
+The weights in this repository are from the paper [Magic 1-For-1: Generating One Minute Video Clips within One Minute](https://huggingface.co/papers/2502.07701).
+
+Github repository: https://github.com/DA-Group-PKU/Magic-1-For-1.
+
 ## Model
 
 llava-llama-3-8b-v1_1-hf is a LLaVA model fine-tuned from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) and [InternVL-SFT](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#prepare-training-datasets) by [XTuner](https://github.com/InternLM/xtuner).
@@ -120,3 +126,4 @@ Please refer to [docs](https://github.com/InternLM/xtuner/tree/main/xtuner/confi
 year={2023}
 }
 ```
+```
````
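
For context when reviewing: the card describes a transformers-format ("-hf") LLaVA checkpoint, so it can typically be loaded with the standard `LlavaForConditionalGeneration` API. The sketch below is illustrative only; the repo id `xtuner/llava-llama-3-8b-v1_1-hf` and the Llama-3-style prompt template are assumptions inferred from the model name in the card, not anything stated in this diff.

```python
# Minimal usage sketch (untested): assumes the "-hf" weights follow the standard
# transformers LLaVA layout. The repo id and prompt template below are assumptions.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "xtuner/llava-llama-3-8b-v1_1-hf"  # assumed repo id

model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Any RGB image works; a COCO sample is used here for illustration.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Assumed Llama-3 chat format with an <image> placeholder.
prompt = (
    "<|start_header_id|>user<|end_header_id|>\n\n<image>\n"
    "What is shown in this image?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```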
|