Upload README_CN.md with huggingface_hub
README_CN.md (+24 -22)
base_model_relation: quantized
license: apache-2.0
---

# Model Notes (SVDQuant)

> **Documentation language**: Chinese | [English](README.md)
## Model Name

- **Model repository**: `tonera/Chroma1-HD-SVDQ`
- **Base (Diffusers weight path)**: `tonera/Chroma1-HD-SVDQ` (the root of this repository)
- **Quantized Transformer weights**: `tonera/Chroma1-HD-SVDQ/svdq-<precision>_r32-Chroma1-HD.safetensors`
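The quantized-weight filename follows a fixed pattern, with `<precision>` filled in by a string such as `"int4"` or `"fp4"`. As a small illustration (the `svdq_filename` helper below is hypothetical, not part of this repository), the name can be assembled like this:

```python
# Hypothetical helper: assemble the quantized-weight filename from its parts.
# The <precision> slot takes a string such as "int4" or "fp4"; in practice
# nunchaku.utils.get_precision() selects one for the local GPU.
def svdq_filename(precision: str, model: str = "Chroma1-HD", rank: int = 32) -> str:
    return f"svdq-{precision}_r{rank}-{model}.safetensors"

print(svdq_filename("int4"))  # → svdq-int4_r32-Chroma1-HD.safetensors
```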

## Quantization / Inference Technology

- **Inference engine**: Nunchaku (`https://github.com/nunchaku-ai/nunchaku`)

Nunchaku is a high-performance inference engine for **4-bit (FP4/INT4) low-bit neural networks**. Its core goal is to sharply reduce VRAM usage and speed up inference while preserving generation quality as much as possible. It implements and productionizes post-training quantization schemes such as **SVDQuant**, and uses operator/kernel fusion and related optimizations to cut the extra overhead introduced by the low-rank branch.

The quantized Chroma1-HD weights in this repository (e.g. `svdq-*_r32-*.safetensors`) are intended to be used with Nunchaku for efficient inference on supported GPUs.
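To make the low-rank-branch idea concrete, here is a toy NumPy sketch of the SVDQuant-style decomposition (an illustration only, not the Nunchaku kernels): the weight matrix is split into a low-rank SVD branch plus a residual that is quantized to 4 bits, so a single outlier no longer inflates the quantization scale of the whole matrix.

```python
import numpy as np

def quantize_int4(w):
    """Symmetric per-tensor 4-bit quantization (levels -8..7), then dequantize."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w[0, 0] = 25.0  # a single outlier inflates the per-tensor quantization scale

# Plain 4-bit quantization of the full matrix
err_plain = np.linalg.norm(w - quantize_int4(w))

# SVDQuant-style: a rank-r SVD branch absorbs the large components,
# and only the (now well-behaved) residual goes through 4-bit quantization.
r = 8
u, s, vt = np.linalg.svd(w, full_matrices=False)
low_rank = (u[:, :r] * s[:r]) @ vt[:r]
err_svdq = np.linalg.norm(w - (low_rank + quantize_int4(w - low_rank)))

print(f"plain 4-bit error:      {err_plain:.2f}")
print(f"low-rank + 4-bit error: {err_svdq:.2f}")  # noticeably smaller
```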

## You Must Install Nunchaku Before Use

- **Official installation docs** (recommended as the authoritative reference): `https://nunchaku.tech/docs/nunchaku/installation/installation.html`

### Recommended: Install an Official Prebuilt Wheel

- **Prerequisite**: install `PyTorch >= 2.5` (the exact requirement is whatever the chosen wheel states)
- **Install the nunchaku wheel**: pick the wheel matching your environment from GitHub Releases / Hugging Face / ModelScope (note: `cp311` means Python 3.11):
  - `https://github.com/nunchaku-ai/nunchaku/releases`

```bash
# Example (substitute the wheel URL that matches your torch/cuda/python versions)
pip install https://github.com/nunchaku-ai/nunchaku/releases/download/vX.Y.Z/nunchaku-X.Y.Z+torch2.9-cp311-cp311-linux_x86_64.whl
```
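Since the right wheel depends on your interpreter and PyTorch build, a small snippet like the following (my own illustration, not an official tool) can print the tags to look for in the wheel filename:

```python
# Print the wheel tags to match: the cpXY tag comes from the running Python,
# and the torchA.B tag from the installed PyTorch (if any).
import sys

cp_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"

try:
    import torch
    torch_tag = "torch" + ".".join(torch.__version__.split(".")[:2])
except ImportError:
    torch_tag = "torch-not-installed"

print(f"look for a wheel containing: {torch_tag} ... {cp_tag}-{cp_tag}")
```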

- **Tip (RTX 50-series GPUs)**: `CUDA >= 12.8` is generally recommended, and FP4 models are preferred for better compatibility and performance (defer to the official docs).

## Usage Example (Diffusers + Nunchaku Transformer)

- Note: I am working to get this PR merged upstream into Nunchaku: https://github.com/nunchaku-ai/nunchaku/pull/928
- Until it is merged, if you want to try the model, copy `transformer_chroma.py` from this repository to `nunchaku/models/transformers/transformer_chroma.py`, then use it like this:

```python
from nunchaku.models.transformers.transformer_chroma import NunchakuChromaTransformer2dModel
```
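To find where that file needs to go inside your installed copy of Nunchaku, a sketch like the following can resolve the package directory (the `package_dir` helper is hypothetical; the demo call at the end uses the stdlib `json` package so the snippet runs even without Nunchaku installed):

```python
# Hypothetical helper: resolve an installed package's directory so the patched
# transformer_chroma.py can be copied into <pkg>/models/transformers/.
import importlib.util
import pathlib

def package_dir(name: str) -> pathlib.Path:
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(f"package {name!r} not found")
    return pathlib.Path(spec.origin).parent

# Target for the copy (requires nunchaku to be installed):
#   package_dir("nunchaku") / "models" / "transformers" / "transformer_chroma.py"
print(package_dir("json"))  # demo with a stdlib package so the call always works
```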

```python
import torch
from diffusers import ChromaPipeline

from nunchaku import NunchakuChromaTransformer2dModel
from nunchaku.utils import get_precision

MODEL = "Chroma1-HD-SVDQ"
REPO_ID = f"tonera/{MODEL}"

if __name__ == "__main__":