Upload README.md with huggingface_hub
README.md CHANGED
```diff
@@ -25,10 +25,10 @@ This is a **GGUF** quantized version of [zai-org/AutoGLM-Phone-9B-Multilingual](
 
 | File | Quantization | Size | VRAM | Description |
 |------|-------------|------|------|-------------|
-| `AutoGLM-Phone-9B-Multilingual-q4_k_m.gguf` | Q4_K_M | 5.7G |
-| `AutoGLM-Phone-9B-Multilingual-q5_k_m.gguf` | Q5_K_M | 6.6G |
-| `AutoGLM-Phone-9B-Multilingual-q6_k.gguf` | Q6_K | 7.7G |
-| `AutoGLM-Phone-9B-Multilingual-q8_0.gguf` | Q8_0 | 9.4G |
+| `AutoGLM-Phone-9B-Multilingual-q4_k_m.gguf` | Q4_K_M | 5.7G | ~10GB | Performance balanced |
+| `AutoGLM-Phone-9B-Multilingual-q5_k_m.gguf` | Q5_K_M | 6.6G | ~11GB | High quality |
+| `AutoGLM-Phone-9B-Multilingual-q6_k.gguf` | Q6_K | 7.7G | ~12GB | Excellent quality |
+| `AutoGLM-Phone-9B-Multilingual-q8_0.gguf` | Q8_0 | 9.4G | ~14GB | Best quality |
 | `mmproj-AutoGLM-Phone-9B-Multilingual-F16.gguf` | F16 | 1.7G | - | Vision Encoder (required) |
 
 **Total storage**: ~31GB (all quantizations + vision encoder)
```
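Since the commit itself was made with `huggingface_hub`, a single quantization plus the required vision encoder can be fetched the same way. A minimal sketch using `huggingface-cli`; the repo id below is a placeholder for wherever these GGUF files are hosted, not confirmed by this README:

```bash
# Fetch one quantization and the required mmproj vision encoder.
# NOTE: the repo id "your-namespace/..." is a placeholder.
huggingface-cli download your-namespace/AutoGLM-Phone-9B-Multilingual-GGUF \
  AutoGLM-Phone-9B-Multilingual-q5_k_m.gguf \
  mmproj-AutoGLM-Phone-9B-Multilingual-F16.gguf \
  --local-dir ./models
```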
```diff
@@ -78,14 +78,16 @@ python main.py \
 
 ## 💻 Hardware Requirements
 
-### Quick Reference
+### Quick Reference (Tested on RTX 4090)
 
-| Quantization | Model Size | Vision Encoder | Total |
-|--------------|------------|----------------|-------|----------
-| Q4_K_M | 5.7G | 1.7G | ~7.4G |
-| Q5_K_M | 6.6G | 1.7G | ~8.3G |
-| Q6_K | 7.7G | 1.7G | ~9.4G |
-| Q8_0 | 9.4G | 1.7G | ~11.1G |
+| Quantization | Model Size | Vision Encoder | Total | Actual VRAM* | Quality |
+|--------------|------------|----------------|-------|--------------|---------|
+| Q4_K_M | 5.7G | 1.7G | ~7.4G | ~10GB | Good |
+| Q5_K_M | 6.6G | 1.7G | ~8.3G | ~11GB | Very Good |
+| Q6_K | 7.7G | 1.7G | ~9.4G | ~12GB | Excellent |
+| Q8_0 | 9.4G | 1.7G | ~11.1G | ~14GB | Best |
+
+\*VRAM usage measured with `--flash-attn on` and all layers on GPU (`-ngl 99`)
 
 ### System Requirements
 
```
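The new VRAM footnote implies a concrete launch configuration. As a hedged sketch of reproducing those conditions with llama.cpp's multimodal CLI (the `llama-mtmd-cli` binary choice, file paths, and prompt are assumptions based on current llama.cpp tooling, not taken from this README):

```bash
# Single-image smoke test under the measured conditions:
# all layers on GPU (-ngl 99) with flash attention enabled.
# Paths and prompt are illustrative.
llama-mtmd-cli \
  -m ./models/AutoGLM-Phone-9B-Multilingual-q5_k_m.gguf \
  --mmproj ./models/mmproj-AutoGLM-Phone-9B-Multilingual-F16.gguf \
  --image screenshot.png \
  -p "List the tappable UI elements on this screen." \
  -ngl 99 \
  --flash-attn on
```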
```diff
@@ -101,6 +103,7 @@ python main.py \
 - **KV Cache**: Quantized to Q8_0 to reduce memory usage
 - **Batch Size**: Optimized for RTX 4090 (adjust based on your GPU)
 - **Context**: Supports up to 32K tokens with M-RoPE
+- **All layers on GPU**: Set `-ngl 99` to offload all transformer layers to GPU
 
 ## 🎯 Recommended Usage
 
```
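Taken together, these optimization notes map onto a single server invocation. A sketch assuming a recent `llama-server` build with GLM4V/mmproj support (paths and port are illustrative; a quantized V-cache generally requires flash attention in llama.cpp):

```bash
# Q8_0 KV cache, 32K context, full GPU offload, flash attention on.
llama-server \
  -m ./models/AutoGLM-Phone-9B-Multilingual-q5_k_m.gguf \
  --mmproj ./models/mmproj-AutoGLM-Phone-9B-Multilingual-F16.gguf \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -c 32768 \
  -ngl 99 \
  --flash-attn on \
  --port 8080
```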
```diff
@@ -112,15 +115,15 @@ Use **Q5_K_M** or **Q6_K** for the best balance between quality and performance:
 
 ### For Maximum Quality
 Use **Q8_0** when:
--
--
--
+- You want the highest possible accuracy
+- Running on RTX 4090 or better
+- Complex multi-step GUI automation tasks
 
-### For
+### For Consumer GPUs
 Use **Q4_K_M** when:
-- Limited VRAM (
+- Limited VRAM (12GB cards like RTX 4070)
 - Need faster inference
-- Running on
+- Running on gaming GPUs
 
 ## 📄 License
 
```
```diff
@@ -137,3 +140,4 @@ https://huggingface.co/zai-org/AutoGLM-Phone-9B-Multilingual
 
 **Conversion Date**: 2025-12-29
 **llama.cpp Version**: latest (with GLM4V support)
+**Tested Hardware**: RTX 4090 24GB
```