gannima committed
Commit d616423 · verified
1 Parent(s): 1515a11

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +21 -17
README.md CHANGED
@@ -25,10 +25,10 @@ This is a **GGUF** quantized version of [zai-org/AutoGLM-Phone-9B-Multilingual](
 
 | File | Quantization | Size | VRAM | Description |
 |------|-------------|------|------|-------------|
-| `AutoGLM-Phone-9B-Multilingual-q4_k_m.gguf` | Q4_K_M | 5.7G | 16GB | Performance balanced |
-| `AutoGLM-Phone-9B-Multilingual-q5_k_m.gguf` | Q5_K_M | 6.6G | 20GB | High quality |
-| `AutoGLM-Phone-9B-Multilingual-q6_k.gguf` | Q6_K | 7.7G | 24GB | Excellent quality |
-| `AutoGLM-Phone-9B-Multilingual-q8_0.gguf` | Q8_0 | 9.4G | 24GB+ | Best quality |
+| `AutoGLM-Phone-9B-Multilingual-q4_k_m.gguf` | Q4_K_M | 5.7G | ~10GB | Performance balanced |
+| `AutoGLM-Phone-9B-Multilingual-q5_k_m.gguf` | Q5_K_M | 6.6G | ~11GB | High quality |
+| `AutoGLM-Phone-9B-Multilingual-q6_k.gguf` | Q6_K | 7.7G | ~12GB | Excellent quality |
+| `AutoGLM-Phone-9B-Multilingual-q8_0.gguf` | Q8_0 | 9.4G | ~14GB | Best quality |
 | `mmproj-AutoGLM-Phone-9B-Multilingual-F16.gguf` | F16 | 1.7G | - | Vision Encoder (required) |
 
 **Total storage**: ~31GB (all quantizations + vision encoder)
@@ -78,14 +78,16 @@ python main.py \
 
 ## 💻 Hardware Requirements
 
-### Quick Reference
+### Quick Reference (Tested on RTX 4090)
 
-| Quantization | Model Size | Vision Encoder | Total | Min VRAM | Recommended GPU | Quality |
-|--------------|------------|----------------|-------|----------|-----------------|---------|
-| Q4_K_M | 5.7G | 1.7G | ~7.4G | 16GB | RTX 4080 / 4070 Ti | Good |
-| Q5_K_M | 6.6G | 1.7G | ~8.3G | 20GB | RTX 4090 / 3090 | Very Good |
-| Q6_K | 7.7G | 1.7G | ~9.4G | 24GB | RTX 4090 | Excellent |
-| Q8_0 | 9.4G | 1.7G | ~11.1G | 24GB+ | RTX 4090 / A6000 | Best |
+| Quantization | Model Size | Vision Encoder | Total | Actual VRAM* | Quality |
+|--------------|------------|----------------|-------|--------------|---------|
+| Q4_K_M | 5.7G | 1.7G | ~7.4G | ~10GB | Good |
+| Q5_K_M | 6.6G | 1.7G | ~8.3G | ~11GB | Very Good |
+| Q6_K | 7.7G | 1.7G | ~9.4G | ~12GB | Excellent |
+| Q8_0 | 9.4G | 1.7G | ~11.1G | ~14GB | Best |
+
+\*VRAM usage measured with `--flash-attn on` and all layers on GPU (`-ngl 99`)
 
 ### System Requirements
 
@@ -101,6 +103,7 @@ python main.py \
 - **KV Cache**: Quantized to Q8_0 to reduce memory usage
 - **Batch Size**: Optimized for RTX 4090 (adjust based on your GPU)
 - **Context**: Supports up to 32K tokens with M-RoPE
+- **All layers on GPU**: Set `-ngl 99` to offload all transformer layers to GPU
 
 ## 🎯 Recommended Usage
 
@@ -112,15 +115,15 @@ Use **Q5_K_M** or **Q6_K** for the best balance between quality and performance:
 
 ### For Maximum Quality
 Use **Q8_0** when:
-- VRAM is not a constraint
-- You need the highest possible accuracy
-- Running complex multi-step tasks
+- You want the highest possible accuracy
+- Running on RTX 4090 or better
+- Complex multi-step GUI automation tasks
 
-### For Constrained Hardware
+### For Consumer GPUs
 Use **Q4_K_M** when:
-- Limited VRAM (16GB)
+- Limited VRAM (12GB cards like RTX 4070)
 - Need faster inference
-- Running on consumer GPUs
+- Running on gaming GPUs
 
 ## 📄 License
 
@@ -137,3 +140,4 @@ https://huggingface.co/zai-org/AutoGLM-Phone-9B-Multilingual
 
 **Conversion Date**: 2025-12-29
 **llama.cpp Version**: latest (with GLM4V support)
+**Tested Hardware**: RTX 4090 24GB
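
The settings referenced across the diff (`-ngl 99`, `--flash-attn on`, Q8_0 KV cache, 32K context, the required vision encoder) can be combined into a single invocation. A minimal sketch, assuming llama.cpp's `llama-mtmd-cli` multimodal binary and current long-form flag spellings; exact binary and flag names vary between llama.cpp builds, so check `--help` on your version:

```shell
# Run the Q5_K_M quant with the flags the README's VRAM figures assume.
# -ngl 99          : offload all transformer layers to the GPU
# --flash-attn on  : flash attention, as used for the measured VRAM numbers
# --cache-type-k/v : quantize the KV cache to Q8_0 to reduce memory usage
# -c 32768         : 32K-token context (M-RoPE)
llama-mtmd-cli \
  -m AutoGLM-Phone-9B-Multilingual-q5_k_m.gguf \
  --mmproj mmproj-AutoGLM-Phone-9B-Multilingual-F16.gguf \
  -ngl 99 \
  --flash-attn on \
  --cache-type-k q8_0 \
  --cache-type-v q8_0 \
  -c 32768
```

The `--mmproj` file is not optional for this model: the F16 vision encoder (1.7G) must be loaded alongside whichever quantization is chosen, which is why the tables report totals rather than model size alone.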