---
license: apache-2.0
language:
- en
- zh
- vi
base_model:
- deepseek-ai/DeepSeek-V4-Flash
tags:
- deepseek
- deepseek4
- deepseekpro
- llm
- quantization
- gguf
- llama.cpp
- inference-optimization
---

# DeepSeekV4Flash Quantization Repository

![v4-benchmark-2](https://cdn-uploads.huggingface.co/production/uploads/671ab90d28ec35263e09152f/WXhyPJ5E8r3B2p0TO8us6.png)

This repository provides scripts and guidelines for quantizing the **DeepSeek V4 Flash** model, reducing its on-disk size and improving inference performance on constrained hardware.

![v4-efficiency](https://cdn-uploads.huggingface.co/production/uploads/671ab90d28ec35263e09152f/m4HSN3MmYyW2SHZytAbFE.png)

---

## 🚀 Purpose
- Reduce model size (BF16 → Q3/Q4/Q5/Q8, etc.; see the rough estimate below)
- Improve inference speed
- Enable deployment on limited GPU/CPU resources
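
As a rough size estimate: BF16 stores 16 bits per weight, while Q4_K_M averages about 4.8 bits per weight in llama.cpp, so a Q4_K_M file typically comes out around 3.3× smaller than its BF16 source; Q8_0 (about 8.5 bits per weight) roughly halves it. Exact ratios vary with tensor layout and which tensors are kept at higher precision.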

---

## 🌍 Languages
- English (en)
- Chinese (zh)
- Vietnamese (vi)

---

## 🧠 Base Model
- [DeepSeek-V4-Flash](https://huggingface.co/deepseek-ai/DeepSeek-V4-Flash)

---

## 📦 Contents
- Model conversion and quantization scripts
- Usage examples for llama.cpp / GGUF workflows
- Common quantization configurations (see the sketch below)
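
As an illustration of those configurations, the sketch below quantizes one BF16 GGUF (produced in Example Usage further down) to several common levels in a single pass; file names are illustrative:

```bash
# Generate several common quantization levels from one BF16 GGUF source
for q in Q3_K_M Q4_K_M Q5_K_M Q8_0; do
    ./llama-quantize models/DeepSeekV4Flash-BF16.gguf \
        "models/DeepSeekV4Flash-${q}.gguf" "$q"
done
```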

---

## 🛠️ Requirements
- Python >= 3.12
- A recent build of llama.cpp (GGUF is its native model format)
- HuggingFace Transformers (if converting from HF format)
- Sufficient RAM/VRAM depending on model size
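
A minimal setup might look like this, assuming a CPU-only CMake build of llama.cpp (add backend flags such as `-DGGML_CUDA=ON` for NVIDIA GPUs):

```bash
# Clone and build llama.cpp; binaries (llama-quantize, llama-cli, ...) land in build/bin
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j

# Python dependencies for the HF -> GGUF conversion script
pip install -r requirements.txt
```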

---

## ⚙️ Example Usage

```bash
# Download the checkpoint locally first, e.g.:
#   huggingface-cli download deepseek-ai/DeepSeek-V4-Flash --local-dir models/DeepSeek-V4-Flash
# convert_hf_to_gguf.py takes the local model directory as a positional argument.
python convert_hf_to_gguf.py models/DeepSeek-V4-Flash \
    --outfile models/DeepSeekV4Flash-BF16.gguf --outtype bf16

# Quantize the BF16 GGUF (arguments: input, output, quantization type)
./llama-quantize models/DeepSeekV4Flash-BF16.gguf \
    models/DeepSeekV4Flash-Q4_K_M.gguf Q4_K_M
```
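
After quantizing, a quick smoke test with `llama-cli` confirms the file loads and generates text; the path matches the example above:

```bash
# Load the quantized model and generate 64 tokens from a short prompt
./llama-cli -m models/DeepSeekV4Flash-Q4_K_M.gguf \
    -p "Hello, who are you?" -n 64
```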

---

## 📌 Notes
- Quantization may require significant system memory depending on model size
- Some quantization formats may not be compatible with all runtimes or versions
- Always validate output quality after quantization (see the check below)
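
One way to validate quality is llama.cpp's perplexity tool, run on the same evaluation text for both the quantized file and the BF16 baseline; a small perplexity gap indicates little quality loss. The evaluation file name below is a placeholder:

```bash
# Lower perplexity is better; compare the quantized model against the BF16 baseline
./llama-perplexity -m models/DeepSeekV4Flash-Q4_K_M.gguf -f wiki.test.raw
./llama-perplexity -m models/DeepSeekV4Flash-BF16.gguf -f wiki.test.raw
```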

---

## 👤 Author

- Email: tecaprovn@gmail.com
- Telegram: https://t.me/tamndx

---

## 📄 License

This repository follows the original DeepSeek model license.

- Base model: Apache 2.0 (DeepSeek)
- This repository contains conversion scripts only; the original model weights are not modified