Video test: https://www.bilibili.com/video/BV1jzoqB6EjD/#reply116472214984594
Please compile PR https://github.com/ggml-org/llama.cpp/pull/22378 yourself.
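A minimal sketch of fetching and building that PR locally, assuming a standard llama.cpp CMake build; the local branch name `pr-22378` is just an illustrative label, and backend flags (CUDA, Metal, etc.) depend on your hardware:

```shell
# Clone llama.cpp and fetch the PR branch
# (GitHub exposes every PR at refs/pull/<id>/head)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git fetch origin pull/22378/head:pr-22378
git checkout pr-22378

# Standard CMake build; add backend flags as needed,
# e.g. -DGGML_CUDA=ON for NVIDIA GPUs
cmake -B build
cmake --build build --config Release -j
```

After the build, the binaries (e.g. `llama-cli`, `llama-server`) land under `build/bin/` and can load the GGUF files from this repository.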
Quantizations
- 2-bit
- 4-bit
Model tree for lovedheart/DeepSeek-V4-Flash-GGUF
- Base model: deepseek-ai/DeepSeek-V4-Flash