khasinski/GLM-4.7-Flash-Q8_0-GGUF

Tags: GGUF, glm, llama-cpp, quantized, Mixture of Experts, conversational
License: mit
Files and versions (branch: main, 31.8 GB total)
1 contributor, 3 commits. Latest commit: khasinski, "Upload GLM-4.7-Flash-Q8_0.gguf with huggingface_hub" (10a1c7f, verified, 3 months ago)
.gitattributes             1.58 kB   Upload GLM-4.7-Flash-Q8_0.gguf with huggingface_hub   3 months ago
GLM-4.7-Flash-Q8_0.gguf    31.8 GB   Upload GLM-4.7-Flash-Q8_0.gguf with huggingface_hub   3 months ago
README.md                  1.69 kB   Upload README.md with huggingface_hub                 3 months ago
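Since this repo hosts a single GGUF file and the commits were made with `huggingface_hub`, the file can be fetched with the same library. A minimal sketch, assuming the repo and filename shown above (the `hf_hub_url` call only builds the download URL; the commented `hf_hub_download` call would fetch the full 31.8 GB file):

```python
from huggingface_hub import hf_hub_url  # pip install huggingface_hub

# Resolve the direct download URL for the quantized model file.
# Repo ID and filename are taken from this page.
url = hf_hub_url(
    repo_id="khasinski/GLM-4.7-Flash-Q8_0-GGUF",
    filename="GLM-4.7-Flash-Q8_0.gguf",
)
print(url)  # e.g. https://huggingface.co/.../resolve/main/GLM-4.7-Flash-Q8_0.gguf

# To actually download (31.8 GB — needs disk space and bandwidth):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(
#     repo_id="khasinski/GLM-4.7-Flash-Q8_0-GGUF",
#     filename="GLM-4.7-Flash-Q8_0.gguf",
# )
```

Once downloaded, the file can be loaded with a llama.cpp build (the `llama-cpp` tag suggests this is the intended runtime), e.g. `llama-cli -m GLM-4.7-Flash-Q8_0.gguf`.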