Hugging Face
mlx-community/deepseek-ai-DeepSeek-V4-Flash-4bit
Tags: Text Generation · MLX · Safetensors · deepseek_v4 · 4-bit precision · License: MIT
Community
2 discussions (0 closed), sorted by recently created:
- DWQ 4 bit (1) — #2, opened 1 day ago by sm54
- Is it possible to convert to a 2-bit quantized version? (1) — #1, opened 7 days ago by hehua2008