DeepSeek-V3.2 (MLX)
A collection by baa-ai, updated 2 days ago.

Mixed-precision MLX builds of deepseek-ai/DeepSeek-V3.2 for Apple Silicon.
baa-ai/DeepSeek-V3.2-RAM-350GB-MLX • Text Generation • 672B • Updated 1 day ago • 259