baa-ai's Collections
Qwen3.6-27B (updated 4 days ago)

Mixed-precision MLX builds of Qwen/Qwen3.6-27B at the predicted local and global operating points.
baa-ai/Qwen3.6-27B-RAM-16GB-MLX
Text Generation • 27B • Updated about 10 hours ago • 936 downloads • 3 likes
baa-ai/Qwen3.6-27B-RAM-28GB-MLX
Text Generation • 27B • Updated about 10 hours ago • 599 downloads • 1 like