
MiniMax-M2.7 (MLX)

updated 2 days ago

Quantized MLX versions of MiniMaxAI/MiniMax-M2.7, sized to fit within specific Apple Silicon unified-memory (RAM) budgets. Size points: 91 GB, 100 GB, 111 GB, 116 GB, 120 GB.


  • baa-ai/MiniMax-M2.7-RAM-100GB-MLX

    Text Generation • 229B params • Updated 2 days ago • 2.88k downloads • 4 likes

  • baa-ai/MiniMax-M2.7-RAM-120GB-MLX

    Text Generation • 229B params • Updated 2 days ago • 2.39k downloads • 3 likes

  • baa-ai/MiniMax-M2.7-RAM-116GB-MLX

    Text Generation • 229B params • Updated 2 days ago • 1.01k downloads • 2 likes

  • baa-ai/MiniMax-M2.7-RAM-111GB-MLX

    Text Generation • 229B params • Updated 2 days ago • 853 downloads • 1 like

  • baa-ai/MiniMax-M2.7-RAM-91GB-MLX

    Text Generation • 229B params • Updated 2 days ago • 287 downloads
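Since each checkpoint is sized against a unified-memory budget, a small helper can pick the largest size point that fits a given machine while leaving headroom for the OS and KV cache. This is a hypothetical sketch, not part of the collection; the 8 GB headroom default is an assumption, and the repo-name pattern mirrors the models listed above.

```python
# Hypothetical helper: choose the largest checkpoint from this collection
# that fits in unified memory, keeping headroom for the OS and KV cache.
SIZE_POINTS_GB = [91, 100, 111, 116, 120]  # size points from this collection

def pick_size_point(total_ram_gb: int, headroom_gb: int = 8):
    """Return the largest size point (GB) that fits, or None if none do."""
    budget = total_ram_gb - headroom_gb
    fitting = [s for s in SIZE_POINTS_GB if s <= budget]
    return max(fitting) if fitting else None

def repo_for(size_gb: int) -> str:
    # Repo-name pattern taken from the entries listed above.
    return f"baa-ai/MiniMax-M2.7-RAM-{size_gb}GB-MLX"

size = pick_size_point(128)   # 128 GB Mac → 120 GB checkpoint
print(size, repo_for(size))   # 120 baa-ai/MiniMax-M2.7-RAM-120GB-MLX
print(pick_size_point(96))    # None: 91 GB does not fit in 96 - 8 = 88 GB
```

On a machine where a checkpoint fits, the chosen repo could then be loaded with an MLX runtime such as `mlx-lm`; the helper only does the size arithmetic.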