Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for krampenschiesser/MiniMax-M2.7-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for krampenschiesser/MiniMax-M2.7-GGUF to start chatting
These are some quants I use depending on available memory.
I also added an NVFP4 quant in the hope that custom kernels for it will emerge in the future.
I recommend the Q3K-IQ4XS and IQ4XS-Q5K quants.
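If you only want one of the recommended quants rather than the whole repo, you can filter the download by filename pattern. A sketch using huggingface-cli; the glob below is an assumption about how the files are named, so adjust it to match the actual filenames on the model page:

```shell
# Download only the IQ4XS-Q5K quant files from this repo.
# The --include glob is a guess at the naming scheme; check the
# "Files" tab on the model page for the real filenames.
huggingface-cli download krampenschiesser/MiniMax-M2.7-GGUF \
  --include "*IQ4XS-Q5K*" \
  --local-dir ./MiniMax-M2.7-GGUF
```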
KLD
Due to hardware restrictions I had to use the Q8 version as the baseline for the KLD runs.
However, it is quantized the same way as the original model, which also uses 8 bits for the expert weights, so the difference should not be big.
Sadly, some KLD runs produce weird outputs (NaN floats from llama-perplexity), so take these numbers with a lake of salt.
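For reference, KLD numbers like these are typically produced with llama.cpp's llama-perplexity tool in two passes: one run on the baseline model saves its logits, and a second run per quant compares against them. A sketch; the model filenames and the evaluation text file are placeholders, not the exact files used here:

```shell
# 1) Save baseline logits from the Q8 model (the output file can get large).
llama-perplexity -m MiniMax-M2.7-Q8_0.gguf -f wiki.test.raw \
  --kl-divergence-base logits-q8.kld

# 2) Compare a smaller quant against the saved baseline logits.
llama-perplexity -m MiniMax-M2.7-IQ4XS.gguf -f wiki.test.raw \
  --kl-divergence-base logits-q8.kld --kl-divergence
```

A NaN in the reported KLD usually points at numerical trouble in one of the two runs, which is why the caveat above applies.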
Install Unsloth Studio (macOS, Linux, WSL)
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for krampenschiesser/MiniMax-M2.7-GGUF to start chatting