Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for MaziyarPanahi/Llama-Guard-3-1B-GGUF to start chatting

No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for MaziyarPanahi/Llama-Guard-3-1B-GGUF to start chatting

MaziyarPanahi/Llama-Guard-3-1B-GGUF contains GGUF format model files for meta-llama/Llama-Guard-3-1B.
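If you want to fetch a single quantized file from the repository without any extra tooling, you can build the Hub's direct-download URL yourself. A minimal sketch; the quant filename below is an assumption for illustration, so check the repository's file list for the actual names:

```python
# Repo id is from this model card; the quant filename is an assumed example.
REPO = "MaziyarPanahi/Llama-Guard-3-1B-GGUF"
FILENAME = "Llama-Guard-3-1B.Q4_K_M.gguf"  # hypothetical quant file

def hub_file_url(repo: str, filename: str, revision: str = "main") -> str:
    """Build the Hugging Face Hub direct-download (resolve) URL for one file."""
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

print(hub_file_url(REPO, FILENAME))
```

The resulting URL can be passed to curl or wget, or you can let `huggingface-cli download` handle the same request with resumable transfers.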
GGUF is a file format introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.
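A practical consequence of the format change: GGUF files are easy to identify, because every GGUF file begins with the four ASCII magic bytes `GGUF`. A small sketch that checks a downloaded file's magic before loading it:

```python
def is_gguf(path: str) -> bool:
    """Return True if the file begins with the GGUF magic bytes b'GGUF'."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

This is a quick sanity check for truncated or mislabeled downloads; it does not validate the rest of the header.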
Here is an incomplete list of clients and libraries that are known to support GGUF:
llama.cpp, the source project for GGUF, offering a CLI and a server
llama-cpp-python, a Python library with GPU acceleration and an OpenAI-compatible API server
text-generation-webui, a widely used web UI
KoboldCpp, a web UI with GPU acceleration, especially popular for storytelling
LM Studio, an easy-to-use GUI for Windows, macOS, and Linux
GPT4All, a free, open-source local chat GUI
candle, a Rust ML framework with GGUF support
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Base model
meta-llama/Llama-Guard-3-1B
Install Unsloth Studio (macOS, Linux, WSL)
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for MaziyarPanahi/Llama-Guard-3-1B-GGUF to start chatting