How to use from Unsloth Studio
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for franco334578/search-analyser-1b-it-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for franco334578/search-analyser-1b-it-GGUF to start chatting
Quick Links
search-analyser-1b-it-GGUF - GGUF
This model was finetuned and converted to GGUF format using Unsloth.
Example usage:
- For text-only LLMs: llama-cli --hf repo_id/model_name -p "why is the sky blue?"
- For multimodal models: llama-mtmd-cli -m model_name.gguf --mmproj mmproj_file.gguf
Available Model files:
gemma-3-1b-it.Q8_0.gguf
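As a concrete sketch of the generic commands above, the quant in this repo could be run directly from Hugging Face (assuming a recent llama.cpp build where the `-hf` shortcut downloads GGUF files automatically):

```shell
# Download and chat with the Q8_0 quant straight from Hugging Face
# (-hf is llama.cpp's Hugging Face shortcut; flag syntax assumed from recent builds)
llama-cli -hf franco334578/search-analyser-1b-it-GGUF -p "why is the sky blue?"

# Or serve an OpenAI-compatible HTTP endpoint instead of an interactive chat
llama-server -hf franco334578/search-analyser-1b-it-GGUF --port 8080
```

Older llama.cpp builds may instead take the split form `--hf-repo franco334578/search-analyser-1b-it-GGUF --hf-file gemma-3-1b-it.Q8_0.gguf`.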
Ollama
An Ollama Modelfile is included for easy deployment.
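For reference, a minimal Modelfile for this quant might look like the following sketch (the parameter values here are illustrative assumptions; the Modelfile bundled in this repo may differ):

```
FROM ./gemma-3-1b-it.Q8_0.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```

With a Modelfile in place, `ollama create search-analyser -f Modelfile` registers the model locally and `ollama run search-analyser` starts an interactive chat.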
Note
The model's BOS token behavior was adjusted for GGUF compatibility.
Install Unsloth Studio (macOS, Linux, WSL)
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for franco334578/search-analyser-1b-it-GGUF to start chatting