stt-qwen3-1.7b-merged
Model creator: aldigobbler
Original model: aldigobbler/stt-qwen3-1.7b-merged
GGUF quantization: provided by aldigobbler using llama.cpp
Special thanks
Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Use with Ollama
ollama run "hf.co/aldigobbler/stt-qwen3-1.7b-merged-GGUF:Q4_K_M"
Use with LM Studio
lms load "aldigobbler/stt-qwen3-1.7b-merged-GGUF"
Use with llama.cpp CLI
llama-cli --hf "aldigobbler/stt-qwen3-1.7b-merged-GGUF:Q4_K_M" -p "The meaning to life and the universe is"
Use with llama.cpp Server
llama-server --hf "aldigobbler/stt-qwen3-1.7b-merged-GGUF:Q4_K_M" -c 4096
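Once llama-server is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it with curl, assuming the default bind address of 127.0.0.1:8080:

```shell
# Send a chat completion request to the local llama-server instance.
# The /v1/chat/completions endpoint follows the OpenAI API shape.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "The meaning to life and the universe is"}
    ],
    "max_tokens": 64
  }'
```

The response is a JSON object whose generated text is under `choices[0].message.content`.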
Model tree for aldigobbler/stt-qwen3-1.7b-merged-GGUF
Base model: Qwen/Qwen3-1.7B-Base
Finetuned: Qwen/Qwen3-1.7B
Finetuned: aldigobbler/stt-qwen3-1.7b-merged
Dataset used to train aldigobbler/stt-qwen3-1.7b-merged-GGUF
Evaluation results
- Validation Loss on the stt-correction validation set (self-reported): 4.655