## How to use with llama.cpp

### Install from Homebrew (macOS/Linux)

```sh
brew install llama.cpp
```

### Install from WinGet (Windows)

```sh
winget install llama.cpp
```

Once installed, you can serve or query the model straight from the Hugging Face repo:

```sh
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF:Q4_K_M
```

### Use a pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF:Q4_K_M
```

### Build from source

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF:Q4_K_M
```
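The build above is CPU-only. As a hedged aside (not part of the original card), llama.cpp's CMake build exposes backend switches; for example, a CUDA-enabled build for NVIDIA GPUs can be configured like this:

```sh
# Reconfigure with the CUDA backend enabled (requires the CUDA toolkit);
# GGML_CUDA is the llama.cpp CMake option for NVIDIA GPU offload.
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli
```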
### Use Docker

```sh
docker model run hf.co/matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF:Q4_K_M
```
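However you install it, `llama-server` exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it, assuming the server is running on its default address `http://localhost:8080` (adjustable with `--host`/`--port`):

```sh
# Send a chat completion request to the local llama-server instance;
# no "model" field is needed, since the server hosts a single model.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Merhaba! Kendini kısaca tanıtır mısın?"}
        ],
        "temperature": 0.7
      }'
```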
## Quick Links

- [Llama3-8B-Instruct-Turkish-Finetuned GGUF Quantized Models](https://huggingface.co/matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF)
## Technical Details

- Quantization tool: llama.cpp
- Version: 5162 (2016f07b)
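For reference, this is roughly how such quants are produced with that tool. A minimal sketch, assuming a local copy of the fine-tuned model; the file names below are hypothetical placeholders, not the actual artifacts of this repo:

```sh
# Convert the Hugging Face checkpoint to a full-precision GGUF file
# (convert_hf_to_gguf.py ships with the llama.cpp source tree),
# then quantize it; all file names here are hypothetical placeholders.
python convert_hf_to_gguf.py /path/to/Llama3-8B-Instruct-Turkish-Finetuned \
  --outfile llama3-8b-turkish-f16.gguf --outtype f16

./build/bin/llama-quantize llama3-8b-turkish-f16.gguf \
  llama3-8b-turkish-Q4_K_M.gguf Q4_K_M
```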
## Model Information
- Base Model: matrixportal/Llama3-8B-Instruct-Turkish-Finetuned
- Quantized by: matrixportal
## Available Files
| 🚀 Download | 🔢 Type | 📝 Description |
|---|---|---|
| Download | Q2_K | Tiny size, lowest quality (emergency use only) |
| Download | Q3_K_S | Very small, low quality (basic tasks) |
| Download | Q3_K_M | Small, acceptable quality |
| Download | Q3_K_L | Small, better than Q3_K_M (good for low RAM) |
| Download | Q4_0 | Standard 4-bit (fast on ARM) |
| Download | Q4_K_S | 4-bit optimized (good space savings) |
| Download | Q4_K_M | 4-bit balanced (recommended default) |
| Download | Q5_0 | 5-bit high quality |
| Download | Q5_K_S | 5-bit optimized |
| Download | Q5_K_M | 5-bit best (recommended HQ option) |
| Download | Q6_K | 6-bit near-perfect (premium quality) |
| Download | Q8_0 | 8-bit maximum (overkill for most) |
| Download | F16 | Full precision (maximum accuracy) |
💡 Q4_K_M provides the best balance for most use cases.
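If you would rather fetch a single file than rely on the `-hf` downloader, the `huggingface-cli` tool (from the `huggingface_hub` Python package) can pull one quant directly. A sketch; the exact `.gguf` file name is a hypothetical placeholder, so check the repository's file list for the real name:

```sh
# Download one quant file from the repo; the .gguf file name below is a
# hypothetical placeholder, check the repo's file listing for the real one.
huggingface-cli download matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF \
  llama3-8b-instruct-turkish-finetuned.Q4_K_M.gguf --local-dir .

# Point llama-cli at the local file instead of using -hf:
./llama-cli -m llama3-8b-instruct-turkish-finetuned.Q4_K_M.gguf -p "Merhaba!"
```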
## Model tree for matrixportalx/Llama3-8B-Instruct-Turkish-Finetuned-GGUF

- Base model: meta-llama/Meta-Llama-3-8B-Instruct
- Fine-tune: matrixportal/Llama3-8B-Instruct-Turkish-Finetuned
- This repo: GGUF quantizations of the fine-tune