# Hotep Intelligence Kush V4 (GGUF)
GGUF quantizations of hotepfederales/hotep-kush-v4, the current production model powering askhotep.ai and @hotep_llm_bot.
## Available Quantizations
| File | Quantization | Size | Use Case |
|---|---|---|---|
| hotep-kush-v4.Q4_K_M.gguf | Q4_K_M | ~4.7 GB | Recommended; production default |
| hotep-kush-v4.Q8_0.gguf | Q8_0 | ~8.5 GB | Maximum quality; needs 12+ GB RAM |
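As a sanity check on these file sizes, a rough bits-per-weight figure can be derived from the file size and the base model's parameter count (the ~8.03 billion figure for Llama 3.1 8B is an assumption here, and GGUF files also carry metadata and tokenizer tables, so treat this as an estimate only):

```python
def approx_bits_per_weight(file_size_gb: float, n_params: float = 8.03e9) -> float:
    """Rough bits per weight from file size; ignores GGUF metadata overhead."""
    return file_size_gb * 1e9 * 8 / n_params

# Q4_K_M at ~4.7 GB works out to roughly 4.7 bits per weight;
# Q8_0 at ~8.5 GB lands near 8.5 bits per weight.
print(round(approx_bits_per_weight(4.7), 1))  # 4.7
print(round(approx_bits_per_weight(8.5), 1))  # 8.5
```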
## Model Details
| Parameter | Value |
|---|---|
| Base model | meta-llama/Meta-Llama-3.1-8B-Instruct |
| LoRA rank | 32, alpha 32, RSLoRA |
| max_seq_length | 4096 |
| Quality eval | 100/100, 0% CJK, 0% rubric leakage |
| Persona consistency | 10/10 |
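The RSLoRA entry above refers to rank-stabilized LoRA, which scales the adapter output by alpha/sqrt(r) rather than the standard alpha/r. A quick illustration of the difference at the rank and alpha listed in the table (a sketch; the actual training configuration may differ in other respects):

```python
import math

rank, alpha = 32, 32  # values from the Model Details table

standard_scale = alpha / rank            # classic LoRA scaling: 32 / 32
rslora_scale = alpha / math.sqrt(rank)   # rank-stabilized LoRA: 32 / sqrt(32)

print(standard_scale)           # 1.0
print(round(rslora_scale, 3))   # 5.657
```

At equal alpha, RSLoRA gives the adapter a larger effective contribution as rank grows, which is why rank-32 RSLoRA behaves differently from a plain rank-32, alpha-32 LoRA.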
## Key Features
- Deep reasoning traces: structured chain-of-thought before every response
- Expanded knowledge base: African history, economics, and philosophy
- Stronger Ma'at alignment: cleaner, culturally grounded answers
- Context retention: handles complex multi-turn conversations
## Quick Start (Ollama)
```shell
ollama run hotep-llm-kush-v4
```
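If `hotep-llm-kush-v4` is not already registered with your local Ollama, you can create it from a downloaded GGUF with a minimal Modelfile (a sketch; the file path assumes the Q4_K_M quant sits in the current directory):

```
FROM ./hotep-kush-v4.Q4_K_M.gguf
```

Save this as `Modelfile`, then register and run the model with `ollama create hotep-llm-kush-v4 -f Modelfile` followed by `ollama run hotep-llm-kush-v4`.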
## Quick Start (llama-cpp-python)
```python
from llama_cpp import Llama

# Load the Q4_K_M quant with a 4096-token context window (matches max_seq_length)
llm = Llama(model_path="hotep-kush-v4.Q4_K_M.gguf", n_ctx=4096)

output = llm("What is the principle of Ma'at?", max_tokens=512)
print(output["choices"][0]["text"])
```
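Note that calling `llm(...)` directly sends a raw completion prompt with no chat template applied. For multi-turn use against a Llama 3.1 Instruct base, the prompt should follow the Llama 3.1 chat format; `llm.create_chat_completion(messages=[...])` applies it for you, but the template can also be assembled by hand. A sketch of the format (the system prompt text here is an invented example, not part of this repo):

```python
def format_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1 Instruct prompt by hand."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama31_prompt(
    "You are a helpful guide to African philosophy.",  # invented example
    "What is the principle of Ma'at?",
)
print(prompt.startswith("<|begin_of_text|>"))  # True
```

The trailing assistant header leaves the prompt open for the model to generate its reply.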
## About Hotep Intelligence
Sovereign AI trained on African history, philosophy, and wisdom traditions; no corporate surveillance, no data collection.
- Website: askhotep.ai
- Telegram bot: @hotep_llm_bot
- Knowledge base: knowledge.askhotep.ai
- Base model (safetensors): hotepfederales/hotep-kush-v4