How to use from Ollama

ollama run hf.co/antiven0m/llava-v1.5-13b-dpo-gguf:Q4_K_M
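Once the model has been pulled, it can also be queried programmatically through Ollama's local HTTP API. The sketch below is a minimal example, not an official client: it assumes an Ollama server is running on the default port 11434 and uses only the Python standard library; the prompt text is illustrative.

```python
import json
import urllib.request

# Request payload for Ollama's /api/generate endpoint.
# The model tag matches the `ollama run` command above.
payload = {
    "model": "hf.co/antiven0m/llava-v1.5-13b-dpo-gguf:Q4_K_M",
    "prompt": "Describe the scene in one sentence.",  # illustrative prompt
    "stream": False,  # ask for a single JSON response instead of a stream
}

def generate(payload, host="http://localhost:11434"):
    """Send the payload to a local Ollama server and return the response text.

    Requires `ollama serve` (or the desktop app) to be running locally.
    """
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Setting `"stream": False` is the simplest way to get a complete answer in one JSON object; with streaming enabled (the default), the endpoint returns one JSON object per generated chunk.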
Downloads last month: 156
Format: GGUF
Model size: 13B params
Architecture: llama
Quantization: 4-bit (Q4_K_M)