How to use with Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for gaianet/DeepHermes-3-Llama-3-8B-Preview-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for gaianet/DeepHermes-3-Llama-3-8B-Preview-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for gaianet/DeepHermes-3-Llama-3-8B-Preview-GGUF to start chatting
Quick Links

gaianet/DeepHermes-3-Llama-3-8B-Preview-GGUF

Original Model

NousResearch/DeepHermes-3-Llama-3-8B-Preview

Run with GaiaNet

Prompt template:

IMPORTANT: To toggle REASONING ON, you must use the following system prompt:

You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
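With reasoning toggled on, the model wraps its chain of thought in `<think> </think>` tags. A minimal sketch for separating the reasoning from the final answer (the helper name and the sample reply are illustrative, not part of the model's API):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a DeepHermes response into (reasoning, answer).

    Assumes the model enclosed its internal monologue in
    <think> </think> tags as the system prompt requests; if no
    tags are found, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Hypothetical response, for illustration only:
reply = "<think>2 + 2 is 4.</think>The answer is 4."
thoughts, answer = split_reasoning(reply)
```

If the model ever omits the tags (e.g. with reasoning toggled off), the function degrades gracefully and returns the full text as the answer.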

prompt template: llama-3-chat
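If you are formatting prompts by hand rather than through a chat API, the llama-3-chat template renders a system and user turn with header and end-of-turn tokens. A sketch of one turn, using the reasoning system prompt above (the helper function is illustrative):

```python
# Reasoning system prompt from this model card (abridged formatting only).
REASONING_SYSTEM_PROMPT = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem and deliberate with yourself via systematic "
    "reasoning processes to help come to a correct solution prior to answering. "
    "You should enclose your thoughts and internal monologue inside "
    "<think> </think> tags, and then provide your solution or response to the problem."
)

def llama3_chat_prompt(user_message: str,
                       system_prompt: str = REASONING_SYSTEM_PROMPT) -> str:
    """Render one system + user turn in the llama-3-chat template,
    ending at the assistant header so the model continues from there."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_chat_prompt("What is 17 * 23?")
```

Chat-style endpoints apply this template for you; manual formatting is only needed when calling a raw completion interface.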

Context size:

chat_ctx_size: 128000

Quantized with llama.cpp b4743

Downloads last month: 188

Format: GGUF
Model size: 8B params
Architecture: llama
Hardware compatibility

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Model tree for gaianet/DeepHermes-3-Llama-3-8B-Preview-GGUF

Quantized (28 models, including this one)