This is a GGUF variant of the GIGABATEMAN-7B model.

Use with koboldcpp or llama.cpp; do not use GPT4All.
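
For example, here is a minimal sketch of loading one of these GGUF files with the llama-cpp-python bindings. The exact filename is an assumption; substitute whichever quantization you actually download from this repo.

```python
# Minimal sketch: run a GGUF quantization of GIGABATEMAN-7B via llama-cpp-python.
# The model_path filename is hypothetical; use the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="GIGABATEMAN-7B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 for CPU only
)

output = llm(
    "Hello, how are you?",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```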

Available quantizations: 2-bit, 4-bit, 5-bit, 6-bit, 8-bit.
