This is a GGUF variant of the GIGABATEMAN-7B model.
Use it with koboldcpp or llama.cpp; do not use it with GPT4All.
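As a minimal sketch, a GGUF file from this repo can be loaded with llama.cpp's CLI. The filename below is a placeholder; substitute whichever quantization you downloaded:

```sh
# Placeholder filename -- replace with the actual GGUF file you downloaded.
# -m: model path, -p: prompt, -n: max tokens to generate, --temp: sampling temperature
./llama-cli -m GIGABATEMAN-7B.Q4_K_M.gguf -p "Hello" -n 128 --temp 0.8
```

Lower-bit quantizations use less memory at some cost in output quality; 8-bit is closest to the base model.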
Available quantizations:
- 2-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- Base model