How to use from Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for QuantLLM/TinyLlama-1.1B-Chat-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for QuantLLM/TinyLlama-1.1B-Chat-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantLLM/TinyLlama-1.1B-Chat-GGUF to start chatting
Quick Links

πŸ¦™ TinyLlama-1.1B-Chat-GGUF

TinyLlama/TinyLlama-1.1B-Chat-v1.0 converted to GGUF format

QuantLLM Format Quantization

⭐ Star QuantLLM on GitHub


πŸ“– About This Model

This model is TinyLlama/TinyLlama-1.1B-Chat-v1.0 converted to GGUF format for use with llama.cpp, Ollama, LM Studio, and other compatible inference engines.

| Property | Value |
|---|---|
| Base Model | TinyLlama/TinyLlama-1.1B-Chat-v1.0 |
| Format | GGUF |
| Quantization | Q4_K_M |
| License | apache-2.0 |
| Created With | QuantLLM |

πŸš€ Quick Start

Option 1: Python (llama-cpp-python)

from llama_cpp import Llama

# Load the model
llm = Llama.from_pretrained(
    repo_id="codewithdark/TinyLlama-1.1B-Chat-GGUF",
    filename="TinyLlama-1.1B-Chat-GGUF.Q4_K_M.gguf",
)

# Generate text
output = llm(
    "Write a short story about a robot learning to paint:",
    max_tokens=256,
    echo=True
)
print(output["choices"][0]["text"])

Option 2: Ollama

# Download the model
huggingface-cli download codewithdark/TinyLlama-1.1B-Chat-GGUF TinyLlama-1.1B-Chat-GGUF.Q4_K_M.gguf --local-dir .

# Create Modelfile
echo 'FROM ./TinyLlama-1.1B-Chat-GGUF.Q4_K_M.gguf' > Modelfile

# Import to Ollama
ollama create tinyllama-1.1b-chat-gguf -f Modelfile

# Chat with the model
ollama run tinyllama-1.1b-chat-gguf
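
The one-line Modelfile above relies on Ollama's defaults; chat quality is usually better if the Modelfile also sets the prompt template and stop token. A sketch, assuming this GGUF follows the upstream TinyLlama chat format (Zephyr-style tags):

# Modelfile (sketch; template and stop token are assumptions based on the base model)
FROM ./TinyLlama-1.1B-Chat-GGUF.Q4_K_M.gguf
TEMPLATE """<|system|>
{{ .System }}</s>
<|user|>
{{ .Prompt }}</s>
<|assistant|>
"""
PARAMETER stop "</s>"

Then rebuild and run as above: ollama create tinyllama-1.1b-chat-gguf -f Modelfile, followed by ollama run tinyllama-1.1b-chat-gguf.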

Option 3: LM Studio

  1. Download the .gguf file from the Files tab above
  2. Open LM Studio β†’ My Models β†’ Add Model
  3. Select the downloaded file
  4. Start chatting!

Option 4: llama.cpp CLI

# Download
huggingface-cli download codewithdark/TinyLlama-1.1B-Chat-GGUF TinyLlama-1.1B-Chat-GGUF.Q4_K_M.gguf --local-dir .

# Run inference
./llama-cli -m TinyLlama-1.1B-Chat-GGUF.Q4_K_M.gguf -p "Hello! " -n 128
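
llama.cpp also ships llama-server, which serves the same file over an OpenAI-compatible HTTP API. A minimal sketch (the port and context size below are arbitrary choices, not project defaults):

# Start a local OpenAI-compatible server
./llama-server -m TinyLlama-1.1B-Chat-GGUF.Q4_K_M.gguf -c 2048 --port 8080

# Query the chat endpoint with curl
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}], "max_tokens": 128}'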

πŸ“Š Model Details

| Property | Value |
|---|---|
| Original Model | TinyLlama/TinyLlama-1.1B-Chat-v1.0 |
| Format | GGUF |
| Quantization | Q4_K_M |
| License | apache-2.0 |
| Export Date | 2026-04-29 |
| Exported By | QuantLLM v2.1 |

πŸ“¦ Quantization Details

This model uses Q4_K_M quantization:

| Property | Value |
|---|---|
| Type | Q4_K_M |
| Bits | 4-bit |
| Quality | 🟒 ⭐ Recommended - best quality/size balance |

All Available GGUF Quantizations

| Type | Bits | Quality | Best For |
|---|---|---|---|
| Q2_K | 2-bit | πŸ”΄ Lowest | Extreme size constraints |
| Q3_K_M | 3-bit | 🟠 Low | Very limited memory |
| Q4_K_M ⭐ | 4-bit | 🟒 Good | Most users |
| Q5_K_M | 5-bit | 🟒 High | Quality-focused |
| Q6_K | 6-bit | πŸ”΅ Very High | Near-original |
| Q8_0 | 8-bit | πŸ”΅ Excellent | Maximum quality |
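
To try a different quantization, download that variant's file instead of the Q4_K_M one. A sketch, assuming other variants are published in this repo with the same filename pattern (only Q4_K_M is confirmed by the benchmark below; check the Files tab first):

# Hypothetical Q8_0 filename following the pattern above
huggingface-cli download codewithdark/TinyLlama-1.1B-Chat-GGUF TinyLlama-1.1B-Chat-GGUF.Q8_0.gguf --local-dir .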

πŸš€ Created with QuantLLM


Convert any model to GGUF, ONNX, or MLX in one line!

from quantllm import turbo

# Load any HuggingFace model
model = turbo("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Export to any format
model.export("gguf", quantization="Q4_K_M")

# Push to HuggingFace
model.push("your-repo", format="gguf")

πŸ“š Documentation Β· πŸ› Report Issue Β· πŸ’‘ Request Feature

πŸ“Š Benchmark Results (QuantLLM v2.1)

Exported with QuantLLM from TinyLlama/TinyLlama-1.1B-Chat-v1.0 (1.1B params).

| Quantization | File | Size | Export Time |
|---|---|---|---|
| Q4_K_M ⭐ | TinyLlama-1.1B.Q4_K_M.gguf | 636.9 MB | 92.6s |
