all-MiniLM-L12-v2 GGUF

GGUF format of sentence-transformers/all-MiniLM-L12-v2 for use with CrispEmbed.

all-MiniLM-L12-v2 is the 12-layer upgrade of all-MiniLM-L6-v2 and produces higher-quality 384-dimensional mean-pooled embeddings.
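
"Mean-pooled" here means the sentence embedding is the average of the final-layer token embeddings over non-padding positions. A minimal NumPy sketch of that pooling step (illustrative only, not CrispEmbed's internals):

import numpy as np

def mean_pool(token_embeddings, attention_mask):
    # token_embeddings: (seq_len, 384) final-layer hidden states.
    # attention_mask:   (seq_len,) 1 for real tokens, 0 for padding.
    mask = attention_mask[:, None].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=0)  # mask out padding positions
    count = max(mask.sum(), 1e-9)                   # avoid divide-by-zero on empty input
    return summed / count                           # (384,) sentence embedding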

Files

File                         Quantization  Size
all-MiniLM-L12-v2-q4_k.gguf  Q4_K          24 MB
all-MiniLM-L12-v2-q8_0.gguf  Q8_0          34 MB
all-MiniLM-L12-v2.gguf       F32           128 MB

Quick Start

# Download
huggingface-cli download cstr/all-MiniLM-L12-v2-GGUF all-MiniLM-L12-v2-q4_k.gguf --local-dir .

# Run with CrispEmbed
./crispembed -m all-MiniLM-L12-v2-q4_k.gguf "Hello world"

# Or with auto-download
./crispembed -m all-MiniLM-L12-v2 "Hello world"
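
If you prefer Python, the same file can be fetched with huggingface_hub; a short sketch using the repo and filename above:

from huggingface_hub import hf_hub_download

# Downloads into the local Hugging Face cache and returns the resolved path.
path = hf_hub_download(
    repo_id="cstr/all-MiniLM-L12-v2-GGUF",
    filename="all-MiniLM-L12-v2-q4_k.gguf",
)
print(path)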

Model Details

Property             Value
Architecture         BERT
Parameters           33M
Embedding Dimension  384
Layers               12
Pooling              mean
Tokenizer            WordPiece
Base Model           sentence-transformers/all-MiniLM-L12-v2
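
For a reference point, the base model can be run directly with the sentence-transformers library; CrispEmbed should reproduce these embeddings up to quantization error:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")
emb = model.encode(["Hello world"])  # numpy array of shape (1, 384)
print(emb.shape)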

Verification

Embeddings verified against the HuggingFace sentence-transformers reference implementation: cosine similarity >= 0.999 on test texts.
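
A minimal sketch of such a check, assuming crispembed prints the embedding to stdout as whitespace-separated floats (the actual output format is not documented here, so treat that as an assumption):

import subprocess
import numpy as np
from sentence_transformers import SentenceTransformer

text = "Hello world"

# Reference embedding from the original PyTorch model.
ref = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2").encode(text)

# CrispEmbed embedding; the output parsing below is an assumption (see above).
out = subprocess.run(
    ["./crispembed", "-m", "all-MiniLM-L12-v2-q4_k.gguf", text],
    capture_output=True, text=True, check=True,
).stdout
got = np.array([float(x) for x in out.split()])

cos = ref @ got / (np.linalg.norm(ref) * np.linalg.norm(got))
print(f"cosine similarity: {cos:.6f}")  # expect >= 0.999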

Usage with CrispEmbed

CrispEmbed is a lightweight C/C++ text embedding inference engine built on ggml: no Python runtime, no ONNX. It supports the BERT, XLM-R, Qwen3, and Gemma3 architectures.

# Build CrispEmbed
git clone https://github.com/CrispStrobe/CrispEmbed
cd CrispEmbed
cmake -S . -B build && cmake --build build -j

# Encode
./build/crispembed -m all-MiniLM-L12-v2-q4_k.gguf "query text"

# Server mode
./build/crispembed-server -m all-MiniLM-L12-v2-q4_k.gguf --port 8080
curl -X POST http://localhost:8080/v1/embeddings \
    -d '{"input": ["Hello world"], "model": "all-MiniLM-L12-v2"}'

Credits

Original model: sentence-transformers/all-MiniLM-L12-v2 by the sentence-transformers team.