# all-MiniLM-L12-v2 GGUF
GGUF format of sentence-transformers/all-MiniLM-L12-v2 for use with CrispEmbed.
all-MiniLM-L12-v2 is the 12-layer upgrade of all-MiniLM-L6-v2, producing higher-quality 384-dimensional mean-pooled sentence embeddings.

## Files
| File | Quantization | Size |
|---|---|---|
| all-MiniLM-L12-v2-q4_k.gguf | Q4_K | 24 MB |
| all-MiniLM-L12-v2-q8_0.gguf | Q8_0 | 34 MB |
| all-MiniLM-L12-v2.gguf | F32 | 128 MB |
## Quick Start

```bash
# Download
huggingface-cli download cstr/all-MiniLM-L12-v2-GGUF all-MiniLM-L12-v2-q4_k.gguf --local-dir .

# Run with CrispEmbed
./crispembed -m all-MiniLM-L12-v2-q4_k.gguf "Hello world"

# Or with auto-download
./crispembed -m all-MiniLM-L12-v2 "Hello world"
```
## Model Details
| Property | Value |
|---|---|
| Architecture | BERT |
| Parameters | 33M |
| Embedding Dimension | 384 |
| Layers | 12 |
| Pooling | mean |
| Tokenizer | WordPiece |
| Base Model | sentence-transformers/all-MiniLM-L12-v2 |
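The "Pooling: mean" row means the model's per-token embeddings are averaged over non-padding positions to produce a single sentence vector. A minimal sketch of that step in C (function name and dimensions are illustrative, not CrispEmbed's actual API; the real model uses 384 dimensions):

```c
#include <stddef.h>

/* Mean pooling: average token embeddings over positions where the
 * attention mask is 1, skipping padding tokens.
 *   tok:  n_tok x dim token embeddings, row-major
 *   mask: n_tok attention-mask entries (1 = real token, 0 = padding)
 *   out:  dim-element result vector */
static void mean_pool(const float *tok, const int *mask,
                      size_t n_tok, size_t dim, float *out) {
    size_t count = 0;
    for (size_t d = 0; d < dim; d++) out[d] = 0.0f;
    for (size_t t = 0; t < n_tok; t++) {
        if (!mask[t]) continue;            /* skip padding */
        count++;
        for (size_t d = 0; d < dim; d++)
            out[d] += tok[t * dim + d];
    }
    for (size_t d = 0; d < dim; d++)
        out[d] /= (float)(count ? count : 1);
}
```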
## Verification

Output embeddings were verified against the HuggingFace sentence-transformers reference implementation: cosine similarity >= 0.999 on a set of test texts.
## Usage with CrispEmbed
CrispEmbed is a lightweight C/C++ text embedding inference engine using ggml. No Python runtime, no ONNX. Supports BERT, XLM-R, Qwen3, and Gemma3 architectures.
```bash
# Build CrispEmbed
git clone https://github.com/CrispStrobe/CrispEmbed
cd CrispEmbed
cmake -S . -B build && cmake --build build -j

# Encode
./build/crispembed -m all-MiniLM-L12-v2-q4_k.gguf "query text"

# Server mode
./build/crispembed-server -m all-MiniLM-L12-v2-q4_k.gguf --port 8080
curl -X POST http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": ["Hello world"], "model": "all-MiniLM-L12-v2"}'
```
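The request above follows the OpenAI-style `/v1/embeddings` schema; assuming the server also mirrors the OpenAI-style response shape (an assumption, not confirmed by this card), a reply would look roughly like this, with field values illustrative:

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.012, -0.034, ...]
    }
  ],
  "model": "all-MiniLM-L12-v2"
}
```

Each `embedding` array would have 384 entries, matching the model's embedding dimension.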
## Credits

- Original model: sentence-transformers/all-MiniLM-L12-v2
- Inference engine: CrispEmbed (ggml-based)
- Conversion script: convert-bert-embed-to-gguf.py
## Model tree

- Base model: microsoft/MiniLM-L12-H384-uncased
- Fine-tuned: sentence-transformers/all-MiniLM-L12-v2
- Quantized (this repo): cstr/all-MiniLM-L12-v2-GGUF