# octen-0.6b GGUF
GGUF format of Octen/Octen-Embedding-0.6B for use with CrispEmbed and Ollama.
## Files
| File | Quantization | Size |
|---|---|---|
| `octen-0.6b-q4_k.gguf` | Q4_K | – |
| `octen-0.6b-q8_0.gguf` | Q8_0 | – |
| `octen-0.6b.gguf` | F32 | – |
Recommended: Q8_0 for quality, Q4_K for size. Both match the HuggingFace reference embeddings with cosine similarity 1.0 (computed on L2-normalized vectors).
## Quick Start

### CrispEmbed

```sh
./crispembed -m octen-0.6b "Hello world"
./crispembed-server -m octen-0.6b --port 8080
```
### Ollama (with CrispStrobe fork)

```sh
echo "FROM octen-0.6b-q8_0.gguf" > Modelfile
ollama create octen-0.6b -f Modelfile
curl http://localhost:11434/api/embed -d '{"model":"octen-0.6b","input":["Hello world"]}'
```
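The same request from Python, mirroring the curl call above (a minimal sketch; it assumes the fork serves Ollama's standard `/api/embed` response with an `embeddings` array):

```python
import requests

# Mirrors the curl call above against a local Ollama instance.
resp = requests.post(
    "http://localhost:11434/api/embed",
    json={"model": "octen-0.6b", "input": ["Hello world"]},
)
resp.raise_for_status()
vectors = resp.json()["embeddings"]  # assumed: one vector per input string
print(len(vectors), len(vectors[0]))
```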
### Python (CrispEmbed)

```python
from crispembed import CrispEmbed

model = CrispEmbed("octen-0.6b-q8_0.gguf")
vectors = model.encode(["Hello world", "Goodbye world"])
```
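A common next step is comparing the returned vectors. The sketch below assumes `encode` returns one 1024-dimensional vector per input (list or array-like) and computes their cosine similarity:

```python
import numpy as np

# `vectors` comes from the snippet above: one vector per input text.
a, b = np.asarray(vectors[0]), np.asarray(vectors[1])
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cosine:.4f}")
```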
## Model Details

| Property | Value |
|---|---|
| Architecture | Qwen3 |
| Parameters | 600M |
| Embedding Dimension | 1024 |
| Layers | 28 |
| Pooling | last-token |
| Tokenizer | BPE |
| Language | multilingual |
| Q8_0 vs HuggingFace | cosine similarity 1.0 (L2-normalized) |
| Q4_K vs HuggingFace | cosine similarity 1.0 (L2-normalized) |
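"Last-token" pooling means the sentence embedding is the hidden state of the final non-padding token. As an illustration only (the actual pooling happens inside CrispEmbed/ggml; the tensor names here are hypothetical), in NumPy:

```python
import numpy as np

def last_token_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len) of 0/1."""
    # Index of the last non-padding token in each sequence.
    last = attention_mask.sum(axis=1).astype(int) - 1          # (batch,)
    pooled = hidden_states[np.arange(hidden_states.shape[0]), last]  # (batch, dim)
    # L2-normalize so a dot product equals cosine similarity.
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)
```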
## Server API

CrispEmbed server supports four API dialects:

- `POST /embed` -- native
- `POST /v1/embeddings` -- OpenAI-compatible
- `POST /api/embed` -- Ollama-compatible
- `POST /api/embeddings` -- Ollama legacy
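For example, the OpenAI-compatible dialect can be queried like this (a sketch assuming the standard OpenAI embeddings schema: a `{"model", "input"}` request and a `data[i].embedding` response):

```python
import requests

# Assumes crispembed-server is running locally (see Quick Start) and that the
# OpenAI-compatible dialect follows the standard embeddings schema.
resp = requests.post(
    "http://localhost:8080/v1/embeddings",
    json={"model": "octen-0.6b", "input": ["Hello world", "Goodbye world"]},
)
resp.raise_for_status()
embeddings = [item["embedding"] for item in resp.json()["data"]]
print(len(embeddings), len(embeddings[0]))  # 2 x 1024 (per Model Details)
```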
## Credits
- Original model: Octen/Octen-Embedding-0.6B
- Inference: CrispEmbed (MIT, ggml-based)
## Model Tree

Base model lineage: Qwen/Qwen3-0.6B-Base → Qwen/Qwen3-Embedding-0.6B (finetune) → Octen/Octen-Embedding-0.6B (finetune, converted here to GGUF).