name: colip-embeddings
description: >
  Generates multimodal embeddings across olfaction, vision, and language using
  Scentience COLIP models (OVLM, Embeddings Large/Small, OVL Classifier), then
  computes cross-modal similarity for semantic labeling, retrieval, zero-shot
  classification, or experiment comparison. Use when mapping OPU sensor readings
  to natural language descriptions, matching smell episodes to images,
  classifying detected chemicals, or building olfactory search over a dataset of
  prior experiments.
version: 1.0.0
author: kordelfrance
license: Apache 2.0
compatibility: >
  Python: pip install "scentience[models]" (adds torch>=2.0, transformers>=4.30,
  huggingface-hub>=0.16, torchvision>=0.15, Pillow>=9.0, numpy>=1.24). JS: npm
  install scentience + peer dep @huggingface/inference>=2.0. Rust: cargo add
  scentience --features colip (downloads libtorch) or --features colip-system
  (uses system libtorch). Edge/mobile: use COLIP Embeddings Small (exportable to
  Android, iOS, Rust).
metadata:
  domain: olfaction
  tags:
    - embeddings
    - multimodal
    - colip
    - ovlm
    - retrieval
    - classification
    - semantics
    - zero-shot
models:
  - id: ovlm
    name: OVLM
    architecture: unified-multimodal
    quantization: int8
    target: edge
    description: World's first unified olfaction-vision-language model at the edge
  - id: colip-embeddings-large
    name: COLIP Embeddings Large
    architecture: graph-attention
    target: cloud
    description: >-
      Highest accuracy; use for offline dataset tasks and online high-stakes
      retrieval
  - id: colip-embeddings-small
    name: COLIP Embeddings Small
    architecture: graph-attention
    target: edge
    description: Low-latency; exportable to Android, iOS, Rust via HuggingFace Hub
  - id: ovl-classifier
    name: OVL Classifier
    architecture: graph-attention
    target: edge+cloud
    variants: 2
    description: Returns class probabilities for chemical-to-visual-object links
hardware:
  - Reconnaisscent
  - Scentinel
  - Olfactory Development Board
depends_on:
  - ble-device
docs: https://scentience.github.io/docs-api
## Goal
Produce L2-normalized embedding vectors from olfaction sensor readings, images, or text using a COLIP model, then rank candidates by cosine similarity for retrieval, labeling, or classification — while preserving model-version awareness and clearly distinguishing semantic similarity from analytical chemistry.
## Instructions

### 1. Choose a Model Variant
| Model | Best for | Deployment | Notes |
|---|---|---|---|
| `ovlm` | Unified O+V+L tasks, zero-shot labeling | Edge/Mobile | Int8 quantized; on Apple App Store via Sigma |
| `colip-embeddings-large` | High-accuracy retrieval, dataset analysis | Cloud | Highest-dimensional embeddings; slowest |
| `colip-embeddings-small` | On-device, latency-sensitive pipelines | Edge | Export to Android/iOS/Rust |
| `ovl-classifier` | Binary/multi-class compound classification | Edge + Cloud | Returns class probabilities, not vectors |
Default recommendation: Start with colip-embeddings-small for robotics and
real-time tasks. Switch to colip-embeddings-large for offline dataset work or when
accuracy is more important than latency. Use ovl-classifier when you need a hard
classification output rather than a similarity ranking.
### 2. Instantiate the Embedder

```python
# Python — requires pip install "scentience[models]"
from scentience import ScentienceEmbedder

embedder = ScentienceEmbedder(
    model="colip-embeddings-small",  # or "ovlm", "colip-embeddings-large", "ovl-classifier"
    api_key="SCN_..."
)
```

```javascript
// JavaScript — requires @huggingface/inference peer dep
import { ScentienceEmbedder } from 'scentience';

const embedder = new ScentienceEmbedder({
  model: "colip-embeddings-small",
  apiKey: "SCN_..."
});
```

```rust
// Rust — Cargo.toml: scentience = { features = ["colip"] }
use scentience::ScentienceEmbedder;

let embedder = ScentienceEmbedder::new("colip-embeddings-small", "SCN_...");
```
### 3. Generate Embeddings

All `embed()` calls return L2-normalized vectors. Pass the correct modality flag.

From an OPU reading (olfaction):

```python
# reading is a dict from the ble-device skill
reading = {"VOC": 0.45, "CO2": 412.0, "NH3": 0.08, "H2S": 0.002, ...}

olf_vec = embedder.embed(modality="olfaction", data=reading)
# Returns: np.ndarray, shape (D,), L2-normalized
```

From text:

```python
txt_vec = embedder.embed(modality="text", data="freshly cut grass after rain")
```

From an image:

```python
from PIL import Image

img_vec = embedder.embed(modality="vision", data=Image.open("scene.jpg"))
```

JavaScript (any modality):

```javascript
const olfVec = await embedder.embed({ modality: "olfaction", data: reading });
const txtVec = await embedder.embed({ modality: "text", data: "ammonia leak" });
```
### 4. Rank Candidates by Cosine Similarity

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

candidates = [
    {"id": "label_0", "modality": "text", "label": "petroleum / gasoline"},
    {"id": "label_1", "modality": "text", "label": "ammonia / fertilizer"},
    {"id": "label_2", "modality": "text", "label": "fresh cut vegetation"},
]

results = sorted(
    [
        {
            "id": c["id"],
            "label": c["label"],
            "similarity": cosine_sim(
                olf_vec,
                embedder.embed(modality=c["modality"], data=c["label"])
            ),
        }
        for c in candidates
    ],
    key=lambda x: x["similarity"],
    reverse=True,
)
```
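If the task calls for zero-shot classification over the candidate labels, the ranked similarities can be converted into a probability distribution with a temperature-scaled softmax, a common pattern in CLIP-family models. This is a sketch: the temperature value below is an illustrative assumption, not a documented Scentience default.

```python
import numpy as np

def zero_shot_probs(similarities, temperature=0.07):
    """Convert cosine similarities into class probabilities via a
    temperature-scaled softmax (CLIP-style zero-shot scoring)."""
    logits = np.asarray(similarities, dtype=np.float64) / temperature
    logits -= logits.max()  # subtract max for numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

# Similarities from the ranking step above
probs = zero_shot_probs([0.84, 0.61, 0.33])
```

A lower temperature sharpens the distribution toward the top match; a higher one flattens it, which can be useful when the candidate labels overlap semantically.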
### 5. OVL Classifier (Hard Classification)

When the task is binary or multi-class classification rather than retrieval:

```python
from scentience import ScentienceEmbedder

classifier = ScentienceEmbedder(model="ovl-classifier", api_key="SCN_...")

probs = classifier.classify(modality="olfaction", data=reading)
# Returns: dict of {class_label: probability}
# e.g., {"ammonia": 0.83, "methane": 0.09, "voc_mix": 0.05, ...}
```

Use this when you need a direct answer ("is this ammonia?") rather than a ranked list of semantic matches.
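The classifier's probability dict can be turned into an accept-or-review decision using the ambiguity thresholds from the Constraints section (0.65 floor, 0.10 gap). The helper below is a sketch, not part of the Scentience API; applying the similarity thresholds to classifier probabilities is an assumption.

```python
def decide(probs: dict, threshold: float = 0.65, margin: float = 0.10) -> dict:
    """Return a hard label only when the top class is confident and
    clearly separated from the runner-up; otherwise flag for review."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    (top_label, top_p), (_, second_p) = ranked[0], ranked[1]
    if top_p >= threshold and (top_p - second_p) >= margin:
        return {"label": top_label, "confidence": top_p, "status": "confident"}
    return {"label": None, "confidence": top_p, "status": "ambiguous"}

decide({"ammonia": 0.83, "methane": 0.09, "voc_mix": 0.05})
# → {'label': 'ammonia', 'confidence': 0.83, 'status': 'confident'}
```

An ambiguous result should trigger the same fallback as an ambiguous retrieval: gather additional sensor readings or route to manual review.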
### 6. Format Output

```json
{
  "model": "colip-embeddings-small",
  "model_version": "0.2.0",
  "query_modality": "olfaction",
  "top_matches": [
    {"id": "label_1", "label": "ammonia / fertilizer", "similarity": 0.84},
    {"id": "label_2", "label": "fresh cut vegetation", "similarity": 0.61},
    {"id": "label_0", "label": "petroleum / gasoline", "similarity": 0.33}
  ],
  "semantic_summary": "Query embedding strongly aligned with ammonia/nitrogen compounds. Consistent with NH3=0.08 ppm and elevated VOC. Top match similarity 0.84 with a 0.23 gap to second — treat as confident hypothesis.",
  "confidence": 0.79,
  "note": "Semantic similarity score — not a certified chemical assay. Treat as hypothesis pending corroborating sensor evidence or manual review."
}
```
Always include `model` and `model_version`. Embedding spaces change between model versions, so cross-version comparisons are invalid.
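One way to block invalid cross-version comparisons programmatically is to tag every stored vector with its model and version and refuse to compare mismatched tags. This is a sketch under stated assumptions: `TaggedEmbedding` and `safe_cosine` are illustrative names, not part of the Scentience API.

```python
from dataclasses import dataclass

import numpy as np

@dataclass(frozen=True)
class TaggedEmbedding:
    """An embedding vector tagged with the model that produced it."""
    vector: np.ndarray
    model: str
    model_version: str

def safe_cosine(a: TaggedEmbedding, b: TaggedEmbedding) -> float:
    """Cosine similarity that refuses mismatched embedding spaces."""
    if (a.model, a.model_version) != (b.model, b.model_version):
        raise ValueError(
            f"Incompatible embedding spaces: {a.model}@{a.model_version} "
            f"vs {b.model}@{b.model_version}"
        )
    # Vectors from embed() are already L2-normalized, so the dot
    # product equals cosine similarity.
    return float(np.dot(a.vector, b.vector))
```

Storing the tag alongside the vector also makes it possible to re-embed a dataset after a model upgrade instead of silently mixing spaces.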
## Examples

### Label a robot smell episode in a warehouse

- Scenario: Warehouse inspection robot. Buffer of 10 readings. NH3=0.22 ppm (elevated), VOC=0.31 ppm, CO2=420 ppm.
- Candidate labels: ["cleaning products", "refrigerant leak", "ammonia coolant system", "diesel exhaust"]
- Top match: "ammonia coolant system" (similarity 0.88)
- Second match: "cleaning products" (similarity 0.54)
- Gap: 0.34 — confident result
- Semantic summary: Elevated NH3 with low VOC/CO2 ratio aligns with ammonia-based refrigerant systems. Recommend inspection of cooling units in sector 3.
- Confidence: 0.82
### Cross-modal: image query against olfaction database

- Input: Image of a fertilizer storage area (vision embedding).
- Database: 50 prior olfaction experiment embeddings from ScentNet field trials.
- Top match: Experiment #12 — outdoor soil nitrogen test (similarity 0.76)
- Semantic summary: Visual cues of bagged nitrogen fertilizer align with prior high-NH3 olfaction episodes. Cross-modal alignment strong.
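This kind of cross-modal search can be sketched as a plain NumPy nearest-neighbor lookup over stored olfaction embeddings. The database below is random placeholder data for illustration only; in practice each row would come from `embedder.embed()` on a prior experiment, and the query would be a vision embedding such as `img_vec`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder database: 50 L2-normalized "olfaction" embeddings with
# parallel experiment IDs (stand-ins for real stored vectors).
db_vectors = rng.random((50, 128))
db_vectors /= np.linalg.norm(db_vectors, axis=1, keepdims=True)
experiment_ids = [f"experiment_{i}" for i in range(50)]

def top_k(query: np.ndarray, k: int = 3):
    """Rank stored embeddings by cosine similarity (a dot product,
    since all vectors are L2-normalized) and return the top k."""
    sims = db_vectors @ query
    order = np.argsort(sims)[::-1][:k]
    return [(experiment_ids[i], float(sims[i])) for i in order]
```

For a few hundred experiments a brute-force dot product like this is fast enough; only much larger corpora warrant an approximate-nearest-neighbor index.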
### Zero-shot classification with OVL Classifier

- Input: OPU reading with H2S=0.45 ppm, SO2=0.12 ppm, NH3=0.03 ppm.
- OVL Classifier output: `{"hydrogen_sulfide_source": 0.79, "industrial_exhaust": 0.13, "other": 0.08}`
- Interpretation: High-confidence H2S-linked source. Confidence 0.79 exceeds the 0.65 threshold.
## Constraints

- **Similarity ≠ chemical identity** — COLIP similarity reflects statistical associations from training data, not analytical GC-MS results. Treat high similarity as a strong hypothesis, not a certified identification.
- **Always record model name and version** — Embedding spaces shift between versions; comparisons across versions are invalid and should be blocked programmatically.
- **Normalize before external comparisons** — All vectors from `ScentienceEmbedder.embed()` are L2-normalized. If you supply embeddings from an external system, normalize first.
- **Ambiguity threshold** — If top-match similarity < 0.65, or if the gap between first and second match is < 0.10, treat the result as ambiguous. Request additional sensor readings or flag for manual review.
- **Cross-modal alignment quality varies** — Olfaction-to-text alignment is strongest for compound classes present in the ScentNet training corpus. Novel or rare compounds degrade similarity quality.
- **OVL Classifier vs. Embeddings** — Use `ovl-classifier` for hard classification (class probabilities). Use `colip-embeddings-large`/`colip-embeddings-small` for nearest-neighbor retrieval and semantic ranking tasks.
- **Edge model accuracy tradeoff** — COLIP Embeddings Small sacrifices accuracy for size. For safety-critical detection tasks, validate against Large before deploying Small on-device.
- **Training data bias** — All COLIP models inherit biases from the ScentNet dataset. Performance on out-of-distribution chemical environments (novel compounds, unusual mixtures) is less reliable and requires human validation.
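The normalization constraint above amounts to one line of NumPy for externally sourced vectors. A minimal sketch (not a Scentience API call):

```python
import numpy as np

def l2_normalize(v: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Scale a vector to unit L2 norm so that a plain dot product
    against COLIP embeddings equals cosine similarity."""
    norm = np.linalg.norm(v)
    return v / max(norm, eps)  # eps guards against a zero vector
```

Apply this to any external embedding before comparing it against vectors returned by `embed()`.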