Tags: Sentence Similarity · sentence-transformers · ONNX · Safetensors · Transformers · Transformers.js · English · nomic_bert · feature-extraction · mteb · custom_code · Eval Results · text-embeddings-inference
Instructions for using nomic-ai/nomic-embed-text-v1.5 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- sentence-transformers
How to use nomic-ai/nomic-embed-text-v1.5 with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```

- Transformers
How to use nomic-ai/nomic-embed-text-v1.5 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
model = AutoModel.from_pretrained("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
```

- Transformers.js
How to use nomic-ai/nomic-embed-text-v1.5 with Transformers.js:
```javascript
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline (feature-extraction is the task this embedding model exposes)
const pipe = await pipeline('feature-extraction', 'nomic-ai/nomic-embed-text-v1.5');
```

- Notebooks
- Google Colab
- Kaggle
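In the sentence-transformers snippet above, `model.similarity(embeddings, embeddings)` returns the pairwise cosine similarities between the embedding rows. A minimal numpy sketch of that operation on toy vectors (illustrative only, not the library's implementation):

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between the rows of `embeddings`."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms          # unit-length rows
    return normalized @ normalized.T         # dot products = cosines

# Toy 2-D "embeddings": orthogonal, orthogonal, and a 45-degree blend
toy = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(toy)
print(sims.shape)  # (3, 3)
```

With four sentences, as in the snippet, the result is the `[4, 4]` matrix the `print` call reports.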
Add layer norm usage for Transformers.js
#11 opened by Xenova (HF staff)
This produces the same output as the Python version:

```javascript
// [
//   [-0.00518727907910943, 0.06514579057693481, -0.21559129655361176, ...],
//   [-0.008253306150436401, 0.005108598619699478, -0.22179779410362244, ...],
// ]
```
Relevant discussion: https://huggingface.co/nomic-ai/nomic-embed-text-v1.5/discussions/4#65cce8d0c52afc14ceac26c2
zpn changed pull request status to merged
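For context, the post-processing this PR concerns — mean pooling over token embeddings followed by layer normalization, the recipe discussed in the linked thread — can be sketched in plain numpy on dummy data. This is an illustrative sketch of the general recipe, not the PR's actual Transformers.js code:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings per sequence, ignoring padded positions."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = mask.sum(axis=1)
    return summed / counts

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each row to zero mean and unit variance (no learned scale/shift)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Dummy batch: 2 sequences of 3 tokens each, hidden size 4; second token row
# of the first sequence's mask is padding and is excluded from the average.
tokens = np.arange(24, dtype=np.float64).reshape(2, 3, 4)
mask = np.array([[1, 1, 0], [1, 1, 1]])
pooled = mean_pool(tokens, mask)
normed = layer_norm(pooled)
print(normed.shape)  # (2, 4)
```

The real pipelines apply this to the model's last hidden states; only the array shapes differ.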