MODEL_ID = "Snowflake/snowflake-arctic-embed-m-v1.5"
queries = ['what is snowflake?', 'Where can I get the best tacos?'] documents = ['The Data Cloud!', 'Mexico City of Course!']
model = SentenceTransformer(MODEL_ID)
query_embeddings = model.encode(queries, prompt_name="query") document_embeddings = model.encode(documents)
scores = query_embeddings @ document_embeddings.T
for query, query_scores in zip(queries, scores): doc_score_pairs = list(zip(documents, query_scores)) doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) print(f'Query: "{query}"') for document, score in doc_score_pairs: print(f'Score: {score:.4f} | Document: "{document}"') print()
query_embeddings_256 = normalize(torch.from_numpy(query_embeddings)[:, :256]) document_embeddings_256 = normalize(torch.from_numpy(document_embeddings)[:, :256]) scores_256 = query_embeddings_256 @ document_embeddings_256.T
for query, query_scores in zip(queries, scores_256): doc_score_pairs = sorted(zip(documents, query_scores), key=lambda x: x[1], reverse=True) print(f'Query: "{query}"') for document, score in doc_score_pairs: print(f'Score: {score:.4f} | Document: "{document}"') print()
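Newer releases of Sentence Transformers can also apply the 256-dimension truncation for you; here is a minimal sketch, assuming sentence-transformers >= 2.7 (the release that introduced the `truncate_dim` argument):

```python
from sentence_transformers import SentenceTransformer

# Load with built-in truncation to the first 256 dimensions.
# (Assumes sentence-transformers >= 2.7.)
model_256 = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v1.5", truncate_dim=256)

# normalize_embeddings=True re-normalizes the truncated vectors,
# so dot-product scores remain cosine similarities.
query_embeddings_256 = model_256.encode(queries, prompt_name="query", normalize_embeddings=True)
document_embeddings_256 = model_256.encode(documents, normalize_embeddings=True)
```

This avoids the manual slice-and-normalize step shown above.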
### Using Hugging Face Transformers
You can also use the transformers package with snowflake-arctic-embed models. For optimal retrieval quality, remember to use the CLS token for embeddings and to apply the query prefix below (to queries only).
```python
import torch
from torch.nn.functional import normalize
from transformers import AutoModel, AutoTokenizer
# Model constants.
MODEL_ID = "Snowflake/snowflake-arctic-embed-m-v1.5"
QUERY_PREFIX = 'Represent this sentence for searching relevant passages: '
# Your queries and docs.
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']
# Load the model and tokenizer.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, add_pooling_layer=False)
model.eval()
# Add query prefix and tokenize queries and docs.
queries_with_prefix = [f"{QUERY_PREFIX}{q}" for q in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)
# Use the model to generate text embeddings.
with torch.inference_mode():
    # Take the first token's ([CLS]) final hidden state as the embedding.
    query_embeddings = model(**query_tokens)[0][:, 0]
    document_embeddings = model(**document_tokens)[0][:, 0]
# Remember to normalize embeddings.
query_embeddings = normalize(query_embeddings)
document_embeddings = normalize(document_embeddings)
# Scores via dot product (embeddings are unit-normalized, so this equals cosine similarity).
scores = query_embeddings @ document_embeddings.T
# Pretty-print the results.
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    print(f'Query: "{query}"')
    for document, score in doc_score_pairs:
        print(f'Score: {score:.4f} | Document: "{document}"')
    print()
#### OUTPUT ####
# Query: "what is snowflake?"
# Score: 0.3521 | Document: "The Data Cloud!"
# Score: 0.2358 | Document: "Mexico City of Course!"
# Query: "Where can I get the best tacos?"
# Score: 0.3884 | Document: "Mexico City of Course!"
# Score: 0.2389 | Document: "The Data Cloud!"
#
#### Variation: Truncated Embeddings ####
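# Note: slicing normalized vectors to 256 dims breaks unit norm, so re-normalize before scoring.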
query_embeddings_256 = normalize(query_embeddings[:, :256])
document_embeddings_256 = normalize(document_embeddings[:, :256])
scores_256 = query_embeddings_256 @ document_embeddings_256.T
# Pretty-print the results.
for query, query_scores in zip(queries, scores_256):
    doc_score_pairs = sorted(zip(documents, query_scores), key=lambda x: x[1], reverse=True)
    print(f'Query: "{query}"')
    for document, score in doc_score_pairs:
        print(f'Score: {score:.4f} | Document: "{document}"')
    print()
#### OUTPUT ####
# Query: "what is snowflake?"
# Score: 0.3852 | Document: "The Data Cloud!"
# Score: 0.2721 | Document: "Mexico City of Course!"
# Query: "Where can I get the best tacos?"
# Score: 0.4337 | Document: "Mexico City of Course!"
# Score: 0.2886 | Document: "The Data Cloud!"
#
```
### Using Transformers.js

If you haven't already, you can install the Transformers.js JavaScript library from NPM by running:
```bash
npm i @xenova/transformers
```
You can then use the model to compute embeddings as follows:
```js
import { pipeline, dot } from '@xenova/transformers';

// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-m-v1.5', {
    quantized: false, // Comment out this line to use the quantized version
});

// Generate sentence embeddings
const sentences = [
    'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
    'The Data Cloud!',
    'Mexico City of Course!',
];
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });

// Compute similarity scores
const [source_embeddings, ...document_embeddings] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // [0.15664823859882132, 0.24481869975470627]
```
This model is designed to generate embeddings which compress well down to 128 bytes via a two-part compression scheme:

1. Truncation to the first 256 dimensions (supported by the model's Matryoshka-style training, as shown in the examples above).
2. Uniform scalar quantization of those 256 values to 4 bits each (256 × 4 bits = 128 bytes).
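As a rough illustration of how those two steps land on exactly 128 bytes, here is a minimal sketch of a uniform 4-bit scalar quantizer. The function name, clipping range, and bit-packing layout are assumptions made up for this example; the arctic-embed repository's examples define the actual scheme.

```python
import numpy as np

def compress_to_128_bytes(embedding: np.ndarray, clip: float = 0.3) -> bytes:
    """Hypothetical sketch: truncate to 256 dims, then pack each value into 4 bits."""
    v = embedding[:256]
    v = v / np.linalg.norm(v)  # re-normalize after truncation
    # Uniformly quantize each value into one of 16 levels over [-clip, clip].
    codes = np.clip(np.round((v + clip) / (2 * clip) * 15), 0, 15).astype(np.uint8)
    # Pack two 4-bit codes per byte: 256 dims * 4 bits = 128 bytes.
    return ((codes[0::2] << 4) | codes[1::2]).tobytes()
```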
For in-depth examples, check out our arctic-embed GitHub repository.
TBD
Feel free to open an issue or pull request if you have any questions or suggestions about this project. You can also email Daniel Campos (daniel.campos@snowflake.com).
Arctic is licensed under the Apache-2.0 license. The released models can be used for commercial purposes free of charge.
We want to thank the open-source community for providing the great building blocks upon which we could build our models and make these releases possible. We thank our modeling engineers, Danmei Xu, Luke Merrick, Gaurav Nuti, and Daniel Campos, for making these great models possible. We thank our leadership, Himabindu Pucha, Kelvin So, Vivek Raghunathan, and Sridhar Ramaswamy, for supporting this work. Finally, we thank the researchers who created the BEIR and MTEB benchmarks; it is largely thanks to their tireless work to define what better looks like that we could improve model performance.