```python
from transformers import AutoModel

model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3", trust_remote_code=True)

texts = [
    "Follow the white rabbit.",  # English
    "Sigue al conejo blanco.",  # Spanish
    "Suis le lapin blanc.",  # French
    "跟着白兔走。",  # Chinese
    "اتبع الأرنب الأبيض.",  # Arabic
    "Folge dem weißen Kaninchen.",  # German
]
```
When calling the `encode` function, you can choose a `task` based on the use case: `retrieval.query`, `retrieval.passage`, `separation`, `classification`, or `text-matching`. If no `task` is passed, no specific LoRA adapter will be used.

```python
embeddings = model.encode(texts, task="text-matching")

# The embeddings are aligned across languages, so translated sentences score high
print(embeddings[0] @ embeddings[1].T)
```
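For asymmetric retrieval, queries and passages are encoded with their dedicated adapters. A minimal sketch (the passage text here is made up for illustration):

```python
query_embeddings = model.encode(
    ["What is the weather like in Berlin today?"],
    task="retrieval.query",
)
passage_embeddings = model.encode(
    ["Berlin is cloudy today, with light rain in the afternoon."],  # illustrative passage
    task="retrieval.passage",
)

# Higher scores indicate a better query-passage match
print(query_embeddings @ passage_embeddings.T)
```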
By default, the model supports a maximum sequence length of 8192 tokens.
However, if you want to truncate your input texts to a shorter length, you can pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(["Very long ... document"], max_length=2048)
In case you want to use Matryoshka embeddings and switch to a different dimension, you can adjust it by passing the `truncate_dim` parameter to the `encode` function:

```python
embeddings = model.encode(['Sample text'], truncate_dim=256)
```
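As a quick sanity check of the dimension switch, a sketch that encodes the same text at several sizes (the model produces 1024-dimensional embeddings by default):

```python
for dim in (32, 64, 128, 256, 512, 1024):
    emb = model.encode(['Sample text'], truncate_dim=dim)
    print(dim, emb.shape)  # e.g. 256 (1, 256)
```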
The latest version (3.1.0) of SentenceTransformers also supports jina-embeddings-v3:
```bash
pip install -U sentence-transformers
```

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True)

task = "retrieval.query"
embeddings = model.encode(
    ["What is the weather like in Berlin today?"],
    task=task,
    prompt_name=task,
)
```
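SentenceTransformers 3.x also exposes a built-in `similarity` helper, so you can score query-passage pairs directly. A minimal sketch (the passage text is illustrative):

```python
query_embeddings = model.encode(
    ["What is the weather like in Berlin today?"],
    task="retrieval.query",
    prompt_name="retrieval.query",
)
passage_embeddings = model.encode(
    ["Berlin is cloudy today, with light rain in the afternoon."],
    task="retrieval.passage",
    prompt_name="retrieval.passage",
)

# Cosine similarity by default; higher means a better match
print(model.similarity(query_embeddings, passage_embeddings))
```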
You can fine-tune jina-embeddings-v3 using `SentenceTransformerTrainer`.
To fine-tune for a specific task, you should set the task before passing the model to the ST Trainer, either during initialization:

```python
model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True, model_kwargs={'default_task': 'classification'})
```
Or afterwards:
```python
model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True)
model[0].default_task = 'classification'
```
This way you can fine-tune the LoRA adapter for the chosen task.
However, if you want to fine-tune the entire model, make sure the main parameters are set as trainable when loading the model:

```python
model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True, model_kwargs={'lora_main_params_trainable': True})
```
This will allow fine-tuning the whole model instead of just the LoRA adapters.
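As a rough sketch of what a training run could look like (the toy dataset, loss choice, and column names are assumptions for illustration, not a recommended recipe):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer(
    "jinaai/jina-embeddings-v3",
    trust_remote_code=True,
    model_kwargs={"default_task": "classification"},
)

# Hypothetical toy dataset: two text columns plus a float similarity label
train_dataset = Dataset.from_dict({
    "sentence1": ["A cat sits on the mat.", "The sky is blue."],
    "sentence2": ["A feline rests on a rug.", "Bananas are yellow."],
    "label": [1.0, 0.0],
})

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=CosineSimilarityLoss(model),
)
trainer.train()
```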
You can use ONNX for efficient inference with jina-embeddings-v3:
```python
import onnxruntime
import numpy as np
from transformers import AutoTokenizer, PretrainedConfig

# Mean pool function
def mean_pooling(model_output: np.ndarray, attention_mask: np.ndarray):
    token_embeddings = model_output
    input_mask_expanded = np.expand_dims(attention_mask, axis=-1)
    input_mask_expanded = np.broadcast_to(input_mask_expanded, token_embeddings.shape)
    sum_embeddings = np.sum(token_embeddings * input_mask_expanded, axis=1)
    sum_mask = np.clip(np.sum(input_mask_expanded, axis=1), a_min=1e-9, a_max=None)
    return sum_embeddings / sum_mask

# Load tokenizer and model config
tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v3')
config = PretrainedConfig.from_pretrained('jinaai/jina-embeddings-v3')

# Tokenize input
input_text = tokenizer('sample text', return_tensors='np')

# ONNX session
model_path = 'jina-embeddings-v3/onnx/model.onnx'
session = onnxruntime.InferenceSession(model_path)

# Prepare inputs for ONNX model: select the LoRA adapter via its task_id
task_type = 'text-matching'
task_id = np.array(config.lora_adaptations.index(task_type), dtype=np.int64)
inputs = {
    'input_ids': input_text['input_ids'],
    'attention_mask': input_text['attention_mask'],
    'task_id': task_id
}

# Run model
outputs = session.run(None, inputs)[0]

# Apply mean pooling and L2 normalization to the model outputs
embeddings = mean_pooling(outputs, input_text["attention_mask"])
embeddings = embeddings / np.linalg.norm(embeddings, ord=2, axis=1, keepdims=True)
```
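The snippet above assumes the ONNX export is already on disk at `model_path`. One way to fetch it is with `huggingface_hub` (a sketch; the `onnx/model.onnx` layout follows the path used above, and large exports may ship an external-data file alongside that also needs downloading):

```python
from huggingface_hub import hf_hub_download

# Assumes the repository hosts the export at onnx/model.onnx
model_path = hf_hub_download('jinaai/jina-embeddings-v3', 'onnx/model.onnx')
```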
Join our Discord community and chat with other community members about ideas.
jina-embeddings-v3 is listed on AWS & Azure. If you need to use it beyond those platforms or on-premises within your company, note that the model is licensed under CC BY-NC 4.0. For commercial usage inquiries, feel free to contact us.
If you find jina-embeddings-v3 useful in your research, please cite the following paper:
```bibtex
@misc{sturua2024jinaembeddingsv3multilingualembeddingstask,
      title={jina-embeddings-v3: Multilingual Embeddings With Task LoRA},
      author={Saba Sturua and Isabelle Mohr and Mohammad Kalim Akram and Michael Günther and Bo Wang and Markus Krimmel and Feng Wang and Georgios Mastrapas and Andreas Koukounas and Nan Wang and Han Xiao},
      year={2024},
      eprint={2409.10173},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.10173},
}
```