gte-reranker-modernbert-base

We are excited to introduce the gte-modernbert series of models, which are built upon the latest ModernBERT pre-trained encoder-only foundation models. The gte-modernbert series includes both text embedding models and text reranking models.

The gte-modernbert models demonstrate competitive performance on several text embedding and text retrieval benchmarks, including MTEB, LoCo, and CoIR, when compared to similar-scale models from the open-source community.

Model Overview

  • Developed by: Tongyi Lab, Alibaba Group
  • Model Type: Text reranker
  • Primary Language: English
  • Model Size: 149M
  • Max Input Length: 8192 tokens

Model list

| Models | Language | Model Type | Model Size | Max Seq. Length | Dimension | MTEB-en | BEIR | LoCo | CoIR |
|---|---|---|---|---|---|---|---|---|---|
| gte-modernbert-base | English | text embedding | 149M | 8192 | 768 | 64.38 | 55.33 | 87.57 | 79.31 |
| gte-reranker-modernbert-base | English | text reranker | 149M | 8192 | - | - | 56.19 | 90.68 | 79.99 |

Usage

[!TIP] For transformers and sentence-transformers, if your GPU supports it, the efficient Flash Attention 2 will be used automatically as long as flash_attn is installed. Installing it is optional.

pip install flash_attn

Use with transformers

# Requires transformers>=4.48.0
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name_or_path = "Alibaba-NLP/gte-reranker-modernbert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name_or_path,
    torch_dtype=torch.float16,
)
model.eval()

pairs = [
    ["what is the capital of China?", "Beijing"],
    ["how to implement quick sort in python?", "Introduction of quick sort"],
    ["how to implement quick sort in python?", "The weather is nice today"],
]

with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
    print(scores)

# tensor([ 2.1387,  2.4609, -1.6729])
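
In a typical retrieval pipeline, these scores are used to reorder a set of candidate passages for a single query. Below is a minimal sketch (not part of the original card) that reuses the model and tokenizer loaded above; the query and passages are illustrative.

query = "how to implement quick sort in python?"
passages = [
    "Introduction of quick sort",
    "The weather is nice today",
    "Quick sort is a divide-and-conquer sorting algorithm",
]

# Score every (query, passage) pair, then sort passages from most to least relevant.
pairs = [[query, passage] for passage in passages]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt", max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()

ranked = sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.4f}\t{passage}")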

Use with sentence-transformers

Before you start, install the sentence-transformers library:

pip install sentence-transformers

# Requires transformers>=4.48.0
from sentence_transformers import CrossEncoder

model = CrossEncoder(
    "Alibaba-NLP/gte-reranker-modernbert-base",
    automodel_args={"torch_dtype": "auto"},
)

pairs = [
    ["what is the capital of China?", "Beijing"],
    ["how to implement quick sort in python?","Introduction of quick sort"],
    ["how to implement quick sort in python?", "The weather is nice today"],
]

scores = model.predict(pairs)
print(scores)
# [0.8945664  0.9213594  0.15742092]
# NOTE: Sentence Transformers applies a Sigmoid to the raw logits by default, hence the scores are in the [0, 1] range.
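
If you prefer to score and sort a list of documents for a query in one call, recent sentence-transformers releases also expose a CrossEncoder.rank helper. The sketch below is illustrative (the query and documents are made up), assumes a library version that provides this method, and reuses the model loaded above.

query = "how to implement quick sort in python?"
documents = [
    "Introduction of quick sort",
    "The weather is nice today",
    "Quick sort is a divide-and-conquer sorting algorithm",
]

# Each result is a dict containing the document index ("corpus_id"), its score,
# and, with return_documents=True, the document text itself.
results = model.rank(query, documents, return_documents=True, top_k=3)
for result in results:
    print(f"{result['score']:.4f}\t{result['text']}")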

Use with transformers.js

import {
  AutoTokenizer,
  AutoModelForSequenceClassification,
} from "@huggingface/transformers";

const model_id = "Alibaba-NLP/gte-reranker-modernbert-base";
const model = await AutoModelForSequenceClassification.from_pretrained(
  model_id,
  { dtype: "fp32" }, // Supported options: "fp32", "fp16", "q8", "q4", "q4f16"
);
const tokenizer = await AutoTokenizer.from_pretrained(model_id);

const pairs = [
  ["what is the capital of China?", "Beijing"],
  ["how to implement quick sort in python?", "Introduction of quick sort"],
  ["how to implement quick sort in python?", "The weather is nice today"],
];
const inputs = tokenizer(
  pairs.map((x) => x[0]),
  {
    text_pair: pairs.map((x) => x[1]),
    padding: true,
    truncation: true,
  },
);
const { logits } = await model(inputs);
console.log(logits.tolist()); // [[2.138258218765259], [2.4609625339508057], [-1.6775450706481934]]

You can also deploy Alibaba-NLP/gte-reranker-modernbert-base with Text Embeddings Inference (TEI) as follows:

  • CPU

docker run --platform linux/amd64 \
  -p 8080:80 \
  -v $PWD/data:/data \
  --pull always \
  ghcr.io/huggingface/text-embeddings-inference:cpu-1.7 \
  --model-id Alibaba-NLP/gte-reranker-modernbert-base

  • GPU

docker run --gpus all \
  -p 8080:80 \
  -v $PWD/data:/data \
  --pull always \
  ghcr.io/huggingface/text-embeddings-inference:1.7 \
  --model-id Alibaba-NLP/gte-reranker-modernbert-base

Then you can send requests to the deployed API via the /rerank route (see the Text Embeddings Inference OpenAPI Specification for more details):

curl http://0.0.0.0:8080/rerank \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What is the capital of China?",
    "raw_scores": false,
    "return_text": false,
    "texts": [ "Beijing" ],
    "truncate": true,
    "truncation_direction": "right"
  }'
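
The same request can be issued from Python. The snippet below is an illustrative client (not from the original card) that posts the payload shown above to the local TEI deployment using the requests library; the URL assumes the Docker commands from the previous step.

import requests

response = requests.post(
    "http://0.0.0.0:8080/rerank",
    json={
        "query": "What is the capital of China?",
        "texts": ["Beijing", "The weather is nice today"],
        "raw_scores": False,
        "return_text": True,
        "truncate": True,
        "truncation_direction": "right",
    },
)
# TEI returns one entry per input text, each with its original index and relevance score.
print(response.json())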

Training Details

The gte-modernbert series of models follows the training scheme of the previous GTE models, with the only difference being that the pre-trained language model base has been switched from GTE-MLM to ModernBERT. For more training details, please refer to our paper: mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval.

Evaluation

MTEB

The results of other models are retrieved from the MTEB leaderboard. Since all models in the gte-modernbert series have fewer than 1B parameters, we focus exclusively on the under-1B models from the MTEB leaderboard.

| Model Name | Param Size (M) | Dimension | Sequence Length | Average (56) | Class. (12) | Clust. (11) | Pair Class. (3) | Reran. (4) | Retr. (15) | STS (10) | Summ. (1) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| mxbai-embed-large-v1 | 335 | 1024 | 512 | 64.68 | 75.64 | 46.71 | 87.2 | 60.11 | 54.39 | 85 | 32.71 |
| multilingual-e5-large-instruct | 560 | 1024 | 514 | 64.41 | 77.56 | 47.1 | 86.19 | 58.58 | 52.47 | 84.78 | 30.39 |
| bge-large-en-v1.5 | 335 | 1024 | 512 | 64.23 | 75.97 | 46.08 | 87.12 | 60.03 | 54.29 | 83.11 | 31.61 |
| gte-base-en-v1.5 | 137 | 768 | 8192 | 64.11 | 77.17 | 46.82 | 85.33 | 57.66 | 54.09 | 81.97 | 31.17 |
| bge-base-en-v1.5 | 109 | 768 | 512 | 63.55 | 75.53 | 45.77 | 86.55 | 58.86 | 53.25 | 82.4 | 31.07 |
| gte-large-en-v1.5 | 409 | 1024 | 8192 | 65.39 | 77.75 | 47.95 | 84.63 | 58.50 | 57.91 | 81.43 | 30.91 |
| modernbert-embed-base | 149 | 768 | 8192 | 62.62 | 74.31 | 44.98 | 83.96 | 56.42 | 52.89 | 81.78 | 31.39 |
| nomic-embed-text-v1.5 | - | 768 | 8192 | 62.28 | 73.55 | 43.93 | 84.61 | 55.78 | 53.01 | 81.94 | 30.4 |
| gte-multilingual-base | 305 | 768 | 8192 | 61.4 | 70.89 | 44.31 | 84.24 | 57.47 | 51.08 | 82.11 | 30.58 |
| jina-embeddings-v3 | 572 | 1024 | 8192 | 65.51 | 82.58 | 45.21 | 84.01 | 58.13 | 53.88 | 85.81 | 29.71 |
| gte-modernbert-base | 149 | 768 | 8192 | 64.38 | 76.99 | 46.47 | 85.93 | 59.24 | 55.33 | 81.57 | 30.68 |

LoCo (Long Document Retrieval)

| Model Name | Dimension | Sequence Length | Average (5) | QsmsumRetrieval | SummScreenRetrieval | QasperAbstractRetrieval | QasperTitleRetrieval | GovReportRetrieval |
|---|---|---|---|---|---|---|---|---|
| gte-qwen1.5-7b | 4096 | 32768 | 87.57 | 49.37 | 93.10 | 99.67 | 97.54 | 98.21 |
| gte-large-v1.5 | 1024 | 8192 | 86.71 | 44.55 | 92.61 | 99.82 | 97.81 | 98.74 |
| gte-base-v1.5 | 768 | 8192 | 87.44 | 49.91 | 91.78 | 99.82 | 97.13 | 98.58 |
| gte-modernbert-base | 768 | 8192 | 88.88 | 54.45 | 93.00 | 99.82 | 98.03 | 98.70 |
| gte-reranker-modernbert-base | - | 8192 | 90.68 | 70.86 | 94.06 | 99.73 | 99.11 | 89.67 |

COIR (Code Retrieval Task)

| Model Name | Dimension | Sequence Length | Average (20) | CodeSearchNet-ccr-go | CodeSearchNet-ccr-java | CodeSearchNet-ccr-javascript | CodeSearchNet-ccr-php | CodeSearchNet-ccr-python | CodeSearchNet-ccr-ruby | CodeSearchNet-go | CodeSearchNet-java | CodeSearchNet-javascript | CodeSearchNet-php | CodeSearchNet-python | CodeSearchNet-ruby | apps | codefeedback-mt | codefeedback-st | codetrans-contest | codetrans-dl | cosqa | stackoverflow-qa | synthetic-text2sql |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| gte-modernbert-base | 768 | 8192 | 79.31 | 94.15 | 93.57 | 94.27 | 91.51 | 93.93 | 90.63 | 88.32 | 83.27 | 76.05 | 85.12 | 88.16 | 77.59 | 57.54 | 82.34 | 85.95 | 71.89 | 35.46 | 43.47 | 91.2 | 61.87 |
| gte-reranker-modernbert-base | - | 8192 | 79.99 | 96.43 | 96.88 | 98.32 | 91.81 | 97.7 | 91.96 | 88.81 | 79.71 | 76.27 | 89.39 | 98.37 | 84.11 | 47.57 | 83.37 | 88.91 | 49.66 | 36.36 | 44.37 | 89.58 | 64.21 |

BEIR

| Model Name | Dimension | Sequence Length | Average (15) | ArguAna | ClimateFEVER | CQADupstackAndroidRetrieval | DBPedia | FEVER | FiQA2018 | HotpotQA | MSMARCO | NFCorpus | NQ | QuoraRetrieval | SCIDOCS | SciFact | Touche2020 | TRECCOVID |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| gte-modernbert-base | 768 | 8192 | 55.33 | 72.68 | 37.74 | 42.63 | 41.79 | 91.03 | 48.81 | 69.47 | 40.9 | 36.44 | 57.62 | 88.55 | 21.29 | 77.4 | 21.68 | 81.95 |
| gte-reranker-modernbert-base | - | 8192 | 56.73 | 69.03 | 37.79 | 44.68 | 47.23 | 94.54 | 49.81 | 78.16 | 45.38 | 30.69 | 64.57 | 87.77 | 20.60 | 73.57 | 27.36 | 79.89 |

Hiring

We have open positions for Research Interns and Full-Time Researchers to join our team at Tongyi Lab. We are seeking passionate individuals with expertise in representation learning, LLM-driven information retrieval, Retrieval-Augmented Generation (RAG), and agent-based systems. Our team is located in the vibrant cities of Beijing and Hangzhou. If you are driven by curiosity and eager to make a meaningful impact through your work, we would love to hear from you. Please submit your resume along with a brief introduction to dingkun.ldk@alibaba-inc.com.

Citation

If you find our paper or models helpful, please consider citing our work:

@inproceedings{zhang2024mgte,
  title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval},
  author={Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Wen and Dai, Ziqi and Tang, Jialong and Lin, Huan and Yang, Baosong and Xie, Pengjun and Huang, Fei and others},
  booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track},
  pages={1393--1412},
  year={2024}
}


@article{li2023towards,
  title={Towards general text embeddings with multi-stage contrastive learning},
  author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
  journal={arXiv preprint arXiv:2308.03281},
  year={2023}
}
