cross-encoder/stsb-roberta-base


Cross-Encoder for Semantic Textual Similarity

This model was trained using the SentenceTransformers CrossEncoder class.

Training Data

This model was trained on the STS benchmark dataset. Given a pair of sentences, the model predicts a semantic-similarity score between 0 and 1.

Usage and Performance

Pre-trained models can be used like this:

from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/stsb-roberta-base')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])

The model will predict scores for the pairs ('Sentence 1', 'Sentence 2') and ('Sentence 3', 'Sentence 4').

You can also use this model without sentence_transformers, relying only on the Transformers Auto classes.
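A minimal sketch of that approach, assuming the checkpoint loads as a single-logit sequence-classification head (as CrossEncoder regression models do); note that the raw logits are returned without the activation that `CrossEncoder.predict` may apply:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/stsb-roberta-base')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/stsb-roberta-base')

# Tokenize the sentence pairs together so each pair forms one input sequence.
features = tokenizer(['Sentence 1', 'Sentence 3'],
                     ['Sentence 2', 'Sentence 4'],
                     padding=True, truncation=True, return_tensors='pt')

model.eval()
with torch.no_grad():
    # One logit per pair; squeeze the trailing dimension to get a flat score vector.
    scores = model(**features).logits.squeeze(-1)

print(scores)
```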
