```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Assumed loading step (the snippet originally continued from a preceding example)
model = SentenceTransformer('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True)
model.max_seq_length = 1024  # cap the input length; the model supports up to 8192 tokens

embeddings = model.encode(['How is the weather today?', 'Wie ist das Wetter heute?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
## Alternatives to Using the Transformers Package
1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/); a minimal request sketch follows this list.
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).
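For the managed SaaS route, a request to the Embedding API could look like the sketch below. The endpoint path, payload shape, and model identifier are assumptions here; refer to the documentation linked above for the authoritative interface.

```python
import requests

# Sketch of a request to Jina AI's Embedding API.
# Endpoint, payload fields, and model name are assumptions; check
# https://jina.ai/embeddings/ for the authoritative reference.
response = requests.post(
    'https://api.jina.ai/v1/embeddings',
    headers={'Authorization': 'Bearer <YOUR_API_KEY>'},
    json={
        'model': 'jina-embeddings-v2-base-de',
        'input': ['How is the weather today?', 'Wie ist das Wetter heute?'],
    },
)
embeddings = [item['embedding'] for item in response.json()['data']]
```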
## Benchmark Results
We evaluated our bilingual model on all German and English evaluation tasks available on the [MTEB benchmark](https://huggingface.co/blog/mteb). In addition, we compared it against several other German, English, and multilingual models on a set of additional German evaluation tasks:
<img src="de_evaluation_results.png" width="780px">
## Use Jina Embeddings for RAG
According to a recent blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),
> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">
## Contact
Join our [Discord community](https://discord.jina.ai) and share your ideas with other community members.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```bibtex
@article{mohr2024multi,
  title={Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings},
  author={Mohr, Isabelle and Krimmel, Markus and Sturua, Saba and Akram, Mohammad Kalim and Koukounas, Andreas and G{\"u}nther, Michael and Mastrapas, Georgios and Ravishankar, Vinit and Mart{\'\i}nez, Joan Fontanals and Wang, Feng and others},
  journal={arXiv preprint arXiv:2402.17016},
  year={2024}
}
```