Tags: Feature Extraction · sentence-transformers · ONNX · Safetensors · Transformers · xlm-roberta · mteb · Eval Results (legacy) · text-embeddings-inference
Instructions for using intfloat/multilingual-e5-large-instruct with libraries, inference providers, notebooks, and local apps. Use the sections below to get started.
- Libraries
- sentence-transformers

How to use intfloat/multilingual-e5-large-instruct with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

# Pairwise similarity matrix over all three sentences
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
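The snippet above encodes every input the same way. For retrieval-style use, the model card recommends wrapping each query in a one-sentence task instruction while leaving documents unprefixed. A minimal sketch of that pattern; the task wording and example texts are illustrative, not a fixed API:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

# Queries carry an "Instruct: ... / Query: ..." prefix; documents are encoded as-is.
def get_detailed_instruct(task_description: str, query: str) -> str:
    return f"Instruct: {task_description}\nQuery: {query}"

task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [get_detailed_instruct(task, "how much protein should a female eat")]
documents = [
    "As a general guideline, the average protein requirement for adult women is about 46 grams per day.",
    "Carbon dioxide is a colorless gas.",
]

query_embeddings = model.encode(queries, normalize_embeddings=True)
document_embeddings = model.encode(documents, normalize_embeddings=True)

# Cosine similarity; the on-topic passage should score higher.
scores = model.similarity(query_embeddings, document_embeddings)
print(scores)
```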
- Transformers

How to use intfloat/multilingual-e5-large-instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="intfloat/multilingual-e5-large-instruct")

# Load the model directly
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large-instruct")
model = AutoModel.from_pretrained("intfloat/multilingual-e5-large-instruct")
```
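Note that `AutoModel` returns per-token hidden states, not sentence embeddings. E5 models pool by averaging the last hidden state over non-padding tokens and then L2-normalizing. A minimal sketch of that pooling step; the `average_pool` helper is written out here for illustration:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large-instruct")
model = AutoModel.from_pretrained("intfloat/multilingual-e5-large-instruct")

def average_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Zero out padding positions, then average over the real tokens only.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

texts = ["The weather is lovely today.", "It's so sunny outside!"]
batch = tokenizer(texts, max_length=512, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

embeddings = average_pool(outputs.last_hidden_state, batch["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)  # unit vectors: dot product == cosine similarity
print(embeddings.shape)  # torch.Size([2, 1024]) -- 1024 is this model's embedding size
```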
- Inference
- Notebooks
  - Google Colab
  - Kaggle
Update README.md

README.md CHANGED
````diff
@@ -5382,10 +5382,7 @@ license: mit
 
 ## Multilingual-E5-large-instruct
 
-[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
-Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
-
-[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/abs/2402.05672).
+[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
 Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
 
 This model has 24 layers and the embedding size is 1024.
@@ -5518,11 +5515,11 @@ so this should not be an issue.
 If you find our paper or models helpful, please consider cite as follows:
 
 ```
-@article{wang2022text,
-title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
-author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
-journal={arXiv preprint arXiv:2212.03533},
-year={2022}
+@article{wang2024multilingual,
+title={Multilingual E5 Text Embeddings: A Technical Report},
+author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
+journal={arXiv preprint arXiv:2402.05672},
+year={2024}
 }
 ```
````