tasksource/ModernBERT-base-embed

Tags: Sentence Similarity · sentence-transformers · Safetensors · English · modernbert · feature-extraction · Generated from Trainer · dataset_size:6661966 · loss:MultipleNegativesRankingLoss · loss:CachedMultipleNegativesRankingLoss · loss:SoftmaxLoss · loss:AnglELoss · loss:CoSENTLoss · loss:CosineSimilarityLoss · text-embeddings-inference

Instructions for using tasksource/ModernBERT-base-embed with libraries, inference providers, notebooks, and local apps; follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use tasksource/ModernBERT-base-embed with sentence-transformers:

    from sentence_transformers import SentenceTransformer
    
    model = SentenceTransformer("tasksource/ModernBERT-base-embed")
    
    sentences = [
        "Daniel went to the kitchen. Sandra went back to the kitchen. Daniel moved to the garden. Sandra grabbed the apple. Sandra went back to the office. Sandra dropped the apple. Sandra went to the garden. Sandra went back to the bedroom. Sandra went back to the office. Mary went back to the office. Daniel moved to the bathroom. Sandra grabbed the apple. Sandra travelled to the garden. Sandra put down the apple there. Mary went back to the bathroom. Daniel travelled to the garden. Mary took the milk. Sandra grabbed the apple. Mary left the milk there. Sandra journeyed to the bedroom. John travelled to the office. John went back to the garden. Sandra journeyed to the garden. Mary grabbed the milk. Mary left the milk. Mary grabbed the milk. Mary went to the hallway. John moved to the hallway. Mary picked up the football. Sandra journeyed to the kitchen. Sandra left the apple. Mary discarded the milk. John journeyed to the garden. Mary dropped the football. Daniel moved to the bathroom. Daniel journeyed to the kitchen. Mary travelled to the bathroom. Daniel went to the bedroom. Mary went to the hallway. Sandra got the apple. Sandra went back to the hallway. Mary moved to the kitchen. Sandra dropped the apple there. Sandra grabbed the milk. Sandra journeyed to the bathroom. John went back to the kitchen. Sandra went to the kitchen. Sandra travelled to the bathroom. Daniel went to the garden. Daniel moved to the kitchen. Sandra dropped the milk. Sandra got the milk. Sandra put down the milk. John journeyed to the garden. Sandra went back to the hallway. Sandra picked up the apple. Sandra got the football. Sandra moved to the garden. Daniel moved to the bathroom. Daniel travelled to the garden. Sandra went back to the bathroom. Sandra discarded the football.",
        "In the adulthood stage, it can jump, walk, run",
        "The chocolate is bigger than the container.",
        "The football before the bathroom was in the garden."
    ]
    # Encode all sentences into fixed-size embeddings
    embeddings = model.encode(sentences)
    
    # Pairwise similarity scores (cosine similarity by default)
    similarities = model.similarity(embeddings, embeddings)
    print(similarities.shape)
    # torch.Size([4, 4])
  • Notebooks
  • Google Colab
  • Kaggle
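The `model.similarity` call in the snippet above defaults to cosine similarity between embedding pairs. A minimal pure-Python sketch of that computation, using toy low-dimensional vectors in place of the model's real 768-dimensional output:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: dot product over norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def similarity_matrix(embeddings):
    """All pairwise cosine similarities, mirroring model.similarity's default."""
    return [[cosine(u, v) for v in embeddings] for u in embeddings]

# Toy 4 x 3 vectors standing in for four real sentence embeddings.
emb = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [1.0, 0.0, 0.0],
]
sims = similarity_matrix(emb)
print(len(sims), len(sims[0]))  # 4 4
print(round(sims[0][2], 2))     # 0.71
```

Identical vectors score 1.0, orthogonal ones 0.0, which is why the snippet's 4 × 4 matrix has ones on the diagonal.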
ModernBERT-base-embed
2.53 GB
  • 1 contributor
History: 7 commits
Latest commit: Xenova (HF Staff), "Upload ONNX weights", 2413329, verified, over 1 year ago
  • 1_Pooling
    Add new SentenceTransformer model over 1 year ago
  • onnx
    Upload ONNX weights over 1 year ago
  • .gitattributes
    1.52 kB
    initial commit over 1 year ago
  • README.md
    234 kB
    Add new SentenceTransformer model over 1 year ago
  • config.json
    1.26 kB
    Add new SentenceTransformer model over 1 year ago
  • config_sentence_transformers.json
    210 Bytes
    Add new SentenceTransformer model over 1 year ago
  • model.safetensors
    596 MB
    Add new SentenceTransformer model over 1 year ago
  • modules.json
    229 Bytes
    Add new SentenceTransformer model over 1 year ago
  • sentence_bert_config.json
    54 Bytes
    Add new SentenceTransformer model over 1 year ago
  • special_tokens_map.json
    694 Bytes
    Add new SentenceTransformer model over 1 year ago
  • tokenizer.json
    3.58 MB
    Add new SentenceTransformer model over 1 year ago
  • tokenizer_config.json
    20.9 kB
    Add new SentenceTransformer model over 1 year ago
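The `1_Pooling` directory above holds the config that collapses per-token embeddings into one sentence vector. A hedged sketch of mean pooling over non-padding tokens, the common SentenceTransformer default (not verified against this repo's actual pooling config):

```python
def mean_pooling(token_embeddings, attention_mask):
    """Average the token vectors where attention_mask == 1, ignoring padding."""
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for i, x in enumerate(vec):
                total[i] += x
    return [t / count for t in total]

# Three tokens, last one padding; vectors are toy 2-dim stand-ins.
tokens = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pooling(tokens, mask))  # [2.0, 3.0]
```

Masking before averaging matters: padded positions would otherwise drag the sentence vector toward arbitrary values.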