Paper: Matryoshka Representation Learning (https://arxiv.org/abs/2205.13147)
This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
The full model architecture:

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
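Because the pipeline ends with a Normalize() module, every embedding lies on the unit sphere, so dot-product and cosine similarity coincide. A minimal sketch verifying this (the input sentences are arbitrary placeholders):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Pravallika2001/modernbert-embed-base-legal-matryoshka-1")
embeddings = model.encode(["a short legal sentence", "another placeholder sentence"])

# The Normalize() module guarantees unit L2 norm, so both values print as ~1.0.
print(np.linalg.norm(embeddings, axis=1))
```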
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Pravallika2001/modernbert-embed-base-legal-matryoshka-1")

# Run inference
sentences = [
    'this information to represent the client effectively and, if necessary, \nto advise the client to refrain from wrongful conduct. Almost \nwithout exception, clients come to lawyers in order to determine \ntheir rights and what is, in the complex of laws and regulations, \ndeemed to be legal and correct. Based on experience, lawyers know',
    'What may lawyers advise their clients to refrain from?',
    "Does the regulation’s definition of 'permanent' support the Government’s argument?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
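Because the model was trained with a Matryoshka objective, its embeddings can also be truncated to 512, 256, 128, or 64 dimensions with only a modest quality drop (see the evaluation table below). A minimal sketch using the `truncate_dim` argument available in recent sentence-transformers releases, reusing the `sentences` list from the snippet above:

```python
from sentence_transformers import SentenceTransformer

# Ask encode() to return only the first 256 Matryoshka dimensions.
model = SentenceTransformer(
    "Pravallika2001/modernbert-embed-base-legal-matryoshka-1",
    truncate_dim=256,
)
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 256)

# Cosine similarity still works on the truncated vectors.
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```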
Evaluated with InformationRetrievalEvaluator at truncated embedding dimensions 768, 512, 256, 128, and 64:

| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|---|---|---|---|---|---|
| cosine_accuracy@1 | 0.5518 | 0.5564 | 0.5193 | 0.4482 | 0.3338 |
| cosine_accuracy@3 | 0.6012 | 0.5966 | 0.5657 | 0.4807 | 0.3632 |
| cosine_accuracy@5 | 0.6878 | 0.6723 | 0.6445 | 0.5734 | 0.4529 |
| cosine_accuracy@10 | 0.7543 | 0.7434 | 0.7156 | 0.6785 | 0.5348 |
| cosine_precision@1 | 0.5518 | 0.5564 | 0.5193 | 0.4482 | 0.3338 |
| cosine_precision@3 | 0.5193 | 0.5188 | 0.4869 | 0.4209 | 0.3168 |
| cosine_precision@5 | 0.3975 | 0.3913 | 0.3716 | 0.3233 | 0.2519 |
| cosine_precision@10 | 0.2297 | 0.2266 | 0.2184 | 0.2068 | 0.1595 |
| cosine_recall@1 | 0.2026 | 0.2039 | 0.1918 | 0.161 | 0.1179 |
| cosine_recall@3 | 0.5188 | 0.5184 | 0.4879 | 0.418 | 0.3153 |
| cosine_recall@5 | 0.6425 | 0.635 | 0.603 | 0.5238 | 0.412 |
| cosine_recall@10 | 0.7391 | 0.7294 | 0.703 | 0.6646 | 0.5143 |
| cosine_ndcg@10 | 0.6521 | 0.6476 | 0.6172 | 0.5579 | 0.4263 |
| cosine_mrr@10 | 0.5987 | 0.5979 | 0.5639 | 0.4948 | 0.3752 |
| cosine_map@100 | 0.6393 | 0.6366 | 0.6046 | 0.5373 | 0.4193 |
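These numbers come from sentence-transformers' InformationRetrievalEvaluator, run once per Matryoshka dimension. A hedged sketch of how such an evaluation can be reproduced; the `queries`, `corpus`, and `relevant_docs` contents below are placeholders, not the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Pravallika2001/modernbert-embed-base-legal-matryoshka-1")

# Placeholder data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "Who has at least indirect responsibility for all work being done by the firm?"}
corpus = {"d1": "lawyer having supervisory authority over performance of specific work"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    truncate_dim=256,  # score the 256-dimensional Matryoshka slice
    name="dim_256",
)
results = evaluator(model)
print(results)  # cosine accuracy/precision/recall@k, ndcg@10, mrr@10, map@100
```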
Training dataset: the json dataset, with two string columns, positive (a passage from the legal corpus) and anchor (a question answerable from that passage):

| | positive | anchor |
|---|---|---|
| type | string | string |

Sample pairs:

| positive | anchor |
|---|---|
| communications was evidence of the defendant’s guilt; that is, what the defendant said in | Which pages of the cited document discuss the defendant's communications and their evidentiary value? |
| lawyer having supervisory authority over performance of specific | Who has at least indirect responsibility for all work being done by the firm? |
| cuando el demandado contesta la demanda y niega su | ¿Cuál es la razón que el demandado cree tener para oponerse a la demanda? |
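Such pairs can be loaded with the datasets library; a minimal sketch, where the file name train.json is hypothetical since the card only identifies the source as a "json" dataset:

```python
from datasets import load_dataset

# Hypothetical file name; the card does not specify the actual data file.
dataset = load_dataset("json", data_files="train.json", split="train")
print(dataset.column_names)  # expected: ['positive', 'anchor']
```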
MatryoshkaLoss with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
```
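In code, this corresponds to wrapping MultipleNegativesRankingLoss (an in-batch-negatives ranking loss) in MatryoshkaLoss, which applies the inner loss to each truncated slice of the embedding and sums the equally weighted terms. A sketch, assuming the base model named on this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# In-batch negatives loss, evaluated on every Matryoshka slice of the embeddings.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on all listed dimensions at every step
)
```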
The following non-default training hyperparameters were used (all remaining arguments kept their Hugging Face Transformers and sentence-transformers defaults, e.g. weight_decay: 0.0, seed: 42, max_grad_norm: 1.0):

- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates

Training logs (loss and cosine ndcg@10 per Matryoshka dimension; a sketch of the training setup follows the table):

| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|---|---|---|---|---|---|---|---|
| 0.8791 | 10 | 89.4929 | - | - | - | - | - |
| 1.0 | 12 | - | 0.6233 | 0.6056 | 0.5715 | 0.5117 | 0.3814 |
| 1.7033 | 20 | 40.7733 | - | - | - | - | - |
| 2.0 | 24 | - | 0.6495 | 0.6425 | 0.6064 | 0.5491 | 0.4172 |
| 2.5275 | 30 | 29.6387 | - | - | - | - | - |
| 3.0 | 36 | - | 0.6512 | 0.6476 | 0.6172 | 0.5554 | 0.4252 |
| 3.3516 | 40 | 26.8564 | - | - | - | - | - |
| 3.7033 | 44 | - | 0.6521 | 0.6476 | 0.6172 | 0.5579 | 0.4263 |
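A hedged sketch of how the non-default hyperparameters above map onto the sentence-transformers v3 training API. The one-row dataset is a placeholder, and save_strategy="epoch" is an assumption needed for best-model loading to work:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# Placeholder dataset with the card's (positive, anchor) columns.
train_dataset = Dataset.from_dict({
    "positive": ["clients come to lawyers in order to determine their rights"],
    "anchor": ["Why do clients come to lawyers?"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-legal-matryoshka-1",
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: matches eval_strategy so the best model can be restored
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; the real run used a held-out split
    loss=loss,
)
trainer.train()
```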
BibTeX:

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
Base model: answerdotai/ModernBERT-base