metadata
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5822
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: |-
(confidentiality), 4-5.4 (professional independence of the lawyer),
and 4-5.5 (unauthorized practice of law). When retaining or
directing a nonlawyer outside the firm, a lawyer should
communicate directions appropriate under the circumstances to
give reasonable assurance that the nonlawyer’s conduct is
compatible with the professional obligations of the lawyer.
sentences:
- ¿Qué concluyó el TPI acerca del contrato entre las partes?
- What rule number addresses the unauthorized practice of law?
- >-
What document did the plaintiff allegedly send to the defendant on April
6, 2023?
- source_sentence: >-
Except as provided in paragraphs (b) and (d), the appeal must be perfected
within 30
days from the entry of the interlocutory order by filing a notice of
appeal ***. ***
(b) Motion to Vacate. If an interlocutory order is entered on ex parte
application,
the party intending to take an appeal therefrom shall first present, on
notice, a motion
sentences:
- ¿Cuál es el número del documento judicial mencionado en el extracto?
- >-
What is a requirement for a party intending to appeal an ex parte
interlocutory order?
- >-
What does the Alliant II GWAC for IT services require the agency to
evaluate?
- source_sentence: >-
who would then mail the video directly to the police. Id.
Finally, in Reyes v. State, 257 Md. App. 596 (2023), the Appellate Court
of
Maryland held that a video taken by a man’s residential security camera
was properly
authenticated when he testified to the camera’s “general reliability” and
other pertinent
sentences:
- Under which rule does the appellant assert jurisdiction?
- >-
What did the man testify to regarding the security camera in Reyes v.
State?
- >-
What conclusion cannot be made by the Court about the CIA's search
methods?
- source_sentence: >-
1 (alleging refusals to provide estimated dates of completion on October
18, October 24, and
November 3, 2012).
The Court concludes that this proposed amended must be denied for undue
delay. See,
e.g., Firestone, 76 F.3d at 1208. As alleged in the plaintiff’s First
Amended Complaint, the
plaintiff first became aware of the alleged Non-Provision of Completion
Date Policy in
sentences:
- What was the court's reason for denying the proposed amendment?
- >-
How does the court intend to address the issues regarding the CIA’s
Exemption 3 withholding decisions?
- Who submitted the first declaration?
- source_sentence: >-
Either way, the protégé firm’s project is subject to evaluation by the
agency, and that project is
assessed against the same evaluation criteria used to evaluate projects
submitted by offerors
generally. As Plaintiffs’ counsel aptly stated during Oral Argument, the
Solicitations’ terms offer
“a distinction without a difference.” Oral Arg. Tr. at 28:23–24.
sentences:
- What does the court reject?
- What is subject to evaluation by the agency?
- Where is the reference to the Appellate Court's opinion found?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: ModernBERT Embed base Legal Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5517774343122102
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5996908809891809
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6908809891808346
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7557959814528593
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5517774343122102
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5188047398248326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.39907264296754247
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.23384853168469857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.19951056156620295
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5135239567233385
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6388459556929418
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7458784131890778
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6552384058092013
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.598969603297269
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6390584877806069
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5332302936630603
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5734157650695518
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6615146831530139
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7465224111282844
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5332302936630603
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4992272024729521
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.38454404945904175
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.2312210200927357
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1907521895929933
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.49149922720247297
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6137300360638845
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7350592478104071
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6383508606262611
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5797569981109392
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6198878359190206
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5162287480680062
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5486862442040186
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6306027820710973
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7017001545595054
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5162287480680062
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4811952601751674
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3675425038639877
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.21808346213292115
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.18353941267387944
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4712776919113859
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5861669242658424
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6899793920659454
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6056043667170141
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5548097446088168
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5942340636200647
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.42967542503863987
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4605873261205564
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5564142194744977
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6568778979907264
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.42967542503863987
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.40649149922720246
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.32210200927357036
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.2017001545595054
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1491499227202473
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.393353941267388
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5097887686759403
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.634853168469861
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5347242820771732
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4756550379038785
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5205948927868919
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.31530139103554866
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.34930448222565685
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.41731066460587324
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5131375579598145
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.31530139103554866
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3008758371973209
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.23956723338485317
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.157032457496136
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.11269963936115403
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.29520865533230295
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3794435857805255
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.4929160226687274
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.40740874237250463
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.35583462132921173
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.39964199682487694
name: Cosine Map@100
ModernBERT Embed base Legal Matryoshka
This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: nomic-ai/modernbert-embed-base
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: json
- Language: en
- License: apache-2.0
Model Sources
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
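The Pooling and Normalize stages above can be sketched in plain NumPy (a minimal illustration of mean pooling over non-padding tokens followed by L2 normalization, not the library's internals; `token_embeddings` and `attention_mask` are made-up toy inputs):

```python
import numpy as np

# Toy stand-ins for a transformer's output: 2 sentences, 4 tokens, 8-dim embeddings
token_embeddings = np.random.rand(2, 4, 8)
attention_mask = np.array([[1, 1, 1, 0],
                           [1, 1, 0, 0]])  # 0 marks padding tokens

# Mean pooling: average the token embeddings, ignoring padding positions
mask = attention_mask[:, :, None]          # broadcast to (2, 4, 1)
summed = (token_embeddings * mask).sum(axis=1)
counts = mask.sum(axis=1)
sentence_embeddings = summed / counts

# Normalize: scale each vector to unit L2 norm, so a dot product equals cosine similarity
sentence_embeddings /= np.linalg.norm(sentence_embeddings, axis=1, keepdims=True)

print(sentence_embeddings.shape)  # (2, 8)
```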
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download the model from the 🤗 Hub
model = SentenceTransformer("ritesh-07/modernbert-embed-base-legal-matryoshka-2")
# Run inference
sentences = [
    'Either way, the protégé firm’s project is subject to evaluation by the agency, and that project is \nassessed against the same evaluation criteria used to evaluate projects submitted by offerors \ngenerally. As Plaintiffs’ counsel aptly stated during Oral Argument, the Solicitations’ terms offer \n“a distinction without a difference.” Oral Arg. Tr. at 28:23–24.',
    'What is subject to evaluation by the agency?',
    'What does the court reject?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
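Because the model was trained with MatryoshkaLoss, its 768-dimensional embeddings can be truncated to 512, 256, 128, or 64 dimensions and re-normalized, trading some retrieval quality for speed and storage (the Evaluation section quantifies the drop at each size). A minimal NumPy sketch, with random unit vectors standing in for real model output:

```python
import numpy as np

def truncate_and_renormalize(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components, then rescale each row to unit L2 norm."""
    truncated = embeddings[:, :dim]
    return truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

# Toy stand-ins for 768-dim model output
full = np.random.rand(3, 768)
full /= np.linalg.norm(full, axis=1, keepdims=True)

small = truncate_and_renormalize(full, 256)
print(small.shape)  # (3, 256)

# Cosine similarity between unit vectors is a plain dot product
sims = small @ small.T
print(sims.shape)   # (3, 3)
```

If the installed sentence-transformers version supports it, the same effect is available at load time via the `truncate_dim` argument, e.g. `SentenceTransformer("ritesh-07/modernbert-embed-base-legal-matryoshka-2", truncate_dim=256)`.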
Evaluation
Metrics
Information Retrieval
- Dataset: dim_768

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.5518 |
| cosine_accuracy@3 | 0.5997 |
| cosine_accuracy@5 | 0.6909 |
| cosine_accuracy@10 | 0.7558 |
| cosine_precision@1 | 0.5518 |
| cosine_precision@3 | 0.5188 |
| cosine_precision@5 | 0.3991 |
| cosine_precision@10 | 0.2338 |
| cosine_recall@1 | 0.1995 |
| cosine_recall@3 | 0.5135 |
| cosine_recall@5 | 0.6388 |
| cosine_recall@10 | 0.7459 |
| cosine_ndcg@10 | 0.6552 |
| cosine_mrr@10 | 0.5990 |
| cosine_map@100 | 0.6391 |
Information Retrieval
- Dataset: dim_512

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.5332 |
| cosine_accuracy@3 | 0.5734 |
| cosine_accuracy@5 | 0.6615 |
| cosine_accuracy@10 | 0.7465 |
| cosine_precision@1 | 0.5332 |
| cosine_precision@3 | 0.4992 |
| cosine_precision@5 | 0.3845 |
| cosine_precision@10 | 0.2312 |
| cosine_recall@1 | 0.1908 |
| cosine_recall@3 | 0.4915 |
| cosine_recall@5 | 0.6137 |
| cosine_recall@10 | 0.7351 |
| cosine_ndcg@10 | 0.6384 |
| cosine_mrr@10 | 0.5798 |
| cosine_map@100 | 0.6199 |
Information Retrieval
- Dataset: dim_256

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.5162 |
| cosine_accuracy@3 | 0.5487 |
| cosine_accuracy@5 | 0.6306 |
| cosine_accuracy@10 | 0.7017 |
| cosine_precision@1 | 0.5162 |
| cosine_precision@3 | 0.4812 |
| cosine_precision@5 | 0.3675 |
| cosine_precision@10 | 0.2181 |
| cosine_recall@1 | 0.1835 |
| cosine_recall@3 | 0.4713 |
| cosine_recall@5 | 0.5862 |
| cosine_recall@10 | 0.6900 |
| cosine_ndcg@10 | 0.6056 |
| cosine_mrr@10 | 0.5548 |
| cosine_map@100 | 0.5942 |
Information Retrieval
- Dataset: dim_128

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.4297 |
| cosine_accuracy@3 | 0.4606 |
| cosine_accuracy@5 | 0.5564 |
| cosine_accuracy@10 | 0.6569 |
| cosine_precision@1 | 0.4297 |
| cosine_precision@3 | 0.4065 |
| cosine_precision@5 | 0.3221 |
| cosine_precision@10 | 0.2017 |
| cosine_recall@1 | 0.1491 |
| cosine_recall@3 | 0.3934 |
| cosine_recall@5 | 0.5098 |
| cosine_recall@10 | 0.6349 |
| cosine_ndcg@10 | 0.5347 |
| cosine_mrr@10 | 0.4757 |
| cosine_map@100 | 0.5206 |
Information Retrieval
- Dataset: dim_64

| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy@1 | 0.3153 |
| cosine_accuracy@3 | 0.3493 |
| cosine_accuracy@5 | 0.4173 |
| cosine_accuracy@10 | 0.5131 |
| cosine_precision@1 | 0.3153 |
| cosine_precision@3 | 0.3009 |
| cosine_precision@5 | 0.2396 |
| cosine_precision@10 | 0.1570 |
| cosine_recall@1 | 0.1127 |
| cosine_recall@3 | 0.2952 |
| cosine_recall@5 | 0.3794 |
| cosine_recall@10 | 0.4929 |
| cosine_ndcg@10 | 0.4074 |
| cosine_mrr@10 | 0.3558 |
| cosine_map@100 | 0.3996 |
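As a reminder of what these retrieval metrics measure, here is a toy sketch computing accuracy@k (did any relevant document land in the top-k?) and recall@k (what fraction of relevant documents landed in the top-k?). The ranking and document IDs are hypothetical, not this model's output:

```python
def accuracy_at_k(ranked_ids, relevant_ids, k):
    """1.0 if any relevant document appears in the top-k results, else 0.0."""
    return float(any(doc in relevant_ids for doc in ranked_ids[:k]))

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of relevant documents found in the top-k results."""
    hits = sum(1 for doc in ranked_ids[:k] if doc in relevant_ids)
    return hits / len(relevant_ids)

# Hypothetical ranking for one query with two relevant documents
ranked = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2"}

print(accuracy_at_k(ranked, relevant, 1))  # 0.0 (top hit d3 is not relevant)
print(recall_at_k(ranked, relevant, 3))    # 0.5 (d1 found, d2 not yet)
print(recall_at_k(ranked, relevant, 5))    # 1.0 (both found)
```

The reported figures average these per-query scores over the whole evaluation set; ndcg@10, mrr@10, and map@100 additionally reward placing relevant documents higher in the ranking.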
Training Details
Training Dataset
json
- Dataset: json
- Size: 5,822 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:----------------------------------------------------|:--------------------------------------------------|
| type | string | string |
| details | min: 26 tokens, mean: 97.05 tokens, max: 160 tokens | min: 8 tokens, mean: 16.68 tokens, max: 46 tokens |
- Samples:
| positive | anchor |
|:---------|:-------|
| Martinez v. State. We explained that, in United States v. Vayner, 769 F.3d 125 (2d Cir. 2014), the Second Circuit had determined that Federal Rule of Evidence 901 “is satisfied if sufficient proof has been introduced so that a reasonable juror could find in favor of authenticity or identification.” Sublet, 442 Md. at 666, 113 A.3d at 715 (quoting Vayner, | What Federal Rule of Evidence did the Second Circuit interpret in United States v. Vayner? |
| was not a party, but which contained similar allegations to her complaint here.4 The seven-paragraph “Argument” section of defendant’s motion was divided equally between the two grounds, with the first paragraph quoting the statute, and the next three paragraphs arguing the first ground, and the following three paragraphs arguing the second ground. With respect to | How is the 'Argument' section of the defendant's motion divided? |
| 20 El derecho aplicable en el caso de epígrafe se remite al Código Civil de Puerto Rico de 1930, puesto que, la presentación de la Demanda y los hechos que dan base a esta tuvieron su lugar antes de la aprobación del nuevo Código Civil de Puerto Rico, Ley 55-2020, según enmendado. KLAN202300916 14 cumplimiento de los contratos, y no debemos relevar a una parte del | ¿Cuál es el número del documento judicial mencionado en el extracto? |
- Loss: MatryoshkaLoss with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
```
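With uniform weights, as configured above, MatryoshkaLoss amounts to summing the base loss over each truncated prefix of the embedding. A simplified NumPy sketch, where a toy in-batch contrastive loss stands in for the library's MultipleNegativesRankingLoss (the 20.0 similarity scale mirrors a commonly used default, and all inputs are random toy data):

```python
import numpy as np

def mnrl(anchors: np.ndarray, positives: np.ndarray) -> float:
    """Simplified MultipleNegativesRankingLoss: in-batch cross-entropy where
    each anchor's true positive sits on the diagonal of the score matrix."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = (a @ p.T) * 20.0  # scaled cosine similarities
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())

def matryoshka_loss(anchors, positives,
                    dims=(768, 512, 256, 128, 64),
                    weights=(1, 1, 1, 1, 1)) -> float:
    """Weighted sum of the base loss over truncated embedding prefixes."""
    return sum(w * mnrl(anchors[:, :d], positives[:, :d])
               for d, w in zip(dims, weights))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 768))    # toy anchor embeddings (batch of 8)
positives = rng.normal(size=(8, 768))  # toy positive embeddings
print(matryoshka_loss(anchors, positives))  # one scalar training loss
```

Training every prefix against the same objective is what lets the final embeddings be truncated at inference time with only a gradual quality drop.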
Training Hyperparameters
Non-Default Hyperparameters
eval_strategy: epoch
per_device_train_batch_size: 32
per_device_eval_batch_size: 16
gradient_accumulation_steps: 16
learning_rate: 2e-05
num_train_epochs: 4
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: True
tf32: False
load_best_model_at_end: True
optim: adamw_torch_fused
batch_sampler: no_duplicates
All Hyperparameters
Click to expand
overwrite_output_dir: False
do_predict: False
eval_strategy: epoch
prediction_loss_only: True
per_device_train_batch_size: 32
per_device_eval_batch_size: 16
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 16
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 2e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 4
max_steps: -1
lr_scheduler_type: cosine
lr_scheduler_kwargs: {}
warmup_ratio: 0.1
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
use_ipex: False
bf16: True
fp16: False
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: False
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: True
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch_fused
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: None
hub_always_push: False
hub_revision: None
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
include_for_metrics: []
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters:
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
include_tokens_per_second: False
include_num_input_tokens_seen: False
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
use_liger_kernel: False
liger_kernel_config: None
eval_use_gather_object: False
average_tokens_across_devices: False
prompts: None
batch_sampler: no_duplicates
multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:--------|:------|:--------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| 0.8791 | 10 | 5.6072 | - | - | - | - | - |
| 1.0 | 12 | - | 0.5880 | 0.5784 | 0.5408 | 0.4667 | 0.3408 |
| 1.7033 | 20 | 2.5041 | - | - | - | - | - |
| 2.0 | 24 | - | 0.6403 | 0.6249 | 0.5903 | 0.5162 | 0.3884 |
| 2.5275 | 30 | 1.8714 | - | - | - | - | - |
| 3.0 | 36 | - | 0.6550 | 0.6347 | 0.6034 | 0.5320 | 0.4023 |
| 3.3516 | 40 | 1.524 | - | - | - | - | - |
| **4.0** | **48** | **-** | **0.6552** | **0.6384** | **0.6056** | **0.5347** | **0.4074** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.53.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title = {Matryoshka Representation Learning},
    author = {Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year = {2024},
    eprint = {2205.13147},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title = {Efficient Natural Language Response Suggestion for Smart Reply},
    author = {Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year = {2017},
    eprint = {1705.00652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}
```