---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:1893949
  - loss:Contrastive
base_model: nreimers/MiniLM-L6-H384-uncased
widget:
  - source_sentence: what medicine can i give my dog for kennel cough?
    sentences:
      - >-
        Your veterinarian can prescribe a round of antibiotics to help your dog
        recover faster. Some of the most widely prescribed medications for
        Kennel Cough are Baytril, Doxycycline, and Clavamox. However, because
        the disease is caused by both a virus and bacteria, the dog will require
        a dual-purpose treatment.
      - >-
        Preventing Kennel Cough The kennel cough vaccination is given once a
        year, but should not be given at the same time as their annual booster.
        If you're going on holiday, it's important that your dog is vaccinated
        against kennel cough. Often kennels will not accept your dog if it has
        not had a kennel cough vaccination.
      - >-
        The Driver Guide A driver guide will act as your driver and tour guide
        in one person. He is less knowledgeable than the tour guide, but can
        provide you with the most important information, help you to get around
        at the destination and give tips about the best photo spots.
  - source_sentence: how to redirect http traffic to https in tomcat?
    sentences:
      - >-
        Yes, but it depends on the previous contents of the bag. FoodSaver® Bags
        that previously contained fruits, vegetables, breads and dry goods can
        be washed and reused. ... FoodSaver® Bags that contained greasy or oily
        foods should also be discarded, as they may be difficult to clean.
        FoodSaver® Bags can be washed by hand.
      - >-
        ['Go to SymantecDLP\\Protect\\tomcat\\conf directory.', 'Edit the file
        server.xml.', 'Add the following above the first <connector> entry: ...
        ', 'Save the server. ... ', 'Edit the web.xml file in the same
        directory.', 'Scroll to the bottom of the file and add the following
        just above the </web-app> entry: ... ', 'Save the web.xml file.']
      - >-
        Cause: The Enforce Console's tomcat webserver is configured to only
        accept HTTPS requests. Any non-secure HTTP request will not be
        redirected. By default the tomcat webserver is not configured to
        redirect HTTP requests to HTTPS.
  - source_sentence: when did the last rick and morty air?
    sentences:
      - >-
        Instead of Naota, the main character of the original FLCL (pronounced
        “Fooly Cooly”), Progressive focuses on Hidomi, a reserved girl who
        closes herself off from the world by wearing headphones (that aren't
        actually playing music).
      - >-
        On May 10, 2018, Adult Swim announced a long-term deal with the
        creators, ordering 70 new episodes of Rick and Morty over an unknown
        number of seasons. As of May 31, 2020, 41 episodes of Rick and Morty
        have aired, concluding the fourth season.
      - >-
        What time is Rick and Morty season 4 episode 8 out? The latest episode
        of Rick and Morty airs on Sunday, May 17 on Adult Swim. US fans can
        stream the new episode of Rick and Morty when it airs on the channel.
  - source_sentence: are clothes made of plastic?
    sentences:
      - >-
        It might surprise you, but you're probably wearing plastic clothes. ...
        Many of our clothes contain plastics like polyester, nylon, acrylic and
        polyamide. In fact most new fabrics are made of plastic – up to 64% of
        them. The thing is, every time we wash these materials they shed
        millions of plastic microfibres.
      - >-
        General Pharmacology. Beta-blockers are drugs that bind to
        beta-adrenoceptors and thereby block the binding of norepinephrine and
        epinephrine to these receptors. ... Second generation beta-blockers are
        more cardioselective in that they are relatively selective for β1
        adrenoceptors.
      - >-
        It might surprise you, but you're probably wearing plastic clothes. ...
        Many of our clothes contain plastics like polyester, nylon, acrylic and
        polyamide. In fact most new fabrics are made of plastic – up to 64% of
        them. The thing is, every time we wash these materials they shed
        millions of plastic microfibres.
  - source_sentence: are emg pickups any good?
    sentences:
      - >-
        EMGs are a one trick pony, and only sound good for high gain
        applications. Sort of, they definitely aren't as flexible as most
        passive options, but most metal oriented passive pickups have the same
        issue. A lot of guitarists forget that EMG makes more pickups than just
        the 81/85 set.
      - >-
        Among guitar and bass accessories, the company sells active humbucker
        pickups, such as the EMG 81, the EMG 85, the EMG 60, and the EMG 89.
        They also produce passive pickups such as the EMG-HZ Series, which
        include SRO-OC1's and SC Sets.
      - >-
        You can find the star next to the abandoned mansion. The Treasure Map
        Loading Screen is unlocked through the battle pass, and if you look at
        the treasure map loading screen you'll see a knife pointing in this
        location.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

ColBERT based on nreimers/MiniLM-L6-H384-uncased

This is a sentence-transformers model fine-tuned from nreimers/MiniLM-L6-H384-uncased. It maps sentences and paragraphs to a 128-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nreimers/MiniLM-L6-H384-uncased
  • Maximum Sequence Length: 31 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

ColBERT(
  (0): Transformer({'max_seq_length': 31, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Dense({'in_features': 384, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
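
The Dense module is a bias-free linear projection from MiniLM's 384-dimensional hidden states down to the 128-dimensional output space, with an identity activation (no nonlinearity). A minimal sketch of that projection in isolation, using illustrative tensors rather than the model's actual weights:

import torch
import torch.nn as nn

# Mirror of the Dense module above: 384 -> 128, no bias, identity activation.
projection = nn.Linear(in_features=384, out_features=128, bias=False)

# Illustrative hidden states: batch of 2 sequences, 31 tokens, 384 dimensions.
hidden_states = torch.randn(2, 31, 384)
print(projection(hidden_states).shape)  # torch.Size([2, 31, 128])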

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ayushexel/colbert-MiniLM-L6-H384-uncased-gooaq-1995000")
# Run inference
sentences = [
    'are emg pickups any good?',
    "EMGs are a one trick pony, and only sound good for high gain applications. Sort of, they definitely aren't as flexible as most passive options, but most metal oriented passive pickups have the same issue. A lot of guitarists forget that EMG makes more pickups than just the 81/85 set.",
    "Among guitar and bass accessories, the company sells active humbucker pickups, such as the EMG 81, the EMG 85, the EMG 60, and the EMG 89. They also produce passive pickups such as the EMG-HZ Series, which include SRO-OC1's and SC Sets.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 128]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
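
Because the card titles this a ColBERT model and trains it with a PyLate contrastive loss (see Training Details below), checkpoints like this are often scored with late interaction (MaxSim) over per-token embeddings rather than a single cosine similarity. A minimal sketch of MaxSim scoring, assuming you already have L2-normalized token-level embeddings for a query and a document; the random tensors below are placeholders:

import torch

def maxsim(query_tokens, doc_tokens):
    # Late interaction: for each query token, take the best (maximum)
    # dot product over all document tokens, then sum over query tokens.
    sim = query_tokens @ doc_tokens.T      # (n_query_tokens, n_doc_tokens)
    return sim.max(dim=1).values.sum()     # scalar relevance score

query_tokens = torch.nn.functional.normalize(torch.randn(31, 128), dim=-1)
doc_tokens = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)
print(maxsim(query_tokens, doc_tokens))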

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,893,949 training samples
  • Columns: question, answer, and negative
  • Approximate statistics based on the first 1000 samples:
    • question: string, min 9 / mean 13.01 / max 27 tokens
    • answer: string, min 16 / mean 31.78 / max 32 tokens
    • negative: string, min 14 / mean 31.66 / max 32 tokens
  • Samples:
    • question: what is the relationship between humility and thankfulness?
      answer: how gratitude can influence humility and vice versa. Humility is characterized by low self-focus, secure sense of self, and increased valuation of others. Gratitude is marked by a sense that one has benefited from the actions of another.
      negative: -hum-, root. -hum- comes from Latin, where it has the meaning "ground. '' This meaning is found in such words as: exhume, humble, humiliate, humility, humus, posthumous.
    • question: what is the difference between usb a b c?
      answer: The USB-A has a much larger physical connector than the Type C, Type C is around the same size as a micro-USB connector. Unlike, Type A, you won't need to try and insert it, flip it over and then flip it over once more just to find the right orientation when trying to make a connection.
      negative: First the transfer rates: USB 2.0 offers transfer rates of 480 Mbps and USB 3.0 offers transfer rates of 4.8 Gbps - that's 10 times faster. ... USB 2.0 provided up to 500 mA whereas USB 3.0 provides up to 900 mA, allowing power hungry devices to now be bus powered.
    • question: how hyaluronic acid is made?
      answer: Hyaluronic acid is a substance that is naturally present in the human body. It is found in the highest concentrations in fluids in the eyes and joints. The hyaluronic acid that is used as medicine is extracted from rooster combs or made by bacteria in the laboratory.
      negative: Hyaluronic acid helps your skin hang on to the moisture. 2. ... Hyaluronic acid by itself is non-comedogenic (doesn't clog pores), but you should be careful when choosing a hyaluronic acid serum that the ingredient list doesn't contain any sneaky pore-clogging ingredients you're not expecting.
  • Loss: pylate.losses.contrastive.Contrastive
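
Each training sample is a (question, answer, negative) triple. As a rough illustration of what a contrastive loss does with such triples, the sketch below scores the question against its paired answer and against the hard negative, then applies cross-entropy so the positive pair wins. This is a generic single-vector illustration, not the actual pylate.losses.contrastive.Contrastive implementation:

import torch
import torch.nn.functional as F

def triplet_contrastive_loss(q, pos, neg):
    # q, pos, neg: (batch, dim) embeddings for question / answer / negative.
    pos_scores = (q * pos).sum(dim=-1, keepdim=True)      # (batch, 1)
    neg_scores = (q * neg).sum(dim=-1, keepdim=True)      # (batch, 1)
    logits = torch.cat([pos_scores, neg_scores], dim=-1)  # (batch, 2)
    labels = torch.zeros(q.size(0), dtype=torch.long)     # positive is class 0
    return F.cross_entropy(logits, labels)

q, pos, neg = (torch.randn(4, 128) for _ in range(3))
print(triplet_contrastive_loss(q, pos, neg))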

Evaluation Dataset

Unnamed Dataset

  • Size: 5,000 evaluation samples
  • Columns: question, answer, and negative_1
  • Approximate statistics based on the first 1000 samples:
    • question: string, min 9 / mean 12.96 / max 22 tokens
    • answer: string, min 19 / mean 31.7 / max 32 tokens
    • negative_1: string, min 14 / mean 31.43 / max 32 tokens
  • Samples:
    • question: are tefal ingenio pans suitable for induction hobs?
      answer: Tefal Ingenio is a revolutionary concept that brings a whole new take on versatility. ... The frying pans also feature Tefal's iconic Thermo-Spot which lets you know when the pan has reached optimal cooking temperature. The Ingenio Induction range is compatible with all hobs and is also dishwasher safe.
      negative_1: Tefal Ingenio is a revolutionary concept that brings a whole new take on versatility. ... The frying pans also feature Tefal's iconic Thermo-Spot which lets you know when the pan has reached optimal cooking temperature. The Ingenio Induction range is compatible with all hobs and is also dishwasher safe.
    • question: how many continuing education hours is acls?
      answer: The ACLS, PALS, and NRP certification courses are approved for 8 CEUs/CMEs, and recertification courses are approved for 4 CEUs/CMEs. The BLS certification course is approved for 4 CEUs/CMEs and the recertification course is approved for 2 CEUs/CMEs. For more information, please visit our Accreditation page.
      negative_1: The foremost difference between the two is their advancement level. Essentially, ACLS is a sophisticated and more advanced course and builds upon the major fundamentals developed during BLS. The main purpose of BLS and ACLS certification are well explained in this article.
    • question: what are the health benefits of drinking peppermint tea?
      answer: ['Makes you Stress Free. When it comes to relieving stress and anxiety, peppermint tea is one of the best allies. ... ', 'Sleep-Friendly. ... ', 'Aids in Weight Loss. ... ', 'Cure for an Upset Stomach. ... ', 'Improves Digestion. ... ', 'Boosts Immune System. ... ', 'Fights Bad Breath.']
      negative_1: Peppermint tea is a popular herbal tea that is naturally calorie- and caffeine-free. Some research has suggested that the oils in peppermint may have a number of other health benefits, such as fresher breath, better digestion, and reduced pain from headaches. Peppermint tea also has antibacterial properties.
  • Loss: pylate.losses.contrastive.Contrastive

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • learning_rate: 3e-06
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • dataloader_num_workers: 12
  • load_best_model_at_end: True
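
These non-defaults map directly onto Hugging Face-style trainer arguments; a sketch of the corresponding configuration (output_dir is a placeholder, and the argument names simply mirror the list above):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",                 # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=3e-6,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    dataloader_num_workers=12,
    load_best_model_at_end=True,
)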

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-06
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 12
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
0.0001 1 10.8061
0.0270 200 8.9391
0.0541 400 5.1795
0.0811 600 2.3951
0.1081 800 1.6927
0.1352 1000 1.404
0.1622 1200 1.2496
0.1892 1400 1.1613
0.2162 1600 1.0843
0.2433 1800 1.0427
0.2703 2000 1.0005
0.2973 2200 0.9695
0.3244 2400 0.9325
0.3514 2600 0.9122
0.3784 2800 0.8832
0.4055 3000 0.8689
0.4325 3200 0.8626
0.4595 3400 0.8452
0.4866 3600 0.8329
0.5136 3800 0.8132
0.5406 4000 0.8111
0.5676 4200 0.7952
0.5947 4400 0.7892
0.6217 4600 0.7772
0.6487 4800 0.7793
0.6758 5000 0.7705
0.7028 5200 0.7692
0.7298 5400 0.7625
0.7569 5600 0.7595
0.7839 5800 0.7405
0.8109 6000 0.7513
0.8380 6200 0.7396
0.8650 6400 0.7312
0.8920 6600 0.7325
0.9190 6800 0.7371
0.9461 7000 0.7422
0.9731 7200 0.7296

Framework Versions

  • Python: 3.11.0
  • Sentence Transformers: 4.0.1
  • Transformers: 4.50.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1
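
To reproduce this environment, the listed library versions can be pinned at install time (the PyTorch build string depends on your CUDA setup, so it is omitted here):

pip install "sentence-transformers==4.0.1" "transformers==4.50.3" "accelerate==1.5.2" "datasets==3.5.0" "tokenizers==0.21.1"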

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}