---
license: mit
language:
  - en
widget:
  - source_sentence: 'Ventral humeral ridge: or not'
    sentences:
      - >-
        If metasternum ossified, shape: long, narrow and tapering markedly
        anteriorly to posteriorly, length up to 3.5 times maximum width
      - >-
        Astragalus, dorsolateral margin:: overlaps the anterior and posterior
        portions of the calcaneum equally
      - 'Ulna size: does not apply'
  - source_sentence: >-
      Form of distal portion of anteroventral process of ectopterygoid:
      varyingly falcate
    sentences:
      - 'Middle and distal radials in dorsal and anal fins: absent'
      - >-
        Degree of development of primitively medial portion of fourth upper
        pharyngeal tooth-plate: fourth upper pharyngeal tooth-plate covers
        ventral, posterior, dorsal and sometimes anterior surfaces of fourth
        infrapharyngobranchial
      - 'Shape of pharyngeal apophysis (basioccipital): forked anteriorly'
  - source_sentence: >-
      Form of distal portion of anteroventral process of ectopterygoid:
      varyingly falcate
    sentences:
      - 'parhypural: present'
      - 'Epural: heavy'
      - 'First infraorbital: short'
  - source_sentence: >-
      Form of distal portion of anteroventral process of ectopterygoid:
      varyingly falcate
    sentences:
      - 'Dentary and angular: touch'
      - 'Urohyal and first basibranchial: firmly attached'
      - 'Supraneural 3-4 (nonadditive): absent'
  - source_sentence: >-
      Form of distal portion of anteroventral process of ectopterygoid:
      varyingly falcate
    sentences:
      - 'Ventral diverging lamellae of mesethmoid: lamellae reduced or absent'
      - 'Ventral ridge of the coracoid with a posterior process: absent'
      - 'carpals: fully or partially ossified'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
base_model: sentence-transformers/all-mpnet-base-v2
metrics:
  - pearson_cosine
  - spearman_cosine
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: pheno dev
          type: pheno-dev
        metrics:
          - type: pearson_cosine
            value: 0.6082332469417436
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.6250387873495056
            name: Spearman Cosine
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: pheno test
          type: pheno-test
        metrics:
          - type: pearson_cosine
            value: 0.6822053314599665
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.705688010939619
            name: Spearman Cosine
tags:
  - ontology
  - nlp
  - biology
  - animals
  - fish
  - embedding
  - trait
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - loss:CoSENTLoss
datasets:
  - imageomics/char-sim-data
model_name: Trait2Vec
model_description: >-
  Language model for embedding organismal trait descriptions. Built using the
  Sentence Transformers architecture and trained with trait descriptions from
  Imageomics/char-sim-data.
---

# Model Card for Trait2Vec

Trait2Vec is a language model for embedding organismal trait descriptions in a way that preserves the structure induced by a semantic similarity metric (e.g., SimGIC). The model was trained on the [Character Similarity Dataset](https://huggingface.co/datasets/imageomics/char-sim-data) and fine-tuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). Through qualitative data exploration, we observe that the cosine similarity between embeddings of raw trait descriptions is proportional to the semantic similarity of their corresponding ontological representations.

## Model Details

### Model Description

- **Developed by:** Juan Garcia, Soumyashree Kar, Jim Balhoff, Hilmar Lapp
- **Model type:** Sentence Transformer
- **Language(s) (NLP):** English
- **License:** MIT
- **Fine-tuned from model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)

### Model Sources

- **Repository:** https://huggingface.co/imageomics/trait2vec

## Uses

Trait2Vec has been qualitatively evaluated on its ability to embed raw trait descriptions in a way that preserves the structure of an ontology. Accordingly, we expect it to provide an alternative computational representation of the traits of an organism.

### Direct Use

It can be used to embed the textual trait descriptions associated with an organism.
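As a minimal sketch (the trait descriptions below are taken from the examples above; the mean-pooling step is one illustrative way to aggregate trait embeddings into an organism-level vector, not a method prescribed by this card):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("imageomics/trait2vec")

# Trait descriptions recorded for one organism
traits = [
    "Dentary and angular: touch",
    "Epural: heavy",
    "First infraorbital: short",
]

# One 256-dimensional embedding per trait description
trait_embeddings = model.encode(traits)

# One possible organism-level representation: the mean trait embedding
organism_vector = np.mean(trait_embeddings, axis=0)
print(organism_vector.shape)  # (256,)
```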

## Bias, Risks, and Limitations

This model is fine-tuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2), and therefore inherits its biases and risks. The training dataset (the [Character Similarity Dataset](https://huggingface.co/datasets/imageomics/char-sim-data)) introduces the biases of a single similarity metric and ontology, so the embedding inherits that metric's inductive biases, coverage gaps, and evolving definitions. Biological conclusions may differ under alternative metrics (e.g., Resnik, Jaccard) or other phenotype ontologies.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("imageomics/trait2vec")
# Run inference
sentences = [
    'Form of distal portion of anteroventral process of ectopterygoid: varyingly falcate',
    'Ventral ridge of the coracoid with a posterior process: absent',
    'carpals: fully or partially ossified',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Training Details

### Training Data

This model was trained on the Character Similarity Dataset.

- Size: 438,516 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

  |         | sentence1 | sentence2 | score |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | float |
  | details | min: 9 tokens, mean: 42.84 tokens, max: 164 tokens | min: 4 tokens, mean: 22.8 tokens, max: 164 tokens | min: 0.0, mean: 0.1, max: 0.61 |

- Samples:

  | sentence1 | sentence2 | score |
  |:----------|:----------|:------|
  | Gill raker shape between ceratobranchial 1 and ceratobranchials 2--4: Homomorphic | Extent of development of inferior lamella of lateral ethmoid: inferior lamella absent | 0.014706667500582846 |
  | Gill raker shape between ceratobranchial 1 and ceratobranchials 2--4: Homomorphic | Shape of anal-fin pterygiophore tips: tips of pterygiophores shaped like an arrow-head; axial series of pterygiophores providing the ventral margin of the anal-fin base a scalloped appearance | 0.030538703023734296 |
  | Gill raker shape between ceratobranchial 1 and ceratobranchials 2--4: Homomorphic | Suprapreopercle: present | 0.3385057414877959 |

- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
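For context, a minimal sketch of how such a CoSENTLoss objective is set up in Sentence Transformers (the dataset split and column names are assumptions based on the statistics above; this is not the exact training script):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss
from sentence_transformers.util import pairwise_cos_sim

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Pairs of trait descriptions with a float similarity score
# (split name assumed)
train_dataset = load_dataset("imageomics/char-sim-data", split="train")

# CoSENTLoss ranks the pairwise cosine similarities of the embeddings
# against the ground-truth scores, using the scale shown above
loss = CoSENTLoss(model, scale=20.0, similarity_fct=pairwise_cos_sim)
```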

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 1e-06
- Training regime: fp32
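As a sketch, these values map onto `SentenceTransformerTrainingArguments` in the Sentence Transformers v3 trainer as follows (the output path is hypothetical):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/trait2vec",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=1e-6,
    # fp32 regime: fp16/bf16 are left at their default of False
)
```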

## Evaluation

We tested Trait2Vec on a held-out split comprising 20% of the Character Similarity Dataset. Note that the split does not ensure the absence of descriptor overlap between the training and test pairs.

### Testing Data, Factors & Metrics

#### Testing Data

- Size: 111,628 evaluation samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

  |         | sentence1 | sentence2 | score |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | float |
  | details | min: 9 tokens, mean: 17.19 tokens, max: 24 tokens | min: 3 tokens, mean: 21.97 tokens, max: 143 tokens | min: 0.0, mean: 0.1, max: 0.86 |

- Samples:

  | sentence1 | sentence2 | score |
  |:----------|:----------|:------|
  | Ventral humeral ridge: or not | Metacarpals, Metacarpal I, presence: absent | 0.05558851078197206 |
  | Ventral humeral ridge: or not | Metapterygoid–quadrate fenestra: absent | 0.004860625129173212 |
  | Ventral humeral ridge: or not | Dorsal and ventral borders of the maxillary articular process: straight or slightly curved ventrally | 0.10380567059620477 |

- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```

#### Metrics

Semantic similarity: Pearson and Spearman correlation between the cosine similarity of the embeddings and the ground-truth similarity scores (`pearson_cosine` and `spearman_cosine`).

### Results

| Metric          | Validation set | Test set |
|:----------------|:---------------|:---------|
| pearson_cosine  | 0.6082         | 0.6822   |
| spearman_cosine | 0.6250         | 0.7057   |
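These correlations can be recomputed with the `EmbeddingSimilarityEvaluator` from Sentence Transformers; a minimal sketch (split and column names are assumptions, adjust to the actual dataset layout):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("imageomics/trait2vec")

# Assumed split name
test = load_dataset("imageomics/char-sim-data", split="test")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=test["sentence1"],
    sentences2=test["sentence2"],
    scores=test["score"],
    main_similarity=SimilarityFunction.COSINE,
    name="pheno-test",
)
# Returns a dict with keys such as pheno-test_pearson_cosine
print(evaluator(model))
```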

### Summary

Trait2Vec embeds organismal trait descriptors in a way that preserves some of the ranking structure induced by the ontology-based similarity metric.

## Environmental Impact

Experiments were conducted using private infrastructure with a carbon efficiency of 0.432 kgCO₂eq/kWh. A cumulative 20 hours of computation was performed on hardware of type A100 PCIe 40/80GB (TDP of 250W).

Total emissions are estimated to be 2.16 kgCO₂eq, of which 0 percent was directly offset.
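The estimate follows directly from energy use times carbon efficiency:

```python
# 20 h on a 250 W (TDP) A100, at 0.432 kgCO2eq/kWh
energy_kwh = 20 * 250 / 1000       # = 5.0 kWh
emissions_kg = energy_kwh * 0.432  # = 2.16 kgCO2eq
```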

Estimations were conducted using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in:

```bibtex
@article{lacoste2019quantifying,
  title={Quantifying the Carbon Emissions of Machine Learning},
  author={Lacoste, Alexandre and Luccioni, Alexandra and Schmidt, Victor and Dandres, Thomas},
  journal={arXiv preprint arXiv:1910.09700},
  year={2019}
}
```

## Model Architecture and Objective

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
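To confirm the 256-dimensional output of the Dense projection head on the loaded model, a small sketch:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("imageomics/trait2vec")

# The Dense head projects the 768-dim pooled MPNet output down to 256 dims
print(model.get_sentence_embedding_dimension())  # 256
print(model.max_seq_length)                      # 256 (tokens)
```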

## Software

- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.48.1
- PyTorch: 2.5.1.post303
- Accelerate: 1.3.0
- Datasets: 2.14.4
- Tokenizers: 0.21.0

## Citation

If you use this model in your research, please cite both it and the source model & method from which it was fine-tuned:

### Model

```bibtex
@software{trait2vec2025,
  author = {Juan Garcia and Soumyashree Kar and Jim Balhoff and Hilmar Lapp},
  doi = {10.57967/hf/6892},
  title = {Trait2Vec (Revision f39747b)},
  version = {1.0.0},
  year = {2025},
  url = {https://huggingface.co/imageomics/trait2vec}
}
```

### Source Model & Method

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    pages = "3982-3992",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1410/",
    doi = "10.18653/v1/D19-1410"
}
```

#### CoSENTLoss

```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847}
}
```

## Acknowledgements

This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

## Model Card Authors

Juan Garcia

## Model Card Contact

Please open a Discussion on the Community Tab with any questions on the model.