Model Card for Trait2Vec
Trait2Vec is a language model that embeds organismal trait descriptions in a way that preserves the structure induced by a semantic similarity metric (e.g., SimGIC). The model was trained on the Character Similarity Dataset and is fine-tuned from sentence-transformers/all-mpnet-base-v2. Through qualitative data exploration we observe that the cosine similarity between embeddings of raw trait descriptions correlates with the semantic similarity of their corresponding ontological representations.
Model Details
Model Description
- Developed by: Juan Garcia, Soumyashree Kar, Jim Balhoff, Hilmar Lapp
- Model type: Sentence Transformer
- Language(s) (NLP): English
- License: MIT
- Fine-tuned from model: sentence-transformers/all-mpnet-base-v2
Model Sources
- Repository: Imageomics/char-sim
Uses
Trait2Vec has been qualitatively evaluated on its ability to embed raw trait descriptions in a way that preserves the structure of an ontology. Accordingly, we expect it to provide an alternative computational representation of an organism's traits.
Direct Use
It can be used to embed the textual trait descriptions associated with an organism.
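For instance, the trait descriptions of two organisms can each be embedded and compared. A minimal sketch, where the trait strings for organism B and the mean-pooling of per-trait embeddings are illustrative choices rather than a prescribed protocol:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("imageomics/trait2vec")

# Trait descriptions per organism; organism A's strings appear in the
# dataset samples below, organism B's are made-up variants.
organism_a = [
    "Suprapreopercle: present",
    "carpals: fully or partially ossified",
]
organism_b = [
    "Suprapreopercle: absent",
    "carpals: unossified",
]

# Embed each trait description, then average to get one vector per organism.
emb_a = model.encode(organism_a).mean(axis=0)
emb_b = model.encode(organism_b).mean(axis=0)

# Cosine similarity between the two organism-level vectors.
similarity = model.similarity(emb_a, emb_b)
print(float(similarity))
```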
Bias, Risks, and Limitations
This model is fine-tuned from sentence-transformers/all-mpnet-base-v2 and therefore inherits its biases and risks. The training dataset (Character Similarity Dataset) is derived from a single similarity metric and a single ontology, so the embedding inherits that metric's inductive biases, coverage gaps, and evolving definitions. Biological conclusions may differ under alternative metrics (e.g., Resnik, Jaccard) or other phenotype ontologies.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
How to Get Started with the Model
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("imageomics/trait2vec")

# Run inference
sentences = [
    'Form of distal portion of anteroventral process of ectopterygoid: varyingly falcate',
    'Ventral ridge of the coracoid with a posterior process: absent',
    'carpals: fully or partially ossified',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
Training Details
Training Data
This model was trained on the Character Similarity Dataset.
- Size: 438,516 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

|  | sentence1 | sentence2 | score |
|---|---|---|---|
| type | string | string | float |
| details | min: 9 tokens, mean: 42.84 tokens, max: 164 tokens | min: 4 tokens, mean: 22.8 tokens, max: 164 tokens | min: 0.0, mean: 0.1, max: 0.61 |

- Samples:

| sentence1 | sentence2 | score |
|---|---|---|
| Gill raker shape between ceratobranchial 1 and ceratobranchials 2--4: Homomorphic | Extent of development of inferior lamella of lateral ethmoid: inferior lamella absent | 0.014706667500582846 |
| Gill raker shape between ceratobranchial 1 and ceratobranchials 2--4: Homomorphic | Shape of anal-fin pterygiophore tips: tips of pterygiophores shaped like an arrow-head; axial series of pterygiophores providing the ventral margin of the anal-fin base a scalloped appearance | 0.030538703023734296 |
| Gill raker shape between ceratobranchial 1 and ceratobranchials 2--4: Homomorphic | Suprapreopercle: present | 0.3385057414877959 |

- Loss: `CoSENTLoss` with parameters `{"scale": 20.0, "similarity_fct": "pairwise_cos_sim"}`
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- learning_rate: 2e-05
- num_train_epochs: 10
- warmup_ratio: 1e-06
- Training regime: fp32
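A minimal fine-tuning sketch under these settings. The dataset identifier, output directory, and split seed are illustrative assumptions (the released 80/20 split is described under Evaluation), and the model is assembled to mirror the architecture reported under Model Architecture and Objective:

```python
import torch.nn as nn
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    models,
)
from sentence_transformers.losses import CoSENTLoss

# Rebuild the reported architecture: MPNet encoder, CLS pooling,
# and a Dense projection from 768 to 256 dimensions with Tanh.
word = models.Transformer("sentence-transformers/all-mpnet-base-v2", max_seq_length=256)
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="cls")
dense = models.Dense(in_features=768, out_features=256, activation_function=nn.Tanh())
model = SentenceTransformer(modules=[word, pooling, dense])

# Hypothetical dataset identifier; the Character Similarity Dataset
# provides sentence1 / sentence2 / score triples. The split seed is
# illustrative, not the released split.
pairs = load_dataset("imageomics/character-similarity", split="train")
split = pairs.train_test_split(test_size=0.2, seed=42)

# CoSENTLoss with scale=20.0; pairwise cosine similarity is its default.
loss = CoSENTLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="trait2vec-checkpoints",  # illustrative
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=1e-6,
    fp16=False,  # fp32 training regime
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    loss=loss,
)
trainer.train()
```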
Evaluation
We tested Trait2Vec on a held-out split comprising 20% of the Character Similarity Dataset. The split was made at the pair level, and descriptor overlap between splits was not prevented, so individual trait descriptors may appear in both the training and evaluation data.
Testing Data, Factors & Metrics
Testing Data
- Size: 111,628 evaluation samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

|  | sentence1 | sentence2 | score |
|---|---|---|---|
| type | string | string | float |
| details | min: 9 tokens, mean: 17.19 tokens, max: 24 tokens | min: 3 tokens, mean: 21.97 tokens, max: 143 tokens | min: 0.0, mean: 0.1, max: 0.86 |

- Samples:

| sentence1 | sentence2 | score |
|---|---|---|
| Ventral humeral ridge: or not | Metacarpals, Metacarpal I, presence: absent | 0.05558851078197206 |
| Ventral humeral ridge: or not | Metapterygoid–quadrate fenestra: absent | 0.004860625129173212 |
| Ventral humeral ridge: or not | Dorsal and ventral borders of the maxillary articular process: straight or slightly curved ventrally | 0.10380567059620477 |

- Loss: `CoSENTLoss` with parameters `{"scale": 20.0, "similarity_fct": "pairwise_cos_sim"}`
Metrics
Semantic Similarity:
- Datasets: `pheno-dev` and `pheno-test`
- Evaluated with `EmbeddingSimilarityEvaluator`
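A minimal sketch of this evaluation; the three pairs are copied from the samples above and stand in for the full pheno-dev / pheno-test splits:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("imageomics/trait2vec")

# Tiny illustrative subset; the actual evaluation uses the held-out splits.
eval_pairs = {
    "sentence1": [
        "Ventral humeral ridge: or not",
        "Ventral humeral ridge: or not",
        "Ventral humeral ridge: or not",
    ],
    "sentence2": [
        "Metacarpals, Metacarpal I, presence: absent",
        "Metapterygoid–quadrate fenestra: absent",
        "Dorsal and ventral borders of the maxillary articular process: straight or slightly curved ventrally",
    ],
    "score": [0.05558851078197206, 0.004860625129173212, 0.10380567059620477],
}

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_pairs["sentence1"],
    sentences2=eval_pairs["sentence2"],
    scores=eval_pairs["score"],
    name="pheno-test",
)

# Reports Pearson and Spearman correlations between the cosine similarity
# of the embeddings and the reference semantic-similarity scores.
results = evaluator(model)
print(results)
```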
Results
| Metric | Validation set | Test set |
|---|---|---|
| pearson_cosine | 0.6082 | 0.6822 |
| spearman_cosine | 0.625 | 0.7057 |
Summary
Trait2Vec embeds organismal trait descriptors in a way that preserves part of the ranking structure induced by the ontology-based semantic similarity metric.
Environmental Impact
Experiments were conducted using private infrastructure with a carbon efficiency of 0.432 kgCO$_2$eq/kWh. A cumulative 20 hours of computation was performed on hardware of type A100 PCIe 40/80GB (TDP of 250 W).
Total emissions are estimated to be 2.16 kgCO$_2$eq, of which 0% was directly offset.
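As a check, this estimate follows directly from the reported numbers: 250 W × 20 h = 5 kWh, and 5 kWh × 0.432 kgCO$_2$eq/kWh = 2.16 kgCO$_2$eq.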
Estimates were made using the Machine Learning Impact calculator presented in:
```bibtex
@article{lacoste2019quantifying,
  title   = {Quantifying the Carbon Emissions of Machine Learning},
  author  = {Lacoste, Alexandre and Luccioni, Alexandra and Schmidt, Victor and Dandres, Thomas},
  journal = {arXiv preprint arXiv:1910.09700},
  year    = {2019}
}
```
Model Architecture and Objective
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```

- Loss: `CoSENTLoss` with parameters `{"scale": 20.0, "similarity_fct": "pairwise_cos_sim"}`
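The Dense head projects the 768-dimensional pooled MPNet representation down to 256 dimensions, which is why `model.encode` returns 256-dimensional vectors. A quick sanity check:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("imageomics/trait2vec")
print(model.get_sentence_embedding_dimension())  # 256, after the Dense projection
print(model.max_seq_length)                      # 256 tokens; longer inputs are truncated
```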
Software
- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.48.1
- PyTorch: 2.5.1.post303
- Accelerate: 1.3.0
- Datasets: 2.14.4
- Tokenizers: 0.21.0
Citation
If you use this model in your research, please cite both it and the source model & method from which it was fine-tuned:
Model
```bibtex
@software{trait2vec2025,
  author  = {Juan Garcia and Soumyashree Kar and Jim Balhoff and Hilmar Lapp},
  doi     = {10.57967/hf/6892},
  title   = {Trait2Vec (Revision f39747b)},
  version = {1.0.0},
  year    = {2025},
  url     = {https://huggingface.co/imageomics/trait2vec}
}
```
Source Model & Method
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
  title     = {Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author    = {Reimers, Nils and Gurevych, Iryna},
  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing},
  month     = {11},
  year      = {2019},
  pages     = {3982-3992},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/D19-1410/},
  doi       = {10.18653/v1/D19-1410}
}
```
CoSENTLoss
```bibtex
@online{kexuefm-8847,
  title  = {CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
  author = {Su Jianlin},
  year   = {2022},
  month  = {Jan},
  url    = {https://kexue.fm/archives/8847}
}
```
Acknowledgements
This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Model Card Authors
Juan Garcia
Model Card Contact
Please open a Discussion on the Community Tab with any questions on the model.