model update

README.md CHANGED

@@ -2,7 +2,7 @@
 datasets:
 - relbert/semeval2012_relational_similarity
 model-index:
-- name: relbert/relbert-roberta-base-nce-
+- name: relbert/relbert-roberta-base-nce-semeval2012-average
   results:
   - task:
       name: Relation Mapping
@@ -186,11 +186,11 @@ model-index:
       value: 0.8731443734393283
 
 ---
-# relbert/relbert-roberta-base-nce-
+# relbert/relbert-roberta-base-nce-semeval2012-average
 
 RelBERT based on [roberta-base](https://huggingface.co/roberta-base) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning).
 This model achieves the following results on the relation understanding tasks:
-- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-
+- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-average/raw/main/analogy.forward.json)):
     - Accuracy on SAT (full): 0.5909090909090909
     - Accuracy on SAT: 0.599406528189911
     - Accuracy on BATS: 0.6864924958310172
@@ -200,13 +200,13 @@ This model achieves the following results on the relation understanding tasks:
     - Accuracy on ConceptNet Analogy: 0.39429530201342283
     - Accuracy on T-Rex Analogy: 0.6557377049180327
     - Accuracy on NELL-ONE Analogy: 0.6383333333333333
-- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-
+- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-average/raw/main/classification.json)):
     - Micro F1 score on BLESS: 0.8998041283712521
     - Micro F1 score on CogALexV: 0.8272300469483568
     - Micro F1 score on EVALution: 0.6462621885157096
     - Micro F1 score on K&H+N: 0.9412951241566391
     - Micro F1 score on ROOT09: 0.8752742087120025
-- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-
+- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-average/raw/main/relation_mapping.json)):
     - Accuracy on Relation Mapping: 0.8007341269841269
 
 
@@ -218,7 +218,7 @@ pip install relbert
 and activate model as below.
 ```python
 from relbert import RelBERT
-model = RelBERT("relbert/relbert-roberta-base-nce-
+model = RelBERT("relbert/relbert-roberta-base-nce-semeval2012-average")
 vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
 ```
 
@@ -242,7 +242,7 @@ vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
 - loss_function_config: {'temperature': 0.05, 'num_negative': 400, 'num_positive': 10}
 - augment_negative_by_positive: True
 
-See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-base-nce-
+See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-average/raw/main/finetuning_config.json).
 
 ### Reference
 If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.emnlp-main.712/).