- **Fine-tuned model:** QATCForQuestionAnswering
- **Supported Language:** Vietnamese
- **Task:** Extractive QA, Evidence Extraction
- **Dataset:** [ViWikiFC](https://arxiv.org/abs/2405.07615)

QATCForQuestionAnswering uses XLM-RoBERTa as its pre-trained language model. We extend it with a token-classification mechanism, so the model not only predicts answer spans but also classifies tokens as part of rationale selection. During training, we introduce a Rationale Regularization Loss consisting of sparsity and continuity constraints, which encourages precise and interpretable token-level predictions: the model learns to identify the relevant rationale tokens while keeping the selection coherent, as sketched below.
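
The following is a minimal PyTorch sketch of this setup, not the released implementation. The class name `QATCSketch`, the head names, and the loss weights `lambda_sparsity` and `lambda_continuity` are illustrative assumptions; only the XLM-RoBERTa backbone, the two heads (span prediction and token classification), and the sparsity/continuity regularizers follow the description above.

```python
import torch
import torch.nn as nn
from transformers import XLMRobertaModel


class QATCSketch(nn.Module):
    """Span-prediction QA head plus a token-level rationale head on XLM-RoBERTa."""

    def __init__(self, model_name="xlm-roberta-large",
                 lambda_sparsity=0.1, lambda_continuity=0.1):
        super().__init__()
        self.encoder = XLMRobertaModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.qa_outputs = nn.Linear(hidden, 2)      # start/end logits for the answer span
        self.rationale_head = nn.Linear(hidden, 1)  # per-token rationale score
        self.lambda_sparsity = lambda_sparsity      # assumed weight, not from the model card
        self.lambda_continuity = lambda_continuity  # assumed weight, not from the model card

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        start_logits, end_logits = self.qa_outputs(hidden).split(1, dim=-1)
        rationale_probs = torch.sigmoid(self.rationale_head(hidden)).squeeze(-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1), rationale_probs

    def rationale_regularization(self, rationale_probs, attention_mask):
        """Sparsity + continuity constraints on rationale token probabilities."""
        mask = attention_mask.float()
        lengths = mask.sum(-1).clamp(min=1)
        # Sparsity: keep the expected fraction of selected tokens small.
        sparsity = (rationale_probs * mask).sum(-1) / lengths
        # Continuity: penalize jumps between adjacent probabilities so the
        # selected rationale tokens form coherent, contiguous spans.
        pair_mask = mask[:, 1:] * mask[:, :-1]
        continuity = ((rationale_probs[:, 1:] - rationale_probs[:, :-1]).abs()
                      * pair_mask).sum(-1) / lengths
        return (self.lambda_sparsity * sparsity
                + self.lambda_continuity * continuity).mean()
```

In training, this regularizer would be added to the usual cross-entropy losses over the start/end positions and the rationale token labels; the exact loss weighting used for the released checkpoint is not stated here.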

---