# october-finetuning-more-variables-sweep-20251012-192013-t00

**Slur reclamation binary classifier.**
Task: distinguishing reclaimed from non-reclaimed uses of harmful words targeting LGBTQ+ people in social-media text.

Trial timestamp (UTC): 2025-10-12 19:20:13

Data case: en-es-it

## Configuration (trial hyperparameters)

Model: `Alibaba-NLP/gte-multilingual-base`

| Hyperparameter | Value |
|---|---|
| LANGUAGES | en-es-it |
| LR | 3e-05 |
| EPOCHS | 3 |
| MAX_LENGTH | 256 |
| USE_BIO | False |
| USE_LANG_TOKEN | False |
| GATED_BIO | False |
| FOCAL_LOSS | True |
| FOCAL_GAMMA | 2.5 |
| USE_SAMPLER | False |
| R_DROP | True |
| R_KL_ALPHA | 1.0 |
| TEXT_NORMALIZE | True |
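The two non-default training options are focal loss (`FOCAL_GAMMA=2.5`) and R-Drop (`R_KL_ALPHA=1.0`). The training code is not published, so the following is only a minimal sketch of how these two losses are typically combined in PyTorch; the function names and reductions are assumptions:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.5):
    """Binary focal loss over 2-class logits: (1 - p_t)^gamma scaled cross-entropy."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)  # model probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()

def r_drop_loss(logits1, logits2, targets, alpha=1.0, gamma=2.5):
    """R-Drop: average the task loss over two dropout passes of the same batch,
    plus a symmetric KL term that pulls the two predictive distributions together."""
    task = 0.5 * (focal_loss(logits1, targets, gamma) + focal_loss(logits2, targets, gamma))
    logp, logq = F.log_softmax(logits1, dim=-1), F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(logp, logq, log_target=True, reduction="batchmean")
                + F.kl_div(logq, logp, log_target=True, reduction="batchmean"))
    return task + alpha * kl
```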

## Dev set results (summary)

| Metric | Value |
|---|---|
| f1_macro_dev_0.5 | 0.7473743435858965 |
| f1_weighted_dev_0.5 | 0.8781037130106579 |
| accuracy_dev_0.5 | 0.8797327394209354 |
| f1_macro_dev_best_global | 0.7473743435858965 |
| f1_weighted_dev_best_global | 0.8781037130106579 |
| accuracy_dev_best_global | 0.8797327394209354 |
| f1_macro_dev_best_by_lang | 0.7270667550839964 |
| f1_weighted_dev_best_by_lang | 0.8596133661796822 |
| accuracy_dev_best_by_lang | 0.8530066815144766 |
| default_threshold | 0.5 |
| best_threshold_global | 0.5 |
| thresholds_by_lang | {"en": 0.35, "it": 0.5, "es": 0.5} |

## Thresholds

- Default: 0.5
- Best global: 0.5
- Best by language: `{"en": 0.35, "it": 0.5, "es": 0.5}` (selection procedure sketched below)
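The exact sweep used to pick these thresholds is not published; a minimal sketch of the common approach, maximizing macro-F1 on the Dev set over a fixed grid (grid spacing and the `dev_probs`/`dev_labels`/`langs` arrays are assumptions):

```python
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(probs, labels, grid=np.arange(0.05, 0.96, 0.05)):
    """Return the grid threshold that maximizes macro-F1."""
    scores = [f1_score(labels, (probs >= t).astype(int), average="macro") for t in grid]
    return float(grid[int(np.argmax(scores))])

# Global threshold over the whole Dev set; per-language over each language's subset:
# th_global = best_threshold(dev_probs, dev_labels)
# th_by_lang = {lg: best_threshold(dev_probs[langs == lg], dev_labels[langs == lg])
#               for lg in np.unique(langs)}
```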

## Detailed evaluation

### Classification report @ 0.5

```text
              precision    recall  f1-score   support

 no-recl (0)     0.9254    0.9351    0.9302       385
    recl (1)     0.5833    0.5469    0.5645        64

    accuracy                         0.8797       449
   macro avg     0.7544    0.7410    0.7474       449
weighted avg     0.8767    0.8797    0.8781       449
```

### Classification report @ best global threshold (t=0.50)

```text
              precision    recall  f1-score   support

 no-recl (0)     0.9254    0.9351    0.9302       385
    recl (1)     0.5833    0.5469    0.5645        64

    accuracy                         0.8797       449
   macro avg     0.7544    0.7410    0.7474       449
weighted avg     0.8767    0.8797    0.8781       449
```

### Classification report @ best per-language thresholds

```text
              precision    recall  f1-score   support

 no-recl (0)     0.9322    0.8935    0.9125       385
    recl (1)     0.4875    0.6094    0.5417        64

    accuracy                         0.8530       449
   macro avg     0.7099    0.7514    0.7271       449
weighted avg     0.8689    0.8530    0.8596       449
```

### Per-language metrics (at best-by-language thresholds)

| lang | n | acc | f1_macro | f1_weighted | prec_macro | rec_macro | prec_weighted | rec_weighted |
|---|---|---|---|---|---|---|---|---|
| en | 154 | 0.8052 | 0.5497 | 0.8316 | 0.5451 | 0.5794 | 0.8652 | 0.8052 |
| it | 163 | 0.9141 | 0.8606 | 0.9141 | 0.8606 | 0.8606 | 0.9141 | 0.9141 |
| es | 132 | 0.8333 | 0.7000 | 0.8394 | 0.6875 | 0.7170 | 0.8472 | 0.8333 |

## Data

- Train/Dev: private multilingual splits with a ~15% Dev set, stratified jointly by (lang, label); see the sketch below.
- Source: merged EN/IT/ES data with user bios retained (ignored when the model does not use them, as here with `USE_BIO=False`).
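The split code is not published; a minimal sketch of a (lang, label)-stratified split with scikit-learn, where `texts`, `langs`, and `labels` are the columns of the merged corpus and the seed is an assumption:

```python
from sklearn.model_selection import train_test_split

def stratified_dev_split(texts, langs, labels, dev_frac=0.15, seed=0):
    # Build a joint (lang, label) key so every language keeps its
    # label balance in the Dev split.
    keys = [f"{lg}|{y}" for lg, y in zip(langs, labels)]
    return train_test_split(range(len(texts)), test_size=dev_frac,
                            stratify=keys, random_state=seed)
```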

## Usage

```python
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
import torch

repo = "SimoneAstarita/october-finetuning-more-variables-sweep-20251012-192013-t00"
tok = AutoTokenizer.from_pretrained(repo)
cfg = AutoConfig.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

texts = ["example text ..."]
langs = ["en"]  # one language code per text; used only in "by_lang" mode

mode = "best_global"  # or "0.5", "by_lang"

enc = tok(texts, truncation=True, padding=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
probs = torch.softmax(logits, dim=-1)[:, 1].cpu().numpy()  # P(reclamation)

if mode == "0.5":
    preds = (probs >= 0.5).astype(int)
elif mode == "best_global":
    th = getattr(cfg, "best_threshold_global", 0.5)
    preds = (probs >= th).astype(int)
elif mode == "by_lang":
    th_by_lang = getattr(cfg, "thresholds_by_lang", {})
    fallback = getattr(cfg, "best_threshold_global", 0.5)
    langs_arr = np.array(langs)
    preds = np.zeros_like(probs, dtype=int)
    for lg in np.unique(langs_arr):
        mask = langs_arr == lg
        preds[mask] = (probs[mask] >= th_by_lang.get(lg, fallback)).astype(int)

print(list(zip(texts, preds, probs)))
```

## Additional files

- `reports.json`: all metrics (macro/weighted/accuracy) at @0.5, @best_global, and @best_by_lang.
- `config.json`: stores the thresholds (`default_threshold`, `best_threshold_global`, `thresholds_by_lang`).
- `postprocessing.json`: duplicates the threshold info for external tools.
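For external tools that do not load the transformers config, the thresholds can be read directly from `postprocessing.json`. A minimal sketch, assuming that file mirrors the threshold keys listed above (its exact schema is not shown here):

```python
import json
from huggingface_hub import hf_hub_download

repo = "SimoneAstarita/october-finetuning-more-variables-sweep-20251012-192013-t00"
path = hf_hub_download(repo_id=repo, filename="postprocessing.json")
with open(path) as f:
    post = json.load(f)

# Assumed keys, matching config.json: default_threshold,
# best_threshold_global, thresholds_by_lang.
print(post.get("best_threshold_global", 0.5), post.get("thresholds_by_lang", {}))
```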
