---
license: cc-by-4.0
tags:
- radiomics
- medical-imaging
- vision-transformer
- dino
- dinov2
- feature-extraction
- foundation-model
library_name: timm
datasets:
- medmnist
- radimagenet
- BUSI
pipeline_tag: feature-extraction
model-index:
- name: RadioDINO-b16
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: BreastMNIST
      type: BreastMNIST
    metrics:
    - type: F1
      value: 87.69
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: PneumoniaMNIST
      type: PneumoniaMNIST
    metrics:
    - type: F1
      value: 93.29
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: OrganAMNIST
      type: OrganAMNIST
    metrics:
    - type: F1
      value: 97.20
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: OrganCMNIST
      type: OrganCMNIST
    metrics:
    - type: F1
      value: 94.57
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: OrganSMNIST
      type: OrganSMNIST
    metrics:
    - type: F1
      value: 78.15
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: BUSI
      type: BUSI
    metrics:
    - type: F1
      value: 91.73
---
# RadioDINO-b16
**RadioDINO-b16** is a self-supervised Vision Transformer foundation model for radiomics and medical imaging. It is based on the DINO framework and pretrained on the large-scale **RadImageNet** dataset (1.35 million CT, MRI, and ultrasound images spanning 165 classes and 11 anatomical regions). The model is part of the *Radio DINO* family and is designed to extract robust, general-purpose features for downstream medical tasks, including classification, segmentation, and interpretability analysis.

Unlike traditional radiomics pipelines that rely on handcrafted features or on supervised models pretrained on natural images, RadioDINO-b16 offers a domain-adapted alternative that consistently outperforms previous models on diverse medical benchmarks. It has been validated on the MedMNISTv2 benchmark suite and shown to be effective even without fine-tuning.
> 🧠 Developed by [Luca Zedda](https://orcid.org/0009-0001-8488-1612), [Andrea Loddo](https://orcid.org/0000-0002-6571-3816), and [Cecilia Di Ruberto](https://orcid.org/0000-0003-4641-0307)
> 🏥 Department of Mathematics and Computer Science, University of Cagliari
> 📄 Published in: Computers in Biology and Medicine, 2025
---
## Model Details
- **Architecture:** ViT-base with patch size 16 (`b16`)
- **SSL framework:** DINO (self-distillation without labels)
- **Pretraining dataset:** RadImageNet (1.35M CT/MRI/Ultrasound images)
- **Embedding size:** 768
- **Applications:** Feature extraction, classification backbones, transfer learning, medical imaging analysis
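As a quick illustration of the classification-backbone / transfer-learning use case listed above, the following is a minimal sketch rather than the training setup from the paper: the `RadioDinoClassifier` wrapper, the frozen-backbone default, and the 3-class example are illustrative assumptions. It relies on timm's standard `num_classes=0` override to expose the pooled 768-dimensional embedding instead of logits.
```python
import torch
import torch.nn as nn
import timm

class RadioDinoClassifier(nn.Module):
    """Illustrative wrapper: RadioDINO-b16 backbone plus a linear classification head."""
    def __init__(self, n_classes: int, freeze_backbone: bool = True):
        super().__init__()
        # num_classes=0 asks timm for pooled features (768-dim for ViT-Base) instead of logits
        self.backbone = timm.create_model(
            "hf_hub:Snarcy/RadioDino-b16", pretrained=True, num_classes=0
        )
        if freeze_backbone:
            for p in self.backbone.parameters():
                p.requires_grad = False
        self.head = nn.Linear(self.backbone.num_features, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

# Hypothetical 3-class example (e.g., normal / benign / malignant)
model = RadioDinoClassifier(n_classes=3)
logits = model(torch.randn(2, 3, 224, 224))  # -> shape (2, 3)
```
With the backbone frozen, only the linear head is trained; setting `freeze_backbone=False` fine-tunes the whole network.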
## Example Usage
```python
from PIL import Image
from torchvision import transforms
import timm
import torch

# Load the model from the Hugging Face Hub
model = timm.create_model("hf_hub:Snarcy/RadioDino-b16", pretrained=True)
model.eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Load and preprocess a sample image
image = Image.open("path/to/your/image").convert("RGB")
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
input_tensor = transform(image).unsqueeze(0).to(device)

# Forward pass to obtain the feature embedding
with torch.no_grad():
    embedding = model(input_tensor)
```
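Because the embeddings are useful even without fine-tuning, a common follow-up is a linear probe on frozen features. The sketch below is illustrative and not the evaluation protocol from the paper: it reuses `model`, `transform`, and `device` from the snippet above, and `train_set` / `test_set` are hypothetical iterables of `(PIL image, integer label)` pairs that you would supply.
```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def extract_features(samples, batch_size=32):
    """Run the frozen backbone over (image, label) pairs and collect embeddings."""
    loader = torch.utils.data.DataLoader(
        [(transform(img), label) for img, label in samples],
        batch_size=batch_size,
    )
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(model(x.to(device)).cpu().numpy())
            labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# train_set / test_set are placeholders for your own dataset splits
X_train, y_train = extract_features(train_set)
X_test, y_test = extract_features(test_set)

# Fit a linear probe on frozen RadioDINO-b16 embeddings
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("linear-probe accuracy:", probe.score(X_test, y_test))
```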
## 📝 Citation
If you use this model, please cite the following paper:
**Radio DINO: A foundation model for advanced radiomics and AI-driven medical imaging analysis**
Luca Zedda, Andrea Loddo, Cecilia Di Ruberto
Computers in Biology and Medicine, Volume 195, 2025, 110583
[https://doi.org/10.1016/j.compbiomed.2025.110583](https://doi.org/10.1016/j.compbiomed.2025.110583)
```bibtex
@article{ZEDDA2025110583,
  title    = {Radio DINO: A foundation model for advanced radiomics and AI-driven medical imaging analysis},
  author   = {Luca Zedda and Andrea Loddo and Cecilia {Di Ruberto}},
  journal  = {Computers in Biology and Medicine},
  volume   = {195},
  pages    = {110583},
  year     = {2025},
  issn     = {0010-4825},
  doi      = {10.1016/j.compbiomed.2025.110583},
  url      = {https://www.sciencedirect.com/science/article/pii/S0010482525009345},
  keywords = {Radiomics, Self-supervised learning, Deep learning, DINO, DINOV2, Medical imaging, Feature extraction, Generalizability},
}
```