Improve dataset card: Add paper/GitHub links, abstract, update categories and citation
This pull request significantly improves the dataset card for the collection of biomedical image classification datasets associated with the "Singular Value Few-shot Adaptation of Vision-Language Models" paper.
Key changes include:
- Adding a descriptive top-level title.
- Providing direct links to the associated paper ([Singular Value Few-shot Adaptation of Vision-Language Models](https://huggingface.co/papers/2509.03740)) and the GitHub repository ([https://github.com/HealthX-Lab/CLIP-SVD](https://github.com/HealthX-Lab/CLIP-SVD)).
- Including the paper abstract for comprehensive context regarding the dataset's use.
- Updating the `task_categories` in the metadata to `zero-shot-image-classification` and `image-to-text` to better reflect the capabilities and domain of Vision-Language Models described in the paper.
- Adding `vision-language-model` and `few-shot-learning` as additional tags for improved discoverability.
- Consolidating the citation section to include both the "Singular Value Few-shot Adaptation of Vision-Language Models" paper and the "BiomedCoOp" paper, with updated years as per the GitHub repository.
- Removing the generic `# Introduction` header as the new structure provides better flow.
---
language:
- en
license: apache-2.0
task_categories:
- zero-shot-image-classification
- image-to-text
tags:
- medical
- biology
- vision-language-model
- few-shot-learning
---

# Singular Value Few-shot Adaptation of Vision-Language Models (CLIP-SVD) Datasets

[Paper](https://huggingface.co/papers/2509.03740) | [Code](https://github.com/HealthX-Lab/CLIP-SVD)

## Abstract

Vision-language models (VLMs) like CLIP have shown impressive zero-shot and few-shot learning capabilities across diverse applications. However, adapting these models to new fine-grained domains remains difficult due to reliance on prompt engineering and the high cost of full model fine-tuning. Existing adaptation approaches rely on augmented components, such as prompt tokens and adapter modules, which could limit adaptation quality, destabilize the model, and compromise the rich knowledge learned during pretraining. In this work, we present **CLIP-SVD**, a novel *multi-modal* and *parameter-efficient* adaptation technique that leverages Singular Value Decomposition (SVD) to modify the internal parameter space of CLIP without injecting additional modules. Specifically, we fine-tune only the singular values of the CLIP parameter matrices to rescale the basis vectors for domain adaptation while retaining the pretrained model. This design enables enhanced adaptation performance using only **0.04%** of the model's total parameters and better preservation of its generalization ability. CLIP-SVD achieves state-of-the-art classification results on 11 natural and 10 biomedical datasets, outperforming previous methods in both accuracy and generalization under few-shot settings. Additionally, we leverage a natural language-based approach to analyze the effectiveness and dynamics of the CLIP adaptation to allow interpretability of CLIP-SVD.
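
The adaptation idea sketched in the abstract can be illustrated with a minimal NumPy example. This is not the authors' implementation (`svd_rescale` and `delta` are illustrative names): each weight matrix is decomposed with SVD, and only the singular values are rescaled while the singular bases learned during pretraining stay fixed.

```python
import numpy as np

def svd_rescale(W: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Rescale only the singular values of a weight matrix.

    W = U @ diag(S) @ Vt; the trainable part is just S (rescaled here as
    S * (1 + delta)), while the singular bases U and Vt -- the knowledge
    learned during pretraining -- remain frozen.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(S * (1.0 + delta)) @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))

# A zero update leaves the matrix unchanged (up to floating-point error),
# and the update itself has only min(8, 4) = 4 parameters per matrix.
print(np.allclose(W, svd_rescale(W, np.zeros(4))))  # True
```

This tiny per-matrix parameter count is what gives the method its 0.04%-of-total-parameters footprint.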

These are the 11 biomedical image classification datasets used and analyzed in the paper "Singular Value Few-shot Adaptation of Vision-Language Models". For ease of management, place all of them in a single `data/` directory, so the file structure looks like:

```
data/
...
```

## Datasets Description

| **Modality** | **Organ(s)** | **Name** | **Classes** | **# train/val/test** |
|:---:|:---:|:---:|:---:|:---:|
| Computerized Tomography | Kidney | [CTKidney](https://www.kaggle.com/datasets/nazmul0087/ct-kidney-dataset-normal-cyst-tumor-and-stone) | Kidney Cyst, Kidney Stone, Kidney Tumor, Normal Kidney | 6221/2487/3738 |
| Dermatoscopy | Skin | [DermaMNIST](https://medmnist.com/) | Actinic Keratosis, Basal Cell Carcinoma, Benign Keratosis, Dermatofibroma, Melanocytic Nevus, Melanoma, Vascular Lesion | 7007/1003/2005 |
| Endoscopy | Colon | [Kvasir](https://www.kaggle.com/datasets/abdallahwagih/kvasir-dataset-for-classification-and-segmentation) | Dyed Lifted Polyps, Normal Cecum, Esophagitis, Dyed Resection Margins, Normal Pylorus, Normal Z Line, Polyps, Ulcerative Colitis | 2000/800/1200 |
| Fundus Photography | Retina | [RETINA](https://www.kaggle.com/datasets/gunavenkatdoddi/eye-diseases-classification) | Cataract, Diabetic Retinopathy, Glaucoma, Normal Retina | 2108/841/1268 |
| Histopathology | Lung, Colon | [LC25000](https://www.kaggle.com/datasets/andrewmvd/lung-and-colon-cancer-histopathological-images) | Colon Adenocarcinoma, Colon Benign Tissue, Lung Adenocarcinoma, Lung Benign Tissue, Lung Squamous Cell Carcinoma | 12500/5000/7500 |
| Histopathology | Colorectal | [CHMNIST](https://www.kaggle.com/datasets/kmader/colorectal-histology-mnist) | Adipose Tissue, Complex Stroma, Debris, Empty Background, Immune Cells, Normal Mucosal Glands, Simple Stroma, Tumor Epithelium | 2496/1000/1504 |
| Magnetic Resonance Imaging | Brain | [BTMRI](https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset) | Glioma Tumor, Meningioma Tumor, Normal Brain, Pituitary Tumor | 2854/1141/1717 |
| Optical Coherence Tomography | Retina | [OCTMNIST](https://medmnist.com/) | Choroidal Neovascularization, Drusen, Diabetic Macular Edema, Normal | 97477/10832/1000 |
| Ultrasound | Breast | [BUSI](https://www.kaggle.com/datasets/aryashah2k/breast-ultrasound-images-dataset) | Benign Tumors, Malignant Tumors, Normal Scans | 389/155/236 |
| X-Ray | Chest | [COVID-QU-Ex](https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database) | COVID-19, Lung Opacity, Normal Lungs, Viral Pneumonia | 10582/4232/6351 |
| X-Ray | Knee | [KneeXray](https://www.kaggle.com/datasets/shashwatwork/knee-osteoarthritis-dataset-with-severity) | No, Doubtful, Minimal, Moderate, and Severe Osteoarthritis | 5778/826/1656 |

### Download the datasets

All the datasets can be found [here](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/tree/main) on HuggingFace. Download each dataset separately:

- **BTMRI** [[Drive](https://drive.google.com/file/d/1_lJLZRUmczqZqoN-dNqkAzGzmi4ONoU5/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/BTMRI.zip)]
- **BUSI** [[Drive](https://drive.google.com/file/d/1hB5M7wcAUTV9EtiYrijACoQ36R6VmQaa/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/BUSI.zip)]
- **CHMNIST** [[Drive](https://drive.google.com/file/d/1tyQiYQmqAGNaY4SCK_8U5vEbbaa1AD-g/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/CHMNIST.zip)]
- **COVID_19** [[Drive](https://drive.google.com/file/d/1zMLN5q5e_tmH-deSZQiY4Xq0M1EqCrML/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/COVID_19.zip)]
- **CTKidney** [[Drive](https://drive.google.com/file/d/1PBZ299k--mZL8JU7nhC1Wy8yEmlqmVDh/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/CTKidney.zip)]
- **DermaMNIST** [[Drive](https://drive.google.com/file/d/1Jxd1-DWljunRDZ8fY80dl5zUMefriQXt/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/DermaMNIST.zip)]
- **KneeXray** [[Drive](https://drive.google.com/file/d/1DBVraYJmxy2UcQ_nGLYvTB2reITOm453/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/KneeXray.zip)]
- **Kvasir** [[Drive](https://drive.google.com/file/d/1T_cqnNIjmGazNeg6gziarvCNWGsFEkRi/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/Kvasir.zip)]
- **LungColon** [[Drive](https://drive.google.com/file/d/1YIu5fqMXgyemisiL1L1HCvES2nVpCtun/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/LungColon.zip)]
- **OCTMNIST** [[Drive](https://drive.google.com/file/d/1mYZNWxbPxnnVvcwHQYybA8gdMzQAoOem/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/OCTMNIST.zip)]
- **RETINA** [[Drive](https://drive.google.com/file/d/18U-Gc22h5QryomNNzY4r4Qfrq52yf5EO/view?usp=sharing) | [HuggingFace](https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main/RETINA.zip)]
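
The per-dataset HuggingFace links above all follow a single URL pattern, so scripted downloads are straightforward. The sketch below only builds those URLs (the names mirror the download list); the actual fetch, e.g. with `urllib.request.urlretrieve` or the `huggingface_hub` library, is left to the reader.

```python
# Direct-download URL pattern for the archives hosted in the
# TahaKoleilat/BiomedCoOp dataset repo (see the links above).
BASE_URL = "https://huggingface.co/datasets/TahaKoleilat/BiomedCoOp/resolve/main"

DATASETS = ["BTMRI", "BUSI", "CHMNIST", "COVID_19", "CTKidney", "DermaMNIST",
            "KneeXray", "Kvasir", "LungColon", "OCTMNIST", "RETINA"]

def zip_url(name: str) -> str:
    """Direct download URL for one dataset archive."""
    return f"{BASE_URL}/{name}.zip"

for name in DATASETS:
    print(zip_url(name))
```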

After downloading each dataset, unzip it and place it under its respective directory, like the following (BTMRI shown, intermediate entries elided):

```
BTMRI/
...
|   |–– pituitary_tumor/
|–– split_BTMRI.json
```

For installation, data preparation, and training/evaluation instructions for the associated research, please refer to the [GitHub repository](https://github.com/HealthX-Lab/CLIP-SVD).
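
Once everything is unzipped, a quick sanity check can confirm the expected `data/` layout. This is a hypothetical helper, not part of the official tooling; the dataset names mirror the download list above.

```python
import os

# Dataset names, as in the download list above.
DATASETS = ["BTMRI", "BUSI", "CHMNIST", "COVID_19", "CTKidney", "DermaMNIST",
            "KneeXray", "Kvasir", "LungColon", "OCTMNIST", "RETINA"]

def missing_datasets(root: str = "data") -> list[str]:
    """Return the datasets whose directory is absent under `root`."""
    return [name for name in DATASETS
            if not os.path.isdir(os.path.join(root, name))]

# Lists anything still to be downloaded/unzipped.
print(missing_datasets())
```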

## Citation

If you use our work or these datasets, please consider citing:

```bibtex
@article{koleilat2025singular,
  title={Singular Value Few-shot Adaptation of Vision-Language Models},
  author={Koleilat, Taha and Rivaz, Hassan and Xiao, Yiming},
  journal={arXiv preprint arXiv:2509.03740},
  year={2025}
}

@inproceedings{koleilat2025biomedcoop,
  title={{BiomedCoOp}: Learning to Prompt for Biomedical Vision-Language Models},
  author={Koleilat, Taha and Asgariandehkordi, Hojat and Rivaz, Hassan and Xiao, Yiming},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={14766--14776},
  year={2025}
}
```