---
task_categories:
- image-classification
language:
- en
pretty_name: RSFAKE-1M
size_categories:
- 100K<n<1M
license: cc-by-nc-4.0
---

# RSFAKE-1M: A Large-Scale Dataset for Detecting Diffusion-Generated Remote Sensing Forgeries

## Dataset Summary

**RSFAKE-1M** is a large-scale dataset designed to advance the detection of forged remote sensing images, particularly those generated by diffusion models. It contains **1 million images** in total: **500K real** and **500K fake**. The forged images are produced by **10 different diffusion models** fine-tuned on remote sensing data, spanning **six generation conditions**, including text-to-image, structure-guided generation, and inpainting.

Remote sensing imagery plays a vital role in areas such as environmental monitoring, urban planning, and national security. However, with the rapid development of generative models, especially diffusion-based architectures, remote sensing images are increasingly vulnerable to realistic forgeries. Despite this, most existing benchmarks focus on GAN-based or natural-image forgeries, leaving a critical gap in the remote sensing domain.

RSFAKE-1M addresses this gap by offering a comprehensive benchmark for training and evaluating forgery detection models under realistic and diverse conditions. Extensive experiments in our accompanying paper demonstrate that:

- Current state-of-the-art detectors struggle with diffusion-generated forgeries in remote sensing.
- Training on RSFAKE-1M significantly improves generalization and robustness across different forgery types.

We believe RSFAKE-1M serves as a solid foundation for the development of next-generation remote sensing forgery detection algorithms.

## Dataset Structure

```
RSFAKE/
├── FAKE/
│   ├── generated_crsdiff
│   ├── generated_diffusion_sat
│   ├── generated_diffusion_sat_256
│   ├── generated_geosynth
│   ├── generated_geosynth_canny
│   ├── generated_geosynth_sam
│   ├── generated_mapsat
│   ├── generated_rsinpaint
│   ├── generated_RSSD_768
│   └── generated_SDFRS
│
├── REAL/
│   └── fmow/
│       ├── train/
│       ├── val/
│       └── test/
│
└── SPLIT/
    ├── RSFAKE_train_new.csv
    ├── RSFAKE_val_new.csv
    └── RSFAKE_test_new.csv
```
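
The `SPLIT/` CSV files define the official train/val/test partitions. Below is a minimal sketch of how one might inspect a split file and derive binary real/fake labels from the layout above; the CSV schema is not documented on this card, so the path column is discovered heuristically rather than assumed.

```python
import pandas as pd

# Load one split file (path assumed relative to the dataset root).
df = pd.read_csv("RSFAKE/SPLIT/RSFAKE_train_new.csv")
print(df.columns.tolist())  # inspect the actual schema first

# Heuristic: take the first column whose values look like file paths.
path_col = next(
    col for col in df.columns
    if df[col].astype(str).str.contains(r"[/\\]", regex=True).any()
)

# Derive a binary label from the top-level folder (FAKE/ vs. REAL/),
# following the directory structure shown above.
df["is_fake"] = (
    df[path_col].astype(str).str.contains("FAKE", case=False).astype(int)
)

print(df["is_fake"].value_counts())
```

Because the split files' schema is not specified here, keying the label off the top-level `FAKE/`/`REAL/` prefix keeps the sketch independent of column naming; adapt it once you have inspected the actual columns.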

### Real Image Construction

The **real image subset** is reconstructed from the publicly available [fMoW dataset](https://github.com/fMoW/dataset.git). To reproduce the real subset:

1. Download the original fMoW-rgb dataset to the `REAL/fmow_process/` directory.
2. Prepare the environment:

   ```bash
   pip install pillow==11.2.1 pandas==2.2.3 tqdm==4.67.1
   ```

3. Run the cropping script:

   ```bash
   cd REAL/fmow_process/
   python crop.py
   ```

The processed output will be structured into `train`, `val`, and `test` under `REAL/fmow/`.

These steps ensure that the real image set used for RSFAKE-1M evaluation is consistent and reproducible, while respecting the original data source's license.
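
After the cropping step completes, a quick sanity check can confirm that the reproduced real subset matches the expected layout. The sketch below simply counts image files per split under `REAL/fmow/`; the accepted file extensions are an assumption, not part of the official pipeline.

```python
from pathlib import Path

# Assumed image extensions; adjust if crop.py emits a different format.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}

fmow_root = Path("REAL/fmow")
for split in ("train", "val", "test"):
    split_dir = fmow_root / split
    if not split_dir.is_dir():
        print(f"{split}: missing directory {split_dir}")
        continue
    n_images = sum(
        1 for p in split_dir.rglob("*") if p.suffix.lower() in IMAGE_EXTS
    )
    print(f"{split}: {n_images} images")
```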

## Disclaimer

RSFAKE-1M is a synthetic benchmark designed to facilitate research on forgery detection in remote sensing. The fake images are artificially generated and do **not** correspond to real-world scenes or locations. They must **not** be used for any purpose that could mislead, misinform, or be interpreted as real satellite or aerial data.

All model-generated content is based on the publicly available generative models listed below. RSFAKE-1M does not distribute or modify the original models themselves; only images produced under fair-use conditions are included.

By using this dataset, you acknowledge and agree that:

* The **real images** are reconstructed from the publicly available fMoW dataset and remain subject to its original license.
* The **forged images** are generated using publicly available diffusion models, whose licenses we fully acknowledge.
* We do **not claim ownership** of any third-party models or datasets used in RSFAKE-1M.
* This dataset is provided **strictly for non-commercial research and educational use**.
* Users must **cite the RSFAKE-1M paper** and comply with the licenses of all referenced resources.
* The authors bear **no responsibility for any misuse or downstream consequences** related to this dataset.

## Citation

```
@misc{tan2025rsfake1mlargescaledatasetdetecting,
      title={RSFAKE-1M: A Large-Scale Dataset for Detecting Diffusion-Generated Remote Sensing Forgeries},
      author={Zhihong Tan and Jiayi Wang and Huiying Shi and Binyuan Huang and Hongchen Wei and Zhenzhong Chen},
      year={2025},
      eprint={2505.23283},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.23283},
}
```

## Acknowledgements

We would like to thank the creators of the following models and datasets, which served as the basis for generating the fake and real images in RSFAKE-1M:

- 🛰 **Real Image Source**
  - [fMoW (Functional Map of the World)](https://github.com/fMoW/dataset.git) *(Functional Map of the World Challenge Public License)*

- 🧨 **Diffusion-Based Generative Models**
  - [CRSDiff](https://github.com/Sonettoo/CRS-Diff.git)
  - [Diffusion-SAT](https://github.com/samar-khanna/DiffusionSat) *(Apache 2.0)*
  - [GeoSynth](https://github.com/mvrl/GeoSynth.git) *(Apache 2.0)*
  - [MapSat](https://github.com/miquel-espinosa/map-sat.git) *(Apache 2.0)*
  - [RSPaint](https://github.com/SteveImmanuel/rs-paint.git) *(Apache 2.0)*
  - [SDFRS](https://github.com/xiaoyuan1996/Stable-Diffusion-for-Remote-Sensing-Image-Generation.git) *(MIT License)*
  - [GeoRSSD](https://huggingface.co/Zilun/GeoRSSD) *(Apache 2.0)*

We sincerely acknowledge the contributions of the above works. This dataset would not have been possible without their efforts in advancing generative modeling in the remote sensing domain.