🧠 Continual-MEGA: A Large-scale Benchmark for Generalizable Continual Anomaly Detection

This repository contains the evaluation code and pretrained checkpoints of the proposed model for the Continual-MEGA benchmark, submitted to ICLR 2026.

The training code will be made publicly available at a later date.

🔗 Codebase: Continual-Mega/Continual-Mega


🚀 Overview

Continual-MEGA introduces a realistic and large-scale benchmark for continual anomaly detection that emphasizes generalizability across domains and tasks.

The benchmark features:

  • ✅ Diverse anomaly types across domains
  • 🔁 Class-incremental continual learning setup
  • 📈 A large-scale evaluation protocol surpassing previous benchmarks
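Anomaly detection benchmarks of this kind are commonly scored with image-level AUROC. As a rough illustration only (this is not the benchmark's official protocol), a minimal rank-based AUROC can be computed as:

```python
def auroc(scores, labels):
    """Mann-Whitney U formulation of AUROC.

    scores: anomaly scores (higher = more anomalous)
    labels: 1 for anomalous samples, 0 for normal samples
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # Fraction of (anomalous, normal) pairs ranked correctly; ties count half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


print(auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # perfect separation -> 1.0
```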

This repository hosts pretrained model checkpoints used in various scenarios defined in the benchmark.


📦 Available Checkpoints

| Model Name | Scenario | Task | Description |
|---|---|---|---|
| `scenario2/prompt_maker` | Scenario 2 | Base | Prompt maker trained on Scenario 2 base classes |
| `scenario2/adapters_base` | Scenario 2 | Base | Adapter trained on Scenario 2 base classes |
| `scenario2/30classes/adapters_task1` | Scenario 2 | Task 1 (30 classes) | Adapter trained on Task 1 (30 classes) in Scenario 2 |
| `scenario2/30classes/adapters_task2` | Scenario 2 | Task 2 (30 classes) | Adapter trained on Task 2 (30 classes) in Scenario 2 |
| (More to come) | – | – | – |
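Assuming the checkpoints live at the paths listed in the table inside this Hub repository, they could be fetched with `huggingface_hub` roughly as follows. This is a sketch, not the project's official loading code: the `adapter_path` helper and the `repo_id` are assumptions.

```python
def adapter_path(scenario: str, task: str) -> str:
    """Hypothetical helper mirroring the checkpoint table's naming scheme."""
    if task == "base":
        return f"{scenario}/adapters_base"
    return f"{scenario}/30classes/adapters_{task}"


# Requires network access and `pip install huggingface_hub`;
# the repo_id below is an assumption based on the codebase link above:
# from huggingface_hub import hf_hub_download
# local_file = hf_hub_download(repo_id="Continual-Mega/Continual-Mega",
#                              filename=adapter_path("scenario2", "task1"))

print(adapter_path("scenario2", "task2"))  # scenario2/30classes/adapters_task2
```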

🛠 Usage Example

Continual Setting Evaluation

```bash
sh eval_continual.sh
```

Zero-Shot Evaluation

```bash
sh eval_zero.sh
```

Dataset used to train Continual-Mega/ADCT