FEMBA Logo

FEMBA: Foundational Encoder Model with Bidirectional Mamba for EEG


FEMBA is a powerful and efficient foundation model for EEG signal analysis, built upon a bidirectional Mamba state-space architecture. It supports self-supervised pre-training via masked reconstruction and end-to-end supervised fine-tuning for multiple downstream tasks (abnormal EEG detection, artifact detection, slowing classification). By using linear-time state-space modeling instead of quadratic attention, FEMBA scales to long EEG sequences and constrained hardware while remaining performant.


🔒 License & Usage Policy (Weights)

Weights license: The released model weights are licensed under Creative Commons Attribution-NoDerivatives 4.0 (CC BY-ND 4.0). This section summarizes the practical implications for users. This is not legal advice; please read the full license text.

✅ You may

  • Use and redistribute the unmodified FEMBA weights (including in commercial settings) with proper attribution to the FEMBA authors.
  • Fine-tune / adapt the weights for your internal use (research or production) without redistributing the modified weights.
  • Publish your code, configs, logs, and papers describing experiments with FEMBA (please cite the paper).

🚫 You may not

  • Share, host, or redistribute any modified weights (including LoRA/adapter/delta checkpoints or pruned/quantized variants). Any parameter set that encodes an adaptation is considered a derivative and cannot be shared under CC BY-ND 4.0.
  • Imply endorsement by the FEMBA authors for any derivative or evaluation without our written permission.
  • Use the FEMBA name in a way that suggests your modified model is an official FEMBA release.

🤝 How to contribute improvements (PR-gated releases)

We welcome community improvements via a pull-request (PR) workflow. If you believe your improvements should become an official FEMBA release:

  1. Open a PR in the BioFoundation repository describing the change (architecture/head/training recipe, datasets, preprocessing, compute).
  2. Include reproducibility artifacts: configs, seeds, scripts, environment details, training/validation logs, and the evaluation protocol (e.g., TUAB/TUAR/TUSL) with exact splits.
  3. Provide comprehensive results (AUROC/AUPR/BA, FLOPs, memory) vs. the baselines reported in the FEMBA paper.
  4. After maintainer review, approved changes will be retrained/validated and, if accepted, released by the maintainers as a new official FEMBA checkpoint under CC BY-ND 4.0.

Rationale: CC BY-ND protects users from fragmented, lower-quality "FEMBA variants," while still enabling internal fine-tuning and a path for the community to upstream improvements through review.


🔎 Model Summary

  • Architecture: Bidirectional Mamba encoder with a 2D-conv tokenizer (patching over channels × time), random masking (60%) for SSL, and either a lightweight linear head or a Mamba-enhanced head for downstream tasks. Hidden state size is fixed at 80 across variants.
  • Scaling: Linear time & memory in sequence length (state-space model), enabling efficient long-context EEG modeling and on-device scenarios.
  • Pre-training data: >21,000 hours of unlabeled clinical EEG from Temple University Hospital (TUEG). Subjects overlapping with TUAB/TUAR/TUSL are removed to prevent leakage.
  • Downstream tasks: TUAB abnormal/normal (binary), TUAR artifact detection (BC/MC/MMC/MCC), TUSL slowing (4-class). TUAB uses its predefined split; TUAR/TUSL use 80/10/10 splits.
  • Optimization (typical): Pre-training with Smooth L1 masked-patch reconstruction; fine-tuning with Adam (LR 1e-4), cosine decay, early stopping; layer-wise LR decay factor 0.75.

🚀 Model Variants

Variant       Parameters   (num_blocks, embed_dim)
FEMBA-tiny    7.8M         (2, 35)
FEMBA-base    47.7M        (12, 35)
FEMBA-large   77.8M        (4, 79)
FEMBA-huge    386M         (20, 79)

Hidden state size is 80 for all variants; blocks correspond to Bi-Mamba layers in the encoder.


🧠 Intended Use & Limitations

Intended use. Research on EEG representation learning and downstream classification (e.g., abnormal EEG detection, artifact detection, slowing classification). FEMBA is particularly useful when long sequences or limited compute/memory preclude quadratic-cost attention.

Out-of-scope / limitations.

  • Not a medical device. Outputs are for research use only and must not be used for clinical decision-making without appropriate validation and regulatory processes.
  • Domain shift. Performance can degrade across cohorts (e.g., neonatal vs. adult EEG) and label protocols; domain adaptation is encouraged.
  • Class imbalance. On some tasks (e.g., TUSL), AUROC may be strong while AUPR can trail attention baselines, highlighting sensitivity to class imbalance and protocol specifics.

πŸ—οΈ Architecture & Training Details

Tokenizer & patches. Raw EEG (C×T) is quartile-normalized per channel (IQR scaling) and tokenized with a 2D convolution over channel × time patches (e.g., 4×32) with learnable positional embeddings.
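
Below is a minimal sketch of this tokenization step in PyTorch; the layer sizes, patch counts, and function names are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

def iqr_normalize(x, eps=1e-6):
    # Per-channel quartile (IQR) normalization of raw EEG with shape (B, C, T).
    q1 = x.quantile(0.25, dim=-1, keepdim=True)
    q3 = x.quantile(0.75, dim=-1, keepdim=True)
    med = x.median(dim=-1, keepdim=True).values
    return (x - med) / (q3 - q1 + eps)

class PatchTokenizer(nn.Module):
    # 2D conv over (channel, time) patches, e.g. 4 channels x 32 samples per token.
    def __init__(self, embed_dim=35, patch_c=4, patch_t=32, max_patches=1024):
        super().__init__()
        self.proj = nn.Conv2d(1, embed_dim, kernel_size=(patch_c, patch_t),
                              stride=(patch_c, patch_t))
        self.pos = nn.Parameter(torch.zeros(1, max_patches, embed_dim))  # learnable positions

    def forward(self, x):                      # x: (B, C, T)
        x = iqr_normalize(x).unsqueeze(1)      # (B, 1, C, T)
        tok = self.proj(x)                     # (B, D, C/patch_c, T/patch_t)
        tok = tok.flatten(2).transpose(1, 2)   # (B, N, D) patch tokens
        return tok + self.pos[:, :tok.size(1)]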

Self-supervised objective. Randomly mask 60% of patches; reconstruct masked content with a lightweight decoder using Smooth L1 loss (computed on masked patches only).
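
A hedged sketch of this objective (zero-masking and the encoder/decoder call signatures are simplifying assumptions):

import torch
import torch.nn.functional as F

def masked_reconstruction_loss(tokens, encoder, decoder, mask_ratio=0.6):
    # tokens: (B, N, D) patch embeddings; `encoder`/`decoder` are stand-ins for the
    # Bi-Mamba encoder and the lightweight reconstruction decoder.
    B, N, _ = tokens.shape
    mask = torch.rand(B, N, device=tokens.device) < mask_ratio   # True = masked patch
    corrupted = tokens.masked_fill(mask.unsqueeze(-1), 0.0)      # simple zero-masking stand-in
    recon = decoder(encoder(corrupted))                          # (B, N, D)
    return F.smooth_l1_loss(recon[mask], tokens[mask])           # loss on masked patches only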

Encoder. Stacked Bidirectional Mamba blocks (forward + backward over a reversed copy), merged and residually connected; hidden size fixed to 80.
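
A minimal sketch of one such block, assuming a unidirectional Mamba layer like the one in the mamba_ssm package (the dependency and the exact merge rule are assumptions):

import torch.nn as nn
from mamba_ssm import Mamba  # assumed dependency; any causal SSM layer could stand in

class BiMambaBlock(nn.Module):
    # One forward Mamba pass plus one pass over the time-reversed sequence,
    # merged and residually connected (one possible merge choice).
    def __init__(self, embed_dim=35, d_state=80):
        super().__init__()
        self.fwd = Mamba(d_model=embed_dim, d_state=d_state)
        self.bwd = Mamba(d_model=embed_dim, d_state=d_state)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                                  # x: (B, N, D)
        y_fwd = self.fwd(x)
        y_bwd = self.bwd(x.flip(dims=[1])).flip(dims=[1])  # process reversed copy, flip back
        return self.norm(x + y_fwd + y_bwd)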

Fine-tuning heads.

  • Linear classifier: small MLP (≈0.5M params).
  • Mamba-enhanced classifier: adds one Mamba block before the linear layer (up to ≈0.7M params); see the sketch after this list.
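
A hedged sketch of both heads (pooling, hidden sizes, and class names are illustrative; actual parameter counts depend on the variant):

import torch.nn as nn
from mamba_ssm import Mamba  # assumed dependency, as in the encoder sketch above

class LinearHead(nn.Module):
    # Small MLP classifier over mean-pooled patch tokens (pooling choice is an assumption).
    def __init__(self, embed_dim=35, hidden=256, n_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(embed_dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, tokens):                # tokens: (B, N, D)
        return self.mlp(tokens.mean(dim=1))

class MambaEnhancedHead(nn.Module):
    # Adds one Mamba block before the linear classifier.
    def __init__(self, embed_dim=35, n_classes=2):
        super().__init__()
        self.block = Mamba(d_model=embed_dim, d_state=80)
        self.head = LinearHead(embed_dim, n_classes=n_classes)

    def forward(self, tokens):
        return self.head(self.block(tokens))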

Optimization notes. Layer-wise LR decay (0.75); fine-tuning with Adam (initial LR 1e-4), cosine decay, early stopping; end-to-end updates (encoder + head).
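
A sketch of how the layer-wise LR decay and schedule could be wired up (attribute names such as model.blocks and model.head are hypothetical):

import torch

def layerwise_param_groups(encoder_blocks, head, base_lr=1e-4, decay=0.75):
    # Blocks closer to the head keep a higher LR; earlier blocks are scaled down by 0.75 per layer.
    groups = [{"params": head.parameters(), "lr": base_lr}]
    n = len(encoder_blocks)
    for i, block in enumerate(encoder_blocks):   # i = 0 is the earliest encoder block
        groups.append({"params": block.parameters(), "lr": base_lr * decay ** (n - i)})
    return groups

# optimizer = torch.optim.Adam(layerwise_param_groups(model.blocks, model.head))
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)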


📚 Training Data

  • Pre-training: Temple University Hospital EEG (TUEG), ~21k hours, ~15k subjects; broad clinical coverage. Overlaps with TUAB/TUAR/TUSL removed to avoid leakage.
  • Downstream:
    • TUAB (abnormal vs normal; predefined split).
    • TUAR (artifact detection, BC/MC/MMC/MCC protocols; randomized 80/10/10).
    • TUSL (4-class slowing/seizure/complex/normal; randomized 80/10/10).

See paper for dataset licenses and curation details; users are responsible for complying with source dataset terms.


🔧 How to Use

FEMBA weights are organized by downstream task:

  • TUAB/ → base/large variants pre-trained on TUEG (excluding TUAB), for TUAB abnormal EEG detection.
  • TUAR/ → tiny/base/large variants pre-trained on TUEG (excluding TUAR), for TUAR artifact detection.
  • TUSL/ → variants pre-trained on TUEG (excluding TUSL), for TUSL slowing classification.

Example: fine-tune on TUAR using the base checkpoint:

TUAR/base.safetensors

Open run_train.py from the BioFoundation GitHub repository and configure:

import os

# Set paths (example)
os.environ['DATA_PATH'] = "/path/to/dataset"
os.environ['CHECKPOINT_DIR'] = "/my_directory/TUAR/base.safetensors"

Then launch fine-tuning (Hydra):

python -u run_train.py +experiment=FEMBA_finetune

Environment variables

  • DATA_PATH: directory of the fine-tuning dataset.
  • CHECKPOINT_DIR: path to the chosen task-specific checkpoint (used directly in the loading sketch below).
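
If you want to inspect or load a checkpoint outside the Hydra pipeline, a minimal sketch using the safetensors package is shown below (run_train.py normally handles loading; the model object and key layout are assumptions):

import os
from safetensors.torch import load_file

state_dict = load_file(os.environ["CHECKPOINT_DIR"])   # e.g. /my_directory/TUAR/base.safetensors
print(f"{len(state_dict)} tensors in checkpoint")
# model.load_state_dict(state_dict, strict=False)      # `model` must match the chosen FEMBA variant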

📊 Results (Key Highlights)

TUAB (Abnormal EEG Detection)

  • FEMBA-Huge: 81.82% balanced accuracy, 0.892 AUROC.

TUAR (Artifact Detection)

  • Binary (BC): FEMBA-Base AUROC 0.949, AUPR 0.932.

TUSL (Slowing Classification, 4-class)

  • FEMBA-Base: AUROC 0.731.

Full metrics, protocols, and comparisons, including MC/MMC on TUAR and multiple FEMBA sizes, are reported in the paper.


⚡ Efficiency

FEMBA provides strong accuracy with reduced compute/memory relative to Transformer baselines:

  • FEMBA-Huge (386M): ~58.7B FLOPs, ~30% less memory than comparable Transformer baselines, with competitive TUAB accuracy.
  • FEMBA-Tiny (7.8M): ~1.31B FLOPs, substantially fewer than large Transformer baselines, while achieving strong TUAR MCC performance.
  • FEMBA-Base (47.7M): ~7.52B FLOPs, markedly lower than many attention-based baselines.

See the paper for details on measurement setup and baseline references.


✅ Responsible AI, Risks & Biases

  • Clinical safety: This model is not a certified medical device and should not be used for diagnosis. Human oversight is required.
  • Bias & drift: Clinical EEG cohorts vary by device, montage, age, and pathology. Expect distribution shift and validate locally; consider domain adaptation (e.g., neonatal vs adult).
  • Artifacts: While artifact detection is strong, rare/complex artifacts may still be misclassified; use robust pre-processing and QC procedures.

🔗 Sources

  • Paper: https://arxiv.org/abs/2502.06438
  • Code: BioFoundation GitHub repository


📜 Citation

If you use FEMBA in your research, please cite:

@misc{tegon2025fembaefficientscalableeeg,
      title={FEMBA: Efficient and Scalable EEG Analysis with a Bidirectional Mamba Foundation Model}, 
      author={Anna Tegon and Thorir Mar Ingolfsson and Xiaying Wang and Luca Benini and Yawei Li},
      year={2025},
      eprint={2502.06438},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2502.06438}
}

πŸ› οΈ Maintenance & Contact

  • Issues & support: please open a GitHub issue in the BioFoundation repository.

πŸ—’οΈ Changelog

  • v1.0: Initial release of FEMBA model card with task-specific checkpoints and instructions.