# Model Card for REVE Position Bank
A wrapper that provides electrode positions for use with the REVE EEG foundation model.
## Model Details

### Model Description
- Developed by: the BRAIN team and UdeM
- Funded by: AI@IMT, ANR JCJC ENDIVE, Jean Zay (with project numbers), Alliance Canada, and Région Bretagne
REVE (Representation for EEG with Versatile Embeddings) is a pretrained model explicitly designed to generalize across diverse EEG signals. REVE introduces a novel 4D positional encoding scheme that enables it to process signals of arbitrary length and electrode arrangement.
This position bank repository can be used to fetch electrode positions by name, in order to perform inference with the REVE model.
### Model Sources
## Uses
Example script to fetch electrode positions and extract embeddings with REVE:

```python
from transformers import AutoModel

# Load the position bank that maps electrode names to 3D coordinates
pos_bank = AutoModel.from_pretrained("brain-bzh/reve-positions", trust_remote_code=True)

eeg_data = ...  # EEG data (batch_size, channels, time_points), must be sampled at 200 Hz
electrode_names = [...]  # List of electrode names corresponding to the channels in eeg_data

positions = pos_bank(electrode_names)  # Get positions (channels, 3)

# Load the pretrained REVE model
model = AutoModel.from_pretrained("brain-bzh/reve-base", trust_remote_code=True)

# Expand the positions tensor to match the batch size
positions = positions.expand(eeg_data.size(0), -1, -1)  # (batch_size, channels, 3)

output = model(eeg_data, positions)
```
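A minimal runnable variant of the snippet above, filling the `...` placeholders with synthetic data. The batch size, channel names, and 4-second window are illustrative assumptions; only the 200 Hz sampling rate comes from the requirement stated above.

```python
import torch

# Synthetic stand-in for real EEG: 8 trials, 3 channels, 4 s at 200 Hz (illustrative shapes)
eeg_data = torch.randn(8, 3, 800)
electrode_names = ["Fz", "Cz", "Pz"]  # standard 10-20 names, assumed to be in the bank

positions = pos_bank(electrode_names).expand(eeg_data.size(0), -1, -1)  # (8, 3, 3)
output = model(eeg_data, positions)  # embeddings for each trial
```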
Available electrode names can be printed using the method `pos_bank.get_all_positions()`, and can be visualized here.
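For instance (a sketch; the return type of `get_all_positions()` is assumed here to be a name-to-coordinate mapping):

```python
# Inspect which electrode names the bank knows about
all_positions = pos_bank.get_all_positions()  # assumed: mapping of electrode name -> 3D coordinate
print(sorted(all_positions)[:10])  # peek at the first few names
```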
Most common electrode setups are available (10-20, 10-10, 10-05, EGI 256). For Biosemi-128, use the prefix `biosemi128_` before the electrode names (e.g., `biosemi128_C13`), as in the sketch below.
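A short sketch of the prefix convention; only `biosemi128_C13` appears in this card, the other channel names are illustrative:

```python
# Biosemi-128 electrodes require the biosemi128_ prefix
biosemi_names = ["biosemi128_C13", "biosemi128_C14", "biosemi128_A1"]
positions = pos_bank(biosemi_names)  # expected shape (3, 3): one 3D coordinate per electrode
```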