Neural Interfaces for Generalizable Decoding is a unified, open-source benchmark for AI models of EEG/EMG activity. 94 datasets, 36 EEG tasks, 14 architectures — all behind a standardized BIDS-first interface. Pretrain once. Evaluate everywhere. Reproduce by construction.
# 1. Install the benchmark
$ pip install neuralbench eegdash braindecode

# 2. Download a task
$ neuralbench eeg eeg_to_image --download

# 3. Run the full grid
$ neuralbench eeg eeg_to_image -m eegnet
↳ NeuralBench-EEG-Core v1.0 · 94 datasets · 36 tasks
Each track shares the same evaluation harness, BIDS-first datasets, and reproducibility audit. Submit to one track — or stack up against all five.
Evoked visual retrieval: rank held-out candidate images from EEG epochs recorded during visual stimulation. Targets are frozen DINOv2-giant embeddings.
Calibration-stable command decoding across sessions and contexts. Generalize motor imagery, calculation, and word association tasks to later sessions without recalibration.
Estimate the latency from recording start to stable sleep onset (N2 event) using consumer-grade wearable EEG. Staging is secondary to precise timing.
Decode typed keystrokes from wristband surface EMG. Sequence transduction across 100+ users, addressing anatomy, typing strategy, and sensor re-placement.
The ultimate test of shift-robust representations: use a single shared EEG encoder across the Image, BCI, and Sleep tracks. This track evaluates reusable cross-task learning. Foundation models: can one model do better across all tasks?
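As a concrete illustration of the Image track's retrieval setup: a model maps each EEG epoch to an embedding, and held-out candidate images are ranked by similarity to frozen image embeddings. The cosine-similarity ranking sketched below is an assumption for illustration, not the benchmark's official scorer, and `rank_candidates` is a hypothetical helper:

```python
import numpy as np

def rank_candidates(eeg_emb: np.ndarray, image_embs: np.ndarray) -> np.ndarray:
    """Rank candidate images by cosine similarity to a decoded EEG embedding.

    eeg_emb:    (d,) embedding predicted from one EEG epoch.
    image_embs: (n, d) frozen candidate image embeddings (e.g., DINOv2-giant).
    Returns candidate indices, best match first.
    """
    eeg = eeg_emb / np.linalg.norm(eeg_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = imgs @ eeg          # cosine similarity per candidate
    return np.argsort(-scores)   # indices in descending similarity

# Toy check: a noisy copy of candidate 3 should retrieve candidate 3 first.
rng = np.random.default_rng(0)
cands = rng.normal(size=(5, 8))
query = cands[3] + 0.01 * rng.normal(size=8)
order = rank_candidates(query, cands)
```

Retrieval metrics such as top-k accuracy then follow directly from the position of the true image in `order`.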
One warm-up window, one final submission window, one reproducibility audit, one weekend in Sydney. The schedule follows the NeurIPS 2026 competition guidelines: launch July 1, submissions close September 1 (AoE), final rankings November 1, present at the December workshop.
Public validation set, unlimited submissions, baseline models released on Jul 1. Get your pipeline running end-to-end before the sealed test opens.
Sealed test set opens Aug 1. Daily submission caps enforced. Submissions close Sep 1 AoE. Each team submits a 2-page methods note alongside their best model.
Reproducibility audit of the top-ranked submissions on Oct 1. Final rankings released Nov 1; competition reports and analysis paper drafted.
Track winners present in Sydney at the NeurIPS Competition Track. Awards, lessons-learned keynote, and the v2.0 roadmap. Travel grants for top-3 per track.
Search standardized metadata, run reproducible pipelines, export model-ready features in minutes.
from eegdash import EEGDashDataset

dataset = EEGDashDataset(
    cache_dir="./data",
    dataset="ds002718",
    subject=["012", "013"],
)
# 47 recordings · BIDS · streamed
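"Model-ready features" here usually means fixed-length windows cut from each continuous recording. A minimal NumPy sketch of that epoching step, assuming a `(channels, samples)` array already loaded from a recording (the `to_windows` helper and its parameters are illustrative, not part of the eegdash API):

```python
import numpy as np

def to_windows(raw: np.ndarray, sfreq: float,
               win_s: float = 2.0, stride_s: float = 1.0) -> np.ndarray:
    """Slice a continuous (channels, samples) recording into overlapping
    fixed-length windows of shape (n_windows, channels, win_samples)."""
    win = int(win_s * sfreq)
    stride = int(stride_s * sfreq)
    n = 1 + max(0, raw.shape[1] - win) // stride
    return np.stack([raw[:, i * stride:i * stride + win] for i in range(n)])

# 32 channels, 10 s at 128 Hz -> 2 s windows with 1 s stride
raw = np.zeros((32, 10 * 128))
windows = to_windows(raw, sfreq=128)
```

Each window is then a ready-made training example for an encoder such as EEGNet; in practice, libraries like braindecode provide equivalent windowing utilities.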
Compute, datasets, prizes, and engineering hours from 14 institutions across 5 countries. Yneuro hosts the platform; AWS provides the GPU cloud. Each track has its own sponsor: Alljoined for EEG-to-IMG, Meta FAIR Brain & AI for BCI decoding, InteraXon for sleep onset, and Meta Reality Labs for EMG-to-Text.
The challenge is open-source, MIT-licensed, and community-driven. Add a task, add a dataset, add a model — every contribution makes the benchmark stronger.
$ git clone https://github.com/neural-interfaces26/neuralbench
$ cd neuralbench
$ pip install -e ".[dev]"

# Add your task
$ python scripts/register_task.py --modality eeg