A combined $XXX,000 USD prize pool, AWS compute grants for finalists, travel support for the top three teams per track, and three cross-cutting awards recognizing reproducibility, methodology, and student work. The benchmark is open-source; we reward open science as much as final accuracy.
Each track has its own sponsor and its own prize. First place takes the headline number. Second and third receive runner-up grants from the same sponsor and an invitation to present at the workshop.
EEG-to-IMG: Top-5 retrieval accuracy on held-out images. Sponsored by Alljoined, who also contribute the dataset.
BCI: Balanced accuracy across motor imagery, mental math, and word association, without recalibration. Sponsored by Meta FAIR Brain & AI.
Sleep: Lowest MAE in seconds to stable N2 onset on consumer-grade wearable EEG. Sponsored by InteraXon, who provide the Muse-grade recordings.
EMG: Lowest character error rate decoding typed text from wristband surface EMG. Sponsored by Meta Reality Labs. (Minimal sketches of all four metrics follow this list.)
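To make the four metrics concrete, here is a minimal Python sketch of each. Input conventions and function names are assumptions for illustration, not the official scoring interface.

```python
"""Illustrative per-track metrics; not the official scorers."""
import numpy as np

def top5_retrieval_accuracy(scores, target_idx):
    """EEG-to-IMG: fraction of queries whose true image ranks in the top 5.
    scores: (n_queries, n_candidates) similarity matrix.
    target_idx: (n_queries,) index of the true image for each query."""
    top5 = np.argsort(-scores, axis=1)[:, :5]
    return float(np.mean([t in row for t, row in zip(target_idx, top5)]))

def balanced_accuracy(y_true, y_pred):
    """BCI: mean of per-class recalls, so each mental task counts equally."""
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

def n2_onset_mae(pred_onset_s, true_onset_s):
    """Sleep: mean absolute error, in seconds, of predicted stable N2 onset."""
    return float(np.mean(np.abs(np.asarray(pred_onset_s) - np.asarray(true_onset_s))))

def character_error_rate(pred, ref):
    """EMG: Levenshtein edit distance from decoded text to the reference,
    normalized by the reference length."""
    d = np.zeros((len(pred) + 1, len(ref) + 1), dtype=int)
    d[:, 0] = np.arange(len(pred) + 1)
    d[0, :] = np.arange(len(ref) + 1)
    for i in range(1, len(pred) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if pred[i - 1] == ref[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(pred), len(ref)] / max(len(ref), 1)
```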
The cross-track winner. A single shared EEG encoder evaluated on the EEG-to-IMG, BCI, and Sleep tracks at once. Rewards the team whose model is the most generally useful, not the most specialized. Hosted by Yneuro.
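As a sketch of what "a single shared encoder evaluated on three tracks" can look like in practice (assuming PyTorch; module names, layer sizes, and head outputs are illustrative, not a required interface):

```python
"""Minimal sketch of the cross-track idea: one shared EEG backbone,
one small head per track. Everything here is an assumed example."""
import torch
import torch.nn as nn

class SharedEEGEncoder(nn.Module):
    def __init__(self, n_channels=64, d_model=256):
        super().__init__()
        # Shared temporal-conv backbone pooled to a single embedding.
        self.backbone = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, padding=3),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Lightweight per-track heads; only the backbone is shared.
        self.heads = nn.ModuleDict({
            "eeg_to_img": nn.Linear(d_model, 512),  # retrieval embedding
            "bci": nn.Linear(d_model, 3),           # 3-way task classification
            "sleep": nn.Linear(d_model, 1),          # N2 onset regression (s)
        })

    def forward(self, x, track):
        z = self.backbone(x).squeeze(-1)  # (batch, d_model)
        return self.heads[track](z)

# Usage: the same backbone weights serve all three tracks.
model = SharedEEGEncoder()
eeg = torch.randn(8, 64, 1000)            # (batch, channels, samples)
img_emb = model(eeg, "eeg_to_img")
```

Keeping the per-track heads small is one way to ensure the cross-track score mostly reflects how transferable the shared representation is, which is the point of this award.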
Final accuracy is one signal — and not always the most useful one. Three additional awards recognize methodological clarity, reproducibility, and student work. Decided by the organizer panel, not the scoring server.
For the clearest 2-page methods note among the top-10 of any track. Rewards work that is straightforward to reproduce, clearly scoped, and honest about its ablations, including what didn't work. Judged blind by a 5-organizer panel.
For the submission whose audit pass produces the smallest gap between the originally-submitted score and the re-run score on the sealed split. Effectively, the team that runs the cleanest pipeline. Awarded with the support of EEGLAB and Codabench.
Highest-ranked team where every author is a student or junior researcher (≤2 years post-PhD) at submission time. Self-declared; verified at audit. Cross-track — only the team's best track placement counts.
The single largest barrier to participating in a NeurIPS competition is travel cost. The single largest barrier to building a winning entry is compute. We try to remove both, supported by AWS and the host institutions.
The leaderboard determines the per-track finalists. The audit determines whether they stay finalists. The organizer panel decides the special awards. Read this before you submit — eligibility quirks (industry affiliations, prior datasets, anonymized handles) all live here.
Industry, academia, students, independents. Cross-institution teams are encouraged. The single exception: organizers and their direct lab members cannot win track prizes — they appear on the public leaderboard as "Organizers · reference" and are ineligible. Special awards remain open if no organizer authorship is involved.
The final NeurIPS ranking, and therefore the per-track prizes, is based on the best of your last five sealed-phase submissions, after the reproducibility audit. If your audit gap exceeds ±2σ on the metric, you drop off the prize roster (but stay on the public board for context).
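A minimal sketch of that selection and the ±2σ gate, assuming scores are plain floats and that σ is taken over the sealed-phase leaderboard scores (the official audit may define the spread differently); function names are illustrative.

```python
import numpy as np

def final_score(last_submissions, higher_is_better=True):
    """Best of the team's last five sealed-phase submissions."""
    last_five = last_submissions[-5:]
    return max(last_five) if higher_is_better else min(last_five)

def audit_gap(submitted_score, rerun_score):
    """Gap between the originally-submitted score and the audited re-run;
    the smallest gap among finalists also decides the reproducibility award."""
    return abs(submitted_score - rerun_score)

def prize_eligible(submitted_score, rerun_score, leaderboard_scores, k=2.0):
    """Stay on the prize roster only if the audit gap is within k standard
    deviations of the metric's spread (assumed here: sealed-phase scores)."""
    sigma = float(np.std(leaderboard_scores))
    return audit_gap(submitted_score, rerun_score) <= k * sigma
```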
The methods-note and student awards are decided by a blind 5-organizer panel drawn from the top-10 of each track. The reproducibility award is mechanical; it falls out of the audit. All three are announced together with the per-track winners on Nov 1, 2026.
Cash prizes are paid by wire transfer in USD within 90 days of the NeurIPS workshop. Multi-author teams nominate one bank account at audit time and split internally — the organizing committee does not handle internal team allocations. Tax treatment is the recipient's responsibility.
The prize pool is funded by the track sponsors and the host organization. AWS provides the compute grants; ChaLearn supports diversity travel; the host institutions cover top-3 travel for every track. Every sponsor is listed on the homepage with their contribution.
All sponsors & institutions →
The full per-track table, the audit gate, and the methods-note rubric are on the start-kit page. Start with a baseline, beat it, and write up what you learned; the awards will take care of themselves.
```markdown
# Methods note · <team> · <track>

## 1. Model
arch, params, init, public weights used?

## 2. Data
splits, augmentations, external datasets

## 3. Ablations
what helped, what didn't, with numbers

## 4. Reproduction
git sha, command, expected runtime
```