Preview: The competition is in preview. Prize amounts and sponsor allocations below are placeholder values pending final sponsorship agreements. Categories and selection rules are final.
Awards · four tracks, four winners, more recognition than money

Awards & prizes.

A combined $XXX,000 USD prize pool, AWS compute grants for finalists, travel support for top-3 per track, and three cross-cutting awards that reward reproducibility, methodology, and student work. The benchmark is open-source — we reward open science as much as final accuracy.

$XXX,000 prize pool · 4 track winners · 3 special awards · Travel grants for top-3 · AWS compute
Prize pool snapshot · placeholder USD
$50,000 · per-track first place
$25,000 · special awards
25,000 · AWS GPU-hours
12 · travel grants
Per-track · four tracks, four winners · amounts placeholder

One first-place team per track.

Each track has its own sponsor and its own prize. First place takes the headline number. Second and third receive runner-up grants from the same sponsor and an invitation to present at the workshop.

Selection: Final ranking (post-audit)
Announced: Nov 1, 2026
Awarded: NeurIPS · Dec 11–12, 2026
Track 01 EEG-to-IMG

Visual retrieval winner.

Top-5 retrieval accuracy on held-out images. Sponsored by Alljoined, who also contribute the dataset.

Sponsor: Alljoined · Metric: Top-5 acc
1st: $50,000
2nd: $15,000
3rd: $5,000
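Concretely, top-5 retrieval accuracy counts a trial as correct when the true image is among the five highest-scoring candidates. A minimal sketch in NumPy; array names and shapes are illustrative, not the official scorer:

```python
import numpy as np

def top5_accuracy(scores: np.ndarray, true_idx: np.ndarray) -> float:
    """scores: (n_trials, n_candidates) similarity matrix;
    true_idx: (n_trials,) index of the correct image per trial."""
    # Indices of the five highest-scoring candidates for each trial.
    top5 = np.argsort(scores, axis=1)[:, -5:]
    # A trial is a hit if the true index appears among those five.
    return float((top5 == true_idx[:, None]).any(axis=1).mean())
```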
Track 02 BCI

BCI generalization winner.

Balanced accuracy across motor imagery, mental math, and word association — without recalibration. Sponsored by Meta FAIR Brain & AI.

Sponsor: Meta FAIR Brain & AI · Metric: Bal. Acc
1st: $50,000
2nd: $15,000
3rd: $5,000
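Balanced accuracy is the unweighted mean of per-class recall, so a model cannot inflate its score by exploiting class imbalance in any one paradigm. A sketch using scikit-learn; the pooled labels below are made up and the official scorer may differ:

```python
from sklearn.metrics import balanced_accuracy_score

# Hypothetical trials pooled across motor imagery (0), mental math (1),
# and word association (2), decoded without per-paradigm recalibration.
y_true = [0, 0, 1, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 2]
# Mean of per-class recall: (1/2 + 2/3 + 2/2) / 3 ≈ 0.72
print(balanced_accuracy_score(y_true, y_pred))
```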
Track 03 SLEEP

Sleep-onset winner.

Lowest MAE in seconds to stable N2 onset on consumer-grade wearable EEG. Sponsored by InteraXon, who provide the Muse-grade recordings.

Sponsor: InteraXon · Metric: MAE (s)
1st: $50,000
2nd: $15,000
3rd: $5,000
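The metric here is plain mean absolute error between predicted and ground-truth N2-onset times, in seconds. A minimal sketch; names are illustrative:

```python
import numpy as np

def onset_mae(pred_onset_s: np.ndarray, true_onset_s: np.ndarray) -> float:
    """Mean absolute error (seconds) to stable N2 onset, one value per night."""
    return float(np.mean(np.abs(pred_onset_s - true_onset_s)))

print(onset_mae(np.array([610.0, 1180.0]), np.array([600.0, 1200.0])))  # 15.0
```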
Track 04 EMG

EMG-to-Text winner.

Lowest character error rate decoding typed text from wristband surface EMG. Sponsored by Meta Reality Labs.

Sponsor: Meta Reality Labs · Metric: CER (%)
1st: $50,000
2nd: $15,000
3rd: $5,000
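Character error rate is the character-level Levenshtein (edit) distance between the decoded string and the reference, divided by the reference length. A self-contained sketch, not the official scorer:

```python
def cer(hyp: str, ref: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    # Dynamic-programming edit distance over characters.
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        curr = [i]
        for j, r in enumerate(ref, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (h != r)))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

print(cer("helo wrld", "hello world"))  # 2 edits / 11 chars ≈ 0.18
```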
Cross-cutting recognition · amounts placeholder

Three awards that don't depend on the leaderboard.

Final accuracy is one signal — and not always the most useful one. Three additional awards recognize methodological clarity, reproducibility, and student work. Decided by the organizer panel, not the scoring server.

Special · methods · $10,000

Best methods note.

For the clearest 2-page methods note among the top-10 of any track. Rewards work that is straightforward to reproduce, tightly scoped, and honest about its ablations, including what didn't work. Judged blind by a 5-organizer panel.

Blind judging · Top-10 eligible
Judges: 5 organizers · blind
Form factor: 2-page PDF
Special · reproducibility · $10,000

Reproducibility award.

For the submission whose audit pass produces the smallest gap between the originally submitted score and the re-run score on the sealed split. Effectively, the team that runs the cleanest pipeline. Awarded with the support of EEGLAB and Codabench.

Audit-decided · EEGLAB & Codabench
Decided at: Audit · Oct 1
Lead: Arnaud Delorme
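Because the award is mechanical, it reduces to an argmin over audit gaps. A sketch with hypothetical scores; the organizers may normalize the gap per metric:

```python
def reproducibility_winner(audits: dict) -> str:
    """audits: {team: (submitted_score, rerun_score)} on the sealed split."""
    # Smallest absolute gap between submitted and re-run score wins.
    return min(audits, key=lambda t: abs(audits[t][0] - audits[t][1]))

print(reproducibility_winner({"team_a": (0.812, 0.799),
                              "team_b": (0.790, 0.789)}))  # -> "team_b"
```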
Special · students · $5,000

Best student team.

Highest-ranked team where every author is a student or junior researcher (≤2 years post-PhD) at submission time. Self-declared; verified at audit. Cross-track — only the team's best track placement counts.

Cross-track · Self-declared
Eligibility: Student / ≤ 2 y post-PhD
Verified: At audit
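As a sketch, the rule amounts to filtering all-student teams and taking the one with the best (lowest) placement across any track; the data shapes here are hypothetical:

```python
def best_student_team(teams: list) -> dict:
    """teams: [{"name": str, "all_students": bool, "placements": {track: rank}}].
    Eligibility is self-declared and verified at audit; only each team's
    best track placement counts (cross-track)."""
    eligible = [t for t in teams if t["all_students"]]
    return min(eligible, key=lambda t: min(t["placements"].values()))
```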
Travel & compute · removing barriers

If you can build it, we want you in Sydney.

The single largest barrier to participating in a NeurIPS competition is travel cost. The single largest barrier to building a winning entry is compute. We try to remove both, supported by AWS and the host institutions.

| Grant | Eligibility | Value | Slots | Apply | Deadline | Backed by |
| --- | --- | --- | --- | --- | --- | --- |
| Travel grant | Top-3 per track | $2,500 / person | 12 | Auto-awarded | Nov 5, 2026 | Host institutions |
| Diversity travel grant | Underrepresented regions · top-10 | $2,000 / person | 8 | Application form | Sep 15, 2026 | ChaLearn |
| AWS compute grant | Sealed-phase finalists · top-25 | 1,000 GPU-hours | 25 | Auto-awarded | Aug 1, 2026 | AWS |
| Workshop scholarship | Students · best methods note shortlist | Reg. waiver + $1,500 | 6 | Auto-shortlisted | Oct 25, 2026 | Organizers |
All grants USD. Expected fill rates are based on the 2025 challenge. Ask about a grant →
How awards are decided

Selection, eligibility, and the audit gate.

The leaderboard determines the per-track finalists. The audit determines whether they stay finalists. The organizer panel decides the special awards. Read this before you submit — eligibility quirks (industry affiliations, prior datasets, anonymized handles) all live here.

Eligibility · OPEN

Anyone can compete.

Industry, academia, students, independents. Cross-institution teams are encouraged. The single exception: organizers and their direct lab members cannot win track prizes — they appear on the public leaderboard as "Organizers · reference" and are ineligible. Special awards remain open if no organizer authorship is involved.

Team size: No cap
Organizers: Reference-only, no prizes
Track prizes · RANKING

Best of last five, post-audit.

The final NeurIPS ranking — and therefore the per-track prizes — is decided on the best of your last five sealed-phase submissions, after the reproducibility audit. If your audit gap exceeds ±2 σ on the metric, you drop off the prize roster (but stay on the public board for context).

Tolerance: ±2 σ on metric
Aggregation: Best of last 5
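Put together, the rule is: take each team's best of its last five sealed-phase submissions, drop anyone whose audit gap exceeds the ±2 σ tolerance, and rank the rest. A minimal sketch assuming higher scores are better; all names are hypothetical:

```python
def final_ranking(teams: dict, sigma: float) -> list:
    """teams: {name: {"last5": [scores...], "rerun": audited_score}}."""
    ranked = []
    for name, t in teams.items():
        best = max(t["last5"][-5:])    # best of the last five submissions
        gap = abs(best - t["rerun"])   # reproducibility-audit gap
        if gap <= 2 * sigma:           # within tolerance: prize-eligible
            ranked.append((name, best))
        # Teams outside tolerance stay on the public board but not here.
    return sorted(ranked, key=lambda r: r[1], reverse=True)
```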
Special awards · PANEL

Decided by a 5-organizer panel.

Methods note and student awards are decided by a blind 5-organizer panel from the top-10 per track. Reproducibility award is mechanical — it falls out of the audit. All three are announced together with the per-track winners on Nov 1, 2026.

Panel: 5 organizers · blind
Announced: Nov 1, 2026
Payout · USD · WIRE

Wire transfer · within 90 days.

Cash prizes are paid by wire transfer in USD within 90 days of the NeurIPS workshop. Multi-author teams nominate one bank account at audit time and split internally — the organizing committee does not handle internal team allocations. Tax treatment is the recipient's responsibility.

Currency: USD
Window: ≤ 90 days post-workshop
Funded by

The people behind the prizes.

The prize pool is funded by the track sponsors and the host institutions. AWS provides the compute grants; ChaLearn supports diversity travel; the host institutions cover top-3 travel for every track. Every sponsor is listed on the homepage with their contribution.

All sponsors & institutions →
Compute partner · Track sponsors · Host & diversity support
Warm-up opens Jul 1, 2026

Four track prizes, three special awards, one workshop in Sydney.

The full per-track table, the audit gate, and the methods-note rubric are on the start-kit page. Start with a baseline, beat it, write up what you learned — the awards will take care of themselves.

methods.md · 2-page skeleton
# Methods note · <team> · <track>

## 1. Model
arch, params, init, public weights used?

## 2. Data
splits, augmentations, external datasets

## 3. Ablations
what helped, what didn't, with numbers

## 4. Reproduction
git sha, command, expected runtime