# Submitting Your Results
We welcome submissions from all models and agent frameworks. To have your results included in our leaderboard, please follow the instructions below.
## Algorithmic Problems
We currently release 1–3 public test cases per problem for local testing and debugging. Full evaluation (with all test cases) is performed on our servers.
### What to Submit
- **Solution files**: `{problem_id}_{model_name}_solution.cpp` for each problem
- **Model/Agent info**: Name and version of the model or agent framework used
- **Generation method**: Brief description of how solutions were generated (e.g., one-shot, multi-turn, with/without feedback)
### Submission Format
Organize your solutions as:
```
submissions/
├── 1_gpt4_solution.cpp
├── 2_gpt4_solution.cpp
├── ...
└── metadata.json
```
`metadata.json`:

```json
{
  "model": "gpt-4o",
  "agent_framework": "custom",
  "generation_method": "one-shot",
  "date": "2025-01-15",
  "notes": "Optional additional notes"
}
```
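Before archiving, a quick pre-flight check can catch naming mistakes. The script below is a hypothetical helper, not part of the benchmark tooling; it assumes numeric problem IDs and model names without underscores, as in the example tree above:

```python
# Hypothetical pre-flight check for an algorithmic-problem submission.
# Assumptions: numeric problem IDs and model names without underscores,
# matching the example tree above.
import re
from pathlib import Path

PATTERN = re.compile(r"^(\d+)_([A-Za-z0-9.\-]+)_solution\.cpp$")

def check_submissions(root: str = "submissions") -> None:
    folder = Path(root)
    if not (folder / "metadata.json").exists():
        print("WARNING: metadata.json is missing")
    for path in sorted(folder.glob("*.cpp")):
        if PATTERN.match(path.name):
            print(f"OK   {path.name}")
        else:
            print(f"BAD  {path.name}: expected {{problem_id}}_{{model_name}}_solution.cpp")

check_submissions()
```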
## Research Problems
Research problems require a `solution.py` file implementing the `Solution` class interface.
### Problem Structure
Research problems follow a hierarchical structure:

```
Problem (e.g., gemm_optimization, poc_generation)
└── Category (e.g., squares, heap_buffer_overflow)
    └── Variant (e.g., arvo_21000)
```
| Level | Example | Description |
|---|---|---|
| Problem | `gemm_optimization` | Top-level problem domain |
| Category | `gemm_optimization/squares` | Scores are aggregated at this level for leaderboard reporting |
| Variant | `poc_generation/heap_buffer_overflow/arvo_21000` | Each variant is evaluated independently with its own README |
**Key distinction:**

- **Evaluation**: Each variant runs independently and produces its own score
- **Reporting**: Scores are aggregated by category for the leaderboard (e.g., all `heap_buffer_overflow` variants → one score)

Note: Some problems have only one level (e.g., `flash_attn`), which functions as both category and variant.
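To make the reporting rule concrete, here is a minimal sketch of category aggregation. The mean as the aggregation function and the example scores are assumptions for illustration; the official aggregation may differ:

```python
# Sketch: roll per-variant scores up to per-category scores.
# Assumption: the leaderboard averages variants within a category.
from collections import defaultdict
from statistics import mean

variant_scores = {  # hypothetical local results, keyed by Problem ID
    "poc_generation/heap_buffer_overflow/arvo_21000": 1.0,
    "poc_generation/heap_buffer_overflow/arvo_10400": 0.0,
    "gemm_optimization/squares": 0.85,
    "flash_attn": 0.7,  # single-level problem: category == variant
}

def category_of(problem_id: str) -> str:
    parts = problem_id.split("/")
    # Depth-3 IDs are problem/category/variant; shallower IDs are
    # already at category granularity (or are single-level problems).
    return "/".join(parts[:2]) if len(parts) >= 3 else problem_id

by_category = defaultdict(list)
for pid, score in variant_scores.items():
    by_category[category_of(pid)].append(score)

for cat, scores in sorted(by_category.items()):
    print(f"{cat}: {mean(scores):.2f} over {len(scores)} variant(s)")
```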
### Problem ID Format
Each variant has a unique Problem ID based on its path under `research/`.
The full list of evaluatable variants is in `research/problems.txt` (109 variants total, aggregated into ~50 categories for reporting).
| Type | Example Path | Problem ID |
|---|---|---|
| Single problem | `research/flash_attn` | `flash_attn` |
| Problem with variants | `research/gemm_optimization/squares` | `gemm_optimization/squares` |
| Nested variants | `research/poc_generation/heap_buffer_overflow/arvo_21000` | `poc_generation/heap_buffer_overflow/arvo_21000` |
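Since a Problem ID is simply the variant's path with the `research/` prefix stripped, the mapping is mechanical. A minimal sketch (the helper name is ours, not part of the tooling):

```python
# Derive a Problem ID from a path under research/ by stripping the prefix.
from pathlib import PurePosixPath

def problem_id(path: str) -> str:
    parts = PurePosixPath(path).parts
    assert parts and parts[0] == "research", "variant paths live under research/"
    return "/".join(parts[1:])

assert problem_id("research/flash_attn") == "flash_attn"
assert problem_id(
    "research/poc_generation/heap_buffer_overflow/arvo_21000"
) == "poc_generation/heap_buffer_overflow/arvo_21000"
```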
### What to Submit
- **Solution files**: `solution.py` for each problem, placed in a directory matching the Problem ID
- **Model/Agent info**: Name and version of the model or agent framework used
- **Local evaluation results** (optional but recommended): Score from running the evaluator locally
### Submission Format
Your submission zip should mirror the Problem ID directory structure:
submission.zip
βββ flash_attn/
β βββ solution.py
βββ gemm_optimization/
β βββ squares/
β βββ solution.py
βββ cant_be_late/
β βββ high_availability_loose_deadline/
β βββ solution.py
βββ poc_generation/
β βββ heap_buffer_overflow/
β βββ arvo_21000/
β βββ solution.py
βββ metadata.json
**Important**: The directory structure must exactly match the Problem ID. For example:

- `flash_attn/solution.py`
- `gemm_optimization/squares/solution.py`
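One way to guarantee the structure matches is to build the archive programmatically. The sketch below is a hypothetical packaging helper; it assumes your solutions already sit in Problem-ID-shaped directories next to `metadata.json`:

```python
# Hypothetical packaging helper: archive solution.py files so the zip
# layout mirrors the Problem IDs, plus metadata.json at the root.
import zipfile
from pathlib import Path

def build_submission(problem_ids: list[str], out: str = "submission.zip") -> None:
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for pid in problem_ids:
            src = Path(pid) / "solution.py"
            if not src.exists():
                raise FileNotFoundError(f"missing {src}")
            zf.write(src, arcname=f"{pid}/solution.py")  # archive path == Problem ID
        zf.write("metadata.json", arcname="metadata.json")

build_submission([
    "flash_attn",
    "gemm_optimization/squares",
    "cant_be_late/high_availability_loose_deadline",
])
```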
Each `solution.py` must implement:

```python
class Solution:
    def __init__(self):
        pass

    def solve(self, *args):
        # Returns: solution output (format varies by problem)
        pass
```
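For illustration, a filled-in `solution.py` might look like the sketch below. The task (sorting a list of numbers) and the `solve` signature are invented for this example; each variant's README defines the real inputs and output format.

```python
# Hypothetical example for an imaginary variant whose solve() receives a
# list of numbers and must return them in ascending order. Real variants
# define their own arguments and output format in their README.
class Solution:
    def __init__(self):
        # One-time setup (loading weights, precomputing tables) goes here.
        pass

    def solve(self, numbers):
        # For this invented task: return the inputs in ascending order.
        return sorted(numbers)
```

Since the interface separates construction from solving, it is reasonable to assume the evaluator instantiates `Solution` once and then calls `solve`, so any expensive setup belongs in `__init__`.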
`metadata.json`:
```json
{
  "model": "gpt-4o",
  "agent_framework": "custom",
  "generation_method": "one-shot",
  "date": "2025-01-15",
  "problems_solved": [
    "flash_attn",
    "gemm_optimization/squares",
    "cant_be_late/high_availability_loose_deadline"
  ],
  "notes": "Optional additional notes"
}
```
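Before sending the archive, a quick cross-check between `metadata.json` and the zip contents can catch typos in Problem IDs. This is a hypothetical helper, not part of the official tooling; it relies only on the layout documented above:

```python
# Check that every Problem ID in "problems_solved" has a matching
# solution.py at the expected path inside the archive.
import json
import zipfile

def check_archive(zip_path: str = "submission.zip") -> None:
    with zipfile.ZipFile(zip_path) as zf:
        names = set(zf.namelist())
        meta = json.loads(zf.read("metadata.json"))
    for pid in meta.get("problems_solved", []):
        expected = f"{pid}/solution.py"
        print(("OK      " if expected in names else "MISSING ") + expected)

check_archive()
```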
## Running Local Evaluation
Before submitting, you can verify your solutions locally:
```bash
# Evaluate a single solution
frontier-eval flash_attn solution.py

# Batch evaluation with progress tracking
frontier-eval batch --pairs-file pairs.txt --results-dir results/

# Batch evaluation with SkyPilot (cloud)
frontier-eval batch --pairs-file pairs.txt --skypilot --max-concurrent 4
```
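If you have not set up a pairs file yet, you can also loop the single-solution form over an unpacked submission tree. This sketch assumes only the `frontier-eval <problem_id> <solution.py>` invocation shown above; the built-in batch mode is likely preferable once `pairs.txt` exists:

```python
# Loop the documented single-solution invocation over every solution.py
# found under a submission directory (assumed to mirror Problem IDs).
import subprocess
from pathlib import Path

def evaluate_all(root: str = "submission") -> None:
    root_path = Path(root).resolve()
    for sol in sorted(root_path.rglob("solution.py")):
        pid = sol.parent.relative_to(root_path).as_posix()
        print(f"== {pid} ==")
        subprocess.run(["frontier-eval", pid, str(sol)], check=False)

evaluate_all()
```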
## How to Submit
Send your submission to:
- Email: [email protected] or [email protected]
Please include:

- A zip/tar archive of your solutions following the format above
- `metadata.json` with model and method information
- (Optional) Local evaluation results if you ran them
## Leaderboard
Accepted submissions will be evaluated on our full test suite and results will be published on the Frontier-CS Leaderboard.