Idea First, Code Later: CP Benchmark
Benchmark dataset for the paper: "Idea First, Code Later: Disentangling Problem Solving from Code Generation in Evaluating LLMs for Competitive Programming"
A curated benchmark of 83 competitive programming problems designed for evaluating LLMs on algorithmic problem-solving separately from code generation.
Motivation
We curate problems from seven contests that are not hosted on major public CP platforms (e.g., Codeforces, AtCoder). While some problem statements may be publicly accessible (e.g., contest PDFs or course materials), we expect contamination risk to be lower than for widely scraped judge platforms.
The problems come from regional ICPC contests and CS3233 (Competitive Programming course) examinations at the National University of Singapore, spanning 2017–2025.
What's Included
Each problem package includes:
- Original problem statement in Markdown format
- Gold editorial written by the problem setter or tester (solution analysis)
- Full official test suite (sample + hidden test cases)
We rely on complete judge test suites to avoid false positives in evaluation.
Difficulty Grouping
To account for variation in problem difficulty, we group problems using solve rates (the proportion of teams that solved the problem) from official scoreboards. Within each contest, problems are ranked by solve rate and partitioned into three contest-relative tertiles:
- T1 (easiest third)
- T2 (middle third)
- T3 (hardest third)
Pooling across contests yields approximately balanced difficulty groups while preserving each contest's intrinsic difficulty profile.
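For reference, the grouping can be reproduced roughly as in the sketch below. Field names follow the schema documented under Dataset Structure; the exact tie-breaking behind the released `Difficulty_Tertile` labels may differ.

```python
# Illustrative sketch of the contest-relative tertile assignment described above.
from collections import defaultdict

def assign_tertiles(problems):
    """problems: list of dicts with 'contest_name' and 'solve_percentage' keys."""
    by_contest = defaultdict(list)
    for p in problems:
        by_contest[p["contest_name"]].append(p)

    for contest_problems in by_contest.values():
        # Rank problems within the contest from highest to lowest solve rate.
        ranked = sorted(contest_problems, key=lambda p: p["solve_percentage"], reverse=True)
        n = len(ranked)
        for i, p in enumerate(ranked):
            # Partition ranks into three contest-relative tertiles:
            # 1 = T1 (easiest third), 2 = T2 (middle third), 3 = T3 (hardest third).
            p["Difficulty_Tertile"] = 1 + (3 * i) // n
    return problems
```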
Dataset Composition
| Contest | Year | Source | #Teams | #Problems |
|---|---|---|---|---|
| CS3233 Midterm Contest | 2023 | NUS | 25 | 11 |
| CS3233 Midterm Contest | 2024 | NUS | 15 | 12 |
| CS3233 Midterm Contest | 2025 | NUS | 16 | 11 |
| ICPC Asia Pacific Championship | 2024 | GitHub | 65 | 13 |
| ICPC Asia Jakarta Regional | 2017 | GitHub | 80 | 12 |
| ICPC Asia Jakarta Regional | 2018 | GitHub | 75 | 12 |
| ICPC Asia Jakarta Regional | 2019 | GitHub | 80 | 12 |
| Total | -- | -- | -- | 83 |
Dataset Description
This dataset contains competitive programming problems with:
- Problem statements in Markdown format
- Test cases (input/output pairs)
- Solution analysis (where available)
- Contest metadata (difficulty, solve rates, etc.)
Sources
The problems are sourced from:
- ICPC Asia Pacific Championship 2024
- ICPC Jakarta Regionals (2017, 2018, 2019)
- CS3233 NUS Midterm Contests (2023, 2024, 2025)
Copyright and Permissions
The CS3233 portion of the dataset consists of course assessment materials from the National University of Singapore. We obtained copyright permission from the course instructor to include and redistribute these materials (problem statements, gold editorials) as part of our dataset release.
The CS3233 gold editorials are private course materials that were not publicly released prior to this work.
Dataset Structure
Each example contains:
Identifiers
| Field | Type | Description |
|---|---|---|
| `problem_id` | string | Unique identifier (`contest_slug`) |
| `problem_code` | string | Problem letter (A, B, C, ...) |
| `problem_slug` | string | URL-friendly problem name |
| `problem_title` | string | Full problem title |
Contest Information
| Field | Type | Description |
|---|---|---|
| `contest_name` | string | Contest identifier |
| `contest_full_name` | string | Full contest name |
| `year` | string | Competition year |
| `source` | string | Source URL/repository |
| `total_teams` | int | Total teams in contest |
| `total_problems` | int | Total problems in contest |
Problem Details
| Field | Type | Description |
|---|---|---|
| `statement` | string | Problem statement in Markdown |
| `analysis` | string | Editorial/solution analysis |
| `time_limit` | string | Time limit for solutions |
| `memory_limit` | string | Memory limit |
| `author` | string | Problem author |
| `analysis_author` | string | Editorial author |
Test Cases
| Field | Type | Description |
|---|---|---|
| `sample_test_cases_input` | list[string] | Sample inputs (public) |
| `sample_test_cases_output` | list[string] | Sample outputs (public) |
| `hidden_test_cases_input` | list[string] | Hidden inputs (judging) |
| `hidden_test_cases_output` | list[string] | Hidden outputs (judging) |
| `has_special_judge` | bool | True if problem accepts multiple correct outputs |
| `special_judge_code` | string | C++ scorer code (testlib) for validating outputs |
| `special_judge_format` | string | "standard" or "jakarta2017" |
| `uses_kattis` | bool | True for CS3233 (submit via Kattis) |
| `kattis_problem_id` | string | Kattis problem ID for submission |
| `contest_standings_csv` | string | Full contest scoreboard in CSV format |
| `scoreboard_url` | string | Original URL of the contest scoreboard |
Special Judge (Scorer)
Some problems accept multiple correct outputs for a given input (e.g., "output any valid permutation"). These problems have `has_special_judge=True` and include a C++ scorer in `special_judge_code` that validates whether a contestant's output is correct.

`standard` format (most contests):

```bash
./scorer <input_file> <judge_output_file> <contestant_output_file>
# Prints "AC" or "WA" to stdout
```

`jakarta2017` format (ICPC Jakarta 2017 only):

```bash
./scorer <input_file> <unused> <judge_output_file> < contestant_output
# Reads contestant output from stdin
# Prints "WA" if wrong, nothing if correct
```
CS3233 Contests (Kattis Submission)
For CS3233 problems, hidden test cases are not publicly available (hosted on Kattis). Sample test cases are extracted from problem statements.
To submit solutions, set up the Kattis CLI following the instructions in its repository, then run:

```bash
kattis <solution_file> -p <problem_id> -f
```

The `kattis_problem_id` field contains the Kattis problem ID for submission.
Notes on Empty Fields
Some fields may be empty depending on the contest source:
| Field | When Empty | Reason |
|---|---|---|
| `hidden_test_cases_*` | CS3233 contests | Test cases hosted on Kattis, not public |
| `author`, `analysis_author` | CS3233 contests | Author info not available |
| `special_judge_*` | Most problems | Only ~40% of problems need special judges |
| `kattis_problem_id` | ICPC contests | Only CS3233 uses Kattis |
| `memory_limit` | Some ICPC contests | Not always specified |
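Depending on the evaluation setup, it can be useful to split the benchmark along these lines. A sketch using the schema above (split names are illustrative):

```python
from datasets import load_dataset

dataset = load_dataset("samahadhoud/idea-first-code-later-cp")["train"]

# Problems with full offline test suites (ICPC contests).
offline = dataset.filter(lambda p: len(p["hidden_test_cases_input"]) > 0)

# Problems judged through Kattis (CS3233 contests).
kattis = dataset.filter(lambda p: p["uses_kattis"])

print(len(offline), "offline-judged problems;", len(kattis), "Kattis-judged problems")
```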
Contest Statistics
| Field | Type | Description |
|---|---|---|
| `teams_solved` | int | Number of teams that solved the problem |
| `teams_wrong_answer` | int | Teams with wrong answers |
| `teams_unattempted` | int | Teams that didn't attempt the problem |
| `teams_tried` | int | Teams that attempted the problem |
| `solve_percentage` | float | % of teams that solved |
| `first_solve_time` | int | Time to first solve (minutes) |
| `average_solve_time` | float | Average solve time (minutes) |
| `total_attempts` | int | Total submission attempts |
| `average_attempts` | float | Average attempts per team |
| `Difficulty_Tertile` | int | Contest-relative difficulty tertile (1 = easiest third, 3 = hardest third) |
Usage
```python
from datasets import load_dataset

dataset = load_dataset("samahadhoud/idea-first-code-later-cp")

# Access a problem
problem = dataset["train"][0]
print(problem["problem_title"])
print(problem["statement"])

# Get test cases
for inp, out in zip(problem["sample_test_cases_input"], problem["sample_test_cases_output"]):
    print(f"Sample Input: {inp[:100]}...")
    print(f"Sample Output: {out[:100]}...")

# Hidden test cases for evaluation
print(f"Hidden test cases: {len(problem['hidden_test_cases_input'])}")
```
Test Runner
Use the provided test runner to evaluate solutions:
```python
from hf_test_runner import TestRunner

runner = TestRunner()

# Run a C++ solution
results = runner.run_solution(
    problem_id="icpc-jakarta-2017_icpc",
    solution_code="""
#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;
    cout << n * 2 << endl;
    return 0;
}
""",
    language="cpp"
)

print(results["status"])  # "PASSED" or error type
```
The test runner automatically handles:
- Sample and hidden test cases
- Special judges (problems with multiple valid outputs)
- Kattis submission (for CS3233 problems via kattis-cli)
- Memory and time limits
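For batch evaluation over the whole benchmark, the same API can be wrapped in a loop, for example as sketched below; `generate_solution` is a placeholder for whatever call produces the candidate code from the model under evaluation.

```python
from datasets import load_dataset
from hf_test_runner import TestRunner

dataset = load_dataset("samahadhoud/idea-first-code-later-cp")["train"]
runner = TestRunner()

passed = 0
for problem in dataset:
    # generate_solution is a placeholder for the LLM under evaluation.
    code = generate_solution(problem["statement"])
    results = runner.run_solution(
        problem_id=problem["problem_id"],
        solution_code=code,
        language="cpp",
    )
    if results["status"] == "PASSED":
        passed += 1

print(f"Solved {passed}/{len(dataset)} problems")
```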
Citation
Paper: "Idea First, Code Later: Disentangling Problem Solving from Code Generation in Evaluating LLMs for Competitive Programming"
Citation details will be added once the paper is published on arXiv.
License
MIT License