---
dataset_info:
  features:
  - name: rank
    dtype: int64
  - name: model
    dtype: string
  - name: accuracy
    dtype: float64
  - name: parameters
    dtype: float64
  - name: extra_training_data
    dtype: string
  - name: paper
    dtype: string
  - name: code
    dtype: string
  - name: result
    dtype: string
  - name: year
    dtype: int64
  - name: tags
    sequence: string
  splits:
  - name: train
    num_bytes: 19092
    num_examples: 112
  download_size: 9472
  dataset_size: 19092
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- math
- llm
- benchmarks
- saturation
- evaluation
- leaderboard
- parameter-scaling
---
# LLM Leaderboard Data for Hendrycks MATH Dataset (2022–2024)
This dataset aggregates yearly leaderboard results (2022–2024) for large language models (LLMs) on the Hendrycks MATH benchmark. It was compiled to study performance evolution, benchmark saturation, parameter-scaling trends, and evaluation metrics for foundation models solving complex math word problems.
Original source data: [Math Word Problem Solving on MATH (Papers with Code)](https://paperswithcode.com/sota/math-word-problem-solving-on-math)
## About Hendrycks' MATH Benchmark
Introduced by Hendrycks et al., the [MATH dataset](https://arxiv.org/abs/2103.03874) includes 12,500 challenging competition math problems, each accompanied by detailed solutions. These problems provide an ideal setting for evaluating and training AI models in advanced mathematical reasoning.
## Dataset Highlights
- **Performance Evolution**: Significant increase in accuracy over three years (benchmark saturation analysis).
- **Parameter Scaling**: Insight into how model size (parameters) correlates with accuracy improvements.
- **Benchmark Saturation**: Clear evidence of performance brackets becoming saturated, indicating the need for new and more challenging mathematical reasoning benchmarks.
## Key Insights from the Dataset (2022–2024)
- **Rapid Accuracy Gains**: Top model accuracy jumped dramatically—from approximately 65% in 2022 to nearly 90% in 2024.
- **Performance Bracket Saturation**: The number of models scoring above 80% accuracy grew sharply, illustrating benchmark saturation and pointing to a ceiling in what the current dataset can still discriminate.
- **Efficiency in Parameter Scaling**: Smaller models now reach accuracy levels that previously required much larger parameter counts, highlighting efficiency gains alongside raw accuracy improvements.
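As a minimal sketch of how these trends can be checked against the raw table (the column names `year`, `accuracy`, and `model` come from the schema above; the 0–100 accuracy scale is an assumption based on how Papers with Code reports scores):

```python
from datasets import load_dataset

# Load the single "train" split and move to pandas
df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()

# Best reported accuracy per year (performance evolution);
# assumes accuracy is on a 0-100 scale, as on Papers with Code
print(df.groupby("year")["accuracy"].max())

# Number of models clearing the 80% bracket per year (saturation signal)
print(df[df["accuracy"] > 80].groupby("year")["model"].count())
```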
## Dataset Structure
- **Number of Examples**: 112
- **Data Format**: CSV (converted from Papers with Code)
- **Features include**:
- Model ranking and year-specific accuracy
- Parameter counts and extra training data
- Direct links to relevant academic papers and model code
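To double-check this schema without downloading the data files, the `datasets` library can read the card metadata alone (a small sketch using `load_dataset_builder`):

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("nlile/math_benchmark_test_saturation")

# Features and split sizes as declared in the dataset card
print(builder.info.features)
print(builder.info.splits["train"].num_examples)  # expected: 112
```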
## Practical Usage
Here's how to quickly load and interact with the dataset:
```python
from datasets import load_dataset

# Load the leaderboard (single "train" split, 112 rows)
data = load_dataset("nlile/math_benchmark_test_saturation")

# Convert to pandas for analysis
df = data["train"].to_pandas()
df.head()
```
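Building on this, here is a hedged sketch of the parameter-efficiency query mentioned under Key Insights. It assumes `parameters` holds numeric parameter counts and may be missing (NaN) where a paper did not report them:

```python
from datasets import load_dataset

df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()

# Smallest model per year that clears 80% accuracy (parameter efficiency)
capable = df[(df["accuracy"] > 80) & df["parameters"].notna()]
smallest = capable.loc[capable.groupby("year")["parameters"].idxmin()]
print(smallest[["year", "model", "parameters", "accuracy"]])
```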
## Visualizations
### Model Accuracy Improvement (2022–2024)

*Rapid growth in top accuracy indicating approaching benchmark saturation.*
### Accuracy Distribution Among Top 20%

*Sharp increase in the number of high-performing models over three years.*
### Parameter Scaling and Model Accuracy

*Visualizing consistency in accuracy improvements and the diminishing returns from scaling model parameters.*
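The plots above ship with the repository; a rough matplotlib sketch for reproducing the first one (best accuracy per year) might look like the following. The styling is illustrative, not the original plotting script:

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

df = load_dataset("nlile/math_benchmark_test_saturation")["train"].to_pandas()
yearly_best = df.groupby("year")["accuracy"].max()

# Line plot of the top reported accuracy in each year
plt.figure(figsize=(6, 4))
plt.plot(yearly_best.index, yearly_best.values, marker="o")
plt.xticks(yearly_best.index)
plt.xlabel("Year")
plt.ylabel("Best reported accuracy (%)")
plt.title("Top MATH accuracy by year (2022-2024)")
plt.tight_layout()
plt.show()
```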
## Citation
Please cite the original Hendrycks MATH dataset paper and this dataset aggregation/analysis:
**MATH Dataset:**
```bibtex
@article{hendrycks2021math,
  title={Measuring Mathematical Problem Solving With the MATH Dataset},
  author={Hendrycks, Dan and Burns, Collin and Basart, Steven and Zou, Andy and Mazeika, Mantas and Song, Dawn and Steinhardt, Jacob},
  journal={arXiv preprint arXiv:2103.03874},
  year={2021}
}
```
**This Dataset:**
```bibtex
@misc{nlile2024mathbenchmark,
  author = {nlile},
  title = {LLM Leaderboard Data for Hendrycks MATH Dataset (2022-2024): Benchmark Saturation and Performance Trends},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/nlile/math_benchmark_test_saturation/}
}
```