---
language:
- ja
tags:
- japanese
- text-difficulty
- language-learning
- linguistics
- aozora-bunko
- educational
- curriculum-design
task_categories:
- text-classification
size_categories:
- 1K<n<10K
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: overall_difficulty
    dtype: float64
  - name: difficulty_level
    dtype: string
  - name: kanji_difficulty
    dtype: float64
  - name: lexical_difficulty
    dtype: float64
  - name: grammar_complexity
    dtype: float64
  - name: sentence_complexity
    dtype: float64
  - name: text_length
    dtype: int64
  splits:
  - name: train
    num_bytes: 9611892
    num_examples: 1800
  download_size: 5523038
  dataset_size: 9611892
---
# Aozora Text Difficulty Dataset
This dataset contains Japanese literary texts from the Aozora Bunko digital library, enhanced with jReadability-based difficulty analysis for Japanese language learning and curriculum development.
## Dataset Overview
- **Source:** Aozora Bunko (青空文庫), Japan's premier digital library of public-domain literature
- **Enhancement:** jReadability-based difficulty scoring using research-backed Japanese readability models
- **Primary Methodology:** jReadability, a Python implementation of Lee & Hasebe's Japanese readability evaluation system
- **Use Cases:** Japanese language curriculum design, reading level assessment, adaptive learning systems, difficulty-controlled text generation
- **License:** Original Aozora Bunko texts are public domain; analysis code and scores are provided under open-source terms
## 📊 Dataset Structure

- **Total Records:** 5,000 Japanese texts
- **Total Columns:** 21
- **Column Categories:** Original Data (3) + Core Difficulty Scores (6) + Detailed Metrics (11) + Legacy Score (1)
## 🗂️ Column Descriptions

### Original Data Columns (3)

| Column | Type | Description |
|---|---|---|
| `text` | string | Full Japanese text content from Aozora Bunko (50-532,561 characters) |
| `footnote` | string | Publishing information and bibliographic details in Japanese |
| `meta` | string | JSON metadata with work ID, title, author, and readings |
### Core Difficulty Scores (6) - Main Features for Learning

| Column | Type | Range | Description |
|---|---|---|---|
| `overall_difficulty` | float64 | 0.0-1.0 | Primary difficulty score based on the jReadability model |
| `kanji_difficulty` | float64 | 0.0-1.0 | Complexity based on kanji grade levels and density |
| `lexical_difficulty` | float64 | 0.0-1.0 | Vocabulary complexity using authentic frequency data |
| `grammar_complexity` | float64 | 0.0-1.0 | Grammatical structure complexity |
| `sentence_complexity` | float64 | 0.0-1.0 | Sentence length and structure variation |
| `difficulty_level` | string | categorical | Curriculum classification: Beginner/Elementary/Intermediate/Advanced/Expert |
### Detailed Linguistic Metrics (11)

| Column | Type | Description |
|---|---|---|
| `text_length` | int64 | Total character count including punctuation |
| `kanji_density` | float64 | Proportion of Chinese characters (0.0-1.0) |
| `avg_sentence_length` | float64 | Average characters per sentence |
| `joyo_grade_avg` | float64 | Average educational grade of kanji used (1-9 scale) |
| `lexical_diversity` | float64 | Unique words ÷ total words (vocabulary richness) |
| `non_joyo_percentage` | float64 | Proportion of advanced kanji beyond standard education |
| `avg_word_length` | float64 | Average characters per word |
| `katakana_percentage` | float64 | Proportion of katakana (foreign/technical words) |
| `word_frequency_score` | float64 | Vocabulary rarity score (0 = common, 1 = rare) |
| `sentence_length_variance` | float64 | Statistical variance in sentence lengths |
| `grammar_complexity_score` | float64 | Grammatical pattern complexity score |
### Legacy Compatibility (1)

| Column | Type | Range | Description |
|---|---|---|---|
| `difficulty_score` | float64 | 0.0-10.0 | Traditional 10-point difficulty scale, kept for compatibility |
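The card does not state how `difficulty_score` is derived from the normalized scale. Assuming a plain linear rescaling of `overall_difficulty` (our assumption, not documented behavior), a conversion sketch:

```python
def to_legacy_score(overall_difficulty: float) -> float:
    """Rescale a 0-1 difficulty to the legacy 10-point scale.

    Assumes a simple linear mapping; the dataset card does not
    specify the actual derivation of `difficulty_score`.
    """
    return round(overall_difficulty * 10.0, 2)

# e.g. the dataset's mean overall difficulty of 0.547
legacy = to_legacy_score(0.547)
```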
## 🎯 Difficulty Calculation Methodology

### Primary Difficulty Score: jReadability Model
The overall_difficulty score is calculated using the jReadability Python library, which implements the research-backed Japanese readability model developed by Jae-ho Lee and Yoichiro Hasebe.
**jReadability Model Formula:**

```text
readability = {mean words per sentence} × -0.056
            + {percentage of kango}     × -0.126   # Chinese-origin words
            + {percentage of wago}      × -0.042   # native Japanese words
            + {percentage of verbs}     × -0.145
            + {percentage of particles} × -0.044
            + 11.724
```
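The linear model translates directly to Python. The feature values in the example call below are invented purely for illustration; in practice they come from morphological analysis of the text:

```python
def jreadability_formula(mean_words_per_sentence: float,
                         pct_kango: float,
                         pct_wago: float,
                         pct_verbs: float,
                         pct_particles: float) -> float:
    """Lee & Hasebe linear readability model (higher score = easier text)."""
    return (mean_words_per_sentence * -0.056
            + pct_kango * -0.126
            + pct_wago * -0.042
            + pct_verbs * -0.145
            + pct_particles * -0.044
            + 11.724)

# Invented feature values for a short, simple text
score = jreadability_formula(8.0, 10.0, 55.0, 12.0, 30.0)
```

A short text with mostly native vocabulary lands near the easy end of the 0.5-6.5 output range.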
**Score Normalization:**

- jReadability output: 0.5-6.5 (higher = easier)
- Our normalization: `(6.5 - jreadability_score) / 6.0` → 0-1 scale (higher = harder)
### Curriculum Level Classification
- Beginner (0.00-0.19): Basic modern Japanese
- Elementary (0.20-0.34): Simple literary texts
- Intermediate (0.35-0.54): Standard literary works
- Advanced (0.55-0.74): Complex literary language
- Expert (0.75-1.00): Classical or highly sophisticated texts
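The thresholds above map directly onto the 0-1 `overall_difficulty` score; a minimal classifier sketch (the function name is ours, not part of the dataset tooling):

```python
def classify_difficulty(overall_difficulty: float) -> str:
    """Map a normalized 0-1 difficulty score to a curriculum level."""
    if overall_difficulty < 0.20:
        return "Beginner"
    if overall_difficulty < 0.35:
        return "Elementary"
    if overall_difficulty < 0.55:
        return "Intermediate"
    if overall_difficulty < 0.75:
        return "Advanced"
    return "Expert"
```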
### Supporting Linguistic Metrics
- Kanji Analysis: 3,003 kanji with official educational grades from kanjiapi.dev
- Vocabulary Analysis: wordfreq library with real corpus data
- Grammar Analysis: Pattern-based complexity scoring using formal Japanese constructions
- Sentence Analysis: Length variation and structural complexity measures
### Research Foundation

- Lee, J. & Hasebe, Y. *Introducing a readability evaluation system for Japanese language education*
- Lee, J. & Hasebe, Y. *Readability measurement of Japanese texts based on levelled corpora*
- The model is designed specifically for non-native Japanese learners (not native-speaker grade levels)
## 📈 Dataset Statistics

**jReadability-Based Analysis Results (5,000 texts):**

- **Overall Difficulty:** mean 0.547 (0.0-1.0 scale)
- **Difficulty Distribution:**
  - Beginner: 97 texts (1.9%)
  - Elementary: 552 texts (11.0%)
  - Intermediate: 2,047 texts (40.9%)
  - Advanced: 1,690 texts (33.8%)
  - Expert: 614 texts (12.3%)
**Text Characteristics:**

- **Text Length:** 50-532,561 characters (mean: ~11,285)
- **Kanji Density:** 29.4% average
- **Average Joyo Grade:** 3.71 (elementary-intermediate level)
- **Lexical Diversity:** 0.270 (moderate vocabulary variation)
## 🔬 jReadability Advantages

### Why jReadability?
- Research-Backed: Based on empirical studies of Japanese learner corpora
- Learner-Focused: Designed specifically for non-native Japanese speakers
- Linguistic Sophistication: Considers Japanese-specific features (kango/wago ratios, particle usage)
- Reproducible: Standardized implementation with consistent results
- Validated: Published research with proven correlation to learner difficulty perception
### Improvements Over Composite Scoring
- Holistic Assessment: Considers text as a unified linguistic entity rather than separate features
- Native Speaker Bias Reduction: Avoids assumptions based on native speaker intuitions
- Empirical Foundation: Based on actual learner performance data
- Standardized Scale: Consistent 6-level difficulty assessment widely used in Japanese education
## 💻 Usage Examples

### Loading the Dataset

```python
from datasets import load_dataset

# Load the complete dataset
dataset = load_dataset("ronantakizawa/aozora-text-difficulty")
train_data = dataset['train']
```
### Filtering by Difficulty Level

```python
import pandas as pd

df = train_data.to_pandas()

# Get beginner-level texts
beginner_texts = df[df['difficulty_level'] == 'Beginner']

# Get texts within a specific difficulty range
intermediate = df[
    (df['overall_difficulty'] >= 0.35) &
    (df['overall_difficulty'] < 0.55)
]

# Filter by kanji difficulty for kanji learning
easy_kanji = df[df['kanji_difficulty'] < 0.3]
```
### Analyzing Text Metrics

```python
# Examine vocabulary complexity
rare_vocab = df[df['word_frequency_score'] > 0.7]

# Find texts with short average sentence length
short_sentences = df[df['avg_sentence_length'] < 30]

# Analyze kanji grade distribution
elementary_kanji = df[df['joyo_grade_avg'] <= 4.0]
```
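These columns also support building graded reading progressions. A sketch using a tiny synthetic DataFrame in place of the real dataset (all values below are invented for illustration):

```python
import pandas as pd

# Tiny synthetic stand-in for the real dataset (invented values)
df = pd.DataFrame({
    "difficulty_level": ["Intermediate", "Beginner", "Intermediate", "Advanced"],
    "overall_difficulty": [0.45, 0.10, 0.38, 0.60],
})

# Order texts from easiest to hardest for a study progression
progression = df.sort_values("overall_difficulty").reset_index(drop=True)

# Pick the easiest text at each curriculum level
easiest_per_level = progression.groupby("difficulty_level", sort=False).head(1)
```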
## 🎓 Applications
- Language Learning: Personalized reading recommendations based on learner level
- Curriculum Design: Structured progression of reading materials using research-backed difficulty assessment
- Assessment Tools: Automatic text difficulty evaluation for placement using jReadability standards
- Research: Japanese language complexity and readability analysis with validated metrics
- EdTech: Adaptive learning system development and content curation
- Text Generation: Difficulty-controlled generation of Japanese educational content
- Reading Comprehension: Graded text selection for language learning platforms
## 🛠️ Technical Implementation

### jReadability Integration

```python
from jreadability import compute_readability

# Calculate the jReadability score (higher = easier)
jreadability_score = compute_readability(japanese_text)

# Normalize to a 0-1 difficulty scale (0 = easiest, 1 = hardest)
overall_difficulty = max(0.0, min(1.0, (6.5 - jreadability_score) / 6.0))
```
### Batch Processing

For optimal performance when processing large datasets, initialize the tagger once and reuse it:

```python
from fugashi import Tagger
from jreadability import compute_readability

# Initialize the tagger once for batch processing
tagger = Tagger()

# Process multiple texts efficiently
for text in texts:
    score = compute_readability(text, tagger)  # reuse the tagger
```
### Dependencies

- `jreadability`: research-backed Japanese readability calculation
- `fugashi`: fast Japanese morphological analysis (MeCab wrapper)
- `unidic-lite`: Japanese linguistic resources
- `wordfreq`: authentic Japanese word frequency data
## 🔗 Related Datasets
- Japanese Character Difficulty Dataset - Kanji grades used in this analysis
- jReadability GitHub - Original jReadability implementation
## 📚 Acknowledgments
- Aozora Bunko for providing the foundational literary corpus
- kanjiapi.dev for comprehensive kanji educational data
- wordfreq project for authentic Japanese frequency data
## 📄 Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{aozora_text_difficulty_2024,
  title={Aozora Text Difficulty Dataset},
  author={Claude Code Analysis},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/ronantakizawa/aozora-text-difficulty}
}
```