---
language:
  - en
license: apache-2.0
tags:
  - wikipedia
  - finewiki
  - sampled
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: wikiname
      dtype: string
    - name: page_id
      dtype: int64
    - name: title
      dtype: string
    - name: url
      dtype: string
    - name: date_modified
      dtype: string
    - name: in_language
      dtype: string
    - name: wikidata_id
      dtype: string
    - name: bytes_html
      dtype: int64
    - name: wikitext
      dtype: string
    - name: version
      dtype: int64
    - name: infoboxes
      dtype: string
    - name: has_math
      dtype: bool
  splits:
    - name: train
      num_examples: 52721
---

# FineWiki Sampled Dataset (1,000,000,332 tokens)

This is a sampled subset of HuggingFaceFW/finewiki containing approximately 1 billion (1,000,000,332) GPT-2 tokens.

## Dataset Details

### Source

- **Original Dataset**: HuggingFaceFW/finewiki (English subset, train split)
- **Sampling Method**: Reservoir sampling (unbiased random sampling)
- **Target Token Count**: 1,000,000,332 tokens
- **Tokenizer**: GPT-2 (vocabulary size 50,257; see the counting sketch below)
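
Token counts were measured with the GPT-2 tokenizer. The exact counting script is not included here; a minimal sketch, assuming the `GPT2TokenizerFast` from `transformers`:

```python
from transformers import GPT2TokenizerFast

# GPT-2 BPE tokenizer with a 50,257-token vocabulary
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def count_tokens(text: str) -> int:
    # Plain BPE length, no special tokens added; very long articles may
    # trigger a sequence-length warning, which is harmless for counting.
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])

print(count_tokens("Reservoir sampling keeps a uniform random subset."))
```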

### Sampling Statistics

- **Documents Sampled**: 52,721
- **Average Tokens/Doc**: 18,971.0
- **Random Seed**: 42

### Sampling Method

This dataset was created using reservoir sampling, which ensures:

- ✅ Unbiased random sample from the full dataset
- ✅ Every document has an equal probability of being selected
- ✅ No positional bias (early and late documents are equally represented)
- ✅ Streaming-based (no need to download the full dataset)

The sampling algorithm (a sketch in code follows the list):

  1. Streams through HuggingFaceFW/finewiki without downloading
  2. Uses GPT-2 tokenizer to count tokens per document
  3. Maintains a reservoir of documents using standard reservoir sampling
  4. Stops when target token count is reached
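
A minimal sketch of these steps is given below. It is not the exact build script: the `"en"` config name for the English subset and the interaction with the token budget are assumptions, and the snippet simply runs standard reservoir sampling (algorithm R) over a fixed number of documents while tracking per-document GPT-2 token counts.

```python
import random

from datasets import load_dataset
from transformers import GPT2TokenizerFast

random.seed(42)  # seed reported on this card
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

RESERVOIR_SIZE = 52_721  # number of documents kept in this release

# Stream the source dataset instead of downloading it in full; the "en"
# config name is an assumption -- adjust it to the actual English subset.
stream = load_dataset("HuggingFaceFW/finewiki", "en", split="train", streaming=True)

reservoir = []  # list of (document, token_count) pairs
for i, doc in enumerate(stream):
    n_tokens = len(tokenizer(doc["text"], add_special_tokens=False)["input_ids"])
    if i < RESERVOIR_SIZE:
        reservoir.append((doc, n_tokens))  # fill phase
    else:
        j = random.randint(0, i)  # algorithm R: keep with probability k/(i+1)
        if j < RESERVOIR_SIZE:
            reservoir[j] = (doc, n_tokens)

total_tokens = sum(n for _, n in reservoir)
print(f"kept {len(reservoir):,} documents, {total_tokens:,} tokens")
```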

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("codelion/finewiki-1B")

# Access the training data
for example in dataset['train']:
    print(example['text'])
    print(example['title'])
    print(example['url'])
```
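
The subset can also be streamed rather than fully downloaded, which is handy for quick inspection (a short sketch using the standard `streaming=True` mode of `datasets`):

```python
from datasets import load_dataset

# Stream the train split without materializing it on disk
streamed = load_dataset("codelion/finewiki-1B", split="train", streaming=True)

# Peek at the first few examples
for example in streamed.take(3):
    print(example['title'], '-', example['url'])
```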

## Dataset Structure

Each example contains all fields from the original FineWiki dataset:

- `text` (string): The Wikipedia article text (primary content)
- `id` (string): Unique identifier
- `wikiname` (string): Wikipedia source name
- `page_id` (int64): Wikipedia page ID
- `title` (string): Article title
- `url` (string): Source Wikipedia URL
- `date_modified` (string): Last modification date
- `in_language` (string): Language code (always `en` for this subset)
- `wikidata_id` (string): Wikidata identifier
- `bytes_html` (int64): Size of the HTML content in bytes
- `wikitext` (string): Original wikitext markup
- `version` (int64): Article version number
- `infoboxes` (string): Extracted infobox data
- `has_math` (bool): Whether the article contains mathematical formulas (see the filtering example below)
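
Boolean fields such as `has_math` make it easy to slice the subset; a small sketch using the standard `datasets` `filter` API:

```python
from datasets import load_dataset

dataset = load_dataset("codelion/finewiki-1B", split="train")

# Keep only the articles flagged as containing mathematical formulas
math_articles = dataset.filter(lambda example: example["has_math"])
print(f"{len(math_articles)} of {len(dataset)} articles contain math")
```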

## Use Cases

This sampled dataset is ideal for:

- 🔬 Small-scale language model pretraining experiments
- 📊 Dataset composition studies
- ⚡ Quick prototyping and testing
- 💰 Low-cost training runs

## Citation

If you use this dataset, please cite:

```bibtex
@article{sharma2025billion,
  title={The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix},
  author={Sharma, Asankhaya},
  year={2025},
  url={https://huggingface.co/blog/codelion/optimal-dataset-mixing/}
}
```

For more details, see the [blog post](https://huggingface.co/blog/codelion/optimal-dataset-mixing/).

## License

Apache 2.0 (same as the original FineWiki dataset)

## Dataset Card Authors

CodeLion

## Dataset Card Contact

For questions or issues, please open an issue on the dataset repository.