---
pretty_name: Wikibooks
language:
- da
license: cc0-1.0
license_name: CC-0
size_categories:
- 1-10k
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
source_datasets:
- danish-foundation-models/danish-gigaword
domains:
- Books
---
# Dataset Card for Wikibooks

<!-- START-SHORT DESCRIPTION -->
The Danish subsection of [Wikibooks](https://www.wikibooks.org).
<!-- END-SHORT DESCRIPTION -->

## Dataset Description

<!-- START-DESC-STATS -->
- **Number of samples**: 1.73K
- **Number of tokens (Llama 3)**: 7.63M
- **Average document length in tokens (min, max)**: 4.40K (8, 368.88K)
<!-- END-DESC-STATS -->

## Dataset Structure

An example from the dataset looks as follows.

<!-- START-SAMPLE -->
```py
{
  "id": "wikibooks_7",
  "text": "Boghylde:Hobbies\nHobbies er alle de aktiviteter vi giver os ud i, uden at forvente et afkast ud over[...]",
  "source": "wikibooks",
  "added": "2025-08-18",
  "created": "2006-06-20, 2006-06-20",
  "token_count": 95
}
```

### Data Fields

An entry in the dataset consists of the following fields:
- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): The date the document was added to this collection.
- `created` (`str`): The date range during which the document was originally created.
- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 tokenizer.
<!-- END-SAMPLE -->

### Dataset Statistics

<!-- START-DATASET PLOTS -->
<p align="center">
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
</p>
<!-- END-DATASET PLOTS -->
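If you want to inspect these fields yourself, the sketch below loads the parquet shards with the Hugging Face `datasets` library. The `data_files` glob is an assumption for illustration; point it at wherever the parquet files are stored.

```py
# Minimal sketch: load the Wikibooks parquet shards with the `datasets` library.
# The data_files glob below is an assumption; adjust it to the actual file location.
from datasets import load_dataset

ds = load_dataset("parquet", data_files="wikibooks/*.parquet", split="train")

# Inspect the first sample and the fields described above.
sample = ds[0]
print(sample["id"], sample["source"], sample["token_count"])
print(sample["text"][:200])
```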
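The document-length statistics above can be recomputed along these lines. This is a sketch only: it assumes the `meta-llama/Meta-Llama-3-8B` tokenizer (the exact Llama 3 checkpoint used for `token_count` is not stated on this card, and that model requires accepting Meta's license on the Hub) and a local copy of the parquet files.

```py
# Sketch: recompute per-document token counts with a Llama 3 tokenizer.
# Assumptions: the meta-llama/Meta-Llama-3-8B tokenizer matches the one used for
# `token_count`, and the parquet files are available locally (path is illustrative).
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
ds = load_dataset("parquet", data_files="wikibooks/*.parquet", split="train")

lengths = [len(tokenizer(doc["text"])["input_ids"]) for doc in ds]
print(f"documents: {len(lengths):,}")
print(f"total tokens: {sum(lengths):,}")
print(f"min/mean/max length: {min(lengths)}, {sum(lengths) / len(lengths):.1f}, {max(lengths)}")
```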
### Processing

For this dataset we pulled the latest [database dump from Wikimedia](https://dumps.wikimedia.org/dawikibooks/latest/) and extracted the texts using the [wtf_wikipedia](https://github.com/spencermountain/wtf_wikipedia/tree/dev) parser.
Because the parser is written in JavaScript, you need Node.js installed on your machine.
Before running the `create.py` file, install the parser's dependencies:
```bash
$ cd parser/ && npm install && cd ..
```
We chose `wtf_wikipedia` because it was empirically the best of the parsers we tested. We also evaluated `mwparserfromhell`, `mediawiki_dump`, and `wikiextractor`, but they still produced artifacts from the wikicode parsing.
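For orientation, the sketch below outlines the extraction flow in Python: it downloads the latest Danish Wikibooks dump and hands it to a Node entry point that wraps `wtf_wikipedia`. The dump filename follows the standard Wikimedia naming and the `parser/parse.js` script name is a hypothetical stand-in; the actual `create.py` in this repository may be organized differently.

```py
# Sketch of the processing flow described above (not the actual create.py).
# Assumptions: a standard Wikimedia dump filename and a hypothetical parser/parse.js
# entry point that wraps wtf_wikipedia and writes out plain text.
import subprocess
import urllib.request

DUMP = "dawikibooks-latest-pages-articles.xml.bz2"
DUMP_URL = f"https://dumps.wikimedia.org/dawikibooks/latest/{DUMP}"

# 1. Fetch the latest Danish Wikibooks database dump.
urllib.request.urlretrieve(DUMP_URL, DUMP)

# 2. Extract plain text with the Node-based wtf_wikipedia parser (hypothetical script name).
subprocess.run(["node", "parser/parse.js", DUMP], check=True)
```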
## Additional Information

### Citation Information

This dataset was originally published as part of the [Danish Gigaword Corpus](https://huggingface.co/danish-foundation-models). We recommend citing and referencing it if you use this dataset:

> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
```bibtex
@inproceedings{dagw,
  title = {{The Danish Gigaword Corpus}},
  author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
  year = 2021,
  booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
  publisher = {NEALT}
}
```