---
pretty_name: Wikibooks
language:
- da
license: cc0-1.0
license_name: CC-0
size_categories:
- 1-10k
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
source_datasets:
- danish-foundation-models/danish-gigaword
domains:
- Books
---
# Dataset Card for Wikibooks
<!-- START-SHORT DESCRIPTION -->
The Danish subsection of [Wikibooks](https://www.wikibooks.org).
<!-- END-SHORT DESCRIPTION -->
## Dataset Description
<!-- START-DESC-STATS -->
- **Number of samples**: 1.73K
- **Number of tokens (Llama 3)**: 7.63M
- **Average document length in tokens (min, max)**: 4.40K (8, 368.88K)
<!-- END-DESC-STATS -->
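To inspect the data yourself, you can load it with the `datasets` library. A minimal sketch; the repo id below is a placeholder and should be replaced with this dataset's actual path on the Hub:

```py
from datasets import load_dataset

# Placeholder repo id -- replace with the Hub path of this dataset card.
ds = load_dataset("danish-foundation-models/wikibooks", split="train")
print(ds[0]["id"], ds[0]["token_count"])
```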
## Dataset Structure
An example from the dataset looks as follows.
<!-- START-SAMPLE -->
```py
{
    "id": "wikibooks_7",
    "text": "Boghylde:Hobbies\nHobbies er alle de aktiviteter vi giver os ud i, uden at forvente et afkast ud over[...]",
    "source": "wikibooks",
    "added": "2025-08-18",
    "created": "2006-06-20, 2006-06-20",
    "token_count": 95
}
```
### Data Fields
An entry in the dataset consists of the following fields:
- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): The date range for when the document was originally created.
- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 8B tokenizer (see the sketch below).
<!-- END-SAMPLE -->
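The `token_count` field can be reproduced with the `transformers` library. A minimal sketch, assuming the counts come from the gated `meta-llama/Meta-Llama-3-8B` tokenizer and exclude special tokens (both assumptions on our part):

```py
from transformers import AutoTokenizer

# Assumption: the card's "Llama 3 8B" tokenizer is meta-llama/Meta-Llama-3-8B
# (a gated repo -- you need to accept its license on the Hub first).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def token_count(text: str) -> int:
    # Count tokens without special tokens such as <|begin_of_text|>.
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])

print(token_count("Hobbies er alle de aktiviteter vi giver os ud i."))
```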
### Dataset Statistics
<!-- START-DATASET PLOTS -->
<p align="center">
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
</p>
<!-- END-DATASET PLOTS -->
### Processing
For this dataset we pulled the latest [database dump from Wikimedia](https://dumps.wikimedia.org/dawikibooks/latest/) and extracted the text using the [wtf_wikipedia](https://github.com/spencermountain/wtf_wikipedia/tree/dev) parser.
Because the parser is written in JavaScript, you need to have Node.js installed on your machine.
Before running the `create.py` script, install the parser's dependencies:
```bash
$ cd parser/ && npm install && cd ..
```
We chose `wtf_wikipedia` because it was empirically the best of the parsers we tested. We compared `mwparserfromhell`, `mediawiki_dump`, `wikiextractor`, and `wtf_wikipedia`; the others still left artifacts from the wikicode parsing in their output.
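As a rough illustration of the first step, the sketch below fetches the latest dump. The exact file name is an assumption based on Wikimedia's standard naming scheme, so check the [dump listing](https://dumps.wikimedia.org/dawikibooks/latest/) before relying on it:

```py
import urllib.request

# Assumed file name, following Wikimedia's usual <wiki>-latest-pages-articles.xml.bz2
# pattern -- verify against https://dumps.wikimedia.org/dawikibooks/latest/.
url = "https://dumps.wikimedia.org/dawikibooks/latest/dawikibooks-latest-pages-articles.xml.bz2"
urllib.request.urlretrieve(url, "dawikibooks-latest-pages-articles.xml.bz2")
```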
## Additional Information
### Citation Information
This dataset was initially published as part of the [Danish Gigaword](https://huggingface.co/danish-foundation-models) corpus. We recommend that you cite and reference it if you use this dataset:
> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
```bibtex
@inproceedings{dagw,
title = {{The Danish Gigaword Corpus}},
author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
year = 2021,
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
publisher = {NEALT}
}
```