Update README.md

README.md (changed)
## Data Collection
The initial dataset was collected from **Nairaland.com** by extracting **about 30 million unique posts** from 19 different sections of the site. Additionally, **1,289,195 outbound links** were extracted from these posts. The content of these web pages was extracted using **Trafilatura**, a popular library for web scraping and content extraction.
The full data collection process can be found [in this repo](https://github.com/saheedniyi02/Naijaweb); kindly give it a star ⭐.
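As a rough illustration of the extraction step (not the repository's exact script), the sketch below assumes the outbound links have already been pulled from the posts and uses Trafilatura's `fetch_url` and `extract` helpers to recover the main text of each page; the URL list shown is a placeholder.

```python
# Illustrative sketch only: extract the main content of already-collected
# outbound links with Trafilatura. The URL list is a placeholder, not the
# actual Nairaland link dump.
import trafilatura

outbound_links = ["https://example.com/some-article"]

for url in outbound_links:
    downloaded = trafilatura.fetch_url(url)  # download the raw HTML
    if downloaded is None:                   # skip unreachable pages
        continue
    text = trafilatura.extract(downloaded)   # strip boilerplate, keep the main text
    if text:
        print(text[:200])                    # preview the extracted content
```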
## Data Cleaning
The cleaning process was conducted using **[Datatrove](https://github.com/huggingface/datatrove)**, the same library employed in cleaning the **[FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)** dataset, which is known for its high quality. The data cleaning process involved multiple stages of deduplication, filtering, and normalization to ensure the dataset's quality matches that of other high-performing datasets.
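The exact pipeline lives in the repository linked above. Purely as a sketch of what a Datatrove cleaning run can look like (the paths, filter choices, and task count below are assumptions, not the published configuration), a minimal local pipeline might be structured like this:

```python
# Hypothetical sketch of a Datatrove cleaning pipeline: read JSONL documents,
# apply language and quality filters, and write the survivors back out.
# Paths, filter choices, and task count are illustrative assumptions only.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.filters import LanguageFilter, GopherRepetitionFilter, GopherQualityFilter
from datatrove.pipeline.writers.jsonl import JsonlWriter

executor = LocalPipelineExecutor(
    pipeline=[
        JsonlReader("data/raw"),            # raw extracted documents
        LanguageFilter(languages=["en"]),   # keep English documents
        GopherRepetitionFilter(),           # drop highly repetitive pages
        GopherQualityFilter(),              # drop low-quality pages
        JsonlWriter("data/cleaned"),        # write the filtered output
    ],
    tasks=4,
    logging_dir="logs/cleaning",
)

if __name__ == "__main__":
    executor.run()
```

Deduplication in Datatrove (for example, MinHash-based) is typically run as its own set of pipeline stages after filtering, which is why it is not shown in this short sketch.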