Update README.md

README.md CHANGED

@@ -32,4 +32,50 @@ configs:
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
pretty_name: FineWeb-edu 10BT Sample embedded with nomic-text-v1.5
size_categories:
- 10M<n<100M
---
# FineWeb-edu 10BT Sample embedded with nomic-text-v1.5

The [FineWeb-edu 10BT sample](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu/tree/main/sample/10BT) was first chunked into 500-token chunks (using the bert-base-uncased tokenizer) with 10% overlap, resulting in 25 million rows and 10.5BT.
The chunks were then embedded using [nomic-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5).
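For illustration, here is a minimal sketch of that chunk-then-embed step. It assumes the 500-token window and 10% overlap described above; `chunk_text` is a hypothetical helper, and the actual pipeline lives in the repository linked under Dataset Sources.

```python
# Sketch only: the chunking parameters come from the description above;
# everything else is illustrative rather than the production code.
from transformers import AutoTokenizer
from sentence_transformers import SentenceTransformer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

def chunk_text(text: str, chunk_size: int = 500, overlap: float = 0.10) -> list[str]:
    """Split a document into ~500-token windows with 10% overlap."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    step = int(chunk_size * (1 - overlap))  # 450-token stride
    chunks = []
    for start in range(0, len(ids), step):
        chunks.append(tokenizer.decode(ids[start : start + chunk_size]))
        if start + chunk_size >= len(ids):
            break
    return chunks

chunks = chunk_text("some long fineweb-edu document ...")
# nomic-embed-text-v1.5 expects a task prefix; this dataset used "clustering: ".
embeddings = model.encode([f"clustering: {c}" for c in chunks])
print(embeddings.shape)  # (num_chunks, 768)
```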
## Dataset Details

### Dataset Description

- **Curated by:** Ian @enjalot Johnson
- **Funded by:** Latent Interfaces
- **License:** Apache License 2.0
### Dataset Sources

- **Repository:** https://github.com/enjalot/fineweb-modal
## Uses

### Direct Use

The dataset was embedded with the `clustering: ` prefix, so the main use case is clustering and feature extraction.
The motivation for making the dataset is to create training data for an [SAE to identify features](https://transformer-circuits.pub/2024/scaling-monosemanticity) in nomic-text-v1.5.
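As a hedged example of that use case, the sketch below pulls a sample of the precomputed embeddings and clusters them; the repo id is a placeholder for this dataset, and the sample size and cluster count are arbitrary.

```python
# Illustrative only: replace the placeholder repo id with this dataset's id;
# 10k rows and k=50 are arbitrary choices, not recommendations.
import numpy as np
from datasets import load_dataset
from sklearn.cluster import KMeans

ds = load_dataset("user/fineweb-edu-10bt-nomic", split="train", streaming=True)

# Stream a small sample of embeddings rather than all ~25M rows.
X = np.array([row["embedding"] for row, _ in zip(ds, range(10_000))], dtype=np.float32)

labels = KMeans(n_clusters=50, n_init="auto").fit_predict(X)
print(np.bincount(labels)[:10])  # rough cluster sizes
```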
## Dataset Structure

The columns of the dataset are:

- id: the document id in fineweb-edu
- url: the url of the document in fineweb-edu
- score: the educational quality score from fineweb-edu
- dump: the CommonCrawl dump the document came from in fineweb-edu
- chunk_index: which chunk of the original document this is
- chunk_text: the text of the chunk
- chunk_tokens: the chunk's tokens, as tokenized by bert-base-uncased
- chunk_token_count: the number of tokens in this chunk
- embedding: the 768-dimension vector representing the nomic-text-v1.5 embedding
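A quick way to check this schema is to stream a single row (again with a placeholder repo id):

```python
# Inspect one record against the column list above (placeholder repo id).
from datasets import load_dataset

ds = load_dataset("user/fineweb-edu-10bt-nomic", split="train", streaming=True)
row = next(iter(ds))
print(row["id"], row["url"], row["score"], row["dump"])
print(row["chunk_index"], row["chunk_token_count"], len(row["embedding"]))  # ..., 768
```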
## Dataset Creation

### Curation Rationale

The 10BT sample is big enough to warrant a scaled-up process but manageable enough to be done on a small budget. Using on-demand CPUs and GPUs from modal.com, the total cost was ~$60.