# Nano-Start Tokenized Dataset
Pre-tokenized binary files ready for training with oxidizr. This is the tokenized version of fs90/nano-start-data.
## What is Tokenization?
Language models don't process text directly; they work with numbers called tokens. Tokenization converts text into token IDs:

```
"Hello world" → [9906, 1917]
```

This dataset is pre-tokenized for simplicity: download it and start training immediately. To learn how tokenization works and to create your own datasets, see the splintr project.
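
You can reproduce the mapping above with the `tiktoken` library, which ships the cl100k_base encoding this dataset uses (tiktoken is only an illustration here; the dataset itself was prepared with splintr):

```python
import tiktoken

# cl100k_base is the encoding used for this dataset (see "Tokenizer Details" below).
enc = tiktoken.get_encoding("cl100k_base")

print(enc.encode("Hello world"))  # [9906, 1917]
print(enc.decode([9906, 1917]))   # "Hello world"
```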
## Quick Start
### Option A: Using the `hf` CLI

```bash
pip install huggingface_hub
hf download fs90/nano-start-data-bin --local-dir data/nano-start/tokenized --repo-type dataset
```
### Option B: Direct download

Download `combined.bin` from the Files tab and place it in your project.
**Train with oxidizr:**

```bash
cargo run --release -- \
  --config models/nano-start.yaml \
  --data data/nano-start/tokenized/combined.bin
```
## Files

Download `combined.bin` for training; it contains all of the data merged together:

| File | Size | Tokens | Description |
|---|---|---|---|
| `combined.bin` | 25,516 bytes | 6,379 | All data merged (recommended) |
### Individual Files (Optional)

You can also train on individual subsets. Training on different data produces different model behavior:

| File | Size | Tokens | Description |
|---|---|---|---|
| `completions.bin` | 8,788 bytes | 2,197 | Factual statements only |
| `qa.bin` | 11,036 bytes | 2,759 | Q&A pairs only |
| `chat.bin` | 5,692 bytes | 1,423 | Multi-turn conversations only |
Experiment with different files to see how the training data affects model behavior!
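
For example, to train on the Q&A subset only, point `--data` at `qa.bin` instead of `combined.bin` (this assumes it was downloaded to the same directory as the combined file):

```bash
cargo run --release -- \
  --config models/nano-start.yaml \
  --data data/nano-start/tokenized/qa.bin
```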
## Binary Format

Each `.bin` file contains raw token IDs:

- **Encoding:** u32 (32-bit unsigned integer)
- **Byte order:** Little-endian
- **Headers:** None (raw token stream)
- **Tokenizer:** cl100k_base (OpenAI, vocab size: 100,331)
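
As a sketch of the format, here is how a compatible file could be produced from a list of token IDs (the `write_tokens` helper is illustrative only; it is not part of oxidizr or splintr):

```python
import struct

def write_tokens(path, tokens):
    # Pack each token ID as a little-endian unsigned 32-bit integer,
    # with no header, matching the format described above.
    with open(path, "wb") as f:
        f.write(struct.pack(f"<{len(tokens)}I", *tokens))

write_tokens("my-data.bin", [9906, 1917, 100257])  # "Hello world" + <|endoftext|>
```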
## Reading the Data

```python
import struct

def read_tokens(path):
    # Read the whole file and unpack it as little-endian u32 token IDs.
    with open(path, "rb") as f:
        data = f.read()
    return list(struct.unpack(f"<{len(data)//4}I", data))

tokens = read_tokens("combined.bin")
print(f"Total tokens: {len(tokens)}")
```
## Tokenizer Details

| Property | Value |
|---|---|
| Tokenizer | cl100k_base (OpenAI GPT-4/GPT-3.5) |
| Vocab size | 100,331 |
| EOS token | `<|endoftext|>` (ID: 100257) |
### Special Tokens

| Token | ID | Purpose |
|---|---|---|
| `<|endoftext|>` | 100257 | Separates examples |
| `<|system|>` | 100277 | System instructions |
| `<|user|>` | 100278 | User input |
| `<|assistant|>` | 100279 | Model response |
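
Since `<|endoftext|>` separates examples, the flat token stream can be split back into individual training examples. A minimal sketch (the `split_examples` helper is hypothetical and reuses `read_tokens` from above):

```python
EOT = 100257  # <|endoftext|> separates examples

def split_examples(tokens, eot=EOT):
    # Collect runs of tokens between <|endoftext|> separators.
    examples, current = [], []
    for t in tokens:
        if t == eot:
            if current:
                examples.append(current)
            current = []
        else:
            current.append(t)
    if current:
        examples.append(current)
    return examples

examples = split_examples(read_tokens("combined.bin"))
print(f"Examples: {len(examples)}")
```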
## Source Data

For the human-readable text before tokenization, see fs90/nano-start-data.
## Related Resources
- Raw data: fs90/nano-start-data
- Training framework: oxidizr
- Tokenization: splintr - Learn how to tokenize your own data
## License
MIT License
## Citation

```bibtex
@dataset{nano_start_bin_2024,
  title={Nano-Start Tokenized Dataset},
  author={fs90},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/fs90/nano-start-data-bin}
}
```