KL3M Multi-Word Tokenizer - 8K
This is the 8,192 token variant of the KL3M (Kelvin Legal Large Language Model) multi-word tokenizer family, optimized for legal domain text with hierarchical vocabulary nesting.
Overview
The KL3M multi-word tokenizers are a family of byte-pair encoding (BPE) tokenizers trained on ~44 GB of legal-domain text from the KL3M dataset (a copyright-clean legal corpus from the ALEA Institute). Unlike general-purpose tokenizers, they:
- Capture multi-word legal phrases as single tokens (e.g., "United States", "with respect to", "Supreme Court")
- Use hierarchical vocabulary nesting where smaller vocabularies are proper subsets of larger ones
- Enable vocabulary expansion experiments and transfer learning across vocabulary sizes
- Optimize for legal domain text while maintaining general-purpose capability
Tokenizer Family
This tokenizer is part of a hierarchically nested family. Token IDs in smaller vocabularies are identical across all larger vocabularies, enabling seamless vocabulary expansion:
| Vocabulary Size | HuggingFace Repository | File Size |
|---|---|---|
| 4,096 (4K) | alea-institute/kl3m-multi-word-001-4k | 118 KB |
| 8,192 (8K) | alea-institute/kl3m-multi-word-001-8k | 249 KB |
| 16,384 (16K) | alea-institute/kl3m-multi-word-001-16k | 529 KB |
| 32,768 (32K) | alea-institute/kl3m-multi-word-001-32k | 1.2 MB |
| 65,536 (64K) | alea-institute/kl3m-multi-word-001-64k | 2.4 MB |
| 131,072 (128K) | alea-institute/kl3m-multi-word-001-128k | 5.2 MB |
You are viewing: 8,192 (8K)
Key Features
1. Multi-Word Tokenization
Legal text contains frequent multi-word phrases that benefit from being treated as single tokens. The larger vocabularies capture increasingly sophisticated legal terminology:
Example 1: "with respect to" (common legal phrase)
from tokenizers import Tokenizer
tok4k = Tokenizer.from_file("tokenizer-4096.json")
tok128k = Tokenizer.from_file("tokenizer-131072.json")
text = "with respect to"
# 4K tokenizer: 3 tokens
tok4k.encode(text).tokens
# ['with respec', 't ', 'to']
tok4k.encode(text).ids
# [2317, 313, 424]
# 128K tokenizer: 1 token
tok128k.encode(text).tokens
# ['with respect to']
tok128k.encode(text).ids
# [15903]
Example 2: "Supreme Court"
text = "Supreme Court"
# 4K tokenizer: 5 tokens
tok4k.encode(text).tokens
# ['Sup', 'rem', 'e ', 'Cour', 't']
tok4k.encode(text).ids
# [4091, 1878, 296, 3063, 170]
# 128K tokenizer: 1 token
tok128k.encode(text).tokens
# ['Supreme Court']
tok128k.encode(text).ids
# [81445]
Example 3: "United States"
text = "United States"
# 4K: 2 tokens → 128K: 1 token
tok4k.encode(text).tokens # ['United St', 'ates']
tok128k.encode(text).tokens # ['United States']
Example 4: "Department of State"
text = "Department of State"
# 4K: 3 tokens β 8K+: 2 tokens
tok4k.encode(text).tokens # ['Depart', 'ment of ', 'State']
tok8k.encode(text).tokens # ['Department of ', 'State']
Other multi-word tokens in larger vocabularies:
- Legal phrases: "in accordance with", "on behalf of", "pursuant to"
- Frequent constructions: "of the ", "in the ", ", the ", ". The "
- Legal terminology: "the defendant", "the Court", "Therefore,", "However,"
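To see the effect on sequence length, the sketch below encodes one sample sentence with each family member and prints the resulting token counts. It assumes the family files are available locally under the tokenizer-{size}.json names used in the examples above; the exact counts depend on the vocabulary.
from tokenizers import Tokenizer
# Compare sequence length across the family on one sample legal sentence.
# Assumes local files named tokenizer-{size}.json, as in the examples above.
text = "The Supreme Court held that, with respect to the United States, the claim was time-barred."
for size in [4096, 8192, 16384, 32768, 65536, 131072]:
    tok = Tokenizer.from_file(f"tokenizer-{size}.json")
    print(f"{size:>6}-token vocabulary: {len(tok.encode(text).ids)} tokens")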
2. Hierarchical Token ID Nesting
Token IDs are preserved across vocabulary sizes: a token with ID 1877 in the 4K vocabulary has the same ID in all larger vocabularies:
# Example: "of the" has the same token ID across ALL vocabulary sizes
text = "of the"
tok4k.encode(text).ids # [1877]
tok8k.encode(text).ids # [1877]
tok16k.encode(text).ids # [1877]
tok32k.encode(text).ids # [1877]
tok64k.encode(text).ids # [1877]
tok128k.encode(text).ids # [1877]
# Special tokens are identical across all sizes
tok4k.encode("<|start|>").ids # [0]
tok4k.encode("<|end|>").ids # [1]
tok4k.encode("<|pad|>").ids # [2]
This enables:
- Vocabulary expansion during training: Start with 4K vocab, expand to 8K → 16K → 32K
- Embedding transfer: Initialize larger vocabulary models from smaller ones
- Controlled ablation studies: Isolate the effect of vocabulary size
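The nesting property can also be checked directly from the vocabularies themselves. A minimal sketch, assuming the local tokenizer-{size}.json files used above: every (token, ID) pair in the 4K vocabulary should appear unchanged in the 8K vocabulary.
from tokenizers import Tokenizer
# Verify that the 4K vocabulary is a proper subset of the 8K vocabulary with identical IDs.
small = Tokenizer.from_file("tokenizer-4096.json").get_vocab()
large = Tokenizer.from_file("tokenizer-8192.json").get_vocab()
mismatches = [t for t, i in small.items() if large.get(t) != i]
print(f"checked {len(small)} tokens, mismatches: {len(mismatches)}")  # expected: 0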
3. Special Tokens
All tokenizers include 7 special tokens with consistent IDs:
| Token | ID | Purpose |
|---|---|---|
| <\|start\|> | 0 | Start of sequence (GPT-style) |
| <\|end\|> | 1 | End of sequence |
| <\|pad\|> | 2 | Padding token |
| <\|unk\|> | 3 | Unknown token |
| <\|cls\|> | 4 | Classification token (BERT-style) |
| <\|sep\|> | 5 | Separator token (BERT-style) |
| <\|mask\|> | 6 | Mask token (MLM training) |
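A quick way to confirm these IDs for any family member is to look the tokens up directly; a minimal sketch using the 8K repository:
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_pretrained("alea-institute/kl3m-multi-word-001-8k")
# Each special token should map to the ID listed in the table above.
for token in ["<|start|>", "<|end|>", "<|pad|>", "<|unk|>", "<|cls|>", "<|sep|>", "<|mask|>"]:
    print(token, "->", tokenizer.token_to_id(token))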
4. Argument Notation Tokens (Optional)
Some tokenizer variants include 47 additional special tokens (IDs 7-53) for structured reasoning, debate, and argumentation:
| Category | Markers |
|---|---|
| Claim type | Fact/descriptive claim, Value/ethical claim, Policy/action claim, Preference/taste claim |
| Belief strength | Certain true, Strongly believe true, Lean true, Undecided, Lean false, Certain false |
| Value/attitude | Approve/good, Disapprove/bad, Mixed, Neutral |
| Structural | Therefore, Because, And, Or, Equivalent, Supports, Undercuts, Explains, Mutual support, Evidence marker |
| Evidence sources | Observation, Experiment, Data/statistics, Theory/literature, Testimony, Intuition, Strong evidence, Weak evidence |
| Meta-discourse | Warning/objection, Emphasis, Question, Revision, Reframe |
| Agent markers | Open agent quote, Close agent quote |
| Numbered markers | Circled numbers 1-10 |
Each marker is reserved as its own special token in this ID range.
These tokens enable models to represent structured arguments, track evidence strength, and model multi-agent debates with explicit reasoning chains.
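To check whether a given variant carries these markers, you can list the reserved ID range directly. A minimal sketch (variants without the argument notation tokens will show ordinary vocabulary entries in this range instead):
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_pretrained("alea-institute/kl3m-multi-word-001-8k")
# IDs 7-53 are the argument notation range described above (when present).
for i in range(7, 54):
    print(i, repr(tokenizer.id_to_token(i)))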
Usage
Quick Start
from transformers import PreTrainedTokenizerFast
# Load tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("alea-institute/kl3m-multi-word-001-8k")
# Tokenize text
text = "The Supreme Court held that the defendant violated due process."
tokens = tokenizer.tokenize(text)
ids = tokenizer.encode(text)
print(f"Tokens: {tokens}")
print(f"Token IDs: {ids}")
Using with the 🤗 Tokenizers Library
from tokenizers import Tokenizer
# Load tokenizer
tokenizer = Tokenizer.from_pretrained("alea-institute/kl3m-multi-word-001-8k")
# Encode text
encoding = tokenizer.encode("in accordance with the United States Code")
print(f"Tokens: {encoding.tokens}")
print(f"IDs: {encoding.ids}")
Configuration for Training
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast.from_pretrained("alea-institute/kl3m-multi-word-001-8k")
# Configure special tokens for your model
tokenizer.pad_token = "<|pad|>"
tokenizer.eos_token = "<|end|>"
tokenizer.bos_token = "<|start|>"
tokenizer.unk_token = "<|unk|>"
tokenizer.cls_token = "<|cls|>" # For BERT-style models
tokenizer.sep_token = "<|sep|>" # For BERT-style models
tokenizer.mask_token = "<|mask|>" # For masked language modeling
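With the padding token configured, batched encoding works as usual. A minimal sketch of padding a small batch (return_tensors="pt" assumes PyTorch is installed; the example sentences are illustrative):
batch = [
    "The Court held that the statute of limitations had expired.",
    "The agreement shall be governed by the laws of the State of Delaware.",
]
encoded = tokenizer(batch, padding=True, truncation=True, max_length=64, return_tensors="pt")
print(encoded["input_ids"].shape)    # (2, padded_length)
print(encoded["attention_mask"][0])  # 1 for real tokens, 0 for <|pad|> positions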
Training Details
Training Corpus
- Source: KL3M (Kelvin Legal Large Language Model) dataset
- Size: ~44.2 GB (44,168,540,153 bytes)
- Content: 1,018,355,750 lines, 5,997,814,602 words
- Domain: Legal text (court opinions, contracts, statutes, legal documents)
- License: Copyright-clean corpus from the ALEA Institute
Training Method
Trained using the bbpe (Binary Byte Pair Encoding) Rust crate with multi-word optimization:
zcat /nas4/data/kl3m/kl3m-bbpe-sample.txt.gz | \
bbpe train -v - \
--max-entropy 7.0 \
--preprocessor unicode-whitespace \
--preprocessor-probability 0.1 \
--vocab-size 131072 \
--family-size 65536 --family-size 32768 --family-size 16384 \
--family-size 8192 --family-size 4096 \
--family-template tokenizer-{size}.json \
--output tokenizer-131072.json
Parameters:
- --max-entropy 7.0: Entropy threshold balancing multi-word phrases against common tokens
- --family-size: Creates the nested vocabulary families, ensuring ID consistency across sizes
- --preprocessor unicode-whitespace: Whitespace normalization
Use Cases
1. Legal Language Models
Train domain-specific language models optimized for legal text:
from transformers import AutoModelForCausalLM, PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast.from_pretrained("alea-institute/kl3m-multi-word-001-8k")
model = AutoModelForCausalLM.from_pretrained("your-legal-model")
# The model will efficiently process legal terminology
text = "The Court held that the statute of limitations had expired."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
2. Vocabulary Ablation Studies
Compare model performance across vocabulary sizes:
# Train models with different vocabulary sizes
for vocab_size in ["4k", "8k", "16k", "32k", "64k", "128k"]:
tokenizer = PreTrainedTokenizerFast.from_pretrained(
f"alea-institute/kl3m-multi-word-001-{vocab_size}"
)
# Train model and compare convergence, perplexity, downstream performance
3. Curriculum Learning with Vocabulary Expansion
Leverage hierarchical nesting for progressive vocabulary growth:
# Stage 1: Train with 4K vocabulary
tokenizer_4k = PreTrainedTokenizerFast.from_pretrained("alea-institute/kl3m-multi-word-001-4k")
# ... train model ...
# Stage 2: Expand to 16K vocabulary (token IDs 0-4095 are identical, so their embeddings carry over)
tokenizer_16k = PreTrainedTokenizerFast.from_pretrained("alea-institute/kl3m-multi-word-001-16k")
# ... expand model embeddings and continue training ...
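A sketch of the embedding expansion step with transformers, assuming a Stage 1 checkpoint (hypothetical name below) trained with the 4K tokenizer; optimizer state and any untied output head need the same treatment:
from transformers import AutoModelForCausalLM
# Hypothetical Stage 1 checkpoint trained with the 4K tokenizer.
model = AutoModelForCausalLM.from_pretrained("your-4k-vocab-model")
# resize_token_embeddings keeps the existing rows and randomly initializes the new ones.
# Because token IDs 0-4095 are identical across the family, the learned embeddings for
# those rows remain valid under the 16K tokenizer; only the new rows must be learned.
model.resize_token_embeddings(len(tokenizer_16k))
# ... continue training with tokenizer_16k ...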
Research Applications
These tokenizers enable research into:
Vocabulary Size Effects: How does vocabulary size affect convergence speed, final perplexity, and downstream task performance?
Domain-Specific Tokenization: Do legal domain tokenizers outperform general-purpose tokenizers (GPT-4, LLaMA) on legal tasks?
Multi-Word Phrase Modeling: Does capturing legal phrases as single tokens improve legal reasoning and understanding?
Hierarchical Curriculum Learning: Can progressive vocabulary expansion improve training efficiency or final performance?
Transfer Learning: Can models trained on smaller vocabularies transfer knowledge to larger vocabularies?
Citation
If you use these tokenizers in your research, please cite:
@misc{kl3m-multi-word-tokenizers-2025,
title={KL3M Multi-Word Tokenizers: Hierarchically Nested BPE for Legal Domain Language Modeling},
author={ALEA Institute},
year={2025},
url={https://huggingface.co/alea-institute/kl3m-multi-word-001-8k}
}
Also consider citing the KL3M dataset:
@article{kl3m-data-2025,
title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
author={Bommarito, Michael and others},
journal={arXiv preprint arXiv:2504.07854},
year={2025}
}
License
These tokenizers are released under the MIT License. The training corpus (KL3M dataset) is copyright-clean and permissively licensed.
Links
- ALEA Institute: https://aleainstitute.ai/
- KL3M Project: https://aleainstitute.ai/work/kl3m/
- KL3M Dataset Paper: https://arxiv.org/html/2504.07854
- Research Repository: https://github.com/alea-institute/multi-word-tokenization
Acknowledgments
These tokenizers were created as part of research into vocabulary size effects on legal language model performance. The KL3M dataset and tokenizers are stewarded by the ALEA Institute for public benefit.
Version: 001
Created: November 2025
Vocabulary Size: 8,192 tokens
Domain: Legal text (with general-purpose capability)