Grand Tokenizer - Cluster 7 (Vocab 128000)
This is a multilingual tokenizer trained on cluster 7 with a vocabulary size of 128,000.
Usage
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tokenizer-iso-cluster-7-vocab-128000")
```
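Once loaded, the tokenizer can be used like any other `transformers` tokenizer. A minimal sketch (the sample sentence and printed fields are purely illustrative):

```python
# Tokenize a sample sentence and inspect the pieces and ids.
text = "Hello, world!"
encoding = tokenizer(text)

print(tokenizer.tokenize(text))                  # subword pieces
print(encoding["input_ids"])                     # corresponding token ids
print(tokenizer.decode(encoding["input_ids"]))   # round-trip back to text
```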
Files
- `final_normalized_tokenizer.model`: SentencePiece model file
- `final_normalized_tokenizer.vocab`: Vocabulary file
- `tokenizer.config`: Tokenizer configuration
- `special_tokens_map.json`: Special tokens mapping
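If you prefer to work with the raw SentencePiece model file instead of the `transformers` wrapper, it can be loaded with the `sentencepiece` library. A minimal sketch, assuming the file has been downloaded locally:

```python
import sentencepiece as spm

# Load the raw SentencePiece model directly, bypassing transformers.
sp = spm.SentencePieceProcessor(model_file="final_normalized_tokenizer.model")

print(sp.vocab_size())                            # expected to report 128000
print(sp.encode("Hello, world!", out_type=str))   # subword pieces
print(sp.encode("Hello, world!", out_type=int))   # token ids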
Training Details
- Cluster: 7
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram
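
The exact training command is not published with this model. As an illustration only, a Unigram SentencePiece model with this vocabulary size could be trained roughly as follows; the corpus path, character coverage, and other options are assumptions, not the settings actually used:

```python
import sentencepiece as spm

# Hypothetical training invocation; corpus path and extra options are assumptions.
spm.SentencePieceTrainer.train(
    input="cluster_7_corpus.txt",          # assumed multilingual corpus for cluster 7
    model_prefix="final_normalized_tokenizer",
    vocab_size=128000,
    model_type="unigram",
    character_coverage=0.9995,             # common choice for multilingual corpora
)
```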