---
license: cc-by-4.0
language:
- fa
task_categories:
- text-to-speech
- automatic-speech-recognition
- audio-to-audio
- audio-classification
- text-to-audio
- voice-activity-detection
tags:
- TTS
- farsi
- yodas
- quality
pretty_name: YodaLingua
dataset_info:
  features:
  - name: __key__
    dtype: string
  - name: mp3
    dtype:
      audio:
        sampling_rate: 24000
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: speaker_id
    dtype: string
  - name: dnsmos
    dtype: float64
  splits:
  - name: train
    num_bytes: 680427539.778
    num_examples: 14586
  download_size: 636505858
  dataset_size: 680427539.778
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# YodaLingua-Farsi
YodaLingua is a high-quality speech dataset designed for training text-to-speech (TTS) systems, ASR models, and any application requiring clean, well-aligned audio–text pairs.
This release contains the Farsi portion of the multilingual YodaLingua collection.
## 🧾 Dataset Overview
| Property | Value |
|---|---|
| Total clips | 14,586 audio–transcription pairs |
| Total duration | 43.7 hours |
| Speakers | 504 distinct speakers |
| Audio format | MP3 • mono • 24 kHz • 16-bit |
| License | CC BY 4.0 (permissive; commercial use allowed) |
All audio clips are noise-reduced, normalized, and matched with accurate transcriptions.
## Data Fields
Each entry in the dataset contains the following fields:
| Field | Description |
|---|---|
| `__key__` | Unique identifier for each sample. |
| `mp3` | Audio file associated with the sample (MP3 format, 24 kHz mono). |
| `text` | Ground-truth transcription of the audio segment. |
| `language` | Language code following ISO 639 standards. |
| `speaker_id` | Unique identifier assigned to each speaker; multiple clips can share the same speaker ID. |
| `dnsmos` | DNSMOS P.835 Overall (OVRL) score estimating perceptual speech quality; higher values indicate cleaner, more intelligible audio. |
## 🌍 Multilingual Versions
Other languages are available in the YodaLingua multilingual collection:
👉 https://huggingface.co/collections/Thomcles/yodalingua
## Processing Pipeline

We apply a multi-stage pipeline to ensure maximum data quality; illustrative sketches for several of the steps follow the list:
1. **Standardization**
   - Convert to WAV
   - Mono channel
   - Resample to 24 kHz
   - 16-bit sample width
   - Normalize to –20 dBFS (with volume correction between –3 and +3 dB)
2. **Noise Reduction**
   Advanced denoising is applied to improve clarity and remove background artifacts.
3. **Speaker Diarization**
   Long recordings are segmented by speaker to improve diversity and ensure speaker-consistent utterances.
4. **Voice Activity Detection (VAD)**
   Consecutive VAD segments from the same speaker are merged into clean utterances of 3–30 s.
5. **Transcription**
   State-of-the-art ASR models produce accurate text transcripts.
6. **Quality Filtering**
   Clips are filtered using DNSMOS P.835 OVRL; only samples with a score > 3.0 are retained.
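Step 1 (standardization): a minimal sketch, assuming `pydub` (the card does not name the audio tooling). The –20 dBFS target and the ±3 dB clamp follow the values listed above.

```python
from pydub import AudioSegment  # assumption: the card does not name the audio tooling


def standardize(path_in: str, path_out: str, target_dbfs: float = -20.0) -> None:
    """Convert a clip to mono, 24 kHz, 16-bit WAV and normalize loudness."""
    seg = AudioSegment.from_file(path_in)
    seg = seg.set_channels(1)        # mono
    seg = seg.set_frame_rate(24000)  # resample to 24 kHz
    seg = seg.set_sample_width(2)    # 16-bit samples
    # Move towards the -20 dBFS target, with the correction clamped to +/-3 dB
    gain = max(-3.0, min(3.0, target_dbfs - seg.dBFS))
    seg = seg.apply_gain(gain)
    seg.export(path_out, format="wav")
```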
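Step 4 (VAD merging): a rough sketch of the merging logic. The 3–30 s bounds come from the step description; the `Segment` structure and the `max_gap` pause tolerance are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    start: float   # seconds
    end: float     # seconds
    speaker: str   # diarization label


def merge_segments(segments: list[Segment],
                   max_gap: float = 0.5,   # assumed pause tolerance, in seconds
                   min_len: float = 3.0,
                   max_len: float = 30.0) -> list[Segment]:
    """Merge consecutive same-speaker VAD segments into 3-30 s utterances."""
    merged: list[Segment] = []
    for seg in sorted(segments, key=lambda s: s.start):
        if (merged
                and seg.speaker == merged[-1].speaker
                and seg.start - merged[-1].end <= max_gap
                and seg.end - merged[-1].start <= max_len):
            merged[-1].end = seg.end  # extend the current utterance
        else:
            merged.append(Segment(seg.start, seg.end, seg.speaker))
    # Drop utterances that fall outside the 3-30 s window
    return [u for u in merged if min_len <= u.end - u.start <= max_len]
```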
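Step 5 (transcription): the card does not say which ASR system was used; the snippet below uses openai-whisper purely as a stand-in.

```python
import whisper  # openai-whisper; a stand-in, not necessarily the model used for this dataset

model = whisper.load_model("large-v3")
result = model.transcribe("utterance.wav", language="fa")  # Farsi
print(result["text"])
```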
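Step 6 (quality filtering): because the per-clip scores are released in the `dnsmos` column, the > 3.0 threshold can be re-applied (or tightened) downstream:

```python
from datasets import load_dataset

ds = load_dataset("Thomcles/YodaLingua-Farsi", split="train")

# Keep only clips whose released DNSMOS OVRL score exceeds 3.0
clean = ds.filter(lambda ex: ex["dnsmos"] > 3.0)
print(f"{len(clean)}/{len(ds)} clips pass the threshold")
```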
## 📚 Loading the Dataset

```python
from datasets import load_dataset

ds = load_dataset("Thomcles/YodaLingua-Farsi")
```
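Continuing from the snippet above, each example exposes the fields listed in the Data Fields section; note that the audio column is named `mp3` and is decoded to an array with its sampling rate:

```python
sample = ds["train"][0]

print(sample["text"])         # ground-truth transcription
print(sample["speaker_id"])   # speaker label
print(sample["dnsmos"])       # DNSMOS P.835 OVRL score
audio = sample["mp3"]         # {"array": ..., "sampling_rate": 24000, "path": ...}
print(audio["sampling_rate"])
```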
## Phoneme Distribution (as produced by the G2P model)
The following table shows the relative frequency of G2P-generated phoneme units. These units include vowels, consonants, and G2P-specific markers (e.g., length ː, aspiration ʰ).
This is not intended as a phonological analysis of Farsi, but as an objective indicator of the phonetic diversity and coverage of the dataset for speech-generation tasks.
| Symbol | Frequency |
|---|---|
| ː | 13.94% |
| æ | 10.49% |
| ʰ | 6.25% |
| i | 5.90% |
| ɒ | 5.85% |
| n | 4.95% |
| h | 4.77% |
| d | 4.38% |
| e | 4.25% |
| ɾ | 4.20% |
| m | 4.06% |
| t | 3.53% |
| ʔ | 3.09% |
| b | 2.86% |
| v | 2.31% |
| k | 2.29% |
| u | 2.14% |
| o | 2.10% |
| s | 2.10% |
| ʃ | 1.82% |
| j | 1.78% |
| l | 1.73% |
| z | 1.32% |
| ɡ | 0.78% |
| x | 0.77% |
| f | 0.68% |
| q | 0.64% |
| ʒ | 0.60% |
| p | 0.42% |
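The G2P tool behind these statistics is not specified on this card; as a rough sketch under that caveat, a similar symbol histogram could be computed with the `phonemizer` package (espeak-ng backend):

```python
from collections import Counter

from datasets import load_dataset
from phonemizer import phonemize

ds = load_dataset("Thomcles/YodaLingua-Farsi", split="train")

# Phonemize all transcripts (espeak-ng backend, Farsi).
phones = phonemize(ds["text"], language="fa", backend="espeak", strip=True)

# Count individual IPA symbols (length and aspiration marks counted separately,
# matching the symbol-level table above).
counts = Counter(ch for utt in phones for ch in utt if not ch.isspace())
total = sum(counts.values())
for symbol, n in counts.most_common(10):
    print(f"{symbol}\t{n / total:.2%}")
```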
## Contact

Email: [email protected]