|
|
--- |
|
|
license: mit |
|
|
language: |
|
|
- en |
|
|
- ru |
|
|
tags: |
|
|
- audio |
|
|
- audio-quality-assessment |
|
|
- perceptual-audio-quality |
|
|
- regression |
|
|
- speech |
|
|
- music |
|
|
- sound |
|
|
- score |
|
|
--- |
|
|
|
|
|
# Balanced Perceptual Audio Quality Dataset |
|
|
|
|
|
## Dataset Summary |
|
|
|
|
|
This is a large-scale, balanced dataset designed for training perceptual audio quality assessment models. It consists of **612,020** examples, each containing a pair of 1-second audio clips: a high-quality original and a degraded version processed by various audio codecs. Each pair is accompanied by a perceptual quality score **(ranging from 0.0 to 1.0)** generated by ViSQOL-like algorithms.
|
|
|
|
|
The key feature of this dataset is its **balanced distribution of quality scores**. Unlike typical audio datasets, which are heavily skewed towards high-quality examples, this dataset has been carefully curated to ensure a uniform distribution across the entire quality spectrum, from "terrible" (MOS ≈ 1.0) to "excellent" (MOS ≈ 5.0). This makes it exceptionally well-suited for training robust regression models that perform reliably across all quality levels, especially in the challenging low-quality range. |
|
|
|
|
|
The dataset is provided in the `webdataset` format for high-performance, sequential I/O, making it ideal for large-scale deep learning workloads. |
|
|
|
|
|
**The primary use case** is training fast and accurate neural networks to assess the subjective quality of audio data, detect the level of audible degradation, and automatically select encoding settings to meet a target subjective quality level, among other applications. |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
The dataset is composed of 72 `.tar` files (shards), each containing thousands of training examples. The structure inside each shard is as follows: |
|
|
|
|
|
``` |
|
|
shard-000000.tar |
|
|
├── 000000000.original.wav |
|
|
├── 000000000.degraded.wav |
|
|
├── 000000000.score.txt |
|
|
├── 000000001.original.wav |
|
|
├── 000000001.degraded.wav |
|
|
├── 000000001.score.txt |
|
|
│ ... |
|
|
└── 0004999.score.txt |
|
|
``` |
|
|
|
|
|
Each example is a trio of files sharing the same unique key (e.g., `000000000`). |
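To inspect a shard directly, Python's standard `tarfile` module is sufficient. Below is a minimal sketch; the path `shard-000000.tar` assumes the shard has been downloaded locally.

```python
import tarfile

# Peek at the first few members of a shard to see the key/extension layout.
with tarfile.open("shard-000000.tar") as tar:
    for member in tar.getmembers()[:6]:
        print(member.name)
        # e.g. 000000000.original.wav, 000000000.degraded.wav, 000000000.score.txt
```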
|
|
|
|
|
### Data Instances |
|
|
|
|
|
A single data instance consists of: |
|
|
1. The original, reference audio clip. |
|
|
2. The degraded audio clip, which has been processed by an audio codec. |
|
|
3. A text file containing the **0.0 to 1.0** score. |
|
|
|
|
|
### Data Fields |
|
|
|
|
|
* `original.wav`: A 1-second, single-channel WAV file containing the reference audio at a 44.1 kHz sample rate.
|
|
* `degraded.wav`: A 1-second, single-channel WAV file containing the audio after degradation by various codecs (e.g., MP3, AAC, Opus) with a wide range of settings. |
|
|
* `score.txt`: A text file containing a single floating-point number. This score is the NMOS (Normalized Mean Opinion Score), which ranges from 0.0 (absolutely terrible) to 1.0 (transparent). To get the MOS value (on a 1.0-5.0 scale), use the formula `mos = (score * 4) + 1`, as shown in the snippet below.
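For illustration, here is a minimal sketch of reading a score file and converting it to MOS (the filename assumes the file has already been extracted from a shard):

```python
# Convert the stored NMOS score (0.0-1.0) to the MOS scale (1.0-5.0).
with open("000000000.score.txt") as f:
    nmos = float(f.read())

mos = nmos * 4 + 1
print(f"NMOS={nmos:.3f} -> MOS={mos:.2f}")
```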
|
|
|
|
|
### Data Splits |
|
|
|
|
|
The dataset does not have predefined train/validation/test splits. The intended usage is to split the shards manually. For example, with 72 total shards, a common split would be: |
|
|
* **Training:** The first ~80% of shards (e.g., `shard-000000.tar` to `shard-000057.tar`). |
|
|
* **Validation:** The remaining ~20% of shards (e.g., `shard-000058.tar` to `shard-000071.tar`); the corresponding URL patterns are shown below.
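Since `webdataset` understands brace expansion, such a split can be expressed directly as two URL patterns. A sketch, with a placeholder directory prefix:

```python
# Hypothetical local paths; adjust the prefix to wherever the shards are stored.
TRAIN_URLS = "./data/shard-{000000..000057}.tar"  # first 58 shards (~80%)
VAL_URLS = "./data/shard-{000058..000071}.tar"    # last 14 shards (~20%)
```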
|
|
|
|
|
### Dataset Creation |
|
|
|
|
|
The primary challenge in this task is data imbalance. Because most audio is high-quality, it is difficult for a model to learn the nuances of low-quality artifacts. |
|
|
|
|
|
To solve this, we first generated a massive, multi-million sample dataset. From this pool, we then **meticulously curated** a final dataset with a **perfectly flat, uniform distribution** of quality scores. This means there is a **precisely equal number of files** for every single score bin across the entire quality spectrum. |
|
|
|
|
|
To be precise, the dataset is partitioned into 20 bins based on the NMOS score, with each bin containing an identical number of samples. The exact distribution is as follows (a small sketch for reproducing the bin index is given after the table):
|
|
|
|
|
| Score Range | Number of Samples | |
|
|
|:-----------:|:-----------------:| |
|
|
| [0.00, 0.05) | 30601 | |
|
|
| [0.05, 0.10) | 30601 | |
|
|
| [0.10, 0.15) | 30601 | |
|
|
| [0.15, 0.20) | 30601 | |
|
|
| [0.20, 0.25) | 30601 | |
|
|
| [0.25, 0.30) | 30601 | |
|
|
| [0.30, 0.35) | 30601 | |
|
|
| [0.35, 0.40) | 30601 | |
|
|
| [0.40, 0.45) | 30601 | |
|
|
| [0.45, 0.50) | 30601 | |
|
|
| [0.50, 0.55) | 30601 | |
|
|
| [0.55, 0.60) | 30601 | |
|
|
| [0.60, 0.65) | 30601 | |
|
|
| [0.65, 0.70) | 30601 | |
|
|
| [0.70, 0.75) | 30601 | |
|
|
| [0.75, 0.80) | 30601 | |
|
|
| [0.80, 0.85) | 30601 | |
|
|
| [0.85, 0.90) | 30601 | |
|
|
| [0.90, 0.95) | 30601 | |
|
|
| [0.95, 1.00] | 30601 | |
|
|
| **Total** | **612020** | |
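If you need to reproduce this binning yourself (for example, for stratified sampling or to verify the balance), the bin index of a score can be computed directly. A small sketch based on the 20 equal-width bins above:

```python
def nmos_bin(score: float, num_bins: int = 20) -> int:
    """Map an NMOS score in [0.0, 1.0] to one of the equal-width bins."""
    # A score of exactly 1.0 falls into the last bin, [0.95, 1.00].
    return min(int(score * num_bins), num_bins - 1)
```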
|
|
|
|
|
This deliberate and uniform sampling strategy ensures that a model trained on this data is not biased towards high-quality examples and can learn to accurately assess audio across the full range of perceptual quality. |
|
|
|
|
|
Additionally, 1-second audio clips were intelligently extracted from more complex segments of the source files to provide the model with challenging and informative examples. |
|
|
|
|
|
### Source Data |
|
|
|
|
|
The degraded audio clips were created from a diverse library of high-fidelity material, **primarily music with a small portion of speech**. The source material was processed through a wide array of modern and legacy codecs (AAC, HE-AAC, xHE-AAC, Opus, MP3) at various bitrates and settings. |
|
|
|
|
|
|
|
|
## Usage with PyTorch and `webdataset` |
|
|
|
|
|
Here is a basic example of how to load the dataset using PyTorch and the `webdataset` library. |
|
|
|
|
|
First, install the necessary libraries: |
|
|
```bash |
|
|
pip install webdataset torch torchaudio |
|
|
``` |
|
|
|
|
|
Then, you can create a `DataLoader`: |
|
|
|
|
|
```python |
|
|
from io import BytesIO |
|
|
|
|
|
import torch |
|
|
import torch.nn.functional as F |
|
|
import torchaudio |
|
|
from torchaudio.transforms import Resample |
|
|
from webdataset import shardlists |
|
|
from webdataset.compat import WebLoader, WebDataset |
|
|
|
|
|
# --- Parameters --- |
|
|
# The path should point to the directory containing your shards. |
|
|
# Example: './data/shard-{000000..000071}.tar' |
|
|
# or 'gs://my-bucket/shard-{...}.tar' |
|
|
# or 'https://huggingface.co/datasets/overfitprolabse/subjective_audio_quality/resolve/main/shard-{000000..000071}.tar' |
|
|
# For a local Windows path, prefix the drive with 'file:' (e.g. 'file:D:/...')
|
|
|
|
|
DATASET_PATH = 'file:D:/audio_ds/subjective_audio_quality/shard-{000000..000071}.tar' |
|
|
VAL_SHARD_COUNT = 15 |
|
|
BATCH_SIZE = 1664 |
|
|
NUM_WORKERS = 8 |
|
|
TARGET_SR = 44100 |
|
|
# ------------------ |
|
|
|
|
|
all_shards = shardlists.expand_urls(DATASET_PATH) |
|
|
train_shards = all_shards[:-VAL_SHARD_COUNT] # Use all shards except the last N for training |
|
|
val_shards = all_shards[-VAL_SHARD_COUNT:] # Use the last N for validation |
|
|
|
|
|
resampler_cache: dict[int, Resample] = {} |
|
|
|
|
|
|
|
|
def get_resampler(orig_sr, target_sr): |
|
|
    if orig_sr not in resampler_cache:
        # Create and cache one Resample transform per source sample rate
        resampler_cache[orig_sr] = Resample(orig_sr, target_sr)
|
|
return resampler_cache[orig_sr] |
|
|
|
|
|
|
|
|
def preprocess_for_training(sample: dict[str, bytes | str]) -> dict[str, torch.Tensor] | None: |
|
|
try: |
|
|
wav_orig, sr_orig = torchaudio.load(BytesIO(sample["original.wav"])) |
|
|
wav_degr, sr_degr = torchaudio.load(BytesIO(sample["degraded.wav"])) |
|
|
|
|
|
if sr_orig != TARGET_SR: |
|
|
wav_orig = get_resampler(sr_orig, TARGET_SR)(wav_orig) |
|
|
if sr_degr != TARGET_SR: |
|
|
wav_degr = get_resampler(sr_degr, TARGET_SR)(wav_degr) |
|
|
|
|
|
if wav_orig.shape[0] > 1: |
|
|
wav_orig = torch.mean(wav_orig, dim=0, keepdim=True) |
|
|
if wav_degr.shape[0] > 1: |
|
|
wav_degr = torch.mean(wav_degr, dim=0, keepdim=True) |
|
|
|
|
|
target_len = int(TARGET_SR * 1.0) |
|
|
wav_orig = F.pad(wav_orig, (0, target_len - wav_orig.shape[1]))[:, :target_len] |
|
|
wav_degr = F.pad(wav_degr, (0, target_len - wav_degr.shape[1]))[:, :target_len] |
|
|
score = torch.tensor([float(sample["score.txt"])], dtype=torch.float32) |
|
|
return {"wav_orig": wav_orig, "wav_degr": wav_degr, "score": score} |
|
|
    except Exception:
        # Skip samples that fail to decode or parse
|
|
return None |
|
|
|
|
|
|
|
|
train_dataset = ( |
|
|
WebDataset(train_shards, resampled=True) |
|
|
.shuffle(1000) |
|
|
.map(preprocess_for_training) |
|
|
.select(lambda x: x is not None) |
|
|
.to_tuple('wav_orig', 'wav_degr', 'score') |
|
|
) |
|
|
|
|
|
train_loader = WebLoader( |
|
|
train_dataset, |
|
|
batch_size=BATCH_SIZE, |
|
|
shuffle=False, |
|
|
num_workers=NUM_WORKERS, |
|
|
pin_memory=False, |
|
|
prefetch_factor=2, |
|
|
) |
|
|
|
|
|
val_dataset = ( |
|
|
WebDataset(val_shards, shardshuffle=False) |
|
|
.map(preprocess_for_training) |
|
|
.select(lambda x: x is not None) |
|
|
.to_tuple('wav_orig', 'wav_degr', 'score') |
|
|
) |
|
|
|
|
|
val_loader = WebLoader( |
|
|
val_dataset, |
|
|
batch_size=BATCH_SIZE, |
|
|
shuffle=False, |
|
|
num_workers=NUM_WORKERS, |
|
|
pin_memory=False, |
|
|
prefetch_factor=2, |
|
|
) |
|
|
``` |
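A quick way to verify that the pipeline works is to pull a single batch and check the tensor shapes (assuming the loaders defined above):

```python
wav_orig, wav_degr, score = next(iter(train_loader))
print(wav_orig.shape)  # expected: (BATCH_SIZE, 1, 44100)
print(wav_degr.shape)  # expected: (BATCH_SIZE, 1, 44100)
print(score.shape)     # expected: (BATCH_SIZE, 1)
```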
|
|
|
|
|
## Optional: On-the-fly Data Augmentation |
|
|
|
|
|
To further improve model robustness and effectively increase the dataset size, it is highly recommended to apply on-the-fly data augmentations on the GPU. This introduces variety in each training epoch, helping the model generalize better. |
|
|
|
|
|
The functions below provide a highly efficient, vectorized implementation of common audio augmentations: random circular shift, gain adjustment, and reversing. Because these operations are performed on the GPU in batches, they introduce minimal overhead to the training pipeline. |
|
|
|
|
|
```python |
|
|
import torch |
|
|
|
|
|
|
|
|
def batch_roll(tensor: torch.Tensor, shifts: torch.Tensor) -> torch.Tensor: |
|
|
""" |
|
|
Applies a circular shift to a batch of tensors, each with its own shift value. |
|
|
This implementation uses torch.gather for maximum GPU performance. |
|
|
""" |
|
|
B, C, L = tensor.shape |
|
|
device = tensor.device |
|
|
# Wrap shifts to the [0, L-1] range for valid indexing |
|
|
shifts = shifts % L |
|
|
# Create base indices [0, 1, ..., L-1] |
|
|
idx = torch.arange(L, device=device) |
|
|
# Shift the indices for each item in the batch |
|
|
# (1, L) - (B, 1) -> (B, L) |
|
|
idx = (idx.unsqueeze(0) - shifts.unsqueeze(1)) % L |
|
|
# Expand indices to all channels |
|
|
idx = idx.unsqueeze(1).expand(B, C, L) |
|
|
# Gather the new tensor using the shifted indices |
|
|
return torch.gather(tensor, 2, idx) |
|
|
|
|
|
|
|
|
def apply_augmentations_batch_gpu( |
|
|
wav_orig: torch.Tensor, |
|
|
wav_degr: torch.Tensor, |
|
|
p_shift: float = 0.5, |
|
|
p_gain: float = 0.75, |
|
|
p_reverse: float = 0.0, |
|
|
) -> tuple[torch.Tensor, torch.Tensor]: |
|
|
""" |
|
|
Applies vectorized augmentations to an entire batch on the GPU. |
|
|
""" |
|
|
B, C, L = wav_orig.shape |
|
|
device = wav_orig.device |
|
|
|
|
|
# 1. Circular Shift |
|
|
# Apply shift to a subset of the batch determined by p_shift |
|
|
shift_mask = (torch.rand(B, device=device) < p_shift) |
|
|
if shift_mask.any(): |
|
|
max_shift = L // 4 |
|
|
# Generate random shifts for the entire batch |
|
|
shifts_all = torch.randint(-max_shift, max_shift + 1, (B,), device=device) |
|
|
# Zero out shifts where the mask is False |
|
|
shifts_all *= shift_mask.long() |
|
|
|
|
|
wav_orig = batch_roll(wav_orig, shifts_all) |
|
|
wav_degr = batch_roll(wav_degr, shifts_all) |
|
|
|
|
|
# 2. Gain Adjustment |
|
|
gain_mask = (torch.rand(B, device=device) < p_gain) |
|
|
if gain_mask.any(): |
|
|
# Create gain factors (0.5 to 1.0) for the entire batch |
|
|
gains = (0.5 + 0.5 * torch.rand(B, 1, 1, device=device)) |
|
|
# Apply gain only where the mask is True |
|
|
wav_orig[gain_mask] *= gains[gain_mask] |
|
|
wav_degr[gain_mask] *= gains[gain_mask] |
|
|
|
|
|
# 3. Reverse |
|
|
if p_reverse > 0: |
|
|
reverse_mask = (torch.rand(B, device=device) < p_reverse) |
|
|
if reverse_mask.any(): |
|
|
wav_orig[reverse_mask] = torch.flip(wav_orig[reverse_mask], dims=[-1]) |
|
|
wav_degr[reverse_mask] = torch.flip(wav_degr[reverse_mask], dims=[-1]) |
|
|
|
|
|
return wav_orig, wav_degr |
|
|
``` |
|
|
|
|
|
### How to Use |
|
|
|
|
|
You can apply these augmentations inside your training loop right after fetching a batch from the `DataLoader`. |
|
|
|
|
|
```python |
|
|
# Assuming train_loader is defined as in the previous example |
|
|
|
|
|
# Example training loop |
|
|
for wav_orig_batch, wav_degr_batch, score_batch in train_loader: |
|
|
# Move data to the GPU (if not already there) |
|
|
wav_orig_batch = wav_orig_batch.to('cuda') |
|
|
wav_degr_batch = wav_degr_batch.to('cuda') |
|
|
score_batch = score_batch.to('cuda') |
|
|
|
|
|
# Apply augmentations |
|
|
wav_orig_aug, wav_degr_aug = apply_augmentations_batch_gpu( |
|
|
wav_orig_batch, |
|
|
wav_degr_batch, |
|
|
p_shift=0.5, |
|
|
p_gain=0.75 |
|
|
) |
|
|
|
|
|
# Now, use wav_orig_aug and wav_degr_aug for training your model |
|
|
# ... your model forward pass, loss calculation, etc. |
|
|
# output = model(wav_orig_aug, wav_degr_aug) |
|
|
# loss = loss_fn(output, score_batch) |
|
|
# ... |
|
|
``` |
|
|
|
|
|
### Important Considerations |
|
|
|
|
|
#### A Note on the `reverse` Augmentation |
|
|
|
|
|
By default, the `reverse` augmentation is disabled (`p_reverse=0.0`). This is intentional. Reversing audio can interfere with the perception of **temporal artifacts**, such as **pre-echo**, which are characteristic of certain audio codecs. |
|
|
|
|
|
Furthermore, for a model to learn effectively from reversed audio, its architecture must be able to handle such transformations, which may not be the case for simpler convolutional networks. Enable this augmentation with caution, and only if your model is designed to be robust to time-reversal.
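If your architecture is robust to time-reversal and you do want to use it, simply pass a non-zero probability, for example (a usage sketch, not a recommendation):

```python
wav_orig_aug, wav_degr_aug = apply_augmentations_batch_gpu(
    wav_orig_batch,
    wav_degr_batch,
    p_shift=0.5,
    p_gain=0.75,
    p_reverse=0.25,  # reverse roughly a quarter of the batch
)
```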
|
|
|
|
|
#### A Note on Reproducibility |
|
|
|
|
|
Since these augmentations are applied randomly, **setting a random seed is crucial for ensuring reproducible training runs**. |
|
|
|
|
|
For best results, you should ensure that the augmentations are different for each epoch but identical for the same epoch across different runs. A common and effective practice is to set the seed at the beginning of each training epoch using a deterministic formula, such as: |
|
|
|
|
|
```python |
|
|
import random |
|
|
import numpy as np |
|
|
import torch |
|
|
|
|
|
initial_seed = 42 |
|
|
num_epochs = 10 |
|
|
|
|
|
for epoch in range(num_epochs): |
|
|
# Set a deterministic seed for the current epoch |
|
|
seed = initial_seed + epoch |
|
|
torch.manual_seed(seed) |
|
|
random.seed(seed) |
|
|
np.random.seed(seed) |
|
|
    # ...and seed any other libraries you use
|
|
|
|
|
# --- Your training loop for this epoch --- |
|
|
# for batch in train_loader: |
|
|
# ... |
|
|
``` |
|
|
|
|
|
This ensures that your experiments are perfectly reproducible while still benefiting from varied augmentations across epochs. |
|
|
|