# Saving enhanced signal on disk
torchaudio.save('enhanced.wav', enhanced.unsqueeze(0).cpu(), 16000)
```
The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling and mono-channel selection) when calling *enhance_file*, if needed. If you use *enhance_batch* as in the example, make sure your input tensor is compliant with the expected sampling rate.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
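A common pattern is to pick the device at run time, so the same script works with or without a GPU. In this sketch the `from_hparams` call is left as a comment because it downloads the pretrained model; the class name and `savedir` mirror the loading snippet earlier in this README and should be treated as assumptions:

```python
import torch

# Choose "cuda" only when a GPU is actually available
device = "cuda" if torch.cuda.is_available() else "cpu"

# Sketch of the model-loading call with run_opts (assumed names, not executed here):
# from speechbrain.pretrained import WaveformEnhancement
# enhance_model = WaveformEnhancement.from_hparams(
#     source="speechbrain/mtl-mimic-voicebank",
#     savedir="pretrained_models/mtl-mimic-voicebank",
#     run_opts={"device": device},
# )
print(device)
```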