Efficient Transformer-based Speech Enhancement Using Long Frames and STFT Magnitudes
This website complements the following conference paper:
Danilo de Oliveira, Tal Peer, Timo Gerkmann, "Efficient Transformer-based Speech Enhancement Using Long Frames and STFT Magnitudes", ISCA Interspeech, Incheon, Korea, Sep. 2022 [arxiv]
Abstract
The SepFormer architecture shows very good results in speech separation. Like other learned-encoder models, it uses short frames, as short frames have been shown to yield better performance in these cases. This results in a large number of frames at the input, which is problematic: since the SepFormer is transformer-based, its computational complexity drastically increases with longer sequences. In this paper, we employ the SepFormer in a speech enhancement task and show that by replacing the learned-encoder features with a magnitude short-time Fourier transform (STFT) representation, we can use long frames without compromising perceptual enhancement performance. We obtained equivalent quality and intelligibility evaluation scores while reducing the number of operations by a factor of approximately 8 for a 10-second utterance.
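To make the frame-count argument concrete, here is a minimal sketch of how the number of input frames shrinks when moving from very short learned-encoder frames to long STFT frames. The window and hop sizes below are illustrative assumptions, not the paper's exact configuration, and the quadratic ratio only bounds the attention cost (the SepFormer's dual-path processing keeps the realized speed-up more modest, around the 8x reported above):

```python
# Sketch: why long STFT frames shrink a transformer's workload.
# Window/hop sizes are illustrative assumptions, not the paper's values.
def num_frames(num_samples: int, win_length: int, hop_length: int) -> int:
    """Number of frames produced by a sliding window (no padding)."""
    return 1 + (num_samples - win_length) // hop_length

fs = 16_000                # assumed 16 kHz sampling rate
samples = 10 * fs          # a 10-second utterance

# Learned encoder: very short frames (e.g., 2 ms window, 1 ms hop).
short = num_frames(samples, win_length=32, hop_length=16)

# Magnitude STFT: long frames (e.g., 32 ms window, 16 ms hop).
long_ = num_frames(samples, win_length=512, hop_length=256)

# Self-attention cost grows quadratically with sequence length, so the
# ratio of frame counts is squared inside the attention layers.
print(short, long_, round((short / long_) ** 2))
```

With these assumed settings the learned encoder produces roughly 16x more frames than the long-frame STFT, which upper-bounds the attention-layer savings; the overall operation count also includes chunked dual-path and feed-forward stages, hence the smaller end-to-end factor.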
Audio examples
Below are some examples of audio enhanced by the SepFormer with the learned encoder and with the magnitude STFT features, along with execution times averaged over 10 runs on an Intel Core i9-10900X CPU at 3.70 GHz:
IMPORTANT NOTE:
For an optimal listening experience, please use a Chromium-based web browser to listen to the audio examples.
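As a rough sketch of how such averaged timings can be obtained, the helper below times a callable over several runs with `time.perf_counter`. The `dummy_enhance` workload is a placeholder standing in for a model's forward pass, not the actual SepFormer inference code:

```python
import time

def average_cpu_time(fn, *args, runs: int = 10) -> float:
    """Average wall-clock execution time of fn(*args) over several runs."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# Placeholder workload standing in for a model's forward pass (assumption).
def dummy_enhance(signal):
    return [0.5 * x for x in signal]

avg = average_cpu_time(dummy_enhance, list(range(16_000)))
print(f"average over 10 runs: {avg * 1e3:.2f} ms")
```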
Example 1
| Signal | CPU time |
|---|---|
| Clean | — |
| Noisy | — |
| Learned-encoder SepFormer [1] | 828 ms |
| Magnitude STFT SepFormer (ours) | 115 ms |
Example 2
| Signal | CPU time |
|---|---|
| Clean | — |
| Noisy | — |
| Learned-encoder SepFormer [1] | 796 ms |
| Magnitude STFT SepFormer (ours) | 113 ms |
Example 3
| Signal | CPU time |
|---|---|
| Clean | — |
| Noisy | — |
| Learned-encoder SepFormer [1] | 799 ms |
| Magnitude STFT SepFormer (ours) | 125 ms |
Example 4
| Signal | CPU time |
|---|---|
| Clean | — |
| Noisy | — |
| Learned-encoder SepFormer [1] | 831 ms |
| Magnitude STFT SepFormer (ours) | 114 ms |
Example 5
| Signal | CPU time |
|---|---|
| Clean | — |
| Noisy | — |
| Learned-encoder SepFormer [1] | 394 ms |
| Magnitude STFT SepFormer (ours) | 77 ms |
References
[1] C. Subakan, M. Ravanelli, S. Cornell, M. Bronzi, and J. Zhong, “Attention Is All You Need In Speech Separation,” in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, ON, Canada: IEEE, Jun. 2021, pp. 21–25.