GLA-Grad++: An Improved Griffin-Lim Guided Diffusion Model for Speech Synthesis

Teysir Baoueb, Xiaoyu Bie, Mathieu Fontaine, Gaël Richard

[Code (coming soon) | Paper]

Abstract: Recent advances in diffusion models have positioned them as powerful generative frameworks for speech synthesis, with substantial improvements in audio quality and stability. Nevertheless, their effectiveness as vocoders conditioned on mel spectrograms remains constrained, particularly when the conditioning diverges from the training distribution. The recently proposed GLA-Grad model introduced a phase-aware extension of the WaveGrad vocoder that integrates the Griffin-Lim algorithm (GLA) into the reverse process to reduce inconsistencies between the generated signal and the conditioning mel spectrogram. In this paper, we further improve GLA-Grad through a new choice of how the correction is applied: we compute the correction term only once, with a single application of GLA, which accelerates the generation process. Experimental results demonstrate that our method consistently outperforms the baseline models, particularly in out-of-domain scenarios.
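The single-application correction described above amounts to a one-shot Griffin-Lim-style projection: keep the phase of the current waveform estimate and replace its STFT magnitude with the target one. The sketch below illustrates this projection with SciPy; the window and hop sizes are illustrative assumptions rather than the paper's settings, and the target is a linear magnitude spectrogram instead of a mel spectrogram for simplicity.

```python
import numpy as np
from scipy.signal import stft, istft


def gla_correction(x, target_mag, fs=22050, nperseg=1024, noverlap=768):
    """One Griffin-Lim-style projection of the waveform estimate x.

    Keeps the phase of x's STFT, swaps in the target magnitude, and
    resynthesizes. A single such projection (rather than iterating GLA
    to convergence) is the spirit of the one-shot correction above.
    """
    # STFT of the current estimate; we only reuse its phase.
    _, _, X = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    phase = np.angle(X)
    # Impose the target magnitude while keeping the current phase.
    X_corr = target_mag * np.exp(1j * phase)
    # Back to the time domain; trim any padding introduced by the STFT.
    _, x_corr = istft(X_corr, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return x_corr[: len(x)]
```

In GLA-Grad++ this kind of correction is applied once during the reverse diffusion process, instead of repeatedly at every timestep, which is where the speed-up over GLA-Grad comes from.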

Contents

I. Illustration of the Generation Algorithm

[Figures: step 1 and step 2 of the generation algorithm]

II. Impact of the End Timestep of Stage 1

Ground-Truth 0 1 2 3 4 5 6

III. Randomly-Selected Samples

1. LJSpeech

Ground-Truth WaveGrad GLA-Grad GLA-Grad++

2. VCTK

Ground-Truth WaveGrad GLA-Grad GLA-Grad++

IV. Cherry-Picked Samples

1. LJSpeech

Ground-Truth WaveGrad GLA-Grad GLA-Grad++

2. VCTK

Ground-Truth WaveGrad GLA-Grad GLA-Grad++

References

[1] Haocheng Liu, Teysir Baoueb, Mathieu Fontaine, Jonathan Le Roux, and Gaël Richard, "GLA-Grad: A Griffin-Lim extended waveform generation diffusion model," in Proc. ICASSP, 2024.
[2] Nanxin Chen, Yu Zhang, Heiga Zen, Ron J. Weiss, Mohammad Norouzi, and William Chan, "WaveGrad: Estimating gradients for waveform generation," in Proc. ICLR, 2021.