Individuals who undergo a laryngectomy lose their ability to phonate. Although current treatment options allow alaryngeal speech, the low intelligibility of this speech hampers patients' daily communication and social life. In this paper, we presented two conversion methods for increasing the intelligibility and naturalness of speech produced by laryngectomees (LAR). The first method used a deep neural network to predict binary voicing/unvoicing decisions or the degree of aperiodicity. The second method used a conditional generative adversarial network to learn the mapping from LAR speech spectra to clearly articulated speech spectra. We also created a synthetic fundamental frequency trajectory with an intonation model consisting of phrase and accent curves. For both conversion methods, we showed objectively that speaker adaptation consistently improved the performance of pre-trained models. In subjective testing involving four LAR speakers, we significantly improved the naturalness of two speakers, and we also significantly improved the intelligibility of one speaker.
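The abstract mentions generating a synthetic fundamental frequency trajectory from an intonation model built from phrase and accent curves. The paper's exact model is not given here, so the following is only a minimal sketch assuming a Fujisaki-style superposition: a slowly decaying phrase component and step-like accent components are added in the log-F0 domain on top of a constant baseline. All function names, curve parameters (`alpha`, `beta`), and timing values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def phrase_curve(t, onset, amp, alpha=2.0):
    """Phrase component: impulse response of a critically damped
    second-order filter, zero before the phrase onset."""
    s = np.maximum(t - onset, 0.0)
    return amp * alpha**2 * s * np.exp(-alpha * s)

def accent_curve(t, start, end, amp, beta=20.0):
    """Accent component: difference of two smoothed step responses,
    producing a plateau between `start` and `end`."""
    def step(s):
        s = np.maximum(s, 0.0)
        return 1.0 - (1.0 + beta * s) * np.exp(-beta * s)
    return amp * (step(t - start) - step(t - end))

def synthetic_f0(duration=2.0, frame_rate=100, base_f0=100.0):
    """Superimpose one phrase curve and two accent curves on a
    constant log-F0 baseline; return (time axis, F0 in Hz)."""
    t = np.arange(0.0, duration, 1.0 / frame_rate)
    log_f0 = np.full_like(t, np.log(base_f0))
    log_f0 += phrase_curve(t, onset=0.0, amp=0.5)
    log_f0 += accent_curve(t, start=0.4, end=0.7, amp=0.3)   # first accent
    log_f0 += accent_curve(t, start=1.2, end=1.6, amp=0.2)   # second accent
    return t, np.exp(log_f0)

t, f0 = synthetic_f0()
```

In a conversion pipeline, a trajectory like `f0` would replace the unreliable F0 of alaryngeal speech during resynthesis, with the predicted voicing decisions gating where it is applied.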
We convert alaryngeal speech to clearly-spoken speech
| Speaker | Alaryngeal Speech | Predict Voicing | Predict Voicing & Spectrum | Predict Spectrum | Clearly Spoken Speech |
|---|---|---|---|---|---|
| DL001: TEP | | | | | |
| DL004: ELX | | | | | |
| L006: TEP | | | | | |