Conventional noise-robust algorithms cannot solve the robustness problem of automatic speech recognition (ASR) systems when competing human speech is present in the background. This paper proposes a hybrid speech segregation system based on computational auditory scene analysis (CASA) and speaker model information. Within the CASA framework, the system uses speaker model information and factorial-max vector quantization (MAXVQ) to estimate real-valued masks, effectively separating the target speaker's speech from a two-speaker mixture and thereby providing a robust recognition front-end for the ASR system. Evaluations on the speech separation challenge (SSC) corpus show that the proposed system improves speech recognition accuracy by 15.68% over the baseline system. Further experimental results also verify the effectiveness of the proposed multi-speaker recognition and real-valued mask estimation modules.
Conventional noise-robust speech recognition systems do not work well when competing human speech is present in the background. In this paper, a speech segregation system based on computational auditory scene analysis (CASA) and speaker models is proposed to solve this problem. By utilizing speaker models and factorial-max vector quantization (MAXVQ) to estimate real-valued masks within the CASA framework, a robust front-end for speech recognition is constructed. Evaluations on the speech separation challenge (SSC) corpus show that the proposed system achieves a 15.68% improvement in recognition accuracy over the baseline system. The evaluation results also confirm the effectiveness of the multi-speaker recognition and real-valued mask estimation modules.
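The core operation the abstract describes is estimating a real-valued (soft) time-frequency mask and applying it to the mixture to recover the target speaker. The following is a minimal illustrative sketch of that idea using a ratio mask computed from per-speaker power spectra; it is not the paper's MAXVQ-based estimator, and the function names and toy spectra are assumptions for illustration only.

```python
import numpy as np

def estimate_real_value_mask(target_power, interferer_power, eps=1e-10):
    """Soft (real-valued) time-frequency mask: the fraction of energy in
    each T-F unit attributed to the target speaker (a ratio mask).
    Values lie in [0, 1]; a binary mask would threshold this instead."""
    return target_power / (target_power + interferer_power + eps)

def segregate(mixture_power, mask):
    """Apply the mask element-wise to the mixture spectrogram to
    attenuate the interfering speaker and keep the target."""
    return mask * mixture_power

# Toy example: 3 frequency channels x 4 frames of hypothetical
# power spectra for the target and the interfering speaker.
target = np.array([[4.0, 1.0, 0.0, 2.0],
                   [0.0, 3.0, 1.0, 0.0],
                   [1.0, 0.0, 2.0, 4.0]])
interf = np.array([[1.0, 1.0, 2.0, 0.0],
                   [2.0, 0.0, 1.0, 3.0],
                   [0.0, 4.0, 2.0, 1.0]])

mixture = target + interf                       # additive two-speaker mixture
mask = estimate_real_value_mask(target, interf)  # real-valued mask in [0, 1]
recovered = segregate(mixture, mask)             # approximately the target
```

In a full CASA pipeline, the per-speaker powers above would themselves be hidden and would have to be inferred from the mixture (which is where speaker models and MAXVQ come in); here they are given directly so the masking step itself is easy to follow.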