Advanced audio noise characterization and filtration for accelerated Automated Speech Recognition

Environmental noise can provide a rich source of information about the current context. For example, humans often infer the location of a respondent in a mobile phone conversation by identifying the background noise and adjusting their responses accordingly. Emulating this capability with computers and automated algorithms is challenging because of the inherent diversity and complexity of background noise. Noise characterisation is increasingly in demand owing to its applicability in applications such as speech recognition and voice activity detection. This project aims to develop a set of novel tools that automatically characterise the surrounding noise field from blind audio recordings made with a single microphone or multiple microphones, and to analyse their applicability in advancing existing speech recognition techniques. The specific objectives of the project are: (i) development and evaluation of an environmental noise classifier using an advanced feature vector; (ii) robust speech detection directed by the aforementioned noise classifier.
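
The following is a minimal sketch of the two-stage idea described above, not the project's actual method. It assumes MFCC-based feature vectors (via librosa), an SVM classifier (via scikit-learn), a simple energy-threshold voice activity detector, and placeholder noise-class labels, training data, and threshold values; all of these are illustrative assumptions rather than details taken from the project description.

```python
# Sketch: (i) classify the ambient noise from a feature vector, then
# (ii) use the predicted noise class to direct a simple speech detector.
# Feature set, classifier, labels, data, and thresholds are all placeholders.
import numpy as np
import librosa
from sklearn.svm import SVC

NOISE_CLASSES = ["street", "cafe", "office", "vehicle"]  # hypothetical labels


def noise_feature_vector(audio, sr):
    """Summarise a clip by the mean and std of its MFCCs (one possible
    choice of feature vector; the project's actual features may differ)."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Stage (i): train an environmental-noise classifier.
# Placeholder training set: random vectors standing in for labelled clips.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 40))
y_train = rng.integers(len(NOISE_CLASSES), size=200)
clf = SVC(probability=True).fit(X_train, y_train)

# Stage (ii): let the predicted noise class direct speech detection,
# here by selecting a class-specific energy threshold (assumed values).
VAD_THRESHOLDS = {0: 0.02, 1: 0.03, 2: 0.01, 3: 0.04}


def detect_speech(audio, sr, frame_len=2048, hop=512):
    """Return a per-frame boolean mask of likely speech activity."""
    noise_class = int(clf.predict([noise_feature_vector(audio, sr)])[0])
    rms = librosa.feature.rms(y=audio, frame_length=frame_len, hop_length=hop)[0]
    return rms > VAD_THRESHOLDS[noise_class]
```

In practice, the random placeholder data would be replaced with labelled recordings of each noise environment, and the per-class detection behaviour would be learned from data rather than fixed thresholds.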


Funding

This is an Industry Project funded by the Australian Signals Directorate as part of the ANU-ASD CoLAB.


Partners

Australian Signals Directorate

References

1. A. P. Bates, D. Grixti-Cheng, P. Samarasinghe and T. Abhayapala, "On the use of the Relative Transfer Function for Source Separation using Two-channel Recordings," 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Auckland, New Zealand, 2020, pp. 734-738.
