Seminar at Speech, Music and Hearing:
Mining Speech Sounds: Machine Learning Methods for Automatic Speech Recognition and Analysis
Giampiero Salvi, TMH
Opponent: Torbjørn Svendsen
Abstract

This thesis collects studies on machine learning methods applied to speech technology and speech research problems. The six research papers included in the thesis are organised in three main areas.
The first group of studies was carried out within the European project Synface. The aim was to develop a low-latency phonetic recogniser to drive the articulatory movements of a computer-generated virtual face from the acoustic speech signal. The visual information provided by the face serves as a hearing aid for telephone users.
Paper A compares two solutions based on regression and classification techniques that address the problem of mapping acoustic to visual information. Recurrent Neural Networks are used to perform regression whereas Hidden Markov Models are used for the classification task. In the second case, the visual information needed to drive the synthetic face is obtained by interpolation between target values for each acoustic class. The evaluation is based on listening tests with hearing-impaired subjects, where the intelligibility of sentence material is compared in different conditions: audio alone, audio and natural face, and audio and synthetic face driven by the different methods.
Paper B analyses the behaviour, in low latency conditions, of a phonetic recogniser based on a hybrid of recurrent neural networks (RNNs) and hidden Markov models (HMMs). The focus is on the interaction between the time evolution model learned by the RNNs and the one imposed by the HMMs.
Paper C investigates the possibility of using the entropy of the posterior probabilities estimated by a phoneme classification neural network as a feature for phonetic boundary detection. The entropy and its time evolution are analysed with respect to the identity of the phonetic segment and the distance from a reference phonetic boundary.
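The core quantity in Paper C, the framewise entropy of phoneme posteriors, can be illustrated with a minimal sketch. This is not the thesis's implementation; the posterior values below are hypothetical, and only the entropy computation itself is shown: a confident frame yields low entropy, while an ambiguous frame (as one might expect near a phonetic boundary) yields high entropy.

```python
import numpy as np

def posterior_entropy(posteriors):
    """Shannon entropy (in bits) of each row of a (frames x classes)
    matrix of posterior probabilities. Rows are assumed to sum to 1.
    High entropy means the classifier is uncertain for that frame."""
    p = np.clip(posteriors, 1e-12, 1.0)  # avoid log(0)
    return -np.sum(p * np.log2(p), axis=1)

# Illustrative posteriors for 3 frames over 4 phoneme classes
# (hypothetical values, not data from the thesis):
post = np.array([
    [0.97, 0.01, 0.01, 0.01],  # confident frame: low entropy
    [0.25, 0.25, 0.25, 0.25],  # maximally uncertain: 2 bits
    [0.70, 0.20, 0.05, 0.05],  # intermediate case
])
h = posterior_entropy(post)
```

A boundary detector along the lines Paper C studies would then look for peaks in `h` over time rather than at isolated frames.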
In the second group of studies, the aim was to provide tools for analysing large amounts of speech data in order to study geographical variation in pronunciation (accent analysis).
Paper D and Paper E use Hidden Markov Models and Agglomerative Hierarchical Clustering to analyse a data set of about 100 million data points (5000 speakers, 270 hours of speech recordings). In Paper E, Linear Discriminant Analysis was used to determine the features that most concisely describe the groupings obtained with the clustering procedure.
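The clustering step used in Papers D and E can be sketched in miniature. The feature vectors below are synthetic stand-ins (two artificial "accent groups"), not the thesis's acoustic features, and Ward linkage is one common choice rather than necessarily the one used in the papers:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-speaker feature vectors, drawn from two
# well-separated groups (illustrative values only):
rng = np.random.default_rng(0)
region_features = np.vstack([
    rng.normal(0.0, 0.1, size=(5, 3)),  # speakers of accent group A
    rng.normal(1.0, 0.1, size=(5, 3)),  # speakers of accent group B
])

# Agglomerative hierarchical clustering: iteratively merge the two
# closest clusters, then cut the resulting tree into two groups.
Z = linkage(region_features, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

The dendrogram encoded in `Z` is what makes the method attractive for accent analysis: cutting it at different heights yields groupings at different levels of granularity.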
The third group of studies was carried out within the international project MILLE (Modelling Language Learning), which aims at investigating and modelling the language acquisition process in infants.
Paper F proposes the use of an incremental form of Model-Based Clustering to describe the unsupervised emergence of phonetic classes in the first stages of language acquisition. The experiments were carried out on child-directed speech expressly collected for the purposes of the project.
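The idea behind model-based clustering, which Paper F extends to an incremental setting, is to treat the data as drawn from a mixture model and let a model-selection criterion decide how many classes have emerged. A minimal batch sketch (not the incremental algorithm of Paper F; the "formant-like" values are synthetic) using a Gaussian mixture and the BIC criterion:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1-D "formant-like" observations from two hypothetical
# vowel classes (the thesis uses real child-directed speech):
rng = np.random.default_rng(1)
data = np.concatenate([
    rng.normal(300.0, 20.0, 200),
    rng.normal(700.0, 20.0, 200),
]).reshape(-1, 1)

# Model-based clustering: fit mixtures of increasing size and keep
# the model that minimises the Bayesian Information Criterion.
best_k, best_bic = None, np.inf
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bic = gmm.bic(data)
    if bic < best_bic:
        best_k, best_bic = k, bic
```

An incremental variant, as in Paper F, would update the mixture as new utterances arrive instead of refitting from scratch, so that phonetic classes can emerge gradually during exposure.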
13:00 - 17:00
Friday October 6, 2006
The seminar is held in F3, Lindstedtsvägen 26.