In this seminar we will present work carried out in the EU-funded CHIL project, where the task was to perform automatic recognition of expressiveness in natural speech. We present two corpora: the ISL meeting data corpus, which contains human-human interaction in English, and the VoiceProvider corpus, which contains human-computer interaction in Swedish. Two approaches to automatic classification are presented. The first is based on typical measurements such as fundamental frequency, intensity, formants, voice quality and duration, while the second is based on Mel-Frequency Cepstral Coefficients (MFCCs) computed in the pitch region of 20-600 Hz and in the formant and spectral-shape region above 300 Hz. A more detailed analysis, based on Mimmi Forsell's master's thesis, of a selected part of the VoiceProvider corpus will also be presented.
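The abstract does not specify how the band-limited MFCCs were implemented, but the idea of restricting the mel filterbank to a frequency region such as 20-600 Hz can be sketched as follows. This is a minimal NumPy illustration, not the CHIL project's actual feature extractor; the frame length, hop size, filter count, and cepstral order below are all assumed values for the sake of the example.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr, fmin, fmax):
    """Triangular mel filters restricted to the band [fmin, fmax] Hz."""
    mels = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):                      # rising slope
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling slope
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def band_mfcc(signal, sr, fmin, fmax,
              n_filters=20, n_ceps=12, frame_len=400, hop=160, n_fft=512):
    """MFCCs computed only over the band [fmin, fmax] Hz (assumed parameters)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    fb = mel_filterbank(n_filters, n_fft, sr, fmin, fmax)
    log_mel = np.log(power @ fb.T + 1e-10)
    # DCT-II basis projecting log mel energies onto cepstral coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.arange(n_ceps)[:, None] * (2 * n[None, :] + 1)
                 / (2 * n_filters))
    return log_mel @ dct.T

# Example: features in the pitch region vs. the region above 300 Hz
sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 200 * t)
pitch_feats = band_mfcc(sig, sr, 20, 600)      # pitch region, per the abstract
formant_feats = band_mfcc(sig, sr, 300, 3500)  # upper band; 3500 Hz cap assumed
```

Running the two calls side by side gives one cepstral vector per frame for each band, which could then be concatenated as input to a classifier.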