Synthesis and Decoding of Emotionally Expressive Music Performance
Roberto Bresin, Anders Friberg
Abstract A recently developed application of Director Musices (DM) is presented. The DM is a rule-based software tool
for automatic music performance developed at the Speech Music and Hearing Dept. at the Royal Institute of Technology,
Stockholm. It is written in Common Lisp and is available both for Windows and Macintosh. It will be demonstrated
that particular combinations of rules defined in the DM can be used for synthesizing performances that differ in
emotional quality. Different performances of two pieces of music were synthesized so as to elicit listeners' associations
to six different emotions (fear, anger, happiness, sadness, tenderness, and solemnity). Performance rules and their
parameters were selected so as to match previous findings about emotional aspects of music performance. Variations
of the performance variables IOI (Inter-Onset Interval), OOI (Offset-Onset Interval) and L (Sound Level) will be
presented for each rule-setup. In a forced-choice listening test 20 listeners were asked to classify the performances
with respect to emotions. The results showed that the listeners, with very few exceptions, recognized the intended
emotions correctly. This shows that a proper selection of rules and rule parameters in DM can indeed produce a
wide variety of meaningful, emotional performances, even extending the scope of the original rule definition. A
variety of applications of this musical emotion synthesis can be imagined. One would be an "emotional
performance tool-box" in which users choose different emotions by selecting a button or a combination of buttons.
This could be implemented in Java tools, thus allowing direct application to the large MIDI music
databases already existing on the Internet. Also, icons, e.g. emoticons such as ":-)", which are sometimes used in email
messages, could be complemented by emotional performances of music excerpts attached to the message
in order to communicate certain emotions. It may also be possible to let a user's (e.g. a dancer's) facial
or body expressions control the emotional quality of a music performance in real time. Synthesis of "emotional
performances" could also be used for playing music in computer-generated cartoons or films. Sound examples
will be provided.
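To make the abstract's three performance variables concrete, the following is a minimal, hypothetical Python sketch of how an emotion-specific "rule-setup" could transform a note list by scaling IOI (tempo), OOI (articulation), and sound level. This is an illustration only, not the actual Director Musices implementation, which is written in Common Lisp and combines many interacting rules; all parameter values below are invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Note:
    ioi_ms: float    # inter-onset interval in ms (controls tempo)
    ooi_ms: float    # offset-onset interval in ms (controls articulation)
    level_db: float  # sound-level offset in dB

# Invented example values: each emotion maps to multiplicative factors
# for IOI and OOI and an additive offset for sound level.
RULE_SETUPS = {
    "sadness":   {"ioi": 1.30, "ooi": 0.5, "level": -4.0},  # slower, legato, softer
    "happiness": {"ioi": 0.85, "ooi": 1.5, "level": +2.0},  # faster, staccato, louder
    "anger":     {"ioi": 0.90, "ooi": 1.8, "level": +4.0},  # fast, sharp, loud
}

def apply_setup(notes, emotion):
    """Return a new note list transformed by the chosen rule-setup."""
    s = RULE_SETUPS[emotion]
    return [Note(n.ioi_ms * s["ioi"],
                 n.ooi_ms * s["ooi"],
                 n.level_db + s["level"]) for n in notes]

# A two-note melody played "deadpan", then rendered as "sadness".
melody = [Note(500.0, 50.0, 0.0), Note(250.0, 50.0, 0.0)]
sad = apply_setup(melody, "sadness")
print(sad[0])
```

The real rule-setups also interact with the musical structure (phrasing, meter, harmony), so a flat per-note scaling like this only hints at the kind of variation the listening test compared.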
KTH - Royal Institute of Technology, TMH - Department of Speech, Music and Hearing, Drottning Kristinas v. 31, SE-100 44 Stockholm, Sweden
phone +46 (8) 790 78 76
Updated 99.06.16