
Session - 00 Plenary

OPENING
Recent developments in musical sound synthesis based on a physical model
J O Smith
Stanford University, Music Dept. / CCRMA, Stanford, United States

This presentation will provide an overview of relatively recent methods for musical sound synthesis based on physical models from musical acoustics, including selected highlights from the more recent literature.
For musical sound synthesis, fast algorithms are usually required, e.g., for real-time musical performance. While at least a few voices of some high-quality models can run in real time on modern general-purpose processors, there is always a desire for more voices in real time. Fast execution is also important during the algorithm development phase, since manual "voicing" is greatly facilitated by real-time experimentation. Automatic voicing is still largely a subject of ongoing research.
We begin with a brief review of general finite difference methods for discrete-time simulation of acoustic systems. Next, these methods are related to faster computational forms normally associated with digital signal processing techniques. In particular, the "digital waveguide" and "wave digital" paradigms are briefly reviewed. More specialized methods will be discussed, which may be applied to the simulation of nonlinearities, distributed scattering, and model-order reduction (with little or no audible error).
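
As a concrete illustration of the delay-line idea behind digital waveguides, the following Python sketch synthesizes a plucked string in the classic Karplus-Strong style. It is a minimal sketch only: the noise-burst excitation and the two-point-average loss filter are standard textbook simplifications, not the specific methods surveyed in the talk.

    import numpy as np

    def waveguide_pluck(f0=220.0, fs=44100, dur=1.0):
        """A single delay line of about one period, with a two-point
        average acting as a mild low-pass loss filter at the 'bridge'."""
        N = int(round(fs / f0))                 # delay length ~ one period
        buf = np.random.uniform(-1.0, 1.0, N)   # noise burst as the "pluck"
        out = np.empty(int(fs * dur))
        i = 0
        for n in range(out.size):
            j = (i + 1) % N
            buf[i] = 0.5 * (buf[i] + buf[j])    # two-point average = loss
            out[n] = buf[i]
            i = j
        return out

    tone = waveguide_pluck()                    # ~1 s decaying pluck near 220 Hz

The per-sample cost is one average and one buffer write, which hints at why waveguide formulations are attractive for real-time use compared with dense finite-difference grids.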


From musical acoustics to everyday acoustics: a physical modeling route
D Rocchesso
Università di Verona, Dipartimento di Informatica, Verona, Italy

In the year 2000, I started thinking about how the experience and knowledge gained in physical modeling from musical acoustics could be profitably used to model everyday sound phenomena. At that time, there were some remarkable success stories of non-musical, non-speech sound synthesis [Cook, Gaver]. However, I had the feeling that more fundamental research had to be done to obtain models capable of describing large families of sounds while being at the same time efficient and simple to control. This was the rationale behind the Sounding Object (SOb) EU project (soundobject.org). We put together a group of researchers who started working from models inherited from our own experience in musical acoustics and steered them toward everyday sound modeling by considering recent studies in applied mechanics and automatic control. Since everyday sounds are rarely produced by linearly excited one-dimensional resonators, we departed from classic waveguide modeling to adopt a more general and manageable modal description of resonators. By imposing appropriate parameters and control signals, we realized models of rolling, sliding, squeaking, breaking, crumpling, walking, running, etc., all based on a similar kernel impact or friction model. All the models are so efficient that many instances of them can run in real time and drive simultaneous interactive 3D graphics.
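
To make the modal description concrete, here is a minimal Python sketch of an impact driving a modal resonator: the response is a sum of exponentially damped sinusoids, one per mode. The mode frequencies, decay times, and gains below are invented for illustration and are not taken from the SOb models.

    import numpy as np

    def modal_impact(freqs, decays, gains, fs=44100, dur=0.5):
        """Impulse response of a bank of modal resonators: a sum of
        exponentially damped sinusoids, one per mode."""
        t = np.arange(int(fs * dur)) / fs
        out = np.zeros_like(t)
        for f, tau, g in zip(freqs, decays, gains):
            out += g * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
        return out

    # Hypothetical mode data, loosely evoking a small struck metal object.
    sound = modal_impact(freqs=[523.0, 1414.0, 2538.0],
                         decays=[0.40, 0.25, 0.12],
                         gains=[1.0, 0.6, 0.3])

Because each mode is computed independently, the same resonator can be re-excited by a stream of micro-impacts to suggest rolling or crumpling, which is the kind of reuse the abstract describes.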


THE SUNDBERG SESSION


A marriage of the Director Musices program and the Conductor Program
M V Mathews¹, G Bennett², C Sapp³, A Friberg⁴, J Sundberg⁴
¹Stanford University, Music, Stanford, California, United States; ²Musik Hochschule, Composition, Zurich, Switzerland; ³Johns Hopkins University, Music, Baltimore, Maryland, United States; ⁴KTH, Speech and Music, Stockholm, Sweden

This paper will describe an ongoing collaboration between the authors to combine the Director Musices and Conductor programs in order to achieve a more expressive and socially interactive performance of a MIDI-file score by an electronic orchestra. Director Musices processes a "square" MIDI file, adjusting the dynamics and timing of the notes to achieve the expressive performance of a trained musician. The Conductor program and Radio-Baton allow a conductor, wielding an electronic baton, to follow and synchronize with other musicians, for example to provide an orchestral accompaniment to an operatic singer. These programs may be particularly useful for student soloists who wish to practice concertos with orchestral accompaniments.
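
The synchronization task can be pictured as a time warp from score beats to conducted beats. The Python sketch below is not the actual Conductor algorithm; it only shows the piecewise-linear mapping idea, with all beat times invented for illustration.

    import numpy as np

    def warp_onsets(note_onsets, score_beats, baton_beats):
        """Map note onsets (in score beats) to performance time (seconds)
        by piecewise-linear interpolation between conducted beat times."""
        return np.interp(note_onsets, score_beats, baton_beats)

    # Hypothetical data: four beats conducted with a slight ritardando.
    score_beats = [0.0, 1.0, 2.0, 3.0]
    baton_beats = [0.0, 0.52, 1.08, 1.72]           # seconds, from the baton
    onsets      = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
    print(warp_onsets(onsets, score_beats, baton_beats))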


Expressiveness in music performance: analysis and modeling
G De Poli
University of Padova, CSC-DEI, Padova, Italy

Expression is an important aspect of music performance. It is the added value of a performance and is part of the reason that music is interesting to listen to and sounds alive. Moreover, understanding and modeling the communication of expressive content is important in many engineering applications. In human musical performance, acoustical or perceptual changes in sound are organized in a complex way by the performer in order to communicate different emotions to the listener. The same piece of music can be performed so as to convey a specific interpretation of the score, by adding mutable expressive intentions. The analysis of these systematic deviations has led to the formulation of several models that try to describe their structure, and that aim at explaining where, how, and why a performer modifies, sometimes unconsciously, what is indicated by the notation of the score. It should be noted that, although deviations are only the external surface of something deeper and not directly accessible, they are quite easily measurable, and it is therefore useful to develop computational models of them in scientific research. Analysis and modeling methodologies will be reviewed and discussed, with special reference to the work developed by J. Sundberg and co-workers at KTH.
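
As a toy example of such a computational model, the following Python function applies a single rule in the spirit of the KTH rule system: exaggerating the contrast between short and long notes, scaled by a rule quantity k. The rule form and the 5% scaling are invented for illustration; the actual Director Musices rules are more elaborate.

    def duration_contrast(durations, k=1.0):
        """Lengthen notes that are longer than average and shorten
        shorter ones; k scales the size of the deviation."""
        mean = sum(durations) / len(durations)
        return [d * (1.0 + 0.05 * k * (d - mean) / mean) for d in durations]

    # Note durations in seconds: the short notes shrink, the long note grows.
    print(duration_contrast([0.25, 0.25, 0.5, 1.0], k=1.5))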


Tonality eludes acoustics, cognition, culture, music theory, and brain science
C L Krumhansl
Cornell University, Psychology, Ithaca, NY, United States

In this talk, I will survey a range of approaches to understanding tonality. Limits in acoustic explanations for Western scales and harmony raise questions about the role of learning and cognition. Results of psychological studies indicate that Western listeners, even those without formal instruction, have extensive knowledge of typical tonal and harmonic patterns. This knowledge is used by listeners to organize and remember music, and by performers to plan and execute expressive performances. Empirical studies implicate statistical learning in acquiring this knowledge. Cross-cultural studies suggest listeners possess a highly flexible cognitive system that may, nonetheless, be constrained by certain acoustic and psychological principles. Another issue is whether tonality, as formally elaborated within music theory, can account for stylistic variations across historical periods, particularly compositional innovations in the last century. Theoretical proposals extending to chromatic, hexatonic, and octatonic compositions are tested. This and related work suggests a possible link between musical tension and emotion. A final approach is to investigate the neural systems involved in processing tonality with techniques of brain imaging. Localized cortical areas are identified, but these appear to be activated during a variety of cognitive tasks. Despite an intense multidisciplinary effort, tonality remains remarkably elusive.
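
One concrete, widely used outcome of this line of psychological work is key finding by profile correlation. The Python sketch below correlates a pitch-class duration histogram with rotations of the Krumhansl-Kessler probe-tone profiles; the input histogram is invented for illustration.

    import numpy as np

    # Krumhansl-Kessler probe-tone profiles, tonic first (Krumhansl, 1990).
    MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                      2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
    MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                      2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

    def find_key(pc_hist):
        """Return (tonic pitch class, mode) whose rotated profile
        correlates best with the 12-bin pitch-class histogram."""
        best, best_r = None, -2.0
        for tonic in range(12):
            for mode, profile in (("major", MAJOR), ("minor", MINOR)):
                r = np.corrcoef(pc_hist, np.roll(profile, tonic))[0, 1]
                if r > best_r:
                    best, best_r = (tonic, mode), r
        return best

    # Hypothetical histogram weighted toward C, E, and G.
    hist = np.array([4, 0, 1, 0, 3, 1, 0, 3, 0, 1, 0, 1], dtype=float)
    print(find_key(hist))   # expected: (0, 'major'), i.e. C major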


The voice as a musical instrument: fundamental differences between man-made and biological designs
I R Titze
University of Iowa, Dept. of Speech Pathology and Audiology, Iowa City, United States

Although the singing voice resembles string and wind instruments in many ways, there is a major size difference that forces the biological system to take advantage of nonlinearities. Both the length of the vocal fold and the length of the vocal tract are well below standard (in terms of musical instruments). Nature has compensated by creating a ligament "string" that has a nonlinear stress-strain relation and a nonuniform cross-section to cover a wide pitch range. The vocal tract, being only 10-20 cm long, can resonate only one or two harmonics at a time at high pitches, but general harmonic reinforcement can be obtained by inertive source-resonator coupling, a nonlinear effect not often used in speech. But a price is paid for the heavy use of nonlinearity: the instrument becomes more difficult to control. Pitch and intensity, for example, are never independent. It requires a complex central nervous system to play the instrument. Furthermore, there are two critical pathways to deal with in laryngeal control, the speech motor pathways and the limbic system pathways. These pathways can interfere with each other in the production of musical sounds, but they can also be useful in the expression of emotion.
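
To see how a nonlinear stress-strain relation widens the pitch range of so short a "string", consider the ideal-string formula F0 = (1/2L)*sqrt(sigma/rho) with stress growing exponentially in strain. The Python sketch below uses plausible vocal-fold dimensions but invented stiffness constants; it illustrates the principle only and is not the author's model.

    import numpy as np

    def f0_from_strain(strain, L0=0.016, rho=1040.0, A=4.0e3, B=10.0):
        """Ideal-string F0 for a fold of rest length L0 (m) and tissue
        density rho (kg/m^3), with exponential stress-strain
        sigma = A * (exp(B * strain) - 1). A and B are invented."""
        sigma = A * (np.exp(B * strain) - 1.0)   # stress, Pa
        L = L0 * (1.0 + strain)                  # elongated length, m
        return np.sqrt(sigma / rho) / (2.0 * L)  # fundamental, Hz

    for eps in (0.1, 0.2, 0.3, 0.4):
        print(f"strain {eps:.1f}: F0 ~ {f0_from_strain(eps):.0f} Hz")

With these numbers, a 40% elongation already spans roughly 70-320 Hz, whereas a linear stress-strain law over the same strain range would change F0 by well under a factor of two.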


Why music does not produce basic emotions: A plea for changing the paradigm in studying the emotional effects of music
K R Scherer
University of Geneva, Geneva, Switzerland

It is suggested that the study of the emotional effects of music has been handicapped by two common misunderstandings: 1) the tendency to assume that "emotions" and "feelings" are synonyms, and 2) the tendency to assume that music evokes "basic" or "fundamental" emotions. As to 1), it can be shown that "feelings" can be profitably conceptualized as one component of emotion, albeit a very important one. The feeling component integrates all the other components of emotion and serves as the basis for the conscious representation of emotional processes and for affect regulation. As to 2), the notion of a limited number of basic or fundamental emotions is (correctly) based on the evolutionarily continuous adaptational function of emotion processes. However, in focusing on the small number of emotion classes that can be identified for a large number of different species, psychobiologists have downplayed the evolution of much more complex forms of emotional processes in humans, including affective states produced by music. It is proposed that a radical paradigm change is required to free research on the emotional effects of music from the excessive constraints imposed by these two common misconceptions. Concretely, it is suggested that affect produced by music should be studied as (more or less conscious) feelings that integrate cognitive and physiological effects, which may be accounted for by widely different production rules. This suggestion will be buttressed by recent research findings from the Geneva Emotion Research Group.
