
"Music is the Shorthand of Emotion" continued

which blasts from loudspeakers attached to the choppers. The result is a chilling yet unspoken commentary on modern warfare as well as a blunt comment on the American psyche. And who can forget the ominous leitmotif from the film adaptation of Peter Benchley's thriller, Jaws?

Technology has made music instantly available to almost everyone. Walkmans, portable CD and MP3 players, sophisticated sound systems in our cars and SUVs, and waterproof radios make music accessible anywhere, even in the shower! Our lives are immersed in background music, whether we want it or not: in the grocery store, in restaurants, bookstores, malls and, of course, in elevators, spawning the Muzak moniker for bland and unctuous tunes by the likes of Percy Faith.

But awash as we are in this sea of sound, do we really listen? Psychologist Mihaly Csikszentmihalyi, author of Flow: The Psychology of Optimal Experience, believes that to enjoy music, we must first listen to it. He states, "It's not the hearing that improves life, it is the listening" (1990). Csikszentmihalyi separates listening into three levels or stages: sensory, analogic and analytic. He postulates that at the sensory level the listener is responding somehow to the "qualities of sound that induce the pleasant physical reactions that are genetically wired into our nervous system." He speculates that we are especially sensitive to music's beat and rhythm, which may bring back memories of the maternal heartbeat. The analogic mode of listening is the skill of "evoking feelings and images based on the patterns of sound…", for example visualizing a sleigh ride through snow while listening to Tchaikovsky. The third and, in Csikszentmihalyi's opinion, the most complex and advanced stage of listening is the analytic one, in which the listener shifts her attention to "the structural elements of music, instead of the sensory or narrative ones." Critical evaluation, comparison and analysis are all actively employed by the listener.

On one hand, Csikszentmihalyi argues that it is the music itself that evokes a (sensory) response in the listener; on the other, he believes that the analogic and analytic skills developed by the listener can be applied to any piece of music to enhance one's appreciation. This indirectly reflects a long-standing argument in the music and emotion research arena: is the emotional processing of music a product of evolution with an adaptive advantage, or are the emotions evoked by music a mere by-product of the way the human brain is wired? The latter view is held by cognitive scientist Steven Pinker, who calls emotional responses to music "so much cheesecake" (1997), an epiphenomenon, if you will, of the brain's auditory circuitry.

What exactly happens when we hear music? Is music really "only" vibrations that the ear picks up? The human auditory system is surprisingly complex, and a complete description of the inner ear, with its hair cells and basilar and tectorial membranes, is beyond the scope of this paper. But a brief review of the main auditory nerve pathways is necessary to understand the complexity of the auditory system and the various brain areas that are involved in an emotional response to music. (The following neuroanatomical review is adapted from Randall and Grant's online tutorial on auditory pathways; see references.)

Sound signals from each ear travel along the auditory nerve to the two cochlear nuclei located in the medulla, one on each side. The nerve fibers do not cross over but synapse there and then travel to the superior olive, also located in the medulla. The superior olive is actually a group of nuclei known as the superior olivary complex, and each complex receives projections from both the ipsilateral and contralateral cochlear nuclei, with the largest contribution from the contralateral side. This means that the information that finally reaches the primary auditory cortex comes predominantly from the opposite ear. The superior olivary complex is important in the localization of sound. Fibers then travel upward in a tract known as the lateral lemniscus and synapse a third time in the inferior colliculus, a part of the dorsal midbrain; the inferior colliculus is also involved in the localization of sound. Fibers then travel to the thalamus, where they synapse a fourth time in the medial geniculate nucleus. Fibers leaving the medial geniculate project to either the primary or secondary auditory cortex. Fibers from the thalamus also project to the subcortical forebrain, the dorsal amygdala and the posterior neostriatum; these connections are important in emotional responses to auditory stimuli, including fear conditioning to sounds. Fibers synapse a fifth and final time in the auditory cortex, located along the ventral surface of the temporal lobe. The auditory cortex is necessary for the ordering, detection and localization of sounds.

Besson, Faïta, Peretz and Bonnel (1998) investigated how the brain processes vocal music. They determined that music and language are processed in two different regions and that the listener's musical experience is spread out over different regions of the brain rather than being found in a "musical center". Perhaps this anatomical distinction was not always present.
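The chain of relay stations traced in the pathway review above can be summarized as an ordered sequence. The following is only an illustrative sketch; the station names and roles are taken from the review, while the data structure itself is an assumption made for clarity:

```python
# Ascending auditory pathway as an ordered list of relay stations,
# following the five synapses described in the review above.
AUDITORY_PATHWAY = [
    {"station": "cochlear nucleus",         "location": "medulla",        "role": "first synapse; no crossover yet"},
    {"station": "superior olivary complex", "location": "medulla",        "role": "bilateral input; sound localization"},
    {"station": "inferior colliculus",      "location": "dorsal midbrain","role": "third synapse; sound localization"},
    {"station": "medial geniculate nucleus","location": "thalamus",       "role": "fourth synapse; projects to cortex and amygdala"},
    {"station": "auditory cortex",          "location": "temporal lobe",  "role": "fifth synapse; ordering, detection, localization"},
]

def trace(pathway):
    """Return the synapse order as a readable chain of station names."""
    return " -> ".join(step["station"] for step in pathway)

print(trace(AUDITORY_PATHWAY))
```

Note that this linear chain deliberately omits the thalamo-amygdala branch mentioned above, which is the route implicated in fear conditioning to sounds.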
David Abram (1996) writes that the human voice is "necessarily tuned…to the various nonhuman calls and cries that animate the local terrain". He reports on the work of ethnomusicologist Steven Feld, who has compiled field recordings of the Kaluli people of Papua New Guinea. The Kaluli sing with the birds, the insects, the rain, the waterfalls…and "when the Kaluli sing with them, they sing like them. Nature is music to Kaluli ears".

Weinberger (1998) writes that there are unfortunate "implicit assumptions about the relationship between music and emotion…some workers appear to believe that a given piece of music will have the same emotional outcome in…all people. Were this…true…understanding the emotional power of music would be greatly simplified."

Emotional reactions to music appear to be real, that is, accompanied by concomitant physiological changes rather than consisting of verbal reports alone. Sloboda (1991), in a survey of British adults, found that over 80% reported physical responses to music, including thrills and chills, tears or laughter. Arousal also seems to be important in emotional reactions to music (Berlyne, 1971). Berlyne found that subjects preferred a middle range of arousal, liking pieces that were neither highly exciting nor very boring. The same piece of music can also produce different emotional responses in the same subject at different times.
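Berlyne's finding that listeners prefer a middle range of arousal is often pictured as an inverted-U curve. A minimal sketch follows; the quadratic shape, the normalized arousal scale and the optimum of 0.5 are illustrative assumptions, not Berlyne's actual model or data:

```python
def preference(arousal, optimum=0.5):
    """Toy inverted-U: liking peaks at a middle level of arousal.

    `arousal` is normalized to [0, 1]. The quadratic form and the
    optimum are assumptions chosen only to illustrate the shape.
    """
    return 1.0 - 4.0 * (arousal - optimum) ** 2

# A moderately arousing piece scores higher than a very boring
# or a very exciting one.
print(preference(0.5))            # peak of the curve
print(preference(0.1), preference(0.9))  # low at both extremes
```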

How does the emotional message get conveyed? Kate Hevner (1935) was one of the first researchers to study which musical elements are related to the emotional responses of listeners. Juslin (2000) analyzed in detail the structure of four performances of the same musical selection, played at different times to adults who had some musical training. He found that two factors explained how emotional content was conveyed: tempo and articulation. Tempos were either fast or slow, and articulations were either staccato (brief, punctuated notes) or legato (one note melding into the next).

Adults, of course, are not the only subjects emotionally affected by music. Kastner and Crowder (1990) studied the responses of children to four types of music. The children used positive or negative cartoon faces to match the emotional tone of musical passages that had been previously rated by adult "experts". All of the children did well, with the older ones doing better than the younger, but even children as young as three years old performed far better than chance.

Music can not only be learned in utero but can also be remembered after birth (Hepper, 1991). Sandra Trehub (1990) of the University of Toronto studied the ability of infants to recognize anomalous notes in a Western major scale. Infants reliably recognized the "wrong" note, signaling it with a turn of the head. One could argue that the infant has simply learned the Western convention; after all, he has been hearing music since before he was born. But Trehub conducted a second experiment using an invented, distinctly non-Western scale. The infants could still reliably recognize anomalous notes.
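The infants' task can be loosely mimicked in code: given the pitch classes of a scale, flag any note that falls outside it. The major-scale pitch classes below are standard music theory; everything else is an illustrative sketch, not Trehub's experimental procedure:

```python
# Pitch classes (semitones above the tonic) of the Western major scale.
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}

def anomalous_notes(melody, scale=MAJOR_SCALE):
    """Return positions of notes that fall outside the given scale.

    `melody` is a sequence of semitone offsets from the tonic; offsets
    are reduced modulo 12 so every octave maps to the same pitch class.
    """
    return [i for i, note in enumerate(melody) if note % 12 not in scale]

# A major-scale melody with one "wrong" note (6 semitones = the tritone,
# which lies outside the major scale).
print(anomalous_notes([0, 2, 4, 6, 7, 9]))  # -> [3]
```

Substituting a different set for `scale` corresponds to Trehub's second experiment: the detection logic is the same even for an invented, non-Western scale.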

Sloboda (1991) believes that musical emotions are more accurately characterized as "mood states" rather than discrete emotions, as the concrete circumstances of realistic life settings are missing. Leonard Meyer's (1956) analysis of music and emotion is still quite influential in our understanding of this complex topic. Meyer wrote that there are certain elements in music which set up expectations about the future, and it is precisely these expectations which determine the intensity of emotion: "The greater the build-up of suspense or tension, the greater the emotional release upon resolution" (Meyer, 1956, p. 28). Annemiek Vink (1999) reviewed Jansma and de Vries's (1995) work (original in German), which extended Meyer's theory. The authors found that listeners without much musical knowledge had responses that were primarily affective, while more sophisticated listeners reacted cognitively.

Methodological problems abound in the study of music and emotions. Emotions are usually short-lived and may be influenced by task demands. Subjects also tend to choose basic emotions from questionnaires and are "less likely to describe nuances" (Vink, 1999). Aldridge (1996) argues for a phenomenological, qualitative approach to research in this area. Very little is really known about how and why music affects us emotionally, and there is much research to be done. Philosopher Susanne Langer (1953) writes eloquently about the complexity of music:

    The tonal structures we call "music" bear a close logical similarity to the forms of human feeling--forms of growth and attenuation, flowing and slowing, conflict and resolution, speed, arrest, terrific excitement, calm or subtle activation or dreamy lapses--not joy and sorrow perhaps, but the poignancy of both--the greatness and brevity and eternal passing of everything vitally felt. Such is the pattern, or logical form, of sentience; and the pattern of music is that same form worked out in pure measures, sound and silence. Music is the tonal analogue of emotive life.


Abram, D. (1996). The spell of the sensuous. New York: Random House.
Aldridge, D. (1996). Music therapy research and practice: from out of the silence. London: Jessica Kingsley Publishers.
Berlyne, D. E. (1971). Aesthetics and psychobiology. New York: Appleton Century-Crofts.
Besson, M., Faïta, F., Peretz, I. & Bonnel, A. M. (1998). Singing in the brain: independence of lyrics and tunes. Psychological Science, 9, 494-498.
Csikszentmihalyi, M. (1990). Flow: the psychology of optimal experience. New York: HarperCollins.
Hepper, P. G. (1991). An examination of fetal learning before and after birth. The Irish Journal of Psychology, 12, 95-107.
Hevner, K. (1935). The affective character of behavior response patterns to music. Journal of Psychology, 44, 111-127.
Juslin, P. N. (2000). Cue utilization in communication of emotion in music performance: relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26, 1797-1813.
Langer, S. K. (1953). Feeling and form. New York: Scribner.
Pinker, S. (1997). How the mind works. New York: Norton.
Randall, S. N. & Grant, L. K. Advanced biological psychology tutorials. Online at
Sloboda, J. A. (1991). Music structure and emotional response: some empirical findings. Psychology of Music, 19, 110-120.
Trehub, S., Thorpe, L. A. & Trainor, L. J. (1990). Infants' perception of good and bad melodies. Psychomusicology, 9, 5-19.
Vink, A. (1999). Living apart together: a relationship between music psychology and music therapy. Nordic Journal of Music Therapy, 10(2),144-158.
Weinberger, N. M. (1998). Understanding music's emotional power. Musica Research Notes, Volume V, Issue 2. Online at