MUSIC IN THE BRAIN: DIFFERENCES BETWEEN MUSICIANS AND NON-MUSICIANS by Julie Orlando BSc, The University of Alberta, 1996 THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in PSYCHOLOGY © Juhe Orlando, 2001 THE UNIVERSITY OF NORTHERN BRITISH COLUMBIA March 2001 All rights reserved. This work may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author. 1^1 Library and Archives Canada Bibliothèque et Archives Canada Published Heritage Branch Direction du Patrimoine de l'édition 395 Wellington Street Ottawa ON K1A 0N4 Canada 395, rue Wellington Ottawa ON K1A 0N4 Canada Your file Votre référence ISBN: 978-0-494-28371-4 Our file Notre référence ISBN: 978-0-494-28371-4 NOTICE: The author has granted a non­ exclusive license allowing Library and Archives Canada to reproduce, publish, archive, preserve, conserve, communicate to the public by telecommunication or on the Internet, loan, distribute and sell theses worldwide, for commercial or non­ commercial purposes, in microform, paper, electronic and/or any other formats. AVIS: L'auteur a accordé une licence non exclusive permettant à la Bibliothèque et Archives Canada de reproduire, publier, archiver, sauvegarder, conserver, transmettre au public par télécommunication ou par l'Internet, prêter, distribuer et vendre des thèses partout dans le monde, à des fins commerciales ou autres, sur support microforme, papier, électronique et/ou autres formats. The author retains copyright ownership and moral rights in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission. L'auteur conserve la propriété du droit d'auteur et des droits moraux qui protège cette thèse. Ni la thèse ni des extraits substantiels de celle-ci ne doivent être imprimés ou autrement reproduits sans son autorisation. In compliance with the Canadian Privacy Act some supporting forms may have been removed from this thesis. Conformément à la loi canadienne sur la protection de la vie privée, quelques formulaires secondaires ont été enlevés de cette thèse. While these forms may be included in the document page count, their removal does not represent any loss of content from the thesis. Bien que ces formulaires aient inclus dans la pagination, il n'y aura aucun contenu manquant. Canada APPROVAL Name: Julie Orlando Degree: Master of Science Thesis Title: MUSIC IN THE BRAIN: DIFFERENCES BETWEEN MUSICIANS AND NON-MUSICIANS Examining Committee: Chair: Dr. Alex Micnaios Professor, Political Science Program UNBC Supervisor: Dr. Glenda Prkachin Associate Professor, Psychology Program UNBC Committee M em bei^^r. Kyle Matsuba Assistant Professor, Psychology Program UNBC Committee M e m ^ rT fe . Peter MacMillan Assistant Professor, Education Program UNBC External Examiner: Dr. Carol Oosthuizen Speech Language Pathologist and Clinic Director Child Development Centre (Prince George) Date Approved: AfyuJ /% fioo! 11 Abstract Although it is widely beheved that most language functions take place within the left neural hemisphere and most music functions in the right, there are many exceptions. Notably, musicians often display different patterns of neural activation in response to musical stimuli than do non­ musicians. This study used dichotic listening to examine the differences between musicians and non-musicians in the levels of distraction produced by music in the left and right ears. Fifteen musicians and 15 non-musicians each monitored for a target word in spoken passages in a prespecified ear while the material being presented to the unattended ear was varied between speech and music. It was found that Musicians were significantly slower than Non-musicians when music was being presented to the right ear, indicating greater left hemisphere involvement in passive music perception. 111 TABLE OF CONTENTS Abstract 11 Table of Contents iii List of Tables V List of Figures vi Acknowledgement vii Chapter I Chapter II Music in the Brain: Differences Between Musicians and Non-musicians 1 Absolute Pitch 6 Dichotic Listening 8 Present Experiment 11 Method 15 Preliminary Study 15 Participants 15 Materials and Apparatus 16 Procedure 18 Experiment Chapter III 20 Participants 20 Materials and Apparatus 20 Procedure 21 Results 24 Preliminary Study 24 IV Chapter IV Experiment 24 Language as Unattended Stimulus 32 Music as Unattended Stimulus 36 Discussion Language as Unattended Stimulus 44 Music as Unattended Stimulus 45 Limitations to This Study 48 Further Areas to Study 50 References Appendix A 44 53 Specific Text Passages and Time Indices for Music Passages Used as Stimuh 59 Appendix B Informed Consent Form 63 Appendix C Participant Information Form 64 Appendix D Means and Standard Deviations 65 LIST OF TABLES Table 1 t-Test Results Comparing Blocks 1 and 2 for All Unattended Conditions 27 2 Analysis of Variance Results for All Block 1 Conditions 30 3 Analysis of Variance Results for All Language Unattended Conditions, Non Fiction Attended Condition Removed 4 Analysis of Variance Results for All Music Unattended Conditions, Non Fiction Attended Condition Removed 5 68 Mean Reaction Times for Attended x Unattended Interaction, Music Unattended Conditions Only D6 67 Mean Reaction Times for Ear x Unattended Interaction, Music Unattended Conditions Only D5 66 Mean Reaction Times for Group x Unattended Interaction, Music Unattended Conditions Only D4 65 Mean Reaction Times for Group x Attended Interaction, Language Unattended Conditions Only D3 39 Mean Reaction Times for Ear x Attended x Unattended Interaction, Language Unattended Conditions Only D2 37 Mean Reaction Times for Group x Ear Interaction, Music Unattended Conditions Only D1 33 69 Mean Reaction Times for Ear x Attended Interaction, Music Unattended Conditions Only 70 VI LIST OF FIGURES Figure 1 Hypothesized Pattern of Group x Ear Interaction for All Music Unattended Conditions 2 13 Reaction Times for All Unattended Stimulus Conditions, Block 1 Versus Block 2 28 3 Reaction Times for All Attended Stimulus Conditions Within Block 1 31 4 Reaction Times for the Ear x Attended x Unattended Interaction in Language Unattended Conditions, Block 1 Only 5 Reaction Times for the Group x Ear Interaction in Music Unattended Conditions, Block 1 Only 6 41 Reaction Times for the Ear x Unattended Interaction in Music Unattended Conditions, Block 1 Only 8 38 Reaction Times for the Group x Unattended Interaction in Music Unattended Conditions, Block 1 Only 7 35 42 Reaction Times for the Attended x Unattended Interaction in Music Unattended Conditions, Block 1 Only 43 vil ACKNOWLEDGEMENT I would like to extend my heartfelt thanks to my supervisor, Glenda Prkachin, and to my committee members past and present, Kyle Matsuba, Peter MacMillan, Richard Lasenby, and Philip Higham, for their invaluable contributions and guidance in preparing this research paper. Many thanks also go to the participants who volunteered their time to be a part of this study. I cannot produce a manuscript on music research without recognising my many music teachers and coaches over the years, most notably Rose Loewen. Without her encouragement so many years ago, music would not be as important a part of my life as it is today. Finally, I am always grateful for the support and encouragement of my family and friends. CHAPTER I Music in the Brain: Differences Between Musicians and Non-Musicians Historically, the belief of psychologists regarding the brain was that the two hemispheres are fundamentally different and diametrically opposed. They believed that the left hemisphere was involved solely in analytic, serial, systematic, and logical processes, whereas the right hemisphere was involved solely in hohstic, synthetic processes. More specifically, researchers believed that the left hemisphere dealt with all things relating to speech and language and the right hemisphere dealt with arts and music. Although the more generic distinction of analytic versus holistic holds true in the face of recent research, the specific distinction of language versus music does not. There are many situations in which the right hemisphere is observed to have a role in languagerelated tasks. For example, the recognition of physical letter shapes appears to be a right hemisphere driven task (Ley & Bryden, 1979). In addition, cases have been reported of patients with right hemisphere damage who present language deficits (Kolb & Whishaw, 1990). Similarly, researchers have found a number of circumstances in which the left hemisphere plays a prominent role in music processing. An example of this is the interpretation or recognition of rhythms (Platel, Price, et al., 1997). It has also become clear, in cases of people with brain damage, that music and language are not distinct. Patel and Peretz (1997) provided a meta-analysis of studies of brain damaged patients with music and/or language deficits who showed such a lack of distinction. Of all the research cited, the most compelling experiment they detailed was one of their own (Patel, Peretz, Tramo, & Labreque, as cited in Patel & Peretz). Here, amusic patients (people who cannot produce or comprehend musical sounds) were presented with pairs of lexically identical sentences differing only in intonation along with musical analogues that were built on the basis of the tonal pattern of the spoken phrases. For example, two lexically identical sentences are “He speaks French.” and “He speaks French?”. If the speaker raises the pitch of his or her voice at the end of the sentence, the meaning changes from a statement to a question although the words have remained the same. After recording sets of such sentences, Patel et al. analysed the tonal frequencies and time values of the spoken words. They used the results of this analysis to compose brief musical phrases that were melodically equal to the spoken sentences. Intonation in language is paralleled by melodic contour; contour is a processing ability lacking in amusics although their language perception remains intact. The participants were required to make same-different judgements on the pairs of sentences and on the pairs of musical analogues. If music was completely dissociable from language, one would expect amusic patients to be impaired on the musical decisions only. However, their results showed that the patients performed equally well on both the lexical and musical decisions, indicating that there are some neural processes shared by the two task types. Within the same research, Patel et al. included sentence pairs that differed only in the timing of the words, along with rhythmic analogues to the sentences. Results were similar to those from the musical decisions, with the patients performing equally on both decision types. The fact that some musical ftmctions are processed in the left hemisphere and language functions in the right, and that music and language are not totally dissociable, supports the view that the locahsation of these functions cannot be determined by a simple “music versus language” distinction. Instead, the type of processing being applied to the stimulus is the most important aspect (Boucher & Bryden, 1997; Platel, Price, et al., 1997). Most researchers have interpreted the left versus right hemisphere distraction as being due to either analytic or holistic processes being applied to the stimulus at hand, analytic or time-ordered processes relating to the left hemisphere and hohstic processes relating to the right (e.g., Bryden, 1982; Gordon, 1975; Minagawa, Nakagawa, & Kashu, 1987). Tliis interpretation is supported by many findings, botii in relation to language and to music, as we will soon see. Despite the fact that most early auditory research was conducted using pure musical tones as stimuli (Goldstein, 1999), a great deal more is known now about language processing than about music processing. We know that language is not an indivisible whole but is made up of many smaller components. Some of these components are prosody (the vocal intonations of speech), phonemes (the auditory properties of speech sounds), graphemes (the visual properties of written words or letters), and semantics (the meanings of words). Extensive research has been conducted on all of these sub-components and more, and the neural sites of their processing are in most cases fairly well established. The issue of passive language listening has also been a topic of study with robust results. However, similar research on the sub-components of music has been more rare, and the results not so easily agreed upon. Music has as many sub-components contributing to its whole as does language (Hantz, KreiUck, Kananen, & Swartz, 1997; Platel, Price, et al., 1997). The basic components that comprise a musical passage are pitch (the fi-equency of the sound, or how “high” or “low” it sounds), timbre (the quality of tone that distinguishes different instruments), and rhythm (the measured beat or flow of the sequence). Higher level components are phrasing (similar to sentence parsing in language), dynamics (the relative volume levels of sections of the passage), and tempo (the relative speed of the passage, usually measured in beats per minute). Finally, all these components combine to make the music we perceive, which itself can be processed in terms of its own qualities or in terms of its famiUarity or similarity to previously perceived passages. Although many of these components are unique to music, many parallel those of language. As such, it is not surprising to find that some of the musical sub-components are processed in areas of the brain once thought to be solely dedicated to the processing of language (e.g., rhythm; Platel, Price, et al., 1997). The first psychologist to specifically investigate music and the localisation of its processing was Kimura in 1964 (Boucher & Bryden, 1997). Since then, a small body of research has accumulated to this end, much of it separating music into its above-mentioned sub­ components just as language has been divided into its own sub-components. As mentioned earher, the common view is that the hemisphere that dominates a particular process is largely determined by whether the process is an analytic or holistic one. This helps to explain the finding that rhythm in music is processed in Broca’s area, an area of the left hemisphere utiUsed in producing proper sentence structure. Both are analytical functions, requiring that a person apply the proper structure, form, and timing to the stimuli in order to properly perceive them. On the other hand, prosody in language is largely a right hemisphere function (Kolb & Whishaw, 1990), as is phrase processing in music (Breitling, Guenther, & Rondot, 1987). Both of these are hohstic processes and require an interpretation of how the entire sentence or phrase flows in temporal sequence, pitch, and dynamics. The analytic versus hohstic distinction also helps to explain the finding that trained musicians show different neural patterns than non-musicians when processing music as a whole, seen through Event Related Potential (ERF) recordings (e.g., Besson & Fmta, 1994; Crummer, Walton, Wayman, Hantz, & Frisina, 1994). Musicians have been specifically taught to interpret chord progressions, key changes, harmonies, and counter-melodies in ways that would not occur to a non-musician. In this sense, music functions more like a language, with specific form, structure, and temporal sequences, than previously thought. Another interpretation of this is that the interpretations musicians have been trained to make within music are of an analytical nature, as opposed to holistic perceptions of the music. This is the explanation put forth by Minagawa et al. (1987) in response to their findings that trained musicians show a right ear advantage for musical stimuli where non-musicians show the expected left ear advantage. These advantages are interpreted fi'om responses being faster and more accurate when the musical stimuli are presented to one ear as opposed to the other. As mentioned previously, the left hemisphere of the brain is involved in analytic processing and the right hemisphere is involved in hohstic processing. A person primarily utihsing their left hemisphere to process a stimulus would therefore exhibit a right ear advantage due to the fact that nearly all external stimuh reach the hemisphere of the brain opposite (contralateral) to the area of space in which they occurred. The analytic-holistic, left-right distinction has also been used by Breitling et al. (1987) to explain their Electroencephalogram (BEG) results showing more left hemisphere activation in musicians than non-musicians during music processing. One group of researchers has suggested that the differences between musicians and non­ musicians are not due to experience or training but to inborn aptitude. Gaede, Parsons, and Bertera (1978) developed a test to determine a person’s musical aptitude, then separated their participants by both musical experience and musical aptitude, resulting in four groups: high experience-low aptitude; high experience-high aptitude; low experience-low aptitude; low experience-high aptitude. Participants then underwent tests of memory and chord analysis. Their results showed that aptitude, but not experience, was a good predictor of hemispheric dominance. Specifically, they stated that “while both variables [aptitude and experience] affected general level of performance it was only aptitude which related to ear or hemispheric differences” (p. 371). However, no further research has been reported to support their findings. 6 Absolute Pitch Other research has indicated that the musician/non-musician differences may be due to more of the participants in the “musicians” group than in the “non-musicians” group possessing a skill called absolute pitch (AP), or perfect pitch (Schlaug, Jancke, Huang, & Steinmetz, 1995). Absolute pitch is “the abihty to name the pitch of a note without reference to any previously sounded one (recognition), or to sing a named note without reference to a previously sounded one (recall)” (Spender, 1980, p. 27). Zakay, Roziner, and Ben-Arzi (1984) describe it further using a familiar analogy: This process is similar to that of color naming where pitches are an auditory to the color dimension. For a population with absolute pitch the differentiation of pitches is probably as natural as that of colors and the verbal response to pitches, i.e., naming them (sic) is learned the same as the verbal response to colors, (p. 164) A great deal of research has been conducted regarding the nature and characteristics of AP possessors. Not only are such people more accurate in various tasks requiring identification or recognition of tones or melodies, but they also report using different strategies than people without absolute pitch. (Eaton & Siegel, 1976; Siegel, 1974; Zatorre & Beckett, 1989) Schlaug et al. (1995) discovered that AP possessors not only process musical stimuli differently on a functional or cognitive level than other musicians, but their brain structures also show differences fi'om both non-AP possessing musicians and non-musicians. The planum temporalus on the left corresponds with Wernicke’s Area, the neural structure involved in the semantic comprehension of language. It is always slightly larger on the left than on the right, but Positron Emission Tomography (PET) scans show that in people with AP the planum temporalus is even larger on the left than in most other people (Schlaug et al.). In addition, Klein, Coles, & Donchin (1984) used ERPs to show that AP possessors process musical tones differently than others. Both musicians and non-musicians display a positive shift in their neural electrical activity patterns 300 ms after presentation of a musical tone; this is called the P300. In contrast, Klein et al. found that AP possessors do not show a P300 after presentation of a musical tone. The researchers suggest that this is because participants with AP do not need to process the tone as other participants do. They do not need to think about the tone or rehearse it in memory, they simply know its name and use that label in further tasks. Wayman, Frisina, Walton, Hantz, & Crummer (as cited in Besson, 1997), using an auditory oddball task, observed similar results. An auditory oddball task involves participants listening to a series of tones of varying pitches; high, medium, and low. Participants must count, for example, the number of high tones that occur; usually the frequency being counted occurs far less often than the others. In their experiment, AP possessors showed a smaller P300 than non-musicians and other musicians. Hantz et al. (1997) found, using stimuli of musical sequences that were melodically or harmonically either closed or open*, that possessors of AP produce robust P300s in response to open passages. Although this seems contradictory, it is not. In such a study as that conducted by Hantz et al., the P300 is thought to indicate surprise, or any reaction to something unexpected. AP possessors, due to their abihty to know exactly which pitches are being played, have stronger and clearer expectations than non-AP possessors about what “should” be played next within a musical passage. Having that expectation denied, as in musically open passages, produces a strong P300. It is thus clear ‘ Melodic phrases consist of only one note played at a time, whereas harmonic phrases are multi-lined and are played as series of chords. A phrase which is closed is usually harmonically resolved by progressing from the dominant harmony (based on the fifth note of the scale) to the tonic harmony (based on the first note of the scale), and melodically resolved by progressing to the tonic pitch. This is analagous to “finishing the sentence” in spoken language. An open phrase does not accomplish the above musical progressions; for example, the phrase might end on the dominant or submediant (based on the sixth note of the scale) harmony. This would be analagous to, for example, ending a sentence with a preposition or a definite article, or with a semantically unexpected word. 8 that differences do exist at the physiological level between possessors of AP and musicians without the ability. Dichotic Listening The present experiment used the method of dichotic hstening, a method that is widely used with robust and accepted results. Dichotic listening involves presenting the participants with two different auditory signals simultaneously. Each signal is presented discreetly to each ear and participants are required to perform any of a variety of cognitive tasks relating to the signals (e.g., listen for a specific word to occur and count the number of times it does so, recall a list of words presented to a pre-specified ear, repeat the message being read to a pre-specified ear while it is being presented). The degree of difficulty, measured with reaction time or accuracy, that the participant has completing the task in one ear compared with the other indicates which neural hemisphere is dominant in that task’s normal processing. This conclusion is based on assumptions and extrapolations from previous research. When examining participants whose language locahsation is already known, dichotic listening of language detects the proper ear advantage with 95% accuracy (Gefifen, Traub, & Stierman, 1978). A person shows faster and more accurate responding to language stimuh when those stimuh are presented to the ear contralateral to the person’s dominant hemisphere for language. As mentioned earher, almost all sensory pathways to the brain are crossed so that information fi'om one side of the body is transmitted to the contralateral cortical hemisphere. For example, sounds heard in the left ear are processed by the right hemisphere of the brain and vice versa. Signals may reach the hemisphere ipsilateral to their source (e.g., signals in the left ear reaching the left hemisphere) by one of two routes. First, there are a few direct pathways fiom the sensory organs to the ipsilateral hemisphere. Second, information may be passed fiom one hemisphere to the other by way of the corpus callosum, the large bundle of nerves connecting the left and right hemispheres. Signals passed in this way have already had some low level processing performed on them, whereas direct signals have not. In dichotic listening, very little information is shared between the two hemispheres from either of the above two routes. The reasons for this are as yet unknown, although many theories have been advanced. The first such theory does not so much explain how dichotic listening works as why it works. Kimura (as cited in Helhge, 1983) developed the “direct access model”: Stimuli that were projected directly to the hemisphere specialised for processing that type of stimuli would be processed with much greater speed and efficiency than if they were projected to the less specialised hemisphere. This makes intuitive sense; If a patient sees a general practitioner with visual acuity problems, treatment will likely be slower and poorer than if the patient had gone straight to an optometrist. However, this still does not explain the apparent lack of information sharing between the cerebral hemispheres during dichotic listening. The second theory, and the first to attempt to explain this lack of interhemispheric sharing, was the “partial-occlusion theory”, also by Kimura (as cited in Murray & Richards, 1978). In this theory, Kimura postulates that as neural pathways reach the auditoiy cortex, signals from the ipsilateral ear are blocked by the more numerous and powerful signals from the contralateral ear. In addition, most cells in the auditory cortex respond to contralateral input, and those cells that do respond to ipsilateral input also respond to contralateral input. There is no such duality in the contralaterally responding cells (Wexler, 1988). Therefore, when there is competition between the two signals, the contralateral signals will reach the cortex and be processed almost exclusively. However, among other problems, this theory does not account for the ear advantages that are found in research using monotic hstening (one message being presented to one ear at a time only; 10 Murray & Richards). It also does not account for the partial processing of distracter messages that does occur in dichotic hstening and that will be discussed later (Lewis, 1970; Lewis, Honeck, & Fishbein, 1975; Mayes, Emery, & Beagiey, 1998). The third specific theory advanced was Kinsboume’s “attention bias” (Bryden, 1988; Helhge, 1983; Hugdahl, 1996). The attention bias theory holds that “performance on a task is better if the stimuh are presented to the side of space contralateral to the hemisphere that is more activated by the task being performed” (Helhge, p. 7). Some researchers have called this a priming effect (e.g., Bryden). In simpler terms, the theoiy is that if, for example, a participant knows that the task wiU be one of language processing, then that person’s language-dominant left hemisphere will be primed for use. Therefore, any linguistic stimuh presented to the left hemisphere will be processed more quickly and efficiently, thus skewing results so that natural advantages of one hemisphere or the other are unclear. This theory holds imphcations for the type of instructions or attentional direction cues used in an experiment. Verbal instructions that the task will be linguistic may prime the left hemisphere for language. Conversely, a chime to the left ear signalling an upcoming trial may prime the right hemisphere for tones. Unfortunately, there is often no practical way of avoiding such primes, and one must simply hope that the counterbalancing of trials across ears may cancel any priming effects. Some researchers have put forth more informal explanations for the lack of information sharing between the hemispheres in dichotic listening within the bodies of their research. For instance, Hellige (1983) suggested that, within the context of Kimura’s partial-occlusion theory, the inhibition of ipsilateral paths is greater when the two sets of stimuli are acoustically similar. Bryden (1982) stated that it is possible that the more direct and more numerous contralateral pathways simply cause an advantage in and of themselves. 11 As these few explanations show, the early theories held that the secondary message presented to the ear ipsilateral to the hemisphere in question - in dichotic listening was completely occluded by the primary message - presented to the ear contralateral to the hemisphere in question - and was not processed at all. This belief was held strongly for some time. However, researchers are more widely beginning to accept the notion that the secondary message in dichotic hstening is processed to some extent. Certainly the majority of the evidence lends support for such a viewpoint. For example, Lewis (1970) varied the content of the unattended (secondary) message while measuring participants’ reaction times to shadowing the attended (primary) message. He foimd that when the paired word in the unattended message was a synonym of that in the attended message, reaction time was longer. Mayes et al. (1998) paired fiction on the attended channel with sound effects on the unattended channel. Their results showed that all soimd effects hindered responding, but when the timing of the sound effects matched the content of the fiction, reaction time in shadowing was even slower than when the sound effects did not match. Finally, Lewis et al. (1975) conducted an experiment in which participants were asked to either listen to or shadow one channel while responding to occurrences of a target word on both channels. A lthou^ shadowing decreased responding to the unattended channel, participants correctly detected target words in both cases on both the attended and unattended channels. Present Experiment The present research used dichotic listening to investigate in which ear participants find the presentation of music more distracting during a linguistic task and whether this finding is different for musicians versus non-musicians. From this, we may infer the different involvement of the hemispheres in language and music processing. The stimuli used were passages of spoken language as the primary (attended to) message and other spoken language passages and musical 12 passages as the secondary (unattended) messages. Both musicians and non-musicians received the same procedures, but the two groups were separated on data analysis. Distraction levels were inferred by observing delays in reaction time to the primary task. When music is being studied with dichotic listening, it is usually presented as both the primary (specifically attended to) and secondary (unattended, or distracter) messages with the ear advantage for music processing as the variable under scrutiny. This experiment uses language passages as the primary message and both music and language as the secondary messages. The question under investigation is not determining which ear presents the best performance in processing the music but which ear presents the most distraction from the task of processing language when music is the secondary message. Most of the past research that has compared musicians and non-musicians had not separated AP possessors from the groups. On the other hand, much of the research that did separate AP possessors from musicians focused only on the differences between those two groups, not including non-musicians at all. 1 have bridged this gap by separating all AP possessors from potential participants, only comparing non-AP possessing musicians to non-AP possessing nonmusicians. This allowed an investigation into the differences between musicians and nonmusicians without the confounding variables that the skill of AP presents. I hypothesise that language will produce the greatest distraction to the participants, regardless of ear of presentation and regardless of the participant’s musician or non-musician classification. The primary task will be the pressing of a button in response to the occurrence of a target word in the attended passage. My second, and more important, hypothesis is that musicians will find music to be of equal distraction when presented to either ear, but that non-musicians will find music a 13 I F c o I Musicians Non-musicians Left Right Ear Figure 1. Hypothesized pattern of Group x Ear interaction for all Music Unattended conditions. Note that “Ear” refers to Attended Ear; Music stimuli are being presented to the Unattended Ear. 14 greater distraction when presented to the left ear than to the right (Figure 1). In that comparison, the level of distraction that music causes will be equal in the left ears of musicians and non­ musicians, but musicians will be more distracted than non-musicians when the music is presented to the right ear. 15 CHAPTER II Method Preliminary Study A preliminary inyestigation was conducted to establish the dichotic listening and reaction time paradigm. Participants. For this study, I recruited 25 women and 4 men, all of whom were right handed. Past researchers in this field haye used between 5 (Tsao, Wittheb, Miller, & Wang, 1983) and 64 (Boucher & Bryden, 1997) participants for their experiments. The participants were dravm fi'om the undergraduate student research pool at the Uniyersity of Northern British Columbia (UNBC), enrolled graduate students at UNBC, Grade 12 students fi*om DPTodd Secondary School, and members of the general population of Prince George attending post-secondary instruction elsewhere. It is important to use only right-handed participants as left handedness (sinistrality) has been shown to affect many patterns of neural functioning (Bryden, 1978), including music (Bryden, 1988), while family sinistrality, on the other hand, has little effect on dichotic hstening results (Bryden, 1988). As noted above, AP possessors have slightly different neural asymmetries than other people, so all such people were identified and excluded from the data analysis. Participants were divided into two groins based on their previous musical experience or training. “Musicians” were those participants who had at least five years of formal training or 10 years of informal experience, at least one of which was within the past five years. “Non­ musicians” were any participants who did not meet the criteria for “musician”. Because the research performed by Gaede et al. (1978) is the only study to date which has suggested that musician/non-musician group differences may be due to musical aptitude and not musical training 16 or experience, and because the test for musical aptitude is lengthy and expensive, I based my group divisions on experience. Materials and Apparatus. The stimuli for this experiment were three passages of music, nine passages of spoken text, and one passage of static (see Appendix A for exact stimuli). Each segment was 15 seconds long. The music passages were all excerpts of songs played on cellos with no other instruments or voices. This eliminates any effects of timbre processing and provides a sample of pure music without spoken (or sung) language. All three passages were of different musical styles (rock, folk, and classical) to allow the results to be generalisable to all genres of music. They were all from professionally recorded compact discs. The rock music was a selection from “Harvester of Sorrow” (Hetfield & Ulrich, 1988, track 3), the classical music a portion of “Vocalize” (Rachmaninoff, 1915, track 6), and the folk music a segment of “Hush Little Baby” (traditional, arr. 1992, track 5). The nine passages of spoken text were three each of three different styles: poetry, fictional prose, and non-fiction. There were three purposes to using three different styles of spoken text. One is so that, like the music passages, the results of this research may be generahsable to all language. A second is so that the spoken stimuh used in this research are comparable to the music stimuli. Three styles of music were used, so three styles of spoken text were also used. The styles loosely correspond to one another as well: folk music to spoken poetry, rock music to spoken prose, and classical music to spoken non-fiction material. The third purpose in using three different styles of spoken text is that no other research has done so. In most cases of dichotic listening research, either meaningless syllables or non-grammatical word lists have been used as stimuh. If complete sentences have been used, they have usually been taken from one source, or they have been devised without an external context, solely for the purpose of the experiment. 17 Two sets of the three styles of spoken text were used for the attended messages, and the last set was used for the unattended distracter messages. The attended poetry messages were portions of “In Memory of Ann Jones” by Dylan Thomas (n.d.) and “The Deserted Village” by Oliver Goldsmith (n.d.). The unattended poetry message was a portion of “Michael” by William Wordsmith (n.d.). All prose passages, attended and unattended, were taken from “Covenant With the Vampire” (Kalogridis, 1994). The non-fiction messages were from three entries in the World Book Encyclopedia; The attended messages were portions of the entries for “monitor” (Pope, 1986) and “snake” (Bennett, 1986), and the unattended message was a portion of the entry for “dinosaurs” (Dodson, 1986). All recorded spoken passages were read clearly by a female at a slightly slower than normal speaking pace (to enhance clarity and ease of understanding), with natural prosody. The target word for the dichotic monitoring task, black, was present at least once in each of the six attended messages but in none of the unattended messages. There were twice as many attended messages as unattended messages to allow for variety and to reduce the chances that participants would learn the passages’ contents. The white noise was static recorded from a television set on a non­ receiving channel. All the music and speech consisted of materials that are not widely known to the pubhc in order to reduce possible familiarity effects. Although the lullaby “Hush Little Baby” is well-known, the portion used here is of an unusual arrangement. All the aural passages were recorded into a computer using the Sound Recorder computer program and combined using the DDClip program. The onset of each spoken passage was determined by expanding the waveform of the soimd file and visually selecting the point at which the initial consonant sound of the passage began. This procedure was accurate to ± 1 ms. All passages were set to play back at a volume level that was kept relatively constant throughout the 18 passage and relatively equal across the left and right stereo channels. The maximum tolerable volume difference between the input of the two ears before the relative volume begins to affect the participant’s attention is ± 5 dB (laccino, 1993). Previous researchers have used volume levels ranging from 60 dB (Tsao et al., 1983) to 80 dB (Wiens, Emmerich, & Katkin, 1997), with an average volume of 70.2 dB (Ambler, Fisicaro, & Proctor, 1976; Boucher & Bryden, 1997; Hantz et al., 1997; laccino; Tsao et al.; Wiens et al ). Recordings were played back on a GE tape deck from a tape recording made from the DDClip sound files, and participants listened through a pair of Koss TD/60 stereo headphones. Participants’ reaction times were recorded in milliseconds from the start of each trial by the computer software MEL. Preliminary tests for AP were carried out using a Roland E-12 Intelligent Keyboard. Procedure. Upon the participants’ arrival at the research laboratory, they were given a consent form and an information sheet to fill out which requested information regarding age, gender, handedness, hearing ability, brain damage, and musical background (Appendices B and C respectively). After completing this form, they underwent a brief test for AP that consisted of naming notes played out of their view on an electric keyboard. Participants were instructed to attempt to name the pitch that was played, guessing if necessary. No feedback was provided until the test was finished. Only a score of 100% correct was taken as an indication that the participant possessed the ability of AP. At that time any participants who indicated left handedness, who had some hearing loss or brain damage, or who showed the ability of AP were identified for elimination from the data analysis. Participants were given verbal instructions for the dichotic monitoring task. They were told that they would hear a tone in one ear tliat would indicate which ear was to be attended to for that trial. The task at hand would be to hsten for the word black to occur within the speech in the 19 attended ear. When the target word was heard, participants pressed a button on a computer keyboard with the hand corresponding to the currently attended ear. For example, if the right ear was being attended to, participants would use their right hands to press the response key. Each trial was of a duration of 15 seconds. There was a two second pause between trials followed by a 500 ms tone and another 500 ms silence before the next trial began. Participants were permitted three practice trials, one with each type of distracter (spoken text, music, white noise), before testing began. There were two identical blocks of 42 trials each; participants had a one minute break half-way through each block (i.e., after 21 trials) and a five minute break between blocks. For the second block, participants were asked to reverse the orientation of the headphones so that the left headphone was over their right ear and vice versa. This was to control for any mechanical differences between each headphone and was consistent with previous research in this field (e.g., Inoue, 1981, Wiens et al., 1997). After hearing the instructions, participants were also informed that they were free to leave the experiment at any time if they so chose. There were two confines on the potential order of the trials: any type of attended message could not occur in the same ear twice in a row, and no combination of conditions (ear, attended message, and distracter) could ever be repeated. In other words, the specific combination of, for example. Right Ear - Attended Prose - Unattended Folk Music could only occur once within the block of trials. Within those conditions, the exact order of trials was randomly determined by throwing dice. This order was fixed for all participants, and both blocks of 42 trials were presented in the same order. After both blocks of trials had been run, participants were debriefed as to the purposes of the experiment. They were also asked if they recognised any of the spoken or musical passages, and if so, they were asked to attempt to name the sources of the passages. None were able to do so. 20 The computer software recorded each participant’s reaction time (RT) to each occurrence of the target word. I did not keep track of errors (hitting the response key early or missing the target word entirely), as ear differences noted in dichotic listening tend to be independent of accuracy measures (Halwes, 1969). The conditions of each trial were noted along with the reaction time to allow for an investigation as to the effects of the different distracters. Experiment Participants. I recruited 48 right-handed men and women to participate in the final experiment. Participants were from the same pools as in the Preliminary Study, however no person participated in both experiments. In this experiment, a participant was considered to be a musician if he or she had five years or more of formal music training or experience, at least one of which was within the past five years. A participant was considered to be a non-musician if he or she had three years or less of formal music training or experience, none of which were within the past five years. Using these distinctions, and after removing 18 participants for various reasons which will be detailed in the results section, there were 11 women and 4 men in the “musician” group, and 12 women and 3 men in the “non-musician” group. Materials and Apparatus. The materials for this experiment were three passages of music and six passages of text. Each passage was 10 seconds long. The music passages were the first 10 seconds of those used in the Preliminary Study. As before, three text passages were used only on ftie unattended audio charmel. These three were the first 10 seconds of the same unattended passages used in the Preliminary Study. The attended passages were the first 10 seconds of the following three attended passages used in the Preliminary Study: “The Deserted Village” by Ohver Goldsmith (n.d ), passage #2 (see Appendix) from “Covenant With the Vampire” (Kalogridis, 1994), and the encyclopaedia entry for “snake” (Bennett, 1986). The white noise was 21 eliminated as an unattended stimulus, as it was determined that it was an unnecessary condition. In place of those trials, six catch trials were built by muting the sound of just the target word on the attended channel. For the catch trials, one each of the six unattended messages was used, and each attended message was used twice, spread evenly across both ears. Preliminary tests for AP were again conducted using a Roland E-12 Intelhgent Keyboard. The audio recordings were played through a computer using the DDClip program. Participants hstened using a pair of Koss TD/60 stereo headphones, and the experimenter listened along through a pair of standard Sony stereo headphones, both sets coimected to the computer’s speaker through a Radio Shack mini stereo jack splitter. When participants responded to the occurrence of the target word, playback of the audio clip was immediately stopped and the DDChp program noted the time index, in milliseconds, at which the playback stopped. The experimenter manually recorded these time indices in an Excel spreadsheet on another computer before starting playback of the next clip. Procedure. Upon individual arrival at the experiment, participants underwent the same procedures as those in the Preliminary Study regarding the consent form, information form, and AP test. Again, any participants showing the abihty of AP or indicating left-handedness, hearing loss, or brain damage were identified for removal from the final data analysis. Participants were given verbal instructions as to the procedure of the experiment. They were given a pair of headphones to wear that would convey all stimuli for the experiment, and they were seated before a computer keyboard with no monitor. They were instructed to keep both hands resting on the keyboard, fingers from their left and right hands on the “z” and “/” keys respectively, so they would not have to reach when making their responses. These keys were marked with blue and yellow stickers for easy identification. Participants were assured they could 22 adjust the height of the chair or place the keyboard on their lap according to comfort, as long as hand movement was not compromised. The participants were instructed that there would be three practice trials, during which they could adjust the volume of the headphones slightly if necessary, followed by as much time as was needed to answer any final questions. The actual experiment would consist of two blocks of 42 trials each with a one-minute break halfway through each block and a three-minute break between the blocks. These breaks were measured with hourglass-style timers. As in the Preliminary Study, participants were instructed to reverse the position of the headphones for Block 2 to account for any mechanical differences in the speakers. The order of the trials was kept the same as in the Preliminary Study, but only using one each of the three text styles in the attended messages. The six catch trials were placed where the “white noise” trials had been in the Preliminary Study, those white noise trials having been removed for the actual experiment. Participants were told that each trial would begin with a tone in one ear that would indicate which ear was to be attended to for that trial. In that ear only, they were to listen for the target word, black, and press the corresponding left or right key on the keyboard as soon as they had heard it. It was strongly impressed upon the participants that they should be sure to wait until they were certain they had heard the word before pressing the key. This was to eliminate anticipatory responses once they had heard the trials enough to learn them. They were not, however, told about the catch trials, which would serve the same purpose. As soon as the participant pressed the response key, playback of the audio clip stopped and the playback software noted the time in milliseconds at which it had been stopped. For each trial, the experimenter transcribed that number into a spreadsheet on a laptop computer. Then the next trial was loaded into the program and played for the participant. If the participant did not respond 23 at all, the dip stopped automatically at the end, and the experimenter noted “miss” in the data. If the participant stopped the trial early, the experimenter noted “early” in the data. When all trials had been run, the experimenter thanked the participant for his/her time, debriefed him/her as to the purposes of the experiment, and answered any questions the participant might have had. The participants were also asked if they recognised any of the spoken or musical passages, and if so, they were asked to attempt to name the sources of the passages. Any responses they might have given to these questions were noted on their information sheets. 24 CHAPTER III Results Preliminary Study Upon analysing the results of the Preliminaiy Study, I quickly realised that the computer program measuring the reaction times and the tape deck playing the stimuh had not been properly synchronised in nearly all cases. In addition, the tape deck played at an inconsistent speed, resulting in the two devices being farther out of sync as the experiment progressed. These two facts made the results virtually uninterpretable. However, the following observations could be made: 1) In many cases, after a few trials participants had learned the attended passages, likely due to their simplicity, so they were anticipating the target word and hitting the response key slightly before it had occurred. Catch trials were introduced in the actual experiment to help control for this behaviour. 2) The data showed interactions between the different attended messages, even passages of the same subject area. The number of passages used in the actual experiment was thus reduced from six to three. 3) The five-minute break between trials was unnecessarily long. It was reduced to three minutes for the actual experiment. Experiment The data from 18 participants were removed from the final analysis. The disk containing data for eight of them (four musicians and four non-musicians) was lost. Due to construction in the building creating too much noise, one musician did not complete the trials. One non-musician did not disclose his or her age or gender. Another non-musician had worn the headphones the wrong 25 way during the first block of trials. Two non-musicians had some hearing loss in one ear. One participant had too much experience to be considered a non-musician but not enough to be considered a musician. One musician was left-handed, while another had absolute pitch. Finally, two non-musicians were more than two standard deviations above the mean with regard to error rates. Fifteen musicians and 15 non-musicians were used in the final analysis. The average ages of all musicians and non-musicians were 23.5 years and 21.1 years respectively. The two groups’ ages were not significantly different (t(28) = 1.17, p > 0.05). Musicians had an average of 15.7 years of musical training or experience, while non-musicians had an average of 0.6 years of training or experience. The two groups’ levels of musical training were significantly different (t(28) = 6.66, g < 0.001). On average, musicians made 3.73% errors in the experimental trials while non-musicians made 3.57% errors. The two error rates are not significantly different (t(28) = 0.16, g > 0.05). All errors resulting from participants anticipating or missing the target word were replaced with the mean value for that group for that variable. For example, if a musician made an error on Trial 31, the erroneous data was replaced by the mean reaction time for all musicians on Trial 31. Data Replacement by Means is an accepted practice for the purpose of data analysis, as leaving the cells blank would have resulted in all data for that participant being removed from the analysis. It is less conservative than replacing missing data with overall means instead of group means, but less liberal than merely guessing at what the values might be, based on expected values (Tabachnick & FideU, 1996). The main problem with this treatment of missing data is that it artificially decreases the variation of the data. This can result in statistical tests reporting significant differences where there may be none. However, since the error rates between the two 26 groups of participants are not significantly different, and are low at less than 4.0%, this should not be a concern here. A five-way repeated measures ANOVA was conducted on the data. The five factors were Group (Musician vs. Non-musician), Block (One vs. Two), Ear (Left vs. Right), Attended stimulus type (Poetry vs. Prose vs. Non Fiction), and Unattended stimulus type (spoken Poetry vs. spoken Prose vs. spoken Non Fiction vs. Folk music vs. Rock music vs. Classical music). Eta squared (q^) was calculated as a measure of effect size. The results showed main effects of Block (F(l,28) = 23.52, g < 0.001, = 0.46), Attended stimulus (F(2,56) = 38.45, g < 0.001, Unattended stimulus (F(5,140) = 10.29, g < 0.001, = 0.58), and = 0.27). Significant two-way interactions were between Block and Unattended stimulus (F(5,140) = 3.11, g < 0.05, q^ = 0.10), and Attended stimulus and Unattended stimulus (F(10,280) = 8.64, g < 0.001, q^ = 0.24). There were three significant three-way interactions: Block, Ear, and Attended stimulus (F(2,56) = 4.48, g < 0.05, q^ = 0.14); Group, Ear, and Unattended stimulus (F(5,140) = 2.47, g < 0.05, q^ = 0.08); and Block, Ear, and Unattended stimulus (F(5,140) = 7.94, g < 0.001, q^ = 0.22). There was only one significant four-way interaction, between Block, Ear, Attended stimulus, and Unattended stimulus, F(10,280) = 3.82, g < 0.001, q^ = 0.12. The five-way interaction was not significant. My primary concern with the first level of analysis was whether or not there was a practice effect in this experiment. A practice effect is a trend for the task to get easier, and thus reaction times to get faster, as the participants continue. In most circumstances, any practice effects are minimal and can be ignored, however in some situations it is enough to mask any significant effects of the factors in question. As one can see in Table 1 and Figure 2, all six of the different Unattended (distracter) conditions were easier for the participants in Block 2, although individual univariate ANOVAs showed that only Prose speech (F(l,28) = 16.30), Folk music (F(l,28) = 27 Table 1 t-Test Results Comparing Blocks 1 and 2 for All Unattended Conditions MS df MS Effect Effect Error Error F Poetry 1 42510.40 28 16732.02 2.54 Prose 1 260714.80 28 15990.30 16.30**** Non Fiction 1 48511.23 28 13429.06 3.61 Folk 1 379145.80 28 15836.70 23.94**** Rock 1 57204.01 28 7563.93 7.56 Classical 1 122139.30 28 9900.70 12.34**** Unattended Stimulus Note. A Bonferroni correction to the critical alpha level to account for the number of individual univariate ANOVAs performed on the data results in a critical alpha of 0.008. •»**e < .008 28 390 Block 1 I I 'o o . Block 2 350 3 1 310 290 270 Poetry Ftose Non Fiction Folk Rock Classical Unattended Stimulus Figure 2. Reaction tim es for all Unattended stim ulus conditions, Block 1 versu s Block 2. All data points marked with the sa m e letter are significantly different, p < 0.008. 29 23.94), and Classical music (F(l,28) = 12.34) were significantly so (all ps < 0.008; note that 0.008 is the critical alpha after a Bonferroni correction to account for the number of individual univariate ANOVAs). Throughout the data similar patterns can be seen. Block 2 values were almost always smaller (i.e., participants reacted faster, indicating the task was easier) than Block 1 values, though only sporadically significantly so. Verbal feedback from the participants also confirmed a practice effect. Many said that by the end of the first block they had memorised all the passages and were able to “tune out” until the point when they knew the target word would occur. This may also be a reason that Block 2 reaction times were not consistently significantly smaller than Block 1 reaction times. Boredom may have contributed to actually slowing participants’ reactions, thus masking the practice effect that could have been expected from the anecdotal evidence. Not wanting to mask any possible effects of the factors under study, the decision was made to only look at data from Block 1. All data from Block 2 was removed from the analysis from this point forward. There was a main effect of Attended message type (F(2,56) = 19.79, p < 0.001, Unattended message type (F(5,140) = 8.86, g < 0.001, = 0.41) and = 0.24), as well as a variety of interactions involving all factors (Group, Ear, Attended, and Unattended; see Table 2). Significant differences between the three Attended messages were unexpected. Tukey’s HSD test revealed that reaction times to Poetry and Prose text were not significantly different from each other (g > 0.05). However, participants reacted more quickly to the Non Fiction passage than to either the Poetry (g < 0.001) or the Prose (g < 0.001; see Figure 3). This pattern was maintained throughout nearly all significant interactions. On further investigation of the Non Fiction Attended passage, it became evident that it was not an appropriate stimulus for this experiment. The target word in all 30 Table 2 Analysis of Variance Results for Ail Block 1 Conditions Source df MS df MS Effect Effect Error Error £ a! 28 116992.40 0,03 — Between Subjects 1 Group 3975.20 Within Subjects Ear 1 28954.10 28 7378.60 3.92 Attended 2 405071.00 56 20466.80 19.79*** 0.41 Unattended 5 99307.40 140 11207.40 8.86*** 0.24 Group X Ear 1 32582.10 28 7378.60 4.42* 0.14 Group X Attended 2 43094.40 56 20466.80 2.10 — Group X Unattended 5 13879.90 140 11207.40 1.24 — Ear X Attended 2 52268.20 56 12652.70 4.13* 0.13 Ear X Unattended 5 68218.60 140 8973.80 7.60*** 0.21 Attended x Unattended 10 42192.20 280 9610.70 4.39*** 0.14 Group X Ear x Attended 2 9268.40 56 12652.70 0.73 " Group X Ear x Unattended 5 18566.70 140 8973.80 2.07 — Group X Attended x Unattended 10 10592.60 280 9610.70 1.10 — Ear X Attended x Unattended 10 26603.90 280 8834.70 3.01** 010 Group X Ear x Attended x Unattended 10 10143.20 280 8834.70 1.15 — Note, = effect size *e < ,05. **E < .01. ***B < .001. 31 450 0) 400 I § ^ 350 I 300 Poetry Prose Non Fiction Attended Stimulus Figure 3. Reaction tim es for all Attended Stim ulus conditions within Block 1. All data points marked with the sa m e letter are significantly different, q < 0.05. 32 passages was black, and in the Non Fiction Attended passage, the target appeared after a comment about colours. Therefore, participants were primed to expect a colour word to occur soon, making their reaction times to the word black significantly faster than in passages without priming. The presence of a priming effect is a confound to the true effects under investigation. Further analysis was conducted without this factor. Spoken passages were used as Unattended stimuli so that these results could be compared to oftier results of similar experiments in the field. The results of special interest to this experiment are those using the musical Unattended stimuli. Beyond the expectation that spoken text would be more distracting than music, comparisons between the two sets of Unattended stimuli would not have been meaningful. An ANOVA dividing the Unattended factor into only two groups language (M = 373.34, SD = 46.45) and music (M = 377.95, SD = 52.11) - while pooling across all other conditions revealed no significant difference between the two types of Unattended stimuli (F(l,28) = 0.54, g > 0.05). Remaining analyses considered the two sets separately. Language as Unattended Stimulus. A four-way repeated measures ANOVA was performed on the data, the four factors being Group (Musician vs. Non-musician), Ear (Left vs. Right), Attended stimulus (Poetry vs. Prose), and Unattended stimulus (Poetry vs. Prose vs. Non Fiction); the results are presented m Table 3. Tukey’s HSD was used as the post-hoc test to further investigate significant effects, and again eta squared (q^) was calculated as a measure of effect size. The only significant main effect was for the Unattended stimuli, F(2,56) = 5.74, g < 0.01, = 0.17. There were significant interactions between Group and Attended stimulus (F(l,28) = 8.12, g < 0.01, 0.22); Ear and Unattended stimulus (F(2,56) = 8.78, g < 0.001, q^ = 0.24); Ear, Attended stimulus, and Unattended stimulus (F(2,56) = 3.811, g < 0.05, q^ = 0.12); and Group, Ear, Attended stimulus, and Unattended stimulus (£(2,56) = 3.92, g < 0.05, q^ = 0.12). In order to = 33 Table 3 Removed Source g MS df MS Effdct Effect Error Error £ 28 45708.65 0.02 — Between Groups Group 1 686.14 Within Groups Ear 1 6596.34 28 10276.55 0.64 — Attended 1 4774.23 28 11843.28 0.40 — Unattended 2 82116.27 56 14298.93 5.74** 0.17 Group X Ear 1 2215.14 28 10276.55 0.22 Group X Attended 1 96203.41 28 11843.28 8.12** 0.22 Group X Unattended 1 575.07 28 15796.99 0.04 — Ear X Attended 2 671.10 56 14298.93 0.05 — Ear X Unattended 2 91035.34 56 10374.10 8.78*** 0.24 Attended x Unattended 2 16904.28 56 9717.34 1.74 — Group X Ear x Attended 1 1703.03 28 15796.99 0.11 — Group X Ear x Unattended 2 21244.77 56 10374.10 2.05 — Group X Attended x Unattended 2 10819.75 56 9717.34 1.11 — Ear X Attended x Unattended 2 45672.22 56 11983.14 3.81* 0.12 Group X Ear x Attended x Unattended 2 46991.27 56 11983.14 3.92* 0.12 Note. T|^ = effect size *E < .05. **e < .01. ***e < 001. 34 determine whether lower-level interactions were interpretable, the four-way interaction was examined first. Of the 276 possible comparisons of each separate condition within the four-way interaction between Group, Ear, Attended stimulus, and Unattended stimulus, only six reached significance. Of the six significantly different values, only one was a meaningful comparison. When musicians were attending to poetry in their left ears, non-fiction being played to their right ears was significantly more distracting than poetry being played to their right ears (p < 0.001). In addition, the calculated effect size of this four-way interaction was = 0.12 which, relative to the other conditions in this experiment, is relatively small^. Considering the effect size and the lack of significant interactions at this level, interpreting interactions at a level lower than the four-way interaction was a valid task. The only three-way interaction that reached significance was Ear versus Attended stimulus versus Unattended stimulus, F(2,56) = 3.81, p < 0.05, = 0.12 (Figure 4; means and standard deviations are presented in Table Dl). Post-hoc tests showed that when Prose was being attended to in either ear, there were no significant differences between ears or Unattended stimuh. However, when Poetry was being attended to in the Left ear. Poetry as an Unattended stimulus was significantly less distracting than either Prose (p < 0.05) or Non Fiction (p < 0.01). Also when Poetry was being attended to. Non Fiction was significantly less distracting when it was the R i^ t ear being attended to than when it was the Left (p < 0.01). In other words, in this condition, reaction time was faster when attention was directed to the Right Ear. A trend for reaction times to ^Cohen (1992) suggests that an effect accounting for 9% (effect size of = 0.09) of the variation in an experiment may be considered medium, while an effect size of 0.25 can be considered large. Within this experiment, all effect sizes ranged from 0.11 to 0.29, so in comparison with the other conditions o f this experiment an effect size of 0.14 may be considered relatively small. 35 Prose Attended Poetry Attended 500 450 i. I 400 350 I 300 250 —O— Left Ear —O— Left Ear —• — Right Ear —• — Right Ear 200 Poefry Non Fiction Unattended Poetry Non FicHon Stimulus Figure 4. Reaction tim es for the Ear x Attended x Unattended interaction in Language Unattended conditions, Block 1 only. Note that “Ear” refers to Attended Ear. All data points marked with the sa m e letter are significantly different, p < 0.05. 36 language stimuli to be faster in the rigjit ear than in the left is known as the Right Ear Advantage (REA). An REA is expected in any language-related dichotic listening task; however this is the only condition in this experiment in which it was seen. Although the two-way interaction between Group and Attended stimulus was significant QF(1,28) = 8.12, E < 0.01, = 0.22), post-hoc tests did not show any significant difterences between the four conditions (Musician-Poetry, Musician-Prose, Non-musician-Poetry, Nonmusician-Prose; see Table D2 for means and standard deviations of the four conditions). However, post-hoc tests for the two-way interaction between Ear and Unattended stimulus (F(2,56) = 8.78, E < 0.001, = 0.24) did show significant differences between the conditions. The patterns of significance were the same as with the higher-level interaction between Ear, Unattended stimulus, and Attended stimulus. As we have seen with that interaction, all significant differences only held true when Poetry was the Attended stimulus. Music as Unattended Stimulus. A four-way repeated measures ANOVA was run on the data with the four measures of Group (Musician vs. Non-musician), Ear (Left vs. Right), Attended stimulus (Poetry vs. Prose), and Unattended stimulus (Folk vs. Rock vs. Classical); results are presented in Table 4. Again, Tukey’s HSD was used as the post-hoc test to further investigate significant effects and eta squared was calculated as the measure of effect size. The specific twoway interaction that directly relates to the main hypothesis of this experiment is that of Group and Ear. That interaction was significant, F(l,28) = 8.17, p < 0.01, - 0.23. Post-hoc tests showed that the significant difference w ithin this interaction was between Musicians and Non-musicians when the Left ear was being attended to (Figure 5). In this condition. Non-musicians reacted to the target word significantly faster than Musicians, p < 0.01 (Table 5). Although there was a trend for Non-musicians to react faster when attending to their Left ears than to their Right ears, and a 37 Table 4 Analysis of Variance Results for All Music Unattended Conditions. Non Fiction Attended Condition Removed Source If MS df MS Effect Effect Error Error F a! 28 50775.60 0.64 “ Between Groups 1 Group 32566.00 Witfiin Groups Ear 1 2180.50 28 7714.07 0.28 - Attended 1 7362.20 28 19795.45 0.37 - Unattended 2 81699.00 56 13029.80 6.27** 0.18 Group X Ear 1 63043.60 28 7714.07 8.17** 0.22 Group X Attended 1 1173.60 28 19795.45 0.06 — Group X Unattended 1 113635.60 28 16973.54 6.69* 0.19 Ear X Attended 2 46649.60 56 13029.80 3.58* 0.11 Ear X Unattended 2 101645.30 56 8826.50 11.52*** 0.29 Attended x Unattended 2 117906.90 56 11655.43 10.12*** 0.26 Group X Ear x Attended 1 5244.10 28 16973.54 0.31 Group X Ear x Unattended 2 3591.80 56 8826.50 0.41 Group X Attended x Unattended 2 3597.80 56 11655.43 0.31 Ear X Attended x Unattended 2 355.80 56 8722.84 0.04 - Group X Ear x Attended x Unattended 2 361.40 56 8722.84 0.04 — Note. = effect size *e < 05. **e < .01. ***Q < .001. — 38 450 & 0) E 400 p .2 350 I Musician Non-musician Û£ 300 Left Ear R ight Figure 5. Reaction tim es for the Group x Ear interaction in Music Unattended conditions, Block 1 only. Note that “Ear” refers to Attended Ear; Music stimuli are being presented to the Unattended Ear. All data points marked with the sa m e letter are significantly different, B < 0.05. 39 Table 6 Group M SB Musician Left Ear 398.23a 138.26 Musician Right Ear 376.69 123.86 Non-musician Left Ear 352.74a 103.56 Non-musician Right Ear 384.13 144.18 Note. Conditions sharing the same subscript are significantly different, g < 0.01 40 reverse trend for Musicians, neither was significant. Reaction time was the same for both groups when attending to their Right ears. The only significant main effect was that of the Unattended stimulus, F(2,56) = 6.27, g < 0.01, = 0.18. However, that fector was involved in three different two-way interactions: with Group, F(2,56) = 3.58, g < 0.05, = 0.19 (Figure 6, Table D3); with Ear, F(2,56) = 11.52, g < 0.001, = 0.29 (Figure 7, Table D4); and with Attended stimulus, F(2,56) = 10.12, g < 0.001, r\^ = 0.26 (Figure 8, Table D5). Post hoc tests of these two-way interactions did not reveal any discernible pattern in the distraction levels of the different styles of music. The two-way interaction between Ear and Attended stimulus was also significant, F(l,28) = 6.69, g < 0.05, = 0.11. However, post-hoc tests did not reveal any significant differences between the four conditions (Left ear-Poetry, Left ear-Prose, R i^ t ear-Poetiy, Right ear-Prose; see Table D6 for means and standard deviations). There were no significant three- or four-way interactions. 41 450 n I« Musicians Non-musicians 400 - ; c I 350 - 300 Folk Rock Q assicai Unattended Stimulus Figure 6. Reaction tim es for the Group x Unattended interaction in Music Unattended conditions, Block 1 only. All data points marked with the sa m e letter are significantly different, p < 0.05. 42 450 -1 Left Ear g Right Ear Ip 400 - 4 co 350 15 S K 300 4 Folk Rock Classical U nattend ed S tim u lu s Figure 7. Reaction tim es for the Ear x Unattended interaction in Music Unattended conditions, Block 1 only. Note that “Ear” refers to Attended Ear; M usic stimuli are being presented to the Unattended Ear. All data points marked with the sa m e letter are significantly different, p < 0.05. 43 I 1 1s 450 n —O—Poetry Attended Prose Attended 400 - h- d - c 350 300 J Folk Rock Classical Unattended Stimulus Figure 8. Reaction tim es for the Attended x Unattended interaction in Music Unattended conditions, Block 1 only. All data points marked with the sa m e letter are significantly different, g < 0.05. 44 CHAPTER IV Discussion The primary hypothesis of this study was that musicians would find music more distracting than non-musicians when the music was being presented to their right ears. This hypothesis was supported by the data. A number of other effects and interactions reached significance as well, with varying degrees of strength. Language as Unattended Stimulus. This condition was included so as to have some comparison to the dichotic listening experiments conducted in the past, and to compare overall reaction times between language and music distraction. These prior experiments have shown that when performing language-related tasks, participants will display an REA (e.g., Clark, Geffen, & Geffen, 1988; Inoue, 1981; Kimura, 1967). They have also shown that when the material being used as a distracter is congruent with the material being attended to, such as using words fi'om the same semantic category, distraction is at its highest (Ambler, Fisicaro, & Proctor, 1976; Mayes et al., 1998). The current experiment did not support either of these findings, however prior research indicates that this may have been expected. In studying Kinsboume’s attentional bias model, Hugdahl and Andersson (1986) had participants attempt to recall consonant-vowel (CV) syllables that were presented in a dichotic hstening situation. Participants were either left fi-ee to shift their attention between their ears or they were directed to attend to one ear or the other. In both cases, participants were asked to recall syllables presented to both ears. Hugdahl and Andersson found the expected REA in recall during the free attention condition. However, when participants had been directed to attend to one ear or the other, then asked to recall all CV syllables that were presented to both ears, recall was significantly better in the attended ear. If the right ear had been the target, the REA was distinctly 45 more pronounced than in free attention. However, if the left ear had been the target, recall from the left ear was significantly higher than from the right, thus eliminating any trace of the REA. In the current experiment, participants’ attention was directed to one ear or the other on each trial, thus eliminating the expected REA. Ear advantages may also decline with practice and repetition. Using a free recall paradigm with dichotically presented word lists, Bartz (1972) found that the expected REA was present in early trials but declined towards the end of the experiment. Using shadowing during monotic listening, Murray and Richards (1978) found that the REA became non-significant over the course of repeated trials. The present experiment used stimuli that were extremely familiar to the participants by the end of the experiment. Considering the trials were presented in random order throughout the experiment and then pooled for analysis, it is not surprising that there would be no apparent REA in the data (as opposed to finding an REA for early trials but not one for later trials; trial order was not preserved in the analysis). In fact, these findings are further support for eliminating the data from Block 2 in the final analysis. The finding that language was no more distracting than music was unexpected. Although anecdotal evidence from participants after they completed the experiment indicates that they did find the Unattended language trials more difficult, their reaction times do not reflect this feeling. This could possibly be due to the ease of the experiment. Music as Unattended Stimulus. Although many aspects of music processing have often been shown to be a function of the right hemisphere, musicians have been trained to process music at a more analytical level. These more in-depth analyses are the type that would be expected to be a function of the left hemisphere, and evidence from prior research supports this suggestion (e.g., Bever & Chiarello, 1974; Breitling et al., 1987; Gordon, 1975; Platel, Price, et al., 1997), While 46 any person could be directed to make such analyses, it is more likely that musicians would do so even in passive processing due to their training in the subtleties of music structure. Such is the situation in this experiment, where music is the distracter to an attended channel. Since musicians and non-musicians both perform the same right-hemisphere passive processing on music, but only musicians perform left-hemisphere passive processing, we would expect that both groups will find music distracting to the same degree when in the left ear. This was the case in this study. We expected that the musicians would find music more distracting than the non-musicians when the distraction was presented to the right ear. This hypothesis was also supported. When the main effect of the Unattended stimulus type is examined in the light of the significant two-way interactions involving the Unattended stimulus, there is no single explanation that can account for the pattern of differences in conditions. For example. Rock music was significantly less distracting than Folk music, overall. However, in higher level interactions, this difference only maintains significance when the Group is Musicians, when Poetry is the Attended stimulus, or when participants are attending to their Right Ear. Despite this apparent distinction, the four-way interaction shows no significant difference between Musicians attending to Poetry in their Right ears and any other group. In other words, althou^i differences between conditions are statistically significant, they are small enough such that an attempt to combine them to form a consistent explanation removes all significance. There has been a great deal of debate in the past as to whether there is any inter-hemispheric sharing of information within a dichotic listening situation. The fact that the different unattended stimuli in this experiment impede reaction times to different extents supports past research suggesting that there is some sharing of information between the neural hemispheres (e.g.. Ambler et al., 1976; Lewis, 1970; Mayes et al., 1998). If there is no sharing, and the hemisphere receiving 47 the attended stimulus is working independently, then there should be no effects of distracter stimuli. At what point this sharing occurs, however, is not clear. There are two possibilities; a small amount of raw information is conveyed from the ear to the ipsilateral (same-side) hemisphere through direct neural pathways, or some information is relayed from the contralateral hemisphere through the corpus callosum (the bundle of nerves connecting the hemispheres). Kimura’s Partial Occlusion Theory (POT, as cited in Murray & Richards, 1978) holds that the former is not possible, thereby suggesting that the latter is the case. However, a great deal of research has suggested that the POT cannot accoimt for all results in the fields of dichotic and monotic (single-ear) listening (e.g., Murray & Richards). Although the results of the current experiment add to the body of research suggesting that the POT may not tell the whole story of dichotic hstening, an investigation as to the theories behind dichotic listening is not only beyond the scope of this paper, it is also not the purpose here. The fact that musicians in this experiment found music more distracting in the right ear than the non-musicians serves to support the theory of music perception put forth by Bever and Chiarello (1974). They stated that ...as their [musicians’] capacity for musical analysis increases, the left hemisphere becomes increasingly involved in the processing of music. This raises the possibility that being musically sophisticated has real neurological concomitants, permitting the utilization of a different strategy of musical apprehension that calls on left hemisphere fimction. (p. 539) The majority of research regarding musical processing suggests that, when decisions specifically requiring a more analytical perception of music are required of participants, the left hemisphere is called into play. Bever and Chiarello hold that musicians call on left hemisphere fimctions automatically, as a part of passive music perception. The current research did not require the 48 participants to analyse the music being presented to them in any way; any processing performed on the music was therefore automatic and passive. Since musicians found music more distracting in their right ear (leading to the left hemisphere), despite the lack of active processing, this supports Bever and Chiarello’s claims. Limitations to this studv. The results of this study are difficult to interpret in a practical sense due to the outdated procedures used. Although dichotic listening is an established and robust test of hemispheric dominance, it has long ago been ecUpsed by more modem measurements. It is also most powerful when directly investigating reaction time. In this experiment, delays to reaction time were the focus of the investigation, with the goal of interpreting these delays as indications of processing of the unattended stimuli. Although it is clear there was some effect of these stimuh, and those eSects differed significantly depending on what the stimuli were and in which ear they were being presented, any further interpretation is problematic. When an auditory stimulus is presented to the right ear, that stimulus is primarily analysed by the left neural hemisphere. Because of that, it is logical to deduce that reaction time delays when music is being presented to the right ear are likely due to the left hemisphere becoming more actively involved in processing tasks. However, in situations such as these, the attended stimulus the spoken text- is being presented to the left ear, and thus to the right hemisphere. The r i^ t hemisphere is analysing the attended stimulus and executing the required responses. Therefore, is the distraction of the music, as measured by a delay in reaction time, a result of the left hemisphere’s processing taking up overall attentional resources or of the information being passed to the right hemisphere for ftirther analysis, thus competing with the attended task? A variety of sources lead us to conclude that the former is the case, however we cannot conclusively rule out the latter. 49 The results of past research can help us confirm this conclusion. It has been well established that the right hemisphere is primarily responsible for the processing of music, at least at basic levels, in any person regardless of their musical training. In this experiment, musicians and non­ musicians were equally distracted when music was presented to their right hemispheres (left ears). A few experiments have indicated that as musical training increases, the left hemisphere becomes more involved in automatic music processing. Indeed, in this experiment, when music was presented to the left hemisphere (right ear), the musicians found it more of a distraction than the non-musicians did. If it is a case of the information being passed to the right hemisphere for finther processing, musicians and non-musicians should have found the condition equally distracting. Conversely, if music initially presented to the right hemisphere is passed to the left hemisphere for processing, then musicians should have found the condition more distracting than the non-musicians. Since the reverse is the case, we can safely conclude that the distraction is a result of the music being processed by the hemisphere that initially received it and of that processing taking up some of the brain’s limited attentional resources. Further limitations to this study concern the physical design of the process. The spoken text passages were not recorded, modulated, or monitored by specialised recording equipment in order to eliminate unwanted vocal inflections, faint background static, or other such aural impurities. The music passages were excerpts from professional recordings, but they all involved more than one instrument (albeit the same type of instrument, a simplification in itself), sound mixing, and other procedures that reduced the purity of the sounds, from a phonological perspective. This may affect a person’s processing of the sound. Regarding using pure tones as opposed to recorded music in experimentation, Frisina, Walton, and Crummer stated that “...the music sounds appropriate for this line of research may be too simple for some musicians and too complex for 50 some neuroscientists. For the musicians we have carried scientific reductionism too far, and for some neuroscientists it has not been carried far enough” (1988, p. 102). In addition, when putting the two sets of passages together to form the left and right audio channels for the dichotic listening stimuh, they were not balanced for volume beyond “eyeballing” the peak volume levels on the computer monitor display. laccino (1993) stated that there could be a maximum difference of ± 5 dB in the input between the two ears before the volume difference began to affect perception. Finally, there were only three different attended stimuli and only six unattended stimuli, all of which were presented many times over the course of the experiment. The boredom factor for the participants was thus very high, and could have possibly affected their performances. Unfortunately, there is very little that can be done about this. Not only must one consider possible interactions resulting from using too many different audio samples, but the task also must be kept relatively simple at this level of investigation. When comparing dichotic results of language and music tasks, Bryden noted that, when studying music, “the procedural problems are rather different, and it is not easy to achieve an appropriate level of difficulty” (1982, p. 57), Further areas to studv. If this experiment were to be run again, I would first recommend formally testing and screening participants’ hearing abihties rather than relying on their own evaluation of their hearing. I would also recommend recording the spoken passages in a professional recording studio, balancing the volume levels of all passages when mixing the dichotic tracks, and conducting the experiment in a sound proof laboratory to eliminate ambient background noise during the procedure. It would be prudent to use more selections of both text and music in each category so as to balance out any fluctuations caused by individual differences in the passages themselves, not by differences between the overall categories. This would also reduce or eliminate instances where there may have been a pause in the content of the unattended 51 stimulus at the moment of the target word's occurrence in the attended stimulus, thus causing momentary monotic listening conditions. Further, I would also suggest using the same procedure but with a different measurement. Investigating hemispheric activation and involvement with the behavioural measure of reaction time is problematic. Using a physiological measurement such as Event Related Potentials (ERPs) or Positron Emission Tomography (PET) would be much more reliable. Finally, clearer definitions of what constitutes a musician or a non-musician would be helpfixl. Experimental groups separated by more extreme differences in training may show more defined results. Also, a correlation may be investigated between exact hours and type of musical training and any effects on processing reaction time. Consideration may also be made for people with distinct training versus those who have only practical experience, or both. Following these results, we may investigate such behavioural and environmental factors as whether any specific style or level of musical training is required before any significant increases in left hemisphere music processing are seen, and what other automatic or directed processes might be affected by musical training. Another question is whether tiie possession of absolute pitch has any further effect on cognitive tasks, musical or otherwise. Neurological factors to research might be which specific neural structures are affected by musical training, and which structures are involved in music processing at both active and passive levels. These investigations are all usefirl in terms of music development, general neural development, and brain damage rehabilitation. An alternate avenue of investigation also presents itself in light of the discussion made here. I have argued that musical training grants music many properties of language in the minds of those who are so trained. Conversely, language may be stripped of its purely “language-like” properties 52 - such as grammar and word meaning - by using passages spoken in languages not spoken by the participants. This brings language and music closer to the same level, and further comparisons may be made between the two in this way. 53 References Ambler, B. A., Fisicaro, S. A., & Proctor, R. W. (1976). Temporal characteristics of primarysecondary message interference in a dichotic listening task. Memory & Cognition. 4(6). 709-716. Bartz, W. H. (1972). Repetition effects in dichotic presentation. Journal of Experimental Psychology. 9 2 .220-224. Bennett, A. F. (1986). Snake. In The world book encyclopedia (Vol. 17, pp. 524-536). Chicago: World Book, Inc. Besson, M. (1997). Electrophysiological studies of music processing. In I. Deliège & J. Sloboda (Eds.), Perception and cognition of music (pp. 217-250). East Sussex, UK: Psychology Press Ltd. Besson, M., & Faïta, F. (1994). Electrophysiological studies of musical incongruities: Comparison between musicians and non-musicians. In Proceedings of the Third International Conference on Music Perception and Cognition (pp. 41-43). Liège, Belgium: ICMPC. Beyer, T., & Chiarello, R. J. (1974). Cerebral dominance in musicians and nonmusicians. Science. 185. 537-539. Boucher, R., & Bryden, M. P. (1997). LateraUty effects in the processing of melody and timbre. Neuropsychologia. 35(11). 1467-1473. Breitling, D., Guenther, W., & Rondot, P. (1987). Auditory perception of music measured by brain electrical actiyity mapping. Neuropsychologia. 25151. 765-774. Bryden, M. P. (1978), Strategy effects in the assessment of hemispheric asymmetry. In G. Underwood (Ed.), Strategies of information processing. London: Academic Press. Bryden, M. P. (1982). Laterality: Functional asymmetry in the intact brain. New York: Academic Press. 54 Bryden, M. P. (1988). An overview of the dichotic listening procedure and its relation to cerebral organisation. In K. Hugdahl (Ed.), Handbook of dichotic listening: Theory, methods, and research (pp. 1-44). Chichester, UK: John Wiley & Sons. Clark, C. R., Geffen, L. B., & Geffen, G. (1988). Invariant properties of auditory perceptual asymmetry assessed by dichotic monitoring. In K. Hugdahl (Ed.), Handbook of dichotic listening: Theorv. methods, and research (pp. 1-44). Chichester, UK: John Wiley & Sons. Cohen, J. (1992). A Power Primer. Psvcholoeical Bulletin. 112111.155-159. Crummer, G. C., Walton, J. P., Wayman, J., Hantz, E. C., & Frisina, R. D. (1994). Neural processing of musical timbre by musicians, nonmusicians, and musicians possessing absolute pitch. Journal of the Acoustical Societv of America. 95(51.2720-2727. Dodson, P. (1986). Dinosaurs. In The world book encvclopedia (Vol. 5, pp. 212-220). Chicago: World Book, Inc. Eaton, K. E., & Siegel, M. H. (1976). Strategies of absolute pitch possessors in the learning of an unfamiliar scale. Bulletin of the Psvchonomic Societv. 8(41.289-291. Frisina, R. D., Walton, J. P., & Crummer, G. C. (1988). Neural basis for music cognition: Neurophysiological foundations. Psvchomusicoloev. 7121.99-107. Gaede, S. E., Parsons, O. A., and Bertera, J. H. (1978). Hemispheric differences in music perception: Aptitude vs experience. Neuropsvcholoma. 16.369-373. Geffen, G., Traub, E., & Stierman, 1. (1978). Language laterality assessed by unilateral E.C.T. and dichotic monitoring. Journal of Neurology. Neurosurgery, and Psychiatry. 41. 354-360. Goldsmith, O. (n.d ). The deserted village. In D. Daiches Poems in English: 1530 - 1940 (pp. 226-237). New York: The Ronald Press Company. (1950) 55 Goldstein, E. B. (1999). Sensation and Perception (5th éd.). Pacific Grove, CA: Brooks/Cole Publishing Company. Gordon, H. (1975). Hemispheric asymmetry and musical performance. Science. 189. 68-69. Halwes, T. G. (1969). Effects of dichotic fusion on the perception of speech. Supplement to Status Report on Speech Research (chapter 2). New Haven, C I; Haskins Laboratories. Hantz, E. C., Kreüick, K. G., Kananen W., & Swartz, K. P. (1997). Neural responses to melodic and harmonic closure: An event-related-potential study. Music Perception. 15(11.69-98. Helhge, J. B. (1983). Hemispheric asvmmetrv: What’s right and what’s left. Cambridge, MA: Harvard University Press. Hetfield, J., & Ulrich, L. (1988). Harvester of sorrow [Recorded by M. Lilja, A. Manninen, P. Lôtjônen, & E. Toppinen]. On Plavs Metallica bv four cellos [CD]. Finland: Polygram. (1996) Hugdahl, K. (1996). Brain laterality: Beyond the basics. European Psychologist. 1131.206220 . Hugdahl, K., & Andersson, L. (1986). The “forced-attention paradigm” in dichotic hstening to CV-syllables: A comparison between adults and children. Cortex. 22.417-432. laccino, J. F. (1993). Left brain - right brain differences: Inquiries, evidence, and new approaches. Hillsdale, NJ: Lawrence Erlbaum Associates. Inoue, T. (1981). Effects of shadowing and selective attention in dichotic hstening. Psvchologia. 2 4 .21-31. Kalogridis, J. (1994). Covenant with the vampire: The diaries of the family Dracul. New York: Dell Publishing. Kimura, D. (1964). Left-right differences in the perception of melodies. Quarterly .loumal of Psychology. 16. 355-358. 56 Kimura, D. (1967). Functional asymmetry of the brain in dichotic hstening. Cortex. 3 . 163178. Klein, M., Coles, M. G. H., & Donchin, E. (1984). People with absolute pitch process tones without producing a P300. Science. 2 2 3 .1306-1308. Kolb, B., & Whishaw, I. Q. (1990). Fundamentals of human neuropsychology (3rd ed.). New York: W. H. Freeman and Company. Lewis, J. (1970). Semantic processing of unattended messages using dichotic hstening. Journal of Experimental Psychology. 8 5 .225-228. Lewis, M., Honeck, R. P., & Fishbein, H. (1975). Does shadowing differentially unlock attention? American Journal ofPsvcholoev. 88f3L 455-458. Ley, R. G., & Bryden, M. P. (1979). Hemispheric differences in processing emotions and faces. Brain and Language. 7(11.127-138. Mayes, J., Emery, B., & Beagley, W. (1998, July 30). Dichotic listening: The correlation between sound effects, memory, and shadowing [On-line]. Available Internet: http://www.alma.edu/Academics/PsychologyAVebPosters/ST98COG/JB/SoundPage.html Minagawa, N., Nakagawa, M., & Kashu, K. (1987). The differences between musicians and non-musicians in the utilization of asymmetrical brain function during a melody recognition task. Psvchologia. 3 0 .251-257. Murray, M. R., & Richards, S. J. (1978). A right-ear advantage in monotic shadowing. Acta Psvchologica. 42. 495-504. Patel, A. D., & Peretz, I. (1997). Is music autonomous from language? A neuropsychological appraisal. In I. Deliège & J. Sloboda (Eds.), Perception and cognition of music (pp. 191-215). East Sussex, UK: Psychology Press Ltd. 57 Platel, H., Price, C., Baron, J. -C., Wise, R., Lambert, J., Frackowiak, R. S. J., Lechevalier, B., & Eustache, F. (1997). The structural components of music perception; A functional anatomical study. Brain. 120.229-243. Pope, C. H. (1986). Monitor. In The world book encvclopedia (Vol. 13, p. 728). Chicago: World Book, Inc. Rachmaninoff, S. V. (1915). Vocalise [Recorded by Y-. Y. Ma & B. McFerrin]. On Hush [CD]. Don Mills, ON: Sony Music Entertainment, Inc. (1992) Schlaug, G., Jancke, L., Huang, Y., & Steinmetz, H. (1995). In vivo evidence of structural brain asymmetry in musicians. Science. 267. 699-701. Siegel, J. A. (1974). Sensoiy and verbal coding strategies in subjects with absolute pitch. Journal of Experimental Psvcholosv. 103(1). 37-44. Spender, N. (1980). Absolute pitch. In The new Grove dictionarv of music and musicians (Vol. 1, pp. 27-29). New York: Macmillan Pubhshers Ltd. Tabachnick, B. G., & Fidell, L. S. (1996). Using Multivariate Statistics (3rd ed.). New York: HarperCollins College Pubhshers. Thomas, D. (n.d.). In memory of Ann Jones. In D. Daiches Poems in English: 1530 - 1940 (pp. 635-636). New York; The Ronald Press Company. (1950) Traditional (n.d.). Hush Little Baby [Recorded by Y-. Y. Ma & B. McFerrin]. On Hush [CD]. Don Mills, ON: Sony Music Entertainment, Inc. (1992) Tsao, Y. -C., Wittlieb, E., Miller, B., & Wang, T. -G. (1983). Time estimation of a secondary event. Perceptual and Motor Skills. 5 7 .1051-1055. 58 Wexler, B. E. (1988). Dichotic presentation as a method for single hemisphere stimulation studies. In K. Hugdahl (Ed.), Handbook of dichotic listening: Theorv. methods, and research (pp. 85-116). Chichester, UK: John Wiley & Sons. Wiens, S., Emmerich, D. W., & Katkin, E. S. (1997). Response bias affects perceptual asymmetry scores and performance measures on a dichotic listening task. Neuropsychologia. 35(11), 1475-1482. Wordsmith, W. (n.d.). Michael. In D. Daiches Poems in English: 1530 - 1940 (pp. 304-315). New York: The Ronald Press Company. (1950) Zakay, D., Roziner, 1., & Ben-Arzi, S. (1984). On the nature of absolute pitch. Archivfur Psychologie. 136.163-166. Zatorre, R. J., & Beckett, C. (1989). Multiple coding strategies in the retention of musical tones by possessors of absolute pitch. Memory & Cognition. 17(51. 582-589. 59 Appendix A Specific Text Passages and Time Indices for Music Passages Used as Stimuli The recordings of each spoken text passage will begin with the first word of the sections given below. Recordings will end precisely 15 s later. The sections below end at the nearest word to the 15 smark. Attended Poetry 1 “In Memoiy of Ann Jones” (Thomas, n.d., 11. 1-6) After the fimeral, mule praises, brays, Windshdce of sailshaped ears, muffle-toed tap Tap happily of one peg in the thick Grave’s foot, blinds down the lids, the teeth in black. The spittled eyes, the salt ponds in the sleeves. Morning smack Attended Poetry 2 “The Deserted Village” (Goldsmith, n.d., 11. 317-322) Here while the proud their long-drawn pomps display, There the black gibbet blooms beside the way; The dome where Pleasure holds her midnight reign. Here, richly decked, admits the gorgeous train; Tumultuous grandeur crowds the blazing square. The rattling chariots 60 Unattended Poetry “Michael” (Wordsworth, n.d., U. 81-87) She was a woman of a stirring life, Whose heart was in her house: two wheels she had Of antique form; this large, for spinning wool; That small, for flax; and, if one wheel had rest. It was because the other was at work. The Pair had but one inmate in their house. An only Child, who had been bom to them Attended Prose 1 Covenant With the Vampire (Kalogridis, 1994, p. 45) Nonetheless, I permitted myself a promised detour to the family burial place to spend a solitajy moment with Father. Yet, approaching the black iron fence, I could see through the bars a strage sight: the corpses of two wolves lying just inside the wide-open gate. I knew at once something was wrong. Attended Prose 2 Covenant With the Vampire (Kalogridis, 1994, p. 293) In the centre of the fer wall stood the door which lead to even deeper mysteries, and to the left To the left, the black velvet veil had been pulled aside to reveal what had once been hidden: Bolted to the wall, a set of black iron manacles; propped nearby, four oiled, glistening wooden 61 Unattended Prose Covenant With the Vampire (Kalogridis. 1994, p. 145) I strained harder to see, but in the darkness, could only be sure that the shutters were open. It was impossible to judge whether the sash had been thrown up. I leaned closer, nose almost touching the window. A dark, growling form hurled itself out of the shadows and struck the glass with such force that it cracked Attended Non-Fiction 1 “Monitor” (Pope, 1986, p. 728) The different kinds are much alike and hard to tell apart. The body is usually black or brown with yellow bands, spots, or mottling. The deeply forked tongue looks like a snake’s tongue. Monitors are usually at least 4 feet (1.2 meters) long. One, the Komodo dragon, is often Attended Non-Fiction 2 “Snake” (Bennett, 1986, p. 526) Some snakes have bright colors. For example, the coral snakes of North America have bright bands of black, red, and yellow or white. In some cases, snakes of the same species have different color patterns. For example, some California king snakes are black with white bands across the width of 62 Unattended Non-Fiction “Dinosaur” (Dodson, 1986, p. 218) Scientists have developed many theories to explailn the disappearance of dinosaurs and the other great reptiles. Probably the most widely accepted theory involves a change in the earth’s climate. Toward the end of the Cretaceous Period, the climate cooled and may have become too cold for the dinosaurs. Dinosaurs were Rock Music “Harvester of Sorrow” (Hetfield & Ulrich, 1988, track 3) Begin at approximately time index 4:26. Classical Music “Vocalise” (RachmaninofiF, 1915, track 6) Begin at approximately time index 4:00. Folk Music “Hush Little Baby” (traditional, arr. 1992, track 5) Begin at approximately time index 0:03. 63 Appendix B Informed Consent Form NOTE: All research involving human participants falls under the authority of the Human Research Committee. The University and those conducting this research subscribe to the ethical conduct of research and to the protection at all times of the interests, comfort, and safety of the participants. Present Studv: Music in the brain: Differences between musicians and non-musicians. Research Personnel: If you have any questions regarding this form or any of the research involved today, please feel free to contact Dr. Glenda Prkachin at 960-6632 or Juhe Orlando at 561-1292. Task Requirements: You will be listening to two different audio signals presented simultaneously, one to each ear. You will be required to monitor for the occurrence of a target word and to press a button when you hear that word. The ear to which you should attend will change from one trial to the next and will be signalled by a tone to that ear. Duration: Completion of this study will take approximately 45 minutes (five minutes pre-testing, five minutes practice, two twenty minute blocks, one three minute break between blocks). Potential Risks: There is no deception or risks known to be associated with participation in this study. Anonvmitv/Confidentialitv: The data collected in this research wül remain strictly confidential and will only be accessible to project staff. Names will not be attached to the data at any level. Right to Withdraw: You are free to leave this study at any time with no penalty. I have read the above description, and I understand the conditions of my participation. My signature indicates that I agree to participate in this experiment. name signature date researcher signature date 64 Appendix C Participant Infonnation Form Dichotic Listening Study - Participant Information Participant # Gender M /F Age Which hand do you primarily use for writing, eating, throwing, etc? LEFT / RIGHT To your knowledge, do you have any hearing loss or deficits? Y / N To your knowledge, do you have any brain damage or abnormalities? Y / N Please list any music experience, formal or informal, that you have. If there are more than five, please fist the most recent five. Please be as detailed as possible. What? (instrument, voice, choir, band, private lessons, etc.) Actual Years 65 Appendix D Means and Standard Deviations Table D1 Mean Reaction Times for Ear x Attended x Unattended interaction. Language Unattended Conditions Qn!ï Group M SD Left Ear - Poetry Attended - Poetry Unattended 298.57a,b,c 96.36 Left Ear - Poetry Attended - Prose Unattended 395.53a 145.03 Left Ear - Poetry Attended - Non-fiction Unattended 453.50b,d,a 196.19 Left Ear - Prose Attended - Poetry Unattended 345.90d 89.37 Left Ear - Prose Attended - Prose Unattended 393.17 142.18 Left Ear - Prose Attended - Non-fiction Unattended 379.10 107.79 Right Ear - Poetry Attended - Poetry Unattended 370.37 122.49 Right Ear - Poetry Attended - Prose Unattended 403.30c 134.04 Right Ear - Poetry Attended - Non-fiction Unattended 340.67. 113.63 Right Ear - Prose Attended - Poetry Unattended 360.10 78.25 Right Ear - Prose Attended - Prose Unattended 381.23 97.42 Right Ear - Prose Attended - Non-fiction Unattended 358.73 95.58 Note. Conditions sharing the same subscript are significantly different, g < 0.05 66 Table 02 Mean Reaction Times for Group x Attended Interaction. Language Unattended Conditions Only Group M SO Musician - Poetry Attended 362.02 152.51 Musician - Prose Attended 387.43 102.40 Non-musician - Poetry Attended 391.96 136.07 Non-musician - Prose Attended 351.98 102.19 Note. No conditions are significantly different from each other at the level of g < 0.05. 67 Table D3 Mean Reaction Times for Group x Unattended Interaction. Music Unattended Conditions Only M SD Musician - Folk Unattended 419.93a,b 142.50 Musician - Rock Unattended 356.00a 95.90 Musician - Classical Unattended .386.45 144.05 Non-musician - Folk Unattended 395.38c 127.19 Non-musician - Rock Unattended 378.88 132.97 Non-musician - Classical Unattended 331.05b,c 110.25 Group Note. Conditions sharing the sam e subscript are significantly different, g < 0.05 68 Table D4 Mean Reaction Times for Ear x Unattended Interaction. Music Unattended Conditions Only Group M SB Left Ear - Folk Unattended 381.43a,b 123.76 Left Ear - Rock Unattended 356.28c 105.01 Left Ear - Classical Unattended 388.75d 140.20 Right Ear - Folk Unattended 433.88a,c.e.f 141.68 Right Ear - Rock Unattended 378.60. 125.96 Right Ear - Classical Unattended 328.75b,d,f 113.93 Note. Conditions sharing the sam e subscript are significantly different, g < 0.05 69 Table D5 Mean Reaction Times for Attended x Unattended Interaction. Music Unattended Conditions Only Group M SD Poetry Attended - Folk Unattended 430.85a,b 134.05 Poetry Attended - Rock Unattended 389.48c 113.51 Poetry Attended - Classical Unattended 327.08a,c,d 143.75 Prose Attended - Folk Unattended 384.47 133.14 Prose Attended - Rock Unattended 345.40b 115.21 Prose Attended - Classical Unattended 390.42d 108.46 Note. Conditions sharing the sam e subscript are significantly different, g < 0.05 70 Table D6 Mean Reaction Times for Ear x Attended Interaction. Music Unattended Conditions Only Group M SD Left Ear - Poetry Attended 397.78 138.69 Left Ear - Prose Attended 353.20 103.19 367.17 134.63 393.66 132.96 Right Ear - Poetry Attended Right Ear - Prose Attended . Note. No conditions are significantly different from each other at the level of g < 0.05.