Cognitive musicology

Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition.[1]

Cognitive musicology can be differentiated from other branches of music psychology via its methodological emphasis, using computer modeling to study music-related knowledge representation with roots in artificial intelligence and cognitive science. The use of computer models provides an exacting, interactive medium in which to formulate and test theories.[2]

This interdisciplinary field investigates topics such as the parallels between language and music in the brain. Biologically inspired models of computation are often included in research, such as neural networks and evolutionary programs.[3] This field seeks to model how musical knowledge is represented, stored, perceived, performed, and generated. By using a well-structured computer environment, the systematic structures of these cognitive phenomena can be investigated.[4]

Even the simplest melody engages multiple synchronized brain processes. After a sound passes through the ear, it reaches the auditory cortex in the temporal lobe, which begins processing by assessing pitch and volume. From there, processing diverges across different aspects of the music. Rhythm, for instance, is typically processed and regulated by the left frontal cortex, the left parietal cortex, and the right cerebellum, while tonality, the organization of musical structure around a central chord, is assessed by the prefrontal cortex and cerebellum (Abram, 2015). Music engages many brain functions that also play integral roles in other higher functions such as motor control, memory, language, reading, and emotion. Research has shown that music can offer an alternative route to these functions when a disorder makes them inaccessible through non-musical stimuli. Cognitive musicology accordingly explores how music can provide alternative transmission routes for information processing in the brain in conditions such as Parkinson's disease and dyslexia.

Notable researchers

The polymath Christopher Longuet-Higgins, who coined the term "cognitive science", is one of the pioneers of cognitive musicology. Among other things, he is noted for the computational implementation of an early key-finding algorithm.[5] Identifying the key is an essential part of hearing tonal music, and the key-finding problem has attracted considerable attention in the psychology of music over the past several decades. Carol Krumhansl and Mark Schmuckler proposed an empirically grounded key-finding algorithm which bears their names.[6] Their approach is based on key-profiles which were painstakingly determined by what has come to be known as the probe-tone technique.[7] This algorithm has successfully been able to model the perception of musical key in short excerpts of music, as well as to track listeners' changing sense of key movement throughout an entire piece of music.[8] David Temperley, whose early work within the field of cognitive musicology applied dynamic programming to aspects of music cognition, has suggested a number of refinements to the Krumhansl-Schmuckler Key-Finding Algorithm.[9]
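The Krumhansl-Schmuckler approach can be sketched compactly: tally how long each pitch class sounds in a passage, then correlate that 12-element profile against the major and minor probe-tone profiles rotated to each of the 12 possible tonics. The profile values below are the published Krumhansl-Kessler ratings; the plain Pearson correlation and flat duration input are illustrative simplifications, not the authors' exact implementation.

```python
# Sketch of Krumhansl-Schmuckler key finding: correlate a piece's
# pitch-class duration profile with the 24 rotated probe-tone profiles
# and report the best-matching key.

MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def correlation(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def find_key(durations):
    """durations: total sounding time per pitch class, index 0 = C ... 11 = B."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            # Rotate the profile so its peak sits on the candidate tonic.
            rotated = profile[-tonic:] + profile[:-tonic]
            r = correlation(durations, rotated)
            if best is None or r > best[0]:
                best = (r, f"{NAMES[tonic]} {mode}")
    return best[1]
```

For example, a duration profile weighted toward the tones of the C major scale, with extra weight on the tonic and dominant, correlates most strongly with the C major profile.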

Otto Laske was a champion of cognitive musicology.[10] A collection of papers that he co-edited served to heighten the visibility of cognitive musicology and to strengthen its association with AI and music.[11] The foreword of this book reprints a free-wheeling interview with Marvin Minsky, one of the founding fathers of AI, in which he discusses some of his early writings on music and the mind.[12] AI researcher turned cognitive scientist Douglas Hofstadter has also contributed a number of ideas pertaining to music from an AI perspective.[13] Musician Steve Larson, who worked for a time in Hofstadter's lab, formulated a theory of "musical forces" derived by analogy with physical forces.[14] Hofstadter[15] also weighed in on David Cope's experiments in musical intelligence,[16] which take the form of a computer program called EMI that produces music in the style of, say, Bach, or Chopin, or Cope himself.

Cope's programs are written in Lisp, which turns out to be a popular language for research in cognitive musicology. Desain and Honing have exploited Lisp in their efforts to tap the potential of microworld methodology in cognitive musicology research.[17] Also working in Lisp, Heinrich Taube has explored computer composition from a wide variety of perspectives.[18] There are, of course, researchers who have chosen languages other than Lisp for their research into the computational modeling of musical processes. Robert Rowe, for example, explores "machine musicianship" through C++ programming.[19] A rather different computational methodology for researching musical phenomena is the toolkit approach advocated by David Huron.[20] At a higher level of abstraction, Geraint Wiggins has investigated general properties of music knowledge representations such as structural generality and expressive completeness.[21]

Although a great deal of cognitive musicology research features symbolic computation, notable contributions have also been made from biologically inspired computational paradigms. For example, Jamshed Bharucha and Peter Todd have modeled music perception in tonal music with neural networks.[22] Al Biles has applied genetic algorithms to the composition of jazz solos.[23] Numerous researchers have explored algorithmic composition grounded in a wide range of mathematical formalisms.[24][25]

Within cognitive psychology, among the most prominent researchers is Diana Deutsch, who has engaged in a wide variety of work ranging from studies of absolute pitch and musical illusions to the formulation of musical knowledge representations to relationships between music and language.[26] Equally important is Aniruddh D. Patel, whose work combines traditional methodologies of cognitive psychology with neuroscience. Patel is also the author of a comprehensive survey of cognitive science research on music.[27]

Perhaps the most significant contribution to viewing music from a linguistic perspective is the Generative Theory of Tonal Music (GTTM) proposed by Fred Lerdahl and Ray Jackendoff.[28] Although GTTM is presented at the algorithmic level of abstraction rather than the implementational level, their ideas have found computational manifestations in a number of computational projects.[29][30]

For the German-speaking area, Laske's conception of cognitive musicology has been advanced by Uwe Seifert in his book Systematische Musiktheorie und Kognitionswissenschaft. Zur Grundlegung der kognitiven Musikwissenschaft ("Systematic music theory and cognitive science. The foundation of cognitive musicology")[31] and subsequent publications.

Music and language acquisition skills

Both music and speech rely on sound processing and require interpretation of several sound features such as timbre, pitch, duration, and their interactions (Elzbieta, 2015). An fMRI study revealed that Broca's and Wernicke's areas, two areas known to be activated during speech and language processing, were also activated while subjects listened to unexpected musical chords (Elzbieta, 2015). This relation between language and music may explain why exposure to music has been found to accelerate the development of behaviors related to language acquisition. The widely known Suzuki method of music education emphasizes learning music by ear over reading musical notation, preferably beginning formal lessons between the ages of 3 and 5 years. One fundamental argument in favor of this approach points to a parallelism between natural speech acquisition and purely auditory-based musical training, as opposed to musical training driven by visual cues. There is evidence that children who take music classes acquire skills that help them in language acquisition and learning (Oechslin, 2015), an ability that relies heavily on the dorsal pathway. Other studies show an overall enhancement of verbal intelligence in children taking music classes. Since both activities tap into several integrated brain functions and share brain pathways, it is understandable why strength in music acquisition might also correlate with strength in language acquisition.

Music and pre-natal development

Extensive prenatal exposure to a melody has been shown to induce neural representations that last for several months. In a study by Partanen in 2013, mothers in a learning group listened to the "Twinkle, Twinkle, Little Star" melody five times per week during their last trimester. After birth, and again at the age of 4 months, infants in both the control and learning groups were played a modified melody in which some of the notes were changed. Both at birth and at 4 months, infants in the learning group showed stronger event-related potentials to the unchanged notes than the control group. Since listening to music at a young age can already establish lasting neural representations, exposure to music could help strengthen brain plasticity in areas of the brain involved in language and speech processing.

Music therapy effect on cognitive disorders

If neural pathways can be stimulated through an entertaining medium, the functions they support are more likely to remain accessible. This helps explain why music is so powerful and can be used in such a wide range of therapies. Enjoyable music elicits a distinctive response: listening is not perceived as a chore, yet the brain is still learning and exercising the same functions it uses when speaking or acquiring language. Music therefore has the capacity to be a very productive form of therapy, largely because it is stimulating, entertaining, and rewarding. Using fMRI, Menon and Levitin found for the first time that listening to music strongly modulates activity in a network of mesolimbic structures involved in reward processing, including the nucleus accumbens and the ventral tegmental area (VTA), as well as the hypothalamus and insula, all thought to be involved in regulating autonomic and physiological responses to rewarding and emotional stimuli (Gold, 2013).

Pitch perception was positively correlated with phonemic awareness and reading abilities in children (Flaugnacco, 2014). Likewise, the ability to tap to a rhythmic beat correlated with performance on reading and attention tests (Flaugnacco, 2014). These are only a fraction of the studies linking reading skills with rhythmic perception, an association supported by a meta-analysis of 25 cross-sectional studies that found a significant link between music training and reading skills (Butzlaff, 2000). Given how extensive the correlation is, researchers have naturally asked whether music could serve as an alternative pathway to strengthen reading abilities in people with developmental disorders such as dyslexia. Dyslexia is a disorder characterized by a long-lasting difficulty in reading acquisition, specifically text decoding: reading is slow and inaccurate despite adequate intelligence and instruction. The difficulties have been shown to stem from a phonological core deficit that impacts reading comprehension, memory, and prediction abilities (Flaugnacco, 2014). Music training has been shown to modify reading and phonological abilities even when these skills are severely impaired: by improving temporal processing and rhythm abilities through training, phonological awareness and reading skills in children with dyslexia were improved. The OPERA hypothesis proposed by Patel (2011) states that because music places higher demands on these processes than speech, it drives adaptive brain plasticity in the same neural networks involved in language processing.

Parkinson's disease is a complex neurological disorder that negatively impacts both motor and non-motor functions, caused by the degeneration of dopaminergic (DA) neurons in the substantia nigra (Ashoori, 2015). This in turn leads to a DA deficiency in the basal ganglia. Dopamine deficiencies in these areas of the brain have been shown to cause symptoms such as tremors at rest, rigidity, akinesia, and postural instability. They are also associated with impairments of an individual's internal timing (Ashoori, 2015). Rhythm is a powerful sensory cue that has been shown to help regulate motor timing and coordination when the brain's internal timing system is deficient. Some studies have shown that musically cued gait training significantly improves multiple deficits of Parkinson's, including gait, motor timing, and perceptual timing. Ashoori's study consisted of 15 non-demented patients with idiopathic Parkinson's who had no prior musical training and maintained their dopamine therapy during the trials. There were three 30-minute training sessions per week for one month, in which the participants walked to the beats of German folk music without explicit instructions to synchronize their footsteps to the beat. Compared to pre-training gait performance, the Parkinson's patients showed significant improvement in gait velocity and stride length during the training sessions. The gait improvement was sustained for one month after training, which indicates a lasting therapeutic effect. Even though synchronization was never explicitly instructed, the patients' gait automatically synchronized with the rhythm of the music. The lasting therapeutic effect also suggests that the training affected the individuals' internal timing in a way that could not be accessed by other means.

See also

References

  1. ^ Laske, Otto (1999). Navigating New Musical Horizons (Contributions to the Study of Music and Dance). Westport: Greenwood Press. ISBN 978-0-313-30632-7.
  2. ^ Laske, O. (1999). AI and music: A cornerstone of cognitive musicology. In M. Balaban, K. Ebcioglu, & O. Laske (Eds.), Understanding music with AI: Perspectives on music cognition. Cambridge: The MIT Press.
  3. ^ Graci, C (2009). "A brief tour of the learning sciences featuring a cognitive tool for investigating melodic phenomena". Journal of Educational Technology Systems. 38 (2): 181–211. doi:10.2190/et.38.2.i.
  4. ^ Hamman, M., 1999. "Structure as Performance: Cognitive Musicology and the Objectification of Procedure," in Otto Laske: Navigating New Musical Horizons, ed. J. Tabor. New York: Greenwood Press.
  5. ^ Longuet-Higgins, C. (1987) Mental Processes: Studies in cognitive science. Cambridge, MA, US: The MIT Press.
  6. ^ Krumhansl, Carol (1990). Cognitive Foundations of Musical Pitch. Oxford Oxfordshire: Oxford University Press. ISBN 978-0-19-505475-0.
  7. ^ Krumhansl, C.; Kessler, E. (1982). "Tracing the dynamic changes in perceived tonal organisation in a spatial representation of musical keys". Psychological Review. 89 (4): 334–368. doi:10.1037/0033-295x.89.4.334.
  8. ^ Schmuckler, M. A.; Tomovski, R. (2005). "Perceptual tests of musical key-finding". Journal of Experimental Psychology: Human Perception and Performance. 31 (5): 1124–1149. doi:10.1037/0096-1523.31.5.1124. PMID 16262503.
  9. ^ Temperley, David (2001). The Cognition of Basic Musical Structures. Cambridge: MIT Press. ISBN 978-0-262-20134-6.
  10. ^ Laske, Otto (1999). Otto Laske. Westport: Greenwood Press. ISBN 978-0-313-30632-7.
  11. ^ Balaban, Mira (1992). Understanding Music with AI. Menlo Park: AAAI Press. ISBN 978-0-262-52170-3.
  12. ^ Minsky, M (1981). "Music, mind, and meaning". Computer Music Journal. 5 (3): 28–44. doi:10.2307/3679983. JSTOR 3679983.
  13. ^ Hofstadter, Douglas (1999). Gödel, Escher, Bach. New York: Basic Books. ISBN 978-0-465-02656-2.
  14. ^ Larson, S (2004). "Musical Forces and Melodic Expectations: Comparing Computer Models with Experimental Results". Music Perception. 21 (4): 457–498. doi:10.1525/mp.2004.21.4.457.
  15. ^ Cope, David (2004). Virtual Music. Cambridge: The MIT Press. ISBN 978-0-262-53261-7.
  16. ^ Cope, David (1996). Experiments in Musical Intelligence. Madison: A-R Editions. ISBN 978-0-89579-337-9.
  17. ^ Honing, H (1993). "A microworld approach to formalizing musical knowledge". Computers and the Humanities. 27 (1): 41–47. doi:10.1007/bf01830716.
  18. ^ Taube, Heinrich (2004). Notes from the Metalevel. New York: Routledge. ISBN 978-90-265-1975-8.
  19. ^ Rowe, Robert (2004). Machine Musicianship. Cambridge: MIT Press. ISBN 978-0-262-68149-0.
  20. ^ Huron, D. (2002). "Music Information Processing Using the Humdrum Toolkit: Concepts, Examples, and Lessons". Computer Music Journal. 26 (2): 11–26.
  21. ^ Wiggins, G.; et al. (1993). "A Framework for the Evaluation of Music Representation Systems". Computer Music Journal. 17 (3): 31–42. doi:10.2307/3680941. JSTOR 3680941.
  22. ^ Bharucha, J. J., & Todd, P. M. (1989). Modeling the perception of tonal structure with neural nets. Computer Music Journal, 44−53
  23. ^ Biles, J. A. 1994. "GenJam: A Genetic Algorithm for Generating Jazz Solos." Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association
  24. ^ Nierhaus, Gerhard (2008). Algorithmic Composition. Berlin: Springer. ISBN 978-3-211-75539-6.
  25. ^ Cope, David (2005). Computer Models of Musical Creativity. Cambridge: MIT Press. ISBN 978-0-262-03338-1.
  26. ^ Deutsch, Diana (1999). The Psychology of Music. Boston: Academic Press. ISBN 978-0-12-213565-1.
  27. ^ Patel, Aniruddh (2008). Music, Language, and the Brain. Oxford: Oxford University Press.
  28. ^ Lerdahl, Fred; Ray Jackendoff (1996). A Generative Theory of Tonal Music. Cambridge: MIT Press. ISBN 978-0-262-62107-6.
  29. ^ Katz, Jonah; David Pesetsky (May 2009). "The Recursive Syntax and Prosody of Tonal Music" (PDF). Recursion: Structural Complexity in Language and Cognition. Conference at UMass Amherst.
  30. ^ Hamanaka, Masatoshi; Hirata, Keiji; Tojo, Satoshi (2006). "Implementing 'A Generative Theory of Tonal Music'". Journal of New Music Research. 35 (4): 249–277. doi:10.1080/09298210701563238.
  31. ^ Uwe Seifert: Systematische Musiktheorie und Kognitionswissenschaft. Zur Grundlegung der kognitiven Musikwissenschaft. Orpheus Verlag für systematische Musikwissenschaft, Bonn 1993

Further reading

External links

Absolute pitch

Absolute pitch (AP), widely referred to as perfect pitch, is a rare auditory phenomenon characterized by the ability of a person to identify or re-create a given musical note without the benefit of a reference tone. AP can be demonstrated via linguistic labeling ("naming" a note), auditory imagery, or sensorimotor responses. For example, an AP possessor can accurately reproduce a heard tone on a musical instrument without "hunting" for the correct pitch. Researchers estimate the occurrence of AP to be 1 in 10,000 people.

Generally, absolute pitch implies some or all of the following abilities, achieved without a reference tone:

Identify by name individual pitches (e.g. F♯, A, G, C) played on various instruments.

Name the key of a given piece of tonal music.

Reproduce a piece of tonal music in the correct key days after hearing it.

Identify and name all the tones of a given chord or other tonal mass.

Accurately sing a named pitch.

Name the pitches of common everyday sounds such as car horns and alarms.

Name the frequency of a pitch (e.g. that G♯4 is 415 Hz) after hearing it.

People may have absolute pitch along with the ability of relative pitch, and relative and absolute pitch work together in actual musical listening and practice, but strategies in using each skill vary. Those with absolute pitch may train their relative pitch, and there has been a reported case of six adults in China (with previous musical training) acquiring absolute pitch through specific tonal training. Furthermore, two studies by Harvard and the University of Chicago have shown that valproate, a medication used to treat epilepsy and severe depression, may re-open the "critical period" of learning, making the acquisition of absolute pitch, as well as languages, potentially as efficient for adults as for children. Adults who possess relative pitch but do not already have absolute pitch can also learn "pseudo-absolute pitch" and become able to identify notes in a way that superficially resembles absolute pitch. Moreover, training absolute pitch can require considerable motivation, time, and effort, and learning is not retained without constant practice and reinforcement.
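The frequency-naming example above follows from twelve-tone equal temperament, in which each semitone step multiplies frequency by 2^(1/12), anchored at A4 = 440 Hz (MIDI note 69). A minimal sketch:

```python
# Equal-temperament pitch-to-frequency conversion, anchored at A4 = 440 Hz.

def midi_to_hz(note, a4=440.0):
    """Frequency in Hz of a MIDI note number in 12-tone equal temperament."""
    return a4 * 2 ** ((note - 69) / 12)
```

G♯4 (MIDI note 68) sits one semitone below A4, giving 440 × 2^(−1/12) ≈ 415.30 Hz, the value cited in the example above.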

Background music

Background music refers to a mode of musical performance in which the music is not intended to be a primary focus of potential listeners, but its content, character, and volume level are deliberately chosen to affect behavioral and emotional responses in humans, such as concentration, relaxation, distraction, and excitement. Listeners are uniquely subject to background music, with no control over its volume and content. The responses produced vary widely, and can even be opposite, depending on numerous factors such as setting, culture, audience, and even time of day.

Background music is commonly played where there is no audience at all, such as empty hallways, restrooms, and fitting rooms. It is also used in artificial space, such as music played while on hold during a telephone call, and virtual space, as in the ambient sounds or thematic music in massively multiplayer online role-playing games. It is typically played at low volumes from multiple small speakers distributing the music across broad public spaces.

The widespread use of background music in offices, restaurants, and stores began with the founding of Muzak in the 1930s and was characterized by repetition and simple musical arrangements. Its use has grown worldwide and today incorporates the findings of psychological research relating to consumer behavior in retail environments, employee productivity, and workplace satisfaction.

Due to the growing variety of settings (from doctors' offices to airports), many styles of music are utilized as background music. Because the aim of background music is passive listening, vocals, commercial interruptions, and complexity are typically avoided. In spite of the international distribution common to syndicated background music artists, the genre is often associated with artistic failure and a lack of musical talent in the entertainment industry. There are composers who write specifically for music syndication services such as Dynamic Media and Mood Media, successors of Muzak.


Biomusicology

Biomusicology is the study of music from a biological point of view. The term was coined by Nils L. Wallin in 1991 to encompass several branches of music psychology and musicology, including evolutionary musicology, neuromusicology, and comparative musicology.

Evolutionary musicology studies the "origins of music, the question of animal song, selection pressures underlying music evolution", and "music evolution and human evolution". Neuromusicology studies the "brain areas involved in music processing, neural and cognitive processes of musical processing", and "ontogeny of musical capacity and musical skill". Comparative musicology studies the "functions and uses of music, advantages and costs of music making", and "universal features of musical systems and musical behavior".

Applied biomusicology "attempts to provide biological insight into such things as the therapeutic uses of music in medical and psychological treatment; widespread use of music in the audiovisual media such as film and television; the ubiquitous presence of music in public places and its role in influencing mass behavior; and the potential use of music to function as a general enhancer of learning."

Whereas biomusicology refers to music among humans, zoomusicology extends the field to other species.

Christopher Longuet-Higgins

Hugh Christopher Longuet-Higgins (April 11, 1923 – March 27, 2004) was both a theoretical chemist and a cognitive scientist.

Evolutionary musicology

Evolutionary musicology is a subfield of biomusicology that grounds the psychological mechanisms of music perception and production in evolutionary theory. It covers vocal communication in non-human animal species, theories of the evolution of human music, and cross-cultural human universals in musical ability and processing.

Generative grammar

Generative grammar is a linguistic theory that regards grammar as a system of rules that generates exactly those combinations of words that form grammatical sentences in a given language. Noam Chomsky first used the term in relation to the theoretical linguistics of grammar that he developed in the late 1950s. Linguists who follow the generative approach have been called generativists. The generative school has focused on the study of syntax, and has also addressed other aspects of a language's structure, including morphology and phonology.

Early versions of Chomsky's theory were called transformational grammar, a term still used to include his subsequent theories, the most recent of which is the minimalist program. Chomsky and other generativists have argued that many of the properties of a generative grammar arise from a universal grammar that is innate to the human brain, rather than being learned from the environment (see the poverty of the stimulus argument).

There are a number of versions of generative grammar currently practiced within linguistics.

A contrasting approach is that of constraint-based grammars. Where a generative grammar attempts to list all the rules that result in all well-formed sentences, constraint-based grammars allow anything that is not otherwise constrained. Certain versions of dependency grammar, head-driven phrase structure grammar, lexical functional grammar, categorial grammar, relational grammar, link grammar, and tree-adjoining grammar are constraint-based grammars that have been proposed. In stochastic grammar, grammatical correctness is taken as a probabilistic variable, rather than a discrete (yes or no) property.
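The idea that a grammar "generates exactly those combinations of words that form grammatical sentences" can be illustrated with a toy context-free grammar expanded exhaustively. The grammar below is invented for illustration; real generative grammars are recursive and far richer.

```python
import itertools

# A toy context-free grammar: nonterminals map to lists of productions,
# and anything not in the table is a terminal word.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["hears"]],
}

def expand(symbol):
    """Return every word sequence this symbol can generate."""
    if symbol not in GRAMMAR:          # terminal word
        return [[symbol]]
    sentences = []
    for production in GRAMMAR[symbol]:
        # Expand each symbol in the production, then combine the results.
        parts = [expand(s) for s in production]
        for combo in itertools.product(*parts):
            sentences.append([w for part in combo for w in part])
    return sentences
```

Since this grammar is finite, `expand("S")` enumerates all eight of its sentences, such as "the dog sees the cat"; the grammatical sentences are exactly those the rules generate, which is the core generative claim.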

Generative theory of tonal music

A generative theory of tonal music (GTTM) is a theory of music conceived by American composer and music theorist Fred Lerdahl and American linguist Ray Jackendoff and presented in their 1983 book of the same title. It constitutes a "formal description of the musical intuitions of a listener who is experienced in a musical idiom" with the aim of illuminating the unique human capacity for musical understanding.

The collaboration between Lerdahl and Jackendoff was inspired by Leonard Bernstein's 1973 Charles Eliot Norton Lectures at Harvard University, wherein he called for researchers to uncover a musical grammar that could explain the human musical mind in a scientific manner comparable to Noam Chomsky's revolutionary transformational or generative grammar.

Unlike the major methodologies of music analysis that preceded it, GTTM construes the mental procedures under which the listener constructs an unconscious understanding of music, and uses these tools to illuminate the structure of individual compositions. The theory has been influential, spurring further work by its authors and other researchers in the fields of music theory, music cognition and cognitive musicology.

Jamshed Bharucha

Jamshed Bharucha is the inaugural Vice Chancellor of SRM University, Andhra Pradesh in Amaravati, in the newly planned capital city of the state of Andhra Pradesh in India. He previously served as Distinguished Fellow and Research Professor at Dartmouth College, where his research and teaching focused on education data science. He is President Emeritus of Cooper Union, a college located in Manhattan, New York City, having served as the 12th President of Cooper Union from July 2011 through June 2015.

Prior to becoming President of Cooper Union, Bharucha was Provost and Senior Vice President of Tufts University and Professor in the Departments of Psychology, Music and in the Medical School's Department of Neuroscience. Prior to Tufts he was the John Wentworth Professor of Psychological & Brain Sciences and Dean of the Faculty of Arts & Sciences at Dartmouth College, where he received the Huntington Teaching Award. His research is in cognitive psychology and neuroscience, focusing on the cognitive and neural basis of the perception of music. He was Editor of the interdisciplinary journal Music Perception and was a Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University.

He is a Trustee of Vassar College, where he has chaired the Budget & Finance Committee and the Academic Affairs Committee. He received the Distinguished Achievement Award from the Alumnae & Alumni of Vassar College. Other past and present board service includes: the Board of Managers of SRM University - Amaravati, a new university in Amaravati, the new capital of the State of Andhra Pradesh in India; the Board of Trustees of Vellore Christian Medical College Foundation, which supports Christian Medical College and Hospital, the top private medical school in India; and the Honorary Advisory Board of IIMUN.

Music-related memory

Musical memory refers to the ability to remember music-related information, such as melodic content and other progressions of tones or pitches. The differences found between linguistic memory and musical memory have led researchers to theorize that musical memory is encoded differently from language and may constitute an independent part of the phonological loop. The use of this term is problematic, however, since it implies input from a verbal system, whereas music is in principle nonverbal.

Music-specific disorders

Neuroscientists have learned a great deal about the role of the brain in numerous cognitive mechanisms by understanding corresponding disorders. Similarly, neuroscientists have come to learn a great deal about music cognition by studying music-specific disorders. Even though music has most often been viewed from a "historical perspective rather than a biological one", and for many centuries has been strongly associated with art and culture, it has increasingly gained the attention of neuroscientists around the world. The reason for this increased interest is that music "provides a tool to study numerous aspects of neuroscience, from motor skill learning to emotion".

Music and artificial intelligence

Research in artificial intelligence (AI) is known to have impacted medical diagnosis, stock trading, robot control, and several other fields. Less widely known is the contribution of AI to the field of music. Nevertheless, artificial intelligence and music (AIM) has long been a common subject at several conferences and workshops, including the International Computer Music Conference, the Computing Society Conference, and the International Joint Conference on Artificial Intelligence. The first International Computer Music Conference (ICMC) was held in 1974 at Michigan State University in East Lansing, USA.

Current research includes the application of AI in music composition, performance, theory, and digital sound processing. Several music software systems have been developed that use AI to produce music. As in its applications in other fields, the AI in this case simulates mental tasks. A prominent feature is the ability of an AI algorithm to learn from the information it obtains, as in computer accompaniment technology, which is capable of listening to and following a human performer so it can play in synchrony. Artificial intelligence also drives so-called interactive composition technology, in which a computer composes music in response to the performance of a live musician. There are several other AI applications to music that cover not only composition, production, and performance but also the way music is marketed and consumed. Companies like Apple and Spotify rely on user data to power recommendation systems that keep consumers listening and surface the right songs to users according to their preferences.
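The accompaniment behavior described above, a computer "listening to and following a human performer", is often grounded in sequence-alignment algorithms. The sketch below uses dynamic time warping over symbolic pitch sequences as a simplified stand-in; production accompaniment systems work on live audio with real-time, often probabilistic, models.

```python
# Toy score-performance alignment via dynamic time warping (DTW):
# the minimal total cost of aligning a performed note sequence to the
# score, tolerating repeated, stretched, or compressed notes.

def dtw_distance(score, performance, cost=lambda a, b: abs(a - b)):
    """Minimal alignment cost between two pitch sequences (e.g. MIDI numbers)."""
    n, m = len(score), len(performance)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(score[i - 1], performance[j - 1])
            # Advance in the score, in the performance, or in both.
            d[i][j] = c + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A performance that repeats or holds a note still aligns to the score at zero cost, which is why this family of algorithms suits following a human player whose timing varies.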

Music psychology

Music psychology, or the psychology of music, may be regarded as a branch of both psychology and musicology. It aims to explain and understand musical behaviour and experience, including the processes through which music is perceived, created, responded to, and incorporated into everyday life. Modern music psychology is primarily empirical; its knowledge tends to advance on the basis of interpretations of data collected by systematic observation of and interaction with human participants. Music psychology is a field of research with practical relevance for many areas, including music performance, composition, education, criticism, and therapy, as well as investigations of human attitude, skill, performance, intelligence, creativity, and social behavior.

Music psychology can shed light on non-psychological aspects of musicology and musical practice. For example, it contributes to music theory through investigations of the perception and computational modelling of musical structures such as melody, harmony, tonality, rhythm, meter, and form. Research in music history can benefit from systematic study of the history of musical syntax, or from psychological analyses of composers and compositions in relation to perceptual, affective, and social responses to their music. Ethnomusicology can benefit from psychological approaches to the study of music cognition in different cultures.

Pitch (music)

Pitch is a perceptual property of sounds that allows their ordering on a frequency-related scale, or more commonly, pitch is the quality that makes it possible to judge sounds as "higher" and "lower" in the sense associated with musical melodies. Pitch can be determined only in sounds that have a frequency clear and stable enough to be distinguished from noise. Pitch is a major auditory attribute of musical tones, along with duration, loudness, and timbre. Pitch may be quantified as a frequency, but pitch is not a purely objective physical property; it is a subjective psychoacoustical attribute of sound. Historically, the study of pitch and pitch perception has been a central problem in psychoacoustics, and has been instrumental in forming and testing theories of sound representation, processing, and perception in the auditory system.
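
Since pitch may be quantified as a frequency, the standard equal-tempered mapping from note to frequency can be sketched in a few lines. This is a minimal illustration, assuming the common convention of A4 = 440 Hz at MIDI note number 69 (the function name is hypothetical).

```python
def midi_to_frequency(note: int, a4_hz: float = 440.0) -> float:
    """Equal-tempered frequency of a MIDI note number (A4 = note 69)."""
    return a4_hz * 2.0 ** ((note - 69) / 12)

print(round(midi_to_frequency(69), 2))  # A4 -> 440.0
print(round(midi_to_frequency(60), 2))  # C4 (middle C) -> 261.63
```

Each semitone step multiplies the frequency by the twelfth root of two, so twelve steps double it, which is one octave.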

Prehistoric music

Prehistoric music (previously called primitive music) is a term in the history of music for all music produced in preliterate cultures (prehistory), beginning somewhere in very late geological history. Prehistoric music was followed by ancient music in different parts of the world, but it still survives in isolated areas; however, it is more common to refer to surviving "prehistoric" music as folk, indigenous or traditional music. Prehistoric music is studied alongside other periods within music archaeology.

Findings from Paleolithic archaeology sites suggest that prehistoric people used carving and piercing tools to create instruments. Archaeologists have found Paleolithic flutes carved from bones in which lateral holes have been pierced. The Divje Babe flute, carved from a cave bear femur, is thought to be at least 40,000 years old. Instruments such as the seven-holed flute and various types of stringed instruments, such as the Ravanahatha, have been recovered from Indus Valley Civilization archaeological sites. India has one of the oldest musical traditions in the world—references to Indian classical music (marga) are found in the Vedas, ancient scriptures of the Hindu tradition. The earliest and largest collection of prehistoric musical instruments was found in China and dates back to between 7000 and 6600 BCE.


Psychoacoustics

Psychoacoustics is the scientific study of sound perception and audiology – how humans perceive various sounds. More specifically, it is the branch of science studying the psychological and physiological responses associated with sound (including noise, speech and music). It can be further categorized as a branch of psychophysics. Psychoacoustics received its name from a field within psychology—i.e., recognition science—which deals with all kinds of human perceptions. It is an interdisciplinary field of many areas, including psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.

Relative pitch

Relative pitch is the ability of a person to identify or re-create a given musical note by comparing it to a reference note and identifying the interval between those two notes. Relative pitch implies some or all of the following abilities:

Determine the distance of a musical note from a set point of reference, e.g. "three octaves above middle C"

Identify the intervals between given tones, regardless of their relation to concert pitch (A = 440 Hz)

The skill used by singers to correctly sing a melody, following musical notation, by pitching each note in the melody according to its distance from the previous note; alternatively, the same skill allows someone to hear a melody for the first time and name the notes relative to some known reference pitch. This last definition, which applies not only to singers but also to players of instruments who rely on their own skill to determine the precise pitch of the notes played (wind instruments, fretless string instruments such as the violin or viola, etc.), is an essential skill for musicians who want to play successfully with others. As an example, consider the different concert pitches used by orchestras playing music from different styles (a baroque orchestra using period instruments might decide to use a higher-tuned pitch).

Unlike absolute pitch (sometimes called "perfect pitch"), relative pitch is quite common among musicians, especially musicians who are used to "playing by ear", and a precise relative pitch is a constant characteristic among good musicians. Unlike perfect pitch, relative pitch can be developed through ear training. Computer-aided ear training is becoming a popular tool for musicians and music students, and various software is available for improving relative pitch. Some music teachers teach their students relative pitch by having them associate each possible interval with the first two notes of a popular song. (See ear training.) Another method of developing relative pitch is playing melodies by ear on a musical instrument, especially one which, unlike a piano or other fingered instrument, requires a specific manual adjustment for each particular tone. Indian musicians learn relative pitch by singing intervals over a drone, a practice also described by W. A. Mathieu using Western just intonation terminology. Many Western ear training classes use solfège to teach students relative pitch, while others use numerical sight-singing.

Compound intervals (intervals greater than an octave) can be more difficult to detect than simple intervals (intervals less than an octave). Interval recognition is used to identify chords, and can be applied to accurately tune an instrument with respect to a given reference tone, even if that tone is not at concert pitch.
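
The interval between two tones can also be computed directly: in equal temperament it is 12·log2(f2/f1) semitones, independent of whether the reference tone is at concert pitch. The sketch below is a minimal illustration (the function names are hypothetical).

```python
import math

SIMPLE_INTERVALS = ["unison", "minor 2nd", "major 2nd", "minor 3rd",
                    "major 3rd", "perfect 4th", "tritone", "perfect 5th",
                    "minor 6th", "major 6th", "minor 7th", "major 7th",
                    "octave"]

def interval_semitones(f1: float, f2: float) -> int:
    """Nearest equal-tempered interval, in semitones, between two frequencies."""
    return round(12 * math.log2(f2 / f1))

def name_interval(f1: float, f2: float) -> str:
    """Name an ascending interval; anything beyond an octave is compound."""
    n = interval_semitones(f1, f2)
    if n <= 12:
        return SIMPLE_INTERVALS[n]
    return f"compound ({n} semitones)"

print(name_interval(440.0, 880.0))    # octave
print(name_interval(261.63, 392.00))  # C4 to G4 -> perfect 5th
print(name_interval(415.0, 830.0))    # still an octave at a lower reference pitch
```

Because only the frequency ratio matters, the same interval is recognized whether the orchestra tunes to A = 440 Hz or to some other reference.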


Rhythm

Rhythm (from Greek ῥυθμός, rhythmos, "any regular recurring motion, symmetry" (Liddell and Scott 1996)) generally means a "movement marked by the regulated succession of strong and weak elements, or of opposite or different conditions" (Anon. 1971, 2537). This general meaning of regular recurrence or pattern in time can apply to a wide variety of cyclical natural phenomena having a periodicity or frequency of anything from microseconds to several seconds (as with the riff in a rock music song); to several minutes or hours, or, at the most extreme, even over many years.

In the performance arts, rhythm is the timing of events on a human scale; of musical sounds and silences that occur over time, of the steps of a dance, or the meter of spoken language and poetry. In some performing arts, such as hip hop music, the rhythmic delivery of the lyrics is one of the most important elements of the style. Rhythm may also refer to visual presentation, as "timed movement through space" (Jirousek 1995) and a common language of pattern unites rhythm with geometry. In recent years, rhythm and meter have become an important area of research among music scholars. Recent work in these areas includes books by Maury Yeston (1976), Fred Lerdahl and Ray Jackendoff (Lerdahl and Jackendoff 1983), Jonathan Kramer, Christopher Hasty (1997), Godfried Toussaint (2005), William Rothstein (1989), Joel Lester (Lester 1986), and Guerino Mazzola.

Temporal dynamics of music and language

The temporal dynamics of music and language describes how the brain coordinates its different regions to process musical and vocal sounds. Both music and language feature rhythmic and melodic structure. Both employ a finite set of basic elements (such as tones or words) that are combined in ordered ways to create complete musical or lingual ideas.
