Computational musicology

Computational musicology is an interdisciplinary research area between musicology and computer science.[1] Computational musicology is a general term referring to any discipline that uses computers to study music; it includes sub-disciplines such as mathematical music theory, computer music, systematic musicology, music information retrieval, digital musicology, sound and music computing, and music informatics.[2] As this area of research is defined by the tools that it uses and its subject matter, research in computational musicology intersects with both the humanities and the sciences. The use of computers to study and analyze music generally began in the 1960s,[3] although musicians had been using computers to assist in the composition of music since the 1950s.[4] Today, computational musicology encompasses a wide range of research topics dealing with the multiple ways music can be represented.[5]

History

The history of computational musicology generally began in the middle of the 20th century. The field is commonly considered an extension of a much longer history of intellectual inquiry in music that overlaps with science, mathematics, technology,[6] and archiving.

1960s

Early approaches to computational musicology began in the early 1960s and were fully developed by 1966.[7][3] At this point in time data entry was done primarily with paper tape or punch cards[3] and was computationally limited. Due to the high cost of this research, funded projects often tended to ask global questions and look for global solutions.[3] One of the earliest symbolic representation schemes was the Digital Alternate Representation of Musical Scores, or DARMS. The project was supported by Columbia University and the Ford Foundation between 1964 and 1976.[8] It was one of the initial large-scale projects to develop an encoding scheme that incorporated completeness, objectivity, and encoder-directedness.[8] Other work at this time at Princeton University, chiefly driven by Arthur Mendel and implemented by Michael Kassler and Eric Regener, helped push forward the Intermediary Musical Language (IML) and Music Information Retrieval (MIR) languages, which later fell out of popularity in the late 1970s. The 1960s also saw bibliographic initiatives such as the Répertoire International de Littérature Musicale (RILM), created by Barry Brook in 1967.

1970s

Unlike the global research interests of the 1960s, goals in computational musicology in the 1970s were driven by accomplishing specific tasks.[3] This task-driven motivation led to the development of MUSTRAN for music analysis, led by Jerome Wenker and Dorothy Gross at Indiana University. Similar projects, such as SCORE (SCORE-MS) at Stanford University, were developed primarily for printing purposes.

1980s

The 1980s were the first decade to move away from centralized computing toward personal computing. This transfer of resources led to growth in the field as a whole. John Walter Hill began developing a commercial program called Savy PC that was meant to help musicologists analyze lyrical content in music. Hill's findings identified patterns in the conversion of sacred and secular texts where only the first lines of the texts were changed.[3] In keeping with the global questions that dominated the 1960s, Helmuth Schaffrath began his Essen Folk Collection, encoded in the Essen Associative Code (ESAC), which has since been converted to Humdrum notation.[9] Using software developed at the time, Sandra Pinegar examined 13th-century music theory manuscripts in her doctoral work at Columbia University in order to gain evidence on the dating and authorship of texts.[10] The 1980s also saw the introduction of MIDI.

Methods

Computational musicology can be generally divided into three main branches relating to the three ways music can be represented by a computer: sheet music data, symbolic data, and audio data. Sheet music data refers to the human-readable, graphical representation of music via symbols. Examples of this branch of research include digitizing scores ranging from 15th-century neumatic notation to contemporary Western music notation. Like sheet music data, symbolic data refers to musical notation in a digital format, but symbolic data is not human-readable and is encoded to be parsed by a computer. Examples of this type of encoding include piano roll, kern,[11] and MIDI representations. Lastly, audio data refers to recordings of the acoustic wave, or sound, that results from oscillations in air pressure.[12] Examples of this type of encoding include MP3 or WAV files.

Sheet music data

Sheet music is meant to be read by the musician or performer. Generally, the term refers to the standardized nomenclature used by a culture to document its musical notation. In addition to musical literacy, notation also demands choices from the performer. For example, the notation of a Hindustani raga will begin with an alap that does not demand strict adherence to a beat or pulse but is left to the discretion of the performer.[13] Sheet music notation captures the sequence of gestures the performer is encouraged to make within a musical culture, but it is by no means fixed to those performance choices.

Symbolic data

Symbolic data refers to musical encoding that can be parsed by a computer. Strictly speaking, any digital data format may be regarded as symbolic, since the system representing it is generated from a finite series of symbols. Unlike sheet music data, symbolic data typically does not encode any of the performative choices required of a performer.[5]
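A minimal sketch of working with symbolic data, assuming the third-party music21 Python library and a hypothetical Humdrum kern file named example.krn; it simply parses the file and prints machine-readable attributes of each note:

from music21 import converter

# Parse a symbolic encoding; music21's converter also accepts MusicXML and MIDI.
score = converter.parse("example.krn")  # hypothetical file name

# Walk every Note object and print attributes a computer can work with directly.
for note in score.flatten().getElementsByClass("Note"):
    print(note.offset,          # position, in quarter notes from the start
          note.nameWithOctave,  # e.g. "C4"
          note.pitch.midi,      # MIDI note number, e.g. 60
          note.quarterLength)   # duration, in quarter notes

Because the same parsing call accepts MusicXML or MIDI input, the sketch applies to other symbolic formats as well.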

Audio data

Audio data is generally conceptualized as existing on a continuum of features ranging from lower- to higher-level audio features. Low-level audio features include loudness, spectral flux, and cepstrum. Mid-level audio features include pitch, onsets, and beats. Examples of high-level audio features include style, artist, mood, and key.[14]
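This continuum can be illustrated with a minimal sketch, assuming the third-party librosa Python library and a hypothetical recording example.wav: it computes low-level features (RMS loudness and a spectral-flux-style novelty curve) and mid-level features (onsets, tempo, and beats), while high-level features such as mood or key would typically be inferred from these with statistical or machine-learning models.

import librosa

# Load the recording as a sampled waveform (air-pressure oscillations over time).
y, sr = librosa.load("example.wav")  # hypothetical file name

# Low-level features: frame-wise loudness and an onset-strength (spectral flux) curve.
rms = librosa.feature.rms(y=y)
flux = librosa.onset.onset_strength(y=y, sr=sr)

# Mid-level features: onset times in seconds, plus estimated tempo and beat frames.
onsets = librosa.onset.onset_detect(onset_envelope=flux, sr=sr, units="time")
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)

print(f"mean RMS {rms.mean():.4f}, {len(onsets)} onsets, tempo ~{float(tempo):.1f} BPM")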

Applications

Music databases

One of the earliest applications in computational musicology was the creation and use of musical databases. Entering, using, and analyzing large amounts of data can be very cumbersome with manual methods, whereas computers make such tasks considerably easier.

Analysis of music

Different computer programs have been developed to analyze musical data, in formats ranging from standard notation to raw audio. Analysis of formats that store all properties of each note, for example MIDI, was the original approach and is still among the most common methods. Significant advances in the analysis of raw audio data have been made only recently.
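As a hedged illustration of note-based analysis, the following sketch (assuming the third-party music21 Python library and a hypothetical MIDI file example.mid) computes a pitch-class distribution and applies music21's built-in Krumhansl-Schmuckler key-finding routine:

from collections import Counter
from music21 import converter

score = converter.parse("example.mid")  # hypothetical file name
notes = score.flatten().getElementsByClass("Note")

# Pitch-class distribution: a simple statistic over all stored note properties.
pitch_classes = Counter(n.pitch.pitchClassString for n in notes)
print(pitch_classes)

# Key estimation from the same note data (Krumhansl-Schmuckler profiles).
key = score.analyze("key")
print(key.tonic.name, key.mode)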

Artificial production of music

Different algorithms can be used to both create complete compositions and improvise music. One of the methods by which a program can learn improvisation is analysis of the choices a human player makes while improvising. Artificial neural networks are used extensively in such applications.
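As a much-simplified, hedged illustration of learning from a human player's choices, the sketch below trains a first-order Markov chain on a toy sequence of MIDI pitches and samples a new line from it; the neural-network systems mentioned above are considerably more elaborate, and all names and data here are hypothetical.

import random
from collections import defaultdict

def train(melody):
    """Record which pitch tends to follow which in a human-played melody."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def improvise(transitions, start, length=16):
    """Generate a new melody by sampling the learned transition choices."""
    pitch, output = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(pitch) or [start]  # restart on a dead end
        pitch = random.choice(choices)
        output.append(pitch)
    return output

human_melody = [60, 62, 64, 65, 64, 62, 60, 67, 65, 64, 62, 60]  # toy MIDI pitches
model = train(human_melody)
print(improvise(model, start=60))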

Historical change and music

One developing sociomusicological theory in computational musicology is the "Discursive Hypothesis" proposed by Kristoffer Jensen and David G. Hebert, which suggests that "because both music and language are cultural discourses (which may reflect social reality in similarly limited ways), a relationship may be identifiable between the trajectories of significant features of musical sound and linguistic discourse regarding social data."[15] According to this perspective, analyses of "big data" may improve our understanding of how particular features of music and society are interrelated and change similarly across time, as significant correlations are increasingly identified within the musico-linguistic spectrum of human auditory communication.[16]

Non-western music

Strategies from computational musicology have recently been applied to the analysis of music in various parts of the world. For example, professors affiliated with the Birla Institute of Technology in India have produced studies of harmonic and melodic tendencies (in the raga structure) of Hindustani classical music.[17]

Research

RISM's (Répertoire International des Sources Musicales) database is one of the world's largest music databases, containing over 700,000 references to musical manuscripts. Anyone can use its search engine to find compositions.[18]

The Centre for History and Analysis of Recorded Music (CHARM) has developed the Mazurka Project,[19] which offers "downloadable recordings . . . analytical software and training materials, and a variety of resources relating to the history of recording."

See also

References

  1. ^ "Unfolding the Potential of Computational Musicology" (PDF). Proceedings of the Thirteenth International Conference on Informatics and Semiotics in Organisations: Problems and Possibilities of Computational Humanities.
  2. ^ Meredith, David (2016). "Preface". Computational Music Analysis. New York: Springer. p. v. ISBN 978-3319259291.
  3. ^ a b c d e f "Computing in Musicology, 1966-91". Computers and the Humanities.
  4. ^ "Illiac Suite", Wikipedia, 2019-01-31, retrieved 2019-02-11
  5. ^ a b Müller, Meinard. Fundamentals of Music Processing: Audio, Analysis, Algorithms, Applications. Switzerland: Springer. ISBN 9783319219455. OCLC 918555094.
  6. ^ Forte, Allen (1967). "Music and computing: the present situation". Computers and The Humanities.
  7. ^ "Writings on the Use of Computers in Music". College Music Symposium. 6. Fall 1966 – via JSTOR.
  8. ^ a b ""The Darms Project": A Status Report".
  9. ^ "ESAC Data Homepage". www.esac-data.org. Retrieved 2019-02-11.
  10. ^ "Textual and conceptual relationships among theoretical writings on measurable music of the thirteenth and early fourteenth centuries - ProQuest". search.proquest.com. Retrieved 2019-02-11.
  11. ^ Huron, David. "Music information processing using the Humdrum Toolkit: Concepts, examples, and lessons". Computer Music Journal.
  12. ^ Müller, Meinard (2015), Müller, Meinard, ed., "Music Representations", Fundamentals of Music Processing: Audio, Analysis, Algorithms, Applications, Springer International Publishing, pp. 1–37, doi:10.1007/978-3-319-21945-5_1, ISBN 9783319219455, retrieved 2019-02-11
  13. ^ Bor, Joep; Rao, Suvarnalata; van der Meer, Wim; Harvey, Jane; Chaurasia, Hariprasad; Das Gupta, Buddhadev. The Raga Guide: A Survey of 74 Hindustani Ragas. Nimbus Records, 2002 (℗ 1999). ISBN 0954397606. OCLC 80291538. Retrieved 2019-02-11.
  14. ^ Pablo Bello, Juan. "Low-level features and timbre" (PDF). nyu.edu. Retrieved 2019-02-11.
  15. ^ McCollum, Jonathan and Hebert, David (2014) Theory and Method in Historical Ethnomusicology Lanham, MD: Lexington Books / Rowman & Littlefield ISBN 0739168266; p.62. Some of Jensen and Hebert's pioneering findings from 2013 on tendencies in US Billboard Hot 100 songs have since been replicated and expanded upon by other scholars (e.g. Mauch M, MacCallum RM, Levy M, Leroi AM. 2015 The evolution of popular music: USA 1960–2010. R. Soc. Open sci. 2: 150081. https://dx.doi.org/10.1098/rsos.150081).
  16. ^ Kristoffer Jensen and David G. Hebert (2016). Evaluation and Prediction of Harmonic Complexity Across 76 Years of Billboard 100 Hits. In R. Kronland-Martinet, M. Aramaki, and S. Ystad, (Eds.), Music, Mind, and Embodiment. Switzerland: Springer Press, pp.283-296. ISBN 978-3-319-46281-3.
  17. ^ Chakraborty, S., Mazzola, G., Tewari, S., Patra, M. (2014) "Computational Musicology in Hindustani Music" New York: Springer.
  18. ^ RISM database, <http://www.rism.info/>
  19. ^ Mazurka Project, <http://mazurka.org.uk/>

External links

Art Tatum

Arthur Tatum Jr. (October 13, 1909 – November 5, 1956) was an American jazz pianist.

Tatum is considered one of the greatest jazz pianists of all time. His performances were hailed for their technical proficiency and creativity, which set a new standard for jazz piano virtuosity. Critic Scott Yanow wrote, "Tatum's quick reflexes and boundless imagination kept his improvisations filled with fresh (and sometimes futuristic) ideas that put him way ahead of his contemporaries."

Computer audition

Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio understanding by machine. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, talks about these systems: "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents." Inspired by models of human audition, CA deals with questions of representation, transduction, grouping, use of musical knowledge and general sound semantics for the purpose of performing intelligent operations on audio and music signals by the computer. Technically this requires a combination of methods from the fields of signal processing, auditory modelling, music perception and cognition, pattern recognition, and machine learning, as well as more traditional methods of artificial intelligence for musical knowledge representation.

David De Roure

David Charles De Roure PhD FBCS MIMA CITP is a Professor of e-Research at the University of Oxford, where he was Director of the Oxford e-Research Centre (OeRC) from 2012 to 2017. From 2009 to 2013 he held the post of National Strategic Director for e-Social Science, and was subsequently a Strategic Advisor to the UK Economic and Social Research Council in the area of new and emerging forms of data and real-time analytics. He is a supernumerary Fellow of Wolfson College, Oxford, and an Oxford Martin School Senior Fellow.

Eduardo Reck Miranda

Eduardo Reck Miranda (born 1963) is a Brazilian composer of chamber and electroacoustic pieces but is most notable in the United Kingdom for his scientific research into computer music, particularly in the field of human-machine interfaces where brain waves will replace keyboards and voice commands to permit the disabled to express themselves musically.

GUIDO music notation

GUIDO Music Notation is a computer music notation format designed to logically represent all aspects of music in a manner that is both computer-readable and easily readable by human beings. It was named after Guido of Arezzo, who pioneered today's conventional musical notation 1,000 years ago.

GUIDO was first designed by Holger H. Hoos (then at Technische Universität Darmstadt, Germany, now at University of British Columbia, Canada) and Keith Hamel (University of British Columbia, Canada).

Later development was carried out within the SALIERI Project by Holger H. Hoos, Kai Renz and Jürgen F. Kilian.

GUIDO Music Notation has been designed to represent music in a logical format (with the ability to render to sheet music), whereas LilyPond is more narrowly focused on typesetting sheet music.

The basic idea behind the GUIDO design is representational adequacy, which means that simple musical concepts are represented in a simple way and only complex notions require more complex representations.[1] GUIDO is not primarily focused on conventional music notation, but has been invented as an open format, capable of storing musical, structural, and notational information.

GUIDO Music Notation is designed as a flexible and easily extensible open standard. In particular, its syntax does not restrict the features it can represent. Thus, GUIDO can be easily adapted and customized to cover specialized musical concepts as might be required in the context of research projects in computational musicology. More importantly, GUIDO is designed in a way that when using such custom extensions, the resulting GUIDO data can still be processed by other applications that support GUIDO but are not aware of the custom extensions, which are gracefully ignored. This design also greatly facilitates the incremental implementation of GUIDO support in music software, which can speed up the software development process significantly, especially for research software and prototypes.

GUIDO has been split into three consecutive layers. Basic GUIDO introduces the main concepts of the GUIDO design and allows much of today's conventional music to be represented. Advanced GUIDO extends Basic GUIDO by adding exact score formatting and some more advanced musical concepts. Finally, Extended GUIDO can represent user-defined extensions, such as microtonal information or user-defined pitch classes.

Guerino Mazzola

Guerino Mazzola (born 1947) is a Swiss mathematician, musicologist, and jazz pianist, as well as a writer.

Henkjan Honing

Henkjan Honing (born 1959 in Hilversum) is a Dutch researcher. He is professor of Music Cognition at both the Faculty of Humanities and the Faculty of Science of the University of Amsterdam (UvA). He conducts his research under the auspices of the Institute for Logic, Language and Computation (ILLC), and the University of Amsterdam’s Brain and Cognition (ABC) center.

Honing obtained his PhD at City University (London) in 1991 with research into the representation of time and temporal structure in music. During the period between 1992 and 1997, he worked as a KNAW Research Fellow (Academieonderzoeker) at the University of Amsterdam’s Institute for Logic, Language and Computation (ILLC), where he conducted a study on the formalization of musical knowledge. Up until 2003, he worked as a research coordinator at the Nijmegen Institute for Cognition and Information (NICI; now F.C. Donders Centre for Cognitive Neuroimaging) where he specialized in the computational modeling of music cognition. In 2007, he was appointed Associate Professor in Music Cognition at the University of Amsterdam’s Musicology capacity group. In 2010 he was awarded the KNAW-Hendrik Muller chair, designated on behalf of the Royal Netherlands Academy of Arts and Sciences (KNAW). In 2012 he was appointed strategic Professor of Cognitive and Computational Musicology, and in 2014 he became full professor in Music Cognition at both the Faculty of Humanities and the Faculty of Science of the University of Amsterdam. In 2013 he received a Distinguished Lorentz Fellowship, a prize granted by the Lorentz Center for the Sciences and the Netherlands Institute for Advanced Study in the Humanities and Social Sciences.

Henkjan Honing authored over 200 scientific publications in the areas of music cognition, musicality and music technology, and published several books for a general audience, including Iedereen is muzikaal. Wat we weten over het luisteren naar muziek (Nieuw Amsterdam, 2009/2012), published in English as Musical Cognition: A Science of Listening (Routledge, 2011/2013), and Aap slaat maat. Op zoek naar de oorsprong van muzikaliteit bij mens en dier (Nieuw Amsterdam, 2018) that will appear in English as The Evolving Animal Orchestra: In Search of What Makes Us Musical (2019, The MIT Press). In 2018 a research agenda on the topic of musicality appeared as The Origins of Musicality (2018, The MIT Press).

Henkjan is the older brother of the saxophonist Yuri Honing.

Indian classical music

Indian classical music is the classical music of the Indian subcontinent. It has two major traditions: the North Indian classical music tradition is called Hindustani, while the South Indian expression is called Carnatic. These traditions were not distinct until about the 16th century. From then on, during the turmoil of the period of Islamic rule of the Indian subcontinent, the traditions separated and evolved into distinct forms. Hindustani music emphasizes improvisation and exploring all aspects of a raga, while Carnatic performances tend to be short and composition-based. However, the two systems continue to have more common features than differences. The roots of the classical music of India are found in the Vedic literature of Hinduism and the ancient Natyashastra, the classic Sanskrit text on performance arts by Bharata Muni. The 13th-century Sanskrit text Sangita-Ratnakara of Sarangadeva is regarded as the definitive text by both the Hindustani music and the Carnatic music traditions. Indian classical music has two foundational elements, raga and tala. The raga, based on swara (notes including microtones), forms the fabric of a melodic structure, while the tala measures the time cycle. The raga gives an artist a palette to build the melody from sounds, while the tala provides them with a creative framework for rhythmic improvisation using time. In Indian classical music the space between the notes is often more important than the notes themselves, and it does not have Western classical concepts such as harmony, counterpoint, chords, or modulation.

International Society for Music Information Retrieval

The International Society for Music Information Retrieval (ISMIR) is an international forum for research on the organization of music-related data. It started in 2000 as an informal group steered by an ad hoc committee, which established a yearly symposium, whence the acronym "ISMIR", which originally stood for International Symposium on Music Information Retrieval. It was turned into a conference in 2002 while retaining the acronym. ISMIR was incorporated in Canada on July 4, 2008.

List of musicology topics

This is a list of musicology topics. Musicology is the scholarly study of music. A person who studies music is a musicologist. The word is used in narrow, intermediate, and broad senses. In the narrow sense, musicology is confined to the music history of Western culture. In the intermediate sense, it includes all relevant cultures and a range of musical forms, styles, genres, and traditions, but tends to be confined to the humanities: a combination of historical musicology, ethnomusicology, and the humanities of systematic musicology (philosophy, theoretical sociology, aesthetics). In the broad sense, it includes all musically relevant disciplines (both humanities and sciences) and all manifestations of music in all cultures, so it also includes all of systematic musicology (including psychology, biology, and computing).

Music Technology Group

The Music Technology Group (MTG) is a research group of the Department of Information and Communication Technologies of the Universitat Pompeu Fabra, Barcelona. It was founded in 1994 by its current director, Xavier Serra, and it specializes in sound and music computing research.

Musicology

Musicology (from Greek, Modern μουσική (mousikē), meaning 'music', and -λογία (-logia), meaning 'study of') is the scholarly analysis and research-based study of music. Musicology departments traditionally belong to the humanities, although music research is often more scientific in focus (psychological, sociological, acoustical, neurological, computational). A scholar who participates in musical research is a musicologist. Traditionally, historical musicology (commonly termed "music history") has been the most prominent sub-discipline of musicology. In the 2010s, historical musicology is one of several large musicology sub-disciplines. Historical musicology, ethnomusicology, and systematic musicology are approximately equal in size. Ethnomusicology is the study of music in its cultural context. Systematic musicology includes music acoustics, the science and technology of acoustical musical instruments, and the musical implications of physiology, psychology, sociology, philosophy and computing. Cognitive musicology is the set of phenomena surrounding the computational modeling of music. In some countries, music education is a prominent sub-field of musicology, while in others it is regarded as a distinct academic field, or one more closely affiliated with teacher education, educational research, and related fields. Like music education, music therapy is a specialized form of applied musicology which is sometimes considered more closely affiliated with health fields, and other times regarded as part of musicology proper.

Pietro Grossi

Pietro Grossi (15 April 1917 in Venice – 2002 in Florence) was an Italian composer, a pioneer of computer music, a visual artist, and a hacker ahead of his time. He began experimenting with electronic techniques in Italy in the early sixties.

Raga

A raga or raag (IAST: rāga; also raaga or ragam; literally "coloring, tingeing, dyeing") is a melodic framework for improvisation akin to a melodic mode in Indian classical music. While the raag is a remarkable and central feature of the classical music tradition, it has no direct translation to concepts in the classical European music tradition. Each raag is an array of melodic structures with musical motifs, considered in the Indian tradition to have the ability to "colour the mind" and affect the emotions of the audience. A raag consists of at least five notes, and each raag provides the musician with a musical framework within which to improvise. The specific notes within a raag can be reordered and improvised by the musician. Raags range from small raags like Bahar and Shahana that are not much more than songs to big raags like Malkauns, Darbari and Yaman, which have great scope for improvisation and for which performances can last over an hour. Raags may change over time, with an example being Marwa, the primary development of which has gone down to the lower octave compared to the traditionally middle octave. Each raag traditionally has an emotional significance and symbolic associations such as with season, time and mood. The raag is considered a means in Indian musical tradition to evoke certain feelings in an audience. Hundreds of raags are recognized in the classical tradition, of which about 30 are common. Each raag, state Dorothea E. Hast and others, has its "own unique melodic personality". There are two main classical music traditions, Hindustani (North Indian) and Carnatic (South Indian), and the concept of raag is shared by both. Raags are also found in Sikh traditions such as in the Guru Granth Sahib, the primary scripture of Sikhism. Similarly it is a part of the qawwali tradition found in Sufi Islamic communities of South Asia. Some popular Indian film songs and ghazals use ragas in their compositions.

Sound and Music Computing Conference

The Sound and Music Computing (SMC) Conference is the forum for international exchanges around the core interdisciplinary topics of Sound and Music Computing. The conference is held annually to facilitate the exchange of ideas in this field.

Sound and music computing

Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.

Tala (music)

A Tala (IAST tāla), sometimes spelled Taal or Tal, literally means a "clap, tapping one's hand on one's arm, a musical measure". It is the term used in Indian classical music to refer to musical meter, that is, any rhythmic beat or strike that measures musical time. The measure is typically established by hand clapping, waving, touching fingers on the thigh or the other hand, verbally, striking of small cymbals, or a percussion instrument in the Indian subcontinental traditions. Along with raga, which forms the fabric of a melodic structure, the tala forms the life cycle and thereby constitutes one of the two foundational elements of Indian music. Tala is an ancient music concept traceable to Vedic era texts of Hinduism, such as the Samaveda and methods for singing the Vedic hymns. The music traditions of North and South India, particularly the raga and tala systems, were not considered distinct until about the 16th century. From then on, during the tumultuous period of Islamic rule of the Indian subcontinent, the traditions separated and evolved into distinct forms. The tala system of the north is called Hindustani, while that of the south is called Carnatic. However, the tala system between them continues to have more common features than differences. Tala in the Indian tradition embraces the time dimension of music, the means by which musical rhythm and form were guided and expressed. While a tala carries the musical meter, it does not necessarily imply a regularly recurring pattern. In the major classical Indian music traditions, the beats are hierarchically arranged based on how the music piece is to be performed. The most widely used tala in the South Indian system is adi tala. In the North Indian system, the most common tala is teental. Tala has other contextual meanings in ancient Sanskrit texts of Hinduism. For example, it means trochee in Sanskrit prosody.

Xavier Serra

Xavier Serra (born September 10, 1959) is a researcher in the field of Sound and Music Computing and professor at the Pompeu Fabra University (UPF) in Barcelona. He is the founder and director of the Music Technology Group at the UPF.
