Sound

In physics, sound is a vibration that typically propagates as an audible wave of pressure through a transmission medium such as a gas, liquid or solid.

In human physiology and psychology, sound is the reception of such waves and their perception by the brain.[1] Humans can only hear sound waves as distinct pitches when the frequency lies between about 20 Hz and 20 kHz. Sound waves above 20 kHz are known as ultrasound and are not perceptible by humans. Sound waves below 20 Hz are known as infrasound. Different animal species have varying hearing ranges.

A drum produces sound via a vibrating membrane.

Acoustics

Acoustics is the interdisciplinary science that deals with the study of mechanical waves in gases, liquids, and solids including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustical engineering may be called an acoustical engineer.[2] An audio engineer, on the other hand, is concerned with the recording, manipulation, mixing, and reproduction of sound.

Applications of acoustics are found in almost all aspects of modern society; subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electro-acoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration.[3]

Definition

Sound is defined as "(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a)."[4] Sound can be viewed as a wave motion in air or other elastic media. In this case, sound is a stimulus. Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound. In this case, sound is a sensation.

Physics of sound

Experiment using two tuning forks oscillating at the same frequency. One of the forks is struck with a rubberized mallet. Although only the first tuning fork has been hit, the second fork is visibly excited because the periodic changes in the pressure and density of the air produced by the struck fork create an acoustic resonance between the forks. However, if a piece of metal is placed on a prong, the effect dampens and the excitations become less and less pronounced, as resonance is no longer achieved as effectively.

Sound can propagate through a medium such as air, water and solids as longitudinal waves and also as a transverse wave in solids (see Longitudinal and transverse waves, below). The sound waves are generated by a sound source, such as the vibrating diaphragm of a stereo speaker. The sound source creates vibrations in the surrounding medium. As the source continues to vibrate the medium, the vibrations propagate away from the source at the speed of sound, thus forming the sound wave. At a fixed distance from the source, the pressure, velocity, and displacement of the medium vary in time. At an instant in time, the pressure, velocity, and displacement vary in space. Note that the particles of the medium do not travel with the sound wave. This is intuitively obvious for a solid, and the same is true for liquids and gases (that is, the vibrations of particles in the gas or liquid transport the vibrations, while the average position of the particles over time does not change). During propagation, waves can be reflected, refracted, or attenuated by the medium.[5]

The behavior of sound propagation is generally affected by three things:

  • A complex relationship between the density and pressure of the medium. This relationship, affected by temperature, determines the speed of sound within the medium.
  • Motion of the medium itself. If the medium is moving, this movement may increase or decrease the absolute speed of the sound wave depending on the direction of the movement. For example, sound moving through wind will have its speed of propagation increased by the speed of the wind if the sound and wind are moving in the same direction. If the sound and wind are moving in opposite directions, the speed of the sound wave will be decreased by the speed of the wind.
  • The viscosity of the medium. Medium viscosity determines the rate at which sound is attenuated. For many media, such as air or water, attenuation due to viscosity is negligible.

When sound is moving through a medium that does not have constant physical properties, it may be refracted (either dispersed or focused).[5]

Spherical compression (longitudinal) waves

The mechanical vibrations that can be interpreted as sound can travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through a vacuum.[6][7]

Longitudinal and transverse waves

Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves. It requires a medium to propagate. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress at right angles to the direction of propagation.

Sound waves may be "viewed" using parabolic mirrors and objects that produce sound.[8]

The energy carried by an oscillating sound wave converts back and forth between the potential energy of the extra compression (in case of longitudinal waves) or lateral displacement strain (in case of transverse waves) of the matter, and the kinetic energy of the displacement velocity of particles of the medium.

Longitudinal plane wave.
Transverse plane wave.

Sound wave properties and characteristics

A 'pressure over time' graph of a 20 ms recording of a clarinet tone demonstrates the two fundamental elements of sound: Pressure and Time.
Sounds can be represented as a mixture of their component sinusoidal waves of different frequencies. The bottom waves have higher frequencies than those above. The horizontal axis represents time.

Although there are many complexities relating to the transmission of sounds, at the point of reception (i.e. the ears), sound is readily dividable into two simple elements: pressure and time. These fundamental elements form the basis of all sound waves. They can be used to describe, in absolute terms, every sound we hear.

In order to understand a sound more fully, a complex wave such as the one shown on a blue background to the right of this text is usually separated into its component parts, which are a combination of various sound wave frequencies (and noise).[9][10][11]

Sound waves are often simplified to a description in terms of sinusoidal plane waves, which are characterized by these generic properties:

  • Frequency, or its inverse, wavelength
  • Amplitude, sound pressure or intensity
  • Speed of sound
  • Direction

Sound that is perceptible by humans has frequencies from about 20 Hz to 20,000 Hz. In air at standard temperature and pressure, the corresponding wavelengths of sound waves range from 17 m to 17 mm. Sometimes speed and direction are combined as a velocity vector; wave number and direction are combined as a wave vector.
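
The relationship between these quantities can be checked with a short calculation. The following Python sketch (assuming c ≈ 343 m/s for air at 20 °C) computes the wavelength λ = c/f at the limits of the human hearing range quoted above:

    # Wavelength of audible sound in air, using lambda = c / f.
    # The speed value is an assumption (approximately 343 m/s at 20 degrees C).
    SPEED_OF_SOUND_AIR = 343.0  # m/s

    def wavelength(frequency_hz, speed=SPEED_OF_SOUND_AIR):
        """Return the wavelength in metres for a given frequency in hertz."""
        return speed / frequency_hz

    for f in (20.0, 1_000.0, 20_000.0):
        print(f"{f:8.0f} Hz -> {wavelength(f):.4f} m")
    # 20 Hz gives roughly 17 m and 20 kHz roughly 17 mm, matching the range above.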

Transverse waves, also known as shear waves, have an additional property, polarization, which is not a characteristic of longitudinal sound waves.

Speed of sound

U.S. Navy F/A-18 approaching the speed of sound. The white halo is formed by condensed water droplets thought to result from a drop in air pressure around the aircraft (see Prandtl–Glauert singularity).[12]

The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. The first significant effort towards measurement of the speed of sound was made by Isaac Newton. He believed the speed of sound in a particular substance was equal to the square root of the pressure acting on it divided by its density:

c = √(p/ρ)

This was later proven wrong when it was found to give an incorrect value for the speed. The French mathematician Laplace corrected the formula by deducing that the phenomenon of sound travelling is not isothermal, as believed by Newton, but adiabatic. He added another factor to the equation, gamma (γ), multiplying √(p/ρ) by √γ and thus coming up with the equation c = √(γ·p/ρ). Since K = γ·p, the final equation came out to be c = √(K/ρ), which is also known as the Newton–Laplace equation. In this equation, K is the elastic bulk modulus, c is the velocity of sound, and ρ is the density. Thus, the speed of sound is proportional to the square root of the ratio of the bulk modulus of the medium to its density.
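
As a rough numerical check of the Newton–Laplace relation, the following Python sketch evaluates c = √(K/ρ) for two illustrative media; the bulk modulus and density values are assumptions quoted only to typical precision:

    import math

    def speed_of_sound(bulk_modulus_pa, density_kg_m3):
        """Newton-Laplace relation: c = sqrt(K / rho)."""
        return math.sqrt(bulk_modulus_pa / density_kg_m3)

    # Air at 20 degrees C: K = gamma * p ~ 1.4 * 101325 Pa, rho ~ 1.204 kg/m^3
    print(round(speed_of_sound(1.4 * 101_325, 1.204)))  # ~343 m/s
    print(round(speed_of_sound(2.2e9, 998)))            # fresh water, ~1,485 m/s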

Those physical properties and the speed of sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In 20 °C (68 °F) air at sea level, the speed of sound is approximately 343 m/s (1,230 km/h; 767 mph) using the formula v [m/s] = 331 + 0.6 T [°C]. In fresh water, also at 20 °C, the speed of sound is approximately 1,482 m/s (5,335 km/h; 3,315 mph). In steel, the speed of sound is about 5,960 m/s (21,460 km/h; 13,330 mph). The speed of sound is also slightly sensitive, being subject to a second-order anharmonic effect, to the sound amplitude, which means there are non-linear propagation effects, such as the production of harmonics and mixed tones not present in the original sound (see parametric array).
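
A minimal sketch of the linear approximation quoted above, v [m/s] ≈ 331 + 0.6 T [°C], which is only a reasonable estimate for air near ordinary ambient temperatures:

    def speed_in_air(temperature_c):
        """Approximate speed of sound in air: v = 331 + 0.6 * T (T in degrees C)."""
        return 331.0 + 0.6 * temperature_c

    for t in (0, 20, 35):
        print(f"{t:3d} degC -> {speed_in_air(t):.0f} m/s")
    # 20 degC gives the commonly quoted 343 m/s.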

If relativistic effects are important, the speed of sound is calculated from the relativistic Euler equations.

Perception of sound

A distinct use of the term sound from its use in physics is that in physiology and psychology, where the term refers to the subject of perception by the brain. The field of psychoacoustics is dedicated to such studies. Webster's 1936 dictionary defined sound as: "1. The sensation of hearing, that which is heard; specif.: a. Psychophysics. Sensation due to stimulation of the auditory nerves and auditory centers of the brain, usually by vibrations transmitted in a material medium, commonly air, affecting the organ of hearing. b. Physics. Vibrational energy which occasions such a sensation. Sound is propagated by progressive longitudinal vibratory disturbances (sound waves)." [13] This means that the correct response to the question "if a tree falls in the forest with no one to hear it fall, does it make a sound?" is both "yes" and "no", depending on whether it is answered using the physical or the psychophysical definition, respectively.

The physical reception of sound in any hearing organism is limited to a range of frequencies. Humans normally hear sound frequencies between approximately 20 Hz and 20,000 Hz (20 kHz).[14]:382 The upper limit decreases with age.[14]:249 Sometimes sound refers only to those vibrations with frequencies that are within the hearing range for humans,[15] or sometimes it relates to a particular animal. Other species have different ranges of hearing. For example, dogs can perceive vibrations higher than 20 kHz.

As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound.

Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal. However, in sound perception it can often be used to identify the source of a sound and is an important component of timbre perception (see Timbre, below).

Soundscape is the component of the acoustic environment that can be perceived by humans. The acoustic environment is the combination of all sounds (whether audible to humans or not) within a given area as modified by the environment and understood by people, in the context of the surrounding environment.

There are, historically, six experimentally separable ways in which sound waves are analysed. They are: pitch, duration, loudness, timbre, sonic texture and spatial location.[16] Some of these terms have a standardised definition (for instance in the ANSI Acoustical Terminology ANSI/ASA S1.1-2013). More recent approaches have also considered temporal envelope and temporal fine structure as perceptually relevant analyses.[17][18][19]

Pitch

Figure 1. Pitch perception

Pitch is perceived as how "low" or "high" a sound is and represents the cyclic, repetitive nature of the vibrations that make up sound. For simple sounds, pitch relates to the frequency of the slowest vibration in the sound (called the fundamental harmonic). In the case of complex sounds, pitch perception can vary. Sometimes individuals identify different pitches for the same sound, based on their personal experience of particular sound patterns. Selection of a particular pitch is determined by pre-conscious examination of vibrations, including their frequencies and the balance between them. Specific attention is given to recognising potential harmonics.[20][21] Every sound is placed on a pitch continuum from low to high. For example: white noise (random noise spread evenly across all frequencies) sounds higher in pitch than pink noise (random noise spread evenly across octaves) as white noise has more high frequency content. Figure 1 shows an example of pitch recognition. During the listening process, each sound is analysed for a repeating pattern (See Figure 1: orange arrows) and the results forwarded to the auditory cortex as a single pitch of a certain height (octave) and chroma (note name).

Duration

Figure 2. Duration perception

Duration is perceived as how "long" or "short" a sound is and relates to onset and offset signals created by nerve responses to sounds. The duration of a sound usually lasts from the time the sound is first noticed until the sound is identified as having changed or ceased.[22] Sometimes this is not directly related to the physical duration of a sound. For example, in a noisy environment, gapped sounds (sounds that stop and start) can sound as if they are continuous because the offset messages are missed owing to disruptions from noises in the same general bandwidth.[23] This can be of great benefit in understanding distorted messages such as radio signals that suffer from interference, as (owing to this effect) the message is heard as if it was continuous. Figure 2 gives an example of duration identification. When a new sound is noticed (see Figure 2, Green arrows), a sound onset message is sent to the auditory cortex. When the repeating pattern is missed, a sound offset message is sent.

Loudness

Loudness is perceived as how "loud" or "soft" a sound is and relates to the totalled number of auditory nerve stimulations over short cyclic time periods, most likely over the duration of theta wave cycles.[24][25][26] This means that at short durations, a very short sound can sound softer than a longer sound even though they are presented at the same intensity level. Past around 200 ms this is no longer the case and the duration of the sound no longer affects the apparent loudness of the sound. Figure 3 gives an impression of how loudness information is summed over a period of about 200 ms before being sent to the auditory cortex. Louder signals create a greater 'push' on the Basilar membrane and thus stimulate more nerves, creating a stronger loudness signal. A more complex signal also creates more nerve firings and so sounds louder (for the same wave amplitude) than a simpler sound, such as a sine wave.

Timbre

Timbre is perceived as the quality of different sounds (e.g. the thud of a fallen rock, the whir of a drill, the tone of a musical instrument or the quality of a voice) and represents the pre-conscious allocation of a sonic identity to a sound (e.g. "it's an oboe!"). This identity is based on information gained from frequency transients, noisiness, unsteadiness, perceived pitch and the spread and intensity of overtones in the sound over an extended time frame.[9][10][11] The way a sound changes over time (see figure 4) provides most of the information for timbre identification. Even though a small section of the wave form from each instrument looks very similar (see the expanded sections indicated by the orange arrows in figure 4), differences in changes over time between the clarinet and the piano are evident in both loudness and harmonic content. Less noticeable are the different noises heard, such as air hisses for the clarinet and hammer strikes for the piano.

Figure 3. Loudness perception
Figure 4. Timbre perception

Sonic texture

Sonic texture relates to the number of sound sources and the interaction between them.[27][28] The word 'texture', in this context, relates to the cognitive separation of auditory objects.[29] In music, texture is often referred to as the difference between unison, polyphony and homophony, but it can also relate (for example) to a busy cafe; a sound which might be referred to as 'cacophony'. However, texture refers to more than this. The texture of an orchestral piece is very different to the texture of a brass quintet because of the different numbers of players. The texture of a market place is very different to that of a school hall because of the differences in the various sound sources.

Spatial location

Spatial location (see: Sound localization) represents the cognitive placement of a sound in an environmental context; including the placement of a sound on both the horizontal and vertical plane, the distance from the sound source and the characteristics of the sonic environment.[29][30] In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification. It is the main reason why we can pick out the sound of an oboe in an orchestra and the words of a single person at a cocktail party.

Sound pressure level

Sound measurements
Characteristic
Symbols
 Sound pressure p, SPL, LPA
 Particle velocity v, SVL
 Particle displacement δ
 Sound intensity I, SIL
 Sound power P, SWL, LWA
 Sound energy W
 Sound energy density w
 Sound exposure E, SEL
 Acoustic impedance Z
 Speed of sound c
 Audio frequency AF
 Transmission loss TL

Sound pressure is the difference, in a given medium, between average local pressure and the pressure in the sound wave. A square of this difference (i.e., a square of the deviation from the equilibrium pressure) is usually averaged over time and/or space, and a square root of this average provides a root mean square (RMS) value. For example, 1 Pa RMS sound pressure (94 dBSPL) in atmospheric air implies that the actual pressure in the sound wave oscillates between (1 atm − √2 Pa) and (1 atm + √2 Pa), that is between 101323.6 and 101326.4 Pa. As the human ear can detect sounds with a wide range of amplitudes, sound pressure is often measured as a level on a logarithmic decibel scale. The sound pressure level (SPL) or Lp is defined as

Lp = 20 log₁₀(p/p₀) dB

where p is the root-mean-square sound pressure and p₀ is a reference sound pressure. Commonly used reference sound pressures, defined in the standard ANSI S1.1-1994, are 20 µPa in air and 1 µPa in water. Without a specified reference sound pressure, a value expressed in decibels cannot represent a sound pressure level.
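
The decibel relationship can be illustrated with a short calculation. The Python sketch below assumes the airborne reference pressure of 20 µPa mentioned above and evaluates Lp = 20 log₁₀(p/p₀):

    import math

    P_REF_AIR = 20e-6  # Pa, standard reference sound pressure in air

    def sound_pressure_level(p_rms_pa, p_ref_pa=P_REF_AIR):
        """Sound pressure level in decibels: Lp = 20 * log10(p / p0)."""
        return 20.0 * math.log10(p_rms_pa / p_ref_pa)

    print(f"{sound_pressure_level(1.0):.1f} dB SPL")    # 1 Pa RMS -> ~94 dB, as above
    print(f"{sound_pressure_level(20e-6):.1f} dB SPL")  # the reference pressure itself -> 0 dB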

Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that the measured level matches perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to noise and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels.

Ultrasound

Approximate frequency ranges corresponding to ultrasound, with rough guide of some applications

Ultrasound is sound waves with frequencies higher than the upper audible limit of human hearing. Ultrasound is not different from "normal" (audible) sound in its physical properties, except in that humans cannot hear it. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.

Ultrasound is commonly used for medical diagnostics such as sonograms.

Infrasound

Infrasound is sound waves with frequencies lower than 20 Hz. Although sounds of such low frequency are too low for humans to hear, whales, elephants and other animals can detect infrasound and use it to communicate. It can be used to detect volcanic eruptions and is used in some types of music.

See also

Sound sources
Sound measurement
General

References

  1. ^ Fundamentals of Telephone Communication Systems. Western Electrical Company. 1969. p. 2.1.
  2. ^ ANSI S1.1-1994. American National Standard: Acoustic Terminology. Sec 3.03.
  3. ^ Acoustical Society of America. "PACS 2010 Regular Edition—Acoustics Appendix". Archived from the original on 14 May 2013. Retrieved 22 May 2013.
  4. ^ ANSI/ASA S1.1-2013
  5. ^ a b "The Propagation of sound". Archived from the original on 30 April 2015. Retrieved 26 June 2015.
  6. ^ Is there sound in space? Archived 2017-10-16 at the Wayback Machine Northwestern University.
  7. ^ Can you hear sounds in space? (Beginner) Archived 2017-06-18 at the Wayback Machine. Cornell University.
  8. ^ "What Does Sound Look Like?". NPR. YouTube. Archived from the original on 10 April 2014. Retrieved 9 April 2014.
  9. ^ a b Handel, S. (1995). Timbre perception and auditory object identification. Hearing, 425-461.
  10. ^ a b Kendall, R. A. (1986). The role of acoustic signal partitions in listener categorization of musical phrases. Music Perception, 185-213.
  11. ^ a b Matthews, M. (1999). Introduction to timbre. In P. R. Cook (Ed.), Music, cognition, and computerized sound: An introduction to psychoacoustic (pp. 79-88). Cambridge, Massachusetts: The MIT press.
  12. ^ Nemiroff, R.; Bonnell, J., eds. (19 August 2007). "A Sonic Boom". Astronomy Picture of the Day. NASA. Retrieved 26 June 2015.
  13. ^ Webster, Noah (1936). Sound. In Webster's Collegiate Dictionary (Fifth ed.). Cambridge, Mass.: The Riverside Press. pp. 950–951.
  14. ^ a b Olson, Harry F. (1967). Music, Physics and Engineering. p. 249. ISBN 9780486217697.
  15. ^ "The American Heritage Dictionary of the English Language" (Fourth ed.). Houghton Mifflin Company. 2000. Archived from the original on June 25, 2008. Retrieved May 20, 2010.
  16. ^ Burton, R. L. (2015). The elements of music: what are they, and who cares? In J. Rosevear & S. Harding. (Eds.), ASME XXth National Conference proceedings. Paper presented at: Music: Educating for life: ASME XXth National Conference (pp.22 - 28), Parkville, Victoria: The Australian Society for Music Education Inc.
  17. ^ Viemeister, Neal F.; Plack, Christopher J. (1993), "Time Analysis", Springer Handbook of Auditory Research, Springer New York, pp. 116–154, doi:10.1007/978-1-4612-2728-1_4, ISBN 9781461276449
  18. ^ Rosen, Stuart (1992-06-29). "Temporal information in speech: acoustic, auditory and linguistic aspects". Phil. Trans. R. Soc. Lond. B. 336 (1278): 367–373. doi:10.1098/rstb.1992.0070. ISSN 0962-8436. PMID 1354376.
  19. ^ Moore, Brian C. J. (2008-10-15). "The Role of Temporal Fine Structure Processing in Pitch Perception, Masking, and Speech Perception for Normal-Hearing and Hearing-Impaired People". Journal of the Association for Research in Otolaryngology. 9 (4): 399–406. doi:10.1007/s10162-008-0143-x. ISSN 1525-3961. PMC 2580810. PMID 18855069.
  20. ^ De Cheveigne, A. (2005). Pitch perception models. Pitch, 169-233.
  21. ^ Krumbholz, K.; Patterson, R.; Seither-Preisler, A.; Lammertmann, C.; Lütkenhöner, B. (2003). "Neuromagnetic evidence for a pitch processing center in Heschl's gyrus". Cerebral Cortex. 13 (7): 765–772. doi:10.1093/cercor/13.7.765.
  22. ^ Jones, S.; Longe, O.; Pato, M. V. (1998). "Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of streaming?". Electroencephalography and Clinical Neurophysiology. 108 (2): 131–142. doi:10.1016/s0168-5597(97)00077-4.
  23. ^ Nishihara, M.; Inui, K.; Morita, T.; Kodaira, M.; Mochizuki, H.; Otsuru, N.; Kakigi, R. (2014). "Echoic memory: Investigation of its temporal resolution by auditory offset cortical responses". PLOS ONE. 9 (8): e106553. Bibcode:2014PLoSO...9j6553N. doi:10.1371/journal.pone.0106553. PMC 4149571. PMID 25170608.
  24. ^ Corwin, J. (2009), The auditory system (PDF), archived (PDF) from the original on 2013-06-28, retrieved 2013-04-06
  25. ^ Massaro, D. W. (1972). "Preperceptual images, processing time, and perceptual units in auditory perception". Psychological Review. 79 (2): 124–145. CiteSeerX 10.1.1.468.6614. doi:10.1037/h0032264.
  26. ^ Zwislocki, J. J. (1969). "Temporal summation of loudness: an analysis". The Journal of the Acoustical Society of America. 46 (2B): 431–441. Bibcode:1969ASAJ...46..431Z. doi:10.1121/1.1911708.
  27. ^ Cohen, D.; Dubnov, S. (1997), Gestalt phenomena in musical texture, archived (PDF) from the original on 2015-11-21, retrieved 2015-11-19
  28. ^ Kamien, R. (1980). Music: an appreciation. New York: McGraw-Hill. p. 62
  29. ^ a b Cariani, P., & Micheyl, C. (2012). Toward a theory of information processing in auditory cortex. In The Human Auditory Cortex (pp. 351-390): Springer.
  30. ^ Levitin, D. J. (1999). Memory for musical attributes. In P. R. Cook (Ed.), Music, cognition, and computerized sound: An introduction to psychoacoustics (pp. 105-127). Cambridge, Massachusetts: The MIT press.

External links

Amazon (company)

Amazon.com, Inc., doing business as Amazon, is a multinational technology company focusing on e-commerce, cloud computing, and artificial intelligence in Seattle, Washington. It is one of the Big Four or "Four Horsemen" of technology along with Google, Apple and Facebook due to its market capitalization, disruptive innovation, brand equity and hyper-competitive application process.

Amazon is the most valuable public company in the world ahead of Apple and Alphabet. It is the largest e-commerce marketplace and cloud computing platform in the world as measured by revenue and market capitalization. Amazon.com was founded by Jeff Bezos on July 5, 1994, and started as an online bookstore but later diversified to sell video downloads/streaming, MP3 downloads/streaming, audiobook downloads/streaming, software, video games, electronics, apparel, furniture, food, toys, and jewelry. The company also owns a publishing arm, Amazon Publishing, and a film and television studio, Amazon Studios; produces consumer electronics lines including Kindle e-readers, Fire tablets, Fire TV, and Echo devices; and is the world's largest provider of cloud infrastructure services (IaaS and PaaS) through its AWS subsidiary. Amazon has separate retail websites for some countries and also offers international shipping of some of its products to certain other countries. 100 million people subscribe to Amazon Prime.

Amazon is the largest Internet company by revenue in the world and the second largest employer in the United States. In 2015, Amazon surpassed Walmart as the most valuable retailer in the United States by market capitalization. In 2017, Amazon acquired Whole Foods Market for $13.4 billion, which vastly increased Amazon's presence as a brick-and-mortar retailer. The acquisition was interpreted by some as a direct attempt to challenge Walmart's traditional retail stores.

Audio engineer

An audio engineer (also sometimes recording engineer) helps to produce a recording or a live performance, balancing and adjusting sound sources using equalization and audio effects, mixing, reproduction, and reinforcement of sound. Audio engineers work on the "...technical aspect of recording—the placing of microphones, pre-amp knobs, the setting of levels. The physical recording of any project is done by an engineer ... the nuts and bolts." Audio engineering is both a creative hobby and a profession in which musical instruments and technology are used to produce sound for film, radio, television, music, and video games. Audio engineers also set up, sound check and do live sound mixing using a mixing console and a sound reinforcement system for music concerts, theatre, sports games and corporate events.

Alternatively, audio engineer can refer to a scientist or professional engineer who holds an engineering degree and who designs, develops and builds audio or musical technology working under terms such as acoustical engineering, electronic/electrical engineering or (musical) signal processing.

Drum kit

A drum kit — also called a drum set, trap set (a term using a contraction of the word, "contraption"), or simply drums — is a collection of drums and other percussion instruments, typically cymbals, which are set up on stands to be played by a single player, with drumsticks held in both hands, and the feet operating pedals that control the hi-hat cymbal and the beater for the bass drum. A drum kit consists of a mix of drums (categorized classically as membranophones, Hornbostel-Sachs high-level classification 2) and idiophones – most significantly cymbals, but can also include the woodblock and cowbell (classified as Hornbostel-Sachs high-level classification 1). In the 2000s, some kits also include electronic instruments (Hornbostel-Sachs classification 53). Also, both hybrid (mixing acoustic instruments and electronic drums) and entirely electronic kits are used.

A standard modern kit (for a right-handed player), as used in popular music and taught in music schools, contains:

A snare drum, mounted on a stand, placed between the player's knees and played with drum sticks (which may include rutes or brushes)

A bass drum, played by a pedal operated by the right foot, which moves a felt-covered beater

One or more toms, played with sticks or brushes (usually three toms: rack tom 1 and 2, and floor tom)

A hi-hat (two cymbals mounted on a stand), played with the sticks, opened and closed with left foot pedal (it can also produce sound with the foot alone)

One or more cymbals, mounted on stands, played with the sticks.

All of these are classified as non-pitched percussion, allowing the music to be scored using percussion notation, for which a loose semi-standardized form exists for both the drum kit and electronic drums. The drum kit is usually played while seated on a stool known as a throne. While many instruments like the guitar or piano are capable of performing melodies and chords, most drum kits are unable to achieve this as they produce sounds of indeterminate pitch. The drum kit is a part of the standard rhythm section, used in many types of popular and traditional music styles, ranging from rock and pop to blues and jazz. Other standard instruments used in the rhythm section include the piano, electric guitar, electric bass, and keyboards.

Many drummers extend their kits from this basic configuration, adding more drums, more cymbals, and many other instruments including pitched percussion. In some styles of music, particular extensions are normal. For example, rock and heavy metal drummers make use of double bass drums, which can be achieved with either a second bass drum or a remote double foot pedal. Some progressive drummers may include orchestral percussion such as gongs and tubular bells in their rig. Some performers, such as some rockabilly drummers, play small kits that omit elements from the basic setup. Some drum kit players may have other roles in the band, such as providing backup vocals, or less commonly, lead vocals.

Electronic music

Electronic music is music that employs electronic musical instruments, digital instruments and circuitry-based music technology. In general, a distinction can be made between sound produced using electromechanical means (electroacoustic music), and that produced using electronics only. Electromechanical instruments include mechanical elements, such as strings, hammers, and so on, and electric elements, such as magnetic pickups, power amplifiers and loudspeakers. Examples of electromechanical sound producing devices include the telharmonium, Hammond organ, and the electric guitar, which are typically made loud enough for performers and audiences to hear with an instrument amplifier and speaker cabinet. Pure electronic instruments do not have vibrating strings, hammers, or other sound-producing mechanisms. Devices such as the theremin, synthesizer, and computer can produce electronic sounds.

The first electronic devices for performing music were developed at the end of the 19th century, and shortly afterward Italian futurists explored sounds that had not been considered musical. During the 1920s and 1930s, electronic instruments were introduced and the first compositions for electronic instruments were made. By the 1940s, magnetic audio tape allowed musicians to tape sounds and then modify them by changing the tape speed or direction, leading to the development of electroacoustic tape music in the 1940s, in Egypt and France. Musique concrète, created in Paris in 1948, was based on editing together recorded fragments of natural and industrial sounds. Music produced solely from electronic generators was first produced in Germany in 1953. Electronic music was also created in Japan and the United States beginning in the 1950s. An important new development was the advent of computers to compose music. Algorithmic composition with computers was first demonstrated in the 1950s (although algorithmic composition per se without a computer had occurred much earlier, for example Mozart's Musikalisches Würfelspiel).

In the 1960s, live electronics were pioneered in America and Europe, Japanese electronic musical instruments began influencing the music industry, and Jamaican dub music emerged as a form of popular electronic music. In the early 1970s, the monophonic Minimoog synthesizer and Japanese drum machines helped popularize synthesized electronic music.

In the 1970s, electronic music began having a significant influence on popular music, with the adoption of polyphonic synthesizers, electronic drums, drum machines, and turntables, through the emergence of genres such as disco, krautrock, new wave, synth-pop, hip hop and EDM. In the 1980s, electronic music became more dominant in popular music, with a greater reliance on synthesizers, and the adoption of programmable drum machines such as the Roland TR-808 and bass synthesizers such as the TB-303. In the early 1980s, digital technologies for synthesizers including digital synthesizers such as the Yamaha DX7 were popularized, and a group of musicians and music merchants developed the Musical Instrument Digital Interface (MIDI).

Electronically produced music became prevalent in the popular domain by the 1990s, because of the advent of affordable music technology. Contemporary electronic music includes many varieties and ranges from experimental art music to popular forms such as electronic dance music. Today, pop electronic music is most recognizable in its 4/4 form and more connected with the mainstream culture as opposed to its preceding forms which were specialized to niche markets.

Evanescence

Evanescence is an American rock band founded in Little Rock, Arkansas, in 1995 by singer and pianist Amy Lee and guitarist Ben Moody. After recording independent albums, the band released their first full-length album, Fallen, on Wind-up Records in 2003. Fallen sold more than 17 million copies worldwide and helped the band win two Grammy Awards out of seven nominations. A year later, Evanescence released their first live album, Anywhere but Home, which sold more than one million copies worldwide. In 2006, the band released their second studio album, The Open Door, which sold more than five million copies.

The lineup of the group changed several times over the course of the first two studio albums' productions and promotions: David Hodges left in 2002, co-founder Moody left in 2003 (mid-tour), bassist Will Boyd in 2006, followed by guitarist John LeCompt and drummer Rocky Gray in 2007, and Terry Balsamo in 2015. As a result, none of the band's three studio albums feature the same lineup. The latter two changes led to a hiatus, with temporary band members contributing to tour performances.

The band reconvened in June 2009 with a new lineup; their next studio album, Evanescence, was released in 2011. It debuted at the top of the Billboard 200 chart with 127,000 copies in sales. The album also debuted at number one on four other different Billboard charts; the Rock Albums, Digital Albums, Alternative Albums, and the Hard Rock Albums charts. The band spent 2012 on tour in promotion of their latest album with other bands including The Pretty Reckless and Fair to Midland. Troy McLawhorn also became a full-time band member during this time. Following the end of the album's tour cycle in 2012, the band entered another hiatus.

In 2015, Evanescence emerged from hiatus and announced they would resume touring; however, they denied that new Evanescence material was being produced, as Lee was focusing on a solo project instead. In addition, Balsamo left the band and was replaced by Jen Majura. Late 2016 saw additional touring from the band and a statement from Lee that Evanescence would continue. In March 2017, Lee stated Evanescence was working on a fourth album for release later in 2017. Synthesis was released worldwide on November 10, 2017, and marked a stylistic change in the band's sound.

Frequency

Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example: if a newborn baby's heart beats at a frequency of 120 times a minute, its period—the time interval between beats—is half a second (60 seconds divided by 120 beats). Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.
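
A minimal sketch of the reciprocal relationship between frequency and period described above; the heartbeat figure is taken from the example in the text, while the 440 Hz tone is an added illustration:

    def period_seconds(frequency_hz):
        """Period T = 1 / f."""
        return 1.0 / frequency_hz

    heart_rate_hz = 120 / 60               # 120 beats per minute expressed in hertz
    print(period_seconds(heart_rate_hz))   # 0.5 s between beats, as in the example
    print(period_seconds(440.0))           # ~0.00227 s per cycle of a 440 Hz tone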

Guitar

The guitar is a fretted musical instrument that usually has six strings. It is typically played with both hands by strumming or plucking the strings with either a guitar pick or the finger(s)/fingernails of one hand, while simultaneously fretting (pressing the strings against the frets) with the fingers of the other hand. The sound of the vibrating strings is projected either acoustically, by means of the hollow chamber of the guitar (for an acoustic guitar), or through an electrical amplifier and a speaker.

The guitar is a type of chordophone, traditionally constructed from wood and strung with either gut, nylon or steel strings and distinguished from other chordophones by its construction and tuning. The modern guitar was preceded by the gittern, the vihuela, the four-course Renaissance guitar, and the five-course baroque guitar, all of which contributed to the development of the modern six-string instrument.

There are three main types of modern acoustic guitar: the classical guitar (Spanish guitar/nylon-string guitar), the steel-string acoustic guitar, and the archtop guitar, which is sometimes called a "jazz guitar". The tone of an acoustic guitar is produced by the strings' vibration, amplified by the hollow body of the guitar, which acts as a resonating chamber. The classical guitar is often played as a solo instrument using a comprehensive finger-picking technique where each string is plucked individually by the player's fingers, as opposed to being strummed. The term "finger-picking" can also refer to a specific tradition of folk, blues, bluegrass, and country guitar playing in the United States. The acoustic bass guitar is a low-pitched instrument that is one octave below a regular guitar.

Electric guitars, introduced in the 1930s, use an amplifier and a loudspeaker, which both make the sound of the instrument loud enough for the performers and audience to hear and, because the instrument produces an electric signal when played, allow the tone to be electronically manipulated and shaped using an equalizer (e.g., bass and treble tone controls) and a huge variety of electronic effects units, the most commonly used ones being distortion (or "overdrive") and reverb. Early amplified guitars employed a hollow body, but solid wood guitars began to dominate during the 1960s and 1970s, as they are less prone to unwanted acoustic feedback "howls". As with acoustic guitars, there are a number of types of electric guitars, including hollowbody guitars, archtop guitars (used in jazz guitar, blues and rockabilly) and solid-body guitars, which are widely used in rock music.

The loud, amplified sound and sonic power of the electric guitar played through a guitar amp has played a key role in the development of blues and rock music, both as an accompaniment instrument (playing riffs and chords) and performing guitar solos, and in many rock subgenres, notably heavy metal music and punk rock. The electric guitar has had a major influence on popular culture. The guitar is used in a wide variety of musical genres worldwide. It is recognized as a primary instrument in genres such as blues, bluegrass, country, flamenco, folk, jazz, jota, mariachi, metal, punk, reggae, rock, soul, and many forms of pop.

iTunes

iTunes is a media player, media library, Internet radio broadcaster, and mobile device management application developed by Apple Inc. It was announced on January 9, 2001. It is used to play, download, and organize digital multimedia files, including music and video, on personal computers running the macOS and Windows operating systems. Content must be purchased through the iTunes Store, whereas iTunes is the software letting users manage their purchases.

The original and main focus of iTunes is music, with a library offering organization, collection, and storage of users' music collections. It can be used to rip songs from CDs, as well as play content with the use of dynamic, smart playlists. Options for sound optimizations exist, as well as ways to wirelessly share the iTunes library. In 2005, Apple expanded on the core features with video support, later also adding podcasts, e-books, and a section for managing mobile apps for Apple's iOS operating system, the last of which it discontinued in 2017.

The original iPhone smartphone required iTunes for activation and, until the release of iOS 5 in 2011, iTunes was required for installing software updates for the company's iOS devices. Newer iOS devices rely less on the iTunes software, though it can still be used for backup and restoration of phone contents, as well as for the transfer of files between a computer and individual iOS applications. iTunes has received significant criticism for a bloated user experience, with Apple adopting an all-encompassing feature-set in iTunes rather than sticking to its original music-based purpose.

Motown

Motown Records is an American record label owned by Universal Music Group. It was originally founded by Berry Gordy Jr. as Tamla Records on January 12, 1959, and was incorporated as Motown Record Corporation on April 14, 1960. Its name, a portmanteau of motor and town, has become a nickname for Detroit, where the label was originally headquartered.

Motown played an important role in the racial integration of popular music as an African American-owned label that achieved significant crossover success. In the 1960s, Motown and its subsidiary labels (including Tamla Motown, the brand used outside the US) were the most successful proponents of what came to be known as the Motown Sound, a style of soul music with a distinct pop influence. During the 1960s, Motown achieved spectacular success for a small label: 79 records in the top-ten of the Billboard Hot 100 between 1960 and 1969. Following the events of the Detroit Riots of 1967 and the loss of key songwriting/production team Holland-Dozier-Holland the same year over pay disputes, Gordy began relocating Motown to Los Angeles, California. The move was completed in 1972, and Motown later expanded into film and television production, remaining an independent company until 1994, when it was sold to PolyGram before being sold again to MCA Records' successor Universal Music Group when it acquired PolyGram in 1999.

Motown spent much of the 2000s headquartered in New York City as a part of the UMG subsidiaries Universal Motown and Universal Motown Republic Group. From 2011 to 2014, it was a part of The Island Def Jam Music Group division of Universal Music. In 2014, however, UMG announced the dissolution of Island Def Jam, and Motown relocated back to Los Angeles to operate under the Capitol Music Group, now operating out of the landmark Capitol Tower. In 2018, Motown was inducted into the Rhythm & Blues Hall of Fame class at the Charles H. Wright Museum, and Motown legend Martha Reeves received the award for the label.

Piano

The piano is an acoustic, stringed musical instrument invented in Italy by Bartolomeo Cristofori around the year 1700 (the exact year is uncertain), in which the strings are struck by hammers. It is played using a keyboard, which is a row of keys (small levers) that the performer presses down or strikes with the fingers and thumbs of both hands to cause the hammers to strike the strings.

The word piano is a shortened form of pianoforte, the Italian term for the early 1700s versions of the instrument, which in turn derives from gravicembalo col piano e forte and fortepiano. The Italian musical terms piano and forte indicate "soft" and "loud" respectively, in this context referring to the variations in volume (i.e., loudness) produced in response to a pianist's touch or pressure on the keys: the greater the velocity of a key press, the greater the force of the hammer hitting the strings, and the louder the sound of the note produced and the stronger the attack. The name was created as a contrast to harpsichord, a musical instrument that doesn't allow variation in volume. The first fortepianos in the 1700s had a quieter sound and smaller dynamic range.

An acoustic piano usually has a protective wooden case surrounding the soundboard and metal strings, which are strung under great tension on a heavy metal frame. Pressing one or more keys on the piano's keyboard causes a padded hammer (typically padded with firm felt) to strike the strings. The hammer rebounds from the strings, and the strings continue to vibrate at their resonant frequency. These vibrations are transmitted through a bridge to a soundboard that amplifies by more efficiently coupling the acoustic energy to the air. When the key is released, a damper stops the strings' vibration, ending the sound. Notes can be sustained, even when the keys are released by the fingers and thumbs, by the use of pedals at the base of the instrument. The sustain pedal enables pianists to play musical passages that would otherwise be impossible, such as sounding a 10-note chord in the lower register and then, while this chord is being continued with the sustain pedal, shifting both hands to the treble range to play a melody and arpeggios over the top of this sustained chord. Unlike the pipe organ and harpsichord, two major keyboard instruments widely used before the piano, the piano allows gradations of volume and tone according to how forcefully a performer presses or strikes the keys.

Most modern pianos have a row of 88 black and white keys, 52 white keys for the notes of the C major scale (C, D, E, F, G, A and B) and 36 shorter black keys, which are raised above the white keys, and set further back on the keyboard. This means that the piano can play 88 different pitches (or "notes"), going from the deepest bass range to the highest treble. The black keys are for the "accidentals" (F♯/G♭, G♯/A♭, A♯/B♭, C♯/D♭, and D♯/E♭), which are needed to play in all twelve keys. More rarely, some pianos have additional keys (which require additional strings). Most notes have three strings, except for the bass, which graduates from one to two. The strings are sounded when keys are pressed or struck, and silenced by dampers when the hands are lifted from the keyboard. Although an acoustic piano has strings, it is usually classified as a percussion instrument rather than as a stringed instrument, because the strings are struck rather than plucked (as with a harpsichord or spinet); in the Hornbostel–Sachs system of instrument classification, pianos are considered chordophones. There are two main types of piano: the grand piano and the upright piano. The grand piano is used for Classical solos, chamber music, and art song, and it is often used in jazz and pop concerts. The upright piano, which is more compact, is the most popular type, as it is a better size for use in private homes for domestic music-making and practice.

During the 1800s, influenced by the musical trends of the Romantic music era, innovations such as the cast iron frame (which allowed much greater string tensions) and aliquot stringing gave grand pianos a more powerful sound, with a longer sustain and richer tone. In the nineteenth century, a family's piano played the same role that a radio or phonograph played in the twentieth century; when a nineteenth-century family wanted to hear a newly published musical piece or symphony, they could hear it by having a family member play it on the piano. During the nineteenth century, music publishers produced many musical works in arrangements for piano, so that music lovers could play and hear the popular pieces of the day in their home. The piano is widely employed in classical, jazz, traditional and popular music for solo and ensemble performances, accompaniment, and for composing, songwriting and rehearsals. Although the piano is very heavy and thus not portable and is expensive (in comparison with other widely used accompaniment instruments, such as the acoustic guitar), its musical versatility (i.e., its wide pitch range, ability to play chords with up to 10 notes, louder or softer notes and two or more independent musical lines at the same time), the large number of musicians and amateurs trained in playing it, and its wide availability in performance venues, schools and rehearsal spaces have made it one of the Western world's most familiar musical instruments. With technological advances, amplified electric pianos (1929), electronic pianos (1970s), and digital pianos (1980s) have also been developed. The electric piano became a popular instrument in the 1960s and 1970s genres of jazz fusion, funk music and rock music.

Record producer

A record producer or music producer oversees and manages the sound recording and production of a band or performer's music, which may range from recording one song to recording a lengthy concept album. A producer has many, varying roles during the recording process. They may gather musical ideas for the project, collaborate with the artists to select cover tunes or original songs by the artist/group, work with artists and help them to improve their songs, lyrics or arrangements.

A producer may also:

Select session musicians to play rhythm section accompaniment parts or solos

Co-write.

Propose changes to the song arrangements, and

Coach the singers and musicians in the studio.

The producer typically supervises the entire process from preproduction, through to the sound recording and mixing stages, and, in some cases, all the way to the audio mastering stage. The producer may perform these roles themselves, or help select the engineer, and provide suggestions to the engineer. The producer may also pay session musicians and engineers and ensure that the entire project is completed within the record label's budget.

Sound film

A sound film is a motion picture with synchronized sound, or sound technologically coupled to image, as opposed to a silent film. The first known public exhibition of projected sound films took place in Paris in 1900, but decades passed before sound motion pictures were made commercially practical. Reliable synchronization was difficult to achieve with the early sound-on-disc systems, and amplification and recording quality were also inadequate. Innovations in sound-on-film led to the first commercial screening of short motion pictures using the technology, which took place in 1923.

The primary steps in the commercialization of sound cinema were taken in the mid- to late 1920s. At first, the sound films which included synchronized dialogue, known as "talking pictures", or "talkies", were exclusively shorts. The earliest feature-length movies with recorded sound included only music and effects. The first feature film originally presented as a talkie was The Jazz Singer, released in October 1927. A major hit, it was made with Vitaphone, which was at the time the leading brand of sound-on-disc technology. Sound-on-film, however, would soon become the standard for talking pictures.

By the early 1930s, the talkies were a global phenomenon. In the United States, they helped secure Hollywood's position as one of the world's most powerful cultural/commercial centers of influence (see Cinema of the United States). In Europe (and, to a lesser degree, elsewhere), the new development was treated with suspicion by many filmmakers and critics, who worried that a focus on dialogue would subvert the unique aesthetic virtues of soundless cinema. In Japan, where the popular film tradition integrated silent movie and live vocal performance, talking pictures were slow to take root. Conversely, in India, sound was the transformative element that led to the rapid expansion of the nation's film industry.

Sound recording and reproduction

Sound recording and reproduction is an electrical, mechanical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording.

Acoustic analog recording is achieved by a microphone diaphragm that senses changes in atmospheric pressure caused by acoustic sound waves and records them as a mechanical representation of the sound waves on a medium such as a phonograph record (in which a stylus cuts grooves on a record). In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is then converted to a varying magnetic field by an electromagnet, which makes a representation of the sound as magnetized areas on a plastic tape with a magnetic coating on it. Analog sound reproduction is the reverse process, with a bigger loudspeaker diaphragm causing changes to atmospheric pressure to form acoustic sound waves.

Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of sampling. This lets the audio data be stored and transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers (zeros and ones) representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. A digital audio signal must be reconverted to analog form during playback before it is amplified and connected to a loudspeaker to produce sound.
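
A minimal sketch of the sampling idea described above, with assumed parameters (a 1 kHz sine tone, an 8 kHz sample rate, and 16-bit quantization); real recording systems typically use much higher rates such as 44.1 kHz:

    import math

    SAMPLE_RATE = 8_000   # samples per second (assumed for illustration)
    FREQUENCY = 1_000.0   # Hz, the tone being "recorded"
    AMPLITUDE = 0.5       # fraction of full scale

    # Quantize each sample to a signed 16-bit integer, i.e. the "series of
    # binary numbers" described above.
    samples = [
        round(AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE) * 32767)
        for n in range(SAMPLE_RATE // 1000)  # first millisecond of audio
    ]
    print(samples)  # eight values describing one cycle of the tone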

Prior to the development of sound recording, there were mechanical systems, such as wind-up music boxes and, later, player pianos, for encoding and reproducing instrumental music.

Soundtrack

A soundtrack, also written sound track, can be recorded music accompanying and synchronized to the images of a motion picture, book, television program, or video game; a commercially released soundtrack album of music as featured in the soundtrack of a film, video, or television presentation; or the physical area of a film that contains the synchronized recorded sound.

Speed of sound

The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. At 20 °C (68 °F), the speed of sound in air is about 343 metres per second (1,234.8 km/h; 1,125 ft/s; 767 mph; 667 kn), or a kilometre in 2.9 s or a mile in 4.7 s. It depends strongly on temperature, and also varies by several metres per second depending on which gases make up the medium.

The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.
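
For dry air treated as an ideal gas, this temperature dependence follows the standard relation c = √(γRT/M), where γ is the adiabatic index, R the universal gas constant, T the absolute temperature, and M the molar mass. A minimal sketch, using textbook values for dry air (the numbers are assumptions for illustration, not measurements):

    # Estimate the speed of sound in dry air from c = sqrt(gamma * R * T / M).
    import math

    GAMMA = 1.4          # adiabatic index of dry air (assumed)
    R = 8.314            # universal gas constant, J/(mol*K)
    M = 0.0289647        # molar mass of dry air, kg/mol (assumed)

    def speed_of_sound(temp_celsius):
        T = temp_celsius + 273.15                # absolute temperature in kelvin
        return math.sqrt(GAMMA * R * T / M)      # metres per second

    print(round(speed_of_sound(20.0), 1))        # roughly 343 m/s at 20 degrees C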

In common everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: sound travels most slowly in gases, faster in liquids, and faster still in solids. For example, as noted above, sound travels at 343 m/s in air; it travels at 1,480 m/s in water (4.3 times as fast as in air) and at 5,120 m/s in iron (about 15 times as fast as in air). In an exceptionally stiff material such as diamond, sound travels at 12,000 metres per second (27,000 mph), about 35 times as fast as in air, which is around the maximum speed that sound will travel under normal conditions.

Sound waves in solids are composed of compression waves (just as in gases and liquids) and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds from compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus and density. The speed of shear waves is determined only by the solid material's shear modulus and density.
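
For an isotropic solid with bulk modulus K, shear modulus G, and density ρ, the standard relations are v_p = √((K + 4G/3)/ρ) for compression waves and v_s = √(G/ρ) for shear waves. The sketch below plugs in rough textbook values for steel purely as an illustration; the exact moduli vary by alloy:

    # Compression (P) and shear (S) wave speeds in an isotropic solid.
    import math

    K = 160e9      # bulk modulus of steel, Pa (rough, assumed value)
    G = 80e9       # shear modulus of steel, Pa (rough, assumed value)
    RHO = 7850.0   # density of steel, kg/m^3

    v_p = math.sqrt((K + 4.0 / 3.0 * G) / RHO)   # compression-wave speed
    v_s = math.sqrt(G / RHO)                     # shear-wave speed

    print(f"compression: {v_p:.0f} m/s, shear: {v_s:.0f} m/s")   # ~5800 and ~3200 m/s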

In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound in the fluid is called the object's Mach number. Objects moving at speeds greater than Mach 1 are said to be travelling at supersonic speeds.
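
Since the Mach number is just this ratio, it can be computed directly once the local speed of sound is known. A minimal sketch, assuming the 343 m/s value for air at 20 °C quoted above:

    # Mach number: object speed divided by the local speed of sound.
    def mach_number(object_speed_mps, sound_speed_mps=343.0):
        return object_speed_mps / sound_speed_mps

    print(round(mach_number(686.0), 2))   # 2.0, i.e. supersonic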

Synthesizer

A synthesizer or synthesiser (often abbreviated to synth) is an electronic musical instrument that generates audio signals that may be converted to sound. Synthesizers may imitate traditional musical instruments such as piano, flute, vocals, or natural sounds such as ocean waves; or generate novel electronic timbres. They are often played with a musical keyboard, but they can be controlled via a variety of other devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled via USB, MIDI or CV/gate using a controller device, often a MIDI keyboard or other controller.

Synthesizers use various methods to generate electronic signals (sounds). Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modeling synthesis and sample-based synthesis.
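
As a concrete illustration of one of these techniques, the sketch below performs very basic additive synthesis by summing a few sine-wave partials. The chosen frequencies, amplitudes, and sample rate are arbitrary assumptions for the example, not any particular instrument's design:

    # Minimal additive synthesis: sum a few harmonically related sine partials.
    import math

    SAMPLE_RATE = 44_100                                       # samples per second (assumed)
    PARTIALS = [(440.0, 1.0), (880.0, 0.5), (1320.0, 0.25)]    # (frequency Hz, amplitude)

    def synthesize(duration_s):
        out = []
        for n in range(int(SAMPLE_RATE * duration_s)):
            t = n / SAMPLE_RATE
            out.append(sum(a * math.sin(2 * math.pi * f * t) for f, a in PARTIALS))
        return out

    print(synthesize(0.001)[:5])   # first few generated signal values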

Synthesizers were first used in pop music in the 1960s. In the late 1970s, synths were used in progressive rock, pop and disco. In the 1980s, the introduction of the relatively inexpensive Yamaha DX7 made digital synthesizers widely available. 1980s pop and dance music often made heavy use of synthesizers. In the 2010s, synthesizers are used in many genres, such as pop, hip hop, metal, rock and dance. Contemporary classical music composers of the 20th and 21st centuries have written compositions for synthesizer.

The Sound of Music

The Sound of Music is a musical with music by Richard Rodgers, lyrics by Oscar Hammerstein II and a book by Howard Lindsay and Russel Crouse. It is based on the memoir of Maria von Trapp, The Story of the Trapp Family Singers. Set in Austria on the eve of the Anschluss in 1938, the musical tells the story of Maria, who takes a job as governess to a large family while she decides whether to become a nun. She falls in love with the children, and eventually their widowed father, Captain von Trapp. He is ordered to accept a commission in the German navy, but he opposes the Nazis. He and Maria decide on a plan to flee Austria with the children. Many songs from the musical have become standards, such as "Edelweiss", "My Favorite Things", "Climb Ev'ry Mountain", "Do-Re-Mi", and the title song "The Sound of Music".

The original Broadway production, starring Mary Martin and Theodore Bikel, opened in 1959 and won five Tony Awards, including Best Musical, out of nine nominations. The first London production opened at the Palace Theatre in 1961. The show has enjoyed numerous productions and revivals since then. It was adapted as a 1965 film musical starring Julie Andrews and Christopher Plummer, which won five Academy Awards. The Sound of Music was the last musical written by Rodgers and Hammerstein; Oscar Hammerstein died of stomach cancer nine months after the Broadway premiere.

The Sound of Music (film)

The Sound of Music is a 1965 American musical drama film produced and directed by Robert Wise, and starring Julie Andrews and Christopher Plummer, with Richard Haydn and Eleanor Parker. The film is an adaptation of the 1959 stage musical of the same name, composed by Richard Rodgers with lyrics by Oscar Hammerstein II. The film's screenplay was written by Ernest Lehman, adapted from the stage musical's book by Lindsay and Crouse. Based on the memoir The Story of the Trapp Family Singers by Maria von Trapp, the film is about a young Austrian woman studying to become a nun in Salzburg, Austria, in 1938 who is sent to the villa of a retired naval officer and widower to be governess to his seven children. After bringing love and music into the lives of the family through kindness and patience, she marries the officer, and together with the children they find a way to survive the loss of their homeland through courage and faith.

The film was released on March 2, 1965 in the United States, initially as a limited roadshow theatrical release. Although critical response to the film was widely mixed, the film was a major commercial success, becoming the number one box office movie after four weeks, and the highest-grossing film of 1965. By November 1966, The Sound of Music had become the highest-grossing film of all-time—surpassing Gone with the Wind—and held that distinction for five years. The film was just as popular throughout the world, breaking previous box-office records in twenty-nine countries. Following an initial theatrical release that lasted four and a half years, and two successful re-releases, the film sold 283 million admissions worldwide and earned a total worldwide gross of $286,000,000.

The Sound of Music received five Academy Awards, including Best Picture and Best Director. The film also received two Golden Globe Awards, for Best Motion Picture and Best Actress, the Directors Guild of America Award for Outstanding Directorial Achievement, and the Writers Guild of America Award for Best Written American Musical. In 1998, the American Film Institute (AFI) listed The Sound of Music as the fifty-fifth greatest American movie of all time, and the fourth greatest movie musical. In 2001, the United States Library of Congress selected the film for preservation in the National Film Registry, finding it "culturally, historically, or aesthetically significant".

Trip hop

Trip hop (sometimes used synonymously with "downtempo") is a musical genre that originated in the early 1990s in the United Kingdom, especially Bristol. It has been described as "a fusion of hip hop and electronica until neither genre is recognizable", and may incorporate a variety of styles, including funk, dub, soul, psychedelia, R&B, and house, as well as other forms of electronic music. Trip hop can be highly experimental.

Deriving from later idioms of acid house, the term was first used by the British music media to describe the more experimental variant of breakbeat emerging from the Bristol Sound scene in the early 1990s, which contained influences of soul, funk, and jazz. It was pioneered by acts like Massive Attack, Tricky, and Portishead. Trip hop achieved commercial success in the 1990s, and has been described as "Europe's alternative choice in the second half of the '90s."
