Scanning the Journals : ECHOES
Our selection of texts and links to be viewed from this site. All texts written by Thomas D. Rossing.
+ Vol.12, Nr.3, Summer 2002
The Journal of the Acoustical Society of Japan
(E) has changed its name to Acoustical Science and Technology,
beginning with the January 2001 issue, and it will be published both
online and in print "until such time as the electronic journal has
become thoroughly popular." To make the journal more international,
contributions from countries other than Japan are invited. The Acoustical
Society of Japan has been publishing an English journal since 1980. At
first it was published quarterly, but since 1986 it has been published
bimonthly. The first issue of Acoustical Science and Technology has a nice mix of
acoustics papers on such subjects as a blood flow noise transducer,
matching between a musical score and its performance, medical ultrasonic
imaging, effects of oriental lacquer on soundboards of musical
instruments, measurement of equal-loudness level contours, evaluation of
acoustic radiation efficiency for ship structures, mode shapes of quartz
crystal resonators, effect of materials on musical instrument timbre, and
adaptation to frequency change. Titles and abstracts of papers in the
Journal of the Acoustical Society of Japan (J) are included.
Vocal imitation in birds provides good material for studying the basic biology
of vocal learning, according to a paper entitled "Dynamics of the Vocal
Imitation Process: How a Zebra Finch Learns its Song" in the 30 March
issue of Nature. Zebra finch males develop their song between 35
and 90 days after hatching, the sensitive period for vocal learning. When
a young male is reared singly in the company of an adult male, it
develops a song that is a close copy of the sounds and temporal order of
that male's song. Techniques were developed for inducing the rapid onset
of song imitation in young zebra finches and for tracking vocal change
until a match to a model song was achieved. Tracking the transition
revealed that imitations of dissimilar sounds can emerge from successive
renditions of the same prototype, and that developmental trajectories for
some sounds followed paths of increasing mismatch until an abrupt
correction occurred by period doubling. These dynamics, the authors feel,
are likely to reflect underlying neural and articulatory constraints on
the production and imitation of sounds.
The genetic and
environmental contributions to differences in musical pitch perception
abilities were studied by using 284 twin pairs. The results of the study
are reported in a paper in the 9 March issue of Science. Results of
a Distorted Tunes Test, which requires subjects to judge whether simple
popular melodies contain notes with incorrect pitch, suggest that
variation in musical pitch recognition is primarily due to highly
heritable differences in auditory functions not tested by conventional
audiologic methods. Pitch perception may offer a window into brain
processes that are also used in language, according to the authors.
Although absolute pitch (the ability to recognize an isolated note) is
to some degree trainable, scores on tests of relative pitch perception
don't change appreciably over an individual's lifetime, suggesting
that, as with language, there's hard wiring involved.
Humans and most other animals localize a sound source by noting
differences in arrival time and intensity at the two ears. Even flies
can do it, according to a paper in the 5 April issue of Nature.
Although their eardrums are less than 0.5 mm apart, Ormia ochracea
is capable of locating a loudspeaker broadcasting cricket songs to
within 2° of the midline. This means that it can detect a change in
interaural time difference of a mere 50 ns. How does it achieve this?
The eardrums are connected
internally by a cuticle-based bridge that acts as a flexible lever so that
the tympana can vibrate in two distinct ways with different resonant
frequencies. Over the range of 5 to 20 kHz, the actual motion of the
tympana is the result of a linear combination of these two modes. As a
result, the maximum difference in the amplitude of vibration of the two
eardrums increases to 10 dB, and the maximum time delay between the
mechanical displacements of the two eardrums increases to about 55 µs.
Furthermore, low-jitter, phase-locked receptor responses are pooled to achieve
hyperacute time coding using a set of specific coding strategies in the
nervous system. The results suggest that nanoscale/microscale directional
microphones patterned after the Ormia ochracea ear have the
potential for highly accurate directional sensitivity, independent of size.
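The geometry behind these numbers is easy to check: for two ears a distance d apart, a distant source at angle θ from the midline produces an interaural time difference of d·sin θ / c. A minimal sketch of this standard plane-wave model (not the paper's analysis) shows why a 2° localization ability for 0.5-mm ear spacing implies ~50-ns time resolution:

```python
import math

def interaural_time_difference(separation_m, angle_deg, c=343.0):
    """Geometric time-of-arrival difference between two ears for a
    distant source at angle_deg from the midline (plane-wave model)."""
    return separation_m * math.sin(math.radians(angle_deg)) / c

# Ormia's eardrums are ~0.5 mm apart: even the maximum ITD (source at
# 90 degrees) is only ~1.5 microseconds...
max_itd = interaural_time_difference(0.5e-3, 90.0)
# ...and a source just 2 degrees off the midline shifts the ITD by ~50 ns.
delta = interaural_time_difference(0.5e-3, 2.0)
print(f"max ITD: {max_itd * 1e6:.2f} us, 2-degree ITD: {delta * 1e9:.0f} ns")
```

The mechanical lever described above effectively multiplies this tiny acoustic cue into the much larger (~55 µs) difference between the eardrum displacements.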
How does an accomplished orchestra
conductor identify a specific
musician within a multi-player section? A brief
communication in the 1 February issue of Nature presents evidence
from brain-potential recordings that experienced professional conductors
develop enhanced auditory localization mechanisms in peripheral space.
Seven conductors, seven professional pianists, and seven nonmusicians were
tested in a paradigm used originally to demonstrate superior sound localization in
congenitally blind subjects. The subjects listened to brief pink-noise
bursts from central and peripheral arrays of loudspeakers. Whereas a spatial gradient
was evident in all three groups for central auditory space, only the
conductors displayed a gradient for the periphery. From
magnetoencephalographic recordings, the effect was found to arise in the
secondary auditory cortex.
The inner ear is a complex sensory organ that
enables sound to be heard and balance to be maintained. Composed of the
fluid-filled cochlea and the semicircular canals, the inner ear is formed
from a thickening of the embryonic ectoderm called the otic placode. How
the otic placode develops is not well understood, however. A paper in the 8
December issue of Science reports that a family of signalling
molecules that initiate inner ear development in chick embryos has been
identified.
Special issues of journals on a single subject
are becoming more common. The July/August issue of Acustica, edited
by D. Murray Campbell, is devoted to musical wind instrument acoustics.
The lead paper, by B. Fabre and A. Hirschberg, reviews the physical
modelling of flue instruments. Several papers address organ pipes. A list
of the authors of the 14 papers reads like a who's who of musical acoustics.
"Listening for the Music of Gravity," the title of a report on
gravitational wave detectors in the March issue of CERN Courier,
sounds like a headline writer's playful error until you read on and
learn that the frequency range of the ultrasensitive interferometers is
approximately the audible frequency range. The article discusses both the
300-m TAMA detector in Japan and the 2-km LIGO detector in the United
States. The latter detector, near Hanford, Washington, uses laser beams in
twin 2-km beam tubes. Other gravitational wave detectors are being
constructed in Hannover, Germany and Cascina, Italy. "Within a couple of
years, the ultrasensitive ears of interferometers around the world should
be poised to listen at last to the song of gravitational waves," the
article concludes.
The U.S. government is proposing to waive environmental laws so as to enable the
Navy to deploy a controversial sonar system for submarine detection,
according to a story in the 29 March issue of Nature. The
Surveillance Towed Array Sensor System (SURTASS) listens for reflections
bouncing off submarines in response to bursts of sound emitted by the
system. The Navy says the system is needed because modern submarines are
too quiet to be detected by passive systems. However, the new sonar system
has been the subject of intense debate because of its potentially adverse
effects on marine mammals. Tests on the system were classified information
until about six years ago, when environmentalists learned about it. The
National Marine Fisheries Service, part of the National Oceanic and
Atmospheric Administration, has released a draft of new rules that give
the Navy a five-year exemption from the Marine Mammal Protection Act. The
Navy promises not to use the system within 20 kilometers of the coast and
to shut it down whenever mammals are detected within one kilometer.
The seriousness of the problem of disoriented ocean mammals is presented in a story in the
science section of the April 10 issue of New York Times. In March
2000, more than a dozen whales were stranded on a beach in the Bahamas. At
least six died. A task force from the Marine Fisheries Service and the
Navy concluded that it was highly likely that the stranding was caused by
sonar transmissions from Navy ships that were performing antisubmarine
exercises nearby. Now some biologists and environmentalists fear that such
mass strandings will become more common if the Navy wins approval for the new system.
Do you want to listen to sizzles, crackles, whistles, and roars from Planet Earth? NASA scientists have installed a very low frequency (VLF) radio receiver in Huntsville, Alabama, that receives radio waves from several hundred hertz to 10 kHz, uses a computer to convert them to sound waves, and sends them out as streaming audio at <www.spaceweather.com/glossary/inspire.html>, according to a story in the March 4 issue of The Oakland Tribune. Though the noises are transmitted 24 hours a day, the best time to log on and listen is the hour before dawn and the hour after dusk.
+ Vol.11, Nr.4, Fall 2001
People with developmental dyslexia have difficulty learning to read, regardless of how intelligent they are. The prevailing view of dyslexia involves the idea that learning an alphabetic writing system requires the brain to map letters to mental representations of the corresponding speech sounds (phonemes). But this view has been challenged, according to a commentary by Franck Ramus in the 26 July issue of Nature, by the discovery that people with this disorder also have an array of subtle sensory deficits, such as difficulty with auditory tasks that require the perception of brief or rapid speech and non-speech sounds. Moreover, there is evidence that the brains of some dyslexics have subtle neurological abnormalities in certain areas of the visual and auditory systems, the so-called magnocellular pathways.
A team of neuroscientists at the University of Helsinki also found that audiovisual aids can lessen dyslexia, according to a report in the August 28 issue of the Proceedings of the National Academy of Sciences. In each training session, children played a computer game in which sequences of 3 to 15 sounds were represented on the screen as horizontal sequences of rectangles. Sounds that increased in pitch, for example, were symbolized by rectangles that ascend like steps; longer-lasting sounds appeared as longer rectangles; and louder sounds as thicker rectangles. Such audiovisual training yielded substantial improvement on tests of spelling and reading speed and comprehension.
By exploiting the properties of surface
acoustic waves (SAW) at short distances and the high resolution of an
atomic force microscope, researchers at Stanford and in Berlin have developed a new acoustic imaging technique.
The Stanford-Berlin team has used the nonlinearity of the force acting between the tip and the surface, allowing two waves from the interdigital fingers to be mixed so that higher frequencies can be detected. The difference frequency can be tuned so that it lies just below the first resonant frequency of the cantilever. If one signal is kept constant, it is possible to detect the amplitude and phase of a high-frequency signal. By observing ultrasonic scattering from small objects, the researchers demonstrate that the technique has high spatial resolution, a new and attractive feature. This new acoustical imaging technique should help scientists to understand the macroscopic elastic properties of composite materials and also shed light on the elasticity of biological materials.
Molecular sensing in hearing and balance is reviewed in an insight feature in the 13 September issue of Nature. The brief article from the National Institute on Deafness and Other Communication Disorders points out that 28 million Americans are deaf or hearing impaired, making this the third most common chronic condition affecting the elderly. Vestibular and/or balance problems are reported in about 9% of the population 65 years of age or older. Fall-related injuries such as hip fracture are a leading cause of death and disability in the elderly population, and many of these are related to balance disorders. Recent studies in hearing and balance have led to major advancements in our understanding of the signal transduction processes in the auditory and vestibular systems. Hair-cell stereocilia in the cochlea can be viewed as true molecular sensor organelles. Although the exact sequence of events is not well understood, regulation at the level of the hair cell may involve the membrane protein prestin, a novel protein with shared homology to another protein, pendrin, a member of a family of sulphate/anion transport proteins.
A hundred-year-old speech machine, constructed by Dr. Marage in 1901, is pictured in the October issue of Scientific American. The apparatus apparently consists of reproductions of the human mouth pronouncing five vowel sounds. "Not only the larynx but also the cheeks play an important part in the production of sound, adding the harmonies which give the voice its character," an excerpt from the October 1901 issue reads.
Seismological studies of the Earth's solid inner core have revealed that compressional waves traverse the inner core faster along near-polar paths than in the equatorial plane, according to a paper in the 6 September issue of Nature. Calculations based on a simple model of polycrystalline texture account for this inner-core anisotropy.
Cochlear implants may improve vision along with
hearing, according to a paper in the June issue of Neuron.
Acoustics in the News – Volume 12, Number 1, Winter 2002
• Biologists have known that a parasitic fly stalks crickets by sound, even though the fly’s head is too small for the sound-location mechanisms used by most animals. Some flies can pinpoint sound to within 2 degrees, although the side-by-side eardrums of a fly span only about a millimeter. That’s because there’s a bridge of stiff material connecting the two membranes (see “Scanning the Journals” in ECHOES Spring 2001). Paper 2aEA1 by Ronald Miles et al. at the Ft. Lauderdale ASA meeting described silicon nitride microphone diaphragms designed to employ similar operating principles, and this idea was reported in the December 8 issue of Science News. Using techniques for making microchips, the researchers have made a 1-mm by 2-mm silicon diaphragm with a structure that resembles the fly ear. The next step in making a microphone is to attach electrical pickups.
• A baby’s adorable babbling brings smiles to parents, but it may also be a precursor to language, according to a report in the December 1 issue of Science News. Researchers at McGill University cite the fact that babies babble out of the right side of their mouths as evidence that the infantile sounds are more than noise, since past studies have established that people generally open the right side of the mouth more than the left side when talking, whereas nonlinguistic tasks requiring mouth opening are symmetric or left-centered. Linguistic asymmetry is thought to occur because the neural circuits controlling language reside in the brain’s left hemisphere, and each brain hemisphere usually operates the opposite side of the body.
• Improving learning by improving classroom acoustics was the subject of a report on ABC News, November 9, 2001. Over the past decade, schools in several states, including Florida, New York, Washington, and California, have installed microphones and speakers in classrooms to help students hear their teachers, but many acoustical engineers say the microphones are a Band-Aid solution to a critical problem that seriously impedes a student’s ability to learn. “Wearing microphones is a solution if using crutches is a solution to broken legs,” David Lubman is quoted as saying. “When classrooms are reverberant, amplification doesn’t help. It makes it louder but not clearer.” Under the new U.S. guidelines, classrooms would need to keep noise levels below 35 dB, about the sound level of a quiet country living room. Studies have shown that most classrooms in the United States have sound levels of 45 dB or more. The main culprits in classroom noise (besides boisterous students) are heating and ventilation systems.
• A so go (small drum) is a common spectator prop in the stands at baseball and soccer games in Korea. But the drums also appeared in the Delta Center at Salt Lake City during the speed-skating Olympic qualifying races, according to a story in the Deseret News for October 27. There were roughly 120 Korean fans in the crowd of 3300 for the 500-meter event. Whenever a Korean skater rounded a turn on the track, the fans leaned forward and pounded the so go drums furiously.
Made of cowhide and hand-painted in yellow, red and blue, the colors of the Korean speed skaters’ uniforms, the drums measure 10 inches in diameter. Held by one hand and struck with a bamboo stick, the drums make a hollow sound that rumbles powerfully underneath the yells and cheers of an arena crowd. “High frequency sounds, like yells and whistles, tend to be absorbed, but the low frequency sounds take longer to die out,” explained acoustician William Strong.
• A lecture by Wendy Sadler, entitled “Music to your ears: the story of sound, synths and CDs,” at the Open University was the first presentation in the 2002 schools lecture tour of the Institute of Acoustics. Sadler, manager for public programmes at Techniquest in Cardiff, is the youngest ever lecturer in the 19-year history of the school tours, according to a report in the December issue of Physics World. Her lecture will be followed by visits to 23 schools in England, Scotland, and Wales.
• A team of geneticists at the University of California at San Francisco and Los Angeles has begun a study aimed at finding a gene or genes that may contribute to absolute pitch ability, according to a story in the January 14 San Francisco Chronicle. The team, which is hoping to recruit large numbers of subjects for its study, has an online test for prospective subjects at www.perfectpitch.ucsf.edu. The researchers are particularly anxious to find examples of absolute pitch clustered in families. Based on the evidence so far, most scientists believe that genes do play at least a subtle role, perhaps by keeping a developmental window open wider and longer during early childhood, when note-naming ability generally takes shape.
• One of the biggest problems in delivering drugs to tumors is getting them in there, and now it appears that ultrasound may be able to help, according to a story in the December 22 issue of Science News. Jonathan Kruskal and his colleagues at Beth Israel Deaconess Medical Center in Boston have shown that exposing tumors in mice to ultrasound can make blood vessels more permeable to drugs, opening up the possibility of getting more drug into the tumor with a lower dose. How useful this effect will turn out to be is still an open question, but it’s a promising lead for improving chemotherapy.
• The fans and pumps are so loud on the international space station that astronauts who spend nearly six months on board consider noise one of the top habitability issues, according to a note in the November 20 issue of Newsday. “It’s sort of like being in a factory,” commented NASA astronaut Jim Voss. Even though he wore earplugs every night while he slept, Voss suffered partial hearing loss during his space station stint, although his hearing recovered to near normal after his return to the relative quiet of Earth in August.
Scanning the Journals
• Bird songs are complex acoustic patterns comprising notes of many frequencies, but the physical processes that produce these songs can be surprisingly simple, according to a paper in Physical Review Letters 87, 28101 (2001). Like the human larynx, a bird’s vocal organ consists of folds in the passage that connects the lungs to the throat. These folds open and close to produce notes with frequencies between 1 and 2 kHz. Individual “syllables” in the song last between 10 and 300 milliseconds. By treating the vocal organ of a canary as a harmonic oscillator, physicists at Rockefeller University and Ciudad University in Argentina developed a simple formula that accurately mimics at least three distinct notes in the songbird’s repertoire. The formula, which relates the air pressure and elasticity to the frequency of the note, models the spectra of a short falling note, a long rising note, and a medium-length note that rises then falls.
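The flavor of such a model can be sketched with a mass-spring oscillator whose stiffness (standing in for labial tension and pressure) is swept in time to shape the note. The sweep functions and numbers below are invented for illustration, chosen only to land in the 1-2 kHz range; they are not the formula from the paper:

```python
import math

def note_frequency(k, m=1.0):
    """Natural frequency (Hz) of a mass-spring model of the vocal folds:
    stiffness k stands in for labial tension, m for the oscillating mass."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def frequency_contour(k_of_t, duration_s=0.1, steps=100):
    """Sample the instantaneous natural frequency as the tension is swept."""
    return [note_frequency(k_of_t(i * duration_s / steps)) for i in range(steps)]

# Hypothetical tension sweeps giving the three note shapes in the text:
falling = frequency_contour(lambda t: 6.0e7 * (1.0 - 5.0 * t))    # short falling note
rising = frequency_contour(lambda t: 4.0e7 * (1.0 + 10.0 * t))    # long rising note
arched = frequency_contour(lambda t: 5.0e7 * (1.0 + 4.0 * t * (0.1 - t) / 0.0025))  # rises, then falls
```

Slow sweeps of a single physical parameter like this are enough to produce smoothly gliding syllables, which is the point of the paper's minimal model.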
• A new type of omnidirectional sound source, consisting of a powerful loudspeaker feeding a small aperture through a reverse horn for concentrating the acoustical energy is described in Acustica 87, 505 (2001). Although the total available sound power is less than with traditional omnidirectional (polyhedral) sources, the directivity diagram is much smoother.
• Two interesting papers on reverberation appear in the September and October issues of J. Audio Eng. Soc. In the September issue, Yang-Hann Kim and Sang-Tae Ahn describe a reverberation model based on objective parameters of subjective perception. The selected objective parameters are reverberation time RT, early decay time EDT, initial time delay gap (ITDG), objective clarity C, and strength of arriving energy G. These represent the subjective perception of a room or concert hall well, and can be used in designing an artificial reverberator, for example.
In the October issue, Barry Blesser presents an interdisciplinary synthesis of reverberation viewpoints. Artificial reverberator algorithms, which are implemented using digital signal processing, can best be understood by considering their relationships to several disciplines: the perceptual metrics of the auditory system, the statistical properties of acoustic spaces, the artistic needs of the music culture, and the mixing techniques in the recording studio. Both the early reverberation, containing the unique spatial personality, and the late part, containing the statistically random process, play different roles in each of the related disciplines. The unifying theme is the question of how the human auditory system builds a sense of space.
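Two of the objective parameters mentioned above, RT and EDT, are conventionally extracted from a room impulse response by Schroeder backward integration of its energy. A rough sketch with a synthetic exponential decay (the sampling rate and decay time are made up; this is the standard textbook procedure, not the authors' model):

```python
import math

def schroeder_decay_db(ir):
    """Backward-integrated energy decay curve (in dB re total energy)."""
    energy = [x * x for x in ir]
    edc = [0.0] * len(energy)
    tail = 0.0
    for i in range(len(energy) - 1, -1, -1):  # integrate from the tail backward
        tail += energy[i]
        edc[i] = tail
    total = edc[0]
    return [10.0 * math.log10(e / total) for e in edc if e > 0.0]

def decay_time(edc_db, fs, lo_db, hi_db):
    """Time for the curve to fall from lo_db to hi_db, extrapolated to 60 dB."""
    t_lo = next(i for i, d in enumerate(edc_db) if d <= lo_db) / fs
    t_hi = next(i for i, d in enumerate(edc_db) if d <= hi_db) / fs
    return 60.0 * (t_hi - t_lo) / (lo_db - hi_db)

fs = 8000
rt_true = 1.2  # seconds; synthetic exponential decay with this RT60
ir = [math.exp(-6.908 * (n / fs) / rt_true) for n in range(int(fs * rt_true))]
edc = schroeder_decay_db(ir)
rt = decay_time(edc, fs, -5.0, -35.0)  # RT from the -5..-35 dB slope
edt = decay_time(edc, fs, 0.0, -10.0)  # early decay time from 0..-10 dB
```

For a pure exponential decay RT and EDT coincide; in real halls the early slope differs from the late one, which is exactly why both parameters are kept.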
• A translation of a 1978 paper by H. Bohlen that first describes the “Bohlen-Pierce scale,” of interest to Western music theorists, appears in Acustica 87, 617 (2001). Based on a consonance criterion that is founded on combination tones, the twelve-step scale is derived from the major triad. The same procedure, extended to consonant intervals not used in the twelve-step scale, then leads to a thirteen-step scale. This scale is presented in just and equal tempered tuning. Approaches are shown to a tonal system, based on this scale, and to the realization of 13-tone music.
• A sonic crystal is to sound waves in air what a photonic crystal is to light waves or a semiconductor is to electrons: it permits the passage of waves at some energies but not others. According to a paper in the 14 January issue of Physical Review Letters, scientists in Spain have used a sonic crystal, an arrangement of aluminum rods, as an acoustic lens for focusing sound waves at audible frequencies. They thereby create an interferometer which, like its lightwave counterpart, causes two wavetrains of soundwaves to interfere with each other in a characteristic pattern.
• Horses and camels have tendons more than 600 mm long connected to muscle fibers less than 6 mm long which may function as vibration dampers to protect bones and tendons from potentially damaging vibrations, according to a paper in the 27 December issue of Nature. For mammals with masses greater than a few kilograms, tendon elasticity substantially reduces the energetic cost of running. Each time a foot hits the ground, these tendons are stretched, and they recoil elastically as the foot leaves the ground. When the foot of a running animal hits the ground, the impact sets the leg vibrating, at 30 to 40 Hz in horses, and it is thought that the short muscle fibers help to damp out these vibrations.
• An anomalous acoustoelectric effect has been discovered in a manganite thin film by a collaboration of physicists in Russia, Poland, and Ukraine, according to a paper in Physical Review Letters 87, 146602 (2001). The acoustoelectric effect (AE) occurs when an acoustic wave propagates along an electrically conducting surface and drags electric charge along it due to strong coupling between phonons and electrons. The researchers grew a manganite thin film atop a piezoelectric lithium-niobium-oxygen substrate on which they then launched a surface acoustic wave (SAW). They unexpectedly found that a component of the AE current did not reverse when the SAW traveled in the opposite direction.
Scanning the Journals
• “Hearing tests, environmental measurements and acoustic phenomena may together explain why boats and animals collide” is the subtitle of an article on “Manatees, Bioacoustics and Boats” in the March-April issue of American Scientist. After more than two decades of manatee-protection policies that have focused on slowing boats passing through manatee habitats, the number of injuries and deaths associated with collisions has increased, the authors point out. Researchers at the Charles E. Schmidt (not ASA’s Charles E. Schmid) College of Science at Florida Atlantic University have determined that manatees have a functional hearing range from 400 to 46,000 Hz, with a peak sensitivity between 16 and 18 kHz. Thus they are unable to hear the dominant low-frequency sounds of most boats, and they may be least able to hear the propellers of boats that have slowed down in compliance with boat speed regulations intended to reduce collisions. Furthermore, the Lloyd mirror effect can attenuate or cancel the propagation of low-frequency sounds generated near the surface, where the risk of collisions with ships and boats is greatest.
• Auditory spatial perception is strongly influenced by visual cues. A paper in the 14 March issue of Nature shows that an auditory aftereffect occurs from adaptation to visual motion in depth. After a few minutes of viewing a square moving in depth, a steady sound was perceived as changing loudness in the opposite direction. Adaptation to a combination of auditory and visual stimuli changing in a compatible direction increased the aftereffect, and the effect of visual adaptation almost disappeared when the directions were opposite. For processing of motion in depth, the auditory system responds to both auditory changing intensity and visual motion in depth.
• “Sending Sound to the Brain” is the title of an article in the 8 February issue of Science. The review article on cochlear and auditory brainstem implants is part of a special section on “Bodybuilding: The Bionic Human.” Early cochlear implants were designed primarily to replace the lost function of the cochlea with little regard for the way the brain processes and adapts to auditory information. Deaf patients without an intact auditory nerve may be helped by the next generation of auditory prostheses: surface or penetrating auditory brainstem implants that bypass the auditory nerve and directly stimulate auditory processing centers in the brainstem. Existing auditory brainstem implant (ABI) technology (implanted in about 200 patients thus far) stimulates the surface of the ventral cochlear nucleus in the auditory brainstem, the next stage of auditory processing after the cochlea.
• A simple photoacoustic detector, suitable for use in student laboratories, is described in the October issue of The Physics Teacher. The converter cell consists of a half-blackened pickle jar driven by the fluctuating light output of a conventional ac-powered incandescent bulb. Light is absorbed by the blackened surface and the energy is dissipated as heat. The surrounding gas heats and cools in synchrony with the periodically varying light flux, and under favorable conditions the resulting pressure variations can be detected by the ear at twice the line frequency. Essentially, the pickle-jar converter acts as a Helmholtz resonator.
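The Helmholtz resonance of the jar sets the frequency where such a cell responds most strongly. A small sketch of the textbook formula f = (c / 2π)·√(A / (V·L_eff)), using made-up pickle-jar dimensions (the article does not give them):

```python
import math

def helmholtz_frequency(c, neck_area, cavity_volume, neck_length, neck_radius):
    """Resonance frequency of a Helmholtz resonator,
    f = (c / 2*pi) * sqrt(A / (V * L_eff)),
    where L_eff adds the usual end correction (~1.7 * neck radius)."""
    l_eff = neck_length + 1.7 * neck_radius
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

# Hypothetical jar: 1-liter cavity, 3-cm-diameter neck, 2 cm long
r = 0.015  # neck radius in meters
f = helmholtz_frequency(343.0, math.pi * r**2, 1.0e-3, 0.02, r)
print(f"{f:.0f} Hz")  # on the order of a couple hundred Hz
```

A resonance in the low hundreds of hertz is conveniently close to twice the power-line frequency, which helps the ear pick out the photoacoustic signal.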
• Pulses of energy called planetary waves traverse the globe, protecting the Arctic ozone layer and influencing weather and climate, according to an article in the 24 January issue of Nature. At their largest scale, they straddle the Earth. Without them, ozone depletion in the stratosphere would be worse and more frequent than it is now. The Southern Hemisphere provides a taste of what could happen to the Arctic ozone layer if the influence of the Northern Hemisphere’s planetary waves weakens, which the build-up of greenhouse gases in the atmosphere could cause.
• The midbrain contains an auditory map of space that is shaped by visual experience, according to a letter in the 3 January issue of Nature. When barn owls are raised wearing spectacles that horizontally displace the visual field, for example, the auditory space map in the external nucleus of the inferior colliculus shifts according to the optical displacement of the prisms. The authors report on studies of the effect of a restricted, unilateral lesion in the portion of the optic tectum that represents frontal space. Such a lesion eliminates adaptive adjustments in the auditory map that represents frontal space on the same side of the brain. Topographic visual activity in the optic tectum could serve as the template that instructs the auditory space map.
• A physics teacher in Spain has calculated speeds of sound and conversational sound levels in the atmospheres of seven planets, plus Saturn’s satellite Titan and the extrasolar planet HD209458b, and published them in a paper in the April issue of The Physics Teacher. Excluding HD209458b (which is very hot), Jupiter has the highest speed of sound (1200 m/s).
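Such calculations rest on the ideal-gas expression c = √(γRT/M): lighter atmospheres (small molar mass M) carry sound faster. A quick sketch with illustrative values; the Jovian numbers below are rough assumptions for a hydrogen-helium mixture, not the figures from the paper:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def speed_of_sound(gamma, temp_k, molar_mass_kg):
    """Ideal-gas speed of sound, c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

# Earth's air at 20 C (gamma ~ 1.4, M ~ 0.029 kg/mol) -> roughly 343 m/s
earth = speed_of_sound(1.4, 293.0, 0.029)
# A rough H2/He mixture near Jupiter's cloud tops (illustrative values:
# gamma ~ 1.45, T ~ 165 K, M ~ 0.0023 kg/mol) comes out much faster
jupiter = speed_of_sound(1.45, 165.0, 0.0023)
```

Even at its frigid cloud-top temperature, Jupiter's hydrogen-rich atmosphere beats Earth's speed of sound by a wide margin, which is the qualitative point behind the 1200 m/s figure.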
• Ingenious software developed by the British company Sensaura makes ordinary loudspeakers “come alive,” according to a note in the February issue of Scientific American. The program enables ordinary computer speakers, television sets and headphones to create the illusion that sounds are coming from anywhere around a listener’s head. Last November the Royal Academy of Engineering recognized the accomplishment by awarding the MacRobert Award to Sensaura. So far the most popular application has been in enhancing the sounds of computer games, but the same technology can also enrich the music from CDs and MP3 files.
• Although it is not clear whether humans are able to learn while they are sleeping, evidence is shown in a paper in the 7 February issue of Nature that human newborns can be taught to discriminate between similar vowel sounds when they are fast asleep. Mismatch negativity (MMN) was used to determine the ability of newborns to detect a change in speech sounds. MMN can be observed in young infants throughout all sleep stages as well as when they are awake.
• Physicists have been contributing in an increasingly significant way to modeling the brain and designing effective substitutes for sensory inputs to the brain, including cochlear implants, according to an article in the February issue of Physics World. Interacting more closely with the brain than any other prosthetic device, cochlear implants are an incredible medical and bioengineering achievement. They convert external acoustic information into electrical stimuli and then present this information directly to the auditory nerve via electrodes inside the cochlea. To describe mathematically how this information is handled by the brain is an extremely difficult task that has only been seriously tackled in recent years. There appears to be some link between the underlying nature of quantum mechanics and aspects of brain behavior.
• The November/December issue of Acustica is a special issue on tomography and acoustics. The papers deal with tomography and its applications especially to the atmosphere and the oceans. The papers resulted from a workshop at the University of Leipzig March 6-7, 2001.
• The effect of the duration of head-related impulse responses (HRIRs) on the localization of virtual sound is discussed in a paper in the January/February issue of Journal of the Audio Engineering Society. The accuracy with which three subjects could localize virtual and free-field sound was measured using an absolute localization paradigm incorporating 354 possible sound-source locations. Localization performance gradually decreased as the HRIR duration was reduced, and first became significantly worse than that for free-field sound at HRIR durations ranging from 0.32 to 5.12 ms. Localization performance for virtual sound was not disrupted dramatically until the HRIR duration was reduced to 0.64 ms.
• A multi-year project at the IBM Research Laboratory in Yorktown Heights, NY aims to develop a speech recognition system that will understand 20 languages and operate with 98 percent accuracy, according to a note in the May issue of Technology Review. IBM hopes the system will improve on present-day systems by including new algorithms that consider the context of the conversation.
• Chimaeric sounds have been used to investigate the relative perceptual importance of the envelope and the fine structure of a sound, according to a paper in the 7 March issue of Nature. Chimaeric sounds have the envelope of one sound and the fine structure of another, not unlike hybrid sounds in electronic music that combine features from different sounds. The envelope is found to be the most important for speech reception, while the fine structure is most important for pitch perception and sound localization. When the two features are in conflict, the sound of speech is heard at a location determined by the fine structure, but the words are identified according to the envelope. Speech chimaeras were created by combining either a speech sentence and noise or by combining two separate speech sentences.
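A single-band chimaera can be sketched with the analytic signal (Hilbert transform). The published method applies this within each channel of a band-pass filter bank and sums the channels; the test signals below are invented for illustration:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the Hilbert-transform construction)."""
    n = len(x)                 # n assumed even here
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0          # keep positive frequencies, doubled
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def chimaera(sound_a, sound_b):
    """Envelope of sound_a imposed on the fine structure of sound_b.

    Single-band sketch; the full method does this inside each channel
    of a band-pass filter bank.
    """
    env_a = np.abs(analytic_signal(sound_a))              # slow envelope
    fine_b = np.cos(np.angle(analytic_signal(sound_b)))   # fast fine structure
    return env_a * fine_b

fs = 8000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 3 * t) * np.sin(2 * np.pi * 500 * t)  # 3 Hz envelope
b = np.sin(2 * np.pi * 700 * t)                              # 700 Hz carrier
c = chimaera(a, b)   # the 3 Hz envelope now rides on the 700 Hz carrier
```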
• Brain imaging techniques, such as positron emission tomography (PET) and functional magnetic resonance imaging, and methods that measure the activity of neurons in the cerebral cortex, such as electroencephalography and magnetoencephalography (MEG), have been used to study the relationship between melody, harmony and rhythm in music, according to a note in the 7 March issue of Nature. By studying activity in the auditory cortex in response to music, scientists have concluded that the secondary cortex mainly focuses on harmonic, melodic and rhythmic patterns, and the tertiary auditory cortex is thought to integrate these patterns into an overall perception of the music.
• The April issue of Acoustics Australia is a special issue on Ocean Acoustics edited by L. J. Hamilton. Included in it are papers on “Scattering in the ocean,” “Australian research in ambient sea noise,” “An introduction to ship radiated noise,” and “Seafloor data for operational predictions of transmission loss in shallow ocean areas.”
• “The power of hearing” is the title of an interesting article on hearing and cochlear mechanics by Thomas Duke in the May issue of Physics World. He credits cosmologist Tommy Gold with first suggesting (in 1948) that the ear employs an active process that adds energy at the frequency that it is trying to detect, rather like a regenerative radio receiver. This active process counteracts friction in the cochlea, so that sharp frequency tuning and high gain can both be achieved. The wisdom of Gold’s suggestion was underlined in 1971 when William Rhode used the Mössbauer effect to measure the velocity of the basilar membrane and show that the frequency tuning was far sharper than in the dead cochleae that von Békésy studied in his celebrated research.
In the 1980s researchers identified bundles of hair cells as being responsible for the active amplification. It was found that outer hair cells are electromotile; when a voltage is applied, the cell body contracts longitudinally. Each of the bundles appears to be capable of generating oscillations at a particular frequency. When one of these nonlinear dynamical systems is on the verge of vibrating, it is especially sensitive to periodic disturbances at frequencies close to its characteristic frequency. The onset of spontaneous oscillations, corresponding to what is known as a “Hopf bifurcation,” may be the source of otoacoustic emissions in our ears.
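The Hopf-bifurcation picture can be summarized by the standard forced normal form (a generic textbook sketch, not notation taken from the article):

```latex
% Forced Hopf oscillator: z(t) is the complex amplitude of the
% bundle oscillation, \mu the control parameter, \omega_0 the
% characteristic frequency, and F the stimulus amplitude.
\[
  \frac{dz}{dt} = (\mu + i\omega_0)\,z - |z|^{2}z + F e^{i\omega t}
\]
% Exactly at the bifurcation (\mu = 0) and on resonance
% (\omega = \omega_0), the steady-state amplitude satisfies
\[
  |z|^{3} = F \quad\Longrightarrow\quad |z| = F^{1/3},
\]
% a compressive response: weak stimuli receive far more gain than
% strong ones, consistent with the sharp tuning and high gain
% described above.
```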
• A detailed investigation of phonon modes in DNA macromolecules is presented in the May issue of Physical Review E (volume 65, paper 051903). Experimental evidence confirms the presence of multiple dielectric resonances in the submillimeter-wave spectra (0.01 to 10 THz). A direct comparison of spectra between different DNA samples reveals a large number of modes and a reasonable level of sequence-specific uniqueness.
• Surface waves play an important role in the exchange of mass, momentum and energy between the atmosphere and the ocean. Wave breaking supports air-sea fluxes of heat and gas, which have a profound effect on weather and climate, but wave breaking is poorly quantified and understood, according to a paper “Distribution of breaking waves at the ocean surface” in the 2 May issue of Nature. Using aerial imaging and analysis it was found that the distribution of the length of breaking fronts per unit area of sea surface is proportional to the cube of the wind speed and that, within the measured range of the speed of the wave fronts, the length of breaking fronts per unit area is an exponential function of the speed of the front. Furthermore, the fraction of the ocean surface mixed by breaking waves, which is important for air-sea exchange, is dominated by wave breaking at low velocities and short wavelengths.
• The June issue of Sound and Vibration, guest edited by Malcolm Crocker, features three articles on the vibroacoustic environment of spacecraft during launch and flight. Terry Scharton writes about “Vibration and acoustic testing of spacecraft;” William Hughes and Mark McNelis write about “Recent advances in vibroacoustics;” and Rabi Margasahayam and Raoul Caimi discuss “Launch pad vibroacoustics research at the Kennedy Space Center.” As Crocker reminds us in his opening editorial, the noise from spacecraft rocket engines on launch pads is very intense and causes vibration not only of the spacecraft vehicle but of the launch tower and related support facilities as well. In some cases the vibration can be of sufficient magnitude to cause fatigue and eventual failure of some parts.
• The March issue of Acoustical Science and Technology includes a tutorial paper on “Neural mechanisms of binaural hearing.” Two of the three cues used to localize sound are binaural, involving a comparison of the level and/or timing of the sound at each ear. The third cue depends on sensitivity to the elevation-dependent pattern of spectral peaks and troughs that result from multiple sound waves interfering at the tympanic membrane. Different physiological mechanisms process these different localization cues. Neurons in the dorsal cochlear nucleus are selectively sensitive to the spectral notches that result from interference between sound waves at the ear. Interaural level differences are initially processed in the lateral superior olive by neurons receiving inhibition from one ear and excitation from the other. Interaural time differences are converted into discharge rate by neurons in the medial superior olive that receive excitatory inputs from both ears and fire only when those inputs are coincident. The contribution of such coincidence detectors to sound-source localization is discussed in the light of recent observations.
• The perception of reverberation time in small listening rooms is discussed in a paper in the May issue of the Journal of the Audio Engineering Society. Small rooms are characterized by short reverberation times and strong resonances. However, most of the parameters used to describe the acoustics of small rooms were derived from studies in large diffuse rooms with long reverberation times. The aim of this study was to determine the difference limen for midfrequency reverberation times shorter than 0.6 s, which are usually encountered in small rooms. The difference limen was found to be 0.042±0.015 s.
• A paper in the August issue of Applied Acoustics assesses the tuning and damping of the historical carillon bells in Perpignan, France and their changes through restoration. The modal frequencies and decay rates were estimated by means of the matrix pencil algorithm, a parametric signal processing method. Tuning was found to be accurate except for the highest notes. On average, the bells ring 15% longer after the restoration, which included sanding their oxide layer. Damping rates are also more consistent throughout the range of the instrument.
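The matrix pencil idea can be sketched in a few lines for a single noiseless damped partial. This is a minimal illustration; the sampling rate, frequency, and decay rate below are hypothetical, not the Perpignan measurements:

```python
import numpy as np

def matrix_pencil_poles(y, n_modes, pencil=None):
    """Estimate signal poles z_k from samples y[n] = sum_k R_k * z_k**n.

    Noiseless sketch of the matrix pencil method: stack the samples
    into a Hankel matrix, form the one-column-shifted pair (Y1, Y2),
    and take the eigenvalues of the pencil pinv(Y1) @ Y2.
    """
    n = len(y)
    L = pencil or n // 2                        # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(n - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # rcond truncates the numerically zero singular values of Y1
    z = np.linalg.eigvals(np.linalg.pinv(Y1, rcond=1e-8) @ Y2)
    return z[np.argsort(-np.abs(z))[:n_modes]]  # keep dominant poles

# One decaying bell partial: 440 Hz, decay rate 3 s^-1, sampled at 4 kHz
fs, decay, freq = 4000.0, 3.0, 440.0
n = np.arange(400)
y = np.exp(-decay * n / fs) * np.cos(2 * np.pi * freq * n / fs)
z = matrix_pencil_poles(y, 2)                   # conjugate pole pair
f_est = abs(np.angle(z[0])) * fs / (2 * np.pi)  # recovered frequency, Hz
d_est = -np.log(np.abs(z[0])) * fs              # recovered decay rate, 1/s
```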
• A paper in the May issue of the Journal of the Audio Engineering Society describes a model of loudness applicable to time-varying sounds. The stages of the model include: (a) a finite impulse response filter representing transfer through the outer and middle ear; (b) calculation of the short-term spectrum using the fast Fourier transform; (c) calculation of an excitation pattern from the physical spectrum; (d) transformation of the excitation pattern to a specific loudness pattern; (e) determination of the area under the specific loudness pattern. This gives a value for the instantaneous loudness, from which the short-term perceived loudness can be calculated using an averaging mechanism similar to an automatic gain control system, with attack and release times. Finally, the overall loudness impression is calculated from the short-term loudness using an averaging mechanism with longer attack and release times.
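The attack/release averaging in the final stages can be sketched as a one-pole smoother with an asymmetric coefficient. This is a minimal illustration; the coefficients below are arbitrary per-frame values, not the published time constants:

```python
import numpy as np

def smooth_loudness(inst_loudness, attack=0.5, release=0.05):
    """One-pole smoother with asymmetric attack/release coefficients.

    The output tracks rises in instantaneous loudness quickly (attack)
    and decays back slowly (release), like an automatic gain control.
    """
    out = np.zeros(len(inst_loudness))
    state = 0.0
    for i, x in enumerate(inst_loudness):
        alpha = attack if x > state else release
        state += alpha * (x - state)
        out[i] = state
    return out

# A short burst of instantaneous loudness: the smoothed (short-term)
# loudness rises almost at once but lingers after the burst ends.
burst = np.concatenate([np.zeros(5), np.ones(10), np.zeros(30)])
short_term = smooth_loudness(burst)
```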
• “Reverberation in rectangular long enclosures with diffusely reflecting boundaries” is discussed in the January/February issue of Acta Acustica. A computer model divides every boundary into a number of patches and replaces patches and receivers with nodes in a network. For a number of hypothetical long enclosures, the model shows that with increasing source-receiver distance the RT30 increases continuously, while the early decay time increases rapidly until it reaches a maximum and then decreases slowly. Decay curves are concave in the near field and then become convex. For diffusely as opposed to geometrically reflecting boundaries, the sound attenuation along the length is notably greater, and air absorption is more effective with regard to both reverberation and sound attenuation.
• “A study of timing in two Louis Armstrong solos” is the title of one of the papers in a special collection of papers on timing and rhythm in jazz in the Spring issue of Music Perception. Precise timing analysis of two mid-tempo solos focused on stop-time sections. Two key elements of swing were analyzed: placement of the downbeats and the swing or triplet ratio. For these solos, Armstrong played fairly close to the beat with a swing ratio of about 1.6 to 1.
• Using acoustics to study surface roughness in agricultural surfaces is the subject of a paper in the July issue of Applied Acoustics. Sound propagating parallel to a smooth porous ground attenuates more rapidly than in free space due to absorption in the air-filled pores. Furthermore, additional attenuation occurs for propagation over a rough surface, and these additional attenuation mechanisms can be used to quantify the surface roughness, even on a porous surface. Modeling results are discussed for a variety of surfaces ranging from impermeable to loosely packed soil. Data on steep wedges yielded a roughness length scale twice that of previous studies on gravel.
• “Localization of virtual sound as a function of head-related impulse response duration” is the title of a paper in the January/February issue of the Journal of the Audio Engineering Society. The accuracy with which three participants could localize virtual and free-field sound was measured using an absolute localization paradigm incorporating 354 possible sound-source locations. Whereas some previous studies have suggested that the localization of virtual sound is affected only by extreme smoothing of head-related transfer functions, this study indicates that localization can be subtly disrupted by modest smoothing. The localization performance for virtual sound generated from 10.24- and 20.48-ms head-related impulse responses was as good as that for free-field sound.
• Tunneling of acoustic waves through the forbidden transmission region of an acoustic band gap array is the subject of an interesting pedagogical paper in the July issue of American Journal of Physics. The acoustic band gap is created in a waveguide with a periodically spaced series of dangling side branches. Using an impulse response method, the transmission properties of the array are characterized and the regions of forbidden transmission identified. Tunneling pulses, whose frequency content lies completely within the forbidden transmission region, are used to explore the concepts of tunneling time and group velocity. The group velocity of the tunneling pulse is considerably larger than the speed of sound. The analogous experiments are well known for electromagnetic waves but not acoustic waves.
• “Estimation of the underwater explosion depth from the modified cepstral analysis of sea reverberation” is the title of a paper in the May/June issue of Acoustical Physics. A mathematical model of the signal produced by an underwater explosion is used to obtain the dependence of the explosion depth on the argument at which the cepstrum of the signal reaches its maximum.
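The cepstral step can be sketched as follows. This is a toy illustration with a synthetic echo: the lag, sampling rate, and echo strength are invented, and the real method works on sea reverberation rather than white noise:

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    return np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)).real

# A signal plus a delayed, scaled copy of itself yields a cepstral
# peak at the delay; in the explosion problem the analogous delay
# structure of the reverberation encodes the charge depth.
rng = np.random.default_rng(0)
fs = 1000                        # Hz (illustrative)
x = rng.standard_normal(4096)
lag = 120                        # samples, i.e. a 0.12 s delay
y = x.copy()
y[lag:] += 0.5 * x[:-lag]        # add the echo
c = real_cepstrum(y)
# search positive quefrencies, skipping the region near the origin
q = int(np.argmax(c[20:len(c) // 2])) + 20
delay_seconds = q / fs           # recovered echo delay
```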
• Subjective experiments on human phase perception are discussed in a paper entitled “The effect of group delay spectrum on timbre” in the January issue of Acoustical Science and Technology. The stimuli in the experiments had a flat amplitude spectrum and a group delay spectrum with a single Gaussian peak. The first experiment used stimuli with different peak values, with center frequencies fixed at 1 kHz and 4 kHz. When the peak values of the stimuli were between -1 ms and 2 ms, they were perceived as zero phase regardless of their center frequencies and bandwidths. When the peak values were less than -8 ms or more than 10 ms and the bandwidths were less than one equivalent rectangular bandwidth, the stimuli were perceived as similar to one another.
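Such stimuli can be constructed in the frequency domain: group delay is minus the derivative of phase with respect to frequency, so integrating the desired Gaussian delay profile gives the phase of an all-pass spectrum. A sketch with illustrative parameter values, not the paper’s exact parameterization:

```python
import numpy as np

# Build an all-pass stimulus whose group delay has a single Gaussian
# peak: integrate tau(f) to get the phase, keep |X(f)| = 1, invert.
fs, n = 16000, 16384
f = np.fft.rfftfreq(n, 1.0 / fs)
peak_ms, f0, bw = 4.0, 1000.0, 200.0        # illustrative values
tau = (peak_ms / 1000.0) * np.exp(-0.5 * ((f - f0) / bw) ** 2)
phase = -2.0 * np.pi * np.cumsum(tau) * (f[1] - f[0])  # phi = -2*pi*int(tau df)
x = np.fft.irfft(np.exp(1j * phase), n)     # DC/Nyquist bins forced real

flat = np.abs(np.fft.rfft(x))               # amplitude spectrum stays flat
```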
ERRATA: The work on “Manatees, Bioacoustics and Boats” (first item in Scanning the Journals in the Spring 2002 issue) was done at Florida Atlantic University, not Florida State University, as reported.
• From scientific investigations dating back nearly a century, it is known that whips make loud sounds when their tips attain supersonic speeds and send shock waves through the air. New calculations by applied mathematicians Alain Goriely and Tyler McMillen at the University of Arizona shed new light on the phenomenon, according to the 1 June issue of Science News. These calculations, soon to appear in Physical Review Letters, indicate that a loop rolling down the whip’s length also goes supersonic when it’s near the tip and begins to uncoil. This creates the whip’s signature cracking sound, they say. Goriely and McMillen developed equations that account for a loop’s curvature, tension, and speed as it zips along an extended, elastic rod. By feeding their equations into a computer, they determined that the leading edge of the loop would break the sound barrier while still slightly curled. Even though the tip’s speed is also supersonic, the tip at that moment remains in the leading edge’s wake and can’t create shock waves.
Peter Krehl of the Ernst Mach Institute in Freiburg, Germany, who examined a cracking whip with a photographic method that shows shock waves, points out, however, that the tip of a whip must play a role since a whip won’t crack without one. Nathan Myhrvold of Intellectual Ventures in Bellevue, Washington agrees that the new analysis is inconclusive because it neglects the cracker. Myhrvold’s computer simulations indicate that some big dinosaurs could have created sonic booms with their whip-like tails, possibly for communication.
• On Weekend Edition, National Public Radio, Saturday, June 1, host Scott Simon also interviewed Alain Goriely, mathematics professor at the University of Arizona, about the sound of a cracking whip. Goriely pointed out that the sound you hear is a mini sonic boom, not unlike the sound that you hear when somebody shoots a gun or an airplane goes supersonic. The equation that describes whips is not too difficult to write, but good solutions are difficult to come up with, he said. The wave starts with pretty low velocity but reaches about twice or maybe three times the speed of sound for some part of the whip.
• A front-page story in the June 10 edition of the Pittsburgh Post-Gazette picks up on the acoustics of a cat’s meow, as presented in paper 3aABb2 by Nicholas Nicastro at the Pittsburgh ASA meeting. Cats do communicate with people, but that doesn’t mean they communicate like people. Nicastro hypothesizes that cats, domesticated more than 5000 years ago, have learned to use and shape their meows in ways that appeal to, or at least engage, humans. The meow is unusual in that cats rarely use it with each other; hissing, spitting, and purring seem to suffice among their peers. Cats seem to reserve their meows largely for humans and, over their long domestication, have learned to tailor them to the human ear. The shorter meows seemed more pleasant and less urgent to human listeners; the longer meows seemed more urgent and less pleasant.
• To achieve true surround sound from all directions requires at least five loudspeakers placed carefully around a room. Now, according to a “Physics in action” story in the May issue of Physics World, a British company has developed a single loudspeaker panel that promises surround sound. The device is based on the same phased-array technology that is used in radio telescopes and underwater sonar applications. It uses 254 tiny magnetic transducers to produce a three-dimensional sonic interference pattern that changes in size and shape as the relative phase between the signals is adjusted. In this way, sound waves can be directed toward the walls and the ceiling of the listening room, where they are reflected to produce surround sound. Moreover, viewers can modify the overall effect by remote control rather than having to rearrange the furniture. To compute the phase and timing of the signals requires 12 gigaflops of computing power. Such processing power would have been unthinkable a few years ago, but now the powerful processor in the sound projector costs only $30.
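The underlying phased-array principle is ordinary delay-and-sum beam steering. A sketch; the element count, spacing, and frequency below are illustrative, not the product’s actual design:

```python
import numpy as np

c = 343.0                     # speed of sound in air, m/s
n_elems, spacing = 16, 0.01   # 16 transducers, 1 cm apart (illustrative)
theta = np.radians(30.0)      # desired steering angle
freq = 8000.0                 # Hz

elem_pos = np.arange(n_elems) * spacing
# delaying element i by x_i * sin(theta) / c points the main lobe
# of the line array toward angle theta
steer_delays = elem_pos * np.sin(theta) / c

def array_gain(angle):
    """Normalized far-field response of the steered line array."""
    arrival = elem_pos * np.sin(angle) / c
    phases = 2.0 * np.pi * freq * (arrival - steer_delays)
    return abs(np.exp(1j * phases).sum()) / n_elems

# the response is maximal in the steered direction and small broadside
```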
• The “hum” is back, this time in Kokomo, Indiana, according to a story in the June 17 issue of the New York Times. The symptoms are pretty much the same as the Taos Hum (ECHOES, Autumn 1995) and the Copenhagen Hum (ECHOES, Fall 1997). Various residents describe a sound “like butter crackling in a skillet” or the buzz of a busy highway. Many blame the hum, which began in 1999, for health problems including headaches, nausea, diarrhea, fatigue and joint pain. The city has appropriated $100,000 to study the problem and is seeking proposals from “qualified acoustical engineering consultants.” One engineer, sent there by Fox TV to do a survey, noted a fairly strong component of infrasound around 10 Hz, which is inaudible to most people but can be sensed by others. Studies made in connection with the Copenhagen Hum (see ECHOES, Fall 1997) found that the ability to hear the hum appears to be hereditary, because of the number of sisters, mothers, and daughters who can hear it. There were not so many brothers, sons, and fathers who heard it, however. Using direction-finding equipment, the low frequency noise in the homes of the Danish hum hearers was traced to various industrial plants.
• A special INSIGHT section on Neurobiological Systems in the 16 May issue of Nature includes two articles of interest to ECHOES readers. The first deals with instructed learning in the auditory localization pathway of the barn owl. Our ability to localize the source of a sound relies on complex neural computations that translate auditory localization cues into representations of space. The dominant localization cues are the relative timing and the level of the sound at both ears. In barn owls, the visual system is important in teaching the auditory system how to translate cues. Adaptive adjustment of sound localization has been demonstrated by subjecting owls to a variety of sensory manipulations and measuring the effects of these on the accuracy of auditory orienting behavior. For example, when one ear is plugged, owls initially mislocalize sounds towards the side of the open ear, but after many weeks of experience with an earplug they recover accurate orienting responses. When the earplug is removed, the owls initially make orienting errors in the opposite direction but these errors, too, disappear with experience.
Another review article in this INSIGHT section deals with what songbirds teach us about learning. The way in which songbirds learn their songs has some striking parallels to human speech acquisition. With the discovery of discrete brain structures required for singing, songbirds are providing valuable insights into neural mechanisms of learning.
• Numerical optimization of violin top plates is the subject of a paper in Acta Acustica/Acustica 88, 278 (2002). The study shows that it is possible to compensate for differences in the material parameters of violin top plates by changing the distributions of plate thickness and arch height, thus keeping the eigenfrequencies unchanged. The cellular structure of wood is modeled with a honeycomb model, and the thickness and arch height compensation is determined through a stochastic optimization method called simulated annealing.
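Simulated annealing itself is easy to sketch. Below is a generic implementation applied to a toy two-parameter stand-in for the plate problem; the mode “laws” and target frequencies are invented, not the paper’s honeycomb model:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.1, t0=100.0, cooling=0.995,
                        iters=5000, seed=1):
    """Generic simulated annealing: accept uphill moves with
    probability exp(-delta/T) so the search can escape local minima,
    while the temperature T is gradually lowered."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Toy stand-in for the plate problem: pick (thickness, arch) scale
# factors so two "eigenfrequencies" of a crude model hit their targets.
def cost(p):
    thickness, arch = abs(p[0]), abs(p[1])   # guard against negatives
    f1 = 300.0 * thickness * arch ** 0.5     # hypothetical mode laws
    f2 = 450.0 * thickness ** 0.5 * arch
    return (f1 - 330.0) ** 2 + (f2 - 430.0) ** 2

best, fbest = simulated_annealing(cost, [1.0, 1.0])
```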
• "Talk to the Machine" is the title of a special R&D report on speech recognition machines in the September issue of IEEE Spectrum. Although the average person probably thinks of dictating reports to a PC or dialing an automated call center for flight schedules, some of the most novel and challenging work being done now involves putting speech recognition into toys, MP3 players, car navigation systems, cellphones and palm digital assistants. The push for embedded speech recognition results from manufacturers trying to cram more functions than ever into smaller devices and finding there isn't enough room for all the buttons and displays. A voice interface avoids the frustration of delving through multiple menus. Nearly all the speech recognition engines on the market today are based on hidden Markov models, which are used to represent how phonemes and allophones are pronounced and how fast they are spoken. Introduced three decades ago, their popularity shows little sign of waning. Although a recent entry, Microsoft's speech research group (with over 100 workers in Redmond and Beijing) is now second only to IBM's.
• Hyper-focusing a sound wave with time-reversed acoustics has been demonstrated experimentally by researchers in France, according to a paper in the 16 September Physical Review Letters. Even when a sound wave is launched by the tiniest nanomachine, it is often difficult or impossible to focus the sound wave down to the size of the machine itself, because conventional lenses don't capture a wave at its source but many wavelengths away, in the far-field. As a result, the lens cannot focus the wave to a spot smaller than half a wavelength, a roadblock called the diffraction limit which generally dictates the smallest details one can see with an optical microscope or the smallest circuits one can carve in a computer chip using light and lenses. The French researchers have now demonstrated a new way of breaking the diffraction limit by using time-reversed acoustics, a technique that takes an incident sound wave, produces a backwards-sounding version of it, and sends the reversed version right back to the source of the original sound. In the new experiment, a loudspeaker is connected to a 1.9-mm-thick glass plate and a 500 kHz sound wave 5 seconds long bounces chaotically inside the plate, the boundaries acting as small lenses for the wave.
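The time-reversal principle can be illustrated with a delay-only toy model (pure propagation delays, no chaotic cavity; all numbers are invented): a pulse reaches each receiver after its own travel time, and re-emitting each recording backwards makes every contribution accumulate the same delay again on the return trip, so they re-align and add coherently at the source.

```python
import numpy as np

n = 512
pulse = np.zeros(n)
pulse[50:54] = [0.5, 1.0, 1.0, 0.5]     # short source pulse

delays = [37, 81, 140, 223]             # travel times to four receivers
recordings = [np.roll(pulse, d) for d in delays]

refocused = np.zeros(n)
for d, rec in zip(delays, recordings):
    refocused += np.roll(rec[::-1], d)  # time-reverse, propagate back

# the summed field peaks at the time-mirror of the source instant,
# with all four contributions adding in phase there
peak = int(np.argmax(refocused))
```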
• Undersea listening devices are helping physicists look for ultrahigh-energy neutrinos, according to a note in the 4 October issue of Science. A decade ago, these undetectable particles, moving so fast they carry as much energy as a pitched ball, were a theoretical possibility but nothing more. Now physicists are firmly convinced that Earth is constantly being bombarded by ultrahigh-energy neutrinos, which are part of the debris generated when extremely energetic cosmic rays slam into the atmosphere, water, or rock, creating showers of particles. Using data from submarine-listening facilities, several projects are paving the way for higher-profile efforts soon to come. To make an audible click in the ocean, a neutrino must carry a huge amount of energy, perhaps 10²⁰ electron volts. Physicists from Stanford University are calibrating instruments belonging to the Atlantic Undersea Test and Evaluation Center (AUTEC), for example, to determine whether they are capable of picking up the sounds of passing neutrinos.
• A paper in Acoustical Physics 48, 430 (2002), translated from Akusticheskii Zhurnal, deals with calculation of the nonlinearity parameter for gaseous and liquid mixtures. Relations for estimating the derivatives of the speed of sound with respect to temperature and pressure in two-component mixtures are obtained.
• Inhibition in the brain plays a key role in sound localization, according to an article in the Search and Discovery section of the October issue of Physics Today. Scientists at the Max Planck Institute of Neurobiology and University College, London have shown that the neuronal response to interaural time differences (ITDs) in Mongolian gerbils is determined by fast inhibitory inputs within the brain. Neurons in the medial superior olive (MSO) in the brain receive inputs from both cochleas via the auditory nerves. But MSO neurons also receive inputs from two other processing centers, the medial and lateral nuclei of the trapezoidal body, and these appear to be inhibitory. To examine the role of inhibitory inputs, the researchers measured neuron firing rates following the injection of strychnine, which blocks the glycine receptors and effectively turns off the inhibition; doing so shifted the peak of the ITD response toward zero.
• The May/June issue of Acta Acustica/Acustica is a special issue devoted to Psychoacoustics, physiology and models of the central auditory system. The issue includes review articles by D. Hammershøi and H. Møller on Methods for Binaural Recording and Reproduction; A.R. Palmer and T.M. Shackleton on The Physiological Basis of the Binaural Masking Level Difference; and B.C.J. Moore and H. Gockel on Factors Influencing Sequential Stream Segregation as well as 10 scientific papers. The special issue was edited by Birger Kollmeier.
• A paper entitled "Sound level meter standards for the 21st Century" in the August issue of Acoustics Australia reports on a new sound level meter standard (IEC 61672-1:2002). This paper, based on the 2001 President's Prize paper awarded at the Australian Acoustical Society Conference, reminds us that most modern sound level meters rely on digital rather than analog circuitry.
• The shape of a cracking whip is the subject of a paper in the 17 June issue of Physical Review Letters. The crack of a whip is produced by a shock wave created by the supersonic motion of the tip of the whip in air (see Acoustics in the News in ECHOES 12 (3), p.8). A dynamical model for the propagation and acceleration of waves in the motion of whips is presented in this paper. The contributions of tension, tapering, and boundary conditions in the acceleration of an initial impulse are studied theoretically and numerically.
• A new kind of ocean wave has been detected at the Hawaii-2 Observatory, according to a paper in Geophysical Research Letters 29, 57 (2002). Propagating at the speed of sound in water, the new wave, which appears to be a coupled acoustic and Rayleigh wave, induces motion of the sea-floor sediments and creates regions of expansion and compression in the water. The new wave requires that the Rayleigh wavelength be shorter than the water's depth and that the shear velocity at the interface not exceed the sound velocity in water.
• An ultra-sparse code underlies the generation of neural sequences in a songbird, according to a letter in the 5 September issue of Nature. Sequences of motor activity are encoded in many vertebrate brains by complex spatio-temporal patterns of neural activity; however, the neural circuit mechanisms underlying the generation of these pre-motor patterns are poorly understood. In songbirds, one prominent site of pre-motor activity is the forebrain robust nucleus of the archistriatum, which generates sequences of spike bursts during song and recapitulates these sequences during sleep.
• About a third of the planned $250 million International Monitoring System (IMS), which includes detectors for infrasound and hydroacoustic waves, is up and running, although the data are being fed only to government-run centers in a few dozen nations that have signed the Comprehensive Nuclear Test Ban Treaty of 1996, according to a note in the 5 July issue of Nature. So far, the United States has opposed ratification of the Treaty, leaving it somewhat in limbo. The IMS is expected to be fully working by 2007.