Medical Physiology, 3rd Edition

Temporal Representations: Time-Measuring Circuits

To localize sound, the brain compares the timing and intensity of input to the ears

Neural circuits are very good at resolving time intervals, in some cases down to microseconds or less. One of the most demanding tasks of timing is performed by the auditory system as it localizes the source of certain sounds. Sound localization is an important skill, whether you are prey, predator, or pedestrian. Vertebrates use several different strategies for localization of sound, depending on the species, the frequency of the sound, and whether the task is to localize the source in the horizontal (left-right) or vertical (up-down) plane. In this subchapter, we briefly review general strategies of sound localization and then explain the mechanism by which a brainstem circuit measures the relative timing of low-frequency sounds so that the source of the sounds can be localized with precision.

Sound localization along the vertical plane (the degree of elevation) depends, in humans at least, on the distinctive shape of the external ear, the pinna. Much of the sound that we hear enters the auditory canal directly, and its energy is transferred to the cochlea. However, some sound reflects off the curves and folds of the pinna and tragus before it enters the canal and thus takes slightly longer to reach the cochlea. Notice what happens when the vertical direction of the sound changes. Because of the arcing shape of the pinna, the reflected path of sounds coming from above is shorter than that of sounds from below (Fig. 16-13). The two sets of sounds (the direct and, slightly delayed, the reflected) combine to create sounds that are slightly different on entering the auditory canal. Because of the interference patterns created by the direct and reflected sounds, the combined sound has spectral properties that are characteristic of the elevation of the sound source. This mechanism of vertical sound localization works well even with one ear at a time, although its precise neural mechanisms are not clear.


FIGURE 16-13 Detection of sound in the vertical plane. The detection of sound in the vertical plane requires only one ear. Regardless of the source of a sound, the sound reaches the auditory canal by both direct and reflected pathways. The brain localizes the source of the sound in the vertical plane by detecting differences in the combined sounds from the direct and reflected pathways.

For humans, accurate determination of the direction of a sound along the horizontal plane (the azimuth) requires two working ears. Sounds must first be processed by the cochlea in each ear and then compared by neurons within the CNS to estimate horizontal direction. But what exactly is compared? For sounds that are relatively high in frequency (~2 to 20 kHz), the important measure is the interaural (i.e., ear-to-ear) intensity difference. Stated simply, the ear facing the sound hears it as louder than the ear facing away because the head casts a “sound shadow” (Fig. 16-14A). If the sound is directly to the right or left of the listener, this difference is maximal; if the sound is straight ahead, no difference is heard; and if the sound comes from an oblique direction, intensity differences are intermediate. Note that this system can be fooled. A sound source straight ahead gives the same intensity difference (i.e., none) as a sound source directly behind.


FIGURE 16-14 Sound detection in a horizontal plane. A, Two ears are necessary for the detection of sound in a horizontal plane. For frequencies between 2 kHz and 20 kHz, the CNS detects the ear-to-ear intensity difference. In this example, the sound comes from the right. The left ear hears a weaker sound because it is in the shadow of the head. B, For frequencies <2 kHz, the CNS detects the ear-to-ear delay. In this example, the width of the head is 20 cm, and sound with a frequency of 200 Hz (wavelength of 172 cm) comes from the right. The peak of each sound wave reaches the left ear ~0.6 ms after it reaches the right.

The interaural intensity difference is not helpful at lower frequencies. Sounds below ~2 kHz have a wavelength that is longer than the width of the head itself. Longer sound waves are diffracted around the head, and differences in interaural intensity no longer occur. At low frequencies, the nervous system uses another strategy—it measures interaural delay (see Fig. 16-14B). Consider a 200-Hz sound coming directly from the right. Its peak-to-peak distance (i.e., the wavelength) is ~172 cm, which is considerably more than the 20-cm width of the head. Each sound wave peak will reach the right ear ~0.6 ms before it reaches the left ear. If the sound comes from a 45-degree angle ahead, the interaural delay is ~0.3 ms; if it comes from straight ahead (or directly behind), the delay is 0 ms. Delays of small fractions of a millisecond are well within the capabilities of certain brainstem auditory neurons to detect. Sounds need not be continuous for the interaural delay to be detected. Sound onset or offset, clicks, or any abrupt changes in the sound give opportunities for interaural time comparisons. Obviously, measurement of interaural delay is subject to the same front-back ambiguity as interaural intensity, and indeed, it is sometimes difficult to distinguish whether a sound is in front of or behind your head.
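The numbers above can be checked with simple arithmetic. The following sketch assumes a speed of sound in air of ~343 m/s and the 20-cm head width used in the text; both are round figures for illustration, not precise anatomical values.

```python
# Back-of-the-envelope check of the interaural-delay numbers in the text.
c = 343.0           # m/s, speed of sound in air (assumed round value)
f = 200.0           # Hz, frequency of the example tone
head_width = 0.20   # m, assumed width of the head

# Wavelength of a 200-Hz tone: c / f
wavelength_cm = 100.0 * c / f           # ~172 cm, longer than the head

# Maximum interaural delay (sound directly from one side):
# the extra travel distance is roughly one head width.
max_itd_ms = 1000.0 * head_width / c    # ~0.6 ms

print(f"wavelength: {wavelength_cm:.0f} cm, max delay: {max_itd_ms:.2f} ms")
```

For oblique angles the delay falls between 0 and this maximum, reaching 0 ms for sounds straight ahead or directly behind, which is the front-back ambiguity noted above.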

The brain measures interaural timing by a combination of neural delay lines and coincidence detectors

How does the auditory system measure interaural timing? Surprisingly, to detect very small time differences, the nervous system uses a precise arrangement of neurons in space. Figure 16-15A summarizes the neuroanatomy of the first stages of central auditory processing within the brainstem. Notice that neurons in each of the cochlear nuclei receive information from only the ear on that one side, whereas neurons from the medial superior olivary (MSO) nucleus—and higher CNS centers—receive abundant input from both ears. Because horizontal sound localization requires input from both ears, we may guess that “direction-sensitive neurons” will probably be found somewhere central to the cochlear nuclei. When cochlear nucleus neurons are activated by auditory stimuli, their action potentials tend to fire with a particular phase relationship to the sound stimulus. For example, such a neuron might fire at the peak of every sound wave or at the peak of every fifth sound wave. That is, its firing is phase locked to the sound waves, at least for relatively low frequencies. Hence, cochlear neurons preserve the timing information of sound stimuli. Neurons in the MSO nucleus receive synaptic input from axons originating in both cochlear nuclei, so they are well placed to compare the timing (the phase) of sounds arriving at the two ears. Recordings from MSO neurons demonstrate that they are exquisitely sensitive to interaural time delay, and the optimal delay for superior olivary neurons varies systematically across the nucleus. In other words, the MSO nucleus has a spatial map of interaural delay. The MSO also has a systematic map of sound frequency, so it simultaneously maps two qualities of sound stimuli.
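Phase locking can be quantified with the standard "vector strength" measure: each spike time is mapped to a phase of the stimulus cycle, and the magnitude of the mean phase vector is computed (1 = perfect locking, ~0 = no locking). The sketch below is illustrative only; the firing probability and jitter values are arbitrary assumptions, not measured cochlear-nucleus data.

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Magnitude of the mean phase vector: 1.0 = perfect phase locking."""
    phases = 2.0 * np.pi * freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(0)
freq = 200.0                 # Hz, stimulus frequency
period = 1.0 / freq
peaks = np.arange(0.0, 0.5, period)            # stimulus peak times over 0.5 s

# A phase-locked neuron: fires on ~40% of peaks, with small timing jitter
# (both numbers are hypothetical, for illustration).
locked = peaks[rng.random(peaks.size) < 0.4]
locked = locked + rng.normal(0.0, 0.05 * period, locked.size)

# Control: the same number of spikes at random times (no locking).
unlocked = rng.uniform(0.0, 0.5, locked.size)

print(vector_strength(locked, freq))    # close to 1
print(vector_strength(unlocked, freq))  # close to 0
```

A neuron that skips cycles (firing on every fifth peak, say) still scores near 1, which is why even sparse phase-locked firing preserves the timing information the MSO needs.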


FIGURE 16-15 CNS processing of sounds. A, A cross section of the mammalian medulla is shown. After a sound stimulus to the cochlea, the cochlear nerve carries an action potential to the cochlear nucleus, which receives information only from the ear on the same side. However, higher auditory centers receive input from both ears. B, Neurons in the MSO nucleus are each tuned to a different interaural delay that provides information about the horizontal direction of sound sources. Only when action potentials from the right and left sides arrive at the MSO neuron simultaneously does the neuron fire an action potential (coincidence detection). In this example, the two action potentials are coincident at MSO neuron D because the brief acoustic delay to the left ear is followed by a long neuronal-conduction delay, whereas the long acoustic delay to the right ear is followed by a brief neuronal-conduction delay. Additional neural mechanisms, such as inhibitory circuits, probably contribute to the determination of interaural delays in the mammalian brain.

In the brains of birds, and perhaps also in mammals, the tuning of MSO neurons to interaural delay seems to depend on neural circuitry that combines “delay lines” with “coincidence detection,” an idea first proposed by Jeffress in 1948. Delay lines are the axons from each cochlear nucleus; their length and conduction velocity determine how long it takes sound-activated action potentials to go from a cochlear nucleus to the axon's presynaptic terminals onto MSO neurons (see Fig. 16-15B). Axons from both the right and left cochlear nuclei converge and synapse onto a series of neurons in the MSO nucleus. However, each axon (each delay line) may take a different time to conduct its action potential to the same MSO neuron. The difference in conduction delay between the axon from the right side and that from the left side determines the optimal interaural delay for that particular olivary neuron. It is the olivary neuron that acts as the coincidence detector: only when action potentials from both the left- and right-ear axons reach the postsynaptic MSO neuron simultaneously (meaning that sound has reached the two ears at a particular interaural delay) is that neuron likely to receive enough excitatory synaptic transmitter to trigger an action potential. If input from the two ears arrives at the neuron out of phase, without coincidence in time, the neuron will not fire. All these postsynaptic MSO neurons are fundamentally the same: they fire when there is coincidence between input from the left and right. However, because neurons arrayed across the olive are mapped so that the axons connecting them have different delays, they display coincidence for different interaural delays. Thus, each is tuned to a different interaural delay and a different sound locale along the horizontal axis. The orderly arrangement of delay lines across the olive determines each neuron's preferred delay (and thus sound location preference) and leads to the orderly spatial mapping of sound direction.
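The logic of the Jeffress model can be sketched in a few lines of code. The conduction delays below are hypothetical round numbers chosen only to illustrate the principle: left-side axonal delays increase across the array of MSO neurons while right-side delays decrease, so each neuron's left-minus-right delay difference cancels a different acoustic interaural time difference (ITD).

```python
import numpy as np

# Hypothetical axonal conduction delays (ms) to five MSO neurons.
# Left-side delays lengthen across the array; right-side delays are mirrored.
left_axon_delay = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
right_axon_delay = left_axon_delay[::-1]

def best_neuron(itd_ms):
    """Index of the MSO neuron whose inputs coincide most closely.

    itd_ms > 0 means the sound reaches the left ear first, so the
    right-ear spike leaves its cochlear nucleus itd_ms later.
    """
    left_arrival = left_axon_delay            # left-ear spike starts at t = 0
    right_arrival = itd_ms + right_axon_delay # right-ear spike starts at t = itd
    mismatch = np.abs(left_arrival - right_arrival)
    return int(np.argmin(mismatch))           # coincidence = smallest mismatch

print(best_neuron(0.0))   # midline neuron for a sound straight ahead
print(best_neuron(0.4))   # a different neuron for a sound off to the left
```

A long acoustic delay to one ear is compensated by a short neuronal conduction delay from that side (and vice versa), so each acoustic ITD produces coincidence at a different position in the array, yielding the spatial map of interaural delay described above.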

The neural circuit we just described, which combines axonal delay lines and coincidence detection neurons, may not be the main mechanism by which interaural timing is measured in mammalian brains. In the auditory system of gerbils, it appears that synaptic inhibition contributes to the sensitivity of MSO neurons to interaural delay. It is possible that delay lines, synaptic inhibition, and perhaps other neuronal properties combine to optimize the measurement of timing in mammals.

Neural maps of sound localization are an interesting example of a sensory map that the brain must compute. This computed map contrasts with many other sensory maps that are derived more simply, such as by an orderly set of connections between the sensory receptor sheet (e.g., the retinal photoreceptors) and a central brain structure (e.g., the superior colliculus), as described in the preceding subchapter (see Fig. 16-8). The cochlea does not have any map for sound location. Instead, the CNS localizes low-frequency sounds by calculating an interaural time-delay map, using information from both ears together. Other circuits can build a computed map of interaural intensity differences, which can be used for localization of high-frequency sounds (see Fig. 16-14A). Once these two orderly sensory maps have been computed, they can be remapped onto another part of the brain by a simple system of orderly connections. For instance, the inferior colliculus receives parallel information on both timing delay and intensity difference; it transforms these two sets of information, combines them, and produces a complete map of sound direction. This combination of hierarchic (lower to higher centers) and parallel information processing is ubiquitous in the CNS and is a general strategy for the analysis of much more complex sensory problems than those described here.