Lange Review Ultrasonography Examination, 4th Edition

Chapter 1. Sonography: Principles, Techniques, and Instrumentation

Charles S. Odwin, Ronald R. Price, and Arthur C. Fleischer

Study Guide


Ultrasound is a longitudinal, mechanical wave that carries variations of quantities referred to as acoustic variables. Ultrasound is defined as an acoustic wave that has a frequency higher than the upper limit of human hearing. That limit is typically taken to be 20,000 cycles per second (20,000 Hz). The unit hertz (Hz) is the internationally accepted term for cycles per second.

Ultrasound waves are produced by oscillatory motion of particles in a medium, creating regions of compression and rarefaction. The continued movement of particles propagating through a medium is the result of collision between particles that make up the medium.

Ultrasound can be continuous or pulsed. In the continuous mode, the vibratory motions are produced by the source in an uninterrupted stream, whereas in the pulsed mode, the sound is delivered in a series of packets, or pulses. Almost all diagnostic ultrasound applications use pulsed ultrasound.

The following terms are commonly used in diagnostic medical sonography:

Longitudinal wave is a wave in which the particles of the medium move in a direction parallel to the wave propagation (as opposed to shear waves, also known as transverse waves, in which particles of the medium travel in a direction that is perpendicular to the wave propagation).

Mechanical wave is a wave that requires a medium in which to travel and, therefore, cannot propagate in a vacuum.

Acoustic variable. Each of the following is considered an acoustic variable: pressure, temperature, density, particle motion (distance). Note that all of these variables change as an acoustic wave passes through the medium.

Parameters of a wave (Fig. 1–1A). The following terms are common to all waves:


FIGURE 1–1A. The parameters of a wave. The frequency of this wave is 3 Hz (or cycles per second). A period is one complete cycle; therefore, this wave shows three periods. Note: When the acoustic variable is pressure or density, the upward direction represents compression and the downward direction represents rarefaction. Otherwise, upward represents a positive and downward a negative change in the acoustic variable.

Cycle. A cycle is composed of one compression and one rarefaction, or a complete positive and negative change in an acoustic variable.

Frequency (f) is the number of cycles per second. Frequency describes how many times the acoustic variable (whether it be pressure, density, particle motion, or temperature) changes in one second. Units: hertz (Hz), megahertz (MHz).


Period is the time it takes for 1 cycle to occur; the inverse of frequency. Units: seconds (s), microseconds (μs).


As the frequency increases, the period decreases. Conversely, as the frequency decreases, the period increases.

Wavelength (λ) is the distance the wave must travel in 1 cycle. Wavelength is determined by both the source of the wave and the medium in which it is propagating (Fig. 1–1B). Units: meters (m), millimeters (mm).


FIGURE 1–1B. A wavelength represents the distance between two adjacent wave peaks.


With a velocity or propagation speed (c) of 1,540 m/s, the wavelength of 1 MHz is 1.54 mm, of 2 MHz is 0.77 mm, and of 3 MHz is 0.51 mm.

Propagation speed is the maximum speed with which an acoustic wave can move through a medium, determined by the density and stiffness of the medium. Propagation speed increases proportionally with the stiffness (i.e., the stiffer the medium, the faster the variable will travel). Density is the concentration of mass per unit volume, and propagation speed is inversely proportional to density. Units: meters/second (m/s), millimeters/microsecond (mm/μs).


It should be emphasized that compressibility is the opposite of stiffness. If compressibility increases, then the propagation speed decreases.

Propagation speed is greater in solids > liquids > gases. Propagation speed (c) is equal to frequency (f) times wavelength (λ): c = f × λ. Because the propagation speed is constant for a given medium, if the frequency increases, the wavelength will decrease. Conversely, if the frequency decreases, the wavelength will increase.


If the frequency of an ultrasound wave traveling through soft tissue is increased from 5 to 10 MHz, what happens to the wavelength?

Steps to Solution:

Wavelength = propagation speed ÷ frequency (λ = c/f). At 5 MHz, λ = 1.54 mm/μs ÷ 5 MHz = 0.308 mm. At 10 MHz, λ = 1.54 mm/μs ÷ 10 MHz = 0.154 mm.

Doubling the frequency halves the wavelength in a given medium. Note how the wavelength gets smaller.
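The relationship c = f × λ in this example can be checked numerically. Below is a minimal Python sketch (the function name is illustrative; 1.54 mm/μs is the assumed soft-tissue propagation speed):

```python
def wavelength_mm(frequency_mhz, speed_mm_per_us=1.54):
    """Wavelength (mm) = propagation speed / frequency.

    mm/us divided by MHz (cycles per us) yields mm per cycle.
    """
    return speed_mm_per_us / frequency_mhz

# Doubling the frequency halves the wavelength in a given medium:
lam_5 = wavelength_mm(5)    # ~0.308 mm
lam_10 = wavelength_mm(10)  # ~0.154 mm
```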


Pulse repetition frequency (PRF) is the number of pulses per second. Units: hertz (Hz), kilohertz (kHz).

The PRF used depends on imaging depth. As the imaging depth increases, the PRF must decrease. This behavior follows from the pulse-listen-receive cycle of the transducer: the longer it takes the returning signals (echoes) to come back to the transducer, the greater the interval between pulses must be. Therefore, the farther away a target, the longer the round trip, and the greater the interval between transmitted pulses.
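The depth-PRF tradeoff can be illustrated with a short Python sketch (a simplified model; the function name is hypothetical). The maximum PRF is limited by the round-trip travel time to the deepest target:

```python
def max_prf_hz(depth_cm, speed_m_per_s=1540.0):
    """Maximum PRF allowed by the round-trip travel time.

    The transducer must listen for 2 x depth / c before emitting
    the next pulse, so deeper imaging forces a lower PRF.
    """
    round_trip_s = 2.0 * (depth_cm / 100.0) / speed_m_per_s
    return 1.0 / round_trip_s

# 10 cm of depth permits at most ~7,700 pulses per second;
# doubling the depth to 20 cm halves that to ~3,850 Hz.
```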

Pulse repetition period (PRP) is the time from the beginning of one pulse to the beginning of the next (Fig. 1–2A). Units: seconds (s), milliseconds (ms).


FIGURE 1–2A. Pulse repetition period (PRP).


The PRP increases as imaging depth increases. When depth decreases, the PRP decreases.

Pulse duration (PD) is the time it takes for a pulse to occur: the period of the ultrasound in the pulse multiplied by the number of cycles in the pulse (see Fig. 1–2A). Units: seconds (s), milliseconds (ms). Pulse duration = number of cycles (n) × period (p).

Duty factor is the fraction of time that the transducer is generating a pulse.

Maximum value: 1.0. In continuous wave, the transducer is always generating a pulse. A second transducer acts as the listening device.

Minimum value: 0.0. The transducer is not being excited (therefore, no pulse will be generated). In clinical pulse-echo imaging, the duty factor ranges from 0.001 to 0.01. Units: unitless.


Note: Because the duty factor is unitless, and PD is usually in microseconds, it is necessary to divide by 1,000 to cancel out the units in the formula. In using this formula, the units must match (PD and PRP both must be in seconds, milliseconds, or microseconds). If not, a correction factor, such as the 1,000 in the denominator, must be used.

The duty factor can also be computed by multiplying the pulse duration by the pulse repetition frequency: duty factor = PD × PRF.


Spatial pulse length (SPL) is the distance over which a pulse occurs (Fig. 1–2B). Unit: millimeters (mm). Spatial pulse length (SPL) = wavelength (λ) × number of cycles in a pulse (n).


FIGURE 1–2B. Spatial pulse length (SPL).
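The pulse parameters defined above (PD, SPL, and duty factor) can be tied together in a small Python sketch. The numbers below assume a hypothetical 5-MHz pulse of 2 cycles and are purely illustrative:

```python
def pulse_duration_us(n_cycles, period_us):
    """PD = number of cycles x period."""
    return n_cycles * period_us

def spatial_pulse_length_mm(n_cycles, wavelength_mm):
    """SPL = number of cycles x wavelength."""
    return n_cycles * wavelength_mm

def duty_factor(pd_us, prp_ms):
    """Unitless fraction of time the transducer is 'on'.

    PRP is converted from ms to us (x 1,000) so the units cancel.
    """
    return pd_us / (prp_ms * 1000.0)

# 2 cycles at 5 MHz (period 0.2 us, wavelength 0.308 mm), PRP 0.1 ms:
pd = pulse_duration_us(2, 0.2)           # 0.4 us
spl = spatial_pulse_length_mm(2, 0.308)  # 0.616 mm
df = duty_factor(pd, 0.1)                # 0.004 (within 0.001-0.01)
```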

Amplitude is the maximum variation that occurs in an acoustic variable. It indicates the strength of the sound wave. To arrive at this variation, the undisturbed value is subtracted from the maximum value, and the unit for the acoustic variable is applied (Fig. 1–3). Peak-to-peak amplitude (P–P) is the maximum to minimum value.


FIGURE 1–3. A wave amplitude. Amplitude is equal to the maximum value minus the normal value. Peak-to-peak (P–P) amplitude is equal to the maximum plus absolute value of the minimum.

Power is the rate of energy transferred. The power is proportional to the wave amplitude squared. Unit: watts (W).

power ∝ amplitude²

Intensity is the power in a wave divided by the cross-sectional area of the beam. Unit: watts per centimeter squared (W/cm²).


Note: The intensity is proportional to the amplitude squared. If the amplitude doubles, then the intensity quadruples.


The ultrasound power and the intensity of the ultrasound beam are not identical, although the two terms are sometimes used interchangeably. The ultrasound power is the rate at which work is done; it is equal to the work done divided by the time required to do the work. The intensity is the power per unit area and represents the strength of the ultrasound beam. The intensities used in diagnostic medical ultrasound applications range from 1 to 50 mW/cm². An understanding of ultrasound intensity is important when studying the biologic effects of ultrasound in tissue (discussed later in this chapter).

Intensities have both a peak value and an average value. The intensity of the sound beam as it travels through a medium varies across the beam (spatial intensity) and with time (temporal intensity).

Spatial peak (SP) is intensity at the center of the beam. Spatial average (SA) is intensity averaged throughout the beam.

Temporal peak (TP) is maximum intensity in the pulse (measured when the pulse is on).

Temporal average (TA) is intensity averaged over one on-off beam cycle (takes into account the intensity from the beginning of one pulse to the beginning of next).

Pulse average (PA) is intensity averaged over the duration of the single pulse.

Six intensities result when spatial and temporal considerations are combined: SPTP (spatial peak, temporal peak), SPPA (spatial peak, pulse average), SPTA (spatial peak, temporal average), SATP (spatial average, temporal peak), SAPA (spatial average, pulse average), and SATA (spatial average, temporal average).

In pulsed ultrasound, the TP is greater than the PA, which is greater than the TA. When using continuous-wave ultrasound, however, TP and TA intensities are the same.

Spatial peak intensity is related to SA by the beam uniformity ratio (BUR).

BUR is a coefficient that describes the distribution of ultrasound beam intensity in space: BUR = SP/SA. The higher the BUR, the more concentrated the intensity at the center of the beam. Units: unitless.


Temporal average intensity is related to TP by the duty factor (DF): TA = TP × DF. Units: unitless.
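These spatial and temporal relationships (SP = BUR × SA and TA = TP × DF) can be expressed as a brief Python sketch (function names are illustrative):

```python
def spatial_peak(sa_intensity, bur):
    """SP = SA x beam uniformity ratio (BUR = SP / SA, unitless)."""
    return sa_intensity * bur

def temporal_average(tp_intensity, duty_factor):
    """TA = TP x duty factor (unitless)."""
    return tp_intensity * duty_factor

# With continuous wave the duty factor is 1.0, so TA equals TP:
cw_ta = temporal_average(10.0, 1.0)  # 10.0
```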



Attenuation is the reduction of the sound beam’s amplitude and intensity as it travels through a medium. This is why the echoes from deep structures are weaker than those from more superficial structures. The factors that contribute to attenuation are the following:

Absorption is the conversion of sound energy into heat. Absorption is the major source of attenuation in soft tissues.

Scattering. Diffuse scattering is the redirection of the sound beam after it strikes rough or small boundaries, which occurs when the wavelength is larger than the reflecting surface. Liver parenchyma and red blood cells are examples of diffuse scatterers.

Reflection is the return of a portion of the ultrasound beam back toward the transducer (an echo). Of interest in diagnostic sonography is specular reflection, which occurs when the wavelength of the pulse is much smaller than the boundary it is striking, and the surface is smooth. The best examples of specular reflectors are the diaphragm, liver capsule, and gallbladder walls. Reflection of the ultrasound beam depends on the acoustic impedance mismatch at the boundary between two media (discussed in detail later in this chapter).

The unit in which attenuation is given is the decibel (dB). The decibel is a unit of intensity or power ratio; it is the quantity obtained by taking 10 times the logarithm (base 10) of the ratio of two intensities.


Attenuation coefficient is the attenuation per unit length of sound wave travel. For soft tissue, it is approximately one-half of the operating frequency of the transducer; that is, there is approximately 0.5 dB of attenuation per centimeter of travel for each MHz. For example, if the operating frequency of a transducer is 5 MHz, then the attenuation coefficient is approximately 2.5 dB/cm.


Note: Path length is the distance the sound beam travels in a medium. The actual calculation of decibel values is complex and need not be part of the sonographer’s bank of common knowledge, but the sonographer should understand that because decibels are exponents, a small change in decibels can mean a large change in resulting values. The most useful way to handle these values is to memorize the commonly encountered ones (Table 1–1).

TABLE 1–1 • Decibel Values of Attenuation


Example 1

The ultrasound beam produced by a 4-MHz transducer has an initial intensity of 20 mW/cm² and travels through 3 cm of tissue. What is the intensity of the beam at the end of this path?


(See Table 1–1.)

To obtain the final intensity, multiply the intensity ratio by the original intensity:


The intensity was, therefore, reduced to 25% of its original value. Another way to solve this example is to note that a 3-dB reduction means halving a value. Because 6 dB = 3 dB + 3 dB, an attenuation of 6 dB reduces the intensity by one-half (20 mW/cm² → 10 mW/cm²), then by one-half again (10 mW/cm² → 5 mW/cm²).
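Example 1 can be reproduced in Python; the 0.5 dB/cm/MHz soft-tissue rule and the decibel definition (10 × log₁₀ of an intensity ratio) are taken from the text, while the function names are illustrative:

```python
def attenuation_db(frequency_mhz, path_cm):
    """Soft-tissue rule of thumb: ~0.5 dB per cm per MHz."""
    return 0.5 * frequency_mhz * path_cm

def final_intensity(initial_mw_cm2, attenuation_in_db):
    """Surviving intensity = initial x 10 ** (-dB / 10)."""
    return initial_mw_cm2 * 10.0 ** (-attenuation_in_db / 10.0)

db = attenuation_db(4, 3)        # 6.0 dB for 4 MHz over 3 cm
out = final_intensity(20.0, db)  # ~5 mW/cm^2 (25% of 20, per Table 1-1)
```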

Example 2

After passing through soft tissue, an ultrasound beam with an initial intensity of 100 mW/cm² is reduced to a final intensity of 0.01 mW/cm². Calculate the amount of attenuation.


Note: In strict mathematical terms, the 40 dB should be negative, but for our purpose, it can be simply stated as 40 dB of attenuation. The attenuation, therefore, was 40 dB (–40 dB).

The half-intensity depth is the distance at which the intensity will be one-half that of the original; the distance the sound beam will travel through a medium before its intensity is reduced by 50%. It is calculated by the formula:


The half-intensity depth can also be calculated from the frequency:


The half-intensity depth is a good indicator of the frequency that should be selected to view different structures in the body. For example, if 50% of the intensity is gone before one reaches a certain depth, then it is obvious that deeper structures will receive less of the sound beam and, thus, generate weaker echoes. Therefore, to visualize deep structures, it is necessary to use a lower frequency.
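Under the same soft-tissue assumption (~0.5 dB/cm/MHz), the half-intensity depth follows from the 3-dB (half-value) point. A minimal Python sketch (function name illustrative):

```python
def half_intensity_depth_cm(frequency_mhz):
    """Depth at which intensity drops to 50% (a 3 dB loss).

    With an attenuation coefficient of ~0.5 dB/cm/MHz,
    HID = 3 / (0.5 x f) = 6 / f, with f in MHz and HID in cm.
    """
    return 3.0 / (0.5 * frequency_mhz)

# Higher frequency -> shallower half-intensity depth, which is why
# lower frequencies are chosen to visualize deep structures.
```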

Time gain compensation (TGC) is an electronic compensation for tissue attenuation.

TGC near gain increases or decreases the echo brightness in the near field.

TGC far gain increases or decreases the echo brightness in the far field.

Overall gain increases or decreases the overall brightness in the image.


Echoes are the reflections of the sound beam as it travels through the media. An echo is generated each time the beam encounters an acoustic impedance mismatch, but its strength depends on a number of factors. One very important factor, the angle of incidence, is the angle at which the incident beam strikes a boundary. The angle of incidence is equal to the angle of reflection (Fig. 1–4A).


FIGURE 1–4A. (A) An oblique incidence striking a boundary; (B) refraction of the sound beam; (C) reflection of the sound beam. Note: An oblique incidence is not a normal incidence; a normal incidence is 90° (perpendicular). An incident beam can be

1. Perpendicular (normal)

2. Oblique (not perpendicular; an incident beam at an oblique angle). The angle of incidence is the angle at which the incident beam strikes the boundary.

Perpendicular incidence is a beam traveling through a medium perpendicular to a boundary and encountering the boundary at a 90° angle (Fig. 1–4B). Perpendicular incidence is also known as normal incidence.1 The portion of the beam that is not reflected continues in a straight line; this is called transmission.


FIGURE 1–4B. The transmission of the perpendicular incidence sound beam, also called normal incidence. (A) Normal incidence striking a boundary perpendicularly; (B) the intensity transmitted; (C) reflection of energy at the boundary of medium 1 and medium 2. Note: Beam (C) actually travels back along the beam coming from (A), but it is depicted separately.

Perpendicular incidence will produce a reflection when the acoustic impedance changes at the boundary. Acoustic impedance is the product of the density of a medium and the velocity of sound in that medium.


At an acoustic impedance mismatch, the sound beam will proceed (transmission), be reflected, or both. The relationship between perpendicular incidence and the intensity of the echoes can be characterized by the following formulas:


The ITC can also be calculated by the formula ITC = (4 × Z1 × Z2) ÷ (Z1 + Z2)².



Given two media, one with an acoustic impedance of 20 rayls and the other with an acoustic impedance of 40 rayls, calculate the intensity reflection coefficient (IRC), the intensity transmission coefficient (ITC), the reflected intensity, and the transmitted intensity. (Assume that the incident intensity is 10 mW/cm2.)



If the reflected intensity is equal to the IRC times the original intensity, then reflected intensity = 0.11 × 10 mW/cm2 = 1.1 mW/cm2.

If the transmitted intensity is equal to the ITC times the original intensity, then transmitted intensity = 0.89 × 10 mW/cm2 = 8.9 mW/cm2.
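The worked example above can be reproduced with a short Python sketch of the IRC and ITC formulas (impedances in rayls; function names are illustrative):

```python
def irc(z1, z2):
    """Intensity reflection coefficient: ((Z2 - Z1) / (Z2 + Z1)) ** 2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

def itc(z1, z2):
    """Intensity transmission coefficient: ITC = 1 - IRC."""
    return 1.0 - irc(z1, z2)

# Z1 = 20 rayls, Z2 = 40 rayls, incident intensity 10 mW/cm^2:
reflected = irc(20, 40) * 10.0    # ~1.1 mW/cm^2
transmitted = itc(20, 40) * 10.0  # ~8.9 mW/cm^2
```

Note that the reflected and transmitted intensities always sum to the incident intensity, since ITC = 1 − IRC.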

Oblique incidence is an angle of incidence that is not 90° perpendicular to a boundary. The angle of transmission will be equal to the angle of incidence as long as the propagation speeds of the media on each side of the boundary are equal. If the propagation speeds are different, however, then the angle of incidence will not be equal to the angle of transmission. The change in direction, the difference in the angle of incidence and the angle of transmission (Fig. 1–5A, B, and C), is called refraction (Snell’s law).


FIGURE 1–5A. In medium 1, the propagation speed is 4 m/s; in medium 2, the propagation speed is 2 m/s; therefore, the beam bends toward the normal plane. (A) Incidence striking a boundary; (B) refraction of the sound beam; (C) reflected beam.


FIGURE 1–5B. In medium 1, the propagation speed is 4 m/s; in medium 2, the propagation speed is 4 m/s; therefore, the angle incidence will be equal to the angle of transmission with no refraction. (A) Incidence striking a boundary; (B) transmitted beam; (C) reflected beam.


FIGURE 1–5C. In medium 1, the propagation speed is 4 m/s; in medium 2, the propagation speed is 6 m/s; therefore, the beam bends away from the normal angle. (A) Incidence striking a boundary; (B) refraction of the sound beam; (C) reflected beam.

The angle of incidence is equal to the angle of reflection, but the angle of transmission is variable and can be calculated as follows:


Note: The above equation is only an approximation; at larger angles, it is subject to larger error. To obtain true accuracy, use the full form of Snell’s law:
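The full form of Snell's law, sin(θt) = sin(θi) × (c2/c1), can be checked against the behavior shown in Figs. 1–5A through 1–5C with a small Python sketch (function name illustrative):

```python
import math

def transmission_angle_deg(incidence_deg, c1, c2):
    """Full Snell's law: sin(theta_t) = sin(theta_i) x (c2 / c1)."""
    s = math.sin(math.radians(incidence_deg)) * (c2 / c1)
    return math.degrees(math.asin(s))

# Equal speeds: no refraction (angle unchanged).
# Slower medium 2: the beam bends toward the normal.
# Faster medium 2: the beam bends away from the normal.
equal = transmission_angle_deg(30, 4, 4)   # 30 degrees
toward = transmission_angle_deg(30, 4, 2)  # < 30 degrees
away = transmission_angle_deg(30, 4, 6)    # > 30 degrees
```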


The range equation is the relationship between the round-trip travel time of the pulse and the distance to a reflector. This equation determines the position a reflector will have in depth on the display monitor.


If we assume the propagation speed to be constant at 1,540 m/s or 1.54 mm/μs, then one-half the propagation speed is equal to 0.77 mm/μs, and the formula can be simplified:


If we assume that for every 13 μs of round-trip time the pulse travels 1 cm, then d = t/13. The value of t must be given in microseconds if the propagation speed is in millimeters per microsecond.
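The range equation and the 13-μs rule of thumb can be sketched in Python (function name illustrative; 1.54 mm/μs assumed):

```python
def reflector_depth_mm(round_trip_us, speed_mm_per_us=1.54):
    """Range equation: depth = (propagation speed x round-trip time) / 2.

    The factor of 2 accounts for the pulse traveling to the
    reflector and back.
    """
    return speed_mm_per_us * round_trip_us / 2.0

# The 13-us rule: each 13 us of round-trip time ~ 1 cm of depth.
depth = reflector_depth_mm(13.0)  # ~10 mm, i.e., ~1 cm
```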

Contrast Agents and Tissue Harmonic Imaging

Contrast agents for ultrasound have included colloidal suspensions, emulsions, liquids, solid particles, and gas-filled microbubbles. At present, agents based on gas-filled bubbles dominate those approved by the Food and Drug Administration and in common clinical use. The first gas-filled microbubbles used air; more recent agents are microspheres containing trapped perfluorocarbon gas. The choice of a gas-filled structure to enhance reflectivity is obvious if we refer back to the intensity reflection coefficient (IRC): it is the difference between the acoustic impedance (Z) of the contrast agent and that of its surroundings that matters. By making the Z of the agent very small relative to the surrounding tissue (achieved by using a gas), the reflectivity of the agent becomes much greater than that of the surrounding tissue.

Harmonic imaging is a result of the nonlinear propagation of the sound beam as it passes through tissue. Harmonic images were first recognized when imaging gas-filled contrast agents in which a portion of the energy being transmitted at a fundamental frequency (f) was being reflected (backscattered) at higher harmonic frequencies (2f, 3f, etc.).

Later, it was recognized that harmonic frequencies were also being produced in tissues. The advantage of the harmonic beam is that it has less dispersion (narrower) than the fundamental frequency and also has smaller side lobes. The narrower beam results in increased lateral resolution, and the reduced side lobes reduce image clutter. Harmonic images are created by eliminating the fundamental frequency and selectively recording the higher-frequency echo components.


A transducer is a device that converts one form of energy to another. In diagnostic sonography, the transducer converts electrical energy to pressure energy (acoustic energy) and vice versa.

1. Active element

A. Piezoelectric principle is the conversion of electrical energy to pressure energy and vice versa. Ultrasound (pressure energy) is generated by electric stimulation of the piezoelectric element, causing expansions and contractions of the element, which, in turn, generate the ultrasound pulse. The returning echo produces a similar distortion of the element, which is then converted back into an electric signal (Fig. 1–6).


FIGURE 1–6. Applied electrical voltage. (A) Physical compression on the crystal will generate a potential difference across the faces of the crystal. The effect is called piezoelectric effect. (B) Voltage applied on the crystal will generate mechanical energy (ultrasound). The effect is called reverse piezoelectric effect.

B. Material. The active element can be natural (e.g., quartz, tourmaline, Rochelle salt) or synthetic (e.g., lead zirconate titanate [PZT], barium titanate, lithium sulfate). Synthetic elements are most commonly used in today's diagnostic equipment because of their availability and low cost. To turn one of these manufactured substances into a piezoelectric element, it is heated to its Curie point, the temperature at which a ferroelectric material (such as many piezoelectric materials) loses its polarization properties. The dipoles within the material are then polarized with an electric current. When the element cools, the dipoles are fixed (Fig. 1–7). The material is cut and shaped, then housed in the transducer.


FIGURE 1–7. Synthetic material with dipoles. (A) Nonpolarized; (B) Polarized; (C) Dipole.

C. Properties of elements (crystals). The frequency of the acoustic wave produced by a standard pulsed-wave imaging system is determined by the thickness of the piezoelectric element and the propagation speed of the crystal. The propagation speed of the crystal is approximately three to five times greater than the speed of ultrasound in soft tissue, namely, 4 to 8 mm/μs. The thinner the crystal, the greater the frequency.


The diameter of the crystal does not affect the pulse frequency; it does, however, determine the lateral resolution. Neither the impedance of the matching layer nor the thickness of the backing material is a primary determinant of ultrasound frequency.

In contrast to pulsed wave, the frequency of continuous-wave ultrasound is equal to the frequency of the electric voltage that drives the piezoelectric crystal. In simpler terms, when the pulser of a continuous-wave system produces an electric signal with a frequency of 6 MHz, the frequency of the emitted acoustic signal will also be 6 MHz.
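The thickness-frequency relationship for pulsed operation can be illustrated with a Python sketch. It assumes the standard half-wavelength resonance condition (f = crystal propagation speed ÷ (2 × thickness)); the function name and numbers are hypothetical:

```python
def operating_frequency_mhz(crystal_speed_mm_per_us, thickness_mm):
    """Resonance occurs when the element thickness is half a wavelength:
    f = c_crystal / (2 x thickness). mm/us over mm gives 1/us, i.e., MHz."""
    return crystal_speed_mm_per_us / (2.0 * thickness_mm)

# A thinner crystal yields a higher frequency (crystal speed 4 mm/us):
f_thick = operating_frequency_mhz(4.0, 0.4)  # ~5 MHz
f_thin = operating_frequency_mhz(4.0, 0.2)   # ~10 MHz
```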

2. Damping material (backing material) is an epoxy resin attached to the back of the element that absorbs the vibrations and reduces the number of cycles in a pulse (Fig. 1–8). By reducing the number of cycles, the following are accomplished:

A. Pulse duration (PD) and spatial pulse length (SPL) are reduced. PD = number of cycles (n) × time (t), where t = period of ultrasound in pulse. SPL = number of cycles (n) × wavelength (λ).

By reducing these two factors, the axial resolution will be improved.


B. Bandwidth (the width of the frequency spectrum) is increased by increasing the damping. When the bandwidth increases, the quality factor (Q factor) of the transducer decreases.

C. The duty factor is decreased.


FIGURE 1–8. Components of a transducer.

3. Matching layer is a substance placed in front of the transducer element’s face material to decrease the reflection at the transducer-tissue interface. The matching layer is necessary because the impedance difference between the transducer crystal and the soft tissue is so large that most of the energy will be reflected back at the skin surface. The matching layer provides an intermediate impedance, allowing transmission of the ultrasound beam into the body.

The thickness of the matching layer is usually equal to one-quarter of the wavelength.1 Multiple layers are often used to avoid reflections caused by the variety of frequencies and wavelengths present in short pulses. In addition to the matching layer of the transducer, a coupling gel is used to form a transducer surface-skin contact that will eliminate air and prevent reflection at this boundary.

Bandwidth and Quality Factor

The transducer produces more than one frequency. For example, the operating frequency may be 3.5 MHz, but a spectrum of other frequencies, known as the bandwidth, is also generated. The shorter the pulse, the more of these other frequencies are generated. Therefore, the bandwidth and the pulse length are inversely proportional; as the pulse length decreases, the bandwidth increases (Fig. 1–9). Continuous-wave ultrasound has a very narrow bandwidth.


FIGURE 1–9. Bandwidth (A) with damping, (B) with no damping. Note: Damping increases bandwidth.

If the bandwidth increases, the Q factor decreases. If, however, the operating frequency increases, the Q factor increases. A low Q factor indicates:

1. Broad bandwidth

2. Low operating frequency

3. Shortened pulse length

4. Uniform near field (Many frequencies in a pulse result in a more uniform intensity distribution.)

Types of Transducers

There are several ways to classify transducers; one is the way the sound beam is swept (or steered). This process can be either mechanical or electrical.

A mechanical transducer (Fig. 1–10) has a scan head that contains a single disk-shaped active element. One type of mechanical transducer is the oscillatory or rotary type, which has an element physically attached to a mechanical device to move it through a pathway (see Fig. 1–10A). A second type has an oscillatory mirror that mechanically moves while the element remains stationary (see Fig. 1–10B).


FIGURE 1–10. (A) A mechanical sector real-time transducer that is mechanically steered and mechanically focused. (B) A mechanical sector real-time transducer that moves a mirror instead of the transducer. (C) Image presentation from a mechanical sector.

Focusing the beam produced by a mechanical transducer is achieved by curvature of the crystal, a curved lens on the crystal, or the reflecting mirror. Focusing occurs at a specific depth on both the horizontal and vertical planes. To change the focal depth, the operator must select another transducer with the desired focal zone. The mechanical transducer produces a sector-shaped image (see Fig. 1–10C). Mechanical transducers are mechanically steered (MS) and mechanically focused (MF).

The annular array is a mechanical transducer. The transducer element consists of 5 to 11 rings of transducer elements mounted on a mechanically moved (steered) arm (Fig. 1–11). The advantage of the annular array over the single element transducer is the presence of many elements, allowing for electronic focusing. By focusing transmission and reception of the ultrasound energy, greater depth resolution is achieved. The image produced by an annular array is also a sector. Annular arrays are mechanically steered (MS) and electronically focused (EF).


FIGURE 1–11. Annular array real-time transducer probe that contains four transducer rings (multielement) on a mechanically steered shaft.

An electronic transducer is an assembly of multiple elements called an array. There are many types of arrays, each with a particular set of characteristics:

Linear sequential array (linear array). Shown in Fig. 1–12. This type of transducer produces a rectangular image (Fig. 1–13B).


FIGURE 1–12. Linear sequential array. (A) A real-time linear-array transducer. (B) Design of a linear segmental phased-array transducer. These transducers consist of a long strip of piezoelectric crystals divided into elements that are arranged next to each other; (C) operation of a linear segmental phased-array transducer. The crystal elements are pulsed in groups of four in this example, with each group sending and receiving in succession.


FIGURE 1–13. Linear sequential array. (A) Focusing in the plane of the long axis of the transducer is electronic; focusing in the plane perpendicular to the long axis is mechanical. (B) Image presentation from a linear phased array transducer. Note the rectangular image.

Curved array (radial array, convex array). The arrays of transducer elements are arranged with specific curvature (Fig. 1–14). Focusing the beam is achieved by internal and electronic focusing; there is no beam steering. The curved design of the transducer head creates a sector or trapezoid image.


FIGURE 1–14. Curved linear array. The array of transducer elements are arranged with a specific curvature. There is no beam steering; focusing of the beam is achieved internally by mechanical and electronic means.

Sector phased array (phased array). The voltage pulses are applied to the entire group of elements with varying time delays. The beam can be electronically focused (EF) and steered (ES). The image format is sector (Fig. 1–15).


FIGURE 1–15. Sector phased array real-time transducer. This diagram illustrates how electronic pulses are used to steer the ultrasound beam.

Focusing Techniques

Transducers can be either mechanically or electronically focused. Mechanical focusing is accomplished by using a curved crystal or an acoustic lens for each element. This type of focusing is usually applied to mechanical transducers and will improve lateral resolution by limiting the beam width.

There are two types of electronic focusing: transmit focusing and receive focusing.

Transmit focusing. Electronic focusing during transmission is accomplished by firing a group of elements with small time delays (nanoseconds) between the various elements in the group. The wavefronts generated by the elements in the group arrive at a specific point in space together, resulting in a focused beam. Transmit focusing improves lateral resolution and allows several possible focal zones. Multizone transmit focusing, however, results in a slower frame rate: if, for example, there are three focal zones, the frame rate will be reduced compared with a single focal zone. If the rate is very slow, the image will flicker, causing a "perceived" distortion of the image.

Receive focusing. Electronic focusing of the received echoes: by electronically delaying the return of the signals to the processing system within the diagnostic unit, the optimum range of the focal zone can be extended. This process enhances image clarity.

Sound Beam

A sound beam is the acoustic energy emitted by the transducer. The beam can be pulsed or continuous wave. Ultrasound waves follow Huygens's principle, which states that the resultant beam is a combination of all sound arising from different sources (wavelets) on the transducer crystal face. Focusing is the superimposition (algebraic summation) of all sound waves in the beam.13 As the various wavelets within a beam collide, interference (constructive and destructive) results in the formation of a sound beam (Fig. 1–16).


FIGURE 1–16. The ultrasound beam from an unfocused transducer.

Constructive interference occurs when the waves are in phase, producing an increase in amplitude. Destructive interference occurs when the waves are not in phase, producing a decrease in amplitude or even zero amplitude. Zero amplitude occurs if the out-of-phase waves completely cancel each other.

The beam is composed of a near zone, focal point, and a far zone.

1. Near zone (near field or Fresnel zone) is the portion of the sound beam in which the beam diameter narrows as the distance from the transducer increases until it reaches its narrowest diameter. This distance is the near zone length (NZL). At the far end of the NZL, the diameter of the beam is equal to one-half the diameter of the transducer. The NZL is also related to the frequency: increasing the frequency increases the NZL, and vice versa. Components of the beam of a focused transducer are shown in Fig. 1–17.


FIGURE 1–17. The components of the ultrasound beam in a focused transducer. Note that the diameter of the beam is equal to the diameter of the transducer face.

2. Focal point is the point at which the beam reaches its narrowest diameter. As the diameter narrows, the beam width resolution improves, becoming the best at the focal point. The focal zone is the distance between equal beam diameters that are some multiple of the diameter of the focal point (often two times the diameter of the focal point). The focal zone extends from just before the focal point toward the far zone. The focal length is the distance from the transducer to the focal point.

3. Far field (far zone or Fraunhofer zone) is the portion of the sound beam (after the NZL) in which the diameter of the beam increases as the distance from the transducer increases. At a distance of two times the NZL, the beam diameter once again equals the diameter of the transducer. The divergence of the beam in the far field is inversely proportional to the crystal diameter and frequency. The larger the transducer element and the higher the frequency, the smaller the angle of divergence in the far field (Fig. 1–18A, B).


FIGURE 1–18. (A) Beam divergence with increasing crystal diameter. (B) Transducer crystal size shown in relationship to frequency. Note: The higher the frequency of the transducer, the smaller the beam diameter, and the longer the near zone. The angle of beam divergence in the far field is smaller with a higher frequency transducer.


There are two types of resolution: lateral and axial (Fig. 1–19A, B).


FIGURE 1–19. (A) Lateral Resolution: the ability of the ultrasound beam to separate two structures lying at a right angle (perpendicular) to the beam direction. Lateral resolution is also referred to as azimuthal, transverse, angular, or horizontal. (B) Axial Resolution: the ability of the ultrasound beam to separate two structures lying along the path of (parallel to) the beam direction. Axial resolution is also referred to as linear, longitudinal, depth, or range.

Lateral resolution (azimuthal, transverse, angular, or horizontal resolution) is equal to the beam diameter (see Fig. 1–19A). The distance between two interfaces has to be greater than the beam diameter (width) for the two interfaces to be resolved as separate entities. Lateral resolution applies to interfaces perpendicular to the direction of the sound beam. With an unfocused transducer, lateral resolution is best in the near field; with a focused transducer, the lateral resolution is best at the focal point. A transducer with a smaller diameter will improve lateral resolution, although the beam diverges in the far zone. Transducers are sometimes designed with an acoustic lens in order to narrow the sound beam; this results in better lateral resolution.

Axial resolution (linear, range, longitudinal, or depth resolution) is related to the spatial pulse length (SPL). Two interfaces at different depths will be distinguished from each other only if the distance between them is equal to or greater than one-half the SPL (see Fig. 1–19B).


To obtain maximum image quality, axial resolution (RA) should be as small as possible. Axial resolution improves when the wavelength or the number of cycles per pulse decreases (both of these factors determine the SPL). Frequency also affects the axial resolution: as the frequency increases, the wavelength decreases, and thus the axial resolution improves. However, as the frequency increases, the depth of penetration decreases, creating a need to compromise resolution for adequate penetration into the tissues. This compromise is why the frequency range for diagnostic procedures is usually between 2 and 10 MHz.
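The relations above (RA = SPL/2, with SPL = cycles per pulse × wavelength and wavelength = c/f) can be sketched as follows; the function name and the two-cycle pulse are illustrative assumptions:

```python
def axial_resolution_mm(frequency_mhz, cycles_per_pulse=2, c_mm_us=1.54):
    """R_A = SPL / 2, where SPL = cycles per pulse * wavelength (mm).
    Soft-tissue speed of sound is 1.54 mm per microsecond."""
    wavelength_mm = c_mm_us / frequency_mhz
    spl_mm = cycles_per_pulse * wavelength_mm
    return spl_mm / 2.0

# At 5 MHz with a 2-cycle pulse, R_A is 0.308 mm.
# Raising the frequency shortens the wavelength and improves (shrinks) R_A:
assert axial_resolution_mm(10.0) < axial_resolution_mm(5.0)
```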

Temporal Resolution

Temporal resolution is related to time and motion and is determined by the frame rate. Video imaging with a frame rate greater than 30 images per second gives the visual perception of full motion (real time); at less than 30 images per second, the image appears jerky. Temporal resolution is improved by

• High frame rate

• Narrow sector

• Fewer pulses

• Low line density


The components of a pulsed-echo diagnostic ultrasound system are the pulser, receiver, scan converter, and display (Fig. 1–20).


FIGURE 1–20. Components of a pulsed-echo ultrasound system.


The pulser produces an electric voltage that activates the piezoelectric element, causing it to contract and expand to produce the longitudinal compression wave (sound beam). A second function of the pulser is to signal the receiver and scan converter that the transducer has been activated. Each electric pulse generates an ultrasonic pulse. The number of ultrasonic pulses per second is defined as the pulse repetition frequency (PRF). With array transducers, the pulser is responsible for the delay and variations in pulse amplitude needed for electronic control of beam scanning, steering, and shaping. In improving the dynamic range of multielement transducers, the pulser suppresses grating lobes, a process termed dynamic apodization.1

Increasing the power output control of a system will raise the intensity by signaling the pulser to put out more voltage. To reduce the potential for harmful bioeffects, it is desirable to keep the power low. Therefore, to increase the number of echoes displayed, it is recommended that the operator increase the gain control, not the power.


The receiver processes electric signals returned from the transducer (i.e., ultrasonic reflections converted into electric signals by the transducer). Processing involves amplification, compensation, compression, demodulation, and rejection (Fig. 1–21).


FIGURE 1–21. The five functions of the receiver. (A) Amplification of both pulses. (B) Compensation for the weaker pulses. (C) The difference between the pulse amplitudes is reduced. (D) The pulses are converted to another form. (E) The weaker pulse is rejected. (Kremkau FW. Diagnostic Ultrasound: Principles, Instruments, and Exercises. 3rd ed. Philadelphia: WB Saunders; 1989.)

Amplification is the process that increases the small electric voltages received from the transducer to a level suitable for further processing. This process is sometimes referred to as overall gain enhancement. Gain is the ratio of output electric power to input electric power and is measured in decibels. Dynamic range is the range of values between the minimum and maximum echo amplitudes; it is the ratio of the largest power to the smallest power in the working range of the diagnostic unit. Dynamic range is also expressed in decibels.
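Gain and dynamic range, both expressed in decibels, follow the standard power-ratio formula dB = 10 log10(P_out/P_in). A small sketch (function name and example values are assumptions):

```python
import math

def power_ratio_db(p_out, p_in):
    """Decibel value of a power ratio: dB = 10 * log10(P_out / P_in)."""
    return 10.0 * math.log10(p_out / p_in)

# An amplifier that boosts power 100-fold has a gain of 20 dB:
gain = power_ratio_db(100.0, 1.0)  # 20.0

# A unit whose largest working power is one million times its smallest
# has a dynamic range of 60 dB:
dynamic_range = power_ratio_db(1e6, 1.0)  # 60.0
```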

Compensation is also referred to as gain compensation, swept gain, or time gain compensation (TGC). It is the mechanism that compensates for the loss of echo strength caused by the depth of the reflector: it allows reflectors with equal reflection coefficients to appear on the screen with equal brightness, correcting, to a certain extent, for the attenuation caused by greater depth. For average soft tissue, the attenuation coefficient is equal to one-half the frequency (expressed in decibels per centimeter).
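The soft-tissue rule of thumb above (attenuation coefficient ≈ one-half the frequency, in dB/cm per MHz) gives a quick estimate of how much loss the TGC must compensate. An illustrative sketch; the function name and example values are assumptions:

```python
def soft_tissue_attenuation_db(frequency_mhz, depth_cm):
    """Round-trip attenuation using the rule of thumb 0.5 dB/cm/MHz.
    The echo travels down to the reflector and back, so path = 2 * depth."""
    coefficient_db_per_cm = 0.5 * frequency_mhz  # one-way loss per cm
    return coefficient_db_per_cm * 2.0 * depth_cm

# A 5-MHz beam reflecting from 10 cm deep loses about 50 dB round trip,
# which the TGC curve must make up for:
loss = soft_tissue_attenuation_db(5.0, 10.0)  # 50.0
```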

Compression is the internal process that reduces the difference between the largest and smallest echo amplitudes. Compression decreases the dynamic range.

Demodulation is the process of converting voltages delivered to the receiver to a more useful form. Demodulation is done by rectification (removal of negative components, replacement with positive values) and smoothing (averaging of the new wave form).

Rejection is also termed suppression or threshold. Rejection is the elimination of smaller amplitude voltage pulses produced by weaker reflections. This mechanism helps to reduce noise by removing low-level signals that do not contribute to meaningful information in the image.

Scan Converter (Memory)

The scan converter, or memory, transforms the incoming echo data into a suitable format for the display, storing all of the necessary information for the two-dimensional image. As the tissue is scanned, several images (frames) are acquired per second. Memory allows for a single scan consisting of one or more frames to be displayed. Most instruments have enough memory to store the last several frames scanned (cine loop). There are two types of scan converters (memories): analog and digital.

Analog scan converters, found in older machines, consist of semiconductor elements arranged in a square matrix. As the ultrasound pulse traverses the tissues, an electron beam scans the square matrix, swept in the same direction as the beam in the body. The current within the electron beam corresponds to the intensity of the returning echoes: if the echoes are weak, the current in the beam is decreased, and vice versa. The strengths of these electrical charges are what is stored in the individual insulators of the matrix, and their values correspond to brightness levels. To read the stored image, the electron beam is scanned across the stored matrix; the stored charge in each element affects the current in the beam, and together these charges are used to vary the brightness of the display.

Digital scan converters store image brightness values as numbers instead of as electrical charges. A digital scan converter consists of three components: an analog-to-digital (A–D) converter, which changes the voltages of received signals into numeric values; a digital memory, which stores these image echo values; and a circuit, which translates these stored numbers back into analog (voltage) values when needed (a digital-to-analog, or D–A, converter).

The digital memory component is the same as computer memory. Modern computers use circuits that have only two states: off and on. Within the computer, these states may be represented, for example, by the absence or presence of electrical current, the open or closed condition of switches, or the direction of magnetization on a magnetic disk or tape. Each of these examples has two states, which the computer treats as zero (0) or one (1). At first it may seem that this system, called the binary number system, or binary,4 is not very useful for computing, but it can represent any number that the more common decimal system can. Instead of using increasing powers of 10, as the decimal system does, the binary system uses increasing powers of 2. In the decimal system, the rightmost digit represents units, the next to the left tens, the next hundreds, and so on. In the binary system, the rightmost binary digit, or bit, represents ones, the next to the left twos, the next fours, the next eights, and so on1 (Fig. 1–22).
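The place-value idea can be checked directly in code; Python's binary literals make the powers-of-two expansion explicit (a minimal illustration, not from the text):

```python
# 1011 in binary = 1*8 + 0*4 + 1*2 + 1*1 = 11 in decimal
assert 0b1011 == 1 * 8 + 0 * 4 + 1 * 2 + 1 * 1 == 11

# int() converts a binary string the same way:
assert int("1011", 2) == 11

# and bin() goes back the other direction:
assert bin(11) == "0b1011"
```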


FIGURE 1–22. A 10 × 10 pixel, 4-bit-deep digital memory. (Kremkau FW. Diagnostic Ultrasound: Principles, Instruments, and Exercises. 3rd ed. Philadelphia: WB Saunders; 1989.)


How is this system used to represent ultrasound images? Imagine that the image is divided into many small squares similar to a checkerboard. Each square is assigned a number that represents the ultrasound echo amplitude. For example, for white-on-black displays, the highest echo value is white, and the lowest is black (the reverse is true for black-on-white displays).2 A square located in a part of the image that has the highest echo values (e.g., echoes in a gallstone) would be assigned a high number value, whereas a square in the surrounding bile would receive a small number value. In color-flow imaging, each square would be assigned a number that represents the Doppler shift value. If the squares are made small enough, then the eye will not be able to see them as separate. Typically, ultrasound images are divided into 512 by 512 of these small squares.1

The squares are called picture elements, or pixels. The number 512 is a power of two (two to the ninth power) and also happens to fit well in a standard television frame. This number yields an image containing 262,144 pixels.

If each of these pixels could store only one binary value, then the result would be very much like an old bistable image, having only black and white values. Each pixel could be either black or white, with no gray values. To store gray-scale images, each pixel must have more than one binary digit (or bit). For example, with three bits per pixel, each pixel could represent eight different shades of gray. To calculate how many different shades of gray can be represented by a pixel containing a set number (n) of data bits, the following formula can be applied:

number of shades = 2^n

The largest value that can be represented by a given number of bits is calculated by the formula:

largest value = 2^n − 1

If, for example, we are considering three bits, we could represent 2^3 = 8 different shades of gray, with the largest gray value equal to 2^3 − 1 = 7.

Most ultrasound machines generate images with four to eight bits of gray scale (16 to 256 shades of gray). Color-flow Doppler machines need more bits to represent the various colors. Because the machine makes no distinction of color (to the machine, the image is merely an array of numbers), the color values are stored as a number value for each of three primary colors. Combinations of these three primaries (usually red, green, and blue) can yield almost any color.

Fig. 1–22 is an example of a 10 × 10 pixel matrix with 4 bits/pixel. To calculate how many bits an ultrasound image contains, the following formula can be applied:

bits per image = pixel columns × pixel rows × bits per pixel

In computer terminology, eight binary digits, or eight bits, equal 1 byte. To determine the number of bytes of memory an image requires, divide the number of bits per image by eight:

bytes per image = bits per image ÷ 8

If an image has 10 pixel columns and rows with 2 bits/pixel, then

10 × 10 × 2 = 200 bits, and 200 ÷ 8 = 25 bytes

If a 512 × 512 pixel image has 8 bits/pixel, then

512 × 512 × 8 = 2,097,152 bits, and 2,097,152 ÷ 8 = 262,144 bytes
Thus, a single image can contain over 2 million bits, or more than a quarter-million bytes. To reduce the number of digits used to describe these values, multipliers such as kilo-, mega-, and giga- are applied (Table 1–2). These multipliers are not identical to their counterparts in the metric system, however. For example, 1 kilobit is not 1,000 bits but 1,024 bits, or 2 to the 10th power. Thus 262,144 bytes equals 256 kilobytes.
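The bit and byte arithmetic above can be collected into one short sketch (the function name is an assumption, not from the text):

```python
def image_memory(columns, rows, bits_per_pixel):
    """Return (bits, bytes) required for an image of the given dimensions."""
    bits = columns * rows * bits_per_pixel
    return bits, bits // 8  # 8 bits = 1 byte

bits, n_bytes = image_memory(512, 512, 8)
assert bits == 2_097_152       # just over 2 million bits
assert n_bytes == 262_144      # just over a quarter-million bytes
assert n_bytes // 1024 == 256  # 256 kilobytes (1 kilobyte = 1,024 bytes)
assert 2 ** 8 == 256           # 8 bits/pixel -> 256 shades of gray
```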

TABLE 1–2 • Unit of Measurement in Computer Terminology


The image can be stored in the digital memory as numbers, but it cannot be viewed unless the numbers are converted back to an image. Otherwise, a large list of numbers is all that would be displayed. The third part of the digital scan converter does the following conversion. It takes the number values stored in the memory and changes them back into an analog voltage. The voltage varies the brightness of a spot on the cathode ray tube, generating an image that the human eye can interpret. The hardware that performs this function is the digital-to-analog (D–A) converter.


Picture archiving and communications system, more commonly known as PACS, enables images such as x-rays, computed tomography (CT) scans, and sonograms to be stored electronically and viewed on liquid crystal display (LCD) screens. The images can be manipulated to improve the ability to make diagnoses, or transferred securely over the web to be shared with other imaging experts. Fig. 1–23 shows a PACS viewing workstation.


FIGURE 1–23. PACS web-based workstation for reading, archiving, and distribution of images via LCD flat screens.

Digital Imaging and Communications in Medicine, known as DICOM, is a standard protocol that makes digital imaging information compatible across equipment from different manufacturers.


Conventional ultrasound imaging is limited by the trade-off that high-frequency ultrasound has difficulty penetrating deep into the body. A newer digital technology called coded excitation provides good penetration and high resolution at the same time. When applied, this technique improves axial resolution, contrast resolution, signal-to-noise ratio, and penetration depth.

Pre-processing (write zoom) occurs in real time, during image acquisition (scanning), and includes

• Before storage in the scan converter

• Time gain compensation (TGC)

• Write magnification

• Cannot be performed on a frozen image

Post-processing (read zoom) is the adjustment or changes to ultrasound images after storage in the scan converter, which include

• Any change after the freeze frame

• Measurements

• Read magnification

• Contrast and brightness

Both pre-processing and post-processing are the manipulation of image data.


A frame is a single image composed of multiple scan lines. To produce a dynamic, or moving, image, numerous frames are required. To freeze a frame (stop the image to record or view it), the memory of the system is activated. The frame rate (FR) is the number of frames displayed or scanned per second. In most diagnostic medical sonography or echocardiography systems, the frame rate is 10–60 frames/s. If the display frame rate falls below 20 frames/s, the real-time image appears to flicker because the eye can no longer integrate the successive images.

The pulse repetition frequency (PRF) is the number of pulses produced by the transducer in a given time period. It is related to the number of lines per frame (LPF) and the frame rate (FR) by the formula:

PRF = LPF × FR

The PRF, LPF, and FR are limited by the propagation speed. The maximum effective velocity is 77,000 cm/s, or one-half the propagation speed of ultrasound in soft tissues (1,540 m/s, or 154,000 cm/s). The one-half value results from the pulse having to make a round trip to be received:

LPF × FR × depth of view (cm) = 77,000 cm/s

Note: LPF × FR = PRF. Hence, the equation can also be stated

PRF × depth of view (cm) = 77,000 cm/s
Improving image quality by increasing the lines per frame will reduce the frame rate if the depth remains constant. Increasing the depth of penetration while maintaining a constant number of lines per frame also reduces the frame rate. The frame rate can be increased if the depth of penetration is decreased, assuming the LPF is constant.
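The trade-offs in the preceding paragraph follow directly from the 77,000 cm/s constraint. A sketch (the function name and sample values are assumptions):

```python
def max_frame_rate(lines_per_frame, depth_cm, half_speed_cm_s=77000.0):
    """Largest frame rate allowed by LPF * FR * depth <= 77,000 cm/s
    in soft tissue."""
    return half_speed_cm_s / (lines_per_frame * depth_cm)

shallow = max_frame_rate(100, 10.0)  # 77 frames/s
deep = max_frame_rate(100, 20.0)     # 38.5 frames/s

# Doubling the depth (or the line count) halves the achievable frame rate:
assert deep == shallow / 2
```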

The display format refers to how the image appears on the screen: either a rectangular display or a sector display. A rectangular display image appears in the form of a rectangle. The width of the display is given in centimeters, and the line density is expressed as the number of lines per centimeter. To determine the line density for a rectangular display, the lines per frame are divided by the display width in centimeters:

line density (lines/cm) = LPF ÷ display width (cm)

A sector display yields a pie-shaped image. The scan lines form an angle, so the line density is expressed as lines per degree.


The scan converter, electronic circuitry in the machine’s display, transforms a rectangular or arc-shaped image into a rectangular video frame, and adds the text and graphics (such as depth markers).

Modes of Display

The A-mode, or amplitude mode, is a one-dimensional graphic display with vertical deflections of the baseline. The height of each deflection represents the amplitude, or strength, of the echo (y-axis); the position of the deflection along the horizontal baseline represents the echo's depth, derived from its return time (x-axis).

The B-mode, or brightness mode, displays the echoes as variations in the brightness of a line of spots on the image. The position of the spot on the baseline is related to the depth of the reflecting structure; the brightness is proportional to the strength of the echo. Each row of spots represents information obtained from a single position of the transducer or scanning beam. When successive rows of these spots are integrated into an image, a B-scan is produced. In B-mode, the x-axis represents depth and the z-axis represents brightness. There is no y-axis in B-mode.

The M-mode, or motion mode, is a two-dimensional recording of the reflector’s change in position, or motion, against time. The vertical axis represents depth and the horizontal represents time. Most M-modes display the brightness of the signal in proportion to the strength of the echo. This mode is most commonly used for the study of dynamic structures such as the heart.


Unlike a Grecian urn, which is an artifact from a past culture, the term artifact in diagnostic medical sonography has a very different implication. It refers to something seen on an image that does not, in reality, exist in the anatomy studied. An artifact can be beneficial to the interpretation of the image, or it can detract from this process. For example, certain artifacts are known to occur in cystic structures and are notably absent from a solid mass; this information can, therefore, be used in a beneficial way when determining the nature of a mass. Conversely, there are artifacts that can appear similar to the placenta, making delineation of the limits of the placenta more difficult. Artifacts can be subdivided by the physical principles that produce them: resolution artifacts, propagation artifacts, attenuation artifacts, and miscellaneous artifacts.

Resolution Artifacts

Axial resolution is the failure to resolve two separate reflectors parallel to the beam.

Lateral resolution is the failure to resolve two separate reflectors perpendicular to the beam.

Speckle is scatter in tissues, causing interference effects referred to as noise.

Section thickness is the finite width of the beam producing extraneous echoes, or debris, in normal anechoic, or echofree structures.

Propagation Artifacts

Reverberation is repetitive reflections between two highly reflective layers. The bouncing back and forth increases travel time, causing the signals to be displayed at different depths. The reverberations are seen on the image as equally spaced bands of diminishing amplitude.

Refraction is the change in direction of the sound beam as it passes from one medium to another. This phenomenon will cause a reflection to appear improperly positioned on the image.

Multipath. Because the returning signal does not necessarily follow the same path as the incident beam, the time required for some parts of the signal to return to the transducer will vary, causing reflections to appear at incorrect depths.

Mirror image is generated when objects located on one side of a strong reflector are also displayed on the other side of it. Such artifacts are commonly seen around the diaphragm; the result is a duplicated copy appearing in an incorrect location on the image.

Attenuation Artifacts

Shadowing is the reduction in echo strength of signals arising from behind a strong reflector or attenuating structure. Structures such as gallstones, renal calculi, and bone will produce shadowing.

Enhancement is an increase in the amplitude of echoes located behind a weakly attenuating structure. The increase pertains to the relative strength of the signals as compared with neighboring signals passing through more highly attenuating media. For example, stronger reflections may be seen behind a fluid-filled structure than behind a solid structure (e.g., the urine-filled bladder versus a solid tumor of the uterus).

Refraction or edge shadowing. The beam may bend at a curved surface and lose intensity, producing a shadow. If the beam is traveling from a higher velocity medium (less dense) to a low-velocity medium, a narrower shadow will be generated. Conversely, a sound beam traveling from a low-velocity medium to a higher one will project a wider shadow.

Miscellaneous Artifacts

Comet tail is produced by a strong reflector and is similar in appearance to reverberation. The comet tail, however, is composed of thin lines of closely spaced discrete echoes. Comet tail artifacts frequently occur in the presence of gas bubbles, surgical clips, biopsy needles, or bullet fragments.

Ring down is thought to be caused by a resonance phenomenon and is associated with gas bubbles. It also appears very similar to reverberation, producing numerous parallel echoes. Sometimes discrete echoes cannot be differentiated, giving the appearance of a continuous emission of sound.

Propagation speed error. Most diagnostic ultrasound equipment operates on the assumption that the speed of sound in the body is 1,540 m/s. This is not always true because different tissues have different propagation speeds. If the beam passes from a medium of one speed into a medium of a greater speed, then the calculated distance will be less than the actual distance, causing the echo to be erroneously displayed too close to the transducer. If the propagation speed decreases, then the echo will appear farther from the transducer than it actually is.
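The displacement caused by propagation speed error can be sketched as follows. The machine converts round-trip time to depth using the assumed 1,540 m/s; the function name and the sample tissue speeds are illustrative assumptions:

```python
def displayed_depth_cm(actual_depth_cm, actual_speed_m_s,
                       assumed_speed_m_s=1540.0):
    """Depth the machine displays when the tissue speed differs from
    the assumed 1,540 m/s."""
    # Actual round-trip time of the echo (depths converted to meters):
    round_trip_s = 2.0 * (actual_depth_cm / 100.0) / actual_speed_m_s
    # The machine applies the range equation with the assumed speed:
    return (assumed_speed_m_s * round_trip_s / 2.0) * 100.0  # back to cm

# Slower tissue (e.g., ~1,450 m/s): the echo returns late and is
# displayed farther from the transducer than it actually is.
assert displayed_depth_cm(5.0, 1450.0) > 5.0

# Faster tissue (e.g., ~1,600 m/s): the echo returns early and is
# displayed too close to the transducer.
assert displayed_depth_cm(5.0, 1600.0) < 5.0
```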

Side lobes are the result of the transducer element being finite in size. Because the center and edge of the element vibrate differently, most of the acoustic energy emitted by the transducer flows along the main axis of the sound beam, but some energy diverts from the main path. This off-axis energy causes the side lobes, which generate reflections at improper, off-axis locations in the image. Side lobes are created by a single-crystal transducer. Apodization, a technique in which the beam shape is changed by driving the elements with different voltages, is used to reduce both side lobes and grating lobes.

Grating lobes are seen with linear array transducers, which also produce off-axis acoustic waves as a result of the regular spacing of the active elements. All grating lobes cause reflections to appear at improper, off-axis locations in the image. A grating lobe is created by multiple-crystal (array) transducers. Subdicing, a method of dividing each transducer element into smaller elements, is used to reduce grating lobes.

Range ambiguity. As noted earlier, the range equation relates the depth of a reflector to the propagation speed and the pulse round-trip time. The maximum depth (dmax) of a reflector that can be unambiguously recorded is:

dmax = c × PRP ÷ 2 = 77,000 cm/s ÷ PRF

Thus, the pulse repetition period (PRP), which controls the field of view (FOV), also determines the maximum depth of a reflector that can be unambiguously recorded. Echoes from a transmitted pulse that return after a time greater than the PRP will be erroneously recorded at a depth closer to the transducer.
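The unambiguous-depth limit can be computed directly from the PRF, since dmax = c × PRP/2 = 77,000 cm/s ÷ PRF (the function name and sample PRF are assumptions):

```python
def max_unambiguous_depth_cm(prf_hz):
    """d_max = (c / 2) * PRP = 77,000 cm/s / PRF in soft tissue."""
    return 77000.0 / prf_hz

# A PRF of 5,000 pulses/s limits unambiguous imaging to 15.4 cm:
assert abs(max_unambiguous_depth_cm(5000) - 15.4) < 1e-9

# Raising the PRF (for a faster frame rate) shrinks the usable depth:
assert max_unambiguous_depth_cm(10000) < max_unambiguous_depth_cm(5000)
```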


To guarantee efficiency of performance, all ultrasound diagnostic equipment is tested under a quality assurance (QA) program. To ensure that the instrument is operating correctly and consistently, it is checked for the following:

1. Imaging performance

2. Equipment performance and safety

3. Beam measurements

4. Acoustic output

5. Preventative maintenance (e.g., replacing worn parts before they actually fail)


The American Institute of Ultrasound in Medicine (AIUM) has designed a test object specifically to measure imaging performance of an ultrasound system (Fig. 1–24). The AIUM test object is a “tank” consisting of a series of stainless steel rods, 0.75 mm in diameter, arranged in a specific pattern between two transparent plastic sides, with the other boundaries formed by thin, acrylic plastic sheets.14 The tank is filled with a mixture of alcohol, an algae inhibitor, and water, which allows the propagation speed to approximate the speed of sound in soft tissues (1,540 m/s). The results obtained are not affected by normal fluctuations in room temperature; the speed varies less than 1% for a temperature variation of 5 degrees Celsius (5°C).


FIGURE 1–24. AIUM 100-mm test object.

The following factors are measured by the AIUM test object (Table 1–3):

TABLE 1–3 • Performance Measurements for the AIUM Test Object


System sensitivity is measured by determining the weakest signal that the system will display.

Axial resolution is determined by placing the transducer on face A and scanning rod group (a). The six rods are separated by 4, 3, 2, and 1 mm, respectively. The system’s axial resolution in millimeters is equal to the distance between the two closest yet distinguishable echoes.

Lateral resolution is measured by placing the transducer on face B and scanning rod group (b). The lateral resolution is equal to the distance between the two closest rods in this group.

Dead zone (ring-down) is the region of the sound beam closest to the transducer, in which imaging cannot be performed. To determine the extent of the dead zone, the transducer is placed on face A and rod group (d) is scanned. The distance from the transducer to the first rod imaged equals the length of the dead zone. The dead zone decreases with higher frequency, and the region can be visualized with the application of an acoustic standoff, which is positioned between the transducer and the patient.

Range accuracy (depth accuracy) is measured by placing the transducer on face A and scanning rod group (e). For the system to be operating properly, the echoes should appear at their actual depths and spacings within 1 mm (the rods in this group are 2 cm apart). Checking the range accuracy ensures the accuracy of the internal calipers of the system.

In addition to the AIUM test object, other devices have been designed to measure different parameters of imaging performance. The beam profiler is designed to record three-dimensional reflection amplitude information. It consists of a pulser, receiver, transducer, and tank equipped with rods placed at different distances from the transducer.1

The transducer is pulsed and scanned across the rods. The fluctuation in amplitude of each reflection returning to the transducer is recorded in an A-mode pattern. The hydrophone is one of several devices that measure acoustic output; it consists of a small transducer element mounted on a narrow tube.1 When used with an oscilloscope, the voltage produced in response to variations in pressure can be displayed and evaluated. The output produced by the hydrophone permits calculation of the period, pulse repetition period, and pulse duration. The hydrophone can also be used as a beam profiler.

Tissue/Cyst Phantom

This test device contains a medium that simulates soft tissue (Fig. 1–25A). Enclosed in the phantom are structures that mimic cysts and solid masses and a series of 0.375-mm targets in two groups, which measure depth and angular resolution. The phantom is used to evaluate the ultrasound system and transducer performance. Sonographic equipment can be evaluated for scattering, attenuation, depth and axial resolution, vertical and horizontal distance calibration, and ring down.5 Cyst-mimicking target structures are positioned vertically to permit imaging of a line of targets simultaneously (Fig. 1–25B).


FIGURE 1–25 A. Multipurpose tissue/cyst phantom ATS Model 539 (Courtesy of ATS Laboratories, Inc, St. Bridgeport, CT. Reprinted with permission.) B. Drawing of anechoic target group ATS Model 539 (Courtesy of ATS Laboratories Inc, St. Bridgeport, CT. Reprinted with permission.)


The first hydrophone was invented during World War I6 and used for underwater echo detection. Hydrophones used in diagnostic ultrasound are used by engineers and physicists to measure or calculate

• Pressure amplitude

• Intensities

• Period and wavelength

• Pulse duration and pulse repetition period


To date, there is no concrete evidence to support any truly detrimental bioeffects from the application of diagnostic ultrasound to human tissues.1 The study of possible effects is ongoing, however, and the definitive answer has not been found. It is generally agreed that the potential value of the information obtained from the procedure far outweighs the possibility of deleterious effects. Greater study of the microscopic effects of sound on tissue will have to take place before additional conclusions can be reached. To clarify what is known to date, the potential bioeffects are characterized by two indices: the thermal index and the mechanical index. These indices are displayed on all new ultrasound monitors as two acronyms, TI and MI (Fig. 1–26).


FIGURE 1–26. These indices are displayed on all new ultrasound monitors as two acronyms, TI and MI (white arrows).

Thermal index (TI) is the ratio of the acoustic power produced by the transducer to the power required to raise the temperature of tissue by 1°C. Thermal effects are produced primarily by the mechanism of attenuation: absorption, a major component of attenuation, leads to a rise in tissue temperature. Increased temperatures can cause irreversible damage, depending on the extent of the exposure. It is generally agreed that exposures producing a maximum temperature rise of 1°C can be used without harmful effects. A rise in tissue temperature to 41°C or above is considered dangerous to a fetus; the longer this temperature is maintained, the greater the potential risk for damage.1

Temperature rise is dependent on tissue type, scanning time, and depth of tissue. Three types of thermal indices exist that correspond to different types of tissue:

TIS. Thermal index in soft tissue

TIB. Thermal index in bone

TIC. Thermal index in cranial bone
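The three thermal indices above share a common form: the acoustic output power divided by the power estimated to produce a 1°C rise in the relevant tissue model. A minimal sketch of that ratio (values and function name are illustrative; real TIS/TIB/TIC calculations use tissue-specific attenuation models):

```python
def thermal_index(output_power_mw, power_for_1c_rise_mw):
    # TI = acoustic output power / power needed for a 1 degree C rise.
    # Both values in milliwatts; the denominator comes from a
    # tissue-specific model (soft tissue, bone, or cranial bone).
    return output_power_mw / power_for_1c_rise_mw

# Example: 30 mW output when 60 mW would produce a 1 degree C rise
print(thermal_index(30.0, 60.0))  # 0.5
```

A TI of 0.5 indicates the machine is emitting half the power the model predicts would be needed for a 1°C rise.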

Mechanical index (MI) is an estimate of pressure amplitude that occurs in tissue. It is an indicator of potential cavitation.

Cavitation is the result of pressure changes in the medium causing gas bubbles to form; it can produce severe tissue damage. The two types of cavitation are stable cavitation and transient cavitation.

Stable cavitation involves microbubbles already present in tissue that respond by expanding and contracting when pressure is applied. These microbubbles can intercept and absorb a large amount of the acoustic energy. Stable cavitation can result in shear stresses and microstreaming in the surrounding tissues. In stable cavitation, these microbubbles tend to expand and contract without bursting.

Transient cavitation is dependent on the pressure of the ultrasound pulses. The tissue microbubbles expand and collapse violently. This type of cavitation can cause highly localized, violent effects involving enormous pressures, markedly elevated temperatures, shock waves, and mechanical stress. Cavitation may occur with short pulses and during the peak rarefactional pressure of the wave.7 It has been shown that pulses with peak intensities of >3,300 W/cm2 can induce cavitation in mammals.1 Precise determination of when cavitation will occur is not currently within our capabilities. For specific conditions of homogeneous media, it is possible to estimate an index for the cavitation threshold. In transient cavitation, the microbubbles tend to expand, contract, and then burst. The collapse of the microbubbles causes a localized temperature elevation, which can reach a very high temperature.7
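The mechanical index is commonly computed as the derated peak rarefactional pressure (in MPa) divided by the square root of the transducer center frequency (in MHz). A hedged sketch of that relationship (regulatory definitions add derating details not shown here):

```python
import math

def mechanical_index(peak_rarefactional_pressure_mpa, center_freq_mhz):
    # MI = p_r / sqrt(f): higher rarefactional pressures and lower
    # frequencies raise the likelihood of cavitation.
    return peak_rarefactional_pressure_mpa / math.sqrt(center_freq_mhz)

# Example: a 2.0 MPa rarefactional peak at 4.0 MHz
print(mechanical_index(2.0, 4.0))  # 1.0
```

Note that the same pressure at a lower frequency yields a higher MI, consistent with cavitation being more likely at lower frequencies.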


ALARA (as low as reasonably achievable) is a principle recommended to minimize patient ultrasound exposure:

• Use a high receiver gain setting and a lower power output setting

• Avoid using high TI and MI values

• Minimize scanning time

• Use a higher frequency transducer when possible

• Use a focused transducer

• Avoid temperature elevation

• Avoid using spectral Doppler on early embryo when possible

The following guidelines are adapted from an official statement by the AIUM: Bioeffects Considerations for the Safety of Diagnostic Ultrasound. Bethesda, MD, American Institute of Ultrasound in Medicine, 1988. The reader is urged to read the full AIUM text.

Intensity. There are no independently confirmed significant biological effects in mammalian tissues exposed in vivo with unfocused transducers with intensities below 100 mW/cm2 and below 1 W/cm2 for focused transducers.

Exposure. Exposure times can be >1 s and <500 s for an unfocused transducer and <50 s per pulse for a focused transducer. No significant bioeffects have been observed even at higher intensities than noted above (as long as the intensity × time product is <50 J/cm2).

Thermal. A maximum temperature rise of 1°C is acceptable, but an increase in the in situ temperature to 41°C or greater is hazardous to fetuses.

Cavitation. Can result if pressure peaks are greater than 3,300 W/cm2. However, it is not possible to specify a threshold at which cavitation will occur.

Randomized studies are the best method for assessing potential effects. There are no independently confirmed biologic effects on patients or operators.1
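The intensity × time criterion in the Exposure guideline above is a simple product: intensity in W/cm² multiplied by time in seconds gives an energy fluence in J/cm². A quick sketch of that check (function names are illustrative, not from the AIUM statement):

```python
def exposure_j_per_cm2(intensity_w_cm2, time_s):
    # Energy fluence = intensity x time (W/cm2 x s = J/cm2)
    return intensity_w_cm2 * time_s

def within_guideline(intensity_w_cm2, time_s, limit_j_per_cm2=50.0):
    # Guideline: intensity x time product below 50 J/cm2
    return exposure_j_per_cm2(intensity_w_cm2, time_s) < limit_j_per_cm2

print(within_guideline(0.1, 400))  # 40 J/cm2 -> True
print(within_guideline(1.0, 60))   # 60 J/cm2 -> False
```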


In order to identify and promote strategies to improve patient care and safety, hospitals and health care schools are implementing training for health care workers in an effort to reduce the number of medical errors. The national board in ultrasound now includes patient care and safety in the examination content outline.

Common causes for medical errors in ultrasound include the following:

• Wrong patient

• Wrong site

• Wrong procedure

• Missed diagnoses

• Missed pathology

To help reduce the risk of scanning the wrong patient, the sonographer should first verify that the correct patient is present and then introduce himself or herself to the patient with an employment ID visible to the patient.

Time Out: Immediately before Starting the Procedure with the Patient Present

• Check for the correct patient with name, date of birth, and medical record number

• Check if clinical history corresponds to the requested examination

• Check if the examination requested is for the patient present

• Check for latex allergy


• Prior to the image being released to the physician or PACS, make sure it can be used for interpretation.

• Confirm that the side markers on the post-procedure images are correct.

• Check again to confirm that the images correspond to the correct patient.


Informed consent is a written or verbal consent to undergo a medical or surgical treatment. The consent should include the following:

• What the procedure involves

• The benefits or risks of the procedure

• The right to refuse treatment

• Alternatives

• Witness to the consent

• Certified language line for non-English-speaking patients

The patient can revoke consent at any time.8


Universal precautions are a set of precautions designed to prevent the transmission of HIV, hepatitis B virus, and other blood-borne pathogens when providing health care. These precautions involve the use of protective barriers such as gloves, gowns, and masks. Universal precautions apply to the following body fluids:

• Vaginal secretions

• Semen

• Amniotic fluid

• Cerebrospinal fluid

• Pleural fluid

• Peritoneal fluid

Feces, sweat, urine, and sputum do not fall under universal precautions. The Centers for Disease Control and Prevention (CDC) recommends hand washing before and after procedures to reduce the spread of microorganisms. Hand washing should be done for at least 15–20 seconds with soap and water. This should be done even if gloves will be worn during a procedure.8


Transvaginal transducers are reusable instruments. Cross-contamination with reusable medical devices is possible if a precautionary method is not employed. The current methods used to prevent transmission of infection with transvaginal transducers include the use of

1. Cold chemical disinfectants

2. Disposable probe covers

Both methods are required to prevent cross-infection from the transducer because although the probe is covered, a microscopic tear in the cover could expose the transducer to bacteria or viruses from the vaginal mucosa. The piezoelectric crystal of the transducer is heat sensitive. Therefore, steam autoclaves should not be used because excessive heat could depolarize the transducer.9


Color-flow imaging arrived on the medical scene in answer to a basic medical need: an ability to look at cardiovascular blood flow noninvasively. The technology emerged from the development of multigate Doppler systems, which first appeared in 1975.10 Although these systems used color Doppler only inside an M-mode display, they established both the multigate approach and the use of color to encode motion. In 1983, the first real-time echocardiography color-flow system became commercially available.11 The first commercial color-flow vascular imaging device followed in 1986. Since then, nearly all ultrasound manufacturers have added color-flow imaging capabilities to their product lines.

Because the ultrasound community had no terminology standards for displaying color-coded information, color-flow imaging (CFI) has acquired several alternate names including color Doppler imaging (CDI), color-flow Doppler, and angiodynography. In fact, CFI includes both Doppler and non-Doppler depictions of flow in color such as color velocity imaging (CVI).12 Based on the number of instruments in use, however, Doppler-based CFI (DCFI) is the most common technology sonographers will see. In addition, CVI is not currently being produced or marketed. As a consequence, this chapter focuses only on DCFI instrumentation and how it fits into the major applications of imaging.

The current applications of DCFI are extensive and increasing vigorously. Basic and clinical research is extending the usefulness of this imaging modality. In addition, within the research departments of many ultrasound companies, new technologies are shaping the speed and capabilities of DCFI. As in other parts of ultrasound, understanding the instrumentation can go a long way toward understanding how to conduct clinical examinations and read the images.

The Essential Doppler Color-Flow Image

The primary feature of the color-flow image is its simultaneous depiction of stationary soft tissues in gray scale and moving soft tissues in color. For the most part, the moving soft tissue we are interested in is blood within the cardiovascular system. The technology, however, can be configured to provide a color depiction of myocardial motion as well as flowing blood.13 Despite this special application, the gray scale and color relationship opens the use of DCFI for two major applications: echocardiography and vascular imaging. However, as you will discover, any moving echo source within and sometimes outside the scanning field can produce color in the image. Setting up the system correctly, however, can limit the color-flow information to moving blood.11

DCFI is the son of duplex imaging and Doppler multigate analysis. Duplex imaging is older than multigate analysis and has several different forms. It includes the combination of either a continuous-wave or a single-point (pulsed-Doppler) spectrum with an image.14 The image can be an M-mode trace, a real-time B-mode image, or, almost paradoxically, a color-flow image. (The paradox is not real, however. DCFI and a single-point spectrum look at the same events, but they do so from different points of view. As a result, they can be profitably combined into a common presentation.)

Multigate analysis is a method of collecting Doppler data from several adjacent spatial locations. A multigate system analyzes each of several sampling sites for flow events using Doppler signal processing. The limitation on this form of signal processing is time. Multigate systems look at each of several sites serially; thus, as the number of sites increases, the time required to make a composite image also increases.12 As a result, as the time needed to form a composite image frame increases, the corresponding image frame rate decreases.

Current machines use a number of modern signal-handling techniques to keep the image frame rates as high as possible. The essential color-flow presentation provides the following pieces of information directly from the image: (1) the existence of flow, (2) its location in the image, (3) its location in the anatomy, (4) its direction relative to the transducer, (5) its direction relative to the anatomy, and (6) its pattern over space and time.

Because the color image shows flow over space and time, we can use the image to locate specific characteristics within the flow pattern. For example, the higher-velocity flow segments (major streamlines and poststenotic jets) are visible within the heart and larger vessels. In addition, the image clearly shows the difference between a complex flow pattern resulting from anatomy and the poststenotic flow pattern (turbulence) associated with disease.15

The ability to clearly show the patterns of flow depends on advanced technologies focused on asking the right technological questions. Everything comes together at the image, where we begin the discussion.

Doppler Color-Flow Imaging Technology

Producing an Image

DCFI begins by making a multigate image for both the grayscale and the Doppler segments of the image. By design, the system divides each beam location in the scanning field into a series of small sampling sites, each of which translates into a specific location in the digital scan converter image.16 Fig. 1–27 is an example of this division using a linear array. The digital scan converter design in the ultrasound machine determines the size and spacing of these sampling sites.17


FIGURE 1–27. Sampling the scanning beams. LA is the linear array, SS is a sample site, and LOS represents the scanning lines of sight. Each sampling site represents a position in the digital scan converter.

The sampling intervals used to make the gray-scale and color segments of the DCFI depend on the image. For example, a gray-scale image requires sampling intervals no greater than one wavelength.16 The gray-scale image rests on detection of the echo signal amplitudes, which the signal processing converts into gray-scale intensities. Sampling intervals that are greater than one wavelength simply do not display tissue texturing well enough to support good gray-scale imaging. To show the differences among tissues, a gray-scale image must show the differences among the various tissue textures.

Sampling for Doppler information has a different set of requirements. At the outset, Doppler signal processing requires more time than amplitude detection. For example, a single pulse–listen cycle can provide the information for a single gray-scale image line of sight (LOS). Doppler, however, requires anywhere from 4 to 100 pulse–listen cycles to build a single Doppler image LOS.12 The increased time is needed to detect the phase shifts in the echo signals that encode the reflector motion. This extended sitting on a single LOS to detect motion is called dwell time or ensemble time. Practicalities will limit the dwell time on each LOS to a range of 4–32 cycles. As a result, the sampling intervals are usually larger and fewer than in conventional gray-scale imaging. The smallest Doppler sampling sites are at one-wavelength intervals. Often, to shorten the time to form one frame of the combined gray-scale and color real-time image, the Doppler sampling sites may be several wavelengths long. There is a limit here, too. Sampling sites larger than 1 mm provide a poor depiction of vascular flow patterns. Fig. 1–28 shows how the sampling intervals can affect the depiction of flow patterns in a vessel.


FIGURE 1–28. Flow image resolution and sampling intervals. V is the vessel wall, NV is the normal velocity profile, LS is the profile with large-interval sampling, and SS is the profile with small-interval sampling. The smaller the sampling, the better the depiction of flow.
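The time cost of dwell time can be estimated directly: each pulse–listen cycle lasts the round-trip travel time to the imaging depth, and each Doppler LOS consumes one such cycle per ensemble pulse. A rough sketch, assuming soft-tissue sound speed and ignoring processing overhead (all parameter values are illustrative):

```python
C_CM_PER_S = 154000  # approximate speed of sound in soft tissue (cm/s)

def color_frame_rate(depth_cm, doppler_lines, ensemble, gray_lines=0):
    # One pulse-listen cycle = round-trip travel time to the imaging depth.
    pulse_listen_s = 2.0 * depth_cm / C_CM_PER_S
    # Each Doppler line needs 'ensemble' pulses; each gray-scale line needs one.
    pulses_per_frame = doppler_lines * ensemble + gray_lines
    return 1.0 / (pulses_per_frame * pulse_listen_s)

# 4 cm depth, 64 Doppler lines, ensemble of 8 -> about 38 frames/s
print(round(color_frame_rate(4.0, 64, 8)))
```

Doubling the ensemble or the number of Doppler lines halves the frame rate, which is one reason systems restrict the color region of interest.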

The heart poses a different set of requirements. Because we do not need to see the same detailed flow patterns required in vascular imaging, color-flow echocardiography can use larger sampling intervals.11 By reducing the time required to make the colored portion of an image, the combined frame rates can be accelerated enough to depict events in both adult and pediatric hearts. Even these techniques, however, may be inadequate. In these cases, the system can still obtain higher frame rates by limiting the flow interrogation to a smaller number of Doppler LOSs in the image. Limiting Doppler signal processing to a specific region of interest (ROI), or window, can help restore the frame rates to usable levels. Despite a limited ROI, the interrogation window can be moved to permit a look at flow over the entire FOV. At each Doppler sampling site, the DCFI system looks at the returning echo signals for changes in phase and the presence of Doppler shift frequencies.

Changes in phase. Changes in the phase or timing of an echo signal not only show that an echo source is moving but also reveal its direction of motion.18 The ultimate reference for this motion is the transducer. As in duplex Doppler imaging, movement toward the transducer is called forward motion; movement away from the transducer is called reverse motion. The color-flow system encodes this directional information into color, typically red and blue. Fig. 1–29 shows this color assignment geometry for a linear array. These same rules apply to every ultrasound beam in either a sector or a linear scanning field. Because no universally accepted standard exists for assigning color to direction, most systems have a flow-reverse button that switches the color assignment. This often permits setting arterial flow in red and venous flow in blue. Obviously, in complex vascular patterns, this rule may not hold throughout the image. In this case, a pulsatile flow pattern usually distinguishes an artery from a vein, which shows lower velocities and a respiratory dependence.


FIGURE 1–29. Color assignment to the direction of flow. LA is the linear array, B is the ultrasound beam, SG is the scanning field geometry, F is forward motion, R is a reverse motion, and H is a horizontal line. Flow vectors pointing on the F arc are all one color. Flow vectors pointing on the R arc are the opposite color. Anatomy further restricts blood flow to the geometry of the vessel.

Doppler shift frequencies. Each Doppler image sampling site is a range gate that represents the position of the Doppler sample volume. If the sample volume is within a blood flow pattern, a spectrum of Doppler shift frequencies composes the resulting signal. The system, however, cannot display a frequency spectrum within each colored pixel that combines to form a color-flow image. Instead, most color-flow systems determine a representative frequency and encode this frequency into a color quality.19

All current color-flow systems use some form of average frequency to represent the Doppler shift frequency within a sample site. The average frequency is a good choice because it is less sensitive to noise than most alternatives. In some systems, the average frequency comes from an online spectral analysis.19 In others, autocorrelation and signal-averaging techniques produce the average value. Regardless of the type of system, the signal processing encodes the average frequency into one of several color qualities.
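The autocorrelation approach mentioned above can be sketched in a few lines. This is a simplified, Kasai-style estimator operating on complex (I + jQ) samples from one gate; the variable names and the synthetic test signal are illustrative:

```python
import cmath

def mean_doppler_shift(iq_samples, prf_hz):
    # Lag-one autocorrelation: the phase of the summed products
    # z[n+1] * conj(z[n]) gives the mean phase advance per pulse,
    # which maps directly to the mean Doppler shift frequency.
    acc = sum(iq_samples[n + 1] * iq_samples[n].conjugate()
              for n in range(len(iq_samples) - 1))
    return prf_hz * cmath.phase(acc) / (2.0 * cmath.pi)

# Synthetic ensemble: a pure 500 Hz Doppler shift sampled at PRF = 4 kHz
prf = 4000.0
z = [cmath.exp(2j * cmath.pi * 500.0 * n / prf) for n in range(8)]
print(round(mean_doppler_shift(z, prf)))  # 500
```

The complex sum also suggests why the mean-frequency estimate is relatively noise tolerant: random phase contributions tend to cancel when the products are accumulated.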

Color has three inherent qualities we can use to encode information: hue, brightness, and saturation. The hue of a color represents its basic frequency or wavelength. For example, red and blue are different hues, and so are yellow and green. Some systems encode the Doppler shift frequency information into hue, presenting a variety of different colors, with each color representing a different average Doppler shift frequency.11

The brightness of a color represents its energy content. For example, increasing or decreasing the illumination on a color patch changes the brightness of the perceived color without changing its hue. Most DCFI designs use changes in hue rather than color brightness to encode the average frequency. At the same time, the brightness of the color may be modulated to smooth the color edges.

Saturation expresses the purity of a color. A color with 100% saturation is considered completely pure. For example, a pure or 100%-saturated red would appear on the display screen as a deep red. Changing the saturation means adding some white light to the color; thus, a less saturated red appears whiter. Many systems allow the user to choose color assignment rules, including the use of color saturation to encode the average frequency information.20 These color assignments make jets and major streamlines appear whiter than the surrounding color.
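One way to picture these encoding choices is a direct map from a normalized mean frequency to hue and saturation. The assignment below is one illustrative scheme, not any manufacturer's actual color map:

```python
import colorsys

def flow_color(norm_freq):
    """Map a normalized mean Doppler shift (-1..+1) to an RGB triple."""
    # Hue encodes direction: red for forward flow, blue for reverse.
    hue = 0.0 if norm_freq >= 0 else 2.0 / 3.0
    # Saturation drops as the shift magnitude grows, so fast jets
    # and major streamlines appear whiter than the surrounding color.
    saturation = 1.0 - 0.7 * min(abs(norm_freq), 1.0)
    return colorsys.hsv_to_rgb(hue, saturation, 1.0)

print(flow_color(0.0))   # forward, zero shift: pure red (1.0, 0.0, 0.0)
print(flow_color(-1.0))  # fast reverse flow: a whitened (desaturated) blue
```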

Beam Contributions to Sampling

Forming the ultrasound beam and the beam’s subsequent motion has a strong role to play in making a color-flow image. Most DCFI systems use a phased-array, linear array, or curved linear array transducer. Only a few systems use a mechanical scan. This preference for electronic scanning is not merely a matter of chance.

The Doppler effect cannot distinguish between a moving ultrasound beam and a stationary echo source or a stable ultrasound beam with a moving echo source. Without special techniques to control the pattern of beam motion, a mechanical scanner has a steadily moving beam (see Fig. 1–30). This continuous motion means that the Doppler signal processing always sees some movement between the tissue echo sources and the ultrasound beam. This movement produces a set of low Doppler shift frequencies that can hide low blood flow velocities.


FIGURE 1–30. Electronic and mechanically steered beams. PA is a phased array, SL represents the scanning lines, MS is the mechanical scanning head, T is a transducer, M is the direction of motion, TB is the position of the transmit beam, and RB is the position of the receive beam. Electronic steering permits fixed positions for each scan line.

One clear advantage of the electronic systems is the formation of a stationary ultrasound beam (Fig. 1–30) in each LOS position. In this scanning pattern, a stationary beam appears at each LOS in the scanning plane.21 Electronic beam forming and steering have a price as well, however. Every transducer, regardless of size, acts as if it were a hole or aperture in space. In this model (Fig. 1–31), the ultrasound comes from a point source behind the aperture. As the waves travel through the aperture, the waves and aperture interact to produce a diffraction pattern. Most of the energy comes through the aperture and forms a large central lobe of energy. The remaining energy diffracts into a set of side lobes that can broaden and smear the beam.


FIGURE 1–31. The transducer is a diffracting aperture. F is the virtual ultrasound field behind the aperture, A is the aperture, SL represents the diffraction side lobes, and M is the main beam. Each transducer, including individual array elements, acts as a diffracting hole in space.

An array of transducer elements (whether linear or curved) produces a similar set of side lobes. Because these lobes come from the summation of side lobes from each transducer element as if from a diffraction grating, they are called grating lobes. When the electronic control positions the beam perpendicular to the array, the grating lobes can be relatively small. By using several different cancellation techniques, however, engineers can suppress the grating lobes as much as –60 dB (1/1,000th) or more below the main lobe of energy. When the steered beam points off to the side, however, the number and size of the grating lobes increase (Fig. 1–32).11 Again, the result can be a smearing of the ultrasound beam and a loss of lateral resolution, a loss that can affect the accurate placement of color within an image.


FIGURE 1–32. Grating lobe formation with beam steering. TE represents transducer elements, GL represents grating lobes, and M is the main lobe. Steering increases the formation of side lobes, smearing the ultrasound beam.

Signal Processing

Once the echo signals are inside the machine, they face a diverse set of analyses. When and how these analyses occur will determine the character of the final color-flow image.

Within the Doppler-based machines, signal processing can take on two different forms. First, a system can use the same signal to make both the gray-scale and Doppler images. This is synchronous signal processing.12 Alternatively, the system can use different signals to form the gray-scale and Doppler images. This is asynchronous signal processing.12 Nearly all DCFI machines designed for vascular applications are now asynchronous, dividing the data collection between gray-scale imaging and color mapping to form each composite image frame.

Synchronous signal processing. Fig. 1–33 shows the basic organization of a synchronous signal-processing system. Replacing the linear array with a single transducer and replacing the B-mode image with an M-mode trace produced the earliest synchronous system: the M/Q system.22,23 This system used the same signals to produce both an M-mode display and a point spectrum. All synchronous systems use the same transducer, coherent transmitter, and receiver because they extract different information from a common signal. After reception, the signals divide into two pathways: one for the gray-scale image, the other for the Doppler image. The system uses a priority function to place color properly within the gray-scale image.


FIGURE 1–33. Synchronous signal processing. LA is the linear array, CT is a coherent transmitter, DR is a receiver, AD represents amplitude detection, DD represents Doppler detection, DSC is the digital scan converter, and M is the color monitor. Synchronous signal processing uses the same signal to make the gray-scale and Doppler images.

To form image frames at speeds useful to echocardiography, most echocardiography color-flow systems use synchronous signal processing.11 These systems use a phased array to form and steer the ultrasound beams (Fig. 1–34). Although beam steering spreads the beam because of grating lobes and side lobes, the effects of these beam distortions do not detract seriously from echocardiography images.


FIGURE 1–34. Formation of a phased array beam and steering. PA represents the phased array elements, B is the beam, SG is the scanning field geometry, and the arrow shows beam movement. The phased array has a limited aperture size that limits the focal point size and focal range. Thus, focusing is poorest at the edges of the sector.

In synchronous vascular imaging, the system shapes and focuses an ultrasound beam along an LOS perpendicular to the linear array. Fig. 1–35 shows the organization of such a system. The beam scans down the array to form a rectangular scanning field. Zone focusing on “transmit” and dynamic focusing on “receive” provide a narrow beam over the FOV.24


FIGURE 1–35. Scanning field organization in synchronous signal processing. LA is the linear array, LOS represents the scanning lines-of-sight, W is a wedge, B is the beam, and the arrow shows beam motion. The wedge provides a Doppler angle between the moving blood and the ultrasound beam.

The system tests each sample site along each beam for flow. If flow exists, the corresponding image pixel becomes colored; if not, the pixel becomes gray scale. In this manner, the processing builds the image on a sample-site by sample-site basis.

Synchronous DCFI for the vascular system faces a Doppler angle requirement. Most of the vessels in the neck, arms, and legs are parallel, or nearly so, to the skin surface, which places an unsteered beam from a linear array at 90° to the flow pattern. In this situation, the system can either beam-steer or use a wedge standoff to provide the necessary Doppler angle. A wedge standoff is a water-filled plastic device that slips on and off a linear array as needed. Despite the apparent simplicity of a wedge, most synchronous vascular imaging systems use electronic beam steering to provide the Doppler angle.
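The 90° problem follows directly from the Doppler equation: the detected shift scales with the cosine of the Doppler angle, which is zero at 90°. A small numerical sketch, assuming a soft-tissue sound speed of 1540 m/s:

```python
import math

def doppler_shift_hz(f0_hz, velocity_m_s, angle_deg, c_m_s=1540.0):
    # Classic pulsed-Doppler relation: f_d = 2 * f0 * v * cos(theta) / c
    return 2.0 * f0_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / c_m_s

# 5 MHz beam, 0.5 m/s blood flow:
print(round(doppler_shift_hz(5e6, 0.5, 90)))  # 0    (no detectable shift)
print(round(doppler_shift_hz(5e6, 0.5, 60)))  # 1623 (usable shift after steering)
```

Steering the beam, or adding a wedge, to bring the angle to roughly 60° restores a measurable shift.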

The physical simplicity of the wedge can be deceiving. For example, design engineers must consider not only beam formation, but also how the wedge may change the beam because of refraction, scattering from air bubbles, bacterial growth inside the wedge, and antiseptic treatments of the wedge that do not destroy the plastics that compose the wedge.

Asynchronous signal processing. Asynchronous signal-processing systems use different ultrasound beams and signals to create the composite gray-scale and Doppler images.

Asynchronous systems use separate transmitters for the gray-scale and Doppler portions of the image. Only the transducer array and a central coordinating timer are common to the separate signal pathways to the scan converter. Fig. 1–36 shows the organization of an asynchronous imaging system.


FIGURE 1–36. Asynchronous signal processing. LA is the linear array, ImT is the image transmitter, CT is the coherent transmitter, DR is the Doppler receiver, AD represents amplitude detection, DD represents Doppler detection, DSC is the digital scan converter, M is the color monitor, and T is a common timer with control signals (t). Asynchronous signal processing uses different signals for the gray-scale and color portions of the image.

Most asynchronous systems use beam steering to obtain the Doppler image while keeping the gray-scale image beams perpendicular to the transducer. Fig. 1–37 shows how the two scanning fields overlay. Because the system uses two different transmitters, the Doppler carrier frequency can be different from the imaging frequency. For example, gray-scale imaging might be at 5.0 MHz and Doppler imaging at 3.0 MHz.


FIGURE 1–37. Scanning organization for asynchronous signal processing. LA is the linear array, DB is the Doppler beam, IB is the grayscale beam, and the arrows show beam motion. Beam steering provides a Doppler angle for making the Doppler image. The operating Doppler and gray-scale frequencies can differ.

The operating cycle interweaves the Doppler and grayscale beams to produce two separate images. This interweaving reduces the potential frame rates for the system. Because the sample sites for the two fields of view do not coincide, they cannot accumulate into a common memory in a simple manner. Instead, they pass to separate memories and finally overlay one over the other in the digital scan converter.

Because the two scanning fields have different orientations, the color and gray-scale imaging do not have a one-to-one correspondence over the composite image field (Fig. 1–37). Portions of the steered Doppler image are outside the gray-scale field, just as portions of the gray-scale scanning field are outside the Doppler scanning field. One way of keeping the overall image frame rate high is to confine the color signal processing to a small mobile window, or ROI. This technology arrangement is a common practice for all color-flow systems, both vascular and cardiac.

Amplitude signal processing (power Doppler imaging). Doppler signal processing is generally a matter of deriving the Doppler frequency components within a returning echo signal. Fast Fourier transform (FFT) analysis operates on the composite waveform and produces a range of discrete frequency components. Thus, an FFT provides two properties of these component signals for analysis and portrayal in color: (1) Doppler signal frequencies and (2) Doppler signal amplitudes. DCFI encodes the Doppler shift frequencies into color to show both the direction and the speed of blood flow. An alternative to frequency analysis is to determine the power spectrum of the Doppler signal amplitudes using the system directional (in-phase and quadrature) channels. Although this technique may have different trade names, the technology is most often called amplitude or power Doppler imaging.15 Importantly, although the name of the technique includes the word power, it does not increase the acoustic power delivered to the patient. Rather, it is a different method of signal processing using normal signal amplitudes and output power values.

Signal processing for Doppler frequency information offers advantages and disadvantages. At the outset, frequency information shows both the direction and the relative velocity of the blood flow in the image. At the same time, it is very sensitive to noise, the Doppler angle, and is subject to high-frequency aliasing.

Power Doppler, on the other hand, is not as sensitive to system noise as frequency-based Doppler, but it is more sensitive to displaying flow boundaries. In addition, power Doppler is relatively angle independent and is nonaliasing. Because of these last three advantages, power Doppler can better show overall vascularity and better supports three-dimensional depictions of perfusion into organs and masses.25

Power Doppler, however, forfeits detailed flow information within the vessels. In addition, it is very sensitive to soft tissue motion and the so-called “flash artifact” produced by this motion. To date, much effort has gone into developing techniques that can suppress the flash artifact and improve power Doppler imaging.

Practical Issues

Cardiac Imaging Requirements

In general, viewing the heart with ultrasound requires intercostal and subcostal imaging with low-frequency ultrasound.11 Parasternal DCFI naturally places the ultrasound beam approximately 90° to the flow pattern. As a result, apical and subcostal views of the heart are needed to place flow patterns parallel to the ultrasound beams. The phased-array and the short-radius curved linear arrays are the transducers of choice for viewing through these thoracic and abdominal windows. The sector angles range from 30° to 180°.

Because blood is a low attenuator (0.15 dB/cm per MHz), viewing the heart with ultrasound does not require the same sort of front-end design (delay lines and receivers) that vascular imaging requires. In addition, the high frame rates, combined with large fields of view, impose large sampling intervals on the cardiac image, and the scanning uses a sector format. All of these factors combine to make the cardiac color-flow device right for the heart and wrong for the vascular system.
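The attenuation figure quoted above (0.15 dB/cm per MHz for blood) translates directly into round-trip losses, since attenuation scales with both frequency and path length. A small illustrative helper follows; the 3-MHz carrier and 10-cm depth are arbitrary example values, not values from the text.

```python
def round_trip_attenuation_db(coeff_db_cm_mhz, freq_mhz, depth_cm):
    """Total attenuation for a pulse traveling to depth_cm and back.

    coeff_db_cm_mhz: attenuation coefficient in dB/cm per MHz
    (the text gives 0.15 for blood).
    """
    return coeff_db_cm_mhz * freq_mhz * (2.0 * depth_cm)

# Example: 3-MHz cardiac transducer, target 10 cm deep, path through blood.
loss = round_trip_attenuation_db(0.15, 3.0, 10.0)
print(f"round-trip loss in blood: {loss:.1f} dB")   # 9.0 dB
```

The same call with a typical soft-tissue coefficient would give a much larger loss, which is why cardiac front-end designs can tolerate simpler receive chains than vascular imaging requires.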

Because the Doppler sampling intervals can be relatively large in echocardiography, detecting a regional turbulence is not always easy. To help locate flow disorganization (spectral broadening) for any sampling location, most cardiac systems determine not only the mean frequency at a sample site, but the signal variance as well.11 In many cardiac devices, color-coding for an increasing variance introduces a green tint to the primary color.

Vascular Imaging Requirements

Vascular DCFI involves all available peripheral vessels, the large, upper thoracic vessels, and the deeper vessels in the abdomen. A linear array typically is used to view the peripheral vessels. This linear scanning field sets the stage for using changes in color to show changes in the direction of flow. In contrast, a sector scan of a linear vessel produces a continuously changing Doppler angle (Fig. 1–38) and, thus, continuously changing color. Sector-scanning transducers, such as the phased-array and the curved linear array, are instead used to view abdominal vasculature. These transducers permit both subcostal and intercostal scanning to visualize the deeper abdominal vessels that may be within the rib cage.


FIGURE 1–38. Vascular imaging with a sector scanning field. PA is a phased array, SG is the scanning field geometry, V is the vessel, and NB is the beam perpendicular to flow in the vessel (arrows). Because each beam position has a different Doppler angle, the colors in the image change rapidly.

The sector fields, however, make reading the images more difficult. Identifying arteries and veins requires knowing both the direction and the pulsatility of the flow. Large FOVs and longer processing times for the color-flow image often make the effective frame rates too low to permit an easy determination of pulsatility. Using a single-point spectrum and a color-flow image together as well as decreasing the ROI can yield information about vascular pulsatility.

Displays of Frequency and Velocity

All current color-flow images using Doppler are two-dimensional maps of the Doppler shift frequencies. After all, color flow uses Doppler, too.

Many systems show the color values in velocity (centimeters per second) rather than frequency (hertz). This sort of display suggests a direct measurement of velocity in color. As in all color-flow Doppler determinations of velocity, the values represent a solution to the Doppler equation (Fig. 1–39). In this display, however, the velocity is not the absolute velocity of the red cells. Instead, the image values represent the closing velocity along the ultrasound beam. Absolute velocities would require a continuous correction of all angles to the flow patterns throughout the image. Fig. 1–40 shows this closing velocity relationship.


FIGURE 1–39. Calculation of velocity with the Doppler equation. V is the velocity, c is the ultrasound propagation velocity, Df is the Doppler shift frequency, Fo is the carrier frequency, and θ is the Doppler angle. Doppler machines measure frequency and calculate “true” velocity, V, based on an estimated Doppler angle entered by the sonographer.


FIGURE 1–40. The closing velocity geometry. LA is the linear array, B is the ultrasound beam, V is the target velocity, and V cos θ is the closing velocity. Closing velocity is the component of motion along the ultrasound beam.
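The relationships in Figs. 1–39 and 1–40 can be written as a minimal sketch, assuming the conventional soft-tissue propagation speed of 1,540 m/s; the 3-kHz shift, 5-MHz carrier, and 60° angle below are arbitrary example values.

```python
import math

C_CM_PER_S = 154000.0   # assumed propagation speed in soft tissue (1,540 m/s)

def velocity_from_doppler(shift_hz, carrier_hz, angle_deg):
    """Solve the Doppler equation (Fig. 1-39) for velocity in cm/s:
    V = (c * Df) / (2 * Fo * cos(theta))."""
    return (C_CM_PER_S * shift_hz) / (2.0 * carrier_hz * math.cos(math.radians(angle_deg)))

def closing_velocity(true_velocity, angle_deg):
    """Component of motion along the beam (Fig. 1-40): V * cos(theta)."""
    return true_velocity * math.cos(math.radians(angle_deg))

# Example: 3-kHz Doppler shift at a 5-MHz carrier with a 60-degree Doppler angle.
v = velocity_from_doppler(3000.0, 5.0e6, 60.0)
print(f"calculated velocity: {v:.1f} cm/s")                      # 92.4 cm/s
print(f"closing velocity:    {closing_velocity(v, 60.0):.1f} cm/s")   # 46.2 cm/s
```

Note how the machine's "true" velocity depends entirely on the sonographer-entered angle, while the closing velocity is what the Doppler shift actually measures.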

Color-Flow Imaging Artifacts

Because DCFI incorporates both B-mode imaging and Doppler signal processing, it is subject to the same artifacts that affect ultrasound in general. Three primary sources of confusion in DCFI are (1) range ambiguity artifacts, (2) Doppler high-frequency aliasing, and (3) soft tissue vibrations.

Fig. 1–41 shows the organization of events required to obtain a range ambiguity artifact. The high power and faster frame rates typical of DCFI offer ample opportunities for this artifact.26 In DCFI, the artifact appears as diffuse, nonpulsatile colors, suggesting flow that may not actually exist where it appears in the image.


FIGURE 1–41. The range ambiguity artifact. LA is the linear array, PL1 is the initial pulse-listen cycle, IU1 is the incident ultrasound from PL1, S1 is a phantom echo source inside the field boundary (FB), S2 is a real echo source outside the scanning field, DR1 is the deeper returning echo, and SA represents the simultaneous arrival of the two echo signals. Range ambiguity occurs when echo sources outside the scanning field appear in the image.

In a pulsed Doppler system, the Doppler shift frequencies are sampled at the pulse repetition frequency (PRF) of the system. High-frequency aliasing occurs when the Doppler shift frequency exceeds half the PRF sampling frequency; this aliasing limit is known as the Nyquist limit (PRF/2). When aliasing occurs in DCFI, both the colors and the single-point Doppler spectra “wrap around” the display format (i.e., the high frequencies in one direction appear as lower frequencies in the opposite direction) and confuse the appearance of flow. To remove aliasing, a sonographer must either increase the PRF (shorten the FOV) or decrease the Doppler shift frequency associated with the highest velocity.22 The Doppler shift frequency can be decreased by either lowering the Doppler carrier frequency or moving the transducer to place the Doppler angle closer to 90°. Because velocity errors grow increasingly pronounced at Doppler angles above 70°, however, it is better to choose a lower carrier frequency than to increase the Doppler angle above 70° (e.g., at 75°, the velocity calculation has an inherent error rate of 6.5% per degree; thus, an error of ±3° in estimating the true Doppler angle creates a velocity calculation error of ±19.5%).6
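The Nyquist limit and the angle-error rate quoted above can be checked numerically. Because the calculated velocity scales as 1/cos θ, the linearized error per degree of angle misestimation is tan θ · (π/180); at 75° this reproduces the 6.5%-per-degree figure cited in the text. The 8-kHz PRF below is an arbitrary example value.

```python
import math

def nyquist_limit_hz(prf_hz):
    """Aliasing begins when the Doppler shift frequency exceeds PRF/2."""
    return prf_hz / 2.0

def error_rate_pct_per_degree(angle_deg):
    """Linearized velocity-error rate from angle misestimation.

    Velocity varies as 1/cos(theta), so the relative error per degree
    of angle error is tan(theta) * (pi/180), expressed here in percent.
    """
    return math.tan(math.radians(angle_deg)) * (math.pi / 180.0) * 100.0

print(nyquist_limit_hz(8000.0))                      # 4000.0 Hz
print(round(error_rate_pct_per_degree(75.0), 1))     # 6.5 (%/degree at 75 degrees)
print(round(3 * error_rate_pct_per_degree(75.0), 1)) # 19.5 (% for a 3-degree error)
```

The steep growth of tan θ past 70° is exactly why lowering the carrier frequency is preferred over steepening the Doppler angle.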

A common source of DCFI confusion is mechanical vibration of soft tissues. For example, tissue vibrations can occur if a patient talks or if the blood flow happens to be producing a bruit, or noise.27 These tissue vibrations can fill an image of an artery or vein with color outside the vessel walls. The low-frequency pulse from the heart also can fill an abdominal image with a burst of color known as a flash artifact, which can cause problems for power Doppler as noted earlier.25

Applying the Technology to Real Images

Using these ideas of how the various color-flow systems work, we are now in a position to examine some examples of DCFI. They range from the depiction of flow within an M-mode recording to high-resolution imaging of the vascular system.

Fig. 1–42 shows the combination of an M-mode recording with DCFI. All motion in this image is referenced to the transducer as a closing velocity; that is, only as motion directly toward or away from the transducer. In this example, the Doppler frequencies have been calculated into closing velocities, which do not necessarily represent the true velocities of the cardiac blood flow. This system encodes the presence of spectral broadening by adding green to the primary directional colors. The green color flow in diastole demonstrates: (1) the presence of aortic regurgitation and (2) the ensuing turbulence as the left ventricle fills through an open mitral valve.


FIGURE 1–42. Color flow imaging of a mitral valve. The M-mode tracing views the mitral valve from the upper portion of the cardiac window, aimed down toward the mitral valve. Red is flow toward the transducer; blue is flow away. Closing velocity values (aliasing limits) appear on the color bar. The red flow between the interventricular septum and closed mitral valve is ejection through the left ventricular outflow tract. During ventricular filling, the blue-green flow along the anterior mitral valve leaflet is reversed, turbulent flow produced by an aortic regurgitation. Depth markers are at the right of the image, and an EKG trace provides timing at the bottom of the image. (Reproduced with permission from Philips Healthcare.)

Determining the true flow pattern is easier with a two-dimensional image of the heart. Fig. 1–43 provides a clear view of the heart in long axis, with the color flow confined to the heart’s chambers. Again, the Doppler frequency map is calculated into closing velocity values. This image shows a nonaxial regurgitating jet extending from the aortic root into the left ventricle. In this case, the color-flow images show not only the existence of the jet but also its nonaxial geometry.


FIGURE 1–43. Cardiac color flow imaging. This parasternal long-axis view of the heart shows (from the top down) the right ventricle, the interventricular septum, and the left ventricle containing the mitral valve leaflets. Red is flow toward the transducer; blue is away. Closing velocity values (aliasing limits) appear on the color bar. The color-imaging window (blue boundary) shows a nonaxial blue-green turbulent jet (regurgitation) originating at the aortic root. Without color, the nonaxial quality of the jet could not be easily determined. (Reproduced with permission from Philips Healthcare.)

Fig. 1–44 depicts a normal carotid artery imaged with an early synchronous DCFI system. The Doppler angle comes from a wedge standoff at the top of the image. The sampling rate in this image is at one-wavelength intervals (0.2 mm at 7.5 MHz) for both the gray-scale image and the color portion of the image. The internal flow pattern of the vessel shows a normal flow separation and reversal in the carotid bulb. The flow direction is from image right to left, away from the transducer, causing the vessel to appear red. The higher Doppler shift frequencies appear whiter, depicting the higher-velocity portions of the flow.


FIGURE 1–44. Color flow imaging of a carotid artery. The long-axis view of a normal carotid artery bifurcation shows the common carotid artery branching into internal (upper branch) and external (lower branch) carotid arteries. Flow is right to left, away from the transducer (red bar at the top of the image). The vessel above the carotid (blue) is the jugular vein. Within the carotid bulb is a normal flow separation and reversal (blue). Because higher Doppler shift frequencies are whiter, the major streamlines appear whiter in the image. The required Doppler angle comes from the wedge standoff (black triangular space at the top of the image). The peripheral dots are 1-cm markers, and the aliasing frequencies appear at the top and bottom of the color bar. (Reproduced with permission from Siemens Healthcare, Malvern, PA.)

The beam steering that is typical of asynchronous signal processing changes the organization of a similar image of a normal carotid artery. This system depicts different average Doppler shift frequencies in different color hues. As Fig. 1–45 shows, however, the image clearly depicts the flow separation and reversal (blue portion in the red vessel) that is typical of a normal carotid bulb. The beam steering in this image limits simultaneously showing flow and soft tissue anatomy throughout the image. Moving the color-flow processing window and changing the scanning position, however, permits a full interrogation of most vessels.


FIGURE 1–45. Color flow imaging of a carotid artery. The image shows the internal carotid artery and bulb. Flow in the artery (red) is from right to left, with a flow separation and reversal in blue. The major, nonaxial streamline (yellow) in the bulb is along the anterior wall. A small segment of aliasing (green) appears in the streamline. The more anterior blue vessel is the jugular vein. The Doppler angle in this image comes from beam steering. The white parallelogram shows the steering angle and the boundary for color signal processing in the image. The aliasing frequency limits appear on the color bar, with depth markers and a transmit focal point position on the right side of the image. (Reproduced with permission from Philips Healthcare.)

Any highly vascular tissue or structure is a good candidate for DCFI when trying to separate out ambiguous anatomy. Flow within major fetal vessels appears in Fig. 1–46. This image is formed with a convex linear array, producing a sector image pattern. Because of the complexity of the vascular anatomy and the changing angle between the vessels and the beams, the immediate color encoding does not always indicate arteries and veins. Instead, the vessel pulsatility and its position relative to internal anatomical landmarks tell the story. If the system frame rate is too slow, however, a single-point, pulsed Doppler spectrum will be the most reliable means of determining the pulsatile flow patterns of arteries and the steadier flow patterns of veins.


FIGURE 1–46. Color flow imaging of a fetal abdomen. This image shows the fetal aorta with renal and iliac branches. Flow away from the transducer is blue; flow toward the transducer is red. The peripheral dots are 1-cm markers, and the aliasing frequencies appear at the top and bottom of the color bar. A convex curved array forms a sector scanning field. (Reproduced with permission from Siemens Healthcare, Malvern, PA.)

When wall disease in a vessel or simply vascular anatomy disturbs the flow pattern, a single-point spectrum cannot show the source or character of the disturbance. High-resolution DCFI, however, as shown in Fig. 1–47, clearly depicts not only curving vessel walls, but also a major streamline moving across the vessel in response to flow inertia.


FIGURE 1–47. Color flow imaging of a common carotid artery. This artery has a curve that sends a major streamline (arrow) across the vessel lumen. Flow is from right to left toward the transducer. The upper red color bar represents motion toward the transducer. The flow deviation is about 15° steeper than the vessel wall. The vertical markers on the image left indicate 0.5-cm intervals. The aliasing closing velocities appear at the top and bottom of the color bar. (Reproduced with permission from Siemens Healthcare, Malvern, PA.)

In the presence of stenosis, the flow within the narrowing increases velocity. The narrowing also often appears in the image as a physical narrowing of the color distribution as the spatial signal processing maps a reduced lumen. Fig. 1–48 shows these two results, the narrowing of the color distribution and the acceleration (central green portion of the color in the lumen poststenosis) due to a carotid artery stenosis.


FIGURE 1–48. Color flow imaging of a carotid artery stenosis. This artery has a significant stenosis that is narrowing the flow channel (narrow color distribution), causing a poststenotic turbulence (mixed colors). Prestenotic flow is right to left, away from the transducer and colored red. The poststenotic flow is toward the transducer and colored blue. The markers on the image left are 0.5-cm intervals. The aliasing closing velocities appear at the top and bottom of the color bar. PLQ is a soft, anechoic plaque. STN is the narrowest region of the stenosis. (Reproduced with permission from Siemens Healthcare, Malvern, PA.)

When frame rates are high enough and the Doppler sampling is fine enough, we begin to see some of the subtler flow physiology. For example, each pulse in the vascular system is a mechanical wave that travels down the vessels to be reflected at changes in hydraulic impedance. Fig. 1–49 shows the intersection of two traveling pulse waves: The red portion is the incident wave, and the blue portion is the reflected wave. The color-flow image and the Doppler spectrum show the connection between the triphasic flow pattern of this high-resistance vessel and the passing of forward and reversed pulse waves within the artery.


FIGURE 1–49. Color flow imaging of a superficial femoral artery and vein. These normal vessels show typical flow patterns during the cardiac and respiratory cycle. Arterial flow (red) is from left to right, with a distally reflected pulse wave (blue) arriving to cross the incident pulse wave. The superficial femoral vein (blue posterior vessel) is flowing right to left as flow and color fill the vein’s residual lumen. This color flow image clearly shows that the reversal of flow in a triphasic or biphasic flow pattern comes from a traveling pulse wave. The peripheral dots are 1-cm markers, and the aliasing frequencies appear at the top and bottom of the color bar. The black space at the top of the image is a wedge standoff. (Reproduced with permission from Siemens Healthcare, Malvern, PA.)

Although power Doppler can fill an image with color, as in Fig. 1–50, the processing loses the information about flow direction. As a consequence, the carotid arteries and the jugular vein have the same color. As the sampling approaches the vessel walls, the signals are coded with decreasing brightness.


FIGURE 1–50. Power Doppler depiction of flow in a carotid artery. ECA is the external carotid; ICA is the internal carotid; PLQ is an anechoic plaque narrowing the carotid bulb. The depth marker on the image left shows 0.5-cm intervals. The upper right corner of the ROI includes a portion of the jugular vein flowing in the opposite direction to the carotid arteries, yet with the same color as the arteries. (Reproduced with permission from Siemens Healthcare, Malvern, PA.)


DCFI is a combination of gray-scale anatomical information and a colored depiction of flow events. It is an integrated image of form and function, anatomy, and physiology. The color portion of the image is not an image of blood, however; it is an image of motion. Using power Doppler, the color can depict the presence of flow with great sensitivity, but without some flow details.

Within each Doppler color-flow system, the amplitudes of echo signals become gray-scale intensities, while the frequency content of the signals becomes color. The echo signals for both may be the same or different, even in carrier frequency. This imaging modality depends on the sophistication and speed of contemporary digital signal processing. It also is an imaging modality that is changing and will continue to change the fields of ultrasound and medicine.


References

1. Kremkau FW. Diagnostic Ultrasound: Principles and Instruments. 7th ed. St. Louis: Saunders Elsevier; 2006.

2. Edelman SK. Understanding Ultrasound Physics. 3rd ed. Woodlands, TX: EPS Inc.; 2005.

3. Pinkney N. A Review of the Concepts of Ultrasound Physics and Instrumentation. 4th ed. Philadelphia: Sonior; 1983.

4. Bushong SC, Archer RB. Diagnostic Ultrasound: Physics, Biology, and Instrumentation. St. Louis: Mosby-Year Book; 1991.

5. Diagnostic Ultrasound: Test Equipment and Accessories. New York: Nuclear Associates, Catalog U-2; 1991: 2-3.

6. Halpern PG. A Naval History of World War I. Annapolis, MD: Naval Institute Press; 1994: 343.

7. Miele FR. Essentials of Ultrasound Physics: The Board Review Book. Forney, TX: Pegasus Lectures, Inc.; 2008.

8. Craig M. Essentials of Sonography and Patient Care. 2nd ed. St. Louis: Saunders Elsevier; 2006.

9. Odwin C, Fleischer AC, Keepie D. Probe covers and disinfectants for transvaginal transducers. J Diagnostic Med Sonogr 1989; 6:130-135.

10. Fish PJ. Multichannel, direction resolving Doppler angiography. Abstracts of 2nd European Congress of Ultrasonics in Medicine. 72, 1975.

11. Omoto R, ed. Color Atlas of Real-Time Two-Dimensional Doppler Echocardiography. Tokyo: Shindan-To-Chiryo; 1984.

12. Powis RL. Color flow imaging: understanding its science and technology. J Diagnostic Med Sonogr 1988; 4:236-245.

13. Gorcsan J. Tissue Doppler echocardiography. Curr Opin Cardiol 2000; 15:323-329.

14. Burns PN. Instrumentation and clinical interpretation of the Doppler spectrum: carotid and deep Doppler. In: Conventional & Color-Flow Duplex Ultrasound Course. Proc AIUM Spring Education Meeting. 1989; 29-38.

15. Persson AV, Powis RL. Recent advances in imaging and evaluation of blood flow using ultrasound. Med Clin North Am 1986; 70:1241-1252.

16. Ophir J, Maklad NF. Digital scan converters in diagnostic ultrasound imaging. Proc IEEE. 1979; 67:654-664.

17. Atkinson P, Woodcock JR. Doppler Ultrasound and Its Use in Clinical Measurement. New York: Academic Press; 1982.

18. Goldstein A, Powis RL. Medical ultrasonic diagnostics in ultrasonic instruments and devices: reference for modern instrumentation, techniques and technology. In: Mason WP, Thurston RN, eds. Physical Acoustics Series. Vol. 23A. New York: Academic Press; 1999.

19. Powis RL. Color flow imaging technology. In: Basic Science of Flow Measurement. Proc Syllabus AIUM 1989 Spring Education Meeting. 1989; 27-33.

20. Merritt CRB. Doppler color flow imaging. J Clin Ultrasound 1987; 15:591-597.

21. Havlice JF, Taenzer JC. Medical ultrasonic imaging: an overview of principles and instrumentation. Proc IEEE 1979; 67:620-641.

22. Powis RL, Powis WJ. A Thinker’s Guide to Ultrasonic Imaging. Baltimore: Urban & Schwarzenberg; 1984.

23. Baker DW, Daigle RE. Noninvasive ultrasonic flowmetry. In: Hwang NHC, Normann NA, eds. Cardiovascular Flow Dynamics and Measurements. Baltimore: University Park Press; 1977.

24. McDicken WN. Diagnostic Ultrasonics: Principles and Use of Instruments. 2nd ed. New York: John Wiley & Sons; 1981.

25. Murphy KJ, Rubin JM. Power Doppler: it's a good thing. Semin Ultrasound CT MR 1997; 18:13-21.

26. Goldstein A. Range ambiguities in real-time ultrasound. J Clin Ultrasound 1981; 9:83-90.

27. Middleton WD, Erickson S, Melson GL. Perivascular color artifact: pathologic significance and appearance on color Doppler US images. Radiology 1989; 171:647-652.

28. Shelly G, Cashman T. Computer Fundamentals for an Information Age. Brea, CA: Anaheim Publishing; 1984.


Answers and Explanations