Since the discovery of x-rays, imaging has been a vital component of clinical medicine. Historically, imaging modalities have often been divided into two general categories: structural (or anatomical) and functional (or physiologic). Anatomical modalities, depicting primarily morphology with high spatial resolution, include x-rays (plain radiography), computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US). Functional modalities, depicting primarily information related to underlying metabolism and biochemistry, include (planar) scintigraphy, single-photon emission computed tomography (SPECT), positron emission tomography (PET), magnetic resonance spectroscopic imaging (MRSI), functional magnetic resonance imaging (fMRI), and now, optical (bioluminescence and fluorescence) imaging. Importantly, however, the traditional distinction between anatomical and functional imaging modalities is increasingly arbitrary, as dynamic and/or static CT, MRI, and US imaging may be performed following administration of a blood flow or molecularly targeted contrast agent and functional images derived. The functional modalities form the basis of the rapidly advancing field of “molecular imaging,” defined as the direct or indirect noninvasive monitoring and recording of the spatial and temporal distribution of in vivo molecular, genetic, and/or cellular processes for biochemical, biologic, diagnostic, or therapeutic applications.1 Focusing on nuclear and optical modalities, the current chapter reviews the rapidly advancing technologies of multimodality, small-animal, and intraoperative imaging as applied to in vivo visualization and characterization of tumors.
Information derived from multiple modalities is often complementary, for example, localizing the site of an apparently abnormal metabolic process to a pathologic structure such as a tumor. Integration of such complementary information may be helpful and, in a clinical setting, even critical. In addition to anatomic localization of “signal” foci, registration and fusion of multimodality images provide a number of important advantages: intra- as well as intermodality corroboration of diverse images; more accurate and more certain diagnostic and treatment-monitoring information; image guidance of external-beam radiation therapy; and potentially, more reliable internal radionuclide dosimetry (e.g., in the form of radionuclide image-derived “isodose” contours superimposed on images of the pertinent anatomy). Differences in image size and dynamic range, voxel dimensions and depth, image orientation, subject position and posture, and information quality and quantity often make it difficult, however, to unambiguously colocate areas of interest in multiple image sets. Following alignment of the respective images in a common coordinate system (a procedure referred to as registration), fusion is required for the integrated display of these aligned images. The objectives of image registration and fusion of multimodality images, therefore, are (A) to appropriately modify the format, size, position, and even shape of one or both image sets to provide a point-to-point correspondence between images and (B) to provide a practical integrated display of the images thus aligned.
The image registration and fusion process2,3 is illustrated diagrammatically and in general terms in Figure 43.1. The first step in the registration of multiple image sets is reformatting of one image set (the “floating,” or secondary, image) to match that of the other image set (the reference, or primary, image). (Alternatively, both image sets may be transformed to a new, common image format.) Three-dimensional (3D), or tomographic, image sets are characterized by: the dimensions (i.e., the length [δX], width [δY], and height [or axial dimension] [δZ]) of each voxel; the image matrix (i.e., X × Y × Z, where X is the number of rows and Y the number of columns in each tomographic image and Z is the number of tomographic images [or slices]); and the image depth (e.g., in bytes), which defines the dynamic range of signal displayable in each voxel (e.g., a word-mode, or two-byte-“deep,” PET or SPECT image can display up to 2^16 = 65,536 counts per voxel). The foregoing image parameters are provided in the image “header,” a block of alphanumeric data which may either be in a stand-alone text file associated with the image file or incorporated into the image file itself. Among the image sets to be registered, either the finer matrix is reformatted to the coarser matrix by combining voxels or the coarser matrix is reformatted to the finer matrix by interpolation of voxels. One of the resulting 3D image sets is then magnified or minified to yield primary and secondary images with equal voxel dimensions. Finally, the “deeper” image is rescaled to match the depth of the “shallower” matrix. Usually, the higher-spatial-resolution and finer-matrix structural (e.g., CT or MR) image is the primary image and the functional (e.g., PET or SPECT) image is the secondary image.
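The matrix-reformatting step can be sketched in a few lines of code. The following is a minimal illustration only, using nearest-neighbor interpolation and a hypothetical function name (`resample`); production software typically uses trilinear interpolation when refining a matrix and averages voxels when combining to a coarser one:

```python
def resample(volume, src_voxel, dst_voxel):
    """Nearest-neighbor resampling of a 3D volume (nested lists, indexed
    volume[z][y][x]) from source to destination voxel dimensions, so that
    two image sets end up with equal voxel sizes. The physical extent of
    the volume is held fixed; only the matrix size changes."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    mz = max(1, round(nz * src_voxel[0] / dst_voxel[0]))
    my = max(1, round(ny * src_voxel[1] / dst_voxel[1]))
    mx = max(1, round(nx * src_voxel[2] / dst_voxel[2]))
    out = [[[0] * mx for _ in range(my)] for _ in range(mz)]
    for k in range(mz):
        for j in range(my):
            for i in range(mx):
                # Map each destination voxel back to its nearest source voxel.
                sz = min(nz - 1, int(k * dst_voxel[0] / src_voxel[0]))
                sy = min(ny - 1, int(j * dst_voxel[1] / src_voxel[1]))
                sx = min(nx - 1, int(i * dst_voxel[2] / src_voxel[2]))
                out[k][j][i] = volume[sz][sy][sx]
    return out
```

For example, a 2 × 2 × 2 volume of 4-mm voxels resampled to 2-mm voxels yields a 4 × 4 × 4 matrix covering the same physical extent.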
The second step in image registration is the actual transformation (translation, rotation, and/or deformation [warping]) of the reformatted secondary image set to spatially align it, in three dimensions, with the primary image set. The third and fourth steps are, respectively, evaluation of the accuracy of the registration of the primary and transformed secondary images and adjustment, iteratively, of the secondary image transformation until the registration (i.e., the goodness-of-alignment metric) is optimized. The fifth and final step is image fusion, the integrated display of the registered images.
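Steps two through four (transform, evaluate, iterate) amount to an optimization loop over candidate transformations. A toy sketch, restricted to in-plane translation and a sum-of-squared-differences metric for simplicity; the function names (`shift`, `ssd`, `register_translation`) are illustrative, not from any particular package:

```python
def shift(img, dx, dy, fill=0):
    """Translate a 2D image (list of rows) by (dx, dy) voxels, zero-filling
    the edges exposed by the translation."""
    ny, nx = len(img), len(img[0])
    out = [[fill] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            sy, sx = y - dy, x - dx
            if 0 <= sy < ny and 0 <= sx < nx:
                out[y][x] = img[sy][sx]
    return out

def ssd(a, b):
    """Sum-of-squared voxel intensity differences: one possible
    goodness-of-alignment metric (zero for identical, aligned images)."""
    return sum((va - vb) ** 2 for ra, rb in zip(a, b) for va, vb in zip(ra, rb))

def register_translation(primary, secondary, search=3):
    """Exhaustively evaluate candidate translations of the secondary image
    and keep the one optimizing (here, minimizing) the metric. Real
    registration software uses gradient-based or other iterative optimizers
    over the full rigid or nonrigid parameter space."""
    best_score, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = ssd(primary, shift(secondary, dx, dy))
            if best_score is None or score < best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift
```

A secondary image displaced by (1, 2) voxels is recovered by the inverse translation (-1, -2).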
In both clinical and laboratory settings, there are two practical approaches to image registration: “software” and “hardware” approaches.2,3 In the software approach, images are acquired on separate devices, imported into a common image-processing computer platform, and registered and fused using the appropriate software. In the hardware approach, images are acquired on a single, multimodality device and transparently registered and fused with the device’s integrated software. Both approaches are dependent on software sufficiently robust to recognize and import diverse image formats. The availability of industry-wide standard formats, such as the ACR-NEMA DICOM standard (i.e., the American College of Radiology [ACR]–National Electrical Manufacturers Association [NEMA] for Digital Imaging and Communications in Medicine [DICOM] standard),4,5 is therefore critical.
The fusion of multimodality image sets may be as simple as simultaneous display of images in a juxtaposed format. A more common, and more useful, format is an overlay of the registered images, where one image is displayed in one color table and the second image in a different color table. Typically, the intensities of the respective color tables as well as the “mixture” of the two overlaid images can be adjusted. Adjustment (e.g., with a software slider) of the mixture allows the operator to interactively vary the overlay so that the designated screen area displays only the first image, only the second image, or some weighted combination of the two images, each in its respective color table.
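At its core, the adjustable “mixture” described above is a weighted average of the two registered images. A minimal grayscale sketch (actual fusion displays first map each image through its own color table before blending; the function name `fuse` is illustrative):

```python
def fuse(img_a, img_b, mix):
    """Weighted overlay of two registered, equally sized grayscale images.
    mix = 0 displays only image A, mix = 1 only image B, and intermediate
    values a weighted combination of the two (the software 'slider')."""
    return [[(1.0 - mix) * a + mix * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]
```

With mix = 0.5, each displayed voxel is simply the mean of the two registered voxel intensities.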
FIGURE 43.1. The image registration and fusion process. See text for details. (From Zanzonico P. Multimodality image registration and fusion. In: Dhawan AP, Huang HK, Kim DS, eds. Principles and Advanced Methods in Medical Imaging and Image Analysis. Singapore: World Scientific Publishing Co.; 2008:413–435. Copyright 2008, with permission.)
Software Image Registration
Software-based transformations of the secondary image set to spatially align it with the primary image set are commonly characterized as either “rigid” or “nonrigid.”6–8 In a rigid transformation, the secondary image is only translated and/or rotated with respect to the primary image. The Euclidean distance between any two points (i.e., voxels) within an individual image set thus remains constant. In nonrigid, or deformable, transformations (commonly known as “warping”), selected subvolumes within the image set may be expanded or contracted and/or their shapes altered. Translations and/or rotations may be performed as well. Such warping is therefore distinct from any magnification or minification performed in the reformatting step, where distances between points all change by the same relative amount. Unlike rigid transformations, which may be either manual or automated, nonrigid transformations are generally automated.
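The defining property of a rigid transformation, preservation of inter-point Euclidean distances, is easy to verify numerically. A small two-dimensional sketch (helper names are illustrative):

```python
import math

def rigid_2d(points, angle, tx, ty):
    """Apply a 2D rigid transformation (rotation by `angle` radians,
    followed by translation [tx, ty]) to a list of (x, y) points. No
    scaling or warping occurs, so all inter-point distances are preserved."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for (x, y) in points]

def distance(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

Rotating and translating a pair of points by any amount leaves the distance between them unchanged, which is precisely what distinguishes rigid registration from warping.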
Registration transformations are often based on alignment of specific landmarks visible in the image sets; this is sometimes characterized as the “feature-based” approach.6–8 Such landmarks may be either intrinsic (i.e., one or more well-defined anatomic structure[s] or the body contour [or surface outline]) or extrinsic (i.e., one or more fiducial markers placed in or around the subject). Feature-based registration generally requires some sort of preprocessing “segmentation” of the image sets being aligned, that is, identification of the corresponding features (e.g., fiduciary markers) of the image sets. Feature-based image registration algorithms may be automated by minimization of the difference(s) in position of the pertinent feature(s) between the image sets being aligned.
Other registration algorithms are based on analysis of voxel intensities (e.g., counts in a PET or SPECT image) and are characterized as “intensity-based” approaches.6–8 These include alignment of the respective “centers of mass” (e.g., counts) and orientation (i.e., principal axes) calculated for each image set; minimization of absolute or sum-of-square voxel intensity differences between the image sets; cross-correlation (i.e., maximizing the voxel intensity correlation between the image sets); minimization of variance (i.e., matching of identifiable homogeneous regions in the respective image sets); and matching of voxel intensity histograms.7 Such intensity-based approaches implicitly assume that the voxel intensities in the images being aligned represent the same, positively correlated parameters (e.g., counts) and thus are directly applicable only to intramodality image registration. As illustrated in Figure 43.2, showing sequential PET brain images of the same patient,7 misalignment of the image sets produces visualizable structure in the difference images (bottom row of Figure 43.2A), that is, the voxel-by-voxel intensity differences are not zero. In contrast, accurate registration yields difference images whose voxel-by-voxel intensity differences are equal to zero within statistical uncertainty (or “noise”) and therefore an absence of visualizable structure (bottom row of Fig. 43.2B). Alternatively, for two image sets A and B, a two-dimensional (2D) joint histogram (also known as the “feature space”) (Fig. 43.3)7 can be constructed by plotting, for each combination of intensity a in image A and intensity b in image B, the point (a, b) whose intensity reflects the number of occurrences of the combination of intensities a and b. Thus, a more intense (or darker) point in the joint histogram indicates a larger number and a less intense (or lighter) point indicates a smaller number of occurrences of the combination (a, b). 
When two identical image sets are aligned (matched), all voxels coincide and the plot in the voxel intensity histogram is the line of identity (i.e., a = b for all voxels). As one of the image sets is rotated relative to the other (illustrated in Fig. 43.3 by rotations of 10 degrees and then 20 degrees), for example, the joint histogram becomes increasingly blurred (i.e., dispersed). Alignment of the images can therefore be achieved by minimizing the dispersion in the joint intensity histogram. Like other intensity-based approaches, this approach is most readily adaptable to intramodality image sets but in principle can be applied to intermodality images by appropriate mapping of one image intensity scale to the other intensity scale (Fig. 43.4).8
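Constructing the joint histogram itself is straightforward. A minimal sketch for two same-size grayscale images (the bin count and intensity range chosen here are arbitrary):

```python
def joint_histogram(img_a, img_b, n_bins=8, max_val=255):
    """2D joint intensity histogram ('feature space') of two equally sized
    grayscale images: entry [i][j] counts voxels falling in intensity bin i
    of image A and bin j of image B. For identical, perfectly aligned
    images only the diagonal (line of identity) is populated; misalignment
    disperses the entries off the diagonal."""
    h = [[0] * n_bins for _ in range(n_bins)]
    scale = n_bins / (max_val + 1)
    for ra, rb in zip(img_a, img_b):
        for a, b in zip(ra, rb):
            h[int(a * scale)][int(b * scale)] += 1
    return h
```

Registering by minimizing the dispersion of this histogram then amounts to searching the transformation parameters, as in the optimization loop sketched earlier.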
Another widely used automated registration algorithm is based on the statistical concept of mutual information (MI),8,9 also known as transinformation or relative entropy. The MI of two random variables A and B is a quantity that measures the statistical dependence of the two variables, that is, the amount of information that one variable contains about the other. MI measures the information about A that is shared by B. If A and B are independent, then A contains no information about B and vice versa and their MI is therefore zero. Conversely, if A and B are identical, then all information conveyed by A is shared with B and their MI is maximized. Accurate spatial registration of two such image sets thus results in maximization of their MI and vice versa. As illustrated in Figure 43.5 for registration of a brain MR image with itself,8 the joint histogram of two images changes as the alignment of the images changes. When the images are registered, corresponding signal foci overlap and the joint histogram will show certain clusters of grayscale values. As images become increasingly misaligned (illustrated in Figure 43.5 with rotations of 2, 5, and then 10 degrees of the brain MRI relative to the original image), signal foci will increasingly overlap with foci that are not their respective counterparts on the original image. Consequently, the cluster intensities for corresponding signal foci (e.g., skull and skull, brain and brain, etc.) will decrease and new noncorresponding combinations of grayscale values (e.g., of skull and brain) will appear. The joint histogram will thus become more dispersed; as described above, minimization of this dispersion is the basis of certain intensity-based registration algorithms. At the same time, the MI (Equation 2), which is maximized when the two images are aligned, will decrease.
However, unlike other intensity-based approaches, no assumptions are made in the MI approach regarding the nature of the relationship between image intensities (e.g., a positive or a negative correlation). MI is thus a completely general goodness-of-alignment metric and can be applied to inter- as well as intramodality registration and automatically without prior segmentation.
FIGURE 43.2. Intramodality image registration based on minimization of voxel intensity differences. A: Selected brain images of sequential misaligned (i.e., nonregistered) PET studies of the same patient, with the section-by-section difference images in the bottom row. B: The same image sets as in (A), now aligned by minimization of the voxel-by-voxel intensity differences. (From Hutton BF, Braun M, Thurfjell L, et al. Image registration: An essential tool for nuclear medicine. Eur J Nucl Med Mol Imaging. 2002;29:559–577. With kind permission from Springer Science+Business Media.)
FIGURE 43.3. Intramodality image registration based on matching of voxel intensity histograms. The joint intensity histograms of a transverse-section brain MR image with itself when the two image sets are originally matched (i.e., aligned) and when misaligned by counterclockwise rotations of 10 and 20 degrees, respectively. See text for details. (From Hutton BF, Braun M, Thurfjell L, et al. Image registration: An essential tool for nuclear medicine. Eur J Nucl Med Mol Imaging. 2002;29:559–577. With kind permission from Springer Science+Business Media.)
The concepts of entropy and MI are developed more formally below. Given “events” (e.g., grayscale values) e1, e2, …, en with probabilities (i.e., frequencies of occurrence) p1, p2, …, pn in an image set, respectively, the entropy (specifically, the so-called “Shannon entropy”) H is defined as follows8:

H = −Σi pi log pi = Σi pi log(1/pi) (1)
The term log(1/pi) indicates that the amount of information provided by an event is inversely related to the probability (i.e., frequency) of that event: The less frequent an event, the more significant is its occurrence. The information per event is thus inversely weighted by the frequency of its occurrence. The uniform “background” (eBG) occupies a large portion of a CT image (i.e., pBG is large), for example, and therefore contributes relatively little information (i.e., log(1/pBG) is small); it would not contribute substantially to accurate alignment with an MR image. The Shannon entropy is also a measure of the uncertainty of an event. When all events (e.g., all grayscale values in an image) are equally likely to occur (as in a highly heterogeneous image), the entropy is maximal. When an event or a range of events is more likely to occur (as in a uniform image), the entropy is minimal. In addition, the entropy is a measure of dispersion of an image’s probability distribution (i.e., the probability of a grayscale value versus the grayscale value): a highly heterogeneous image has a broad dispersion and a high entropy, whereas a uniform image has no dispersion and minimal entropy. Entropy thus has several interpretations: the information content per event (e.g., grayscale value), the uncertainty per event, and the statistical dispersion of events in an image.
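The Shannon entropy of an image's grayscale histogram can be transcribed directly into code. A minimal sketch (base-2 logarithm, so entropy is in bits):

```python
import math

def shannon_entropy(image):
    """Shannon entropy H = -sum_i p_i * log2(p_i) of an image's grayscale
    histogram: zero for a constant (uniform) image and maximal when all
    grayscale values occur with equal frequency."""
    counts = {}
    n = 0
    for row in image:
        for v in row:
            counts[v] = counts.get(v, 0) + 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant image has entropy 0; an image whose voxels take two equally likely values has entropy of exactly 1 bit.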
FIGURE 43.4. An intermodality (CT and MR) joint intensity histogram. The featureless (i.e., uniform) area corresponding to brain tissue in the transverse-section head CT image (left panel), in contrast to the anatomic detail in the corresponding area of the MR image (middle panel), yields a distinct vertical cluster (arrow) in the CT-MR joint histogram (right panel). (Reprinted from Maintz JB, Viergever MA. A survey of medical image registration. Med Image Anal. 1998;2:1–36, with permission from Elsevier.)
FIGURE 43.5. Effect of misregistration on joint intensity histograms and mutual information (MI) between a transverse-section brain MR image (top row) and itself. Shown are the joint intensity histograms and MI (middle row)when the two image sets are originally matched (i.e., aligned) and when misaligned by clockwise rotations of 2, 5, and 10 degrees, respectively (bottom row). See text for details. (Reprinted from Maintz JB, Viergever MA. A survey of medical image registration. Med Image Anal. 1998;2:1–36, with permission from Elsevier.)
For two images A and B, MI(A, B) may be defined as follows:
MI(A,B) = H(B) - H(B | A) (2)
where H(B) is the Shannon entropy of image B (derived from the probability distribution of its grayscale values) and H(B|A) is the conditional entropy of image B with respect to image A (derived from the conditional probabilities p[b|a], the probability of grayscale value b occurring in image B given that grayscale value a occurs in the corresponding voxel in image A). When interpreting entropy in terms of uncertainty, MI(A, B) thus corresponds to the uncertainty in image B minus the uncertainty in image B when image A is known. Intuitively, therefore, MI(A, B)—the image-B information in image A—is the amount by which the uncertainty in image B decreases when image A is given. Because images A and B are interchangeable, MI(A, B) is also the information image B contains about image A and is therefore mutual information. Registration thus corresponds to maximizing MI; the amount of information images have about each other is maximized when, and only when, they are aligned. If a subject is imaged by two different modalities, there is presumably considerable MI between the spatial distributions of the respective signals in the two image sets no matter how diverse (i.e., unrelated) they may appear to be. For example, the distribution of 18F-fluorodeoxyglucose (FDG) visualized in a PET scan is, at some level, dictated by (i.e., dependent on) the distribution of different tissue types imaged by CT.
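In practice, MI is conveniently computed from the joint intensity histogram using the algebraically equivalent identity MI(A, B) = H(A) + H(B) − H(A, B), where H(A, B) is the entropy of the joint distribution; this equals the H(B) − H(B|A) form of Equation 2. A minimal sketch (illustrative names only):

```python
import math

def _entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """MI(A, B) = H(A) + H(B) - H(A, B), computed from a joint intensity
    histogram (a 2D list of counts). The marginal distributions of A and B
    are obtained by summing the joint histogram along rows and columns."""
    total = sum(sum(row) for row in joint)
    p_joint = [c / total for row in joint for c in row]
    p_a = [sum(row) / total for row in joint]          # marginal of image A
    p_b = [sum(col) / total for col in zip(*joint)]    # marginal of image B
    return _entropy(p_a) + _entropy(p_b) - _entropy(p_joint)
```

A purely diagonal joint histogram (identical, aligned images) yields maximal MI, whereas a uniform joint histogram (statistically independent images) yields MI of zero, consistent with the definitions above.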
Regardless of the algorithm employed, the evaluation and adjustment of the registration requires some metric of its accuracy. It may be as simple as visual (i.e., qualitative) inspection of the aligned images and a judgment by the operator that the registration is or is not “acceptable.” A more objective, and ideally quantitative, evaluation of the accuracy of the registration is, of course, preferred. One goodness-of-alignment metric, for example, is the sum of the Euclidean distances between corresponding fiduciary markers (or anatomic landmarks) in the two image sets; the optimum alignment corresponds to the transformation yielding the minimum sum of distances. Another similarity metric, as discussed above, is the MI: when the MI between the two image sets is maximized, they are optimally aligned.
Hardware Image Registration: PET-CT and SPECT-CT
Hybrid devices such as PET-CT and SPECT-CT scanners overcome many of the practical difficulties associated with multimodality image registration and fusion. The major manufacturers of PET, SPECT, and CT scanners now also market such multimodality scanners,10–14 combining state-of-the-art PET or SPECT and CT scanners in a single device. These instruments provide near-perfect registration of images of in vivo function (PET or SPECT) and anatomy (CT) using a measured, and presumably fixed, rigid transformation between the image sets. These devices have already had a major impact on clinical practice, particularly in oncology15; PET-CT devices, for example, have virtually eliminated “PET-only” scanners from the clinic. Although generally encased in a single seamless housing, the PET and CT gantries in such multimodality devices are separate; the respective fields of view are separated by a distance of the order of 1 m and the PET and CT scans are performed sequentially (Figs. 43.6 and 43.7).
FIGURE 43.6. Schematic diagram (side view) of typical commercially available PET-CT scanner. (From Townsend DW, Carney JP, Yap JT, et al. PET/CT today and tomorrow. J Nucl Med. 2004;45 suppl 1:4S–14S, with permission.)
In addition to PET-CT scanners, SPECT-CT scanners16,17 are now commercially available. The design of SPECT-CT scanners is similar to that of PET-CT scanners in that the SPECT and CT gantries are separate and the SPECT and CT scans are acquired sequentially, not simultaneously. In such devices, the separation of the SPECT and CT scanners is more apparent (Fig. 43.8) because the rotational and other motions of the SPECT detectors effectively preclude encasing them in a housing with the CT scanner. Multimodality imaging devices for small animals (i.e., rodents)—PET-CT, SPECT-CT, and even SPECT-PET-CT devices—are now commercially available as well.3,18
Multimodality devices simplify image registration and fusion— conceptually as well as logistically—by taking advantage of the fixed geometric arrangement between the PET and CT scanners or the SPECT and CT scanners in such devices. Further, because the time interval between the sequential scans is short (i.e., a matter of minutes) and the subject remains in the same imaging position, it is unlikely that a subject’s geometry will change significantly between the PET or SPECT scan and the CT scan. Accordingly, a rigid transformation matrix (i.e., translations and rotations in three dimensions) can be used to align the PET or SPECT and the CT image sets. This matrix can be measured using a “phantom,” that is, an inanimate object with PET- or SPECT- and CT-visible landmarks arranged in a well-defined geometry. The transformation matrix required to align these landmarks can then be stored and used to automatically register all subsequent multimodality studies, since the device’s geometry and therefore this matrix should be fixed.
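Applying the stored, phantom-measured transformation is then a single matrix-vector multiplication in homogeneous coordinates. A minimal sketch (the 4 × 4 matrix used in the example is a pure translation, purely for illustration):

```python
def apply_transform(matrix, point):
    """Map a 3D point through a stored 4x4 homogeneous rigid transformation
    matrix (e.g., a phantom-measured PET-to-CT transformation), returning
    the point's coordinates in the other modality's frame. The fourth
    homogeneous coordinate (1.0) carries the translation terms."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(m * c for m, c in zip(row, v)) for row in matrix[:3])
```

Because the scanner geometry is fixed, the same matrix can be reused for every subsequent study without per-study optimization.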
An important application of PET-CT in oncology, determination of the extent of disease, and its impact on patient management are illustrated in Figure 43.9.19 18F-FDG, by far the most widely used PET radiotracer, has profoundly impacted the clinical management of cancer patients. However, reading PET and CT images separately or in the juxtaposed format shown (Fig. 43.9A and B) makes it difficult to definitively identify the anatomic site (i.e., tumor versus normal structure) of the focus of activity in the neck. The overlaid PET and CT images (Fig. 43.9C), on the other hand, unambiguously demonstrate that the FDG activity is located within muscle, a normal physiologic variant. The FDG activity in the neck thus did not represent previously undetected disease; had it done so, the finding would have significantly altered the subsequent clinical management of the patient.
Figure 43.10 illustrates an experimental (i.e., preclinical) application of SPECT-CT. A debilitating and painful consequence of advanced prostate and other cancers is the development of skeletal metastases. At the same time, it has been observed that many cancer cell lines as well as primary human tumors synthesize bombesin, which appears to act in an autocrine fashion, via membrane bombesin receptors, to stimulate the growth of the tumor cells from which it originated. In a preclinical model in mice, Winkelmann et al.20 demonstrated that SPECT-CT of a bombesin receptor (BB2)–binding radioligand, indium-111 (111In)-DOTA-8-Aoc-BBN(7-14)NH2, can provide a combined structural and functional map—skeletal anatomy and bombesin receptor status—of metastatic bone lesions, as illustrated in Figure 43.10. By directly targeting metastatic tumor cells in bone using a specific (e.g., bombesin) receptor-binding radioligand, rather than observing by CT the secondary effect of osteolysis, more sensitive and specific early diagnosis of skeletal metastases may be possible.
FIGURE 43.7. A typical imaging protocol for a combined PET-CT study: (A) the topogram, or scout CT scan, for positioning; (B) the CT scan; (C) generation of CT-based PET attenuation correction factors; (D) the PET scan over the same longitudinal range of the patient as the CT scan; (E) reconstruction of the attenuation-corrected PET emission data; (F) the attenuation-corrected PET images; and (G) display of the final fused PET-CT images. (From Yap JT, Carney JP, Hall NC, et al. Image-guided cancer therapy using PET/CT. Cancer J. 2004;10:221–33, with permission from Lippincott Williams & Wilkins.)
FIGURE 43.8. Photograph of a clinical SPECT-CT scanner (Precedence, Philips Medical). Note that, in contrast to PET-CT scanners, the subsystems are not encased in the same housing, so as to allow the rotational and other motions of the SPECT subsystem.
Hardware Image Registration: PET-MRI and SPECT-MRI
Multimodality imaging is now well established in routine clinical practice. As noted, new PET installations consist almost exclusively of combined PET-CT scanners rather than PET-only systems. However, PET-CT and SPECT-CT have certain notable shortcomings, including the inability to perform simultaneous data acquisition, the significant radiation dose to the patient contributed by CT, and the inability of CT to distinguish different soft tissues.21–23 Compared to CT, MRI provides not only better contrast among soft tissues but also functional-imaging capabilities. The combination of PET or SPECT with MRI may thus provide many advantages which go well beyond simply combining functional PET or SPECT information with structural MRI information. Among multimodality imaging studies, therefore, PET- or SPECT-MRI may ultimately provide the greatest yield of information by combining the quantitative molecular imaging capabilities of PET or SPECT (including the large number and variety of radiotracers) with the excellent anatomic resolution, marked soft tissue contrast, and functional imaging capabilities provided by MRI (e.g., perfusion by dynamic contrast-enhanced [DCE] imaging) and MRSI (e.g., quantitation of regional concentrations of metabolites such as lactate, citrate, and choline).21–23
As noted, an important advantage of PET- or SPECT-MRI over other multimodality imaging studies is that the image data can be acquired simultaneously (as illustrated in Figure 43.11) because the PET or SPECT and MR imaging signals (i.e., γ- or x-rays and radiofrequency [RF] waves, respectively) do not interfere with one another. In contrast, for PET- or SPECT-CT, the respective image data are acquired sequentially because the PET or SPECT and CT signals (i.e., γ- or x-rays) are similar and largely indistinguishable. Implicit in the registration of such sequentially acquired images is the assumption that the subject is morphologically and functionally stable over the time interval between and during the image acquisitions. Of course, registration of sequentially acquired functional and structural images is very forgiving in this respect, given the short time interval (typically only several minutes) between the PET or SPECT and CT scans: anatomy does not actually change over such short time intervals (except, perhaps, for filling of the urinary bladder, transit of gas through the bowel, and so on). Registration of sequentially acquired functional images, however, is potentially more problematic, as functional properties, such as blood flow, hypoxia, neuronal activation, etc. may change transiently over time frames of minutes and even seconds.
FIGURE 43.9. Registered and fused FDG PET and CT scans of a patient with lung cancer and an adrenal gland metastasis. A: Coronal PET images show increased FDG uptake in the primary lung tumor (single arrow in left panel) and in the metastasis in the left adrenal gland (double arrow in left panel) but also in an area in the left side of the neck (arrow in right panel). B: Transaxial PET and CT images through the focus of activity in the neck. C: The registered PET-CT images, using the fused, or overlayed, display. The arrow identifies the location in the neck of this unusual, but nonpathologic, focus of FDG activity on the fused images. (From Schöder H, Erdi Y, Larson SM, et al. PET/CT: A new imaging technology in nuclear medicine. Eur J Nucl Med Mol Imaging. 2003;30(10):1419–1437, with permission.)
FIGURE 43.10. A preclinical SPECT-CT study in a tumor-bearing mouse. A: Progression of two metastases in the tibia (arrows) visualized by CT surface renderings (top row) and sagittal images (bottom row) following intracardiac injection of PC3 prostate tumor cells. B: Pinhole SPECT image obtained at 1 hour post injection of 111In-DOTA-8-AOC-BBN(7-14) NH2. C: Overlay of anatomic CT images with the SPECT image, showing that the foci of radiotracer uptake correspond to the lytic bone metastases. D: Photomicrograph of the histopathology of the two bone metastases. (From Winkelmann CT, Figueroa SD, Sieckman GL, et al. Non-invasive MicroCT imaging characterization and in vivo targeting of BB2 receptor expression of a PC-3 bone metastasis model. Mol Imaging Biol. 2012;14:667–675. With kind permission from Springer Science+Business Media.)
FIGURE 43.11. A: Photograph (upper panel) of a conventional block detector used in PET scanners, with a pixelated scintillation crystal backed by PMTs. Crystal maps (i.e., images produced by irradiation of the block detector) in the absence of a magnetic field (lower left panel), clearly showing the uniform response of the crystal elements (with one focus of counts per crystal element), and showing the gross distortion of the response of the detector in the presence of a 7-T magnetic field (lower right panel). B: Photograph (upper panel) of the components of a block detector for PET again using a scintillation crystal (upper left panel) but avalanche photodiodes (APDs; upper left panel) in place of PMTs. Crystal maps showing the comparably uniform response of the APD-based block detector in the absence of a magnetic field (lower left panel) and in the presence of a 7-T magnetic field (lower right panel). (From Pichler BJ, Kolb A, Nägele T, et al. PET/MRI: Paving the way for the next generation of clinical multimodality imaging applications. J Nucl Med. 2010;51:333–336, with permission.) C: Photographs of an APD-based PET insert for PET-MRI (lower left panel) and of the components of the detector assembly (upper left panel) and a drawing of the PET insert in place within the magnet. (From Catana C, Wu Y, Judenhofer MS, et al. Simultaneous acquisition of multislice PET and MR images: Initial results with a MR-compatible PET scanner. J Nucl Med. 2006;47:1968–1976, with permission.)
Many technical challenges, related to possible interference between the modalities’ hardware, have to be solved when combining PET or SPECT and MRI.21,23–31 Most notably, conventional PET and SPECT detectors are based on photomultiplier tubes (PMTs), which do not operate properly in the presence of a magnetic field (Fig. 43.11), and various approaches have been pursued to overcome this and other challenges in combining nuclear and MR modalities. The most straightforward approach is placement of the PET or SPECT and MR scanners in series in a manner analogous to current PET-CT and SPECT-CT devices. However, this would require magnetic shielding of the nuclear scanner and/or imposition of a relatively large distance between the PET or SPECT and MR scanners. (For SPECT-MR scanners, physical separation of the SPECT and MR hardware has been pursued but has been restricted to relatively low [i.e., 0.1 T] field strengths.32) Further, because data acquisition for PET, SPECT, and MRI individually is time consuming, sequential imaging may result in prohibitively long overall study times (well over 1 hour) in a clinical setting. Space constraints are also a consideration. Most importantly, physical separation of the scanners would eliminate the ability to perform the nuclear and MR scans simultaneously, which, as noted, is perhaps the most compelling feature of such hybrid devices, as illustrated in Figure 43.12. The preferable approach, therefore, is integration of the PET or SPECT hardware into the MR gantry. As discussed below, this has been accomplished in both PET-MR and SPECT-MR scanners.
Because of its various hardware components, the PET or SPECT subsystem can interfere with the performance of the MRI subsystem by compromising the homogeneity of the MRI’s main magnetic field and the RF field and thereby degrade MR image quality. Bismuth germanate (BGO) and lutetium oxyorthosilicate (LSO) crystals, for example, produce only minor magnetic-field distortion, whereas gadolinium orthosilicate (GSO) and lutetium gadolinium orthosilicate (LGSO) have sharply different magnetic susceptibilities from that of tissue and thus markedly distort MR images. At the same time, the variable MR gradients may induce eddy currents in conductive materials of the nuclear detectors and distort the effective applied gradient field. As noted, the effects of the MR subsystem on the performance of the PET or SPECT subsystem are perhaps even more severe. High magnetic fields exclude the use of the PMTs employed in conventional PET and SPECT scanner detectors, as the paths of the electrons between the dynodes in the evacuated PMT are deflected from their normal trajectories by the external magnetic field (Fig. 43.11A).21,33 In addition, the RF fields and the gradient system pulses of the MR subsystem can degrade the performance of, and even damage, the PET and SPECT electronics. One approach to overcoming this limitation is the use of long optical fibers (up to several meters in length) to couple the detector crystal to a PMT positioned outside the MR subsystem’s fringe field (i.e., at magnetic fields of no greater than 10 mT). In this way, only the x- and γ-ray detection elements lie within the magnetic field and the scintillations are directed out of the field by the fibers. Despite the challenges presented by such systems, including the large number of optical fibers required (one per detector element) and partial loss of the light signal over the length of the fibers, several prototypes of such PET-MRI scanners (with field strengths up to 3 T) have been fabricated.
FIGURE 43.12. Dynamic PET-MR imaging study of a tumor xenograft. A: 18F-labeled fluoro-thymidine (FLT) PET image of a BALB/c mouse with a CT-26 colon carcinoma xenograft showing areas of high FLT uptake corresponding to increased cell proliferation (left panel). Noncontrast-enhanced T1-weighted MR image, simultaneously acquired and registered with the FLT PET study and showing the gross morphology of the tumor and the underlying normal anatomy (right panel). The fused FLT PET and T1-weighted MR images (middle panel). B: Contrast (gadopentetate dimeglumine)-enhanced T1-weighted MR image of the same tissue section shown in (A), identifying muscle (red region of interest [ROI]), whole tumor (purple ROI), enhanced (i.e., perfused and therefore viable) tumor (light and dark blue ROIs), and nonenhanced (i.e., nonperfused and therefore necrotic) tumor (orange ROI). C: The signal-versus-time post injection curves for the dynamic contrast-enhanced (DCE) MR study, with the color-coded curves for the anatomic ROIs identified in (B). Note that the perfused, viable regions of tumor identified by DCE MR imaging (DCE MRI) are, unambiguously, also the most rapidly proliferating regions of the tumor as identified by FLT PET, demonstrating the value of truly simultaneous acquisition of the PET and MR images. (From Judenhofer MS, Wehrl HF, Newport DF, et al. Simultaneous PET-MRI: A new approach for functional and morphological imaging. Nat Med. 2008;14:459–465. Copyright 2008. Reprinted by permission from Macmillan Publishers Ltd.)
An alternative approach to PET- and SPECT-MRI is elimination of the PMTs altogether by using solid-state scintillation detectors such as avalanche photodiodes (APDs) (Fig. 43.11B and C) or solid-state (i.e., semiconductor) ionization detectors such as cadmium zinc telluride (CZT). The photodiode (composed of silicon, for example) is an alternative to the PMT for conversion of scintillations into electronic signals. Photodiodes are similar in operation to semiconductor detectors except that they are sensitive to light rather than x- or γ-rays. Photodiodes typically have a gain of only one (compared to the approximately million-fold gain of PMTs) and thus require low-noise electronics. APDs have considerably higher gains, of the order of 100 to 1,000, but still require low-noise read-out electronics. Importantly, they are relatively insensitive to magnetic fields and thus can be coupled directly to the scintillation detector blocks via conventional (i.e., short) light guides. It is still important, however, to provide RF shielding of the PET subsystem components within the magnetic field, for example, using copper mesh. This approach has been successfully applied to PET-MRI scanners in the laboratories of Pichler and Cherry, where prototype small-animal hybrid scanners have been fabricated and tested. In Pichler’s device,21,27,30,34 the PET ring consisted of ten 12 × 12 LSO detectors directly coupled to a 3 × 3 APD array. A mouse RF coil was fitted into the PET detector ring and the entire assembly was then placed inside the gradient set of a 7-T small-animal MRI system. The PET insert had an axial field of view (FOV) of 19 mm and a transaxial FOV of 35 mm, with a spatial resolution of less than 2 mm full-width at half-maximum (FWHM). In Cherry’s design,33,35 a 7-T magnetic field was again used.
The PET ring consisted of sixteen 8 × 8 LSO detectors (crystal size: 1.43 × 1.43 × 6 mm3) directly coupled to a 14- × 14-mm2 position-sensitive APD array by a short (10-cm long) optical fiber bundle (16 1.95- × 1.95-mm2 fibers per bundle), with the APDs mounted just outside the MR scanner’s FOV. The PET insert had an axial FOV of 12 mm and a transaxial FOV of 35 mm. No significant negative interaction between the PET and MR subsystems was observed. With these designs, no metallic component is placed in the MR FOV and the short light guides do not exhibit as much light loss as extended optical fibers. However, the axial FOV of such a system is again limited by the number of light fibers that are needed.
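The gain and read-out noise trade-off among photodiodes, APDs, and PMTs discussed above can be made concrete with a first-order signal-to-noise sketch. The photoelectron yield, gain values, and read-out noise figures below are illustrative assumptions (and APD excess noise is neglected), not measured values from any of the prototype scanners:

```python
import math

def pulse_snr(n_photoelectrons, gain, readout_noise_electrons):
    """First-order SNR of a scintillation pulse at the photosensor output.

    Signal = N * G; the noise combines photoelectron shot noise
    (amplified by the gain G) with the read-out electronics noise in
    quadrature. APD excess-noise factors are neglected for simplicity.
    """
    signal = n_photoelectrons * gain
    noise = math.sqrt(n_photoelectrons * gain ** 2
                      + readout_noise_electrons ** 2)
    return signal / noise

# With ~1,000 photoelectrons per event and 500 e- rms read-out noise
# (hypothetical numbers):
snr_photodiode = pulse_snr(1000, 1, 500)    # unity gain: read-noise limited
snr_apd = pulse_snr(1000, 100, 500)         # APD: essentially shot-limited
snr_pmt = pulse_snr(1000, 1e6, 500)         # PMT: fully shot-limited
```

Even a modest gain of 100 lifts the signal far enough above the electronics noise floor that the SNR approaches the shot-noise limit of sqrt(N), which is why APDs tolerate conventional read-out electronics while unity-gain photodiodes do not.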
Another approach to a hybrid PET-MR scanner is the so-called “split-magnet” design described by Lucas et al.,36 who modified a commercial small-animal PET system and placed it in a 1-T magnet with a centrally located 80-mm gap. The gap accommodated placement of the PET detector assembly, which used 120-cm long optical fibers to connect LSO block detectors (consisting of 12 × 12 detector elements, 1-cm thick) to position-sensitive PMTs located at a low (<1-mT)-field strength radial position. Measurements indicated that optical fiber lengths from 10 to 120 cm and magnetic-field strengths up to 1 mT had no perceptible effect on detector performance. In addition, high-quality MR images of mice were obtained with the PET subsystem in place in the split magnet.
Magnetic-field cycling represents yet another, fundamentally different approach to PET-MR, with the conventional static magnetic field replaced by two dynamically controlled fields. The first of these, the polarizing field, is usually ∼1 T and is used to build up the magnetization within the subject. It is switched on for only a short period of time, ∼1 s, and rapidly turned off. A second, lower-strength (∼0.1 T) magnetic field, the “readout” field, is then switched on and the MR image acquired. Field-cycled MRI, in contrast to conventional MRI, does not require highly uniform magnetic fields and thus is quite forgiving of any field inhomogeneities introduced by the presence of the PET subsystem. Small-animal MR and PET-MR scanners based on field cycling have been proposed by Gilbert et al.37 Importantly, however, PET and MR data acquisitions would be sequential, not simultaneous, since the PET data can only be acquired when both magnetic fields are off. The field-cycling approach has the advantages that no optical fibers are needed and that the PMTs can be installed close to the scintillation crystals, avoiding excessive light loss.38 However, the long-term impact of repeated magnetic-field cycling on the performance of the PMTs and of the PET subsystem overall remains unclear.
FIGURE 43.13. Proof-of-principle single-photon scintigraphy-MR imaging study in a mouse. A: Diagrammatic representation of the system for simultaneous single-photon scintigraphy using a cadmium zinc telluride (CZT) ionization detector and MR imaging. Note that this is not a SPECT (i.e., tomographic) system but rather a planar imaging system. B: Static noncontrast T1-weighted MR image (upper left panel). Frame from dynamic 99mTc-sestamibi study (upper middle panel). “Fusion” of T1-weighted MR and 99mTc-sestamibi images; since the 99mTc-sestamibi image is a planar image (upper right panel), the MR and scintigraphic images are not truly fused. C: Frame from a dynamic MR study prior to injection of gadopentetate dimeglumine contrast (lower left panel). Frame from a dynamic contrast (gadopentetate dimeglumine)-enhanced MR study (lower middle panel). Kidney ROIs extracted from the static noncontrast T1-weighted MR image (lower right panel). D: Renal signal-versus-time curves following the 99mTc-sestamibi and gadopentetate dimeglumine injections. (Adapted from Hamamura MJ, Roeck WW, Ha S, et al. Simultaneous in vivo dynamic contrast-enhanced magnetic resonance and scintigraphic imaging. Phys Med Biol. 2011;56:N63–N69. © Institute of Physics and Engineering in Medicine. Published on behalf of IPEM by IOP Publishing Ltd. All rights reserved.)
An alternative to scintillation detectors coupled to either PMTs or APDs which has been used in prototype SPECT-MR scanners is the use of solid-state (i.e., semiconductor) ionization detectors (Fig. 43.13).39–42 However, when a semiconductor detector is placed in a magnetic field, electron-hole pairs created by the absorption of x- and γ-rays are subject to the so-called Lorentz force. As a result, when such a detector is placed in any orientation other than parallel or antiparallel to the magnetic field, electrons traveling toward the anode will undergo a shift in their detected position (Fig. 43.14). For the prototype “MRSPECT” system described by Hamamura et al.,39,40 for example, a mean “Lorentz shift” of 1.4 mm was measured. In that system, the correction for the Lorentz shift was performed prior to SPECT image reconstruction by shifting the nuclear projection data to their proper locations; this pixel-by-pixel correction was derived by imaging of a uniform flood source, analogous to derivation of γ-camera sensitivity corrections generally. The detector elements were coupled to an application-specific integrated circuit (ASIC) read-out board and the detector-ASIC board housing and cables were wrapped with a fine copper mesh for RF shielding. Although they provide excellent energy resolution for possible multi-isotope studies of single-photon emitters, semiconductor detectors are not well-suited for PET scanners because of their low stopping efficiency for the 511-keV annihilation photons and, to date, have not been used in PET-MR scanners.
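The flood-derived, pixel-by-pixel shift correction described above might be sketched as follows, assuming the per-row displacements have already been measured from a uniform flood acquisition (the array sizes and shift values are hypothetical):

```python
import numpy as np

def correct_lorentz_shift(projection, row_shift_px):
    """Undo the Lorentz displacement in projection data prior to
    reconstruction.

    projection   : 2D (row x column) array of nuclear projection counts.
    row_shift_px : per-detector-row displacement in pixels, e.g. measured
                   from a uniform flood acquisition (hypothetical values).
    Pixels rolled in from the far edge are zeroed rather than wrapped.
    """
    corrected = np.zeros_like(projection)
    n_cols = projection.shape[1]
    for r, s in enumerate(row_shift_px):
        shifted = np.roll(projection[r], -s)  # move counts back by s pixels
        if s > 0:
            shifted[n_cols - s:] = 0  # discard wrapped-around pixels
        elif s < 0:
            shifted[:-s] = 0
        corrected[r] = shifted
    return corrected

# A point source detected 2 pixels away from its true column:
proj = np.zeros((1, 10))
proj[0, 7] = 100.0
fixed = correct_lorentz_shift(proj, [2])  # counts restored to column 5
```

A production implementation would interpolate sub-pixel shifts rather than roll whole pixels, but the sketch captures the essential operation: relocating counts before reconstruction.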
FIGURE 43.14. Diagrammatic illustration of the Lorentz shift. After the interaction of an incident γ-ray with an ionization detector such as cadmium zinc telluride (CZT), the resulting electron is deflected by a distance δx because of the Lorentz force. Since the exact depth of the interaction of the γ-ray in the detector is statistical and can occur anywhere along the thickness of the element, there is a range of possible deflected distances δ∊. (Adapted from Hamamura MJ, Ha S, Roeck WW, et al. Development of an MR-compatible SPECT system (MRSPECT) for simultaneous data acquisition. Phys Med Biol. 2010;55:1563–1575. © Institute of Physics and Engineering in Medicine. Published on behalf of IPEM by IOP Publishing Ltd. All rights reserved.)
An important advantage of PET- or SPECT-CT is improved quantitation of radionuclide concentrations in situ afforded by CT-derived attenuation correction of the PET or SPECT images. CT images are essentially maps of the differential x-ray attenuation coefficients among tissues. By appropriate scaling of the attenuation coefficients thus derived from CT x-ray energies (∼100 keV) to radiation energies emitted by PET or SPECT radionuclides (e.g., 511 keV in the case of PET nuclides), accurate attenuation correction of the PET or SPECT images can be performed. MR images, on the other hand, do not reflect radiation attenuation and thus seemingly cannot be used to derive attenuation correction factors. In fact, however, this is not the case. Anatomic MR images can be segmented into soft tissue, lung (air), and bone and appropriate energy-dependent reference values of attenuation coefficients assigned to the respective tissues thus segmented.22,43,44 Accurate attenuation correction can thus be performed in PET- and SPECT-MRI as well as PET- and SPECT-CT.
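The segmentation-based assignment of reference attenuation coefficients can be sketched as follows; the 511-keV coefficients below are approximate literature values used purely for illustration:

```python
import math

# Approximate linear attenuation coefficients at 511 keV (cm^-1);
# illustrative reference values for the segmented tissue classes.
MU_511_PER_CM = {"air": 0.0, "lung": 0.02, "soft_tissue": 0.096, "bone": 0.17}

def attenuation_correction_factor(segment_lengths_cm):
    """ACF for a PET line of response (LOR).

    The measured coincidence counts are multiplied by
    exp(sum over tissue classes of mu * intersected path length)
    to recover the unattenuated signal.

    segment_lengths_cm: {tissue_class: path length in cm along the LOR}.
    """
    mu_x = sum(MU_511_PER_CM[t] * length
               for t, length in segment_lengths_cm.items())
    return math.exp(mu_x)

# An LOR crossing 18 cm of soft tissue and 2 cm of bone:
acf = attenuation_correction_factor({"soft_tissue": 18.0, "bone": 2.0})
```

The same scheme applies to SPECT, except that the attenuation path runs from the emission point to the detector rather than along a full coincidence line, and the coefficients are evaluated at the single-photon emission energy.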
PRECLINICAL PET AND SPECT IMAGING
Increasingly, in vivo imaging of small laboratory animals (i.e., mice and rats) has emerged as a critical component of preclinical biomedical research.45 Small-animal imaging provides a noninvasive means of assaying biologic structure and function in vivo, yielding quantitative, spatially and temporally indexed information on normal and diseased tissues such as tumors. Importantly, because of its noninvasive nature, imaging allows serial (i.e., longitudinal) assay of rodent models of human cancer and cardiovascular, neurologic, and other diseases over the entire natural history of the disease process, from inception to progression, and monitoring of the effectiveness of treatment or other interventions (with each animal serving as its own control and thereby reducing biologic variability). This also serves to minimize the number of experimental animals required for a particular study. With the ongoing development of genetically engineered (i.e., transgenic and knockout) rodent models of cancer and other diseases, such models are increasingly realistic in recapitulating the natural history and clinical sequelae of the corresponding human disease and the ability to track these disease models long-term is therefore critical. Importantly, in contrast to cell or tissue culture–based experiments, studies in intact animals incorporate all of the interacting physiologic factors—neuronal, hormonal, nutritional, immunologic, etc.—present in the complex in vivo milieu. Intact whole-animal models also facilitate investigation of systemic aspects of disease such as cancer metastasis, which are difficult or impossible to replicate in ex vivo systems. Further, because many of the same imaging modalities—PET, SPECT, CT, MRI, and US—used in the clinic are also used in the laboratory setting, the findings of small-animal imaging are readily translatable to patients.
Prior to the inception of “small-animal imaging,” experimental animals were generally imaged using clinical instrumentation (during off-hours, of course), and many useful studies were performed in this way. In many instances, however, the performance of clinical imaging devices, most notably their spatial resolution, is prohibitively coarse for scientifically useful imaging of tumors and organs in mice and rats (Fig. 43.15). In addition to the need for better spatial resolution, the development of dedicated small-animal imaging instruments, and of centralized facilities to house these instruments, has been motivated by a number of practical considerations. First, by incorporating invasive and therefore clinically impractical corroborative assays (e.g., interstitial probe measurements, histology, immunohistochemistry) into small-animal imaging studies, new and/or existing clinical imaging paradigms can be clarified, validated, and/or improved in the laboratory and then translated back to the clinic. Second, biosecurity (i.e., protection from transmission of infectious and other diseases among experimental animals and between animals and humans) of immunodeficient and other genetically engineered animal models requires that such animals remain within a “clean” barrier facility and are not, for example, transported out of such a facility to a clinical imaging area and then back to the facility. Third, in certain institutions and jurisdictions, experimental animals are prohibited by regulation from entry into clinical areas. Fourth, the limited and, at times, unpredictable availability (i.e., at night, overnight, and/or on weekends and holidays) of clinical imaging instrumentation makes it very difficult to plan and perform experiments, especially experiments involving large numbers of animals and/or multiple imaging sessions.
FIGURE 43.15. Comparative PET images of tumor-bearing mice acquired with a clinical and a small-animal PET scanner. A: A photograph (not to scale) showing the orientation of the animals in the PET images. B: A coronal PET image of a mouse with a Lewis Y antigen-expressing HCT15 human colorectal carcinoma xenograft in each of its two hind limbs. The image was acquired at ∼2 days post injection of an yttrium-86 (86Y)-labeled humanized anti-Lewis Y antibody, hu3S193, using a clinical PET scanner, the GE Advance (General Electric Medical Systems), with a full-width at half-maximum (FWHM) spatial resolution of 6 mm and volume resolution of 216 mm3. C: A coronal PET image of a mouse with an FSA II murine fibrosarcoma allograft in the right hind limb. The image was acquired at ∼1-hour post injection of 18F-FDG using a dedicated rodent PET scanner, the R4 microPET (Concorde Microsystems), with an FWHM spatial resolution of 2.2 mm and volume resolution of 10.6 mm3. All three tumors were comparable in size (1 to 1.5 cm in the largest dimension). Although the image acquired on the clinical scanner (B) clearly demonstrates high-contrast uptake of the radiotracer by the two tumors, it does not show any heterogeneity of uptake within the tumors. If any such heterogeneity is present, any parameter derived from the measured uptake will reflect some ill-defined value of any such parameter averaged over the entire tumor. In contrast, the image acquired on the small-animal scanner (C) distinguishes the differential uptake of FDG between biologically distinct cell subpopulations within the tumor, namely, high uptake in a viable rim and much lower uptake in a largely necrotic core. For any parameters derived from the tracer uptakes in (C), therefore, distinct, and more meaningful, parameter values can be derived for the viable rim and for the necrotic core. (From Zanzonico P. Noninvasive imaging for supporting basic research. In: Kiessling F, Pichler BJ, eds. 
Small Animal Imaging–Basics and Practical Guide. Heidelberg: Springer; 2011:3–16. With kind permission from Springer Science+Business Media.)
FIGURE 43.16. Coronal (top row with head at bottom) and transverse (bottom row with dorsal surface at top) R4 microPET (Concorde Microsystems) images at 3-hour post injection of the gallium-68-labeled F(ab′)2 fragment of Herceptin (68Ga-DOTA-F[ab′]2 Herceptin) into athymic nude mice with BT474 breast tumor xenografts on the right flanks; DOTA is the metal chelator 1,4,7,10-tetraazacyclododecane-N,N′,N″,N‴-tetraacetate. Herceptin, used in the treatment of some breast cancers, is an antibody directed against the HER2/neu tyrosine kinase, which is overexpressed in BT474 and other breast tumors. HER2/neu is a client protein of the heat shock protein 90 (HSP90) chaperone. A: One mouse underwent a baseline study followed by treatment with the geldanamycin derivative 17-AAG (an inhibitor of HSP90) followed 24 hours later by (B) a second scan with 68Ga-DOTA-F(ab′)2 Herceptin. The tumor uptake of 68Ga-DOTA-F(ab′)2 Herceptin decreased 50% between the pre- and posttreatment scans. C, D: The control (i.e., untreated) mouse also underwent two scans 24 hours apart, with no significant change in tumor uptake. E: As corroborated by the Western blots shown, 17-AAG inhibition of HSP90 induced degradation of its client protein HER2/neu. This study illustrates the application of imaging to characterization of the pharmacodynamics of molecularly targeted anticancer therapy. Hypothetically, for example, breast cancer patients can undergo pre- and posttherapy scans to identify responders (i.e., having scan results analogous to those in [A] and [B]) and nonresponders (i.e., having scan results analogous to those in [C] and [D]) to HSP90 inhibitors. Responders would then be effectively treated with such inhibitors whereas nonresponders would be switched to alternative treatment. (From Smith-Jones PM, Solit DB, Akhurst T, et al. Imaging the pharmacodynamics of HER2 degradation in response to Hsp90 inhibitors. Nat Biotechnol. 2004;22:701–706. Copyright 2004. Reprinted by permission from Macmillan Publishers Ltd.)
The radiation doses to experimental animals in PET, SPECT, and CT studies are considerably—one to two orders of magnitude—higher than those encountered in the corresponding clinical studies. Indeed, at absorbed doses of the order of 100 cGy, they approach single-fraction doses used in external-beam radiation therapy. Investigators should be aware of the magnitude of radiation doses encountered in small-animal PET, SPECT, and CT studies and potential radiogenic perturbation of their experimental system.
Imaging-based experimentation in small-animal models is now an established and widely used approach in basic and translational biomedical research and will no doubt remain an important component of such research. Several areas—drug development, treatment monitoring, and novel therapeutic strategies such as adoptive immunotherapy and gene therapy—are particularly productive in the application of small-animal imaging. In drug development, imaging-based assays are particularly amenable to quantitative characterization of pharmacokinetics and pharmacodynamics of new therapeutics and may accelerate the drug discovery process (Fig. 43.16). Transgenic and knockout mouse models of human disease may be used for identification and validation of “drugable” molecular targets. Clinically translatable imaging paradigms developed and validated in animal models may also provide earlier and more clinically meaningful assays of therapeutic response, enabling clinicians to rapidly distinguish “responders” from “nonresponders” and promptly switch patients from ineffective to potentially more effective therapies, thereby avoiding unnecessary toxicities, expense, and loss of time.
Preclinical PET and SPECT Radiotracers
Several different classes of radiotracers are currently employed in small-animal experimentation46–48 and largely mirror the types of radiotracers used clinically.49 These include the following: (1) “Biomarker” or “surrogate” imaging agents—related to some physiologic process (e.g., blood flow) or to some downstream effects of one or more endogenous molecular/genetic processes (e.g., 18F-FDG PET imaging reflecting upregulation of glucose transporters and/or glycolytic metabolic pathways in many tumors); (2) “direct” imaging of specific molecules based on binding of radiolabeled ligands (e.g., imaging of the αvβ3 integrin, commonly overexpressed in tumor vasculature, with radiolabeled glycosylated RGD (arginine-glycine-aspartate)-containing peptides); and (3) “indirect” reporter-gene imaging.47,48,50 The reporter-gene imaging paradigm (Fig. 43.17), representing a convergence of molecular and cell biology and the imaging sciences, is providing new insights into signal transduction pathways, oncogenesis, endogenous molecular genetic/biologic processes, and response to therapy in animal models of human disease. In addition, reporter-gene nuclear imaging is now being applied clinically to the nascent field of adoptive immunotherapy of cancer and will likely find applications in gene therapy as well.51,52
FIGURE 43.17. Design of a reporter-gene construct and the indirect reporter imaging paradigm. A: (1) The basic structure of a reporter-gene complex is shown, expressing herpes simplex virus 1 thymidine kinase (HSV1-tk) and/or luciferase. (Other reporter genes include the human norepinephrine transporter [hNET] and the human sodium iodide symporter [hNIS].) Control and regulation of gene expression is accomplished through promoter and enhancer regions that are located at the 5’ end (“upstream”) of the reporter gene. These promoter/enhancer elements can be “constitutive” and result in continuous gene expression (“always on”) or “inducible” (“conditionally on”) and sensitive to activation by transcription factors and promoters. Following the initiation of transcription and translation, the gene product accumulates. (2) In this case the reporter-gene product is the enzyme HSV1-tk, which phosphorylates certain radiolabeled thymidine analogs; these probes are not phosphorylated by endogenous mammalian thymidine kinase. The phosphorylated probe does not cross the cell membrane readily and is effectively “trapped” and accumulates selectively within transduced cells. Probe accumulation thus reflects the level of HSV1-tk enzyme activity and of HSV1-tk reporter-gene expression. (3) In this case, luciferase is the reporter-gene product and expression is detected via its catalytic action on the administered D-luciferin substrate resulting in production of bioluminescence. (See text.) B: Different reporter-gene constructs are transfected into target cells by a viral vector. Transcription of the reporter gene to messenger ribonucleic acid (mRNA) is initiated by constitutive or inducible promoters, and translation of the mRNA to a protein occurs on the ribosomes. 
The reporter-gene product can be a cytoplasmic or nuclear enzyme, a transporter in the cell membrane, a receptor at the cell surface or part of a cytoplasmic or nuclear complex, an artificial cell-surface antigen, or a fluorescent protein. Often, a complementary reporter probe (e.g., a radiolabeled, magnetic, or bioluminescent molecule) is administered and the probe signal is directly related to the level of reporter-gene product, thus reflecting levels of transcription, modulation and regulation of translation, protein–protein interactions, and/or posttranslational regulation of protein conformation and degradation. (From Serganova I, Blasberg RG. Multi-modality molecular imaging of tumors. Hematol Oncol Clin North Am. 2006;20:1215–1248, with permission.)
Preclinical PET and SPECT Scanners
A number of preclinical PET and SPECT devices are commercially available (Tables 43.1 and 43.2, respectively), both from major manufacturers and smaller “niche” companies. Preclinical PET scanners, which operate exclusively in 3D mode, are otherwise diverse in design, utilizing different scintillation detectors in combination with either position-sensitive PMTs or APDs and even gas-filled detectors. The superior spatial resolution of preclinical versus clinical PET scanners, 1- to 2-mm versus 4- to 6-mm FWHM, is due in part to the much smaller gantry diameter and therefore a less pronounced resolution-degrading effect of the noncolinearity of the annihilation γ-rays. Preclinical SPECT scanners, on the other hand, are rather similar in design, generally utilizing multiple thallium-doped sodium iodide (NaI[Tl]) scintillation detectors fitted with multiaperture pinhole collimators. The superior spatial resolution of preclinical versus clinical SPECT scanners, ∼1- versus ∼10-mm FWHM, is due to the magnification effect afforded by the pinhole collimation. Of course, this is achieved at the cost of lower sensitivity, though the use of multiaperture collimators and multiple (up to four) detectors at least partially mitigates the reduction in sensitivity. Preclinical devices are currently marketed as “PET-only” or “SPECT-only” scanners or as multimodality devices with integrated CT scanners, typically cone-beam devices with flat-panel detectors. The Carestream Albira, Gamma Medica Triumph LabPET Solo, and Siemens Inveon are available as trimodality PET-SPECT-CT systems.
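These resolution arguments can be made quantitative with standard first-order approximations that are not specific to any particular scanner: the widely quoted 0.0022 × ring-diameter rule of thumb for noncolinearity blur in PET, and the usual pinhole-magnification formulas for SPECT. The geometry values below are illustrative assumptions:

```python
import math

def noncolinearity_blur_mm(ring_diameter_mm):
    """FWHM blur at the ring center caused by the ~0.5-degree FWHM
    angular spread of the annihilation photon pair; the standard
    first-order approximation is 0.0022 x ring diameter."""
    return 0.0022 * ring_diameter_mm

def pinhole_system_resolution_mm(aperture_mm, a_mm, b_mm, intrinsic_mm):
    """First-order pinhole-SPECT system resolution at the object plane.

    Magnification M = b/a (pinhole-to-detector over object-to-pinhole
    distance) demagnifies the detector's intrinsic resolution, while the
    geometric term grows with the aperture diameter.
    """
    M = b_mm / a_mm
    r_geom = aperture_mm * (a_mm + b_mm) / b_mm
    return math.hypot(r_geom, intrinsic_mm / M)

blur_clinical = noncolinearity_blur_mm(800)     # ~1.8 mm for an 80-cm ring
blur_preclinical = noncolinearity_blur_mm(150)  # ~0.3 mm for a 15-cm ring
# 1-mm pinhole, 3-cm object distance, 15-cm detector distance, 3-mm intrinsic:
r_spect = pinhole_system_resolution_mm(1.0, 30.0, 150.0, 3.0)
```

The comparison shows why shrinking the PET ring diameter pays off directly in resolution, and why a pinhole system with 5× magnification can deliver object-plane resolution far finer than the detector's own intrinsic resolution.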
PROPERTIES OF COMMERCIALLY AVAILABLE PRECLINICAL PET SCANNERSa,b
INTRAOPERATIVE NUCLEAR AND OPTICAL TECHNOLOGIES
Nuclear Counting and Imaging
Beginning with the pioneering studies of Sweet53 60 years ago, intraoperative probes (i.e., counters) have evolved into an important, well-established technology in the management of cancer.54–57 Such probes are used in radioguided surgery to more expeditiously identify and localize sentinel lymph nodes and thereby reduce the extent and potential morbidity of surgical procedures and, to a much lesser extent, to identify and localize tumor margins as well as visually occult disease following systemic administration of a tumor-avid radiotracer (e.g., 18F-FDG).
PROPERTIES OF COMMERCIALLY AVAILABLE PRECLINICAL SPECT SCANNERSa,b
Radionuclide-based detection and localization of tumors, especially small tumors, have several well-known limitations,58 which are mitigated through the use of intraoperative probes and, potentially, intraoperative γ-cameras. First, absolute tumor uptake of cancer-targeted radiotracers remains generally quite low, typically ∼0.1% or less of the administered activity per gram. Second, overall radiation detection sensitivity in vivo is low as well, ranging from about 0.1% for γ-camera imaging (including SPECT) to less than 10% for PET. This is exacerbated, of course, by the signal-degrading effect of attenuation of emitted radiation by overlying tissue. Third, a significant portion of the counts apparently emanating from a tumor or other targeted tissue may actually include counts originating elsewhere (i.e., from background activity in adjacent tissues) because of contrast- and resolution-degrading Compton scatter. However, because of the close proximity of a collimated detector to a tumor or sentinel lymph node that can be achieved at surgery, radionuclide detection of such structures can be enhanced using intraoperative probes or γ-cameras. In a study of simulated tumors in a torso phantom having uniform background activity, for example, Barber et al.59 demonstrated that a scintillation probe more sensitively detected tumors than a γ-camera over a wide range of conditions provided the probe was placed within 1 cm of the tumor. Alternatively, tumor or sentinel lymph node detection may also be improved under some circumstances using β (negatron or positron), rather than γ, detection60–62 because the very short range (typically ∼1 mm or less) of such particulate radiations eliminates the contribution of confounding counts from activity other than in the immediate vicinity of the detector. Of course, the short range of particulate radiations also limits the application of β-probes to intraoperative or endoscopic settings.
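The proximity advantage of an intraoperative probe can be illustrated with the exact solid-angle expression for an on-axis point source and a circular detector face; the detector radius and standoff distances below are hypothetical, chosen only to show the scale of the effect:

```python
import math

def geometric_efficiency(detector_radius_cm, distance_cm):
    """Fraction of emissions from an on-axis point source intercepted by
    a circular detector face: 0.5 * (1 - cos(half-angle)), the exact
    solid-angle result for this geometry (intrinsic detection efficiency
    and attenuation are ignored)."""
    cos_half_angle = distance_cm / math.hypot(distance_cm, detector_radius_cm)
    return 0.5 * (1.0 - cos_half_angle)

eff_probe = geometric_efficiency(0.5, 1.0)    # probe tip ~1 cm from a node
eff_camera = geometric_efficiency(0.5, 10.0)  # camera-like 10-cm standoff
gain = eff_probe / eff_camera                 # roughly the inverse-square gain
```

Moving the same small detector from 10 cm to 1 cm from the source increases the intercepted fraction of emissions by nearly two orders of magnitude, which is the geometric basis for the probe's advantage in the Barber et al. phantom study.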
The most widely used type of intraoperative probe (e.g., in sentinel node detection) is the general-purpose “γ”-probe (Fig. 43.18), designed for counting of radionuclides emitting x- and/or γ-rays; such “single-photon” (i.e., nonpositron) emitters include technetium-99m (99mTc), indium-111 (111In), iodine-123 (123I), and iodine-131 (131I).63–65 γ-Probes generally use inorganic (i.e., nonplastic) scintillation detectors or solid-state (i.e., semiconductor) ionization detectors and lead or tungsten shielding and collimation. Scintillators used in such probes include thallium-doped sodium iodide (NaI[Tl]), thallium- and sodium-doped cesium iodide (CsI[Tl] and CsI[Na], respectively), and cerium-doped LSO. Semiconductors used in intraoperative probes include cadmium telluride (CdTe), CZT, and mercuric iodide (HgI2). The former offer high sensitivity, whereas the latter offer better energy resolution (and therefore better scatter rejection). Clinically, however, scintillation- and ionization-detector–based probes provide generally comparable performance.63–65 Positron emitters such as 18F may also be counted with such probes by single-photon (i.e., noncoincidence) counting of the 511-keV annihilation γ-rays. As illustrated in Figure 43.19, however, this requires thicker collimation and shielding to prevent significant numbers of such highly energetic γ-rays emitted from outside of the FOV (as defined by the collimator aperture) from penetrating to the detector and thereby degrading spatial resolution as well as target-to-background contrast. This has also been demonstrated in preliminary clinical and preclinical studies.66–69
Because x- and γ-rays penetrate relatively long distances (of the order of 10 cm) in soft tissue, a major limitation of the use of γ-probes to specifically identify diseased tissue in radioguided surgery is the presence of variable, generally high levels of background activity in normal tissues. Thus, even with a γ-probe centered over a tumor, the contribution of counts originating from activity in normal tissue underlying the tumor and even outside the FOV (because of penetration of the collimation and shielding) may degrade the tumor-to-normal tissue contrast (e.g., reducing tumor-to-normal tissue count ratios to less than 1.5:166,70,71) and thus tumor detectability, to the point where lesions may be missed. A potential solution to this limitation of radioguided surgery is the use of so-called “β-”probes, that is, intraoperative probes which specifically yield counts of negatrons or positrons. Because β-particles have such short ranges in soft tissue (typically of the order of 1 mm or less), those emitted by activity outside the probe’s FOV or deep to the surface tissue do not reach the detector and are not counted. (By the same token, minimal if any collimation and shielding are required (Fig. 43.19C).) As a result, the discrimination between higher-activity tumor and lower-activity normal tissues is enhanced (i.e., the tumor-to-normal tissue count ratios are increased). Of course, the short range of β-particles restricts the use of such probes to surface lesions; β-probes could not be used, for example, for (percutaneous) detection of sentinel lymph nodes.
FIGURE 43.18. The general design and operating principles of an intraoperative γ-probe. The hand-held probe (upper left panel) comprises a collimated, small-area (typically ∼1 cm in diameter) scintillation or solid-state ionization (i.e., semiconductor) detector (right panel). The probe itself is connected to a control unit (lower left panel) which typically provides both a visual read-out of the count rate and an audible signal whose frequency increases or decreases with the detected count rate. (From Heller S, Zanzonico P. Nuclear probes and intraoperative γ-cameras. Semin Nucl Med. 2011;41:166–181, and Zanzonico P, Heller S. The intraoperative gamma probe: Basic principles and choices available. Semin Nucl Med. 2000;30:33–48.)
FIGURE 43.19. Comparative thickness of γ-probe collimation and shielding required for low- to medium-energy x- and γ-rays of single-photon emitters (A), for the high-energy (511-keV) annihilation γ-rays of positron emitters (B), and for negatrons and positrons of β-particle emitters (C). Note the much thicker collimation and shielding required for counting of the annihilation γ-rays and the minimal collimation and shielding for counting of β-particles. (Reprinted from Seminars in Nuclear Medicine. Heller S, Zanzonico P. Nuclear probes and intraoperative gamma cameras. Semin Nucl Med. 2011;41:166–181. Copyright 2011. With permission from Elsevier. Courtesy of IntraMedical Imaging, Los Angeles, CA.)
β-Probes generally utilize either semiconductor or plastic-scintillator detectors, since such detectors have lower effective atomic numbers and mass densities than inorganic scintillators such as NaI(Tl) and thus lower intrinsic efficiencies for x- and γ-rays, minimizing the potentially confounding count contribution of any such radiations accompanying β-particle emission.60–62,71,72 (For a pure β-particle emitter such as phosphorus-32, this would be unimportant.60,73) Daghighian et al.60 have developed and evaluated a plastic scintillator-based positron probe (Fig. 43.20). The basic design of this dual-detector probe (Fig. 43.20A) includes two scintillator detectors, a central solid-cylinder detector (detector 1) and a hollow-cylinder detector (detector 2) in 1-mm-thick stainless steel cladding; the outputs of the two detectors are passed by fiber optic cabling to separate PMTs (PMT 1 and PMT 2, respectively). The detector 1 counts result from both positrons and the 511-keV annihilation γ-rays associated with positron emission, whereas the stainless steel cladding of detector 2 completely attenuates the positrons and allows only the annihilation γ-rays to enter the detector and generate counts. Because detectors 1 and 2 differ in geometry and cladding, their sensitivities for the 511-keV γ-rays differ as well. The detector 1-to-detector 2 ratio of the measured sensitivities for 511-keV γ-rays is the weighting factor by which the detector 2 count rate is multiplied and then subtracted from the detector 1 count rate to yield the detector 1 positron-only count rate. This probe was evaluated using the phantom setup shown in Figure 43.20B, with a small 18F-containing capsule simulating a tumor and a uniform 18F-filled cylindrical container simulating underlying normal-tissue activity.
The probe was then scanned across the phantom and the detector 1 and 2 count rates recorded at lateral positions relative to the “tumor” (i.e., capsule); the results, in terms of the measured count rates and the capsule (i.e., tumor)-to-background ratios, are plotted in Figures 43.20C and D, respectively. These results, particularly the dramatic improvement (from ∼2 to ∼10) in the tumor-to-background ratios (Fig. 43.20D), clearly demonstrate the feasibility of this dual-detector design and weighted-subtraction algorithm for β-probes in general and positron probes in particular. This has also been demonstrated in preliminary clinical and preclinical studies.68,69
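The weighted-subtraction scheme just described can be sketched in a few lines; the count rates and calibration factor below are made-up illustrative values, not data from the Daghighian et al. study:

```python
# Sketch of the dual-detector weighted-subtraction scheme described in the
# text. The calibration factor k and the count rates are made-up illustrative
# values, not data from the Daghighian et al. study.

def positron_only_rate(r1_cps, r2_cps, k):
    """
    r1_cps: detector 1 count rate (positrons + 511-keV annihilation gamma rays)
    r2_cps: detector 2 count rate (511-keV gamma rays only; cladding stops betas)
    k:      measured detector 1-to-detector 2 sensitivity ratio at 511 keV
    """
    return r1_cps - k * r2_cps

k = 1.2  # assumed calibration factor
# Over background only, the weighted difference falls to ~0; over the "tumor"
# capsule, the positron signal survives the subtraction:
print(positron_only_rate(600.0, 500.0, k))   # → 0.0
print(positron_only_rate(5600.0, 500.0, k))  # → 5000.0
```

The subtraction removes the γ-ray (background) component common to both detectors, which is exactly why the tumor-to-background ratio improves so markedly in the phantom data.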
The sensitivity and specificity of detection of sentinel lymph nodes using current approaches such as preoperative γ-camera imaging, γ-probes, and the “blue dye” technique are quite high. Newman,74 for example, performed a meta-analysis of nearly 70 published studies and found an overall sensitivity of over 90% and a false-negative rate of only 8.4% for detection of such nodes in breast cancer. For preoperative γ-camera imaging, detection rates of 72% to 85% have been reported.75 Sentinel lymph nodes were successfully detected using intraoperative γ-probes in 98% of patients in whom such nodes were successfully imaged preoperatively, with a false-negative rate of only 7%. And, for sentinel nodes not visualized by preoperative lymphoscintigraphy, there was a 90% detection rate intraoperatively. Importantly, however, negative preoperative lymphoscintigraphy often predicted a negative intraoperative probe result and the foregoing improvement in the detection rate intraoperatively was primarily because of the use of blue dye.75 The American Society of Breast Surgeons has recommended a sensitivity of at least 85% and a false-negative rate of less than 5% as acceptable for sentinel node detection in breast cancer.76 There remains a need, therefore, to develop techniques to improve the sensitivity and the false-negative rates of sentinel lymph node detection. Intraoperative γ-camera imaging may provide the improvement required to satisfy the foregoing requirements. Mathelin et al.,77–79 for example, found that the use of an intraoperative small (5- × 5-cm) FOV γ-camera for detection of sentinel lymph nodes in breast cancer was practical. In a case report,78 intraoperative γ-camera imaging allowed detection of an additional sentinel lymph node (metastatic and with low radiotracer uptake) that was not visualized by preoperative imaging or with a γ-probe, suggesting that intraoperative γ-camera imaging may reduce the false-negative rate.
Despite such promising preliminary data, it is not clear that the development and deployment of intraoperative γ-camera technology and the incremental improvement in the sentinel lymph node detection rate that such technology may provide will prove cost effective. Certain considerations, however, lend support to the development of this technology. One such consideration is the variable level of proficiency among surgeons in γ-probe–based detection of sentinel lymph nodes.76 Even with considerable training and experience, not all surgeons achieve a detection rate of 90% or better. In addition, certain sentinel lymph nodes are problematic anatomically or otherwise in terms of detectability. These include nodes that are unusually deep or close to (less than 30 mm from) the injection site or high-activity normal tissues, or that have low (less than 1%) radiotracer uptake.75,79–81 A γ-camera system having a spatial resolution of 3 mm or better at a distance (depth) of the order of 1 cm would likely visualize such problematic nodes intraoperatively. Such an imaging system would offer other practical advantages over probes: the signal is provided in the familiar format of a scintigraphic image rather than a numerical display or variable-frequency tone; the larger FOV of even small γ-cameras (several centimeters, versus less than 1 cm for probes) allows more rapid interrogation of large areas and/or longer sampling, with collection of more counts and reduction in statistical uncertainty (noise); re-examination of the surgical site after lymph node excision to verify removal of foci of activity is more straightforward; and there is less reliance on potentially obliterated or otherwise ambiguous preoperative skin markings directing where measurements are to be performed intraoperatively.81 Intraoperative γ-camera systems thus merit development and evaluation.
FIGURE 43.20. A: Basic design of a dual-detector β-probe. B: Experimental phantom setup for evaluation of the performance of the probe shown in (A). The phantom consisted of a small 18F-containing capsule simulating a tumor and a uniform 18F-filled cylindrical source simulating underlying normal-tissue activity. The probe was then scanned across the phantom. Note that the capsule (i.e., tumor)-to-background (i.e., normal tissues) activity concentration ratio was 10:1. C: The measured detectors 1 and 2 count rates and the calculated weighted difference of the detectors 1 and 2 count rates (see text) as a function of the lateral position of the probe relative to the capsule. D: The capsule (i.e., tumor)-to-background count-rate ratios for detector 1 with and without weighted subtraction of the detector 2 count rates (see text). PMT, photomultiplier tube. (From Daghighian F, Mazziotta JC, Hoffman EJ, et al. Intraoperative beta probe: A device for detecting tissue labeled with positron or electron emitting isotopes during surgery. Med Phys. 1994;21:153–157, with permission.)
A number of small FOV intraoperative γ-camera systems have been developed.63,82,83 The earliest systems were hand-held devices having FOVs of only 1.5 to 2.5 cm in diameter and using conventional NaI(Tl) or CsI(Tl) scintillation detectors. Later units used 2D arrays (mosaics) of scintillation crystals connected to a position-sensitive PMT and, more recently, semiconductors such as CdTe or CdZnTe (CZT). The main problems with these early units were their very small fields of view, the resulting large number of images required to interrogate the surgical field, and the difficulty of holding the device sufficiently still for the duration (up to 1 minute) of the image acquisition. More recently, larger FOV devices have been developed which are attached to an articulating arm for easy and stable positioning. These systems are nonetheless fully portable and small enough overall to be accommodated in typical surgical suites.
Abe et al.84 evaluated a hand-held CZT-based semiconductor γ-camera known as the eZ-SCOPE (Anzai Medical, Tokyo, Japan). As illustrated in Figure 43.21, the device is light enough (820 g) to hold for a short time (up to ∼1 minute). The CZT detector has a 3.2- × 3.2-cm FOV and is 5-mm thick, with an efficiency of 87% and energy resolution of 9% for 99mTc γ-rays. Its collimators are easily exchanged. The CZT crystal is divided into a 16 × 16 array of 2- × 2-mm pixels, with integral and differential uniformities of 1.6% and 1.3%, respectively, with low-energy high-resolution (LEHR) collimation. System spatial resolution with the LEHR collimation was 2.3-, 8-, and 15-mm FWHM at 1, 5, and 10 cm, respectively. As shown in Figure 43.21B and C, the camera is able to clearly image sentinel lymph nodes as well as lymphatic vessels. The small 3.2- × 3.2-cm FOV remains limiting, however: as shown in Figure 43.21B and C, for example, a single lymph node occupies nearly half of the FOV, and searching the surgical field can thus be time-consuming. (A pinhole collimator was therefore subsequently incorporated into a newly designed version of the system, the Sentinella 102; see below.) To generate a larger effective FOV (30 × 30 cm), a computer program to integrate multiple adjacent images was developed and tested in mouse studies. The exact spacing of the individual images and the occasional low-count pixels at the periphery of the images were problematic; initial experience with the eZ-SCOPE was nonetheless favorable overall.
FIGURE 43.21. A: Photograph of the eZ-SCOPE intraoperative γ-camera (Anzai Medical, Tokyo, Japan). B: Sample eZ-SCOPE image of a lymph node in a patient. C: Sample eZ-SCOPE images of a lymph node (top left) and lymphatic vessel in a patient (top right) and conventional γ-camera image of the same patient obtained preoperatively (bottom). See text for additional details.
A CZT-based semiconductor γ-camera with a larger FOV (4 × 4 cm) than that of the eZ-SCOPE was developed by General Electric (Haifa, Israel). The 4- × 4-cm pixelated detector consists of a 16 × 16 array of 2.5- × 2.5-mm pixels. Using parallel-hole collimation, the spatial resolution was 5-mm FWHM at a distance of 5 cm, with a sensitivity of 100 cps/MBq. Energy resolution for 99mTc was 8%, somewhat better than the ∼10% value typically quoted for conventional γ-cameras. In one experiment, this system could clearly resolve 99mTc-filled spheres 1 cm in diameter in contact with one another at distances of up to 6 cm; a γ-probe could only distinguish the two sources at a 1-cm depth and only when separated by at least 2 cm. A potential advantage of γ-camera imaging is the ability to resolve sources that may overlap one another in one view by acquiring additional views at different angles, as illustrated by the results of the phantom experiment shown in Figure 43.22. This may be helpful, especially in breast cancer, in localizing a sentinel lymph node at a different depth from the injection site and obscured by the injected activity.
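The distance dependence of the resolution figures quoted for these parallel-hole systems follows the standard collimator relation (geometric resolution ≈ d(L + z)/L, added in quadrature with the intrinsic resolution). A sketch with assumed hole dimensions, not manufacturer specifications:

```python
import math

# Standard parallel-hole collimator relations, to show why resolution degrades
# roughly linearly with distance. The hole diameter, hole length, and intrinsic
# resolution below are assumed values, not specifications of the GE camera.

def collimator_resolution_mm(hole_diam_mm, hole_len_mm, dist_mm):
    """Geometric FWHM of a parallel-hole collimator: R_geo ≈ d(L + z)/L."""
    return hole_diam_mm * (hole_len_mm + dist_mm) / hole_len_mm

def system_resolution_mm(intrinsic_mm, geometric_mm):
    """Intrinsic and collimator resolutions add approximately in quadrature."""
    return math.sqrt(intrinsic_mm**2 + geometric_mm**2)

# Assumed 1.5-mm holes, 20-mm hole length, 2.5-mm intrinsic resolution:
for dist_mm in (10.0, 50.0, 100.0):
    r_sys = system_resolution_mm(2.5, collimator_resolution_mm(1.5, 20.0, dist_mm))
    print(f"{dist_mm:.0f} mm: {r_sys:.1f} mm FWHM")
```

With these assumed dimensions the sketch reproduces resolution of roughly 5 to 6 mm FWHM at 5 cm, consistent with the value quoted in the text, and shows the near-linear degradation with source distance.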
FIGURE 43.22. Setup and results of a 99mTc phantom imaging experiment with the intraoperative γ-camera developed by General Electric (Haifa, Israel). A: Schematic diagram (side view) of the phantom, with two 1-cm spheres at depths of 2 and 4 cm and a third sphere, 1.4 cm in diameter, at a depth of 2 cm and directly over the 1-cm sphere at a depth of 4 cm. The two small spheres and the large sphere had activity concentrations of 1 and 2.5 μCi/mL, respectively. B: Images (identified as “1,” “2,” and “3,” respectively) were acquired at angles of –45, 0, and +45 degrees relative to an axis perpendicular to the top of the phantom (i.e., an axis in the plane of the diagram). The resulting γ-camera images demonstrate the ability to resolve overlying foci of activity by acquiring views at multiple angles. (From Kopelman D, Blevis I, Iosilevsky G, et al. Sentinel node detection in an animal study: Evaluation of a new portable gamma camera. Int Surg. 2007;92:161–166, with permission, and with kind permission from Springer Science+Business Media; Kopelman D, Blevis I, Iosilevsky G, et al. A newly developed intraoperative gamma camera: Performance characteristics in a laboratory phantom study. Eur J Nucl Med Mol Imaging. 2005;32:1217–1224.)
FIGURE 43.23. A: Photograph of the preoperative compact imager (POCI) intraoperative γ-camera. B: Intraoperative lymphoscintigraphy, with the POCI in position for imaging of the patient’s left axilla. C: POCI image (10-second acquisition time), showing two foci of activity corresponding to two neighboring lymph nodes. (Pitre S, Ménard L, Ricard M, et al. A hand-held imaging probe for radio-guided surgery: Physical performance and preliminary clinical experience. Eur J Nucl Med Mol Imaging. 2003;30:339–343. With kind permission from Springer Science+Business Media.)
A hand-held camera, known as the preoperative compact imager (POCI) and utilizing a CsI(Na) scintillation crystal coupled to a focusing image intensifier tube and a position-sensitive diode, was developed in France85 (Fig. 43.23A). Its FOV is 4 cm in diameter. With high-resolution parallel-hole collimation, its 99mTc sensitivity with scatter is 250 cps/MBq at 1 cm and 125 cps/MBq at 5 cm, and its spatial resolution is 3.9-, 4.8-, and 7.6-mm FWHM at 1, 2, and 5 cm, respectively. Images are acquired in a matrix of 50 × 50 pixels. The energy resolution of the POCI, 28%, is rather poor, however, and the wide energy windows thus required result in inclusion of substantial amounts of scatter in the image, a particular disadvantage when a lymph node is close to the injection site. Figure 43.23B illustrates the manual positioning of the POCI camera during intraoperative lymphoscintigraphy. Figure 43.23C presents a representative intraoperative image, with clear visualization of two adjacent lymph nodes. In a preliminary clinical study, lymph nodes in all three patients were identified with the POCI, including one patient in whom two deep nodes were missed with a γ-probe (most likely because of depth-related loss of sensitivity and proximity of the nodes to the injection site). The total imaging times depended upon the scan area and varied from 15 seconds to 3 minutes.
Another semiconductor γ-camera, utilizing CdTe, was developed by Tsuchimochi et al.86–88 Their choice of CdTe was based on its superior uniformity (integral uniformity: 4.5%) and energy resolution (7.8%) compared to CZT. The camera, referred to as the small semiconductor γ-camera (SSGC), uses a 32 × 32 array of 5-mm-thick CdTe elements, with a matrix of 1.2- × 1.2-mm pixels and a 4.5- × 4.5-cm FOV. The tungsten collimation had 1.2- × 1.2-mm square apertures to match the pixel arrangement. Spatial resolution without scatter was 3.9-, 6.3-, and 11.2-mm FWHM at 2.5, 5, and 10 cm, respectively. The 99mTc sensitivity at the surface without scatter was 300 cps/MBq, comparable to that of the POCI and better than that of a conventional γ-camera with LEHR collimation (∼100 cps/MBq). The results of preliminary phantom and clinical imaging studies with the SSGC were encouraging.
A small FOV γ-camera equipped with pinhole collimation, known as the Sentinella 102, has been developed by General Equipment for Medical Imaging (Valencia, Spain).89–93 It uses a single 4- × 4-cm CsI(Na) scintillation crystal and a position-sensitive PMT, with images acquired in a 300 × 300 matrix. Interchangeable pinhole apertures 1, 2.5, and 4 mm in diameter are available, yielding an effective FOV of 20 × 20 cm at a distance of 18 cm. The detector assembly weighs 1 kg and is mounted on an articulating arm (Fig. 43.24A). The 99mTc sensitivity ranged from 200 to 2,000 cps/μCi at 1 cm and 60 to 160 cps/μCi at 10 cm, depending on the pinhole aperture used. The FWHM spatial resolution over the detector face is 5.4 to 8.2 mm, 7.3 to 11 mm, and 10 to 18 mm at 3, 5, and 10 cm, respectively, again depending on the pinhole aperture used. Beyond ∼3 cm, therefore, the spatial resolution is poorer than that of cameras with parallel-hole collimation. Despite the coarser resolution, however, the advantage of pinhole collimation lies in the larger effective FOV at such distances. Such a system can therefore be used at large distances to rapidly survey, with coarser resolution, a large area and then examine suspicious areas at smaller distances and finer resolution.89,90,94 In initial clinical studies, acquisition times of 20 to 60 seconds per image were required. Because the distortion associated with pinhole collimation varies with position within the FOV (i.e., is worse toward the periphery) as well as with distance, the Sentinella 102 camera is equipped with a laser positioning system, with two intersecting lines projected onto the surface of the region being imaged (Fig. 43.24B). This allows positioning of suspicious foci of activity at the center of the FOV, where image quality is best (Fig. 43.24C).
The Sentinella 102 camera is also equipped with a long-lived gadolinium-153 (153Gd) pointer for real-time positioning; the image of the 153Gd pointer source is acquired in a separate energy window from the 99mTc image and is displayed as a small marker superimposed on the 99mTc image.
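The pinhole geometry above follows from similar triangles: the effective FOV scales as the crystal size times the source-to-pinhole distance divided by the pinhole-to-crystal distance. In the sketch below, the pinhole-to-crystal spacing is inferred from the numbers quoted in the text (4-cm crystal, 20 × 20-cm effective FOV at 18 cm), not taken from a published specification:

```python
# Similar-triangles sketch of pinhole-collimator geometry. The pinhole-to-
# crystal spacing (3.6 cm) is inferred from the quoted numbers, not a spec.

def pinhole_fov_cm(crystal_cm, source_dist_cm, pinhole_to_crystal_cm):
    """Effective FOV of a pinhole collimator (similar triangles)."""
    return crystal_cm * source_dist_cm / pinhole_to_crystal_cm

def pinhole_geometric_resolution_cm(aperture_cm, source_dist_cm,
                                    pinhole_to_crystal_cm):
    """Geometric FWHM only; intrinsic detector resolution is ignored."""
    return (aperture_cm * (source_dist_cm + pinhole_to_crystal_cm)
            / pinhole_to_crystal_cm)

f = 3.6  # cm, inferred pinhole-to-crystal distance (assumption)
print(pinhole_fov_cm(4.0, 18.0, f))  # → 20.0 (cm)
# Geometric resolution of the 2.5-mm aperture at 5 cm (~6 mm, consistent with
# the 7.3- to 11-mm system values once intrinsic resolution is included):
print(round(pinhole_geometric_resolution_cm(0.25, 5.0, f), 2))
```

The same relations explain the survey-then-zoom workflow: moving the camera closer shrinks the effective FOV but sharpens the resolution.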
FIGURE 43.24. A: Photograph of the Sentinella 102 small field-of-view γ-camera (General Equipment for Medical Imaging, Valencia, Spain). The detector assembly is shown mounted on the system’s articulating arm. B: Illustration of the device’s laser positioning system, with two intersecting red lines projected onto the posterior surface of the patient’s right knee joint. This patient had malignant melanoma of the right heel, and lymphoscintigraphy was performed to identify the popliteal sentinel lymph node. C: Posterior γ-camera images of the patient’s right knee joint before (left) and after (right) surgical excision of the popliteal node. The pre-excision image (left) clearly shows the node centered in the field of view, and the postexcision image (right) is notably absent of any such focus of activity, demonstrating complete removal of the node.
The Institut Pluridisciplinaire Hubert Curien (Strasbourg, France) developed an intraoperative γ-camera known as the “CarolIReS”.77–79 This device has a relatively large-area 50- × 50-mm cerium-doped GSO scintillation crystal and parallel-hole collimation with 2-mm-wide apertures. Its 99mTc spatial resolution was 10-mm FWHM at 5 cm, sensitivity 130 cpm/kBq, and energy resolution 45%. A prototype version of this device with a larger 100- × 100-mm FOV has been fabricated as well. In a preliminary clinical study with the CarolIReS camera, Mathelin et al.79 compared the depth of lymph nodes estimated by imaging to their actual depth measured at surgery and found a generally good correlation, except in instances where only a portion of the sentinel lymph node was in the camera’s FOV. For 7 of 11 nodes whose depth could be estimated, the image-derived depth was correct.
Overall, small FOV γ-cameras have demonstrated detection rates for sentinel lymph nodes equal to or better than those of nonimaging γ-probes, despite having sensitivities (e.g., expressed in cps/MBq) typically about 10-fold lower than those of such probes. The ability of such devices to image a surgical field intraoperatively and thus ensure complete excision of lesions is a useful enhancement of surgical management of cancer. The acquisition times per image are typically well under 1 minute, so the overall duration of the surgical procedure should not be significantly prolonged. In addition, the use of pinhole collimation, despite its lower sensitivity than parallel-hole collimation, permits initial imaging at longer distance to visualize a larger anatomic area of interest, followed by imaging at shorter distance to pinpoint and otherwise characterize suspicious foci of activity. Importantly, the scintigraphic image format is familiar to surgeons, likely facilitating clinical acceptance and integration of intraoperative imaging.
Optical and Near-Infrared Imaging
Despite the very limited penetrability of optical and near-infrared (NIR) light in tissue, specialized technologies have led to widespread and very productive use of light (both bioluminescence and fluorescence) for in vivo imaging of rodents95,96 and, to a much more limited extent to date, of human subjects; in the case of the latter, this has been restricted to fluorescence imaging.97 In the most common (i.e., preclinical) optical imaging paradigm, animals are placed in a light-tight imaging enclosure and the emitted optical or NIR signal is imaged by a charge-coupled device (CCD). In bioluminescence imaging (Fig. 43.25A), cells (e.g., tumor cells) which are to be localized or tracked in vivo must first be genetically transduced ex vivo to express a so-called “reporter gene,” most commonly a luciferase gene (Fig. 43.17). After the cells have been implanted, infused, or otherwise administered to the experimental animal, the luciferase substrate (e.g., D-luciferin in the case of firefly luciferase) is systemically administered. Wherever the administered substrate encounters the luciferase-expressing cells, the ensuing reaction (such as the D-luciferin–luciferase reaction) emits light, which is detected and localized by the imaging system. The CCD in bioluminescence imaging is maintained at a very low temperature (of the order of –100°C), thereby ensuring that any electronic output it produces results from light striking the CCD rather than from the background “dark current” (which would be prohibitively high at ambient temperatures). In this way, the otherwise undetectably small signal originating in vivo and escaping from the surface of the animal can produce an image. In fluorescence imaging (Fig. 43.25B), the cells to be imaged may either be genetically transduced ex vivo to express a fluorescent molecule (or fluorophore) such as green fluorescent protein (GFP), or a fluorophore probe targeting the cells of interest may be systemically administered.
In either case, the animal is then illuminated with light at an appropriate excitation wavelength (obtained by filtration or with a laser) to energize the fluorophore in situ, and the resulting emitted light (which has a slightly longer wavelength than the excitation light) is itself filtered and detected by the CCD; the difference in wavelengths between the excitation and emitted light is known as the Stokes shift. The excitation light may be provided by reflectance (or epi-) illumination or by transillumination of the animal. Further, the abundant spontaneous fluorescence (autofluorescence) of the animal’s tissues, as well as of foodstuffs in the gut, must be mathematically separated by computer processing, or “deconvolved,” from the overall fluorescence to yield an image specifically of the fluorophore; this is sometimes known as “spectral unmixing.” In practice, the resulting luminescence or fluorescence image is generally superimposed on a conventional (i.e., white-light) photograph of the animal to provide some orientation as to the anatomic location of the signal(s) in vivo (Fig. 43.25C).
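At its simplest, spectral unmixing is a per-pixel linear least-squares problem: the measured multi-wavelength signal is modeled as a mix of known reference spectra. A minimal NumPy sketch, with made-up reference spectra and mixing weights chosen purely for illustration:

```python
import numpy as np

# Minimal linear spectral-unmixing sketch: each pixel's multi-wavelength
# measurement is modeled as a mix of known reference spectra (fluorophore plus
# autofluorescence), and the per-component abundances are recovered by least
# squares. The spectra and weights below are made up for illustration.

# Reference spectra sampled at four emission wavelengths (one column per component)
S = np.array([[0.1, 0.8],
              [0.4, 0.6],
              [0.9, 0.3],
              [0.5, 0.1]])  # column 0: fluorophore; column 1: autofluorescence

true_weights = np.array([2.0, 5.0])  # fluorophore, autofluorescence abundances
pixel = S @ true_weights             # simulated (noise-free) measurement

# Solve for the abundances; with noisy data this becomes a least-squares fit
weights, *_ = np.linalg.lstsq(S, pixel, rcond=None)
print(np.round(weights, 3))  # → [2. 5.]
```

Production unmixing algorithms add constraints (e.g., nonnegative abundances) and handle noise, but the core model, measurement equals spectra times abundances, is the same.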
FIGURE 43.25. Bioluminescence (A) and fluorescence optical or near-infrared (B) in vivo imaging. For fluorescence imaging, the excitation light source may be a white-light source whose emitted light is passed through conventional glass filters to yield light over a narrow wavelength range centered about the excitation wavelength of the fluorophore being imaged. Alternatively, it may be a laser light source tuned to the appropriate wavelength. In so-called “multispectral” systems, multiple excitation and emission wavelengths and thus multiple fluorophores may be imaged simultaneously. C, D: These sample images show pseudo-color bioluminescence images superimposed on a grayscale photograph and on a 3D rendering of a mouse, respectively. CCD, charge-coupled device. (From Zanzonico P. Noninvasive Imaging for Supporting Basic Research. In: Kiessling F, Pichler BJ, eds. Small Animal Imaging–Basics and Practical Guide. 2011:3–16. With kind permission from Springer Science+Business Media.)
Because light emitted at any depth in tissue is scattered and otherwise dispersed as it passes through overlying tissue before emanating from the surface of the animal, the apparent size of the light source (Fig. 43.25C and D) is considerably larger than its actual size. Despite the excellent spatial resolution of the CCDs themselves, the effective resolution of optical and NIR imaging is therefore generally rather coarse. Further, for planar optical and NIR imaging, the resulting images are only semiquantitative: absorption and scatter of the emitted light as it passes through overlying tissue make the measured signal highly depth dependent. Thus, a focus of cells lying deep within tissue may appear less luminescent or fluorescent than an identical focus of cells at a shallower depth; if excessively deep, such a focus of cells may be undetectable altogether. NIR light, however, penetrates tissue substantially better than blue-to-green light. Importantly, therefore, by using laser transillumination for excitation of administered NIR molecular probes in situ, tomographic fluorescence images can be mathematically reconstructed.96 The resulting 3D images, in contrast to planar images, are quantitative: the signal intensity thus reconstructed is directly related to the local concentration of the fluorophore.
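The depth dependence described above can be illustrated with a simple (assumed) Beer-Lambert-like attenuation model; the effective attenuation coefficients below are illustrative values, not measured tissue optical properties:

```python
import math

# Why planar optical imaging is only semiquantitative: under a simple
# (assumed) Beer-Lambert-like model with an effective attenuation coefficient,
# identical sources at different depths yield very different surface signals.
# The coefficients below are illustrative, not measured tissue values.

def surface_signal(source_intensity, depth_mm, mu_eff_per_mm):
    """Fraction of emitted light reaching the surface from a given depth."""
    return source_intensity * math.exp(-mu_eff_per_mm * depth_mm)

mu_green = 1.0  # per mm; strong attenuation of green light (illustrative)
mu_nir = 0.2    # per mm; weaker attenuation in the NIR window (illustrative)

for depth_mm in (1, 5, 10):
    g = surface_signal(1.0, depth_mm, mu_green)
    n = surface_signal(1.0, depth_mm, mu_nir)
    print(f"{depth_mm} mm: green {g:.3g}, NIR {n:.3g}")
```

In this toy model, a green-emitting focus a few millimeters deeper loses orders of magnitude of surface signal, whereas an NIR source at the same depths remains detectable, which is the rationale for NIR probes and for tomographic reconstruction when quantitation is needed.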
Bioluminescence imaging, because it requires genetic transduction of the cells to be imaged, likely has limited applicability in patients but has proven invaluable in preclinical research. However, with recent advances in adoptive immunotherapy of cancer, bioluminescence imaging conceivably may have some clinical utility in an intraoperative or endoscopic setting to assess tumor targeting of immune effector cells. To date, however, no such studies have been performed. Intraoperative, endoscopic, and even surface fluorescence imaging of patients has been performed and continues to advance (see below).
Illustrative Applications of Optical and NIR Imaging: Preclinical
Noninvasive imaging of molecular genetic and cellular processes using various reporter-gene constructs complements existing ex vivo assays, and adds both spatial and temporal information to the understanding of different molecular and genetic processes. A number of preclinical studies, for example, have described successful monitoring of various signal transduction pathways. Noninvasive reporter-gene imaging has also been successfully applied to monitoring various types of gene therapy, including gene delivery and expression mediated by retroviral, adenoviral, and herpes viral vectors. Reporter-gene imaging has found wide application in the development and monitoring of different adoptive cell therapies. Noninvasive in vivo molecular-genetic imaging developed over the past two decades has utilized optical imaging as well as nuclear (PET, SPECT) and MR imaging. The convergence of molecular and cell biology and imaging modalities has thus provided the opportunity to address new research questions, including oncogenesis and tumor maintenance and progression as well as responses to molecular pathway-targeted therapy. Several applications of the gene-imaging paradigm are detailed below.
Holland et al. have used bioluminescence imaging and a genetically engineered mouse model (GEMM) of glioma to investigate the biology of this very difficult malignancy. This spontaneous tumor model recapitulates the biology and clinical course of human glioma with remarkable fidelity (Figure 43.26A).98,99 To produce this GEMM of glioma (Fig. 43.26), Ntv-a INK4a-ARF–/– mice are injected intracranially with DF-1 cells infected with and producing oncogenic RCAS-PDGF retroviral vectors—with platelet-derived growth factor (PDGF) under control of the nestin-responsive element—within 24 hours of birth. Cell-surface Ntv-a receptors in transgenic mice bind and allow somatic cell transfer of the gene construct. Site (i.e., brain)-specific nestin, through the Ntv-a receptor, activates transcription of a negative regulatory element (NRE) and PDGF, stimulating proliferation and progression to glioma of RCAS-PDGF retrovector-infected cells. In a reporter (i.e., luciferase) gene–transfected version of this mouse line, the gene encoding luciferase is controlled by the human E2F1 promoter, which exhibits tumor-specific activity in vivo. Bioluminescence imaging has been used to noninvasively identify and localize gliomas, assess their proliferation, and monitor their response to therapy100 (Fig. 43.26B–D). G1 cell-cycle arrest, for example, by blockade of either the PDGF receptor (e.g., with an investigational agent designated “PTK787/ZK222584”) or mTOR using small-molecule inhibitors (e.g., the rapamycin analog CCI-779) was demonstrated and serially monitored by bioluminescence imaging.
Another application of bioluminescence imaging was in the genetic transfer of antigen receptors, shown to be a very effective approach for generating tumor-specific T lymphocytes.101–103 Unlike the physiologic T-cell receptor, chimeric antigen receptors (CARs) encompass immunoglobulin variable regions or receptor ligands as their antigen recognition moiety, thus permitting T cells to recognize tumor antigens in the absence of human leukocyte antigen expression. CARs encompassing the CD3Z chain as their activating domain induce T-cell proliferation in vitro. The requirements for genetically targeted T cells to function in vivo are less well understood. Animal models have therefore been developed to assess the therapeutic efficacy of human peripheral blood T lymphocytes targeted to prostate-specific membrane antigen (PSMA), an antigen expressed in prostate cancer cells and the neovasculature of various solid tumors. In vivo specificity and antitumor activity have been assessed in mice bearing established prostate adenocarcinomas, using serum prostate-specific antigen, MRI, CT, and bioluminescence imaging to investigate the response to therapy (Fig. 43.27).103 In three tumor models (orthotopic, subcutaneous, and pulmonary), PSMA-targeted T cells were shown to effectively eliminate prostate cancer. The eradication of xenogeneic tumors in a murine environment shows that the adoptively transferred T cells do not absolutely require in vivo costimulation to function. Such results provided a compelling rationale for recently initiated Phase-1 clinical trials to assess PSMA-targeted T cells in patients with metastatic prostate cancer.
Over the years, reporter systems have been developed for multimodality gene imaging using bioluminescence, fluorescence, nuclear, and MR imaging techniques. In one such system, a single fusion protein with three functional subunits, FLuc, GFP, and herpes simplex virus type 1 thymidine kinase (HSV1-tk), was produced, functionally characterized in vitro, and successfully applied in multimodality in vivo imaging studies in tumor-bearing nude mice (Fig. 43.28). HSV1-tk activity is assayed using radioactively labeled thymidine analogs (e.g., 124I- or 131I-2′-fluoro-2′-deoxy-1-β-D-arabinofuranosyl-5-iodo-uracil [FIAU]) for PET and SPECT imaging; many such nuclear imaging probes have been developed.50 Such multimodality reporter-gene constructs provide for the transition from fluorescence microscopy and fluorescence-activated cell sorting (FACS) to in vivo bioluminescence imaging to in vivo nuclear (PET, SPECT, γ-camera) imaging.
FIGURE 43.26. A: Histology of a genetically engineered mouse model of glioblastoma with features similar to those of the human disease. B: Approximate correlation between the tumor size and bioluminescence signal. C: Longitudinal bioluminescence imaging study of PDGFB-induced gliomagenesis in EF-Luc N-tv-a transgenic mice. Left panel: One mouse imaged every third day for 39 days. Right panel: Five mice imaged daily for 5 days. D: Preclinical trials of PDGF-induced glioma-bearing Ef-Luc N-tv-a mice. Left panel: Longitudinal imaging of one mouse treated with the PDGF receptor (PDGFR) blocker PTK787/ZK222584. Right (graphs): Longitudinal study with five mice per cohort and the comparative bioluminescence imaging responses to no treatment (buffer only), PDGFR blockade with PTK787/ZK222584, and mTOR inhibition with the rapamycin analog CCI-779, with approximate correlation between the signal and the intensity of immunohistochemical staining of tumor for proliferation with PCNA. (From Uhrbom L, Nerio E, Holland EC. Dissecting tumor maintenance requirements using bioluminescence imaging of cell proliferation in a mouse glioma model. Nat Med. 2004;10:1257–1260. Copyright 2004. Reprinted by permission from Macmillan Publishers Ltd.)
Illustrative Applications of Optical and NIR Imaging: Clinical
As noted, the clinical application of optical imaging to date has utilized fluorescence imaging in endoscopic and intraoperative settings. Fluorescence cystoscopy, for example, is now widely used to identify and localize urinary bladder cancer.104 The photosensitizer 5-aminolevulinic acid (ALA) is a precursor of the photoreactive (at 375 to 440 nm) protoporphyrin IX (PpIX). Although the mechanism is not yet well understood, ALA-induced PpIX accumulates selectively in cancerous tissue, yielding tumor-to-nontumor concentration ratios of ∼20:1 within 2 hours of topical administration within the bladder. As illustrated in Fig. 43.29, fluorescence cystoscopy allows more sensitive and specific visualization of bladder cancer in vivo than conventional (i.e., white-light) imaging.
FIGURE 43.27. Demonstration, by multimodality (firefly luciferase [FLuc] bioluminescence, MR, and CT) imaging, of therapeutic effectiveness of adoptive immunotherapy of cancer. Prostate-specific membrane antigen (PSMA)- and FLuc (as a reporter gene)-transduced RM1 prostate cancer cells were injected intravenously into mice to produce a model of lung metastases and were then treated with systemically administered T cells genetically engineered to express either the PSMA-specific (PZ1) receptor or, as a negative control, the nonspecific (19Z1) receptor. The T cells were administered on the day indicated following infusion of the cancer cells. By all three modalities, the specific, but not the nonspecific, T cells eradicated the lung disease. (From Gade TP, Hassen W, Santos E, et al. Targeted elimination of prostate cancer by genetically directed human T lymphocytes. Cancer Res. 2005;65:9080–9088, with permission.)
In addition to endoscopic fluorescence imaging, large-field, planar optical and NIR fluorescence imaging may potentially improve human surgery by providing real-time image guidance to surgeons to identify tissue to be resected (such as tumors) and tissue to be avoided (such as blood vessels and nerves). As illustrated in Figure 43.30, the use of ALA has been extended to systemic administration and fluorescence imaging-guided resection of glioblastomas.105 Based on overexpression of folate receptor-α, intraoperative fluorescence imaging has also been applied to resection of ovarian cancer using folate conjugated via an ethylene diamine spacer to fluorescein isothiocyanate (FITC) (Fig. 43.31). To further advance the practical implementation of fluorescence imaging-guided surgery, Frangioni et al. have developed the so-called fluorescence-assisted resection and exploration (FLARE) system.106–112 Briefly, the FLARE system consists of an imaging head mounted on an articulated arm and a cart containing control equipment, computer, and monitors (Fig. 43.32). The imaging head includes light-emitting diodes (LEDs) as the excitation-light source, heat-dissipation technology to maintain stability of the LEDs, and complementary metal oxide semiconductor (CMOS) cameras; it can be positioned anywhere in 3D space with six degrees of freedom. A customized software system enables the real-time display of color video and two NIR fluorescence channels at a rate of up to 15 frames per second. The software is capable of displaying the NIR fluorescence signal as a pseudocolored overlay on the color video, thereby providing anatomic guidance to the surgeon. Among its many applications to date, the FLARE system has been applied to sentinel lymph node resection in breast cancer surgery (Fig. 43.33)112 and has demonstrated improved visualization of nodes and of tumors.106–113
FIGURE 43.28. Noninvasive multimodality reporter-gene imaging of a mouse with a subcutaneous xenograft produced from genetically transduced (right shoulder) and from wild-type (nontransduced) U87 tumor cells (left shoulder). The transduced tumor cells express GFP, FLuc, and HSV1-tk as reporters. Fluorescence image of GFP (left panel), bioluminescence imaging of FLuc (middle panel), and transaxial PET images of 124I-2′-fluoro-2′-deoxy-1-β-D-arabinofuranosyl-5-iodo-uracil (FIAU) at the levels indicated by the dotted white lines (right panel). Note that the negative-control wild-type tumor in the contralateral shoulder is not imaged by any of the modalities. (From Ponomarev V, Doubrovin M, Serganova I, et al. A novel triple-modality reporter gene for whole-body fluorescent, bioluminescent, and nuclear noninvasive imaging. Eur J Nucl Med Mol Imaging. 2004;31:740–751. With kind permission from Springer Science+Business Media.)
FIGURE 43.29. Cystoscopic imaging of urinary bladder cancer. Conventional (i.e., white-light) image (left panel) and fluorescence image (right panel) following topical administration of the protoporphyrin IX (PpIX) precursor 5-aminolevulinic acid (ALA). The red coloration of the cancerous tissue makes it far more apparent in the fluorescence image than in the white-light image. (From Witjes JA, Douglass J. The role of hexaminolevulinate fluorescence cystoscopy in bladder cancer. Nat Clin Pract Urol. 2007;4:542–549. Copyright 2007. Reprinted by permission of Macmillan Publishers Ltd.)
Specialized Optical and NIR-Imaging Modalities
A new approach to optical imaging is based on the emission of a continuum of visible light associated with the decay of certain radionuclides (actually, with the particles emitted as a result of the radionuclide decay).114–123 This phenomenon, now known as the “Cerenkov effect,” was first observed in the 1920s and characterized in the 1930s by Cerenkov.115 In 1958, Cerenkov shared the Nobel Prize in Physics with colleagues Frank and Tamm for the discovery and explanation of the effect that now bears his name. Cerenkov radiation is perhaps familiar to some readers as the bluish “glow” observed in the water pools containing spent, but still radioactive, fuel rods at nuclear reactors. It arises when charged particles such as β-particles travel through an optically transparent, insulating medium at a speed greater than that of light in that medium. The Cerenkov effect, often analogized to the sonic boom produced when a plane exceeds the speed of sound in air, occurs as the charged particles dissipate their kinetic energy by polarizing the electrons in the insulating medium (most commonly, water) as they travel through the medium. As these polarized electrons then relax (or re-equilibrate), and if the charged particle is traveling faster than light in the medium, constructive interference of the light thus emitted occurs, producing the grossly visible Cerenkov radiation.
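The condition that the particle outrun light in the medium (β > 1/n) implies a minimum particle energy. As a hedged numerical aside, not drawn from the chapter, the sketch below applies the standard relativistic relation to an electron and yields a threshold kinetic energy of roughly 0.26 MeV in water (n ≈ 1.33), which is why only sufficiently energetic β-particles produce Cerenkov light.

```python
import math

M_E_C2_MEV = 0.511  # electron rest energy, MeV

def cerenkov_threshold_ke_mev(n):
    """Minimum electron kinetic energy (MeV) for Cerenkov emission in a
    medium of refractive index n: requires beta = v/c > 1/n."""
    beta = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return M_E_C2_MEV * (gamma - 1.0)

print(f"water (n=1.33): {cerenkov_threshold_ke_mev(1.33):.3f} MeV")
```

Because the threshold falls as the refractive index rises, denser optically transparent media light up at lower particle energies.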
The application of Cerenkov radiation to in vivo radionuclide imaging is a recent development.114,116–123 Phantom studies by Ruggiero et al. have demonstrated, using a commercial optical imaging system (Ivis 200, Caliper Life Sciences) equipped with a cryo-cooled CCD, measurable emission of Cerenkov radiation associated with a number of clinically relevant radionuclides, including 18F, copper-64 (64Cu), zirconium-89 (89Zr), iodine-124 (124I), iodine-131 (131I), and actinium-225 (225Ac)122 (Fig. 43.34A and B). Importantly, the optical Cerenkov signal is linearly related to activity concentrations (Fig. 43.34C), at least where the effects of attenuation and scatter are minimal. Ruggiero et al. have also produced planar Cerenkov images of the tumor localization of a 89Zr-labeled antibody in prostate tumor xenografts in mice which compare favorably, both qualitatively and quantitatively, with the 89Zr PET images122 (Fig. 43.35). Using intradermal tail injections of 18F-FDG, Thorek et al. subsequently performed both PET- and Cerenkov imaging–based lymphography in mice, with both modalities demonstrating excellent visualization of lymph nodes123 (Fig. 43.36). Initial clinical trials of Cerenkov imaging are currently underway to delineate its potential clinical utility; as illustrated in Figure 43.36, assisting the resection of sentinel lymph nodes appears to be one potential application.
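The linearity between Cerenkov radiance and activity concentration reported above is the kind of relationship one verifies with a simple correlation analysis. The sketch below uses synthetic placeholder numbers, not data from the Ruggiero study, purely to show the computation behind a correlation coefficient such as the r = 0.98 quoted for Figure 43.34C.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Synthetic phantom-style data (illustrative only): activity
# concentration (kBq/uL) versus average radiance (p/s/cm2/sr).
activity = [10.0, 20.0, 40.0, 80.0, 160.0]
radiance = [105.0, 198.0, 410.0, 805.0, 1590.0]
print(f"r = {pearson_r(activity, radiance):.4f}")
```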
FIGURE 43.30. Intraoperative white-light image (left panel) and fluorescence image (right panel) following systemic administration of aminolevulinic acid (ALA; 20 mg/kg 3 hours prior to surgery) of the intracranial resection site of a glioblastoma. The red coloration of the cancerous tissue, blue coloration of normal brain, and lack of color of nonperfused, necrotic tissue allow easier discrimination of these respective tissues than does the white-light image. This in turn should facilitate a more complete resection of the tumor and, presumably, better local control. (From Stummer W, Novotny A, Stepp H, et al. Fluorescence-guided resection of glioblastoma multiforme by using 5-aminolevulinic acid-induced porphyrins: A prospective study in 52 consecutive patients. J Neurosurg. 2000;93:1003–1013, with permission.)
FIGURE 43.31. A: Photograph of intraoperative use of fluorescence imaging system developed by Ntziachristos et al. B: Conventional color image (left panel) and fluorescence grayscale image (right panel) following systemic administration of FITC-conjugated folate (0.3 mg/kg) of intra-abdominal resection site of ovarian cancer. The tumor deposits are more clearly visualized on the fluorescence image than on the color image. C: Graphical presentation of the results of conventional versus fluorescence imaging-assisted resection of ovarian cancer, demonstrating that intraoperative fluorescence imaging (FLI) visualized significantly more ovarian cancer deposits than conventional imaging. (From van Dam GM, Themelis G, Crane LM, et al. Intraoperative tumor-specific fluorescence imaging in ovarian cancer by folate receptor-α targeting: First in-human results. Nat Med. 2011;17:1315–1319. Copyright 2011. Reprinted by permission from Macmillan Publishers Ltd.)
In photoacoustic imaging,124–126 tissues are illuminated with laser light. Some of the delivered energy is absorbed and converted into heat, leading to transient thermoelastic expansion of the illuminated tissue and thus ultrasonic (i.e., MHz-frequency) emissions. The ultrasonic waves thus emitted are then detected by ultrasonic transducers to form images. Image contrast is provided by the differential absorption among tissues of the incident excitation light. In contrast to fluorescence imaging, in which scattering in tissue degrades spatial resolution with increasing depth, photoacoustic imaging provides better spatial resolution (of the order of 100 μm) and deeper imaging depth (of the order of 1 cm or greater) because there is far less absorption and scattering in tissue of the ultrasonic signal than of the emitted light signal in fluorescence imaging. Compared with US imaging, in which contrast is limited by the similarity in acoustical properties among tissues, photoacoustic imaging provides better tissue contrast as a result of the wider range of tissue optical properties. The optical absorption in biologic tissues can arise from endogenous molecules such as hemoglobin or melanin or from exogenously administered contrast agents. Since blood exhibits orders of magnitude higher light absorption than other tissues, oxygenated hemoglobin (HbO2) and deoxygenated hemoglobin (Hb) provide sufficient endogenous contrast for photoacoustic imaging to visualize blood vessels.
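The heat-to-pressure conversion just described is commonly summarized by the initial-pressure relation p0 = Γ·μa·F (Grüneisen parameter × optical absorption coefficient × local fluence). The sketch below evaluates this relation; the parameter values are assumptions chosen only to illustrate the order of magnitude, not values from the chapter.

```python
def initial_pressure_kpa(grueneisen, mu_a_per_cm, fluence_j_per_cm2):
    """Initial photoacoustic pressure rise p0 = Gamma * mu_a * F.
    mu_a [1/cm] x F [J/cm^2] gives absorbed energy density in J/cm^3;
    1 J/cm^3 corresponds to 1e6 Pa, i.e., 1e3 kPa."""
    return grueneisen * mu_a_per_cm * fluence_j_per_cm2 * 1e3

# Assumed illustrative values: Gamma ~0.2 for soft tissue, mu_a ~2 /cm
# for blood in the NIR, and a 20 mJ/cm^2 surface fluence.
print(f"p0 ~ {initial_pressure_kpa(0.2, 2.0, 0.02):.1f} kPa")
```

A kilopascal-scale pressure transient of this kind is readily detectable with conventional MHz ultrasonic transducers, which is the basis of the technique's blood-vessel contrast.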
FIGURE 43.32. Photograph of the FLARE system developed by Frangioni et al. for intraoperative fluorescence imaging. (From Troyan SL, Kianzad V, Gibbs-Strauss SL, et al. The FLARE intraoperative near-infrared fluorescence imaging system: A first-in-human clinical trial in breast cancer sentinel lymph node mapping. Ann Surg Oncol. 2009;16:2943–2952. With kind permission from Springer Science+Business Media.)
FIGURE 43.33. Use of the FLARE system in intraoperative localization and resection of sentinel lymph nodes (SLNs) in breast cancer surgery. Four peritumoral injections were performed of indocyanine green (ICG) conjugated to human serum albumin (10 μg of ICG in 0.2 mL per injection). The conventional color video images in the left panels show the surgical field and the incision in the proximity of four SLNs. The near-infrared (NIR) fluorescence video images (100 msec per exposure) show the injection site (top middle panel) and the four ICG-albumin–concentrating SLNs (arrows, middle panels). For the images in the lower two middle panels, the injection site was covered with an opaque surgical drape. The right panels show the overlaid, or merged, color and fluorescence images. The SLNs are clearly far more apparent in the fluorescence than in the color images. Inj, injection. (From Troyan SL, Kianzad V, Gibbs-Strauss SL, et al. The FLARE intraoperative near-infrared fluorescence imaging system: A first-in-human clinical trial in breast cancer sentinel lymph node mapping. Ann Surg Oncol. 2009;16: 2943–2952. With kind permission from Springer Science+Business Media.)
FIGURE 43.34. A: Phantom for evaluation of Cerenkov imaging of positron-emitting radionuclides, composed of a circular arrangement of six 1-mL Eppendorf tubes filled with increasing activity concentrations. Left panel: Cerenkov image superimposed on a photograph of the phantom acquired with an Ivis 200 optical imaging system (Caliper Life Sciences). Right panel: PET image of the phantom acquired with a Focus 120 microPET scanner (Concorde Microsystems). B: Average radiance per unit activity concentration (in photons [p]/second [s]/cm2/steradian [sr] per kBq/μL) for different radionuclides as measured using the phantom arrangement and instrumentation described in (A). C: Linear correlation (r, 0.98) between average radiance (in p/s/cm2/sr) and activity concentration (in kBq/μL) for 89Zr evaluated again using the phantom arrangement and instrumentation described in (A). (Adapted from Ruggiero A, Holland JP, Lewis JS, et al. Cerenkov luminescence imaging of medical isotopes. J Nucl Med. 2010;51:1123–1130, with permission.)
FIGURE 43.35. A: PET image (left panel) and Cerenkov image (right panel) of a 89Zr-anti-PSMA (prostate-specific membrane antigen) antibody (J591) in a mouse with bilateral flank LNCaP prostate tumor xenografts at 96 hours post injection. The Cerenkov image is superimposed on a photograph of the mouse. The two xenografts are clearly visible in both images. B: Linear correlation (r = 0.89) between the Cerenkov image-derived average radiance (in p/s/cm2/sr) and the PET image-derived maximum tissue uptake (in percent of the injected dose per gram, %ID/g). (From Ruggiero A, Holland JP, Lewis JS, et al. Cerenkov luminescence imaging of medical isotopes. J Nucl Med. 2010;51:1123–1130, with permission.)
Most commonly, photoacoustic scanners use either a tomographic geometry (with an array of up to several hundred US transducers partially surrounding the subject)124 or a planar geometry employing a linear transducer array.127,128 The tomographic approach offers a large effective aperture for data collection, but suffers from a low frame rate (>10 minutes per frame), because of the need for hundreds to thousands of laser pulses per frame. The use of a linear array eliminates the need for scanning and thus a 2D frame can be acquired with many fewer laser pulses, providing much higher frame rates. In addition, in the tomographic geometry, the surface of the transducers is of the order of 1 cm from the surface of the subject (up to now, rodents) to accommodate an array of transducers encircling the subject. As a result, the mouse or rat must be immersed in water to provide the necessary acoustical coupling to the transducer; in the multispectral optoacoustic tomography (MSOT) system marketed by iThera Medical, the animal is suspended in a very thin membrane and then immersed in the water, thereby keeping the animal completely dry (Fig. 43.37A). Another preclinical photoacoustic imaging system, employing a linear transducer array, is marketed by Visualsonics.
FIGURE 43.36. Cerenkov imaging-guided lymph node resection in a normal mouse following intradermal tail injection of 18F-FDG (30 μCi). Left panel: Volume rendering of fused CT image (yellow color table) and PET image (green color table) showing lymphatics and lymph nodes in relation to the skeleton. Right panel: Cerenkov image superimposed on a photograph of the mouse following removal post sacrifice of the dorsal skin before (left image) and after (right image) resection of a luminescent inguinal lymph node. The resected node ex vivo is shown in the inset image. (From Thorek DL, Abou DS, Beattie BJ, et al. Positron lymphography: Multimodal, high-resolution, dynamic mapping and resection of lymph nodes after intradermal injection of 18F-FDG. J Nucl Med. 2012;53:1438–1445, with permission.)
Photoacoustic imaging has been used successfully in preclinical models for tumor perfusion and angiogenesis monitoring (see Fig. 43.37),125 blood oxygenation mapping, functional brain imaging, and melanoma detection, among other applications. The resulting functional images can be superimposed on high-resolution B-mode anatomic images.
FIGURE 43.37. A: Setup for photoacoustic imaging of regional perfusion in a tumor xenograft on the dorsal surface of a nude mouse. Note that the tumor (on the dorsal surface of the animal and therefore not seen) is “immersed” in water. The animal remains dry, however, because of the presence of a thin membrane between the animal and the water. B: Photograph of the 4T1 xenograft on the dorsal surface of the mouse. In (A) and (B), the dashed line indicates the approximate position of the transverse tissue section being imaged. C: Images (acquired at 790 nm) before (left panel) and 30 seconds after (right panel) intravenous injection of indocyanine green (ICG) (V, ventral; D, dorsal; L, left; R, right). Note the increase in image contrast in and around the tumor (arrow) after injection, identifying the more highly perfused portions of the tumor. (From Herzog E, Taruttis A, Beziere N, et al. Optical imaging of cancer heterogeneity with multispectral optoacoustic tomography. Radiology. 2012;263:461–468, with permission.)
Diffuse Optical Tomography
Diffuse optical tomography (DOT) utilizes NIR light to generate quantitative functional images of tissue with a spatial resolution of 1 to 5 mm at depths up to several centimeters.129,130 Propagation of NIR light through a medium is dominated by scattering rather than absorption—tissue absorption path lengths are ∼10 cm whereas scattering path lengths are less than 50 μm—and can be modeled as a diffusion process in which photons behave stochastically (in a manner analogous to that of particles in random-walk modeling of diffusion). Quantitative measurements can be obtained by separating light absorption from scattering using spatial- or temporal-modulation techniques. Tissue molecular composition (including the concentrations of oxy- and deoxyhemoglobin, water, lipid, and exogenous probes) and tissue structure can be determined from absorption and scattering measurements, respectively. Time-modulation systems use picosecond optical pulses and time-gated photon-counting detectors; frequency-modulation systems use an RF-modulated light source, PMTs or fast photodiodes, and RF phase detectors. DOT has been applied to breast cancer diagnostics, joint imaging, and blood oximetry (i.e., activation studies) in human muscle and brain tissue, as well as to cerebral ischemia and cancer studies in small animals. Commercial instruments are now available that yield tomographic and volumetric image sets. These devices are compact, portable, and relatively inexpensive (∼$150 K).
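The random-walk picture above can be made concrete with a small Monte Carlo sketch: step lengths are drawn from an exponential distribution with the ∼50-μm scattering path length, and after a few thousand scattering events the photon has traveled centimeters of total path but only millimeters of net distance. The step counts and a 2D geometry are simplifying assumptions for illustration only.

```python
import math
import random

random.seed(0)
L_SCAT = 0.005  # scattering path length, cm (~50 um)

def net_displacement_cm(n_events):
    """Net displacement of an isotropic 2D random walk with
    exponentially distributed step lengths (mean L_SCAT)."""
    x = y = 0.0
    for _ in range(n_events):
        step = random.expovariate(1.0 / L_SCAT)
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
    return math.hypot(x, y)

n_events = 2000                      # scattering events per photon
path_cm = n_events * L_SCAT          # total path traveled: 10 cm
mean_net = sum(net_displacement_cm(n_events) for _ in range(50)) / 50
print(f"path ~{path_cm:.0f} cm, mean net displacement ~{mean_net:.2f} cm")
```

The expected net displacement grows only as the square root of the number of scattering events, which is the diffusive behavior that DOT reconstruction models exploit.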
Optical Coherence Tomography
Optical coherence tomography (OCT) is an interferometric technique, typically employing low-coherence NIR light, to produce 2D images of tissue surface layers and structure.131 The principle of OCT is analogous to that of pulse-echo (i.e., B-mode) US imaging, except that OCT delineates tissue structure by measuring the reflectance of light rather than of sound waves; it thus achieves far better spatial resolution but less depth penetration. The technique has been described as “an optical biopsy,” since OCT can produce near-histologic images (spatial resolution: 1 to 15 μm) without excision. Because of photon absorption and scattering, its sampled depth is limited to within several millimeters of the tissue surface. The 2D images can be assembled to construct a volumetric image set. In OCT, the axial resolution is proportional to the square of the center wavelength and inversely proportional to the bandwidth of the light source, and improves with the index of refraction of the sample. Originally developed for and still most commonly applied to ophthalmology (to obtain detailed images of retinal structure), OCT is being applied for cancer diagnosis and tissue characterization.
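For a Gaussian-spectrum source, the axial-resolution relationship just stated takes the standard form Δz = (2 ln 2/π)·λ0²/(n·Δλ). The sketch below evaluates it for a representative, assumed source (1300-nm center wavelength, 100-nm bandwidth); these parameters are not taken from the chapter.

```python
import math

def axial_resolution_um(center_nm, bandwidth_nm, n=1.0):
    """OCT axial resolution for a Gaussian-spectrum source:
    dz = (2 ln 2 / pi) * lambda0^2 / (n * dlambda)."""
    dz_nm = (2.0 * math.log(2.0) / math.pi) * center_nm ** 2 / (n * bandwidth_nm)
    return dz_nm * 1e-3  # nm -> um

# Assumed source parameters for illustration.
print(f"air:    {axial_resolution_um(1300.0, 100.0):.1f} um")
print(f"tissue: {axial_resolution_um(1300.0, 100.0, n=1.38):.1f} um")
```

Note that a broader bandwidth (shorter coherence length) sharpens the axial resolution, and that the resolution in tissue improves by the factor of the refractive index.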
Raman Spectroscopic Imaging
When light interacts with matter, most of the light is elastically scattered, retaining its original energy, frequency, and wavelength; this phenomenon is also known as Rayleigh scattering (Fig. 43.38A). However, a small fraction of light is inelastically scattered, with the scattered light having a lower energy and frequency and longer wavelength than the incident light. The process leading to this inelastic scatter is termed the Raman effect,132 and the difference in wavelength between the incident and scattered light is called the Raman shift. Because photons with optical energies interact with outer-shell, or valence, atomic electrons, which are responsible for the intramolecular chemical bonds among atoms, materials having different molecular compositions will inelastically scatter light differently. Every molecule therefore has a distinct Raman spectrum (or “signature”), that is, a different Raman shift-dependent intensity of the scattered light; this is the basis of using Raman spectroscopy to identify the molecular constituents of various materials.132 By illuminating a sample with a highly collimated beam of light and at the same time translating either a scattered-light detector or the sample in two dimensions, spatial indexing of the Raman spectrum can be performed and a Raman spectrum image created. Though they may appear similar, the Raman effect is distinct from fluorescence in that the former represents a light-scattering phenomenon and the latter light absorption and re-emission. Like fluorescence imaging, Raman spectroscopic imaging has been applied to endogenous (or intrinsic) molecules naturally present in tissue and to exogenously administered materials (as in surface-enhanced Raman scattering [SERS] [see below]).
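The Raman shift is conventionally expressed in wavenumbers (cm⁻¹), the difference between the reciprocal wavelengths of the incident and scattered photons. The 785-nm/895-nm wavelength pair below is an assumed example chosen only to show the conversion, not a value from the chapter.

```python
def raman_shift_cm1(lambda_incident_nm, lambda_scattered_nm):
    """Raman shift in wavenumbers; the factor 1e7 nm/cm converts
    reciprocal nanometers to reciprocal centimeters."""
    return 1e7 * (1.0 / lambda_incident_nm - 1.0 / lambda_scattered_nm)

# Assumed example: 785-nm excitation with Stokes-shifted scatter at
# 895 nm corresponds to a shift of roughly 1566 cm^-1.
print(f"{raman_shift_cm1(785.0, 895.0):.0f} cm^-1")
```

A positive shift (scattered wavelength longer than incident) is the Stokes case described in the text; elastically (Rayleigh) scattered light has zero shift.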
FIGURE 43.38. A: Raman effect, showing sample being illuminated with incident photons of wavelength (λi). Most of the incident photons are scattered elastically (Rayleigh scattering) and resulting scattered photons have same wavelength (λs) as the incident photons (i.e., λs = λi). A few photons are inelastically scattered (Raman scattering) at wavelengths longer than incident photons (i.e., λs > λi). The relative proportion of inelastically scattered photons is typically depicted using a Raman spectrum, which is a plot of scattered-photon intensity versus the Raman shift (i.e., the energy difference between the incident and scattered photons). Multiple different wavelengths of inelastically scattered light can occur and a spectrum plot can therefore include multiple peaks, although a single primary peak, as shown, is also possible. B: Molecular imaging-agent approach showing surface-enhanced Raman scattering (SERS) nanoparticles, which consist of a metallic core, a Raman active layer adsorbed onto the metal surface, and a shell coating the entire particle. An array of unique spectral signatures can be obtained by modifying the Raman-active layer of the nanoparticle. These unique Raman nanoparticles can serve as molecular imaging agents for in vitro and in vivo procedures. C: The intrinsic approach, showing a human tissue specimen being illuminated with a laser. The intrinsic Raman spectral signature of tissue can reveal important information about phosphate, protein, and lipid content of cells or tissue of interest. (From Zavaleta CL, Kircher MF, Gambhir SS. Raman’s “effect” on molecular imaging. J Nucl Med.2011;52:1839–1844, with permission.)
A drawback of the Raman effect as an analytical tool is that it is a very weak phenomenon, producing only 1 inelastically scattered photon for every 10 million elastically scattered photons. Technical advances, such as the introduction of lasers and of resonance-based enhancements, have greatly expanded the practical applications of the Raman effect. For years now, the Raman effect has been used in a variety of analytical applications and, more recently, in various in vitro cell assays and microscopy. The most commonly used enhancement methods are SERS and coherent anti-Stokes Raman scattering (CARS); both SERS and CARS enhance the Raman signal by orders of magnitude. SERS involves adding metal (e.g., gold) nanoparticles which absorb the optical energy and yield an enhanced Raman signal by virtue of the metal surface transferring energy to nearby molecules (Fig. 43.38B). The resulting Raman signal provides picomolar sensitivity, which is compatible with tissue tracer concentrations achievable in vivo. Furthermore, labeling SERS nanoparticles with several different molecular ligands can provide simultaneous assay of multiple molecular components. CARS involves illumination with photons of two different energies: one photon energizes a molecule of interest from its ground state to an initial excited state, and a second photon energizes the molecule from a “relaxed” state (i.e., an energy level reached after releasing energy during the “laser-off” interval following absorption of the first photon) to a different, higher-energy level; the signal from this second (or higher) tier of vibrational energy is ∼10⁵-fold more intense than the Raman signal after the original pulse (Fig. 43.38C). The CARS technique is often used for high-resolution, 3D microscopy. Its advantages include rapid image acquisition and the absence of a need for administered probes (in contrast to SERS).
Several preclinical studies have utilized the SERS or CARS techniques for in vivo molecular imaging of cell receptors (e.g., RGD-carbon nanotubes that bind to αvβ3 integrin-expressing tumors), tumor microvessels, enzyme activity, pH, lipid composition, and myelin composition (Fig. 43.38C).
FIGURE 43.39. Triple-modality detection with nanoparticle probes of brain tumors in mice. Three weeks after orthotopic implantation with U87MG glioblastoma cells, the brain tumor-bearing mouse was injected intravenously with the nanoparticles, which localized in the tumor because of the EPR (enhanced permeability and retention) effect. Photoacoustic (PA), Raman, and MR images of the brain (skin and skull intact) were acquired before and 2, 3, and 4 hours post injection, respectively. Raman imaging was performed using a commercial Raman microscope (inVia, Renishaw) with a computer-controlled x,y-translation stage. A: Axial MR, PA, and Raman images. The post injection images of all three modalities demonstrated clear tumor visualization. The PA and Raman images were coregistered with the MR image, demonstrating good concordance within the tumor of the nanoparticle distribution among the three modalities. B: Volumetric rendering of MR images with the tumor segmented (red; top panel); overlay of the 3D PA image (green) over the MR image (middle panel); and overlay of the tumor-segmented MR and PA images (bottom panel) showing good colocalization of the PA signal within the tumor. C: Quantification of the imaging signals in the tumor shows a significant increase in the MRI, PA, and Raman signals after versus before the nanoparticle injection (“**” indicates p < 0.01, “***” indicates p < 0.001). Error bars represent the standard error of the mean. AU, arbitrary units. (From Kircher MF, de la Zerda A, Jokerst JV, et al. A brain tumor molecular imaging strategy using a new triple-modality MRI-photoacoustic-Raman nanoparticle. Nat Med. 2012;18:829–834. Copyright 2012. Reprinted by permission from Macmillan Publishers Ltd.)
Over the last decade, the biomedical applications of Raman spectroscopic imaging have grown dramatically. Because it is essentially a surface imaging technique (like most optical imaging techniques), Raman spectroscopic imaging has been applied mainly to examination of the skin and of pathologic specimens as well as to small animals. For example, Raman mapping has enabled accurate discrimination of malignant lesions from benign lesions and normal tissue in skin, brain, larynx, parathyroid, breast, and urinary bladder. Raman spectroscopic imaging has also been applied endoscopically in the colon. Recently, Kircher et al.133 have reported a clinically translatable molecular-imaging strategy using a novel triple-modality MRI-photoacoustic-Raman nanoparticle probe (Figs. 43.39 and 43.40).
FIGURE 43.40. Preclinical model of Raman-guided surgery. A: Following orthotopic implantation with GFP-transduced U87MG glioblastoma cells, the brain tumor-bearing mouse underwent craniotomy under general anesthesia. Quarters of the tumor were then sequentially removed (as illustrated in the photographs). B: Intraoperative Raman imaging was performed after each resection step, until, on visual inspection, the entire tumor had been removed. After the gross removal of the tumor, several small foci of Raman signal were found in the resection bed (outlined by the dashed white square). C: Subsequent immunohistochemical analysis of sections from these foci demonstrated an infiltrative pattern of the tumor in this location, forming finger-like protrusions extending into the surrounding brain tissue. CD11b (second panel from left) is a widely used microglial immunohistochemical marker. As shown in the Raman microscopy image (right panel), the Raman signal was observed within these protrusions, indicating the selective presence of MPRs in these protrusions. The white dashed box is not drawn to scale. The Raman signal is displayed in a linear red color table. (From Kircher MF, de la Zerda A, Jokerst JV, et al. A brain tumor molecular imaging strategy using a new triple-modality MRI-photoacoustic-Raman nanoparticle. Nat Med. 2012;18:829–834. Copyright 2012. Reprinted by permission from Macmillan Publishers Ltd.)
The last decade has featured remarkable technical advances in tumor imaging in both preclinical and clinical settings. Among the most notable of these are multimodality imaging and optical imaging. Multimodality (i.e., PET-CT and SPECT-CT) scanners have already dramatically impacted clinical practice, particularly in oncology. PET-MRI and SPECT-MRI scanners are still in their infancy, and their ultimate impact remains to be seen. The same may be said for the nascent field of intraoperative radionuclide imaging. The emergence of optical imaging, especially in small-animal models, has been dramatic, and translation of optical (i.e., fluorescence) imaging to intraoperative and endoscopic applications is well underway.
1. Mankoff DA. A definition of molecular imaging. J Nucl Med. 2007;48:18N, 21N.
2. Zanzonico P. Multimodality image registration and fusion. In: Dhawan AP, Huang HK, Kim DS, eds. Principles and Advanced Methods in Medical Imaging and Image Analysis. Singapore: World Scientific Publishing Co.; 2008:413–435.
3. Zanzonico PB, Nehmeh SA. Introduction to clinical and laboratory (small-animal) image registration and fusion. Conf Proc IEEE Eng Med Biol Soc. 2006; 1:1580–1583.
4. American College of Radiology–National Electrical Manufacturers Association (ACR-NEMA). ACR-NEMA Digital Imaging and Communications Standard. Washington, DC; 1985.
5. Mildenberger P, Eichelberg M, Martin E. Introduction to the DICOM standard. Eur Radiol. 2002;12:920–927.
6. Hajnal JV, Hill DLG, Hawkes DJ, eds. Medical Image Registration. Boca Raton, FL: CRC Press; 2001.
7. Hutton BF, Braun M, Thurfjell L, et al. Image registration: An essential tool for nuclear medicine. Eur J Nucl Med Mol Imaging. 2002;29:559–577.
8. Maintz JB, Viergever MA. A survey of medical image registration. Med Image Anal. 1998;2:1–36.
9. Wells WM 3rd, Viola P, Atsumi H, et al. Multi-modal volume registration by maximization of mutual information. Med Image Anal. 1996;1:35–51.
10. Townsend DW. A combined PET/CT scanner: The choices. J Nucl Med. 2001;42:533–534.
11. Townsend DW. Positron emission tomography/computed tomography. Semin Nucl Med. 2008;38:152–166.
12. Townsend DW, Beyer T. A combined PET/CT scanner: The path to true image fusion. Br J Radiol. 2002;75(Spec No):S24–S30.
13. Townsend DW, Beyer T, Blodgett TM. PET/CT scanners: A hardware approach to image fusion. Semin Nucl Med. 2003;33:193–204.
14. Townsend DW, Cherry SR. Combining anatomy and function: The path to true image fusion. Eur Radiol. 2001;11:1968–1974.
15. Schöder H, Erdi Y, Larson SM, et al. PET/CT: A new imaging technology in nuclear medicine. Eur J Nucl Med Mol Imaging. 2003;30(10):1419–1437.
16. Israel O, Goldsmith SJ. Hybrid SPECT/CT: Imaging in Clinical Practice. New York, NY: Taylor & Francis; 2006:244.
17. Seo Y, Mari C, Hasegawa BH. Technological development and advances in single-photon emission computed tomography/computed tomography. Semin Nucl Med. 2008;38:177–198.
18. Rowland DJ, Cherry SR. Small-animal preclinical nuclear medicine instrumentation and methodology. Semin Nucl Med. 2008;38:209–222.
19. Schöder H, Erdi YE, Larson SM, et al. PET/CT: A new imaging technology in nuclear medicine. Eur J Nucl Med Mol Imaging. 2003;30:1419–1437.
20. Winkelmann CT, Figueroa SD, Sieckman GL, et al. Non-invasive MicroCT imaging characterization and in vivo targeting of BB2 receptor expression of a PC-3 bone metastasis model. Mol Imaging Biol. 2012;14:667–675.
21. Pichler BJ, Kolb A, Nägele T, et al. PET/MRI: Paving the way for the next generation of clinical multimodality imaging applications. J Nucl Med. 2010;51:333–336.
22. Pichler BJ, Wehrl HF, Kolb A, et al. Positron emission tomography/magnetic resonance imaging: The next generation of multimodality imaging? Semin Nucl Med. 2008;38:199–208.
23. Judenhofer MS, Wehrl HF, Newport DF, et al. Simultaneous PET-MRI: A new approach for functional and morphological imaging. Nat Med. 2008;14:459–465.
24. Judenhofer MS, Cherry SR. Applications for preclinical PET/MRI. Semin Nucl Med. 2013;43:19–29.
25. Ng TS, Bading JR, Park R, et al. Quantitative, simultaneous PET/MRI for intratumoral imaging with an MRI-compatible PET scanner. J Nucl Med. 2012;53:1102–1109.
26. Peng BJ, Walton JH, Cherry SR, et al. Studies of the interactions of an MRI system with the shielding in a combined PET/MRI scanner. Phys Med Biol. 2010;55:265–280.
27. Pichler BJ, Judenhofer MS, Catana C, et al. Performance test of an LSO-APD detector in a 7-T MRI scanner for simultaneous PET/MRI. J Nucl Med. 2006;47:639–647.
28. Hofmann M, Pichler B, Schölkopf B, et al. Towards quantitative PET/MRI: A review of MR-based attenuation correction techniques. Eur J Nucl Med Mol Imaging. 2009;36(suppl 1):S93–S104.
29. Kolb A, Wehrl HF, Hofmann M, et al. Technical performance evaluation of a human brain PET/MRI system. Eur Radiol. 2012;22:1776–1788.
30. Pichler BJ, Judenhofer MS, Wehrl HF. PET/MRI hybrid imaging: Devices and initial results. Eur Radiol. 2008;18:1077–1086.
31. Sauter AW, Wehrl HF, Kolb A, et al. Combined PET/MRI: One step further in multimodality imaging. Trends Mol Med. 2010;16:508–515.
32. Goetz C, Breton E, Choquet P, et al. SPECT low-field MRI system for small-animal imaging. J Nucl Med. 2008;49:88–93.
33. Catana C, Wu Y, Judenhofer MS, et al. Simultaneous acquisition of multislice PET and MR images: Initial results with a MR-compatible PET scanner. J Nucl Med. 2006;47:1968–1976.
34. Pichler BJ, Judenhofer MS, Pfannenberg C. Multimodal imaging approaches: PET/CT and PET/MRI. Handb Exp Pharmacol. 2008;(185 Pt 1):109–132.
35. Chaudhari AJ, Joshi AA, Wu Y, et al. Spatial distortion correction and crystal identification for MRI-compatible position-sensitive avalanche photodiode-based PET scanners. IEEE Trans Nucl Sci. 2009;56:549–556.
36. Lucas AJ, Hawkes RC, Ansorge RE, et al. Development of a combined microPET-MR system. Technol Cancer Res Treat. 2006;5:337–341.
37. Gilbert KM, Handler WB, Scholl TJ, et al. Design of field-cycled magnetic resonance systems for small animal imaging. Phys Med Biol. 2006;51:2825–2841.
38. Handler WB, Gilbert KM, Peng H, et al. Simulation of scattering and attenuation of 511 keV photons in a combined PET/field-cycled MRI system. Phys Med Biol. 2006;51:2479–2491.
39. Hamamura MJ, Ha S, Roeck WW, et al. Development of an MR-compatible SPECT system (MRSPECT) for simultaneous data acquisition. Phys Med Biol. 2010;55:1563–1575.
40. Hamamura MJ, Ha S, Roeck WW, et al. Initial Investigation of preclinical integrated SPECT and MR imaging. Technol Cancer Res Treat. 2010;9:21–28.
41. Ha S, Hamamura MJ, Roeck WW, et al. Development of a new RF coil and gamma-ray radiation shielding assembly for improved MR image quality in SPECT/MRI. Phys Med Biol. 2010;55:2495–2504.
42. Hamamura MJ, Roeck WW, Ha S, et al. Simultaneous in vivo dynamic contrast-enhanced magnetic resonance and scintigraphic imaging. Phys Med Biol. 2011;56:N63–N69.
43. Marshall HR, Prato FS, Deans L, et al. Variable lung density consideration in attenuation correction of whole-body PET/MRI. J Nucl Med. 2012;53:977–984.
44. Marshall HR, Stodilka RZ, Theberge J, et al. A comparison of MR-based attenuation correction in PET versus SPECT. Phys Med Biol. 2011;56:4613–4629.
45. Kiessling F, Pichler BJ. Small Animal Imaging: Basics and Practical Guide. Berlin: Springer-Verlag; 2011:597.
46. Pomper MG. Translational molecular imaging for cancer. Cancer Imaging. 2005;5(Spec No A):S16–S26.
47. Serganova I, Blasberg RG. Multi-modality molecular imaging of tumors. Hematol Oncol Clin North Am. 2006;20:1215–1248.
48. Serganova I, Mayer-Kukuck P, Huang R, et al. Molecular imaging: Reporter gene imaging. Handb Exp Pharmacol. 2008;(185 Pt 2):167–223.
49. Vallabhajosula S. Molecular Imaging: Radiopharmaceuticals for PET and SPECT. Berlin: Springer-Verlag; 2009.
50. Gambhir SS, Yaghoubi SS. Cambridge molecular imaging series. In: Cherry SR, Weber WA, van Bruggen N, eds. Molecular Imaging with Reporter Genes. New York, NY: Cambridge University Press; 2010.
51. Serganova I, Blasberg R. Reporter gene imaging: Potential impact on therapy. Nucl Med Biol. 2005;32:763–780.
52. Serganova I, Ponomarev V, Blasberg R. Human reporter genes: Potential use in clinical studies. Nucl Med Biol. 2007;34:791–807.
53. Sweet WH. The use of nuclear disintegration in the diagnosis and treatment of brain tumors. N Engl J Med. 1951;245:875–878.
54. Cody HS III. Sentinel Lymph Node Biopsy. London: Martin Dunitz; 2002.
55. Mariani G, Giuliano AE, Strauss HW. Radioguided Surgery: A Comprehensive Team Approach. New York, NY: Springer; 2008.
56. Povoski SP, Neff RL, Mojzisik CM, et al. A comprehensive overview of radioguided surgery using gamma detection probe technology. World J Surg Oncol. 2009;7:11.
57. Gulec SA, Moffat FL, Carroll RG. The expanding clinical role for intraoperative gamma probes. In: Freeman LM, ed. Nuclear Medicine Annual 1997. Philadelphia, PA: Lippincott-Raven Publishers; 1997:209–237.
58. Woolfenden JM, Barber HB. Intraoperative probes. In: Wagner HN, Szabo Z, Buchanan JW, eds. Principles of Nuclear Medicine. 2nd ed. Philadelphia, PA: WB Saunders; 1995:292–297.
59. Barber HB, Barrett HH, Woolfenden JM, et al. Comparison of in vivo scintillation probes and gamma cameras for detection of small, deep tumours. Phys Med Biol. 1989;34:727–739.
60. Daghighian F, Mazziotta JC, Hoffman EJ, et al. Intraoperative beta probe: A device for detecting tissue labeled with positron or electron emitting isotopes during surgery. Med Phys. 1994;21:153–157.
61. Raylman RR, Fisher SJ, Brown RS, et al. Fluorine-18-fluorodeoxyglucose-guided breast cancer surgery with a positron-sensitive probe: Validation in preclinical studies. J Nucl Med. 1995;36:1869–1874.
62. Raylman RR, Wahl RL. A fiber-optically coupled positron-sensitive surgical probe. J Nucl Med. 1994;35:909–913.
63. Heller S, Zanzonico P. Nuclear probes and intraoperative gamma cameras. Semin Nucl Med. 2011;41:166–181.
64. Zanzonico P. The intraoperative gamma probe: Design, safety, and operation. In: Cody HS III, ed. Sentinel Lymph Node Biopsy. London: Martin Dunitz; 2008:45–68.
65. Zanzonico P, Heller S. The intraoperative gamma probe: Basic principles and choices available. Semin Nucl Med. 2000;30:33–48.
66. Essner R, Daghighian F, Giuliano AE. Advances in FDG PET probes in surgical oncology. Cancer J. 2002;8:100–108.
67. Essner R, Hsueh EC, Haigh PI, et al. Application of an [(18)F]fluorodeoxyglucose-sensitive probe for the intraoperative detection of malignancy. J Surg Res. 2001;96:120–126.
68. Strong VE, Galanis CJ, Riedl CC, et al. Portable PET probes are a novel tool for intraoperative localization of tumor deposits. Ann Surg Innov Res. 2009;3:2.
69. Strong VE, Humm J, Russo P, et al. A novel method to localize antibody-targeted cancer deposits intraoperatively using handheld PET beta and gamma probes. Surg Endosc. 2008;22:386–391.
70. Wasselle J, Becker J, Cruse W, et al. Localization of malignant melanoma using monoclonal antibodies. Arch Surg. 1991;126:481–484.
71. Schneebaum S, Essner R, Even-Sapir E. Positron-sensitive probes. In: Mariani G, Giuliano AE, Strauss HW, eds. Radioguided Surgery: A Comprehensive Team Approach. New York, NY: Springer; 2008:23–28.
72. Raylman RR. Performance of a dual, solid-state intraoperative probe system with 18F, 99mTc, and (111)In. J Nucl Med. 2001;42:352–360.
73. Reinhardt H, Stula D, Gratzl O. Topographic studies with 32P tumor marker during operations of brain tumors. Eur Surg Res. 1985;17:333–340.
74. Newman LA. Current issues in the surgical management of breast cancer: A review of abstracts from the 2002 San Antonio Breast Cancer Symposium, the 2003 Society of Surgical Oncology annual meeting, and the 2003 American Society of Clinical Oncology meeting. Breast J. 2004;10(suppl 1):S22–S25.
75. Goyal A, Newcombe RG, Mansel RE, et al. Role of routine preoperative lymphoscintigraphy in sentinel node biopsy for breast cancer. Eur J Cancer. 2005;41:238–243.
76. Tafra L, McMasters KM, Whitworth P, et al. Credentialing issues with sentinel lymph node staging for breast cancer. Am J Surg. 2000;180:268–273.
77. Mathelin C, Salvador S, Bekaert V, et al. A new intraoperative gamma camera for the sentinel lymph node procedure in breast cancer. Anticancer Res. 2008;28:2859–2864.
78. Mathelin C, Salvador S, Croce S, et al. Optimization of sentinel lymph node biopsy in breast cancer using an operative gamma camera. World J Surg Oncol. 2007;5:132.
79. Mathelin C, Salvador S, Huss D, et al. Precise localization of sentinel lymph nodes and estimation of their depth using a prototype intraoperative mini gamma-camera in patients with breast cancer. J Nucl Med. 2007;48:623–629.
80. Britten AJ. A method to evaluate intra-operative gamma probes for sentinel lymph node localisation. Eur J Nucl Med. 1999;26:76–83.
81. Aarsvold JN, Alazraki NP. Update on detection of sentinel lymph nodes in patients with breast cancer. Semin Nucl Med. 2005;35:116–128.
82. Hoffman EJ, Tornai MP, Levin CS. Gamma and beta intra-operative imaging probes. Nucl Instrum Methods Phys Res A. 1997;392:324–329.
83. Scopinaro F, Soluri A. Gamma ray imaging probes for radioguided surgery and site-directed biopsy. In: Mariani G, Giuliano AE, Strauss HW, eds. Radioguided Surgery: A Comprehensive Team Approach. New York, NY: Springer; 2008:29–36.
84. Abe A, Takahashi N, Lee J, et al. Performance evaluation of a hand-held, semiconductor (CdZnTe)-based gamma camera. Eur J Nucl Med Mol Imaging. 2003;30:805–811.
85. Pitre S, Ménard L, Ricard M, et al. A hand-held imaging probe for radio-guided surgery: Physical performance and preliminary clinical experience. Eur J Nucl Med Mol Imaging. 2003;30:339–343.
86. Oda T, Hayama K, Tsuchimochi M. [Evaluation of small semiconductor gamma camera–simulation of sentinel lymph node biopsy by using a trial product of clinical type gamma camera]. Kaku Igaku. 2009;46:1–12.
87. Tsuchimochi M, Hayama K, Oda T, et al. Evaluation of the efficacy of a small CdTe gamma-camera for sentinel lymph node biopsy. J Nucl Med. 2008;49:956–962.
88. Tsuchimochi M, Sakahara H, Hayama K, et al. A prototype small CdTe gamma camera for radioguided surgery and other imaging applications. Eur J Nucl Med Mol Imaging. 2003;30:1605–1614.
89. Sánchez F, Benlloch JM, Escat B, et al. Design and tests of a portable mini gamma camera. Med Phys. 2004;31:1384–1397.
90. Sánchez F, Fernández MM, Giménez M, et al. Performance tests of two portable mini gamma cameras for medical applications. Med Phys. 2006;33:4210–4220.
91. Vermeeren L, Meinhardt W, Bex A, et al. Paraaortic sentinel lymph nodes: Toward optimal detection and intraoperative localization using SPECT/CT and intraoperative real-time imaging. J Nucl Med. 2010;51:376–382.
92. Vermeeren L, Valdés Olmos RA, Klop WM, et al. A portable gamma-camera for intraoperative detection of sentinel nodes in the head and neck region. J Nucl Med. 2010;51:700–703.
93. Vermeeren L, Valdés Olmos RA, Meinhardt W, et al. Intraoperative imaging for sentinel node identification in prostate carcinoma: Its use in combination with other techniques. J Nucl Med. 2011;52:741–744.
94. Ortega J, Ferrer-Rebolleda J, Cassinello N, et al. Potential role of a new hand-held miniature gamma camera in performing minimally invasive parathyroidectomy. Eur J Nucl Med Mol Imaging. 2007;34:165–169.
95. Contag PR, Olomu IN, Stevenson DK, et al. Bioluminescent indicators in living mammals. Nat Med. 1998;4:245–247.
96. Ntziachristos V, Ripoll J, Wang LV, et al. Looking and listening to light: The evolution of whole-body photonic imaging. Nat Biotechnol. 2005;23:313–320.
97. Taruttis A, Ntziachristos V. Translational optical imaging. AJR Am J Roentgenol. 2012;199:263–271.
98. Holland EC. Gliomagenesis: Genetic alterations and mouse models. Nat Rev Genet. 2001;2:120–129.
99. Uhrbom L, Holland EC. Modeling gliomagenesis with somatic cell gene transfer using retroviral vectors. J Neurooncol. 2001;53:297–305.
100. Uhrbom L, Nerio E, Holland EC. Dissecting tumor maintenance requirements using bioluminescence imaging of cell proliferation in a mouse glioma model. Nat Med. 2004;10:1257–1260.
101. Brentjens RJ, Latouche JB, Santos E, et al. Eradication of systemic B-cell tumors by genetically targeted human T lymphocytes co-stimulated by CD80 and interleukin-15. Nat Med. 2003;9:279–286.
102. Sadelain M, Rivière I, Brentjens R. Targeting tumours with genetically enhanced T lymphocytes. Nat Rev Cancer. 2003;3:35–45.
103. Gade TP, Hassen W, Santos E, et al. Targeted elimination of prostate cancer by genetically directed human T lymphocytes. Cancer Res. 2005;65:9080–9088.
104. Witjes JA, Douglass J. The role of hexaminolevulinate fluorescence cystoscopy in bladder cancer. Nat Clin Pract Urol. 2007;4:542–549.
105. Stummer W, Novotny A, Stepp H, et al. Fluorescence-guided resection of glioblastoma multiforme by using 5-aminolevulinic acid-induced porphyrins: A prospective study in 52 consecutive patients. J Neurosurg. 2000;93:1003–1013.
106. Ashitate Y, Stockdale A, Choi HS, et al. Real-time simultaneous near-infrared fluorescence imaging of bile duct and arterial anatomy. J Surg Res. 2012;176:7–13.
107. Ashitate Y, Tanaka E, Stockdale A, et al. Near-infrared fluorescence imaging of thoracic duct anatomy and function in open surgery and video-assisted thoracic surgery. J Thorac Cardiovasc Surg. 2011;142:31–38.e1–e2.
108. Frangioni JV. In vivo near-infrared fluorescence imaging. Curr Opin Chem Biol. 2003;7:626–634.
109. Frangioni JV. New technologies for human cancer imaging. J Clin Oncol. 2008;26:4012–4021.
110. Hutteman M, Choi HS, Mieog JS, et al. Clinical translation of ex vivo sentinel lymph node mapping for colorectal cancer using invisible near-infrared fluorescence light. Ann Surg Oncol. 2011;18:1006–1014.
111. Lee BT, Hutteman M, Gioux S, et al. The FLARE intraoperative near-infrared fluorescence imaging system: A first-in-human clinical trial in perforator flap breast reconstruction. Plast Reconstr Surg. 2010;126:1472–1481.
112. Troyan SL, Kianzad V, Gibbs-Strauss SL, et al. The FLARE intraoperative near-infrared fluorescence imaging system: A first-in-human clinical trial in breast cancer sentinel lymph node mapping. Ann Surg Oncol. 2009;16:2943–2952.
113. Kosaka N, Mitsunaga M, Longmire MR, et al. Near infrared fluorescence-guided real-time endoscopic detection of peritoneal ovarian cancer nodules using intravenously injected indocyanine green. Int J Cancer. 2011;129:1671–1677.
114. Beattie BJ, Thorek DL, Schmidtlein CR, et al. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides. PLoS One. 2012;7:e31402.
115. Cerenkov PA. Visible emission of clean liquids by action of gamma-radiation. C R Dokl Akad Nauk SSSR. 1934;2:451–454.
116. Dothager RS, Goiffon RJ, Jackson E, et al. Cerenkov radiation energy transfer (CRET) imaging: A novel method for optical imaging of PET isotopes in biological systems. PLoS One. 2010;5:e13300.
117. Holland JP, Normand G, Ruggiero A, et al. Intraoperative imaging of positron emission tomographic radiotracers using Cerenkov luminescence emissions. Mol Imaging. 2011;10:177–186.
118. Li C, Mitchell GS, Cherry SR. Cerenkov luminescence tomography for small-animal imaging. Opt Lett. 2010;35:1109–1111.
119. Liu H, Ren G, Miao Z, et al. Molecular optical imaging with radioactive probes. PLoS One. 2010;5:e9470.
120. Lucignani G. Čerenkov radioactive optical imaging: A promising new strategy. Eur J Nucl Med Mol Imaging. 2011;38:592–595.
121. Robertson R, Germanos MS, Li C, et al. Optical imaging of Cerenkov light generation from positron-emitting radiotracers. Phys Med Biol. 2009;54:N355–N365.
122. Ruggiero A, Holland JP, Lewis JS, et al. Cerenkov luminescence imaging of medical isotopes. J Nucl Med. 2010;51:1123–1130.
123. Thorek DL, Abou DS, Beattie BJ, et al. Positron lymphography: Multimodal, high-resolution, dynamic mapping and resection of lymph nodes after intradermal injection of 18F-FDG. J Nucl Med. 2012;53:1438–1445.
124. Xu MH, Wang LHV. Photoacoustic imaging in biomedicine. Rev Sci Instrum. 2006;77:041101.
125. Herzog E, Taruttis A, Beziere N, et al. Optical imaging of cancer heterogeneity with multispectral optoacoustic tomography. Radiology. 2012;263:461–468.
126. Ku G, Fornage BD, Jin X, et al. Thermoacoustic and photoacoustic tomography of thick biological tissues toward breast imaging. Technol Cancer Res Treat. 2005;4:559–566.
127. Kruger RA, Kiser WL Jr, Reinecke DR, et al. Thermoacoustic computed tomography using a conventional linear transducer array. Med Phys. 2003;30:856–860.
128. Zeng Y, Xing D, Wang Y, et al. Photoacoustic and ultrasonic coimage with a linear transducer array. Opt Lett. 2004;29:1760–1762.
129. Hielscher AH. Optical tomographic imaging of small animals. Curr Opin Biotechnol. 2005;16:79–88.
130. Jiang H. Diffuse Optical Tomography: Principles and Applications. Boca Raton, FL: CRC Press; 2010.
131. Huang D, Swanson EA, Lin CP, et al. Optical coherence tomography. Science. 1991;254:1178–1181.
132. Zavaleta CL, Kircher MF, Gambhir SS. Raman’s “effect” on molecular imaging. J Nucl Med. 2011;52:1839–1844.
133. Kircher MF, de la Zerda A, Jokerst JV, et al. A brain tumor molecular imaging strategy using a new triple-modality MRI-photoacoustic-Raman nanoparticle. Nat Med. 2012;18:829–834.
aThe analogy between signal entropy, used in the context of mutual information, and thermodynamic entropy thus becomes clear.
bIn information theory, there are actually a number of different definitions of mutual information.
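The mutual-information registration metric mentioned in these footnotes can be made concrete with a short sketch. The following Python function is not from the chapter; it is a minimal illustration, assuming the common joint-histogram formulation I(A;B) = H(A) + H(B) − H(A,B), where H is the Shannon (signal) entropy. The function name and bin count are illustrative choices.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two intensity images (in bits).

    Estimated from the joint intensity histogram:
        I(A;B) = H(A) + H(B) - H(A,B),
    where H is the Shannon entropy. Higher values indicate that one
    image's intensities better predict the other's, i.e., better alignment.
    """
    # Joint histogram of paired voxel intensities, normalized to a
    # joint probability distribution.
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1)   # marginal distribution of image A
    p_b = p_ab.sum(axis=0)   # marginal distribution of image B

    def entropy(p):
        p = p[p > 0]         # convention: 0 * log 0 = 0
        return -np.sum(p * np.log2(p))

    return entropy(p_a) + entropy(p_b) - entropy(p_ab.ravel())
```

In a registration loop, one image would be resampled under a series of candidate rigid or deformable transforms and the transform yielding the maximum mutual information retained; because the metric uses only the joint intensity statistics, it works across modalities (e.g., PET vs. CT) without assuming any linear intensity relationship.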
cWhen radiofrequency (RF) pulses are used, the technology is termed “thermoacoustic imaging.”