Altered Egos: How the Brain Creates the Self, 1st Edition


Chapter 1

  1. I refer to these conditions elsewhere as perturbations of the self (Feinberg, 1997c). The clinical cases I describe in this book were collected over a fifteen-year period, either during my training in neurology and psychiatry or later in my clinical practice. All the patients are real, but their identities have been concealed. Some of the clinical disorders I discuss here have been considered by neurologists in recent books. In his Descartes' Error (1994) and more recently in The Feeling of What Happens (1999), Antonio Damasio discusses some of the conditions I describe in this book. In the latter volume, Damasio offers his own theory of the neurobiology of the self and considers at length the role emotion plays in it. Damasio proposes that there are three levels of the self: the proto-self, an unconscious neural representation of the current state of the organism; the core-self, a conscious “second-order nonverbal account that occurs whenever an object modifies the proto-self”; and an autobiographical self, a conscious aspect of the self that is “based on autobiographical memory which is constituted by implicit memories of multiple instances of individual experience of the past and of the anticipated future.” The autobiographical self is “based on permanent but dispositional records of core-self experiences.” The three levels of the self proposed by Damasio appear to be hierarchically arranged, with the proto-self the most basic and the autobiographical self the most advanced or complex aspect of the self. I do not explore in this book many of the brain regions, such as the brain stem, reticular formation, hypothalamus, and thalamus, that are necessary for the existence of basic consciousness, but Damasio provides an excellent and detailed discussion of these neuroanatomical structures, and the reader is referred to his work for a complete neuroanatomical discussion of these regions.
In the present work, while I do not divide the self into discrete neuroanatomical levels, I do attempt to explain how different levels of the neuraxis create the unified self, and I therefore view Damasio's approach and my own as complementary. For a helpful summary of and commentary on Damasio's The Feeling of What Happens, see the recent review by Watt (2000). Richard Restak provides an insightful discussion of the neurology of the self and the problem of mental unity in The Modular Brain (1994). In Phantoms in the Brain (1998), Ramachandran and Blakeslee also analyze some of the neurological conditions that affect the self. In particular, see the chapters on anosognosia and Capgras syndrome for a consideration of some of the possible mechanisms of these disorders.


  2. McGinn, 1999.
  3. In order to understand how to read the scan, imagine that John is lying down, facing the ceiling, and you are standing at his feet, looking at his head. This vantage point puts the right side of his head on your left side. The pictures resulting from the scan can be thought of as horizontal slices through John's head. It is as if you cut an orange, slice by slice, and, after each slice, took a picture. In this book, the area of brain damage is cross-hatched. Throughout this book I refer to MRI and CAT scans. Although MRI scans and CAT scans are made using different technologies, both provide a picture of the brain's structure. For our purposes, the description of how to read John's MRI scan can apply to the reading of the CAT scans as well.
  4. Schilder and Stengel (1928; 1931) provided the first clinical descriptions of pain asymbolia, which they initially called “Schmerzasymbolie.” In the most severe cases of pain asymbolia, the patient does not react to threats of physical harm of any sort. Persons with pain asymbolia understand on an “intellectual” level that something could hurt or injure them. The problem is that the patient does not show any emotional reaction to dangerous stimuli. For example, the patient understands that fire can cause burns, but will not flinch or withdraw if a match is held directly in front of the eyes. While the majority of cases with pain asymbolia have damage to the left hemisphere, some cases have been recorded with damage to the right hemisphere (Berthier et al., 1988). Berthier and coworkers studied six cases with the condition, and all six patients had damage to an area of the brain called the insular cortex. They argued that the damage to this area of the brain interrupted the connections between the sensory cortex, an area of the brain responsible for the sense of touch, and the limbic system, a brain area that is responsible for emotions.
  5. I am not a philosopher, but I address some philosophical issues in this book to determine how my neurological model fits with some philosophical positions. To my surprise and pleasure, some philosophical work fits quite nicely with the model of the brain and mind that I propose. This convergence of neurology and philosophy is encouraging, because a correct theory of the self and the mind will need to satisfy both scientists and philosophers. I address those philosophical viewpoints that pertain most directly to brain functioning. For comprehensive and accessible reviews of philosophical and psychological ideas about the self, see Jerome Levin (1992) and Wilkes (1988). For in-depth philosophical discussions on the philosophy of self, see Cassam (1994; 1997). For recent discussions of the philosophy of mind and consciousness, see Searle (1984; 1992); Nagel (1979; 1986; 1995); P. S. Churchland (1986); Dennett (1991); Rosenthal (1991); Flanagan (1991; 1992); P. M. Churchland (1993; 1996); Chalmers (1996); McGinn (1997; 1999); Tye (1995); Block, Flanagan, and Güzeldere (1997).
  6. William James, 1890; reprinted 1983, p. 345.
  7. William James, 1892; reprinted 1985, p. 70.
  8. Cited in Eccles, 1994, p. 1.
  9. Horgan, 1999, p. 23.



Chapter 2

  1. Classical descriptions of asomatognosic patients can be found in Macdonald Critchley's monumental work The Parietal Lobes (1953) and also in Gerstmann (1942). An analysis of the neuroanatomy of asomatognosia can be found in Feinberg et al., 1990.
  2. For a review of the neglect syndrome and its clinical manifestations, see Heilman et al., 1993, and Heilman et al., 1997; Ramachandran and Blakeslee, 1998.
  3. This case was cited in Nielsen, 1938.
  4. Nielsen, 1938.
  5. Spillane, 1942.
  6. Ullman, 1960.
  7. We examined a series of twelve patients with asomatognosia (Feinberg et al., 1990). All twelve had severe left hemispatial neglect. However, there were many patients in this study with neglect who did not demonstrate the syndrome of asomatognosia. Therefore we may conclude that neglect is a necessary but not sufficient condition for the occurrence of asomatognosia.
  8. Gainotti et al., 1972; Albert, 1973.
  9. Heilman and Van Den Abell, 1979.
  10. Neurologist Marsel Mesulam (1985) suggests: “Patients with unilateral neglect . . . behave not only as if nothing were actually happening in the left hemispace but also as if nothing of any importance could be expected to occur there” (p. 149).
  11. For other good examples of the drawings of patients with neglect, see Critchley, 1953; Mesulam, 1985.
  12. Ullman, 1960.
  13. Critchley, 1955.
  14. Weinstein and Kahn, 1955; Weinstein and Cole, 1964; Weinstein et al., 1964.
  15. Weinstein and Friedland, 1977; Weinstein, 1991.
  16. Gilliatt and Pratt, 1952.
  17. Critchley, 1974.
  18. Halligan, Marshall, and Wade, 1993.
  19. Halligan, Marshall, and Wade, 1995.
  20. Critchley, 1974.
  21. See the classic work Denial of Illness by Weinstein and Kahn (1955). Excellent works that contain general discussions of psychological denial are Denial and Defense in the Therapeutic Situation by Theodore Dorpat (1985) and The Denial of Stress (1983), edited by Breznitz. Lewis (1991) provides a very helpful review of psychological denial as it pertains to neurological illness.
  22. On the topic of the relationship between confabulation and anosognosia for hemiplegia, see Feinberg et al. (1994); Feinberg (1997a); Feinberg and Roane (1997a). Critchley (1953) observed that when patients who were anosognosic for their hemiplegic arms were told to raise their paralyzed limbs, these patients would often insist that the arm really had moved or had moved “less quickly” than the normal arm. V. S. Ramachandran (Ramachandran and Blakeslee, 1998) described a patient who claimed, when Dr. Ramachandran stood before her, that she could touch his nose. The same patient claimed that she could clap with both her hands. We recently found a high degree of correlation between anosognosia, asomatognosia, and the tendency to confabulate limb movements (Feinberg, Roane, and Ali, 2000).

  23. Babinski, 1914; 1918.
  24. Anton, 1899. Anton's most famous case was a fifty-six-year-old seamstress named Ursula Mercz, who was blind yet denied her visual loss. For an excellent historical review and translation of some of Anton's work, see Förstl et al. (1993). See also McGlynn and Schacter (1989).

Chapter 3

  1. The terms “ego-close” and “ego-distant” can be found in the writings of Paul Schilder, 1965, pp. 298–99.
  2. Capgras and Reboul-Lachaux, 1923.
  3. Gluckman, 1968; Christodoulou, 1986a; Luauté, 1986.
  4. There are thousands of articles on the delusional misidentification syndromes, but two volumes that provide a comprehensive introduction to the subject are Ellis, Luauté, and Retterstol (eds.) 1994; and Christodoulou (ed.), 1986a.
  5. Merrin and Silberfarb, 1976; Todd, Dewhurst, and Wallis, 1981; Christodoulou, 1986b; 1991; Kimura, 1986; Signer, 1987; Förstl, Almeida, and Owen et al., 1991; Signer, 1992; Spier, 1992; Mendez, Martin, Smyth et al., 1992; Fleminger and Burns, 1993; Feinberg and Roane, 1997b.
  6. Larrivé and Jasienski, 1931.
  7. Davidson, 1941.
  8. Anderson, 1988; Anderson and Williams, 1994.
  9. Freud, 1909/1959.
  10. Rank, 1952.
  11. Freud, 1909/1959.
  12. Dostoyevsky, 1971.
  13. Ellis, Whitley, and Luauté, 1994. This article provides a valuable translation of the original article by Capgras and Reboul-Lachaux, with enlightening commentary from the authors. (See note 14 below.)
  14. Courbon and Fail, 1927. Ellis, Whitley, and Luauté (1994) contains a translation of this article as well.
  15. Burnham, 1956.
  16. Feinberg et al., 1999.
  17. Pick, 1903.
  18. Patterson and Zangwill, 1944.
  19. Levin, 1951.
  20. Levin, 1933.
  21. Levin, 1953.
  22. Levin, 1945.
  23. Levin, 1968.
  24. Feinberg, 1997c; Feinberg and Roane, 1997a; 1997b.



  25. Benson et al., 1976.
  26. Ruff and Volpe, 1981.
  27. Feinberg and Shapiro, 1989. See also Burgess and Baxter et al., 1996.
  28. Staton, Brumback, and Wilson, 1982.
  29. Alexander et al., 1979. For a review of the limbic system, see LeDoux, 1996; also Ramachandran and Blakeslee, 1998.
  30. Alexander et al., 1979.
  31. Staton, Brumback, and Wilson, 1982.
  32. Ellis and Young, 1990.
  33. Landis et al., 1986.
  34. Van Lancker and Klein, 1990; Van Lancker, 1991.

Chapter 4

  1. Feinberg, 1997a; Feinberg and Roane, 1997b.
  2. Berlyne, 1972.
  3. Koppleman, 1980. For a review of the topic of confabulation, see also Moscovitch, 1995.
  4. Stuss et al., 1978.
  5. For an accessible yet comprehensive guide to memory research, see Schacter, 1996.
  6. Van der Horst, 1932.
  7. Victor, Adams, and Collins, 1989.
  8. Bonhoeffer, 1901; 1904.
  9. Eslinger, 1998. Damasio (1999) discusses at length the role that autobiographical memory plays in the creation of the self. See also Rubin (1986).
  10. Cited by Schacter, 1996 from Tulving, 1983.
  11. Eslinger, 1998.
  12. Weinstein and Kahn, 1955.
  13. Weinstein, Kahn, and Morris, 1956.
  14. Weinstein and Lyerly, 1968.
  15. Baddeley and Wilson, 1986.
  16. Weinstein, Kahn, and Malitz, 1956.
  17. In Weinstein and Kahn, 1955, p. 105, the authors describe their use of the “king story” as a vehicle for bringing out denial.
  18. Weinstein, Kahn, and Morris, 1956.
  19. Ibid.
  20. Stuss et al., 1978.
  21. Kapur and Coughlan, 1980.
  22. Fischer et al., 1995.
  23. DeLuca and Cicerone, 1991; for a review on the subject of the sequelae of frontal aneurysm rupture, see DeLuca and Diamond, 1995.
  24. Stuss et al., 1978.
  25. See also Stuss, 1991, for additional thoughts on this issue.
  26. Gazzaniga, 1985.



Chapter 5

  1. Feinberg and Shapiro, 1989.
  2. Gluckman, 1968. Foley and Breslau (1982) briefly described seven cases with mirror misidentification. As in Gluckman's case, paranoid attitudes toward the mirror image were noted in the patients of Foley and Breslau. Mirror misidentification has frequently been reported to occur as part of the dementia of Alzheimer's disease. (See Rubin et al., 1988; Burns et al., 1990; and Mendez et al., 1992.) Spangenberg et al. (1998) reported a recent case of mirror misidentification that may have been due to vascular disease.
  3. William James, 1892; reprinted 1985, p. 43.
  4. Gallup, 1970; 1977a; 1977b; 1982; Gallup et al., 1995. See also Povinelli et al., 1993; 1997. For a spirited debate on the issue of self-awareness in nonhuman primates, see Heyes, 1998.
  5. For a review of some aspects of the double in religion, mythology, and folklore, see Todd and Dewhurst, 1955, 1962; and Christodoulou, 1986c.
  6. Tymms (1949) provides a fascinating and extensive review of the theme of the double in world literature, including the double in Richter's writings.
  7. Cited by Tymms, 1949, p. 30.
  8. Todd and Dewhurst, 1955.
  9. Lhermitte, 1951; Damas Mora et al., 1980.
  10. Todd and Dewhurst, 1955.
  11. Ibid.
  12. Devinsky et al., 1989.
  13. Todd and Dewhurst, 1955.
  14. Lippman, 1953.
  15. Todd and Dewhurst, 1962.
  16. Barth, 1890, cited by Todd and Dewhurst, 1962.
  17. Rank, 1971, p. 83.
  18. Ibid., p. 84.
  19. Svendsen, 1934.
  20. Sperling, 1954.
  21. Nagera, 1969.
  22. Murphy, 1962.
  23. Fraiberg, 1959.
  24. Critchley, 1979.
  25. The belief that a stranger is living in the house has been called the “phantom boarder syndrome.” (See Rowan, 1984; Rubin et al., 1988; Burns et al., 1990; and Malloy et al., 1992.)

Chapter 6

  1. Innocenti, 1986; Trevarthen, 1991.
  2. Bogen (1993) provides an excellent review of the history of corpus callosotomy and the effects of corpus callosotomy on human behavior.



  3. For a review of this work, see Gazzaniga (1985); Gazzaniga and Volpe (1981); Gazzaniga (1970); and Gazzaniga and LeDoux (1978).
  4. Wigan, 1844a; 1844b. For a modern version of this hypothesis, see Puccetti, 1981.
  5. Harrington (1987) does a remarkable job reviewing and analyzing the history of work on the nature of the two cerebral hemispheres. This is a must-read for anyone interested in the history of neurology and brain research.
  6. Goldstein, 1908, pp. 69–70; cited by Harrington, 1987.
  7. Feinberg et al., 1992; Feinberg, Roane, and Cohen, 1998.
  8. Feinberg et al., 1992; Feinberg, 1997c. There have been several neuroanatomical studies of the alien hand syndrome, and controversy exists regarding the existence of clinical subtypes of the syndrome and its underlying anatomy. For additional review of these issues, see Della Sala, 1991; Gasquoine, 1993; Tanaka et al., 1996; Chan and Liu, 1999.
  9. Gazzaniga, 1970, p. 107.
  10. Geschwind et al., 1995.
  11. Ibid.
  12. Goldberg and Bloom, 1990. This report also includes a discussion of possible mechanisms of alien hand syndrome.
  13. Ibid.
  14. Bogen, 1993.
  15. Stuss and Benson, 1986, p. 88.
  16. Sperry, Gazzaniga, and Bogen, 1969; Sperry, 1984.
  17. Sperry, 1990; Trevarthen, 1991.
  18. Bogen, 1993; Sperry, 1990; Trevarthen, 1991.
  19. Examples and reviews of this fascinating research can be found in Levy, Trevarthen, and Sperry, 1972; Levy and Trevarthen, 1976; Levy, 1977, 1990; and Sperry, Zaidel, and Zaidel, 1979.
  20. Trevarthen, 1974.
  21. Levy, 1990, p. 235.
  22. Ibid.
  23. Trevarthen, 1991.
  24. Feinberg, 1997a; Feinberg and Roane, 1997a.

Chapter 7

  1. Kaas, 1989.
  2. Van Essen, Anderson, and Felleman, 1992.
  3. Kaas, 1993.
  4. See, for example, Van Essen et al., 1992.
  5. In philosophy, the problem of explaining the distributedness of the brain and the subjectively experienced seamlessness of consciousness is known as the “grain” problem, argument, or objection. The use of the term “grain” with reference to the philosophy of consciousness originates with Sellars (1963). The “grain argument” is a counterargument to Feigl's (1967) “identity argument.” The identity argument asserts that mental states are identical with neural states; that is, there is nothing about mental states that needs to be explained beyond the understanding of the neural states that create them. The “grain argument” points out that subjective experience is homogeneous, unified, and without grain (Teller, 1992), in contrast to the brain, which is a “‘gappy,’ heterogeneous, discontinuous conglomerate of spatially discrete events” (Meehl, 1966, p. 167). Based on these differences, neural events cannot be identical with mental states. See also Feinberg, 1997b, 2000.

  6. Boring, in his monumental A History of Experimental Psychology (1959), opines that Descartes “marks the actual beginning of modern psychology.”
  7. Descartes (Les Passions de l'ame, Pt. I, article 30, 1649). Translation from Beakley and Ludlow, 1992, p. 111.
  8. Descartes (Les Passions de l'ame, Pt. I, article 32, 1649). Translation from Clarke and O'Malley, 1996, p. 471.
  9. Sherrington, 1947.
  10. Ibid., p. xiv.
  11. Sherrington, 1941, p. 277. The earliest use of the term “pontifical cell” that I have found is James, 1890/1983, p. 181.
  12. For the reader who is interested in an introduction to the biology of the visual system, see Kandel, Schwartz, and Jessell, 2000. A book that I have found particularly useful, especially on the issues that pertain to visual awareness, is Zeki's A Vision of the Brain (1993). I primarily used these two sources for this overview of the visual system.
  13. Hubel and Wiesel, 1962; 1965; 1968; 1977. For summaries of Hubel and Wiesel's work, see Hubel and Wiesel, 1979; and Hubel, 1988.
  14. Zeki, 1993, p. 298.
  15. For a nice discussion of the idea of the “grandmother cell” and some of the history of the idea, see Barlow, 1995.
  16. The cells of V1, the primary visual area, have relatively small receptive fields that make the precise localization of a stimulus's position in space possible. On the other hand, V5, an area further along the processing stream and specialized for movement perception, has cells with large receptive fields that respond to movement across large segments of visual space. The larger receptive fields of V5 cells that are adequate or even necessary for the detection of motion are clearly inadequate for precise spatial localization. Therefore, processing information about a large moving object at a certain point in space would require the information from both V1 and V5 to be conscious. But this arrangement poses a real problem for visual unification. There is a wonderful experiment by Movshon, Adelson, Gizzi, and Newsome (1985) that illustrates this problem beautifully. They demonstrated that when certain shapes are viewed behind a small circular aperture, conflicting movement information would be perceived at the level of the primary visual cortex (V1). For instance, if the information went no further than V1, adjacent sides of a horizontally moving diamond would appear to be traveling in different directions. Due to their small receptive fields, the cells of V1 are able to encode the exact locus of a stimulus in space, but they are unable to discern the pattern of movement of a single object behind the aperture. The cells of V5, unlike the cells of V1, are able to respond appropriately to the movement of the actual overall object. But if the “whole object” represented in V5 were the only conscious representation of the stimulus, the “local sign,” that is, the exact topographical locus of a stimulus in space, would be lost. In order to know where the whole object moving in space is located, input from V1 and V5 is required. Therefore, the visual system must enable both V1 and V5 to make “explicit” contributions to visual perception.

  17. Zeki, 1993, p. 301. In support of this position, Beckers and Zeki (1995) performed an experiment using a technique known as transcranial magnetic stimulation (TMS). In TMS, a magnetic pulse is applied to the scalp, creating a localized and reversible inactivation of the underlying brain. When the cells of area V1 of normal subjects are inactivated with a pulse applied to the back of the head, conscious motion perception that requires an active V5 is still possible, although the ability to judge the orientation of stationary objects, which requires exquisite point localization, is lost. Thus, the subject saw something “move” but didn't know exactly where it was. Based on this observation, Beckers and Zeki concluded that V5 appears to make a conscious contribution to the perception of motion that is wholly independent of V1, the primary visual region.
  18. Dennett, 1991.
  19. Pinker addresses the issue of the homunculus from the standpoint of cognitive neuroscience in How the Mind Works, 1997, p. 79.
  20. Recently, there has been a tremendous amount written about the “binding problem.” For some reviews of this complicated topic, see Crick and Koch, 1990; Crick, 1994; von der Malsburg, 1995; König and Engel, 1995; Treisman, 1996; Hardcastle, 1997; Singer, 1998; 1999. There is a special issue of Consciousness and Cognition (volume 8, no. 2, June 1999) devoted to this topic that provides an excellent overview of the area. This issue contains an interesting article by Antti Revonsuo (1999) which specifically addresses the relationship between neuroscientific approaches to binding and the problem of mental unity from the standpoint of philosophy. Revonsuo distinguishes between perceptual binding, which he refers to as stimulus-related binding, and the phenomenal unity of experience, which he terms consciousness-related binding. He also refers to cognitive binding. He argues that some examples of stimulus-related binding, such as the demonstration of neural synchronization in an anesthetized (and therefore unconscious) animal, suggest that stimulus-related binding may be dissociated from consciousness-related binding. The degree to which these forms of binding overlap requires further explication. In this book I deal with these various forms of binding as different aspects of the same fundamental neurophilosophical problem regarding the unity of the self. However, it is certainly possible, even likely, that different types of binding occur via different mechanisms.
  21. In addition to the references in note 20, for some earlier papers on the role of oscillations in binding and awareness, see Gray and Singer, 1989; Gray et al., 1989; Engel et al., 1991; Crick and Koch, 1990; and Gray et al., 1992.
  22. Crick, 1994, p. 245.



  23. Edelman, 1989.

  24. Ryle, 1949.

  25. Tinbergen, 1951, p. 125.

Chapter 8

  1. Kim, 1992. Kim also addresses the issues of emergence and reduction in the philosophy of consciousness in Kim, 1995. In a recent book, Kim (1998) notes that emergentism is showing “strong signs of a comeback” not only in philosophy, but in “psychology, cognitive science, systems theory, and the like” (pp. 8–9). For an excellent overview of emergence theory in the philosophy of the mind, see Beckermann, Flohr, and Kim (1992).
  2. Campbell, 1974.
  3. Pattee, 1970, p. 119.
  4. Discussions of hierarchy theory and constraint in biological systems can be found in Whyte, Wilson, and Wilson (1969); Pattee (1973); Ayala and Dobzhansky (1974); Allen and Starr (1982); and Salthe (1985). Lively and engaging accounts of hierarchies in a host of settings can be found in the works of Arthur Koestler. Probably his best known work is The Ghost in the Machine (1967). For an excellent summary of his work see Koestler (1978). Koestler coined the term holon to describe the subwholes found in all hierarchies.
  5. Medawar and Medawar, 1977.
  6. Ibid., p. 177.
  7. Searle, 1992, pp. 112–13.
  8. Ibid., p. 113.
  9. Morgan, 1923, p. 35.
  10. The brilliant scientist-philosopher Michael Polanyi (1965) provided an analysis of emergence as it pertains to stereoscopic vision. Polanyi's more general views on the emergence of consciousness can be found in Polanyi (1966) and (1968).
  11. Sperry, 1977. Reprinted in Trevarthen, 1990, p. 383.
  12. Ibid., p. 382. See also Sperry, 1965; 1966; and Sperry, 1984.
  13. Sperry, 1977. Reprinted in Trevarthen, 1990, p. 384.
  14. Scott, 1995, pp. 172, 5.
  15. For a discussion of the differences between nested and non-nested hierarchies, see Allen and Starr, 1982, pp. 38–47; and Salthe, 1985, pp. 9–11.
  16. Allen and Starr (1982) provided a definition of a nested hierarchy: “A nested hierarchy is one where the holon at the apex of the hierarchy contains and is composed of all lower holons. The apical holon consists of the sum of the substance and the interactions of all its daughter holons and is, in that sense, derivable from them. Individuals are nested within populations, organs within organisms, tissues within organs, and tissues are composed of cells” (p. 38). As noted in note 4, holons are subwholes or parts of hierarchies.
  17. This model helps explain the aperture problem raised by the experiment of Movshon and coworkers previously discussed (chapter 7, note 16). We saw how the cells of V1, the primary visual areas, have small receptive fields that make the precise localization of stimulus position in space possible. On the other hand, V5, an area specialized for movement perception, which receives input from, among other areas, area V1, has cells with large receptive fields that respond to movement across large segments of visual space. The brain areas V1 and V5 appear as separate, hierarchically arranged brain regions, with V1 feeding V5 downstream, and V5 feeding upstream back to V1. When viewed in this fashion, V1 and V5 appear as part of a non-nested hierarchy with a larger number of neurons converging on a smaller number of neurons and the simultaneous emergence of more specific properties. Indeed, in this process, the emergence of higher order features necessitates that the cells beyond V1 be responsive to stimuli across a large portion of the visual field. But if this were the only conscious representation, the exact topographic locus of a stimulus in space would be lost. However, as we have seen, the mind does not lose the properties of the lower order neurons (i.e., the exact position of the stimulus is not lost despite the creation of new properties). The brain operates as a nested hierarchical system, wherein V1 neurons continue to make a contribution to awareness as well as contribute to the emergence of the higher order features present in the cells of V5.

  18. At one time in the history of science, purpose was felt to be everywhere, especially when it came to human beings. The products of natural selection are so well suited to their tasks that it was assumed for centuries that someone or something had a purposeful hand in their design. Aristotle suggested that “final causes” played a role in the creation of life, but this position is now seen, in the light of modern biology, as an example of “teleological thinking.” We now know that fins were not “intentionally” designed so that fish could swim, nor human hands “designed” so that we could grasp with opposable thumbs. In other words, natural selection does not set out with a predetermined goal in mind (see Mayr, 1974, p. 96; see also Searle, 1992, pp. 51–52).
  19. Searle points out that some actions that may seem intentional actually are not. In order to make this point he compares the statement “I am thirsty” with the statement “My lawn is thirsty.” In the first instance, there is a real conscious entity that possesses a true intentional desire to drink. Searle calls this intrinsic intentionality. In the second statement, we do not suppose the lawn actually “thinks” it is thirsty, or has a desire to drink, or intends to soak up some water. The lawn does not have an intentional desire to drink, and therefore the lawn does not have an intentional state. Rather, when we say the lawn is thirsty, we are speaking metaphorically. Searle refers to this condition as as-if intentionality (Searle, 1992, pp. 78–82). While agreeing with Searle, I would prefer to call the lawn's mechanism a teleonomic but non-purposeful action.
  20. Mayr, 1974; 1982.
  21. Braithwaite, 1953, p. 326.
  22. Myers, 1976.
  23. Our meanings and purposes did not just suddenly appear on the evolutionary landscape. Rather, they have slowly evolved from the complexity of organisms. With the highly evolved brain development of humans, the most advanced state of outside object projection, objectification, and differentiation from the self is made possible. Simple neural states such as those that result in reflexive behaviors ultimately evolved into complex neural states that carry ever more sophisticated meanings for the organism. There simply is no actual discontinuity between reflex and purposeful action, between the pupillary response to light and contemplating a Picasso painting. When compared with the simple reflexes of a snail, our human purposes seem prescient, our meanings sublime. I suspect, despite this, that if our intentions and awareness were compared with those of some hypothetical, more highly evolved life form in some other place or time in the universe, they might appear as nothing more than reflexes.

  24. It is a daunting task to acquaint oneself with the writings of J. H. Jackson. I refer the reader to one of Jackson's most famous works, his Croonian lectures of 1884 (reprinted in Taylor, J., ed., Selected Writings of John Hughlings Jackson, vol. 2, 1958, pp. 45–75). An excellent resource on Jackson's work is Harrington, 1987. Also see Levin, 1953.
  25. Jackson, 1884; reprinted in Taylor, ed., vol. 2, 1958, p. 49.

  26. Monrad-Krohn, 1924; Feiling, 1927.

  27. See Damasio, 1994, pp. 139–143, for more on the dissociation between voluntary and spontaneous emotional expression.
  28. See Ramachandran and Blakeslee, 1998, pp. 199–211, for an interesting take on pathological laughter.
  29. Consciousness and Cognition (volume 8, no. 2, June 1999).
  30. Scott, 1995.
  31. Harth (1993) suggested a different solution to the “inner eye” problem. He proposed that the unification of the mind occurs not at the top, but rather at the bottom, of the sensory pyramid. According to Harth: “Unlike previous attempts that have placed the (Cartesian) theater at the highest level of cerebral activity, I believe that the unification is located at the only place where sensory patterns are still whole and preserve the spatial relations of the original scene—at the bottom of the sensory pyramid, not at the top. It is there that all the sensory cues and cerebral fancies conspire to paint a scene. There is also an observer: it is the rest of the brain looking down, as it were, at what it has wrought. Consciousness, which arises in this self-referent process, not only unifies the immediate sensory messages but also becomes the joiner of everything around us, past, present, and future.” The problem with Harth's approach is that while he claims that there is no “intelligent monitor at the top of the sensory pyramid,” his model nonetheless posits a higher order “observer,” an inner homunculus, observing the workings of the rest of the brain within a lower order “Cartesian Theater.” The Cartesian Theater is simply moved from the higher centers of the brain to regions lower down on the sensory hierarchy. In contrast, I have tried to show how all levels of the neural hierarchy contribute to the unified mind and the self without positing an inner homunculus or a centralization of neural pathways.



Chapter 9

  1. See Güzeldere, 1995, for a review of some contemporary viewpoints. Also see Velmans, 1991a; 1991b; 1995; Globus, 1973; 1976; Metzinger, 1995.
  2. Schopenhauer, cited in Janaway, 1989, pp. 119–20.
  3. George Henry Lewes, 1879, p. 459.
  4. Herbert Spencer, 1883, p. 157.
  5. Nagel, 1974.
  6. Ibid., p. 437.
  7. Nagel, 1986, p. 7.
  8. Searle, 1992, p. 122.
  9. Ibid., p. 95.
  10. Brain, 1951, p. 13.
  11. Sherrington, 1947. Some surprisingly complicated behaviors turn out to be highly automatic. For instance, one can cut the spinal cord of a cat so that the portion of the spinal cord controlling the legs is disconnected from the brain. If the animal is then put on a treadmill, it is able to walk with a rhythmic pattern typical of the intact animal. No consciousness, no volition, only an isolated spinal cord and its local circuits control the stepping pattern.
  12. Lettvin et al., 1959.
  13. Sherrington, 1947, p. 324. The neural states that have meaning are no doubt within an organism's nervous system. But millions of years of evolution have established that the neural states caused by outside objects will automatically cause the animal to respond in a fashion appropriate to where the stimuli are in the world, not where the neural states really are, which is in the brain. This is not some sort of mental magic. The brain has found a way to produce neural states that capture features of the world. But the brain of course does not literally “capture” the world. We should not be deceived into thinking that the brain actually “absorbs” these features of the environment. We can artificially stimulate the brain with an electrode, in the absence of such stimuli, and create the same sensory effects. Rather, it is more accurate to say that what the nervous system does is create meanings. A sharp pinprick, for example, is a combination of touch, point localization, an impulse to withdraw, and so on. All these sensory and motor elements are integrated into a meaning for the organism. The consciousness of the pinprick for the organism is the meaning for the organism, a meaning that varies with different neural states. For a recent discussion of projection, see Velmans, 1996.
  14. Dennett, 1992.
  15. Levine, 1983.
  16. Feigl, 1967.
  17. Teller, 1992, p. 199.
  18. Clarke and O'Malley, 1996, p. 9.
  19. Globus, 1973, p. 1129. See also Globus, 1976.
  20. See Dennett and Haugeland, 1987.
  21. Searle, 1984, p. 14. See also Searle, 1983. In his book Minds, Brains, and Science, Searle enumerates four features of mental phenomena that do not seem to fit a “scientific” conception of the world. Intentionality is one of these features. The other three intractable features of mental phenomena are consciousness itself, the subjectivity of mental states, and mental causation. In this book, I have tried to show how these four features can fit within a fully scientific view of the brain and mind.

Chapter 10

  1. If the life of the whole organism is built on the life of the parts, can we reduce the life of the cell to its parts as well? Some interesting questions are raised when we try to reduce entities within the nested hierarchy of a living thing. For example, is an organic molecule in my femur bone alive? Does such a molecule “die” if I take it out of my bone and place it on the table? Is a single carbon atom within my living femur alive as well? Where does one draw the line? Like consciousness, life has a personal ontology, and some aspects of a life are irreducibly relative to the being that possesses it. Consider that lonely carbon atom in my femur. To you, this atom has the same ontology whether it is in my body or on the table next to me. In contrast, when that carbon atom is part of my life, part of the nested hierarchy of my body, it possesses a personal ontology that is unique to me. That unique feature of being part of my living self is lost if the atom is removed from my body.
  2. For a refutation of the silicon chip argument, see Searle, 1997.
