"The Neuronal Correlates of Consciouness"
Metzinger, T., ed., pp. 103-110. MIT Press: Camrbidge, MA (2000)
The Unconscious Homunculus
Francis Crick
The Salk Institute
Christof Koch
Computation and Neural Systems Program
California Institute of Technology
Abstract
We discuss Jackendoff's "Intermediate-Level Theory of Consciousness," as well as related work by others, which implies that we are not directly conscious of our thoughts. We apply this hypothesis to the visual system of the macaque monkey and discuss possible experimental tests.
It is universally agreed that it is not completely obvious how the activity of the brain produces our sensory experiences; more generally, how it produces consciousness. This is what Chalmers has dubbed "The Hard Problem" (Chalmers, 1996). Philosophers are divided about the likely nature of the solution to this problem and whether it is, indeed, a problem at all. For a very readable account of the nature of some of their discussions and disagreements the reader should consult the recent book by Searle (1997) with contributions from Chalmers and Dennett, the edited anthology by Shear (1997) as well as the collection of essays by Paul and Patricia Churchland (1998).
Our own view is that it is a plausible working assumption that some activity of the brain is all that is necessary to produce consciousness, and that this is the best line to follow unless and until there is clear decisive evidence to the contrary (as opposed to arguments from ignorance). We suspect that our present ideas about how the brain works are likely to turn out to be inadequate; that radically new ideas may be necessary, and that well-formulated suggestions (even way-out ones) should be carefully considered. However, we also believe that, while Gedanken experiments are useful devices for generating new ideas or for suggesting difficulties with existing ideas, they do not lead, in general, to trustworthy conclusions. The problem is one that should be approached scientifically and not merely logically. That is, that any theoretical scheme should be pitted against at least one alternative theory, and that real experiments should be designed to test between them. (As an example, see our hypothesis that primates are not directly aware of the neural activity in cortical area V1, the primary visual cortex; Crick and Koch, 1995).
The important first step is to find the neural correlate of consciousness (the NCC), for at least one type of consciousness. We will not repeat here our general approach to the problem, as this has been set out in a recent update of our views (Crick and Koch, 1998). In this paper we wish to venture a step further by asking what can be said about the precise nature of qualia from an introspective, first-person perspective. Another way to look at the matter is to emphasize that it is qualia that are at the root of the hard problem, and that one needs to have a clear idea of exactly under what circumstances qualia occur.
The Intermediate-Level Theory of Consciousness
In earlier publications about the visual system of primates (Crick and Koch, 1995) we suggested that the biological usefulness of visual consciousness in humans is to produce the best current interpretation of the visual scene in the light of past experience, either of ourselves or of our ancestors (embodied in our genes), and to make this interpretation directly available---for a sufficient amount of time---to the parts of the brain that plan possible voluntary motor outputs of one sort or another, including speech.
Philosophers have invented a creature they call a "zombie," who is supposed to act just as normal people do but to be completely unconscious (Chalmers, 1996). While strictly logically possible, this seems to us to be an untenable scientific idea, but there is now suggestive evidence that part of the brain does behave like a zombie. That is, in some cases, a person uses current visual input to produce a relevant motor output, without being able to say what was seen. Milner and Goodale (1995) point out that a frog has at least two independent systems for action. These may well be unconscious. One is used by the frog to snap at small, prey-like objects, and the other for jumping away from large, looming objects. Why does our brain not consist simply of a series of such specialized zombie systems? We proposed (Crick and Koch, 1995) that such an arrangement is inefficient when very many such systems are required. Better to produce a single but complex representation and make it available for a sufficient time to the parts of the brain that make a choice among many different but possible plans for action. This, in our view, is what seeing is about.
Milner and Goodale (1995) suggest that in primates there are two systems, which we have called the on-line and the seeing systems. The latter is conscious, while the former, acting more rapidly, is not. If a bundle of such unconscious specialized on-line systems could do everything more efficiently than our present arrangement, we would not be conscious of anything.
We decided to re-examine the ideas of Ray Jackendoff (1987) as expressed in his book entitled Consciousness and the Computational Mind, in which he put forward the Intermediate-Level Theory of Consciousness. Jackendoff's book, which is based on a detailed knowledge of cognitive science, is a closely argued defense of the at-first-sight paradoxical idea that we are not directly conscious of our thoughts, but only of a representation of them in sensory terms. His argument is based on a deep knowledge of modern linguistics and the structure of music, though he also makes some suggestions about the visual system.
Let us first consider Jackendoff's overall view of the mind/brain problem. His analysis postulates three very different domains. These are:
1. the brain
2. the computational mind
3. the phenomenological mind
The brain domain includes both the neurons (and associated cells) and their activities. The computational mind handles information by doing a whole series of "computations" on it. The level of the computational mind is not concerned with exactly how these computations are implemented---this is the standard AI view---but takes for granted that neural instantiation will eventually play an important role in constraining the theory. The domain of the phenomenological mind consists of qualia such as blueness, saltiness, painfulness, and so on. Jackendoff confesses he has no idea how blueness and the other experiences arise out of computation (Chalmers' hard problem). What he is concerned with is what types of computations have qualia associated with them. He is less concerned with the main problem that interests us, which is how some activities of the brain correlate with qualia, though he would agree with us that, roughly speaking, it is the transient results of the computations that correlate with qualia; most of the computations leading up to those results are likely to be unconscious. But since computations are implemented in neuronal hardware, these two questions can be connected by asking which parts of the brain are responsible for which computations.
Jackendoff remarks that common sense seems to tell us that awareness and thought are inseparable and that introspection can reveal the contents of the mind. He argues at length that both these beliefs are untrue. They contrast strongly with his conclusion that thinking is largely unconscious. What is conscious about thoughts is visual or other images, or talking to oneself. He maintains that visual and verbal images are associated with intermediate-level sensory representations, which are in turn generated from thoughts by the fast processing mechanisms in short-term memory. Both the process of thought and its content are not directly accessible to awareness.
An example may make this clearer. A bilingual person can express a thought in either language but the thought itself, which generates the verbal activity or imagery, is not directly accessible to him but only in these sensory forms.
Another way of stating these ideas is to say that most of what we are directly aware of falls under two broad headings:
1. a representation of the outer world (including our bodies), and
2. a representation of the inner world; that is, of our thoughts.
3. both of these representations are expressed solely in sensory terms.
This implies that we are neither directly aware of the outer world nor of the inner world, although we have the persistent illusion that we are. Curiously enough, this idea, which seems very appealing to us, has attracted rather little attention from brain scientists, though it dates back at least as far as Immanuel Kant.
To appreciate these arguments, the reader should consult Jackendoff (1987) as well as some updates to these ideas in Jackendoff (1996). For the visual system he proposed ideas based on the theories of David Marr. Marr argued in his posthumous book Vision (1982) that it would be almost certainly impossible for the brain to arrive at a visual representation, corresponding to what we consciously see, in only one step. He therefore suggested a hypothetical series of stages. In his analysis he concentrated on the representation of shape, though he realized that a fuller treatment would include movement, texture and color.
Marr proposed four possible stages. The first one he called the "Image" (there might be several such steps). This simply represents the light intensity value at each point in the visual image. The second he called the "Primal sketch." This makes explicit important information about the two-dimensional image, such as edge segments, terminations, etc. The third stage was the "2-1/2D sketch." This makes explicit the orientation and rough depth of the visible surfaces, and contours of discontinuities in these quantities, in a viewer-centered coordinate frame. The fourth and final stage he called the "3D model representation." This describes shapes and their spatial organization in an object-centered frame.
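Marr's first two stages admit a simple computational caricature. The sketch below is our illustration only, not Marr's actual algorithms: the "Image" is represented as an array of intensity values, and a crude stand-in for the "Primal sketch" is obtained by marking points where the local intensity gradient is large. The function names and the threshold value are our own assumptions.

```python
import numpy as np

def image_stage(width=8, height=8):
    """Marr's "Image" stage: light intensity at each point.
    Here, a toy 8x8 frame: a bright square on a dark background."""
    img = np.zeros((height, width))
    img[2:6, 2:6] = 1.0
    return img

def primal_sketch(img, threshold=0.4):
    """A crude stand-in for the "Primal sketch": mark points where the
    local intensity gradient is large (candidate edge segments)."""
    gy, gx = np.gradient(img)      # gradients along rows and columns
    magnitude = np.hypot(gx, gy)   # gradient magnitude at each point
    return magnitude > threshold   # boolean map of edge candidates

img = image_stage()
edges = primal_sketch(img)
# Edge candidates appear along the border of the bright square, but not
# in its uniform interior nor in the uniform background.
```

Running this marks the boundary of the bright square while leaving its uniform interior and the background unmarked, which is the sense in which the primal sketch makes edge segments, rather than raw intensities, explicit.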
Work on the visual system of the macaque does indeed suggest that it consists of a series of stages (Felleman and Van Essen, 1991) and that these follow one another along the broad lines suggested by Marr, but the system probably does not display the exact stages he suggested, and is probably considerably more complicated. For the sake of convenience, though, we will continue to use his nomenclature.
Jackendoff proposes that we are directly conscious of an extended version of something corresponding roughly to Marr's 2-1/2D sketch but not of his 3D model. For instance, when we look at a person's face we are directly conscious of the shape, color, movement and so on, of the front of her face (like the 2-1/2D sketch), but not of the back of her head, though we can imagine what the back of her head might look like, deriving this image from a 3D model of the head of which we are not directly conscious.
The experimental evidence shows that the higher levels of the visual system, in the various inferotemporal regions, have neurons that do appear to respond mainly to something like an enriched 2-1/2D sketch, and show a certain amount of size, position and rotation invariance. This has been especially studied for faces and, more recently, for artificial bent-wire 3D shapes (Perrett, Oram, Hietanen and Benson, 1994; Logothetis, Pauls and Poggio, 1995; Logothetis and Pauls, 1995). We will discuss these results more fully in a later section.
We have located several other suggestions along similar lines. There are probably more (for a philosophical perspective, see Metzinger, 1995; for a dissenting view, see Siewert, 1998).
We discovered a suggestion almost identical to Jackendoff's from a brief report of a meeting on consciousness (J. Consciousness Studies 4: 396, 1997), outlining the more recent ideas of Richard Stevens (1997). In brief, from periods of closely-observed introspection he has concluded that:
"Conscious awareness is essentially perceptual. It consists entirely of perceptual images. These may be directly stimulated by outside events or internally generated in the more elusive and less well defined form of remembered or imagined percepts."
Among perceptual images he includes unspoken speech. This is in striking agreement with Jackendoff's ideas, which were then largely unknown to him. Stevens also makes the point that consciousness is necessary for certain forms of evaluations, "because it is only when thoughts and possibilities are conscious in the form of words and/or images that we can begin to compare and contrast them."
An idea somewhat similar to Jackendoff's was put forward by Sigmund Freud. Consider this quotation from his essay on "The Unconscious," published in 1915:
In psycho-analysis there is no choice but for us to assert that mental processes are in themselves unconscious, and to liken the perception of them by means of consciousness to the perception of the external world by means of sense-organs.
(The quotation was brought to our attention in a paper by Mark Solms (1997), who states that Freud probably derived the idea from Kant, either directly or indirectly.) As is well known, Freud was driven to this idea by his studies on disturbed patients. "He found that without making this assumption he was unable to explain or even describe a large variety of phenomena which he came across." However, it is not clear that Freud believed that the perception of mental processes was entirely sensory. That is, he stated points (1) and (2) above, but he did not make point (3) explicitly.
There is also the well-known claim by Karl Lashley. In his provocative 1956 book Cerebral Organization and Behaviour he wrote:
No activity of mind is ever conscious. [Lashley's italics] This sounds like a paradox, but it is nonetheless true. There are order and arrangement, but there is no experience of the creation of that order. I could give numberless examples, for there is no exception to the rule. A couple of illustrations should suffice. Look at a complicated scene. It consists of a number of objects standing out against an indistinct background: desk, chairs, faces. Each consists of a number of lesser sensations combined in the object, but there is no experience of putting them together. The objects are immediately present. When we think in words, the thoughts come in grammatical form with subject, verb, object, and modifying clauses falling into place without our having the slightest perception of how the sentence structure is produced. . . Experience clearly gives no clue as to the means by which it is organized.
In other words, Lashley believed that the processes underlying thoughts, imagery, silent speech and so on are unconscious while only their content may be accessible to consciousness. Again, it is not clear that Lashley was suggesting that all conscious thoughts are expressed solely in sensory terms.
It is worth noting that Jackendoff and Stevens arrived at broadly the same conclusion from significantly different evidence. The exclamation, "How do I know what I think till I hear what I say?" shows that the idea is not unknown to ordinary people.
Let us assume, therefore, that qualia are associated with sensory percepts, and make a few rather obvious points about them. Apart from the fact that they differ from each other (red is quite different from blue, and both from a pain, or a sound), qualia also differ in intensity and duration. Thus the qualia associated with the visual world, in a good light, are more vivid than a recollection of the same visual scene (vivid visual recollections are usually called hallucinations). A quale can be very transient, and pass so quickly that we may have little or no recollection of it. Neither of these two properties is likely to cause any special difficulties when we consider the behavior of neurons, since neurons can easily express intensity and duration.
However, there is a class of conscious percepts which have a rather different character from straightforward sensory percepts. Jackendoff originally used the term "affect" to describe them, though more recently he has substituted the term "valuation" (Jackendoff, 1996). Examples would be a feeling of familiarity, or novelty, or the tip-of-the-tongue feeling, and all the various emotions. It is not clear whether these feelings exist in their own right, or are merely certain mixtures of various bodily sensations. Stevens (1997) discusses "feels" associated with particular percepts, images or words. We propose to leave these more diffuse percepts on one side for the moment, though eventually they, too, will have to be explained in neural terms. Some of these may merely express simple relationships (such as "same as" or "different from") between sensory qualia.
The homunculus is usually thought of as a "little man inside the head," who perceives the world through the senses, thinks, and plans and executes voluntary actions. In following up this idea we came across a "Comment" by Fred Attneave (1961), entitled "In Defense of Homunculi." He lists two kinds of objections to a homunculus. The first is an aversion to dualism, since it might involve "a fluffy kind of nonmatter. . . quite beyond the pale of scientific investigation." The second has to do with the supposed regressive nature of the concept; who is looking at the brain states of the homunculus? Attneave notes, "that we fall into a regress only if we try to make the homunculus do everything. The moment we specify certain processes that occur outside the homunculus, we are merely classifying or partitioning psychoneural functions; the classification may be crude but it is not itself regressive." He puts forward a very speculative over-all block diagram of the brain, involving hierarchical sensory processing, an affect system, a motor system, and a part he calls H, the homunculus. It is reciprocally connected to the perceptual machinery at various levels in the hierarchy, not merely the higher ones. It receives an input from the affective centers and projects to the motor machinery. (There are other details about reflexes, skills, proprioception, etc.) He emphasizes that his scheme avoids the difficulty of an infinite regress.
Attneave tentatively locates the homunculus in a subcortical location, such as the reticular formation, and he considers it to be conscious. Yet his basic idea is otherwise very similar to the one discussed above. We all have this illusion of a homunculus inside the brain (that's what "I" am), so this illusion needs an explanation. The problem of the infinite regress is avoided in our case, since the true homunculus is unconscious, and only a representation of it enters consciousness. This puts the problem of consciousness in a somewhat new light. We have therefore named this type of theory one postulating an unconscious homunculus, wherever it may be located in the brain. The unconscious homunculus receives information about the world through the senses and thinks, plans and executes voluntary actions. What becomes conscious then is a representation of some of the activities of the unconscious homunculus in the form of various kinds of imagery and spoken and unspoken speech. Notice that this idea does not, by itself, explain how qualia arise.
The concept of the unconscious homunculus is not a trivial one. It does throw a new light on certain other theoretical approaches. For example, it may make Penrose's worries about consciousness unnecessary. Penrose (1989, 1997) has argued that present-day physics is not capable of explaining how mathematicians think, but if all such thinking is necessarily unconscious---as mathematicians have testified (Hadamard, 1945) that certainly some of it is---then although something such as quantum gravity may be needed for certain types of thinking, it may not be required to explain consciousness as such. Penrose has given no argument that sensory experiences themselves are difficult to explain in terms of present-day physics.
Possible Experimental Approaches
In approaching a system as complex as the brain, it is important to have some idea, however provisional, as to what to look for. Let us therefore follow these authors and adopt the idea of the unconscious homunculus as a tentative working hypothesis, and ask what experiments might be done to support it. For the moment we will concentrate on the visual system.
What we are trying to identify is the activity of the brain that produces visual qualia. We have argued (Crick and Koch, 1995, 1998) that whatever other properties are involved we should expect to find neurons whose firing is in some way correlated with the type of qualia being perceived. So it is not unreasonable to ask which are the neurons whose activity is correlated with Marr's 2-1/2D sketch (roughly speaking, the visual features of which we are directly aware) and which are the neurons whose activity is correlated with Marr's 3D model (of which we are only indirectly aware). For the moment we will assume that this latter activity is represented somewhere in the cortex and leave aside other less likely possibilities, such as in the reticular formation or the claustrum.
As far as we know, there are only two sets of relevant experimental results. The first is due to Perrett and his co-workers in their study on the neurons in the alert macaque monkey that respond to faces (Perrett, Hietanen, Oram and Benson, 1992). Most of the neurons in the higher levels of the visual system that respond to faces fire only to one aspect of the face, usually a specific view. The firing is somewhat independent of scale, of small translations and of some degree of rotation (Pauls, Bricolo and Logothetis, 1996). These neurons look as if they are members of a distributed representation of a particular view of a face, as suggested by the theoretical work of Poggio (1990; see also Poggio and Edelman, 1990 and Logothetis, Pauls, Bülthoff and Poggio, 1994) and supported (on a lightly anaesthetized macaque) by Young and Yamane (1992).
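The flavor of such a distributed, view-based representation can be conveyed by a toy sketch in the spirit of Poggio's proposal. This is our illustration, not the published model; the stored view angles, the Gaussian tuning form, and the tuning width are all our own assumptions. Each unit is tuned to one stored view and its response falls off smoothly as the object rotates away from that view; a pooled "object-level" response is obtained by summing over the units. No single unit is view-invariant, yet the population as a whole covers all views.

```python
import numpy as np

# Stored ("trained") views, in degrees, each with its own view-tuned unit.
preferred_views = np.array([0.0, 90.0, 180.0, 270.0])
sigma = 40.0  # tuning width in degrees (an arbitrary assumption)

def unit_responses(view_angle):
    """Response of each view-tuned unit to the object at view_angle:
    a Gaussian falloff with angular distance from the preferred view."""
    # angular distance, wrapped into [0, 180] degrees
    d = np.abs((view_angle - preferred_views + 180.0) % 360.0 - 180.0)
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def object_response(view_angle):
    """Pooled "object-level" response: a sum over the view-tuned units."""
    return unit_responses(view_angle).sum()

r = unit_responses(0.0)
# The unit tuned to the frontal view responds maximally at 0 degrees,
# while the unit tuned to the 90-degree view responds only weakly there.
# The pooled response remains substantial at an intermediate view such as
# 45 degrees, even though no single unit prefers that view.
```

On this caricature, recognition at an unfamiliar intermediate view amounts to interpolation between the stored views, which is the general behavior such view-based network models were designed to capture.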
However, Perrett et al. (1992) do report 6 neurons (4% of the total) that respond to all horizontal views of a head: that is, they are view-invariant. These might be taken to be part of a 3D model representation. However, it is known that some of the circuits in the hidden layers of a 3-level feed-forward neural network, trained by back-propagation, often have somewhat unusual properties (Sejnowski and Rosenberg, 1987), so one could argue that these apparent 3D-model neurons are really only a small accidental part of a 2-1/2D sketch. Against this interpretation, Perrett et al. (1992) claim that these 6 neurons have a significantly longer latency (130 msecs against 119 msecs), suggesting that they are one step higher in the visual hierarchy. The crucial question is whether this minority of neurons is of a different type from the view-specific face neurons (for example, projects to a different place), and this is not known.
The other example comes from the experiments of Logothetis and Pauls (1995) on the responses of the neurons in an alert macaque, again in the higher levels of the visual hierarchy, to artificial paper-clip-like models. Again, a minority of neurons (8 of the 773 cells analyzed) respond in a view-independent manner, but in these experiments the latencies were not measured, nor was it known exactly which type of neuron was being recorded (N. Logothetis, personal communication).
A naive interpretation of our general idea would be that the face representations in prefrontal cortex reported by Scalaidhe, Wilson and Goldman-Rakic (1997) would be implemented solely by view-independent neurons, and without any view-dependent ones. As far as we know, this has not yet been studied. Note that while the activity of view-independent neurons should always be unconscious, it does not follow that the activities of all view-dependent ones must always be conscious. Our unconscious thoughts may well involve neurons of this latter type.
We think this simple guess at the location of these two types of neurons is rather unlikely, though we would not be surprised if the percentage of neurons showing view-invariance turns out to be higher in prefrontal areas than the very small fractions reported in inferotemporal cortex. One might also find a higher percentage in such areas as the parahippocampal gyrus and the perirhinal cortex, leading to the hippocampus. Whether they will also be found in parts of the thalamus and in the amygdala remains an empirically open question.
Another possibility is that, contrary to Jackendoff's suggestion, there is no true, object-centered (3D) visual representation in an explicit form in the brain. That is, object-centered information is never made explicit at the level of individual neurons, being coded instead in an implicit manner across a distributed set of neurons. While there are still unconscious computations that lead up to thoughts, the results of the computations are expressed directly in sensory, viewer-centered terms. If this were true, the search for view-invariant neurons in prefrontal cortex would be unsuccessful.
We have briefly considered the visual system, but though they are outside the scope of this paper, the same analysis should be applied to the other sensory systems, such as audition, somato-sensory, olfaction, and pain. It may not always be completely obvious what the difference is between (unconscious) thoughts and the (conscious) sensory representations of these thoughts in these systems. The crucial test to distinguish between these two is whether any qualia are involved beyond mental imagery and unspoken speech (e.g. the putative non-iconic thoughts of Siewert, 1998). We leave this to the future.
Another problem concerns our guess that unconscious thought processes may be located in some places in prefrontal cortex. Firstly, it is not clear exactly where prefrontal cortex ends as one proceeds posteriorly, especially in the general region of the insula. Secondly, the selection of "prefrontal" cortex (or a subset thereof) in this way seems rather arbitrary. It would be more satisfactory if there were a more operational definition, such as those cortical areas receiving a projection from the basal ganglia, via the thalamus (usually thalamic area MD). It is conceivable that the rather rapid sequential winner-take-all operations performed by the basal ganglia may not be compatible with consciousness, but are frequently used by more rapid, unconscious thought processes.
As Stevens (1997) has stated, the picture that emerges from all of this is quite surprising. Contrary to what has often been assumed, we are not directly aware of the outer world of sensory events. Instead, we are conscious of the results of some of the computations performed by the nervous system on the various neural representations of this sensory world. These results are expressed in various cortical areas (excluding primary visual cortex; Crick and Koch, 1995). Nor are we directly aware of our inner world of thoughts, intentions and planning (that is, of our unconscious homunculus) but -- and this is the surprising part -- only of the sensory representations associated with these mental activities. What remains is the sobering realization that our subjective world of qualia---what distinguishes us from zombies and fills our life with color, music, smells, and other vivid sensations---is probably caused by the activity of a small fraction of all the neurons in the brain, located strategically between the outer and the inner worlds. How this activity acts to produce the subjective world that is so dear to us is still a complete mystery.
Acknowledgments: We thank Dave Chalmers, Patricia Churchland, Ray Jackendoff, Thomas Metzinger, Graeme Mitchinson, Roger Penrose, David Perrett, Tomaso Poggio, and Richard Stevens. We thank the J.W. Kieckhefer Foundation, the National Institute of Mental Health, the Office of Naval Research and the National Science Foundation.
Attneave F (1961) In defense of homunculi. In: Sensory Communication. Rosenblith WA, ed., pp. 777-782. New York, NY: MIT Press and John Wiley.
Chalmers D (1996) The Conscious Mind: In Search of a Fundamental Theory. New York, NY: Oxford University Press.
Churchland PM, Churchland PS (1998) On the Contrary: Critical Essays, 1987-1997. Cambridge, MA: MIT Press.
Crick F, Koch C (1995) Are we aware of neural activity in primary visual cortex? Nature 375: 121-123.
Crick F, Koch C (1998) Consciousness and neuroscience. Cerebral Cortex 8: 97-107.
Felleman DJ, Van Essen D (1991) Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex 1: 1-47.
Freud S (1915) Das Unbewusste. Int. Zeitschrift Psychoanal. 3(4): 189-203 and 3(5): 257-269.
Hadamard J (1945) The Mathematician's Mind. Princeton, NJ: Princeton University Press.
Jackendoff R (1987) Consciousness and the Computational Mind. Cambridge, MA: MIT Press.
Jackendoff R (1996) How language helps us think. Pragmatics & Cognition 4: 1-34.
Logothetis, NK, Pauls J, Bülthoff HH, Poggio T (1994) View-dependent object recognition by monkeys. Current Biology 4: 401-414.
Logothetis NK, Pauls J, Poggio T (1995) Shape representation in the inferior temporal cortex of monkeys. Current Biology 5: 552-563.
Logothetis NK, Pauls J (1995) Psychophysical and physiological evidence for viewer-centered object representations in the primate. Cerebral Cortex 5: 270-288.
Marr D (1982) Vision. San Francisco, CA: Freeman.
Metzinger T (1995) Einleitung: Das Problem des Bewußtseins. In: Bewußtsein. Metzinger T, ed. Paderborn, Germany.
Milner D, Goodale M (1995) The Visual Brain in Action. Oxford, UK: Oxford University Press.
Pauls J, Bricolo E, Logothetis N (1996) View invariant representations in monkey temporal cortex: Position, scale, and rotational invariance. In: Early Visual Learning. Nayar SK and Poggio T, eds., pp. 9-41. New York: Oxford University Press.
Penrose R (1989) The Emperor's New Mind. Oxford, UK: Oxford University Press.
Penrose R (1997) The Large, the Small and the Human Mind. Cambridge, UK: Cambridge University Press.
Perrett DI, Oram MW, Hietanen JK, Benson PJ (1994) Issues of representation in object vision. In: The Neuropsychology of High-Level Vision, MJ Farah and G Ratcliff, eds., pp. 33-61. Hillsdale: Lawrence Erlbaum.
Perrett DI, Hietanen JK, Oram MW, Benson PJ (1992) Organization and functions of cells responsive to faces in the temporal cortex. Philosophical Transactions of the Royal Society London B 335: 23-30.
Poggio T (1990) A theory of how the brain might work. Cold Spring Harbor Symp. Quant. Biol. 55: 899-910.
Poggio T, Edelman S (1990) A network that learns to recognize three-dimensional objects. Nature 343: 263-266.
Searle JR (1997) The Mystery of Consciousness. New York, NY: The New York Review of Books.
Scalaidhe SPO, Wilson FAW, Goldman-Rakic PS (1997) Areal segregation of face-processing neurons in prefrontal cortex. Science 278: 1135-1138.
Sejnowski TJ, Rosenberg CR (1987) Parallel networks that learn to pronounce English text. Complex Systems 1: 145-168.
Shear J (1997) Explaining Consciousness: The Hard Problem. Cambridge, MA: MIT Press.
Siewert CP (1998) The Significance of Consciousness. Princeton, NJ: Princeton University Press.
Solms M (1997) What is consciousness? Journal of the American Psychoanalytic Association 45: 681-703.
Stevens R (1997) Western phenomenological approaches to the study of conscious experience and their implications. In: Methodologies for the Study of Consciousness: A New Synthesis, J Richardson and M Velmans, eds., pp. 100-123. Kalamazoo: Fetzer Institute.
Young MP, Yamane S (1992) Sparse population coding of faces in the inferotemporal cortex. Science 256: 1327-1331.