periodically active (myelinated) axons, and so on. Thus theoretical representations of
neurons became ever more intricate.
By the early 1970s, advances in electron microscopy and electrophysiology made
it evident to some neuroscientists that the dynamics of real neurons were far more
complex than the McCulloch–Pitts model implied. Thus in 1972 neurologist Steven
Waxman proposed the concept of a ‘multiplex neuron’ in which patches of membrane
near the branching regions of incoming (dendritic) and outgoing (axonal) processes
of the neuron are viewed as localities of low safety factor, where active nerve
impulses can become extinguished (Waxman, 1972).
In other words, an individual neuron might be able to perform logical computations
at its branching regions, making it more like an integrated circuit chip than a single
switch. As reasonable as it seemed to some of us at the time (Scott, 1975), this
expanded view of the neuron’s computational power was far from being universally
accepted. Jerry Lettvin, a noted electrophysiologist at MIT, told me in the spring of
1978 that when he and his colleagues reported blockages of nerve impulses in the
optic nerves of cats and speculated on the possibilities of this phenomenon for visual
information processing (Chung et al., 1970), his NIH funding was cut off. ‘When you are
ready to start doing science again,’ he was told, ‘we are ready to resume supporting you.’
Why were such seemingly reasonable suggestions so widely ignored in the 1970s?
Although one can only speculate, four possibilities come to mind. (1) Admitting to
increased computational power of the individual neuron would tend to undercut the
validity of the extensive neural network studies that were based on the
McCulloch–Pitts model. (2) Many of the studies in this area were being done in the
Soviet Union (Khodorov, 1974), and Western scientists (particularly in the US) tended
to ignore or disparage Soviet science. (3) The reputations of some senior scientists
were invested in simpler models of the neuron. (4) There was a widespread and
uncritical belief in the concept of ‘all-or-nothing’ propagation of a nerve impulse,
which left no room for the concept of impulse failure at branching regions of the
axonal or dendritic trees.
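The ‘all-or-nothing’ picture defended in the 1970s treats the whole neuron as a single threshold switch in the McCulloch–Pitts sense. A minimal sketch of that unit (the function name and weights here are illustrative, not from the sources cited) shows how little computation it allows compared with logic performed at many branch points:

```python
# Illustrative McCulloch-Pitts unit: the neuron as one threshold switch.
# Weighted binary inputs are summed; the unit fires iff the sum reaches
# the threshold. All richer dynamics (impulse failure at branches, etc.)
# are absent by construction.
def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With unit weights and threshold 2, two inputs implement logical AND;
# lowering the threshold to 1 gives logical OR.
```

On the multiplex view, each low-safety-factor branch point can act like such a gate, so a single neuron resembles a network of these units rather than one of them.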
One of the encouraging aspects of science, however, is that the truth will out in
time. Like old soldiers, the old guard fades away as evidence for the new paradigm
accumulates, until what was once considered flagrant speculation becomes widely
accepted as established knowledge. So it was with our collective perceptions of the
neuron. Towards the end of the 1980s, as the extensive list of references in Biophysics
of Computation shows, the experimental, theoretical and numerical evidence for
impressive computational power of an individual neuron was compelling. Former
heresy has become common sense.
For young scientists who are interested in understanding the dynamics of the
human brain this change in collective attitude is of profound significance, to which
Koch’s book provides an ideal introduction. Written in a precise yet easy style, the 21
chapters of Biophysics of Computation begin at the beginning, introducing the reader
to elementary electrical properties of membrane patches, linear cable theory and the
properties of passive dendritic trees. These introductory chapters are followed by two
on the properties of synapses and the various ways that synapses can interact to
perform logic on passive dendritic trees. Then the Hodgkin–Huxley formulation for
impulse propagation on a single fibre is discussed in detail, and various simplifying
models are presented. As a basis for the Hodgkin–Huxley description the present