
Information in the Firing Rate

Information is not defined in the abstract, but always relative to a given, fixed environment. (This
environment goes by the technical name of the ``statistical ensemble of inputs.'')
Hence, we should and will make no statements about how poorly or well the neuron
transmits information about any stimulus ensemble other than the one used for learning.

When the number of (stimulus, firing rate) pairs in the data is limited, statistical biases [Treves and Panzeri (1995); Panzeri and Treves (1996); Strong et al. (1997)] lead to overestimating the mutual information. To minimize
these biases in estimating the information-theoretic quantities, we take 125,000 samples of the stimuli and the associated firing rates produced by the model neuron, with the conductance parameters frozen at the final values given in the section Adapted Parameter Values. Bias correction terms are computed as in Panzeri and Treves (1996).
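The first-order correction can be illustrated with a small numerical sketch. The following Python/NumPy fragment is only a rough stand-in for the published procedure: it counts occupied response bins naively, whereas Panzeri and Treves (1996) use a Bayesian estimate of the number of relevant bins, and the function name panzeri_treves_bias is hypothetical.

\begin{verbatim}
import numpy as np

def panzeri_treves_bias(x_bin, n, n_trials):
    """First-order sampling bias (in bits) of the plug-in mutual information
    between a discretized stimulus x_bin and the spike count n."""
    R_all = np.unique(n).size                    # response bins occupied overall
    R_stim = [np.unique(n[x_bin == s]).size      # response bins occupied per stimulus
              for s in np.unique(x_bin)]
    # Bias ~ [ sum_s (R_s - 1) - (R - 1) ] / (2 N ln 2); subtracting it from the
    # plug-in estimate removes spurious information due to limited sampling.
    return (sum(R_stim) - len(R_stim) - (R_all - 1)) / (2.0 * n_trials * np.log(2))
\end{verbatim}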
 
 We estimate three quantities:
 

1)
The entropy rate
\begin{displaymath}R(n) = - \frac{1}{T} \sum_{n=0}^\infty \log[p(n)]\, p(n),\end{displaymath}

where $p(n)$ is the normalized frequency of observing $n$ spikes within the stimulus interval $T$. This measure reflects how broad the spike count distribution is.

2)
The mutual information rate between the count $n$ and the stimulus $x$:
\begin{align*}\frac{1}{T}\, {\cal{I}}(n; x) & = R(n) - R(n\vert x)\\
& = \frac{1}{T}\left\{ - \sum_{n=0}^\infty \log[p(n)]\, p(n) + \sum_x \left[\sum_{n=0}^\infty \log[p(n\vert x)]\, p(n\vert x)\right] p(x)\right\}\end{align*}
 
where each stimulus is assigned to one of 256 uniform bins spanning $\pm 4 \sigma$ around the mean of the Gaussian distribution of synaptic conductance stimuli.
3)
The lower bound on the mutual information given by eq. 2 of the text, normalized to a rate:
\begin{displaymath}\frac{1}{T}\, {\cal{I}}_{\text{LB}} (n;x) = \frac{1}{T}\left\{ H(n) - \int \ln \bigl( \sigma_n(x) \bigr) \, p(x) \, dx - \ln (\sqrt{2 \pi e})\right\},\end{displaymath}
where $H(n) = T\, R(n)$ is the entropy of the spike count and $\sigma_n(x)$ is the standard deviation of the spike count elicited by the stimulus $x$; a numerical sketch of all three estimators follows this list.
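For concreteness, the following sketch shows how all three quantities can be estimated from paired samples of stimuli and spike counts. The Python/NumPy setting, the function names, and the treatment of empty or degenerate stimulus bins are assumptions of this illustration; only the 256-bin discretization over $\pm 4 \sigma$ is taken from the description above.

\begin{verbatim}
import numpy as np

def plugin_entropy(counts):
    """Plug-in (maximum-likelihood) entropy, in bits, of a spike-count sample."""
    p = np.bincount(counts) / counts.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def rate_information(x, n, T, n_bins=256):
    """Entropy rate, mutual information rate, and Gaussian lower-bound rate
    (all in bits/sec) from stimuli x and spike counts n observed in windows
    of length T seconds."""
    H_n = plugin_entropy(n)                      # spike count entropy H(n)

    # Discretize the stimulus into uniform bins spanning +/- 4 sigma of its mean.
    edges = np.linspace(x.mean() - 4 * x.std(), x.mean() + 4 * x.std(), n_bins + 1)
    x_bin = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)

    H_noise = 0.0    # conditional (noise) entropy  <H(n|x)>_x
    H_gauss = 0.0    # Gaussian surrogate  <log2(sqrt(2 pi e) sigma_n(x))>_x
    for b in range(n_bins):
        sel = n[x_bin == b]
        if sel.size == 0:
            continue
        p_x = sel.size / n.size
        H_noise += p_x * plugin_entropy(sel)
        if sel.var() > 0:                        # skip bins with a single count value
            H_gauss += p_x * 0.5 * np.log2(2 * np.pi * np.e * sel.var())

    return H_n / T, (H_n - H_noise) / T, (H_n - H_gauss) / T
\end{verbatim}

Applied to the 125,000 samples described above, the raw plug-in mutual information returned by such a routine would still have to be reduced by the sampling bias correction sketched earlier.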
 
 
Table 7: Entropy and mutual information in the spike count before and after the model neuron ``learns'' to produce a uniform distribution of firing rates.
 
                      | Mutual Information ${\cal{I}}$ Rate (bits/sec) | MI Lower Bound ${\cal{I}}_{\text{LB}}$ Rate (bits/sec) | Entropy Rate (bits/sec)
    Before Adaptation |  4.70 |  1.25 |  8.95
    After Adaptation  | 11.85 |  8.30 | 17.35
    Difference        | +7.15 | +7.05 | +8.40
 
 

${\cal{I}}_{\text{LB}}$ is a lower bound precisely because it substitutes a Gaussian for the conditional spike count distribution; since the actual spike count distributions are discrete and heavily skewed, the bound can be fairly weak.  Note that most of the increase in the entropy rate (+8.40 bits/sec) is mirrored by an increase in the mutual information rate (+7.15 bits/sec), indicating that the amplification of the noise by the modulatory calcium and potassium conductances in the dendritic compartment is marginal.
 

Table 8: Entropy and mutual information in the spike count before and after the model neuron ``learns'' to produce an exponential distribution of firing rates.
 
                      | Mutual Information ${\cal{I}}$ Rate (bits/sec) | MI Lower Bound ${\cal{I}}_{\text{LB}}$ Rate (bits/sec) | Entropy Rate (bits/sec)
    Before Adaptation |  4.70 |  1.25 |  8.95
    After Adaptation  | 11.25 |  2.16 | 17.35
    Difference        | +6.55 | +0.91 | +8.40
 
 

 


Martin Stemmler

1998-08-16