Next: Parameter Adaptation Up: Information Maximization in Single Neurons Previous: Model Description

Variational Derivatives and the Learning Rule

In response to each (randomly chosen) synaptic conductance presented to the model Hodgkin-Huxley neuron, the dendritic voltage waveform $V_{\text{dend}}(t)$ settles into a simple periodic limit cycle dictated by the somatic spiking conductances. We assume that the approach to perfectly periodic firing is fast compared to the duration of each stimulus. In the asymptotic limit of long-duration stimuli, the average over the stimulus duration and the average over the (periodic) interspike interval can be used interchangeably--the two averages are equal.

To maximize the information in the firing rate, the peak values of the dendritic calcium and potassium conductances change in response to each stimulus by
\begin{displaymath}\Delta g_i = \eta(t) \Bigl\langle \frac{\delta}{\delta V(t)} \bigl( V(t) - \langle V \rangle \bigr) \, m_i h_i \, (E_i - V) \Bigr\rangle.\end{displaymath} (3)
To simplify the notation, we will write $V(t)$ for $V_{\text{dend}}(t)$ throughout, unless there is the possibility of confusion. Here $\langle m_i h_i \, (E_i - V) \rangle$ is proportional to the average current through the $i$-th conductance during one cycle of periodic firing. The learning rate $\eta(t)$ in eq. 3 sets the slow time scale of parameter adaptation.
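For concreteness, the parameter update in eq. 3 amounts to one gradient step per stimulus. In the toy snippet below, `avg_gradient` is a hypothetical stand-in for the bracketed average in eq. 3 (whose numerical evaluation is the subject of this note), and all numbers are illustrative:

```python
import numpy as np

def update_conductances(g, avg_gradient, eta):
    """One adaptation step per stimulus: g_i <- g_i + eta * <...>_i (cf. eq. 3)."""
    g = np.asarray(g, dtype=float)
    return g + eta * np.asarray(avg_gradient, dtype=float)

# Toy usage: two modulatory conductances (Ca and K); numbers are illustrative.
g = np.array([1.0, 2.0])        # peak conductances
grad = np.array([0.3, -0.1])    # placeholder for the averaged variational derivative
g_new = update_conductances(g, grad, eta=0.01)
```

The learning rate $\eta(t)$ is kept small so that many stimuli are averaged over before the conductances change appreciably.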

This note details how the variational derivatives in eq. 3 and the related parameter adaptation rules are computed numerically. By assuming that the time constants for the modulatory potassium and calcium conductances in the dendritic compartment are voltage-independent, the computational burden is eased significantly. With this simplification, the first-order differential equations for the gating variables $m_i$ and $h_i$ read:
 

\begin{align*}\tau_i \frac{d m_i}{dt} &= m_{\infty,i}(V) - m_i, & \tau_i \frac{d h_i}{dt} &= h_{\infty,i}(V) - h_i, \\ m_{\infty,i}(V) &= \frac{1}{1 + \exp[ -(V - V_{m_i})/s_{m_i} ] }, & h_{\infty,i}(V) &= \frac{1}{1 + \exp[ -(V - V_{h_i})/s_{h_i} ] },\end{align*}
where $s_{m_i}$ is the slope at the inflection point of the activation function, measured in $\text{mV}$, and $V_{m_i}$ is the position of this midpoint in $\text{mV}$. Whereas the slope $s_{m_i}$ is positive, the corresponding inactivation slope $s_{h_i}$ is negative. The Boltzmann function is well suited to numerical computation, since the derivatives of $m_{\infty,i}(V)$ can be expressed algebraically as polynomials in $m_{\infty,i}(V)$. For instance, the first derivative is

\begin{displaymath}\frac{d}{dV} m_{\infty,i}(V) =  \frac{1}{s_{m_i}} \,  m_{\infty,i}(V) \Bigl( 1 - m_{\infty,i}(V)  \Bigr).\end{displaymath}
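This identity is easy to verify numerically. The following sketch (with illustrative midpoint and slope values, not taken from the model) compares the polynomial expression against a finite-difference derivative:

```python
import numpy as np

def m_inf(V, V_half, s):
    """Boltzmann activation with midpoint V_half (mV) and slope factor s (mV)."""
    return 1.0 / (1.0 + np.exp(-(V - V_half) / s))

V_half, s = -40.0, 5.0                      # illustrative values
V = np.linspace(-80.0, 0.0, 1601)           # 0.05 mV grid

# derivative via the polynomial identity: m_inf * (1 - m_inf) / s
analytic = m_inf(V, V_half, s) * (1.0 - m_inf(V, V_half, s)) / s

# independent check: centered finite differences
numeric = np.gradient(m_inf(V, V_half, s), V[1] - V[0])

max_err = np.max(np.abs(analytic - numeric))
```

The maximum of the derivative, $1/(4 s_{m_i})$, occurs at the midpoint $V_{m_i}$, where $m_{\infty,i} = 1/2$.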

By defining the exponential function $e_i(t) = \exp ( - t/ \tau_i)/\tau_i$, we first rewrite the differential equation for $m_{i}(t)$ as an integral equation
\begin{align*}m_{i}(t) & = \int_{-\infty}^t m_{\infty,i}[V(t')] \, e_i(t-t') \, dt', \intertext{or, in the shorthand notation,} m_{i}(t) & = m_{\infty,i}[V(t)] * e_i(t), \notag\end{align*}
where $*$ is the convolution operator.
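The equivalence between the first-order kinetics and the convolution with $e_i(t)$ can be checked numerically. The sketch below uses a toy sinusoidal voltage and illustrative parameters (not the model's values); it integrates the gating equation with an exponential-Euler step and compares the result to a discrete convolution:

```python
import numpy as np

tau = 5.0                      # ms, voltage-independent time constant
dt = 0.01                      # ms
t = np.arange(0.0, 200.0, dt)
V = -55.0 + 10.0 * np.sin(2 * np.pi * t / 20.0)   # toy periodic voltage (mV)

drive = 1.0 / (1.0 + np.exp(-(V + 50.0) / 5.0))   # m_inf[V(t)]

# exponential-Euler integration of tau dm/dt = m_inf - m
m_ode = np.empty_like(t)
m_ode[0] = drive[0]
decay = np.exp(-dt / tau)
for k in range(1, len(t)):
    m_ode[k] = drive[k] + (m_ode[k - 1] - drive[k]) * decay

# discrete convolution with the causal kernel e(t) = exp(-t/tau)/tau
kernel = np.exp(-t / tau) / tau
m_conv = np.convolve(drive, kernel)[: len(t)] * dt

# after an initial transient (the convolution assumes drive = 0 for t < 0),
# the two solutions agree
mask = t > 50.0
max_err = np.max(np.abs(m_ode[mask] - m_conv[mask]))
```

The convolution form is what makes the functional derivatives below tractable: perturbing $V(t)$ at one instant perturbs $m_i(s)$ at all later times $s$ through the kernel $e_i(s-t)$.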

The average activation $\langle m_i \rangle$ is a functional, i.e., a mapping from the space of possible voltage time courses onto the real numbers. The functional derivative of the average activation is defined as

\begin{displaymath}\begin{split}\frac{\delta}{\delta V(t)} \langle m_i \rangle = \lim_{\varepsilon \to 0} \Biggl\{ \frac{ \frac{1}{T} \int_0^T m_{\infty,i}[V(s) + \varepsilon \, \delta(s-t)] * e_i(s) \, ds - \frac{1}{T} \int_0^T m_{\infty,i}[V(s)] * e_i(s) \, ds }{ \varepsilon } \Biggr\},\end{split}\end{displaymath}
where $T$ is the steady-state period of the firing cycle. Computing the average functional derivative of $\langle m_i \rangle$ is particularly simple when the associated time constant is voltage-independent, since $\langle m_i (t) \rangle = \langle m_{\infty,i}[V(t)] \rangle$:
\begin{displaymath}\int_0^{\tau_{\text{duration}}} \frac{\delta}{\delta V(t)} \langle m_i \rangle \, dt =  \left\langle \frac{d}{dV} m_{\infty,i}[V(t)] \right\rangle.\end{displaymath}
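The simplification rests on the fact that convolution with the normalized kernel $e_i(t)$ leaves the period average unchanged, so $\langle m_i(t) \rangle = \langle m_{\infty,i}[V(t)] \rangle$ once $V(t)$ is periodic. A minimal numerical check, with an illustrative sinusoidal voltage standing in for the true limit cycle:

```python
import numpy as np

tau, dt, period = 5.0, 0.01, 20.0           # ms; illustrative values
t = np.arange(0.0, 20 * period, dt)         # many periods, so transients decay
V = -55.0 + 10.0 * np.sin(2 * np.pi * t / period)   # toy periodic voltage (mV)
m_inf = 1.0 / (1.0 + np.exp(-(V + 50.0) / 5.0))

# exponential-Euler integration of tau dm/dt = m_inf - m
m = np.empty_like(t)
m[0] = m_inf[0]
decay = np.exp(-dt / tau)
for k in range(1, len(t)):
    m[k] = m_inf[k] + (m[k - 1] - m_inf[k]) * decay

# averages over the last full period agree once the firing is periodic
n = int(round(period / dt))
avg_m, avg_m_inf = m[-n:].mean(), m_inf[-n:].mean()
```

Integrating $\tau_i \, dm_i/dt = m_{\infty,i} - m_i$ over one period gives zero on the left-hand side (by periodicity), which is exactly why the two averages coincide.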

Computing the variational derivative of products such as $\langle m_i[V(t)] \, h_i[V(t)] \rangle$ requires introducing additional variables $m_i^{(n)}(t)$ and $h_i^{(n)}(t)$ that obey the differential equation
\begin{displaymath}\tau_i \frac{d m^{(n)}_i}{dt} = m^n_{\infty,i}(V) - m^{(n)}_i,\end{displaymath}
such that $m_i^{(n)}$ relaxes towards the $n$-th power of the steady-state activation function; $h_i^{(n)}$ is defined analogously. The partial derivatives with respect to the peak conductance, the slope, and the midpoint voltage require computing $m^{(n)}_i$ and $h^{(n)}_i$ for $n=2$ and $n=3$.
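A brief sketch of how these auxiliary variables might be integrated numerically (toy voltage trace and illustrative parameters, not the model's values). Note that $m_i^{(2)}(t) \neq [m_i^{(1)}(t)]^2$ in general, which is why the extra state variables are needed:

```python
import numpy as np

tau, dt = 5.0, 0.01                          # ms; illustrative values
t = np.arange(0.0, 400.0, dt)
V = -55.0 + 10.0 * np.sin(2 * np.pi * t / 20.0)      # toy periodic voltage (mV)
m_inf = 1.0 / (1.0 + np.exp(-(V + 50.0) / 5.0))

decay = np.exp(-dt / tau)
m_n = {}
for n in (1, 2, 3):
    drive = m_inf ** n                       # n-th power drives m^(n)
    x = np.empty_like(t)
    x[0] = drive[0]
    for k in range(1, len(t)):               # exponential-Euler step
        x[k] = drive[k] + (x[k - 1] - drive[k]) * decay
    m_n[n] = x

# m^(2) is NOT the square of m^(1): the low-pass filter and the square
# do not commute, so the auxiliary variables carry genuinely new information.
gap = np.max(np.abs(m_n[2][-2000:] - m_n[1][-2000:] ** 2))
```

All three variables share the same time constant $\tau_i$, so the extra cost per conductance is just two additional first-order equations.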

 In terms of these new variables, the functional derivative of $\langle m_i (t) h_i (t) (E_i - V(t))\rangle$ in eq. 3 can be written as:

\begin{displaymath}\begin{split}\biggl\langle \frac{ \delta }{\delta V(t)} \, m_i h_i \, \bigl(E_i - V\bigr) \biggr\rangle = \biggl\langle \biggl[ \frac{m_i - m_i^{(2)}}{s_{m_i}} \, h_i + m_i \, \frac{h_i - h_i^{(2)}}{s_{h_i}} \biggr] \times \Bigl[E_i - V(t) \Bigr] - m_i h_i \biggr\rangle \end{split}\end{displaymath}
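As a hedged numerical sketch of this average: using the Boltzmann identity above, each derivative term becomes $(m_i - m_i^{(2)})/s_{m_i}$ (and likewise for $h_i$) after convolution, so the period average can be assembled directly from simulated traces. All voltage traces and parameter values below are illustrative stand-ins, not the model's actual conductances:

```python
import numpy as np

def lowpass(drive, tau, dt):
    """Exponential-Euler solution of tau dx/dt = drive - x."""
    x = np.empty_like(drive)
    x[0] = drive[0]
    lam = np.exp(-dt / tau)
    for k in range(1, len(drive)):
        x[k] = drive[k] + (x[k - 1] - drive[k]) * lam
    return x

def boltzmann(V, V_half, s):
    return 1.0 / (1.0 + np.exp(-(V - V_half) / s))

dt = 0.01                                    # ms
t = np.arange(0.0, 400.0, dt)
V = -55.0 + 10.0 * np.sin(2 * np.pi * t / 20.0)   # toy periodic voltage (mV)

tau, E_i = 5.0, -90.0                        # e.g. a potassium reversal potential
V_m, s_m = -50.0, 5.0                        # activation midpoint / positive slope
V_h, s_h = -60.0, -6.0                       # inactivation: negative slope

m_ss, h_ss = boltzmann(V, V_m, s_m), boltzmann(V, V_h, s_h)
m, h = lowpass(m_ss, tau, dt), lowpass(h_ss, tau, dt)
m2, h2 = lowpass(m_ss**2, tau, dt), lowpass(h_ss**2, tau, dt)

# average over the last full period (20 ms), after transients have decayed
P = slice(-2000, None)
bracket = (m[P] - m2[P]) / s_m * h[P] + m[P] * (h[P] - h2[P]) / s_h
avg_deriv = np.mean(bracket * (E_i - V[P]) - m[P] * h[P])
```

The final $-m_i h_i$ term comes from differentiating the driving-force factor $E_i - V(t)$ itself, while the bracketed terms come from differentiating the gating variables.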

Martin Stemmler

1/14/1998