
Edited by: David Hansel, University of Paris, France

Reviewed by: Germán Mato, Centro Atomico Bariloche, Argentina; Gianluigi Mongillo, Paris Descartes University, France

*Correspondence: Daniel Soudry, Department of Statistics, Center for Theoretical Neuroscience, Columbia University, 1255 Amsterdam Avenue, New York, NY 10027, USA e-mail:

This article was submitted to the journal Frontiers in Computational Neuroscience.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Long term temporal correlations frequently appear at many levels of neural activity. We show that when such correlations appear in isolated neurons, they indicate the existence of slow underlying processes and lead to explicit conditions on the dynamics of these processes. Moreover, although these slow processes can potentially store information for long times, we demonstrate that this does not imply that the neuron possesses a long memory of its input, even if these processes are bidirectionally coupled with neuronal response. We derive these results for a broad class of biophysical neuron models, and then fit a specific model to recent experiments. The model reproduces the experimental results, exhibiting long term (days-long) correlations due to the interaction between slow variables and internal fluctuations. However, its memory of the input decays on a timescale of minutes. We suggest experiments to test these predictions directly.

Long term temporal correlations, or “f^{−α} statistics” (Keshner, 1982), frequently appear at many levels of neural activity; for example, f^{−α} statistics appear in human cognition (Gilden et al., 1995).

Cortical neurons indeed contain processes taking place on multiple timescales. Many types of ion channels are known, with a large range of kinetic rates (Channelpedia, 2011). However, such slow rates have typically been characterized only above ~10^{−1} Hz, possibly due to the limited duration of the experiments, which involve intracellular recording.

This raises the question – would the neuron still have long memory on timescales longer than 10 s? Generally, the answer may depend on the type of stimulus used. For example, certain ion channels may “remember” non-sparse inputs longer than sparse inputs (Soudry and Meir, 2010).

Therefore, we focus on the relation between the stimulation times t_{m} and Action Potential (AP) occurrences Y_{m}. Here Y_{m} = 1 if an AP occurred within a short window t_{th} following the (sparse) stimulus, with T_{m} ≫ τ_{AP}.

We find general conditions under which a neuron can generate f^{−α} statistics in its spiking activity, and show that this does not imply that a neuron has long memory of its history. Specifically, in order to generate f^{−α} statistics, slow processes should span a wide range of timescales, with slower processes having a higher level of internally generated fluctuations (e.g., being more “noisy” due to lower ion channel numbers). However, in a minimal model that generates this behavior, slow processes do not retain memory of the input fluctuations beyond a finite “short” timescale, even though they are affected by the membrane's voltage. A main reason for this is that the “fastest adaptation process” in the model adjusts the neuron's response in such a way that any perturbation in the response is canceled out before slower processes are affected.

We fit the minimal model to the days-long experiments in Gal et al. (2010), in which stimulated neurons exhibited f^{−α} statistics, responding in a complex and irregular manner from seconds to days. The synaptic isolation of the neurons in the network, and their low cross-correlations, indicate that these f^{−α} fluctuations are internally generated in the neurons (Appendix D). We are able to reproduce their results, while the model's memory of the input decays within ~10^{2} s.

The remainder of the paper is organized as follows. We begin in section 2.1 by presenting the basic setup. Then, in section 2.2, we present the general framework for biophysical modeling of neurons. Working in this framework, in section 2.3 we recall the mathematical formalism from Soudry and Meir (2012). After defining f^{−α} behavior in section 3.1, we provide in section 3.2 both general and “minimal” conditions for a neuron to display such scaling. In section 3.3 we consider the implications of the model for the input–output relation of the neuron, given general stationary inputs. In section 3.4 we demonstrate this numerically in a specific biophysical model which is fitted to the experimental results of Gal et al. (2010).

In our notation, 〈·〉 is an ensemble average, a boldfaced lowercase letter x = (x_{1}, …, x_{n})^{⊤} is a column vector [where (·)^{⊤} denotes transpose], and a boldfaced capital letter A is a matrix (with components A_{mn}).

As in Soudry and Meir (2012), the neuron is stimulated by short current pulses at times t_{m} with amplitude I_{0}. The intervals between the stimulation times are denoted T_{m} ≜ t_{m + 1} − t_{m}. We assume sparse stimulation, T_{m} ≫ τ_{AP}, with τ_{AP} being the timescale of an AP. The response is the binary sequence Y_{m}, where Y_{m} = 1 if an AP occurred immediately after the m-th stimulus and Y_{m} = 0 otherwise. Note that Y_{m} is not generally the same as the common count process generated from the APs by binning them into equally sized bins (Appendix B.1) – unless T_{m} is constant and equal to the bin size.

We assume both Y_{m} and T_{m} are wide-sense stationary (Papoulis and Pillai, 2002). We define p_{*} ≜ 〈Y_{m}〉 to be the mean probability to generate an AP and T_{*} ≜ 〈T_{m}〉 as the mean stimulation period. Furthermore, we denote y_{m} ≜ Y_{m} − p_{*} and T̂_{m} ≜ T_{m} − T_{*} as the perturbations of Y_{m} and T_{m} from their means. An important tool in quantifying the statistics of signals is the power spectral density (PSD), namely the Fourier transform of the auto-covariance (Papoulis and Pillai, 2002):

with 0 ≤ f ≤ 1/2 a dimensionless frequency, which corresponds to f·T^{−1}_{*} in Hertz frequency units. Note that this PSD is proportional to the PSD of the common binned AP count process (Equation 70), under periodical stimulus and for low frequencies – which is the regime under which we will investigate the PSD (similarly to the experiment of Gal et al., 2010).

and the cross-PSD
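For illustration (this sketch is ours, not part of the original analysis), the following Python snippet estimates the PSD of Equation (1) for a memoryless response, in which each stimulus evokes an AP independently with probability p_{*}. In that case the PSD should be flat at the level p_{*}(1 − p_{*}), the white-noise floor denoted σ^{2}_{e} later in the text. The function name `psd_bartlett` and all numerical values are illustrative assumptions.

```python
import numpy as np

def psd_bartlett(y, seg_len):
    """Average periodograms over non-overlapping segments (Bartlett's method).

    Frequencies are dimensionless (cycles per stimulus); multiply by 1/T_*
    to convert to Hertz, as in the text.
    """
    n_seg = len(y) // seg_len
    segs = y[:n_seg * seg_len].reshape(n_seg, seg_len)
    # Periodogram of each segment: |FFT|^2 / seg_len
    per = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seg_len
    freqs = np.fft.rfftfreq(seg_len)  # 0 <= f <= 1/2
    return freqs, per.mean(axis=0)

rng = np.random.default_rng(0)
p_star = 0.4
Y = (rng.random(2 ** 16) < p_star).astype(float)   # memoryless responses Y_m
y = Y - Y.mean()                                   # perturbations y_m
freqs, S = psd_bartlett(y, 256)
# For an uncorrelated response the PSD is flat at p_*(1 - p_*)
print(S[1:].mean(), p_star * (1 - p_star))
```

Any excess power at low frequencies relative to this flat floor is the signature of slow underlying processes, which is what the analysis below quantifies.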

We model the neuron in the standard framework of biophysical neural models – i.e., Conductance Based Models (CBMs). However, rather than focusing only on a specific model, we establish general results for a broad class of models. In this framework, the voltage dynamics of an isopotential neuron are determined by ion channels – protein pores which change conformation stochastically, with voltage-dependent rates (Hille, 2001):

with s ∈ ℝ^{M} being the vector of slow variables (e.g., slow sodium inactivation; Chandler and Meves, 1970) and ξ_{r/s} white-noise processes (with zero mean and unit variance). Also, the matrices A_{r/s} and the vectors b_{r/s} can be written explicitly using the kinetic rates of the ion channels, while the matrices B_{r/s} can be written using those rates in addition to the ion channel numbers. Lastly, we denote

as the diffusion matrices (Orio and Soudry, 2012)^{1}.

PSD-based estimators are central tools in quantifying long term correlations (Robinson,

Typically, CBMs (Equations 4–6) contain many unknown parameters and are highly non-linear. It is therefore quite hard to fit them using a purely simulation-based approach, especially over long timescales, where simulations are long and models have more unknown parameters. We therefore developed a reduction method that simplifies analysis and enables fitting of such models. We refer the reader to Soudry and Meir (2012) for the full details.

In this method, we semi-analytically^{2} reduce the full CBM to a simplified, linearized description of the neuronal response:

where e_{m} is a white-noise signal with zero mean and variance σ^{2}_{e} ≜ p_{*} − p^{2}_{*} (recall p_{*} is the mean probability to generate an AP), and where x_{*} (the excitability fixed point) and the w_{j} (an “effective weight” of component s_{j}) can be found self-consistently together with p_{*} as a function of T_{*} (Soudry and Meir, 2012). From this reduction we can compute the PSD S_{Y}(f). Below, the subscripts +, − and 0 denote the averages of the quantity s during an AP response, a failed AP response and rest, respectively. Also, we denote

as the steady state mean value of s [this would be s(p_{*}, T_{*}) in Equation 7 in Soudry and Meir, 2012], where the starred quantities are the respective steady state means. Additionally, we denote (definition below Equation 12 in Soudry and Meir, 2012)

as a “feedback” vector (see Figure

as the “closed loop transfer function” (including the effect of the feedback). In the case of periodic stimulation (T̂_{m} = 0) we obtain (Soudry and Meir, 2012)

Though Equation (10) relies on two simplifying assumptions, extensive numerical simulations (Soudry and Meir, 2012) indicate that it is quite accurate.

In the neuron, the slow excitability variables

with

being the “open loop” version of _{c} (

with ^{o}_{Y} (_{Y} (_{Y} (

and κ (

Note that ^{3}_{Y} (^{o}_{Y} (

In order to simplify the analysis, we decompose the vector expressions in Equations (13, 14) into partial fractions.

If A_{*} is diagonalizable, then we can write Equation (13) as (Appendix A.1)

where the poles λ_{k} are the inverse timescales of the slow variables (the eigenvalues of A_{*}), arranged from large to small according to their magnitudes (0 < |λ_{M}| < |λ_{M − 1}| < … < |λ_{1}|) and

being the amplitudes of these poles (given by the components of the eigendecomposition of A_{*}; they simplify considerably when A_{*} is diagonal). Note that ∀k: Re[λ_{k}] < 0 (from the properties of A_{*}).

Using a similar derivation for κ(f), we obtain

with d_{k} being the corresponding amplitudes, which again simplify when A_{*} is diagonal.

For concreteness, we demonstrate our results on a simple model in which A_{*} is a diagonal matrix and, as a result, the diffusion matrix (which depends on A_{*}) is also diagonal. In this “diagonal” model, all the components of

∀k: σ_{s,k}(V, s_{k}) = [(δ_{k}(V)(1 − s_{k}) + γ_{k}(V)s_{k}) N^{−1}_{s,k}]^{1/2}, and N_{s,k} are the number of slow ion channels of type k. Below, γ_{+,k}, γ_{−,k} and γ_{0,k} denote the averages of the kinetic rate γ_{k}(V) during an AP response, a failed AP response and rest, respectively, and

is the average of the kinetic rate γ_{k}(V).

with zero on all other (non-diagonal) components and

Therefore, in Equations (15, 16) we have,

Importantly, by tuning the parameters δ_{k}(V), γ_{k}(V), N_{s,k} and w_{k} we seem to have complete freedom in determining λ_{k}, c_{k} and d_{k} (Equations 19–21). This, in turn, would give complete freedom in tuning S^{o}_{Y}(f).

The only caveat in the previous argument is that in non-diagonal models λ_{k} can be complex, but not in a diagonal model, since the kinetic rates γ_{k}(V) and δ_{k}(V) are real and positive. Complex poles (with Im[λ_{k}] > 0) always come in conjugate pairs. These pairs behave asymptotically (i.e., for 2πf ≫ |λ_{k}| or 2πf ≪ |λ_{k}|) very similarly to two real poles, with an additional “resonance” (either a bump or depression) in a narrow range in the vicinity of these poles (i.e., 2πf ≈ |λ_{k}|) (see Appendix A.2, or Oppenheim et al., 1996).

As observed in Gal et al. (2010), in all neurons^{4} the response was irregular, exhibiting “f^{−α} noise.” This is quantified using the Power Spectral Density (PSD; Papoulis and Pillai, 2002): if Y_{m} is an “f^{−α} noise signal” then its PSD (Equation 10) has an f^{−α} shape

where the PSD is defined here as in Equation (1). As is usually the case for f^{−α} phenomena, Equation (22) holds only in a certain range f_{min} ≤ f ≤ f_{max}, with 0 < α ≤ 2. Note also that if α > 1, then f_{min} > 0 necessarily^{5}. f^{−α} behavior is considered interesting due to its “scale-free” properties, which can sometimes indicate a “long memory,” as explained in the introduction. Therefore, it is interesting to ask the following questions:
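To make Equation (22) concrete, the following numerical sketch (ours, not part of the original analysis) synthesizes an f^{−α} signal by shaping white noise in the Fourier domain, then verifies the exponent by fitting the log–log slope of the averaged periodogram. The chosen α, signal length, and fitting band are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, n_trials = 1.4, 2 ** 14, 50
freqs = np.fft.rfftfreq(n)                 # dimensionless frequencies, 0..1/2
shape = np.zeros_like(freqs)
shape[1:] = freqs[1:] ** (-alpha / 2)      # amplitude shaping; the DC bin is
                                           # dropped, enforcing f_min > 0
# Average periodograms over independent realizations
S = np.zeros_like(freqs)
for _ in range(n_trials):
    w = rng.normal(size=n)
    x = np.fft.irfft(np.fft.rfft(w) * shape, n)
    S += np.abs(np.fft.rfft(x)) ** 2 / n
S /= n_trials
# Fit the log-log slope well inside the scaling range
band = (freqs > 1e-3) & (freqs < 1e-1)
slope = np.polyfit(np.log(freqs[band]), np.log(S[band]), 1)[0]
print(slope)
```

The fitted slope comes out close to −α, and the construction only makes sense over a finite band, mirroring the f_{min} ≤ f ≤ f_{max} restriction in the text.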

What is the biophysical origin of the f^{−α} behavior?

Does this f^{−α} behavior imply that the neuron “remembers” its history on very long timescales (hours and days)?

We aim to answer the first question in section 3.2, focusing on the case of periodical stimulation (T_{m} = T_{*}), as in Gal et al. (2010). We address the second question in section 3.3, for general stationary inputs T_{m}. Finally, in section 3.4.2 we fit a specific CBM (which is an extension of a previous CBM) so it adheres to this set of minimal constraints. We numerically reproduce the experimental results of Gal et al. (2010).

As we explained in the introduction, neurons contain a large variety of processes operating on slow timescales. These processes are, in many cases, not well characterized, or contain unknown parameters. Therefore, it is hard to model the behavior of the neuron on slow timescales with a CBM using simulation alone. With so many unknowns, an exhaustive parameter search is unfeasible^{6}.

However, even if a specific model could be found to reproduce the experimental results, it would still be unclear whether this would be a “useful” model – one which can be used to infer the biophysical properties of the neuron, or its response to untested inputs. The first problem is that CBMs are highly degenerate, with different parameter values generating similar behaviors^{7}.

In order to address the first problem, initially, in section 3.2.1, we analyze Equation (10) and attempt to answer a more general question – what class of CBM models can generate the experimental results? We find “rather general” sufficient conditions – i.e., conditions which, given a few assumptions, also become necessary. Next, in section 3.2.2, we aim to find a “minimal” set of constraints on a CBM to fulfill these conditions. Qualitatively, these conditions indicate that, in order to reproduce the experimental results, a general CBM must:

Include only a finite number of ion channels of each type (implying a stochastic model).

Include a few slow processes with timescales “covering” the range of timescales over which S_{Y}(f) ∝ f^{−α} is observed.

Obey a certain scaling relation (with an exponent of 1−α), implying that slower processes are more “noisy.”

More detailed explanations of these conditions, and a concrete example, are provided in the following two subsections.

In this section we derive general conditions on the parameters of a CBM (section 2.2) so it can generate robust f^{−α} statistics in S_{Y}(f), given periodic stimulation with T_{m} = T_{*} ≫ τ_{AP} (as in Gal et al., 2010).

This analysis is based on the decomposition of the PSD S_{Y}(f) into S^{o}_{Y}(f) and the feedback factor |κ(f)|^{2}. Recall that S_{Y}(f) ∝ f^{−α} is robustly observed for all stimulation parameters – even when p_{*} is near 0 or 1 (see section 3.1). Note that one can arbitrarily vary p_{*} by changing the stimulation parameters (such as I_{0} or T_{*}). It is straightforward to show that when p_{*} → 0 or p_{*} → 1, the effect of feedback is negligible^{8}, so S_{Y}(f) ≈ S^{o}_{Y}(f), and S^{o}_{Y}(f) alone must generate the f^{−α} shape. For this reason, and for the sake of analytical simplicity, we first develop general conditions so that S^{o}_{Y}(f) ∝ f^{−α}, and later we discuss the effects of the feedback κ(f).

Note from Equation (15) that with a single pole, S^{o}_{Y}(f) can have an exact f^{−α} shape only if α = 0 or 2. However, these values are far from what was measured experimentally (Equation 42). Therefore, S^{o}_{Y}(f) ∝ f^{−α} can be generated exactly only in some limit (in which the number of poles diverges). Specifically, if 2πf ≫ |λ_{1}|, then S^{o}_{Y}(f) − p_{*}σ^{2}_{e} ∝ f^{−2}. Additionally, if 2πf ≪ |λ_{M}|, we have S^{o}_{Y}(f) ≈ const. Therefore, S^{o}_{Y}(f) can follow f^{−α} with 0 < α < 2 only for |λ_{M}| < 2πf < |λ_{1}|.

Next, we explain when this becomes possible. For simplicity, assume that in Equation (15) the white-noise term p_{*}σ^{2}_{e} is negligible and all the poles are real (the effect of complex poles is discussed below). We define the following pole density

where δ(·) is Dirac's delta function. Using Equations (15 and 23) we obtain

For |λ_{M}| ≪ 2πf ≪ |λ_{1}| and 0 < α < 2, Equation (24) becomes

if and

in the range |λ_{1}| > |λ| > |λ_{M}|, with c_{0} a constant, so that S^{o}_{Y}(f) ∝ f^{−α}.

Several comments are in order at this point.

It was previously known that, in a linear system, an f^{−α} PSD can be generated using a similarly scaled sum of real poles (Keshner, 1982).

Formally, Equation (26) can be exact only in the continuum limit, where the number of poles is infinite and they are closely packed. However, in practice, Equation (25) remains a rather accurate approximation even if the poles are few and well separated.
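This claim can be checked numerically. The sketch below (an illustrative Keshner-style construction under our own parameter choices, not the paper's exact equations) sums Lorentzians whose poles are log-uniformly spaced, λ_{k} ∝ ϵ^{k}, with per-pole coefficients c_{k} ∝ |λ_{k}|^{2−α} implementing the |λ|^{−α} weight density of Equation (26), and then fits the resulting log–log slope:

```python
import numpy as np

alpha = 1.4
# Poles spread log-uniformly: lambda_k = 2*pi*f_k, with f_k covering
# 1e-8..1 Hz at two poles per decade (epsilon = 10**-0.5)
f_k = 10.0 ** (-0.5 * np.arange(17))
lam = 2 * np.pi * f_k
# Coefficients c_k ∝ |lambda_k|^(1-alpha) * Delta(lambda_k) ∝ |lambda_k|^(2-alpha),
# since the pole spacing Delta(lambda_k) is itself proportional to lambda_k
c = lam ** (2 - alpha)
f = np.logspace(-6, -2, 200)                     # evaluate well inside the pole range
S = np.array([np.sum(c / (lam ** 2 + (2 * np.pi * fi) ** 2)) for fi in f])
slope = np.polyfit(np.log(f), np.log(S), 1)[0]
print(slope)
```

Even with only two poles per decade, the fitted slope is close to −α, with only a small ripple around the ideal power law.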

We have assumed that all the poles are real. What happens if some of the poles are complex? Recall (section 2.3.4) that complex poles behave like pairs of real poles, up to narrow “resonances” in S^{o}_{Y}(f) near those poles, so the analysis above carries over.

Note that so far we have discussed only S^{o}_{Y}(f), while in the experiment S_{Y}(f) is measured. S_{Y}(f) also follows f^{−α} in the range |λ_{M}| ≪ 2πf ≪ |λ_{1}| if, in that range, either (1) the magnitude of κ(f) is approximately constant, so S_{Y}(f) ∝ S^{o}_{Y}(f), or (2) the variations of |κ(f)|^{2} are small relative to those of S^{o}_{Y}(f).

Generating an f^{−α} PSD using a finite number of poles – a graphic description. S^{o}_{Y}(f) ∝ f^{−α} (blue) can be approximated (on a log–log scale) in two distinct ways:

In the previous section we found general conditions under which Equation (13) gives S^{o}_{Y}(f) ∝ f^{−α}. In this section, our aim is to generate S^{o}_{Y}(f) ∝ f^{−α} over f_{min} < f < f_{max} in a minimal model, in which A_{*} is diagonal (Equation 18). From Equation (26), we know that the |λ_{k}| must “cover” the frequency range f_{min} < f < f_{max}. In order for the λ_{k} to be uniform over a logarithmic scale (similarly to Keshner, 1982), we require λ_{k} ∝ ϵ^{k} with ϵ < 1. The “simplest” way to achieve this is to have (see Equation 18)

so

In order for the λ_{k}/(2π) to cover the range [f_{min}, f_{max}] we require that

Given this pole scaling, Equation (26) requires c_{k} ∝ |λ_{k}|^{1−α}(λ_{k} − λ_{k − 1}) ∝ ϵ^{(2−α)k}, since λ_{k} − λ_{k − 1} ∝ ϵ^{k}. Therefore, from Equations (21) and (20) we have

so that S^{o}_{Y}(f) ∝ f^{−α}. Therefore, we require that w_{k} ∝ ϵ^{−μk} and N_{s,k} ∝ ϵ^{νk}, with 2μ + ν = α − 1.

In Appendix A.4 we investigate what type of scaling will also generate S_{Y}(f) ∝ f^{−α}, taking into account the effects of feedback (through κ(f)). We find that a robust way to obtain S_{Y}(f) ∝ f^{−α} is to take μ = 0. In this case we have (Equation 59), for |λ_{M}| ≪ 2πf ≪ |λ_{1}|,

where the logarithmic correction arises from the effect of the feedback κ(f).

Due to the logarithmic correction, in order to approximate S_{Y}(f) ∝ f^{−α} it is a reasonable choice to set

Even if there is no scaling in the parameters (i.e., μ = ν = 0), we obtain S_{Y}(f) ∝ f^{−1} (neglecting logarithmic factors).

Equation (30) is based on an asymptotic derivation, which is correct in two opposing limits (“sparse” or “dense” poles, Appendix A.5), indicating that these results are rather robust to parameter perturbations.

The magnitude of the ion channel numbers, set by N_{s,1}, is inversely proportional to the magnitude of S_{Y}(f), while w_{1} (the magnitude of the weights) does not affect the shape of S_{Y}(f).

When N_{s,1} → ∞, internal fluctuations vanish, so S_{Y}(f) cannot exhibit f^{−α} noise (in accordance with our results from Soudry and Meir, 2012).

In the previous section we derived minimal biophysical constraints under which a neuron may generate f^{−α} statistics in response to periodic stimulation. In this section we explore the input–output relation of the neuron under these constraints, in the case where the inter-stimulus intervals T_{m} form a general (sparse) random process. We decompose the neuronal response into contributions from its “long” history of internal fluctuations and its “short” history of inputs, quantifying neuronal memory.

Recall that T̂_{m} ≜ T_{m} − T_{*}, with T_{*} ≜ 〈T_{m}〉, and that S_{T}(f) is the PSD of T̂_{m} (Equation 2). As explained in Soudry and Meir (2012), it is possible^{9} to relate y_{m}, the fluctuations in the neuronal response, to a linear sum of the history of the input and internal noise, i.e.,

with the filter h^{ext}_{k} used to integrate the input history and h^{int}_{k} used to integrate ξ_{m}, a zero mean and unit variance white noise representing the internal fluctuations.

where we define H^{ext}(f) and H^{int}(f) as the transfer functions of these filters, which characterize the T_{m} → Y_{m} neuronal I/O at very long timescales.

Note that these filters are related to the PSDs in the following way:

where we recall that S_{YT}(f) is the cross-PSD of the response and the input (Equation 3), S_{T}(f) is the PSD of the input, and S_{Y}(f) − |H^{ext}(f)|^{2}S_{T}(f) is the part of the response PSD generated by internal noise.
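These relations suggest a direct cross-spectral estimator for the input filter. The following toy sketch (our illustration, not the paper's model) drives a known single-pole low-pass kernel, standing in for the neuronal input filter, with white input perturbations plus internal noise, and recovers the filter from averaged cross-periodograms; the kernel shape, gain and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, seg = 2 ** 18, 1024
# Toy "external" filter: a single-pole (exponential) kernel h_k = g*(1-a)*a^k
a, g = 0.9, 0.05
T_hat = rng.normal(size=n)                     # input interval perturbations
h = g * (1 - a) * a ** np.arange(200)          # truncated exponential kernel
y = np.convolve(T_hat, h)[:n] + 0.1 * rng.normal(size=n)  # response + internal noise

def cross_spectra(y, x, seg):
    """Averaged (cross-)periodograms over non-overlapping segments."""
    m = len(x) // seg
    X = np.fft.rfft(x[:m * seg].reshape(m, seg), axis=1)
    Y = np.fft.rfft(y[:m * seg].reshape(m, seg), axis=1)
    Syx = (Y * np.conj(X)).mean(axis=0) / seg
    Sxx = (np.abs(X) ** 2).mean(axis=0) / seg
    return np.fft.rfftfreq(seg), Syx, Sxx

freqs, Syt, St = cross_spectra(y, T_hat, seg)
H_est = Syt / St                               # cross-spectral estimate of the filter
H_true = g * (1 - a) / (1 - a * np.exp(-2j * np.pi * freqs))
print(np.max(np.abs(H_est - H_true)))          # small estimation error
```

The internal-noise term only adds variance to the estimate; it does not bias the ratio, which is why this probe isolates the external filter.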

For a general CBM, we can derive semi-analytically the exact form of the filters in Equation (33) from its parameters, as we did for S_{Y}(f). Note that if T̂_{m} = 0 (periodical input), then also S_{T}(f) = 0, and we recover our previous results.

where S_{Y}(f) is given by Equation (10),

with

Next, we find both filters for the minimal model described in section 3.2.2. Recall that in this model

with the two coefficients given by Equations (8) and (38), respectively. To simplify the analysis, we derive an asymptotic form for both filters, for the cases |λ_{M}| ≪ 2πf ≪ |λ_{1}| and 2πf ≫ |λ_{1}|. First, from Equations (36) and (59), we find

Similarly, from Equation (37), we find (Appendix A.6) that for the minimal model the interpolation between the two asymptotic cases is monotonic, so we can approximate

where the cutoff is set by λ_{1} and T_{*}. A few comments on Equations (40, 41) are in order at this point.

We found that H^{ext}(f) is approximately a low pass filter with cutoff f_{ext} = f_{1}/2π, while H^{int}(f) ∝ f^{−α/2} for 2πf ≪ |λ_{1}|. Consequently, in the temporal domain (Equation 32), for large k the neuron's memory of its input decays exponentially (h^{ext}_{k} ~ e^{−f_{1}T_{*}k}), while its memory of its internal fluctuations decays as a power law (h^{int}_{k} ~ k^{−(1−α/2)}). Therefore, the input memory has a finite timescale (equal to f^{−1}_{ext}), while the memory of internal fluctuations is “long” (with a cutoff only near f^{−1}_{min}).
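The qualitative difference between the two memory types can be illustrated with a toy calculation (ours, with assumed numbers): compare the tail mass of an exponentially decaying kernel with that of a power-law kernel of exponent −(1 − α/2). Since the power-law kernel is not summable for α < 2, we truncate it at a large cutoff, mirroring the role of f_{min}.

```python
import numpy as np

alpha = 1.4
k = np.arange(1, 10 ** 6 + 1)
h_ext = np.exp(-k / 100.0)            # exponential memory, timescale ~100 stimuli
h_int = k ** (-(1 - alpha / 2))       # power-law memory, exponent -(1 - alpha/2) = -0.3

def tail_fraction(h, lag):
    """Fraction of the kernel mass at lags beyond `lag`."""
    c = np.cumsum(h)
    return 1 - c[lag - 1] / c[-1]

# Beyond 1000 stimuli the exponential kernel has essentially no mass left,
# while the power-law kernel still retains most of it
print(tail_fraction(h_ext, 1000), tail_fraction(h_int, 1000))
```

With these assumed parameters the exponential kernel retains less than 10^{−3} of its mass beyond lag 1000, while the power-law kernel retains over 90% of it: internal fluctuations dominate the response at long lags, but the input does not.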

It is perhaps surprising that Equation (37), which has multiple poles, becomes a low pass filter with a single pole. The derivation (Appendix A.6) gives two main reasons for this. First, the scaling of c_{k} and d_{k} in Equation (39) induces only a weak (logarithmic) scaling of the poles in open loop. Second, even this weak scaling is canceled by the effects of the feedback.

Naturally, other models may have a different shape of H^{ext}(f).

In this section we apply our results to experimental data, described in section 3.4.1. In section 3.4.2 we implement the set of “minimal constraints” we found in section 3.2.2 in a specific CBM, and fit it to experimental data in which S_{Y}(f) ∝ f^{−α}. The analytical results in section 3.2 suggest that this specific CBM is a “reasonable” representative of the family of CBMs that can generate the experimental results. Other members of this family can be reached by varying the parameters within the (either minimal or general) constraints. Next, in section 3.4.3 we use our results from section 3.3.2 on the fitted model. We show that, although internal fluctuations in the model can affect the neural response on a timescale of days, the memory of the input is only retained for a duration of minutes. We suggest specific experiments to test this prediction. In section 3.4.4 we suggest further predictions.

In the experiment of Gal et al. (2010), synaptically isolated cortical neurons were stimulated with sparse, short current pulses of amplitude I_{0}. The observed neuronal response was characterized by different modes (Gal et al., 2010). Here we focus on the intermittent mode, in which 0 < p_{*} < 1 (i.e., sometimes the stimulation evokes an AP, and sometimes it does not). The patterns observed in Y_{m}, the AP occurrence timeseries, are rather irregular (Gal et al., 2010). These fluctuations in Y_{m} fall into the category of “f^{−α} noise,” where the value of α varied significantly between neurons – with

(mean ± standard deviation). This f^{−α} behavior holds only in some limited range f_{min} < f < f_{max}. From the experimental data (Figure 6C in Gal et al., 2010), f_{min} < 10^{−5} Hz and f_{max} ~ 10^{−2} Hz. Also, since α > 1, we must have 0 < f_{min} (see section 3.1).

In our previous work (Soudry and Meir, 2012) we fitted a stochastic Hodgkin–Huxley model with a single slow sodium inactivation process to the basic experimental results. Here we extend it to a model with multiple slow sodium inactivation processes (HHMS), in which the first slow variable has the same kinetics as before. The remaining rates (for k > 1) are scaled geometrically, γ_{k}(V) = γ(V)ϵ^{k − 1} and δ_{k}(V) = δ(V)ϵ^{k − 1} (as in Equation 27), with channel numbers N_{s,k} = N_{s}ϵ^{νk}. Therefore, the only free parameters are ϵ, N_{s} and I_{0} (I_{0} is the current amplitude of the stimulation pulses).

This model can be used to fit the experimental results for any α ∈ [0, 2). We performed a numerical simulation of the full equations (Equations 4–6) of the HHMS model under periodical stimulation with T_{*} = 50 ms. We aimed to fit an experiment from Gal et al. (2010) in which S_{Y}(f) ∝ f^{−α} with α = 1.4 (which is approximately the average α value measured in Gal et al., 2010). The stimulation amplitude was set to I_{0} = 7.7 μA so that p_{*} ≈ 0.4, as in the experimental data (using the self-consistent equations for p_{*} from Soudry and Meir, 2012), and we set N_{s} = 10^{4} in order to fit the magnitude of S_{Y}(f).

After fitting the HHMS model to the experimental results, we can examine its resulting linearized input–output relation, described by the filters H^{ext}(f) and H^{int}(f) from Equation (33).

In accordance with the asymptotic forms in Equations (40) and (41), we find that H^{ext}(f) is approximately a low pass filter with cutoff f_{ext} ~ 10^{−2} Hz, while H^{int}(f) ∝ f^{−α/2} for f_{min} < f < 10^{−2} Hz, with f_{min} < 10^{−5} Hz. Therefore, as explained in section 3.3.2, this model implies that the response of the neuron is affected by internal fluctuations over the scale of days (~f^{−1}_{min}) or more, generating the f^{−α} behavior we observe, while the neuron's memory of its input decays within minutes (~f^{−1}_{ext}).

The linearized filters of the fitted model: H^{ext}(f) is approximately a low pass filter with cutoff ~10^{−2} Hz, while H^{int}(f) ∝ f^{−α/2} for f ≪ 10^{−2} Hz.

Next, we examine two methods which allow us to probe H^{ext}(f) experimentally.

First, a simple method to probe the external input filter H^{ext}(f) is to stimulate the neuron with intervals whose PSD follows S_{T}(f) ∝ f^{−β} (above some lower cutoff), and to estimate H^{ext}(f) from the measured cross-PSD through the ratio S_{TY}(f)/S_{T}(f).

The ratio S_{YT}(f)/S_{T}(f) compared with the filter H^{ext}(f).

Second, the filter H^{ext}(f) can be probed directly using sinusoidally modulated stimulation intervals.

As we explain in Appendix B.3, in this case the output of the neuron would be

This allows us to easily estimate |H^{ext}(f_{l})| from the amplitude of the sinusoidal component of the response (relative to the modulation amplitude T_{amp} of the intervals T_{m}) at the probe frequencies f_{l}.
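The lock-in procedure can be sketched as follows (a toy demonstration with assumed numbers, where a known single-pole low-pass filter again stands in for H^{ext}): modulate the intervals sinusoidally at a few probe frequencies f_{l}, and read off the gain from the Fourier coefficient of the response at each f_{l}.

```python
import numpy as np

rng = np.random.default_rng(3)
a = 0.95          # pole of the toy low-pass "neuron" (an assumed stand-in for H^ext)

def respond(T_hat):
    """Toy linear response y_m = low-pass filtered input + internal noise."""
    y = np.empty(len(T_hat))
    s = 0.0
    for m, t in enumerate(T_hat):
        s = a * s + (1 - a) * t
        y[m] = s + 0.05 * rng.normal()
    return y

n, T_amp = 2 ** 16, 1.0
results = []
for f_l in [1 / 1024, 1 / 64, 1 / 8]:      # probe frequencies (integer cycles in n)
    m = np.arange(n)
    y = respond(T_amp * np.sin(2 * np.pi * f_l * m))
    # Lock-in: project the response onto the probe frequency
    gain_est = 2 * np.abs(np.mean(y * np.exp(-2j * np.pi * f_l * m))) / T_amp
    gain_true = np.abs((1 - a) / (1 - a * np.exp(-2j * np.pi * f_l)))
    results.append((f_l, gain_est, gain_true))
    print(f_l, gain_est, gain_true)
```

Because the lock-in projection averages out both the internal noise and any off-frequency components, a fairly short recording per probe frequency already gives an accurate gain estimate.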

As explained in Gal et al. (2010), some neurons instead respond in a mode with p_{*} = 1, where the relevant observable is the PSD of the latency, S_{L}(f). Since S_{Y}(f) vanishes when p_{*} → 1 (see section 4.4.6 in Soudry and Meir, 2012), we predict in this case that S_{L}(f) ∝ f^{−α} approximately (neglecting logarithmic factors).

Next, suppose we vary some measurable stimulation parameter, such as the mean stimulation rate T^{−1}_{*}. How would this affect the shape of the filters we derived? The analytical results allow us to calculate this explicitly in the HHMS model.

First, we consider the gain of the external input filter H^{ext}(f) (i.e., H^{ext}(0)). As we explain in Appendix A.7, if f ≪ f_{cutoff}, then

which is the mean firing rate of the neuron – an easily measurable quantity.

Second, how would H^{int}(f) change if T_{*} is varied? Since H^{int}(f) is determined by S_{Y}(f), and S_{Y}(f) ∝ f^{−α} approximately at low frequencies, the exponent α should not depend much on any external parameter (assuming 0 < p_{*} < 1). This was indeed observed experimentally when the stimulation rate (T^{−1}_{*}) was varied, as can be seen in Figure 1G of Gal and Marom (2013).

In this work we aim to explain biophysically the phenomenon of f^{−α} behavior in the response of isolated neurons, and to explore its implications for the input–output relation of the neuron. We do this under a regime of sparse stimulation.

These mathematical results expose the large parameter degeneracy of CBMs (Marder and Goaillard, 2006), and yield general conditions for generating f^{−α} noise in a CBM (section 3.2.1). These conditions indicate which types of CBMs can generate the observed behavior. We show that, in order to generate f^{−α} behavior, neurons should have intrinsic fluctuations (e.g., due to ion channel noise), and a number of slow processes with a large range of timescales, “covering” the entire range over which f^{−α} statistics are observed. Furthermore, the parameters of these processes must be scaled in a certain way in order to generate f^{−α} noise with a specific α (Equation 26).

We implement these constraints in a minimal CBM (section 3.2.2), in which the slow processes are uncoupled, except through the voltage, as in Soudry and Meir (2012). This prevents f^{−α} statistics from being generated in case (1); in contrast, option (2) can robustly generate the observed f^{−α} statistics in the neuronal response for any 0 < α < 2 (Equation 30 and Figure 6).

Naturally, outside of the framework of CBMs (Equations 4–6), long term correlations may be modeled differently, since there are numerous distinct ways to generate power law distributions (Newman, 2005).

We examine our theoretical predictions numerically. We do this using a stochastic Hodgkin–Huxley type model with slow sodium inactivation that was previously fitted to the basic experimental results (Soudry and Meir, 2012), extended so that it satisfies the conditions for f^{−α} noise, as demonstrated numerically.

Previous works (Lowen et al., 1999) have also modeled a neuron exhibiting f^{−α} statistics in its response. In Lowen et al. (1999), ion channel fluctuations generated an f^{−α} firing rate response; their model produced an exponent of α ≈ 0.5, replicating experimental measurements from auditory nerves. Another relevant work is that of Soen and Braun (2000).

The identity of the specific slow processes involved in generating the f^{−α} statistics remains a mystery at this point, since there are many possible mechanisms which can modulate the excitability of the cell on such long timescales. For example, ion channel numbers, conductances and kinetics are constantly being regulated and may change over time (e.g., Levitan, 1994).

The linearized input–output relation of the fitted CBM was derived using the methods described in Soudry and Meir (2012). We find that the model's memory of its input decays within ~10^{2} s.

In the introduction we mentioned previous works (Lundstrom et al., 2008) demonstrating multiple-timescale adaptation. In contrast, here the f^{−α} statistics measured by Gal et al. (2010) do not imply that the neuron “remembers” inputs given more than ~10^{2} s ago.

Qualitatively, this specific timescale of the input memory stems from the “fastest slow negative feedback process” in the model (in this specific model, slow sodium inactivation). This process responds to input perturbations which change the firing rate much more quickly than all the other slow processes. Its response to a perturbation brings the firing rate back to its steady state before the slower processes even register that the firing rate has changed. Therefore, effectively, these slower processes do not store much information about input perturbations. We suggest experiments to test input memory directly, by using f^{−α} stimulation.

This work makes several practical contributions. First, our results impose specific constraints on the slow processes that modulate the excitability on very long timescales (e.g., a ratio between timescales and channel numbers). Such constraints facilitate the construction of neuronal models with “realistic” input–output relations over extended timescales. Hopefully, these constraints may also help to identify the relevant slow biophysical processes. Second, our results suggest that for sparse spiking inputs, the memory of a cortical neuron stretches back to the last minute of its input, but not much more. This limit could be especially relevant when fitting statistical models to neuronal data, and for setting limitations on neuronal computations.

As for the functional significance, it is still not clear why the neuronal response fluctuates so wildly, especially at long timescales. We end this paper by offering some speculations on this issue. We see three possible scenarios. One possibility is that these fluctuations are beneficial; for example, such non-stationary fluctuations should increase network heterogeneity, which may be advantageous (Tessone et al., 2006). Another possibility is that they are harmful: f^{−α} noise imposes important constraints on electronic circuits, and was predicted to impose similar constraints on neural circuits (Sarpeshkar, 1998).

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The Supplementary Material for this article can be found online at:

The authors are grateful to O. Barak, N. Brenner, Y. Elhanati, A. Gal, T. Knafo, Y. Kafri, S. Marom, J. Schiller, and M. Ziv for insightful discussions and for reviewing parts of this manuscript. The authors are also grateful to A. Gal and S. Marom for supplying the experimental data. The research was partially funded by the Technion V.P.R. fund and by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).

^{1}I.e., if ∀

^{2}A semi-analytic derivation is an analytic derivation in which some terms are obtained by relatively simple numerics.

^{3}For example, this can happen if the kinetic rates all have low voltage threshold, resulting in _{+} ≈ _{+} ≈

^{4}I.e., in all neurons for which 0 < p_{*} < 1.

^{5}Otherwise, 0.25 ≥ p_{*} − p^{2}_{*} = 〈y^{2}_{m}〉 = 2∫^{1/2}_{0}S_{Y}(f)df, which diverges if α > 1 and f_{min} = 0.

^{6}Also, simulations take a long time, since the experiments, as in Gal et al. (2010), last for days.

^{7}E.g., in Equation (16) many different parameter combinations would give the same c_{k}.

^{8}Near the edges,

^{9}I.e., Equations (4–6), with the same assumptions as we had in section 2.3.1.