
Edited by: David Hansel, University of Paris, France

Reviewed by: David Golomb, Ben-Gurion University of the Negev, Israel; Ron Meir, Israel Institute of Technology, Israel

*Correspondence: Elisha Moses, Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100, Israel. e-mail:

This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.

We present a theoretical framework using quorum percolation for describing the initiation of activity in a neural culture. The cultures are modeled as random graphs, whose nodes are excitatory neurons with k_{in} inputs and k_{out} outputs, and whose input degrees k_{in} = k are distributed according to a prescribed p_{k}. We examine the firing activity of the population of neurons according to their input degree (k) classes.

Development of connectivity in a neuronal network is strongly dependent upon the environment in which the network grows: cultures grown in a dish will develop very differently from networks formed in the brain. In a dish, the only signals that neurons are exposed to are chemicals secreted by neighboring neurons, which then must diffuse to other neurons via the large volume of fluid that surrounds the culture. The result is a connectivity dominated by proximity in a planar geometry, whose input degree follows a statistical distribution function that is Gaussian-like (Soriano et al.,

We have previously shown (Cohen et al.,

In this paper we apply our graph theoretic approach to the intriguing process of the initiation of a spontaneous population spike. On the one hand, a perturbation needs to be created that pushes a number of neurons to begin firing. On the other hand, the initial firing pattern must propagate to the rest of the neurons in the culture. Understanding this recruitment process will give insight on the structure of the network, on the interrelation of activation in neurons, and on the dynamics of neuronal firing in such a complex culture. A simple scenario for initiation that one might conceive of is wave-front propagation, in which a localized and limited area of initiation is ignited first, and from there sets up a spherical traveling front of excitation. However, as we shall see, the initiation is a more intricate process.

The experimental situation regarding initiation of activity is complex. In quasi one-dimensional (1D) networks we have been able to show that activity originates in a single “Burst Initiation Zone” (BIZ), in which a slow recruitment process occurs over several hundreds of milliseconds (Golomb and Ermentrout,

In this paper we address and connect three experimental observations that are at first sight unrelated. The first and fundamental observation is the fact that bursts are initiated by Leaders, or first-to-fire neurons (Eytan and Marom,

We then turn to the second experimental observation, that the activity in a burst starts with an exponential growth. We show that this can happen only if the distribution of in-degrees is a power law with exponent −2. However, this needs to be reconciled with the third experimental observation, which is that the measured distribution of in-degrees is Gaussian rather than power law. We reconcile the observations by stitching together the two solutions into an in-degree distribution that has the large majority of neurons in a Gaussian centered around an average in-degree of about 75, while 10% of the population lie on a power law tail that can reach a few thousand connections. We show that this reconstructs the full experimentally observed burst structure: an exponential rise during the pre-burst followed by super-exponential growth during the burst itself.

We present several ideas on the origin of these distributions in the spatial extent and geometry of the neurons, and show that the distribution of in-degrees is proportional to the distribution of spatial sizes of the dendritic trees. We thus conjecture that the distribution of dendritic trees is mostly Gaussian, but that a few neurons must have dendrites that go off very far, with power law distribution of this tail. We end by making some additional conjectures about 1D cultures and on the importance of the size of the culture.

We describe the neural culture using a simplified model of a network whose nodes are neurons with links that are the axon/dendrite connections. This picture is further simplified if we consider a randomly connected sparse graph (Bollobas, ), characterized by its in-degree distribution p_{k}.

In particular, in this paper we obtain a theoretical explanation for the experimental observation of initiation of activity by a small number of neurons from the network and the subsequent gradual recruitment of the rest of the network. Within the framework of a random graph description, we have previously shown that the dynamics of firing in the network is described by a fixed point equation for the probability of firing in the network, which also corresponds to the experimentally observed fraction of neurons that fire. Experimentally, this fraction can be set by applying an external voltage (Breskin et al.,

Specifically, connections are described by the adjacency matrix A, whose entry A_{ij} is 1 if neuron j connects to neuron i and 0 otherwise, with A_{ii} = 0 (no self-connections).

Our study starts by assuming initial conditions in which a fraction f of the neurons is set to fire at time t = 0. A neuron then turns on as soon as at least m of its inputs are active:

s_{i}(t + 1) = 1 if s_{i}(t) = 1 or Σ_{j} A_{ij} s_{j}(t) ≥ m, and s_{i}(t + 1) = 0 otherwise,

where s_{i}(t) ∈ {0, 1} is the state of neuron i at time t, the sum Σ_{j} A_{ij} s_{j}(t) counts the active inputs of neuron i, m is the firing threshold, and f is the externally imposed initial firing fraction.
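This update rule can be simulated directly on one random graph realization. The following sketch is illustrative only: the network size N = 2,000, in-degree k = 75, threshold m = 15, and the initial fractions are assumed stand-in values, not the parameters of the paper. Each neuron receives k random inputs and turns on, permanently, once at least m of them are active.

```python
import random

def simulate(N, k, m, f, steps, seed=0):
    """Quorum dynamics on one random graph: neuron i turns (and stays) on
    once at least m of its k randomly chosen inputs are on."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), k) for _ in range(N)]        # fixed wiring
    state = [1 if rng.random() < f else 0 for _ in range(N)]    # initial firing
    phi = [sum(state) / N]
    for _ in range(steps):
        # synchronous update: the comprehension reads the old state list
        state = [1 if (s or sum(state[j] for j in inp) >= m) else 0
                 for s, inp in zip(state, inputs)]
        phi.append(sum(state) / N)
    return phi

phi_high = simulate(N=2000, k=75, m=15, f=0.25, steps=20)          # ignites
phi_low = simulate(N=2000, k=75, m=15, f=0.05, steps=20, seed=1)   # stays low
```

With f above threshold the whole network ignites within a few steps; with small f the activity stays near its initial value, illustrating the all-or-none character of the transition.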

To derive the “mean-field” dynamics for the average fraction of firing neurons, Φ(t) = N^{−1} Σ_{i} s_{i}(t), we average the single-neuron rule over the network. For a neuron i with k inputs, the inputs are treated as statistically independent, each being active with probability Φ(t), so that the probability that Σ_{j} A_{ij} s_{j}(t) reaches the threshold m is a binomial tail. Averaging over all neurons then yields the iteration

Φ(t + 1) = f + (1 − f) Ψ(Φ(t)),

where the combinatorial expression for Ψ is the probability that at least m of the k inputs fire,

Ψ(Φ) = Σ_{k} p_{k} Σ_{l=m}^{k} (k choose l) Φ^{l} (1 − Φ)^{k−l}.

The long-time activity of the network is given by the fixed point Φ_{∞}, which is found by inserting Φ(t) = Φ(t + 1) = Φ_{∞} into the iteration above, Φ_{∞} = f + (1 − f) Ψ(Φ_{∞}).
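The mean-field iteration and its fixed point can be evaluated numerically. The sketch below uses an illustrative truncated-Gaussian p_{k} and an assumed threshold m = 15; these are stand-in values, not a fit to the data.

```python
import math

def quorum_map(phi, f, p_k, m):
    """One mean-field step: Phi(t+1) = f + (1 - f) * Psi(Phi(t)), where Psi
    is the probability that at least m of a neuron's inputs are active."""
    psi = 0.0
    for k, p in p_k.items():
        tail = sum(math.comb(k, l) * phi**l * (1 - phi)**(k - l)
                   for l in range(m, k + 1))   # binomial tail P(X >= m)
        psi += p * tail
    return f + (1 - f) * psi

def fixed_point(f, p_k, m, steps=100):
    phi = 0.0
    for _ in range(steps):
        phi = quorum_map(phi, f, p_k, m)
    return phi

# illustrative Gaussian-like in-degree distribution centred on k = 75
ks = range(50, 101)
w = [math.exp(-((k - 75) ** 2) / (2 * 25.0 ** 2)) for k in ks]
p_k = {k: wi / sum(w) for k, wi in zip(ks, w)}

low = fixed_point(f=0.05, p_k=p_k, m=15)    # subcritical: Phi stays near f
high = fixed_point(f=0.25, p_k=p_k, m=15)   # supercritical: full ignition
```

The two initial conditions bracket the discontinuous transition: below it the fixed point remains close to f, above it essentially the whole network fires.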

The role of Leader neurons in the initiation and the development of bursts can be clarified by dividing the neurons into classes of in-degree k, each comprising a fraction p_{k} of the network. The firing fraction Φ_{k}(t) of the class k evolves as

Φ_{k}(t + 1) = f + (1 − f) Ψ_{k}(Φ(t)),

where Ψ_{k}(Φ) = Σ_{l=m}^{k} (k choose l) Φ^{l} (1 − Φ)^{k−l} is the probability that at least m of the k inputs of a class-k neuron fire, and the total activity is recovered as Φ(t) = Σ_{k} p_{k} Φ_{k}(t).

It follows that each class approaches the fixed point Φ_{k,∞} = f + (1 − f) Ψ_{k}(Φ_{∞}). Given the time dependence of Φ(t), one can likewise follow the dynamics Φ_{k}(t) of each class separately.

The combinatorial expressions for Ψ and Ψ_{k} are binomial tail sums over the active inputs.

There is a particular critical initial firing fraction f_{c}: for f < f_{c} the activity stays close to the initial perturbation, while for f > f_{c} the fixed point jumps discontinuously and essentially the whole network ignites.

The model was numerically investigated by performing a simulation employing N model neurons, whose in-degrees, beyond a value k_{tail}, obey a power law distribution function.

The spatial extent and arrangement of connections can be of importance to the dynamics of the network. In contrast to spatially embedded (metric) graphs, random graphs allow any two nodes to connect, i.e., they are the analog of infinite dimensional networks. The experiment is obviously metric but our model employs a random graph. This seeming contradiction was resolved in a previous study (Tlusty and Eckmann,

The basic reason why a tree-like graph will describe the experimental network lies in the observation that the percentage of connections emanating from a neuron that actually participate in a loop is small. Indeed, we have demonstrated in Soriano et al. (

Such networks do indeed have some loops, and thus we need to study the effect of such loops on the general argument of Eq.

If neuron A fires at time t, its activity can return to it through a 2-loop (A → B → A) only at time t + 2, and the relative size of this feedback is small, of order 1/m^{2}.

To see this, we note that if A is off then the number of available inputs that can fire into B reduces by one, from k to k − 1, changing the state of B with probability of order 1/m^{2}. The back-propagated effect on A will then be of order 1/m^{2} · 1/m^{2} = 1/m^{4}.

To estimate the total number of 2-loops that include neuron A we first look at all trajectories of length 2 that emanate from A. There are of order k_{out}^{2} of them, and each one closes back onto A with probability of order k_{in}/(k N), so that A participates in of order k^{2}/N such loops.

Assuming that for a typical neuron k_{in} is equal to k_{out} = k, the total effect of 2-loops on Φ calculated at neuron A is therefore of order (k^{2}/N) · (1/m^{4}), which is negligibly small for the experimentally relevant values of k, m, and N.
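The rarity of 2-loops in a sparse random graph can be illustrated numerically. The sketch below is not the paper's simulation; N = 2,000 and k = 20 are assumed, illustrative values. With edge probability k/N, the expected number of reciprocal pairs in the whole graph is about k^{2}/2, i.e., of order k^{2}/N per neuron.

```python
import random

def count_mutual_pairs(N, k, seed=0):
    """Draw a random directed graph with ~k outputs per node and count
    reciprocal pairs (a->b together with b->a), i.e. 2-loops."""
    rng = random.Random(seed)
    edges = set()
    for a in range(N):
        for b in rng.sample(range(N), k):
            if b != a:
                edges.add((a, b))
    return sum(1 for (a, b) in edges if a < b and (b, a) in edges)

N, k = 2000, 20
mutual = count_mutual_pairs(N, k)
expected = k * k / 2          # ~ N^2/2 pairs, each reciprocal with prob (k/N)^2
per_node = 2 * mutual / N     # ~ k^2/N two-loops through a given neuron
```

For these values per_node is about 0.2, so only about one neuron in five sits on any 2-loop at all, supporting the tree-like approximation.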

Below we show the applicability of the tree-like random graph model by comparing its results directly to the simulation, which uses the full, explicit connectivity of one graph realization.

In the experimental case, spatial proximity may lead to more connections than in a random graph. The effect of space is to reduce the relevant number of neurons from N_{total} = 500,000 to the number within connection range, N_{space} = 3,000. Even so, the loop correction estimated above with N_{space} in place of N remains small.

In a separate work, a simulation that takes space into account was performed (Zbinden, ). There the contribution of loops is enhanced, since N_{space} replaces N_{total} in the estimate, but still remains small.

In a physical model one must be sure that the ensemble of random examples chosen to average over a given quantity does indeed represent well the statistics of the system that is being treated. In our model, the connectivity of the graph is fixed (“quenched” disorder), and the ensemble is that of the random graphs that can be generated with the particular choice of input connection distribution function. In reality, the experiment and the simulation measure the bursts inside one particular realization. However, the mean-field equation averages over a whole ensemble of such random graphs. The question is whether the averages obtained using one real graph are representative of the whole ensemble. This is a behavior which is termed “self-averaging,” and means that, in the limit of large graphs, one single configuration represents the average behavior of the ensemble.

The analogous ensemble in the classic Hopfield (spin-glass) model for neural networks is known to be self-averaging in the limit of an infinitely large graph (van Hemmen,

In practice, our model differs from the neural network model of Amit et al. (

The possibility of turning a neuron “off” is not incorporated in the model because we only consider short times. The whole process described by the simulation occurs over a very brief period of time, and therefore a firing neuron keeps its effect on other neurons during the whole process. To be concrete, the unit of time in the model and in the numerical simulations is the firing of one spike, equivalent in the experiment to about 1 ms. The simulation extends to about 50 units, i.e., describes a process that occurs typically for 50 ms.

In our model a neuron has no internal structure, so that whether it is “on” or “off” impacts only the neurons that are its neighbors. The relevant issue is therefore – how long is the effect of a neuron's activity felt by its “typical” neighbor. The experimental facts are that a neuron fires on average 4–5 spikes per burst, each lasting a millisecond, with about 4–5 ms between the spikes (Jacobi and Moses,

The post-synaptic neuron retains the input from these spikes over a time scale set by the membrane decay constant, which is on the order of 20–30 ms. Therefore, after a firing period of about 20 ms, there is a retention period of comparable duration. We can conclude that the effect of a neuron that has fired can be felt by its neighbors for the total build-up time of the burst, about 50 ms. We therefore describe by “on” the long term, averaged effect of the neuron once it has begun firing. One caveat to this is that the strength of that effect may vary with time, and such an effect is not described within the model.

We also assume, for simplicity, that all the neurons are available and can participate in the burst (no refractory neurons). In the experiment this is equivalent to looking at those bursts that have quiescent periods before them, which is often the case.

In this model all neurons are excitatory; within the “0” or “1” structure of the model, the contribution of an inhibitory neuron would be “−1.” Thus adding inhibitory neurons amounts to increasing the effective firing threshold m.

Below we also model the dynamics of burst activation observed in the experiments of Eytan and Marom (

We assume the existence of spontaneous sporadic activity of single neurons in the culture. In principle, this can be treated as a background noise (Alvarez-Lacalle and Moses,

We use the simulation for an initial look at the recruitment process and to identify those neurons that fire first. We use a Gaussian distribution to describe the experimental situation as closely as possible, and put the system near criticality, i.e., with f close to the critical value f_{c}.

The in-degree distribution p_{k} used here is, below k_{tail}, a Gaussian centered on k_{cen}, while p_{k} ∝ k^{−2} for k ≥ k_{tail}. We checked that a simple Gaussian p_{k}, without the power law tail, behaves similarly in this respect.
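Such a stitched distribution can be drawn from by inverse-CDF sampling of the tail. The recipe below is a sketch: the parameter values (k_{cen} = 75, σ = 25, k_{min} = 20, k_{tail} = 150, k_{max} = 4,680, 10% of nodes in the tail) follow the text, but the stitching procedure itself is our illustration.

```python
import random

def sample_in_degree(rng, k_cen=75, sigma=25, k_min=20,
                     p_tail=0.1, k_tail=150, k_max=4680):
    """One in-degree from a Gaussian bulk stitched to a k^-2 tail."""
    if rng.random() < p_tail:
        # inverse-CDF sampling of p(k) ~ k^-2 on [k_tail, k_max]
        u = rng.random()
        k = 1.0 / (1.0 / k_tail - u * (1.0 / k_tail - 1.0 / k_max))
        return round(k)
    while True:
        # rejection sampling keeps the Gaussian bulk inside [k_min, k_tail)
        k = round(rng.gauss(k_cen, sigma))
        if k_min <= k < k_tail:
            return k

rng = random.Random(1)
sample = [sample_in_degree(rng) for _ in range(20000)]
frac_tail = sum(k >= 150 for k in sample) / len(sample)
```

By construction about one tenth of the sampled degrees land on the power law tail, while the bulk stays centred near 75.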

Further information on the distribution of ignition times can be obtained by following neurons of different in-degree k.

To understand this from the model, we concentrate on the neurons within a given in-degree class k. For large k the number of active inputs of such a neuron is sharply peaked around kΦ, so that Ψ_{k}(Φ) is well approximated by a step function, close to 0 while kΦ lies below the threshold value m and close to 1 once kΦ exceeds m,

Ψ_{k}(Φ) ≈ Θ(kΦ − m),

meaning that under this approximation the whole k class ignites together, at the moment when Φ(t) first exceeds m/k. Summing over the classes turns the iteration into

Φ(t + 1) = f + (1 − f) [1 − C(m/Φ(t))],

where C(k) = Σ_{k′<k} p_{k′} is the cumulative distribution.

The time continuous version of the iteration equation is

dΦ/dt = f + (1 − f) [1 − C(m/Φ)] − Φ,

which can be integrated, at least numerically, to obtain the dynamics of the system. Within this approximation, the k classes ignite sequentially in order of decreasing in-degree: the most highly connected neurons fire first.

Since these neurons are highly connected, they will statistically be connected to each other as well. For the experimentally relevant case of a Gaussian distribution peaked at 78 connections with a width of 25 (Breskin et al.,

In the Multi Electrode Array experiment about 60 neurons were monitored, and a burst was observed to begin with one or two of these neurons. From these initial sites the activity spread. Identifying these neurons as Leaders, we reach the conclusion that in every experimentally accessible patch of the network that we monitor there is a small number of neurons that lead the other neurons in activation. We therefore deduce from the theory that they are part of this highly interconnected, sparse sub-network. In the initial pre-burst period nearby neurons are recruited by inputs from the Leaders, while in the burst itself all the neurons fire together. During the pre-burst a spatial correlation to the Leader exists in its near vicinity, which vanishes as the activity transits to the burst.

We remark here that within our model a node that fires early is highly connected. However, the number of connections of any particular neuron is not directly accessible in the experiment.

If the firing order of the neurons is determined by their connectivity, then by observing and analyzing the evolution of the burst we may learn about the connectivity of the neurons. We focus on the experimental fact that the growth rate of the very first firing is exponential, which leads us in the next sections to analyze a particular form of the degree distribution that can lead to exponential growth dynamics.

Our observation that the initial growth of the burst depends entirely on nodes at the very high-k end of the distribution leads us to connect the in-degree distribution with the exponential growth Φ(t) ∝ e^{αt} observed by Eytan and Marom (

If Φ(t) is to grow exponentially, the high-k part of the distribution must behave as p_{k} ∝ k^{−2}. We begin by plugging into the approximate dynamics above the exponential ansatz Φ(t) = Φ_{0} e^{αt}, where α is the growth rate, so that

dΦ/dt = αΦ,

and therefore the recruitment term must itself be proportional to the active fraction,

1 − C(m/Φ) ∝ Φ.

Introducing the power law tail p_{k} = A k^{−2}, the cumulative distribution obeys

1 − C(K) = ∫_{K}^{∞} A k^{−2} dk = A/K,

and we end up with

1 − C(m/Φ) = (A/m) Φ.

This sets the value for the growth rate, which is proportional to A/m; the exponent −2 is the unique power law for which the recruitment rate is strictly proportional to the already active fraction, i.e., for which the growth is exponential.
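The proportionality between recruitment and active fraction can be checked numerically with the step-function (large-k) approximation of the dynamics. In this sketch the parameters (m = 15, a pure k^{−2} distribution between k_{min} = 150 and k_{max} = 10^{5}, seed fraction f = 10^{−3}) are assumed, illustrative values.

```python
def one_step(phi, f, m, k_min, k_max):
    """Large-k approximation of the quorum dynamics: a class of in-degree k
    ignites once phi exceeds m/k, so the newly recruited fraction is the
    fraction of nodes with degree >= m/phi under p_k ~ k^-2."""
    K = max(m / phi, k_min)
    norm = 1.0 / k_min - 1.0 / k_max                     # normalisation of k^-2
    tail = max(0.0, (1.0 / K - 1.0 / k_max) / norm)      # fraction with k >= K
    return f + (1.0 - f) * tail

f, m, k_min, k_max = 1e-3, 15, 150, 100_000
phi = [f]
for _ in range(3):
    phi.append(one_step(phi[-1], f, m, k_min, k_max))
ratios = [phi[i + 1] / phi[i] for i in range(len(phi) - 1)]
```

The per-step growth factor comes out nearly constant (here close to k_{min}/m = 10), which is exponential growth; repeating the exercise with a different tail exponent should make the ratio drift systematically instead.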

However, the distribution obtained above would give an exponential growth at all times, until the whole network is ignited and Φ(t) saturates. Moreover, the experimentally determined in-degree distribution is a Gaussian, centered on k_{cen} ≃ 78 and with a width of about σ = 25. The average connectivity, as measured by the mean of the Gaussian, was shown to increase with the density of plating of the neurons (Soriano et al.,

This leads us to the following tailored solution, which combines both these experimental inputs and solves the growth rate problem. We keep the Gaussian distribution for the bulk of p_{k}, and attach to it a power law tail at high k.

For simplicity and conformity with the experimental situation, we also demand that no node has in-degree less than a minimal k_{min} > m (with k_{min} < k_{tail}):

At high k we take p_{k} ∝ k^{−2}, and we do this from a degree k_{tail} upward. The value of k_{tail}, among other parameters, is determined by external considerations along with consistency constraints, as detailed below.

An impressive experimental fact is the large dynamic range observed in the amplitude of the burst, about two and a half decades in total. In the experiment, the amplitude grows in the exponential pre-burst phase by a factor of about 30, and in the burst itself by a further factor of about 10.

In the experiment the large dynamic range and high precision are obtained by averaging over a large number of bursts, and can be reproduced in the simulation only if there is a very large range of available in-degrees k.

To ensure that during the exponential regime Φ increases by a factor of 30 while during the faster growth it grows by a factor of 10, the transition from exponential growth to the faster, full blown firing of the network is designed to occur at Φ = 0.1.

Since the class k enters the burst when Φ(t) reaches m/k, the transition at Φ = 0.1 corresponds to k_{tail} ≃ m/Φ_{tail} ≃ 150. This is a considerable distance from the peak of the Gaussian, so that it is justified to describe the majority of the nodes by the Gaussian distribution.

The highest cutoff of the degree distribution is in turn determined by the constraint on the integral over the distribution from k_{tail} to k_{max}, which should yield a total fraction of 0.1, since that is the part of the network that will ignite in the initial, exponential regime: 1 − C(k_{tail}) = Φ_{tail} = 0.1. This condition allows us to normalize the cumulative function.
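This normalization can be evaluated in closed form: in the continuum approximation, the integral of A k^{−2} from k_{tail} to k_{max} equals A (1/k_{tail} − 1/k_{max}) = 0.1, which fixes the pre-factor. A two-line check, using the values quoted in the text:

```python
k_tail, k_max, phi_tail = 150, 4680, 0.1

# integral of A * k**-2 from k_tail to k_max equals A * (1/k_tail - 1/k_max)
A = phi_tail / (1.0 / k_tail - 1.0 / k_max)   # -> about 15.5
```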

Plugging this normalization back into the dynamics gives the growth in the exponential regime,

where we used Φ_{tail} = m/k_{tail}. At k_{tail} the exponential regime connects onto the Gaussian-dominated one.

Since we have seen that Φ_{tail} = 0.1, this sets consistency demands on α and on the pre-factor A.

The experimentally relevant values are k_{tail} ≃ 150 and k_{max} ≃ 4,680, so that the power law stretches from k_{tail} all the way to k_{max}, and is glued onto a Gaussian curve centered on k_{cen} = 75.

The in-degree distribution p_{k}: the Gaussian part has k_{cen} = 75, σ = 31, and k_{min} = 20, while the power law tail A k^{−2} goes from k_{tail} = 150 to k_{max} = 4,680; its pre-factor A is fixed by the normalization condition above.

Figure

In the simulation the limits are rounded to integers, k_{min} = round(·) and k_{max} = round(·).

We can also compare the simulations of the network with the numerical solution of the model. For this we use the iterative scheme defined by Eq.

In summary, from the quantitative comparison we find that the model has an exponential initial transient if the in-degree distribution is mostly Gaussian, with 10% of the neurons in the power law tail, and that the highest in-degrees must reach into the thousands.

When comparing these results with the experiment, we should remember that only 60 electrodes are being monitored. The exponential behavior that is observed over a large dynamic range can be resolved since multiple firings at the same electrode are recorded with a resolution better than 1 ms. In the simulation, in contrast, this is modeled by going to high numbers of neurons, each of which can only fire once.

At the end of the previous Section we saw that the in-degree distribution p_{k} and the firing threshold play interchangeable roles; we now examine this equivalence in more detail.

Facilitation of activity can occur if neurons that have already been excited several times are easier to excite subsequently. By synaptic facilitation we mean the property of a synapse increasing its transmission efficacy as a result of a series of high frequency spikes. We examine here some of these effects by introducing, for the sake of simplicity, a threshold which is a function of the average firing state, m(Φ).

Given the time series of the firing fraction Φ(t), one can ask which threshold function m(Φ) reproduces it for a given in-degree distribution p_{k}:

where

It is particularly interesting to ask whether the power law degree distribution p_{k} ∝ k^{−2} supports a biologically feasible form of m(Φ),

where k_{min} and k_{max} are the limits of the distribution. The inverse function is

To further advance we need to model the burst itself, which grows exponentially at first, then grows even faster, at a super-exponential rate and finally saturates when all the network has fired. This behavior can be described by the function:

with Φ(t) growing as e^{αt} at early times, and beginning to diverge after a finite time.

For this profile

We see that m starts at k_{min}/(1 + α) and ends at a value set by k_{max}, for the distribution p_{k} ∝ k^{−2} with variable threshold.

The burst obtained for the k^{−2} power law distribution and variable threshold m(Φ).

In summary, we have shown here that the experimental situation of an exponential transient followed by super-exponential growth can be well described in our model by an in-degree distribution that behaves as k^{−2} at high k.


We are left with the question of how significant the exponent −2 in the power law distribution is, and whether small deviations from it would change the exponential growth. Is there any logical or biological reason for this power law to be built up?

On the experimental side, a search for the few highly connected neurons is needed. One possibility is that Leaders are neurons of a different species than that of the majority. Identifying them, investigating their properties and potentially intervening by disrupting their function are all important experimental goals.

In Eckmann et al. (

Within the Quorum Percolation model, high-k neurons are the natural candidates for the role of Leaders.

Looking only at the in-degree is, however, only part of the picture. Indeed, high out-degree may be just as important for spreading the activity once it has started.

An interesting question is what kind of growth and development process would lead to a distribution of in-degrees that is essentially a Gaussian centered on a value of about k_{cen} = 75, but has a tail that goes like p(k_{in}) ∼ k_{in}^{−2}, and can reach in-degrees in the thousands, k_{max} ∼ 4,700.

We propose the following simplified and intuitive geometrical picture for how k_{in} and k_{out} are determined (see Figure

Geometrical determination of k_{in} and k_{out}.

There are two corresponding length distributions, one for the axons and one for the dendritic trees.

The number of in-connections of a neuron is obtained by calculating the probability that an axon emitted from another neuron located a distance ℓ away crosses its dendritic tree (Figure ), giving k_{in} for a neuron with dendritic tree of size r:

Here ρ is the density of neurons per unit area (of order N/R^{2} for a culture of radius R), and 2r is the cross-section that the dendritic tree presents to a passing axon.
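The geometric ingredient, the chance that a straight axon shot in a random direction from distance ℓ crosses a dendritic disk of radius r, can be sketched by Monte Carlo; for r ≪ ℓ it is close to r/(πℓ), i.e., proportional to the 2r cross-section. The values r = 100 and ℓ = 1,000 (in μm) are illustrative assumptions, and the straight-axon geometry is a simplification of the picture in the text.

```python
import math, random

def crossing_probability(r, ell, n=200_000, seed=2):
    """Monte Carlo: a straight axon leaves the origin in a uniformly random
    direction; does it cross a dendritic disk of radius r centred a distance
    ell away?  (Assumes the axon is long enough to reach the disk.)"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        theta = rng.uniform(-math.pi, math.pi)
        # the ray hits the disk iff it points toward it and its perpendicular
        # distance from the disk centre is at most r
        if math.cos(theta) > 0 and ell * abs(math.sin(theta)) <= r:
            hits += 1
    return hits / n

r, ell = 100.0, 1000.0                   # illustrative sizes, in microns
p_mc = crossing_probability(r, ell)
p_exact = math.asin(r / ell) / math.pi   # = 2*asin(r/ell) / (2*pi)
```

The Monte Carlo estimate agrees with the exact arcsin(r/ℓ)/π to within statistical error, and both are close to the small-angle value r/(πℓ).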

We now insert for

The length distribution is smooth up to ℓ_{tail} and ∝ ℓ^{−2} for ℓ_{tail} < ℓ ≤ ℓ_{max}, with ℓ_{max} set by the size of the culture.

For ℓ < ℓ_{tail} we get,

while for ℓ_{tail} < ℓ < ℓ_{max}:

The integral over the length distribution is dominated by its upper cutoff (ℓ_{max}). Since the maximal length ℓ_{max} is determined by the size of the culture, so is the largest attainable in-degree.

We get that the number of in-connections is

We are now in a position to ask where the tail of high connections arises. In principle, it could arise from fluctuations in the density of axons passing by the dendritic tree, or from the distribution of dendritic tree sizes.

Thus we are led to the conclusion that the power law tail of k_{in} has its origin in the distribution of dendritic tree sizes.

While the possibility of trading elementary dynamics for static connectivity does not come as a surprise, the lesson we take from the results of Section

When only a small fraction Φ of the network is active, the chances that a given neuron is activated at a high frequency are low. Hence, the chances of changes in threshold as a result of synaptic or membrane dynamics are low as well. However, as the active fraction Φ of the network grows, the chance that a neuron is bombarded at high frequency becomes higher; we could then get a dependence m(Φ).

In many studies of biological networks, this ambiguity is somewhat neglected in favor of a more static view, largely due to lack of access to elementary level dynamics. In the case of neuronal excitability, single element dynamics is experimentally accessible, and the existence of dynamical thresholds is well documented. Our results in Section

Initiation of activity in 1D cultures seems to be very different from the Leaders scenario. In 1D cultures, we (Feinerman et al.,

One explanation for the difference in behavior of the BIZ and of Leaders is the dimensionality. However, the basic argument we presented relates the number of input connections to the product of the area of the dendritic tree and the density of the axons that cross this area. Since both the radius of the dendritic tree and the width of the line are on the order of 100 μm, there should be no difference in the first factor. As for the density of axons, there is no direct information, but also no compelling argument why 1D cultures should differ in this from 2D cultures.

A different possibility, and the one we believe to be correct, is simply that there are too few neurons in the culture (Tlusty and Eckmann, ) to populate the high-k tail of p_{k}.

The results of simulating excitation in varying numbers of neurons are given in Table

For small N, the realized k_{max} does not reach the theoretical prediction.

f | N | k_{max} Theory | k_{max} Realized
---|---|---|---
0.05 | 500 | 312 | 237
0.022 | 5,000 | 709 | 615
0.0076 | 50,000 | 2,053 | 1,795
0.0051 | 100,000 | 2,836 | 2,512
0.0038 | 500,000 | 4,680 | 4,680

We immediately see that indeed k_{max} grows, and the required f decreases, as N increases. For the distribution p_{k} ∝ k^{−2} both k_{max} and f scale approximately as powers of N, with f decreasing roughly as N^{−1/2}. We can conclude that within the Quorum Percolation model smaller cultures require a much larger fraction of initial activity to sustain a burst.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

We are indebted to Shimon Marom for many stimulating discussions and in particular for suggesting the equivalent effect of in-degree distributions and neuronal thresholds. We also thank Shimshon Jacobi, Maria Rodriguez Martinez and Jordi Soriano. This research was partially supported by the Fonds National Suisse, the Israel Science Foundation grants number 1320/09 and 1329/08, Minerva Foundation Munich Germany and the Einstein Minerva Center for Theoretical Physics. Olav Stetter acknowledges support from the German Ministry for Education and Science (BMBF) via the Bernstein Center for Computational Neuroscience (BCCN) Göttingen (Grant No. 01GQ0430).