Edited by: Andrew P. Davison, Centre national de la recherche scientifique, France
Reviewed by: Jochen M. Eppler, Research Center Jülich, Germany; Thomas Nowotny, University of Sussex, UK
*Correspondence: Marcel Stimberg, Equipe Audition, Département d'Etudes Cognitives, Ecole Normale Supérieure, 29 rue d'Ulm, 75230 Paris Cedex 05, France email:
This article was submitted to the journal Frontiers in Neuroinformatics.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of predefined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator.
Computational simulations have become an indispensable tool in neuroscience. Today, a large number of software packages are available to specify and carry out simulations (Brette et al.,
Defining new model components can be time-consuming and technically challenging, requiring the user to implement them in a low-level language such as C++, as in the NEST simulator (Gewaltig and Diesmann,
In this article, we present an approach that combines extensibility with ease of use by using mathematical equations to describe every aspect of the neural model, including membrane potential and synaptic dynamics. We implemented this approach in Brian2, the current development version of the Python-based Brian simulator (Goodman and Brette,
Simulating a neural model means tracking the change of neural variables such as membrane potential or synaptic weights over time. The rules governing these changes take two principal forms: continuous updates (e.g., the decay of the membrane potential back to a resting state in the absence of inputs) and event-based updates (e.g., the reset after a spike in an integrate-and-fire neuron, or the impact of a presynaptic spike on a postsynaptic cell). Generally, continuous updates can be described by deterministic or stochastic differential equations, while event-based updates can be described as a series of mathematical operations. In this unified framework, it is possible to specify a very wide range of model components. With different sets of equations, neuronal models can range from variants of integrate-and-fire neurons to arbitrarily complex descriptions referring to biophysical properties such as ion channels. In the same way, a wide range of synaptic models can fit in this framework: from simple static synapses to plastic synapses implementing short- or long-term plasticity, including spike-timing-dependent rules. Finally, mathematical expressions can also be used to describe neuronal threshold conditions or synaptic connections, weights and delays. In sum, this framework based on the mathematical definition of a neural network model seen as a hybrid system allows for expressiveness while at the same time minimizing the “cognitive load” for users, because they do not have to remember the names and properties of simulator-dependent objects and functions but can describe them in a notation similar to the notation used in analytical work and publications (Brette,
We then explain in section 3 how an equation-oriented description of models can be transformed into runnable code, using code generation. This code generation involves two steps. The first step applies to model components described by differential equations. In most simulators, the numerical integration method (such as forward Euler or Runge-Kutta) is either fixed or part of a model component definition itself. We propose instead to describe the integration method separately, in mathematical notation. Using the capabilities of the Python symbolic manipulation library SymPy (Joyner et al.,
Our approach also has important implications for the issue of reproducibility of simulation results: by making the equations underlying the model fully explicit, the source code also acts as a readable documentation of the model. In addition, giving the neural simulator access to mathematical descriptions of model equations or connection patterns allows for straightforward semi-automatic generation of model descriptions (see e.g., Nordlie et al.,
Neural models are described by state variables that evolve in time. Mathematically speaking, they are hybrid systems: variables evolve both continuously (e.g., the evolution of the membrane potential between action potentials) and discontinuously through events (e.g., the reset of the membrane potential after a spike in an integrate-and-fire model, or the effect of presynaptic spikes). To describe a model therefore requires a system that allows for both of these components. An event is a change in the state variables of the system that is triggered by a logical condition on these variables (possibly after a delay); spikes are the most obvious type of events in neural models. But more generally, there could be other types of events, for example changes triggered when some variable (e.g., intracellular calcium) reaches a threshold value. In addition, it is common that neural models, in particular integrate-and-fire models, have different states, typically excitable and non-excitable (refractory), with different sets of differential equations. An event can then not only trigger changes in state variables but also a transition in state.
It would be possible to make such a system extremely general by allowing for an arbitrary number of general states that a neuron can be in, conditions to change between the states and descriptions of the dynamic evolution within the states (as in NeuroML, Gleeson et al.,
In the following, we have made two simplifying choices: (1) there are only two states, active (excitable) and refractory (non-excitable); (2) there is a single type of event per state. In the active state, the only type of event is a spike, which triggers changes in the state variables of the neuron (reset) and its target synapses (propagation), as well as a transition to the refractory state. In the refractory state, the only type of event is a condition that triggers the transition back to the active state. This is indisputably restrictive, but was chosen as a reasonable compromise between simplicity and generality. However, the framework could be extended to more general cases (see Discussion).
Finally, another important aspect of neural models is that some state variables represent physical quantities (e.g., the membrane potential, the membrane capacitance, etc.) that have physical units while others correspond to abstract quantities that are unitless. Therefore, to be fully explicit and to avoid any errors when dealing with variables in various units and scales, a description system should allow the user to explicitly state the units in all expressions that deal with state variables.
In sum, the description of a neuron model consists of the following four components: the model equations, the threshold condition, the reset statements and the refractoriness condition. Model equations define all the state variables with their units and describe their continuous evolution in time. The threshold condition describes when an action potential should be emitted and when the reset statements should be executed. Finally, the refractoriness condition describes under which condition a neuron can exit the refractory state. We explain in section 2.1.3 how to specify different dynamics in the refractory state. We will show that this four-component description allows for flexible specification of most neuronal models while still being simple enough to be automatically transformable into executable code.
Model equations of point neurons are most naturally defined as a system of ordinary differential equations, describing the evolution of state variables. The differential equations can be nonautonomous (depend on the time
As an example, consider the following equations defining a Hodgkin-Huxley model with a passive leak current and active sodium and potassium currents for action potential generation (omitting the equations for the rate variables
Since this model includes the action potential generation in its dynamics, it does not need a threshold or reset, except for recording or transmitting spikes.
The model equations can be readily transformed into a string description (including information about the units and general comments), see Figure
If a state variable should evolve stochastically, this can be modeled by including a reference to a stochastic white noise process ξ (or several independent such processes ξ_{1}, ξ_{2}, …). The inclusion of a stochastic process in the model equations means that the differential equations are now stochastic, with important consequences for their numerical integration (see section 3). Figure
Integrate-and-fire models require a threshold condition and one or more reset statements. A simple leaky integrate-and-fire neuron, for example, can be described as:
Reset statements are not restricted to resetting variables but can perform arbitrary updates of state variables. Similarly, the threshold condition is not restricted to comparing the membrane potential to a fixed value; more generally, it is a logical expression evaluated on the state variables. For example, a leaky integrate-and-fire neuron with an adaptive threshold could be described by the equations:
This model increases the threshold by 3 mV after every spike; between spikes, it decays back to the resting value
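For concreteness, the threshold-and-reset mechanism of such an adaptive-threshold integrate-and-fire model can be sketched in plain Python (this is not Brian2 syntax, and all parameter names and values, such as the 10 ms membrane time constant, are illustrative):

```python
def simulate_adaptive_lif(i_ext, dt=1e-4, t_end=0.5, tau=0.01, tau_t=0.1,
                          v_rest=-0.07, v_reset=-0.07, vt_rest=-0.05):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron with
    an adaptive threshold: dv/dt = (v_rest - v + i_ext)/tau and
    dv_t/dt = (vt_rest - v_t)/tau_t between spikes; the threshold
    condition is v > v_t and the reset statements set v and raise v_t.
    All values are in SI units and purely illustrative."""
    v, v_t = v_rest, vt_rest
    spike_times = []
    for step in range(int(round(t_end / dt))):
        # continuous updates (forward Euler)
        v += dt * (v_rest - v + i_ext) / tau
        v_t += dt * (vt_rest - v_t) / tau_t
        # threshold condition triggers the reset statements
        if v > v_t:
            spike_times.append(step * dt)
            v = v_reset
            v_t += 0.003  # adaptive threshold: raised by 3 mV per spike
    return spike_times
```

With a constant drive, the inter-spike intervals lengthen from spike to spike as the threshold accumulates, which is the adaptive behavior described above.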
In integrate-and-fire models, the fact that a neuron is not able to generate a second action potential for a short time after a first one is modeled explicitly (rather than following from the channel dynamics described in the differential equations). In contrast, Hodgkin-Huxley-type models only use a threshold for detecting spikes, and refractoriness to prevent the detection of multiple spikes for a single threshold crossing (the threshold condition would evaluate to
A simple formulation of refractoriness that allows some flexibility is to consider that the exit from refractoriness is defined by a logical expression that may depend on state variables and the time of the last spike. In Brian2, the latter is stored in the special variable
Finally, the set of differential equations could be different in the refractory state. Most generally, this could be described by two different sets of equations. However, neural models generally implement refractoriness in only two ways: either some or all state variables are clamped to a fixed value, or the state variables are allowed to continue to change but threshold crossings are ignored. Only the former case (clamped variables) requires new syntax. We propose to simply mark certain differential equations as non-changing during the refractory period. This can be an explicit syntax feature of the description language (as in Brian2, see Figure
The description of synaptic models has very similar requirements to the description of neuronal models: synaptic state variables may evolve continuously in time and undergo instantaneous changes at the arrival of a presynaptic or postsynaptic spike. A synapse connects a presynaptic neuron to a postsynaptic neuron, and can have synapse-specific variable values. Events can be presynaptic spikes or postsynaptic spikes, and they can trigger changes in synaptic variables, presynaptic neural variables or postsynaptic neural variables. With these specifications, describing synaptic models should follow a similar scheme to the one used for neural models: the continuous evolution of the synaptic state variables is described by differential equations, “pre” and “post” statements describe the effect of a pre- or postsynaptic spike. In contrast to neural models, there is no need for a threshold condition since action potentials are emitted from the pre-/postsynaptic neurons according to their threshold conditions.
A very simple synaptic model might not define any equation and only add a constant value to a postsynaptic conductance or current (or the membrane potential directly) on every presynaptic spike, for example
Probabilistic synapses can be modeled by introducing a source of randomness in the “pre” statement. If
In the most general formulation, however, the evolution of the synapses' state variables has to be described by differential equations, in the same way as neuronal model equations. By allowing these equations, as well as the “pre” and “post” statements, to refer to pre- and postsynaptic variables, a variety of synaptic models can be implemented, including spike-timing-dependent plasticity and short-term plasticity. For example, models of spike-timing-dependent plasticity rules make use of abstract “traces” of pre- and postsynaptic activity. Such a model (Song et al.,
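A clock-driven sketch of such a trace-based plasticity rule in plain Python (the trace and weight updates follow the general scheme described above; all parameter values are illustrative, not those of the original model):

```python
import math

def stdp_step(state, dt, pre_spike=False, post_spike=False,
              tau_pre=0.02, tau_post=0.02, d_apre=0.01,
              d_apost=-0.0105, w_max=1.0):
    """One clock-driven time step of a plastic synapse with pre- and
    postsynaptic traces: dapre/dt = -apre/tau_pre and
    dapost/dt = -apost/tau_post evolve continuously; the "pre" statement
    increases apre and adds apost to the weight, the "post" statement
    does the converse. All parameter values are illustrative."""
    w, apre, apost = state
    # continuous update: exact exponential decay of the traces
    apre *= math.exp(-dt / tau_pre)
    apost *= math.exp(-dt / tau_post)
    if pre_spike:   # "pre" statement
        apre += d_apre
        w = min(max(w + apost, 0.0), w_max)
    if post_spike:  # "post" statement
        apost += d_apost
        w = min(max(w + apre, 0.0), w_max)
    return (w, apre, apost)
```

Driving the synapse with a presynaptic spike shortly before a postsynaptic one potentiates the weight, while the reverse order depresses it.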
Equivalently, the differential equations can be analytically integrated between spikes, allowing for an event-driven and therefore faster simulation (e.g., Brette,
The transformation from differential equations to event-driven updates can be done automatically using symbolic manipulation. In Brian2, this is implemented for certain analytically solvable equations, in particular systems of independent linear differential equations (see Figure
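The event-driven strategy can be illustrated for a single exponentially decaying synaptic variable, where the analytical solution is applied only at spike arrivals (a plain-Python sketch with illustrative names):

```python
import math

def event_driven_trace(spike_times, tau=0.01, increment=1.0):
    """Event-driven update of a synaptic variable obeying dg/dt = -g/tau,
    where every presynaptic spike adds `increment`. Instead of updating g
    at every time step, the analytical solution
    g(t) = g(t_last) * exp(-(t - t_last)/tau) is applied only when a
    spike arrives. Returns the value of g just after each spike."""
    g, t_last, values = 0.0, 0.0, []
    for t in spike_times:
        g *= math.exp(-(t - t_last) / tau)  # advance analytically
        g += increment                      # event-based update
        t_last = t
        values.append(g)
    return values
```

Because the value of g is only needed when a spike arrives, no work is done between events, which is what makes the event-driven scheme faster.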
The framework presented so far is insufficient for two important cases, however. One is nonlinear synaptic dynamics such as models of NMDA receptors. In this case, individual synaptic conductances must be simulated separately and then summed into the total synaptic conductance. In such a model, the total NMDA conductance of a single neuron can be described as follows (e.g., Wang,
Another important case is gap junctions. In this case, the synaptic current is a function of the pre- and postsynaptic potentials, which can be expressed in the previously introduced framework, and then all synaptic currents must be added in a neuronal variable representing the total current.
Both cases can be addressed by marking every relevant synaptic variable (NMDA conductance, gap junction current) so that the sum over this variable should be taken for all synapses connecting to a postsynaptic neuron and copied over to the corresponding postsynaptic state variable at each simulation time step. See Figure
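The summation step itself amounts to the following operation, sketched here in plain Python (the names are illustrative):

```python
def summed_variable(synapse_targets, synapse_values, n_neurons):
    """Compute a per-neuron total from a per-synapse variable (e.g., an
    NMDA conductance or a gap junction current): the variable is summed
    over all synapses sharing a postsynaptic target and written into the
    corresponding neuronal variable, once per simulation time step."""
    totals = [0.0] * n_neurons
    for target, value in zip(synapse_targets, synapse_values):
        totals[target] += value
    return totals
```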
We consider the following specifications for synaptic connections: each synapse is defined by its source (presynaptic neuron) and target (postsynaptic neuron); there is a transmission delay from source to synapse, and another one from target to synapse (for the backpropagation needed, for example, in spike-timing-dependent plasticity rules); there may be several synapses for a given pair of pre- and postsynaptic neurons.
There are several approaches to the problem of building the set of synaptic connections, each having certain advantages and disadvantages. A set of predefined connectivity patterns such as “fully connected,” “one-to-one connections,” “randomly connected,” etc. does not allow us to capture the full range of possible connection patterns. In addition, such patterns are not always clearly defined: for example, if neurons in a group are connected to each other randomly, does that include self-connections or not (cf. Crook et al.,
An alternative approach that offers expressivity, explicitness and concise description of connectivity patterns is to use mathematical expressions that specify: (1) whether two neurons
The first expression is a boolean expression that evaluates to
For example, if the location of neurons in the 2D plane is stored as neural state variables
The same framework can be used to specify connection probability, possibly in combination with conditions described above. For example, the condition
Finally, the expression syntax can also be used to create more than one synapse for a pre/postsynaptic pair of neurons (useful for example in models where a neuron receives several inputs from the same source but with different delays).
See Figure
To make the description complete, the initial value of state variables must be set. Many models include state variables that differ across neurons from the start in a systematic (e.g., synaptic weights or delays might depend on the distance between two neurons) or random way (e.g., initial membrane potential values). Such descriptions can be expressed using the very same formalism that has been presented so far (for Brian2 syntax, see Figure
For synaptic variables, references to pre- and postsynaptic state variables can be used to express values depending on the neurons that are connected via the synapse. For example, synaptic delays might be set to depend on the distance of the involved neurons as
Setting state variables with textual descriptions instead of assigning values directly using the programming language syntax may seem to be a questionable choice. However, it offers at least two advantages: firstly, it allows the code that sets the variable to be generated and executed on another device, e.g., a GPU, instead of having to copy over the generated values (see section 3); secondly, it allows a semi-automatic model documentation system to generate meaningful descriptions of the initial values of a state variable (see section 4).
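A minimal sketch of such string-based initialization, using Python's eval as a stand-in for the generated code (the names i and rand() mirror the conventions described above but are illustrative):

```python
import math
import random

def set_state_variable(n, expression, namespace=None):
    """Evaluate a textual expression once per neuron to initialize a
    state variable. The name `i` stands for the neuron index and
    `rand()` draws a uniform number in [0, 1); mathematical functions
    such as sqrt and exp are available. An illustrative stand-in for
    the code generation described in section 3."""
    ns = {**vars(math), 'rand': random.random, **(namespace or {})}
    values = []
    for i in range(n):
        ns['i'] = i
        values.append(eval(expression, {'__builtins__': {}}, ns))
    return values
```

For example, set_state_variable(100, 'v0 + rand() * 0.01', {'v0': -0.07}) draws a randomly jittered initial membrane potential for each of 100 neurons.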
To simulate a neural model means to track the evolution of its variables in time. As shown in the previous section, these dynamical changes consist of three components:
Continuous updates are specified in the form of equations that first have to be combined with a numerical integration method to yield abstract code (see Figure
Most neural models are based on equations that are not analytically solvable. The standard approach is therefore to use numerical integration and calculate the values at discrete time points. Many wellstudied integration methods exist, allowing for different tradeoffs between computational complexity and precision. Often, the provided numerical integration methods are either an integral part of the simulation tool (e.g., in Neuron, Carnevale and Hines,
Here we show a new approach implemented in Brian2, in which a mathematical formulation of an integration method can be combined with the description of the neural model to yield abstract code that is later transformed into target language code using a common “abstract code to language code” framework.
Explicit integration methods can be described by a recursive scheme providing the values at discrete time steps. For example, the “midpoint method” (second-order Runge-Kutta) calculates the
We specify this integration scheme using the following description, defining a name for a subexpression to avoid nested references to the function
The
Let us consider a model equation with two state variables, describing a neuron with an adaptation current:
The integration method and the model equations are combined and transformed into abstract code using SymPy, according to algorithm 1.
Algorithm 1 | Combining model equations and numerical integration method description to yield abstract code. Here, the differential equations in vector form are dx/dt = f(x, t). For each statement of the integration method description: expand every reference to the function f by substituting the right-hand sides of the model equations; replace the vector x by the individual state variables; and append each component σ_{v} of the transformed statement to the abstract code. Finally, append the assignments of the updated values to the state variables.
Combining the midpoint method and the neuronal equations from above according to this algorithm works as follows: the model equations are written in vector form; each statement σ of the method description is expanded by substituting the model's right-hand sides for f; and after replacing the vector x by the individual state variables, the per-component statements are appended to the abstract code.
The full abstract code then reads (using names starting with underscores to denote the variables
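As a simplified illustration of this expansion step, the following plain-Python sketch combines a single-variable model dv/dt = -v/tau with the midpoint method description, using textual substitution in place of the SymPy-based manipulation used in Brian2 (the statement syntax and names are illustrative):

```python
import re

def apply_method(method_lines, var, rhs):
    """Combine the model equation d<var>/dt = <rhs> with a textual
    description of an explicit integration method, yielding abstract
    code. Occurrences of f(<state>, <time>) are expanded by substituting
    the right-hand side evaluated at the given arguments; this
    single-variable, purely textual version is a simplified stand-in for
    the SymPy-based algorithm (the argument expressions must not
    themselves contain parentheses)."""
    abstract_code = []
    for line in method_lines:
        lhs, expr = (s.strip() for s in line.split('=', 1))

        def expand(match):
            state_arg = match.group(1).strip()
            time_arg = match.group(2).strip()
            # substitute the evaluation point into the right-hand side
            expanded = re.sub(r'\bt\b', '(%s)' % time_arg, rhs)
            expanded = re.sub(r'\b%s\b' % re.escape(var),
                              '(%s)' % state_arg, expanded)
            return '(%s)' % expanded

        expr = re.sub(r'f\(([^,()]+),([^()]+)\)', expand, expr)
        if lhs == 'x_new':   # the final statement updates the state itself
            lhs = var
        abstract_code.append('%s = %s' % (lhs, re.sub(r'\bx\b', var, expr)))
    return abstract_code
```

Applied to the two statements of the midpoint method description, this yields two abstract code statements: an assignment to the intermediate variable k followed by the update of v itself.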
The same procedure can also be applied to stochastic differential equations; a description of a state updater in this case looks as follows (Euler-Maruyama method):
The function
In an equation defining a simple integrate-and-fire neuron with additive noise
The
In the case of more than one stochastic variable (which can be used to model shared noise between variables), the stochastic part of the state updater description is understood as being additive over all stochastic variables. For example, in the case of two stochastic variables, the integration method described above is read as
Therefore, the following equation with two stochastic variables
will be integrated as:
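In plain Python, the Euler-Maruyama scheme for a single stochastic variable can be sketched as follows (each step adds the deterministic drift and a Gaussian random number scaled by the square root of the time step; the function names and interface are illustrative):

```python
import math
import random

def euler_maruyama(v0, f, g, dt, n_steps, rng=random):
    """Integrate the scalar stochastic differential equation
    dv = f(v) dt + g(v) xi sqrt(dt) with the Euler-Maruyama method:
    every step adds the deterministic drift dt * f(v) and a Gaussian
    random number scaled by g(v) * sqrt(dt). `f` and `g` are callables;
    the whole interface is an illustrative sketch."""
    v = v0
    for _ in range(n_steps):
        v = v + dt * f(v) + g(v) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return v
```

With g set to zero, the scheme reduces to the forward Euler method for the deterministic part, as expected.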
“Abstract code” is also used for updates that are triggered by specific events, typically a spike, either in a neuron itself or in a pre- or postsynaptic neuron in the context of synapses. In contrast to the model equations, this code is not a mathematical formulation but an explicit statement on how variables should be changed. For example, the reset of an integrate-and-fire neuron might simply state
Abstract code is currently restricted to simple statements of the form:
where
The final remaining building block for model definitions are
Building connections from the synaptic connection descriptions presented previously is straightforward: two nested loops iterate over all possible values of
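The nested-loop construction can be sketched in plain Python, using eval as a stand-in for the generated code (the names i and j and the keyword arguments are illustrative):

```python
import math
import random

def connect(n_pre, n_post, condition, p=1.0, namespace=None, rng=random):
    """Build the synapse list by evaluating a textual condition for every
    pair of a presynaptic index i and a postsynaptic index j, creating a
    synapse with probability p wherever the condition holds. The use of
    `eval` and the names used here are an illustrative stand-in for the
    generated code."""
    ns = {**vars(math), **(namespace or {})}
    synapses = []
    for i in range(n_pre):          # loop over presynaptic neurons
        for j in range(n_post):     # loop over postsynaptic neurons
            ns['i'], ns['j'] = i, j
            if eval(condition, {'__builtins__': {}}, ns) and rng.random() < p:
                synapses.append((i, j))
    return synapses
```

Passing neuron positions in the namespace allows distance-dependent patterns, e.g., connect(3, 3, 'i != j and (x[i] - x[j])**2 < d**2', namespace={'x': positions, 'd': 2.0}).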
Note that in languages supporting vectorization (e.g., in Python with the NumPy libraries (Oliphant,
Setting state variable values with string expressions does not require any specific mechanism and can use the existing code generation techniques. In particular, setting a state variable of a group of neurons can be implemented in the same way as the reset (it can be thought of as a reset that only happens once, not after every spike) and setting state variables of synapses can be implemented in the same way as the effect of a pre- or postsynaptic spike.
The abstract code that is generated from the combination of model equations and state updater descriptions or directly provided for eventtriggered state updates mostly follows Python syntax conventions but is not directly executable as such. It describes what operations should be executed in the context of a given neuron or synapse, but the implementation may use vectorization or parallelization over neurons/synapses (e.g., in Python, see Brette and Goodman,
Let us investigate a simple code statement, resulting from applying forward Euler integration to an integrate-and-fire model with an adaptive current (same example as in the beginning of section 3.1):
If we let
Code in other languages, e.g., C++, does not have built-in support for vectorization and therefore has to loop explicitly. Still, the main state update code can be left intact by surrounding it with
Thus, the transformation from abstract code to target code consists of a model-independent template (responsible for the
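The loop template can be sketched in plain Python, executing abstract code once per neuron over arrays of per-neuron values (an illustrative stand-in for the generated C++ loop, not Brian2's actual machinery):

```python
def run_abstract_code(statements, variables):
    """Execute abstract code once per neuron by looping over arrays of
    per-neuron values, mimicking the for-loop template used for C++
    targets; scalar entries such as `dt` are shared by all neurons.
    An illustrative sketch only."""
    n = len(next(v for v in variables.values() if isinstance(v, list)))
    code = compile('\n'.join(statements), '<abstract code>', 'exec')
    for idx in range(n):
        # gather scalar views of the per-neuron variables
        scope = {k: (v[idx] if isinstance(v, list) else v)
                 for k, v in variables.items()}
        exec(code, {'__builtins__': {}}, scope)
        # scatter the updated values back into the arrays
        for k, v in variables.items():
            if isinstance(v, list):
                v[idx] = scope[k]
    return variables
```

The abstract code itself stays unchanged; only the surrounding gather/loop/scatter template is target-specific, which is the separation described above.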
More details on the code generation mechanism can be found in Goodman (
Even though it is now considered best practice for publications in computational neuroscience to make the source code that was used to generate the published results available, a simulator-independent description of the simulation is still valuable. Firstly, it is more accessible, particularly for researchers not familiar with the given simulation environment and/or programming language. Secondly, it simplifies reproducing and verifying the result with other tools.
There are two main approaches to this issue: first, the whole model can be specified in an abstract specification language such as NeuroML (Gleeson et al.,
The techniques presented in this paper allow for a third approach: since the simulator operates on highlevel descriptions of the model in the form of strings, it is possible to create model descriptions automatically. For example, by virtue of SymPy's
Included in a
This “rich representation” of models not only makes it easier to generate useful model descriptions but can also help in preventing mistakes when generating them; a description that is directly generated from code is always “in sync” with it.
Models are not only defined by their equations, but also by parameter values. For simple parameters, e.g., the time constant τ_{m} from above, most simulators would allow for a convenient readout of the values and therefore be able to display them with name and value in a table, for example. The situation is different for values that have to be described as a vector of values instead of a scalar, e.g., a τ_{m} that varies across neurons. Suppose we have a group
where
In contrast, consider the following assignment, providing the expression as a string:
where
While a
We have described a general framework for defining neural network models, which is based essentially on mathematical equations. It consists of a formalism for defining state variables including their physical units, differential equations describing the dynamics of state variables, conditions on state variables that trigger events, and event-triggered statements that change the state variables discontinuously.
We think that such a mechanism has several advantages over the approach of writing models based on a fixed library of models and mechanisms that can only be extended by writing new descriptions in a low-level language: the equation-oriented framework allows for straightforward descriptions of models; it is explicit about details of the model; by relying on common mathematical notation it does not require the user to learn any special syntax or names used for models and mechanisms.
Not all models can be expressed in the framework we have presented, and we will now try to list these limitations. Neuron models were constrained to have only two excitability states, active and refractory, instead of an arbitrary number of states with transitions between them. This is not a fundamental limitation of an equationoriented approach, but rather a choice that substantially simplifies the syntax.
The framework also neglects the spatial dimension, which would be important for simulating the cable equation in models with an extended spatial morphology, or for simulating calcium dynamics. While a small number of compartments could already be simulated in the current framework (by using the equations for the equivalent electrical model), a complex multi-compartment simulation can only be described in a simple way with explicit support for this scenario.
Regarding synaptic connections, although a fairly diverse set of synaptic models can be expressed with our framework, there are at least two limitations: structural plasticity and development (requiring the addition and removal of connections during a simulation), and heterosynaptic plasticity (which cannot be expressed in the presented framework, as it describes changes in individual synapses independently of other synapses). Extending the framework to cover these cases is not impossible but would require substantial additions.
Finally, the proposed method of specifying the numerical integration method is designed for the general case where the equations cannot be integrated analytically. In Brian2, however, we also allow for special integrators for specific cases such as linear equations that can be integrated exactly using a matrix exponential (Hirsch and Smale,
The NineML description language uses string descriptions of differential equations, conditions and statements in a similar way to the approach presented here. However, due to the use of XML-based definitions and the decision to allow an arbitrary number of states and transition conditions, it is much more verbose in the common use cases and therefore more difficult to use for interactive exploration and rapid development. NineML and the approach presented here are not incompatible but rather complementary. It would be possible to automate the creation of a Brian2 simulation from a NineML description or vice versa.
For describing connectivity patterns, Djurfeldt (
The framework presented allows for a wide variety of models with a minimal and unobtrusive syntax. However, we also plan to further increase its expressivity: the restriction to two neural states can be lifted without sacrificing simplicity by supporting multiple event types, each with a condition and a list of statements to be executed. This indirectly allows for an arbitrary number of states, since the state could be represented by a neural variable and equations could then depend on this value. The textual descriptions of numerical integration methods are currently restricted to explicit methods that only refer to the previous simulation time step. The same formalism could be quite naturally extended to implicit methods (e.g., the backward Euler method:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This work was supported by ANR-11-0001-02 PSL*, ANR-10-LABX-0087 and ERC StG 240132.
^{1}The code generated by Brian2 is a bit more complicated, including optimizations using