Edited by: Giacomo Indiveri, University of Zurich, Switzerland

Reviewed by: Thomas Nowotny, University of Sussex, United Kingdom; Timoleon Moraitis, Huawei Technologies, Switzerland; Gopalakrishnan Srinivasan, MediaTek, Taiwan

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. In some learning rules membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses therefore require information on the full time course of membrane potentials to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders scaling of simulations to realistic cortical network sizes and relevant time scales for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze advantages in terms of memory and computations. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs strongly between the rules, a strong performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving this information provide guidelines for the design of learning rules, so that they remain practically usable in large-scale networks.

One mechanism for learning in the brain is implemented by changing the strengths of connections between neurons, known as synaptic plasticity. Already early on, such plasticity was found to depend on the activity of the connected neurons. Donald Hebb postulated the principle "Cells that fire together, wire together" (Hebb, 1949).

In recent years, a new class of biologically inspired plasticity rules has been developed that takes into account the membrane potential of the postsynaptic neuron as an additional factor (for a review, see Mayr and Partzsch, 2010).

Further inspiration for recently proposed plasticity rules originates from the field of artificial neural networks. These networks showed great success in the past decade, for example in image or speech recognition tasks (Hinton et al., 2012).

Research on functionally inspired learning rules in biological neural networks is often led by the requirement to implement a particular function rather than by efficiency. Present studies are therefore primarily designed to prove that networks with a proposed learning rule minimize a given objective function. Indeed, many learning rules are rather simple to implement and to test in small-scale simulations.

In parallel to the above efforts are long-term developments of simulation software for biological neural networks (for a review, see Brette et al., 2007).

Modern network simulators use individual objects for different neurons and synapses. One common strategy of parallelization is to distribute these objects across many compute processes (Lytton et al., 2016).

Update schemes for neurons and synapses.

An event-based scheme for synapses is perfectly suitable for classical STDP rules, which only rely on a comparison between the timings of spike events. In these rules, synaptic weights formally depend on spike traces, which are continuous signals that are fully determined by the spike timings of pre- and postsynaptic neurons and which can be updated at the time of spike events. Optimizations of simulations including STDP have been extensively discussed (Song et al., 2000).
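The event-based handling of such a spike trace can be sketched in a few lines. The following is a minimal illustration (not NEST code), assuming a single exponentially decaying trace with an arbitrary time constant: the trace is stored together with the time of its last update and decayed analytically whenever a spike event occurs, so no per-time-step processing is needed.

```python
import math

TAU = 20.0  # trace time constant in ms; illustrative value


def decayed(trace, t_last, t_now, tau=TAU):
    """Value of an exponentially decaying spike trace at t_now,
    given its value at the previous update time t_last."""
    return trace * math.exp(-(t_now - t_last) / tau)


class SpikeTrace:
    """Spike trace updated only at spike events (event-driven)."""

    def __init__(self):
        self.value = 0.0
        self.t_last = 0.0

    def on_spike(self, t):
        # decay analytically since the last event, then increment for the new spike
        self.value = decayed(self.value, self.t_last, t) + 1.0
        self.t_last = t

    def read(self, t):
        # read out the trace at an arbitrary time without storing its full course
        return decayed(self.value, self.t_last, t)
```

Because the decay is evaluated in closed form, the trace is exact at every readout time even though it is touched only at spike events.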

We here focus on more complex voltage-based learning rules which not only rely on membrane potentials at the time points of spike events, but on an extended history of membrane potentials. For these rules synapses continuously require information from the postsynaptic neurons in order to update their weights (Clopath et al., 2010; Urbanczik and Senn, 2014).

In this study we present an efficient archiving method for the history of postsynaptic state variables that allows for an event-based update of synapses and thus makes complex voltage-based plasticity rules compatible with state-of-the-art simulation technology for spiking neural networks. In particular, we derive two event-based algorithms that store a time-continuous or discontinuous history, respectively. These algorithms apply to plasticity rules with any dependence on postsynaptic state variables and therefore cover a large range of existing models (Brader et al., 2007).

The presented simulation concepts are exemplified and evaluated in a reference implementation in the open source simulation code NEST (Gewaltig and Diesmann, 2007).

To exemplify the general simulation algorithms, we here focus on the voltage-based plasticity rules by Clopath et al. (2010) and Urbanczik and Senn (2014).

Our study begins with a specification of the mathematical form of the learning rules that we consider (section 2.1). We distinguish between classical STDP (section 2.2) and voltage-based rules (section 2.3) and present a special case where voltage-based rules can be efficiently implemented by compressing information on the postsynaptic membrane potential. We then introduce the Clopath and the Urbanczik-Senn rule as two examples of voltage-based plasticity (sections 2.4 and 2.5). In section 3 we first contrast time- and event-driven schemes for updating synapses with voltage-based plasticity (section 3.1). Subsequently, we detail a reference implementation of the algorithms in NEST (section 3.2) and use this to reproduce results from the literature (section 3.3). After that, we examine the performance of the reference implementation for the Clopath and the Urbanczik-Senn rule (section 3.4). Conclusions from the implementation of the two rules are drawn in section 3.5, followed by a general Discussion in section 4. The technology described in the present article is available in the 2.20.1 release of the simulation software NEST as open source. The conceptual and algorithmic work is a module in our long-term collaborative project to provide the technology for neural systems simulations (Gewaltig and Diesmann, 2007).

The focus of this study is plasticity models of the general form

where the change in the synaptic weight w_{ij} between the presynaptic neuron j and the postsynaptic neuron i is given by a functional F of the weight w_{ij}(t), the postsynaptic spike train s_i, the presynaptic spike train s_j, and the postsynaptic membrane potential u_i, respectively. In general, F can be a sum of several functionals F_{α}, where each one depends on spike trains and membrane potentials in a different manner. Note also that the dependence may be on filtered versions of these signals rather than on the raw quantities.

Voltage-based plasticity rules. The change Δw_{ij} in synaptic strength between presynaptic neuron j and postsynaptic neuron i is a functional of the presynaptic spike train s_j, the postsynaptic spike train s_i, and the postsynaptic membrane potential u_i.

One can formally integrate (1) to obtain the weight change between two arbitrary time points t and t′

In general, the integral on the right hand side of the equation cannot be calculated analytically. There is, however, a notable exception, which is the model of spike-timing dependent plasticity (STDP). This model is a form of Hebbian plasticity that relies on the exact spike times of pre- and postsynaptic neurons and ignores any effect of the postsynaptic membrane potential. The dependence on the exact spike times becomes apparent by the fact that either the pre- or postsynaptic spike functional is the spike train itself, for example

where

with functions f_{±} that model the weight dependence, and filter functionals κ_{±}, which in the classical STDP rule correspond to one-sided exponential decays. The appearance of the raw spike trains (delta distributions) in the differential equation of the STDP model renders the integration of the ODE trivial

where

For models that do not solely rely on exact spike times, but for example on filtered versions of the spike trains, much more information is needed in order to calculate a weight update Δw_{ij}(t, t′).

In a time-driven neuron update, the membrane potential in many simulators is computed at each simulation step t^{α} = α · h, with h the simulation resolution. The weight update then takes the form

which, in comparison to the small sum over spikes in the STDP rule (5), contains a large sum over all time steps t^{α} in between the time points t and t′. As the membrane potential is computed on the grid t^{α}, it generally enters (7) in a piecewise constant manner – hence the argument u_i(t^{α}). The synapse therefore predominantly needs information of the postsynaptic neuron in order to update its weight. Thus, in a distributed simulation framework, where neurons are split across multiple compute processes, it is beneficial to store the synapses at the site of the postsynaptic neurons in order to reduce communication (Morrison et al., 2005).

If weight changes Δw_{ij} depend on the synaptic weight itself, then (7) cannot be used in practice as the intermediate weights w^{α} in between t and t′ are not known. In this scenario, weight changes have to be calculated step by step on the simulation grid with (8).
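Such a grid-based update can be illustrated with a minimal sketch; the weight-dependent rule F(w, t^{α}) = f^{α} · w used here is an assumption for illustration, not one of the rules treated later.

```python
def integrate_weight(w0, f_values, dt):
    """Grid-based weight update when dw/dt depends on the weight itself:
    w^{a+1} = w^a + dt * F(w^a, t^a), illustrated with the assumed rule
    F(w, t^a) = f_values[a] * w (explicit Euler on the simulation grid)."""
    w = w0
    for fa in f_values:
        # the intermediate weight w^a enters its own increment,
        # which is why the update cannot be collapsed into one integral
        w += dt * fa * w
    return w
```

For a constant factor f the scheme reduces to repeated multiplication by (1 + f·dt), making the dependence of each step on the previous intermediate weight explicit.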

Given that s_i and s_j are spike trains, the functionals

The major operation of the plasticity scheme in terms of frequency and complexity is the computation of infinitesimal weight changes

defined via the Heaviside step function

where t_{LS} ≤ t < t_S and t_{LS} denotes the last spike time of the presynaptic neuron. In this case the weight update in between two spike events factorizes

where the latter integral Δ_i(t_{LS}, t_S) is independent of the presynaptic spike train and depends on t_{LS} only via an exponential prefactor. Thus, an integral Δ_i(t_1, t_2) over an arbitrary time interval t_{LS} ≤ t_1 < t_2 ≤ t_S, which is completely independent of any presynaptic information, can be used as a part of the whole integral Δ_i(t_{LS}, t_S) since it can be decomposed as

Therefore, whenever an integral of the postsynaptic quantities is required for one of the last spike times t_{LS} of the incoming connections, the postsynaptic neuron stores the partial integrals Δ_i(t_{LS}, t). Later, the complete integral Δ_i(t_{LS}, t_S) can be read out by the synapse for the correct t_{LS} of that synapse and be combined with the stored presynaptic spike trace.
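The decomposition can be checked numerically. The sketch below is illustrative (the function names and the postsynaptic factor are assumptions): an integral referenced to the last spike time t_LS equals the same integral referenced to t_1, rescaled by the exponential prefactor.

```python
import math

TAU_S = 10.0  # presynaptic trace time constant in ms; illustrative value


def post_integral(f_post, t_a, t_b, t_ref, dt=0.01):
    """Approximate the integral over [t_a, t_b] of
    exp(-(t - t_ref)/TAU_S) * f_post(t) by the midpoint rule."""
    n = int(round((t_b - t_a) / dt))
    acc = 0.0
    for k in range(n):
        t = t_a + (k + 0.5) * dt
        acc += math.exp(-(t - t_ref) / TAU_S) * f_post(t) * dt
    return acc


# an arbitrary postsynaptic factor (e.g., a thresholded membrane potential)
f = lambda t: 1.0 + 0.5 * math.sin(t)

t_LS, t_1, t_2 = 0.0, 3.0, 7.0
direct = post_integral(f, t_1, t_2, t_LS)
# the same integral computed without presynaptic information (referenced
# to t_1) and rescaled by the exponential prefactor exp(-(t_1 - t_LS)/TAU_S)
reused = math.exp(-(t_1 - t_LS) / TAU_S) * post_integral(f, t_1, t_2, t_1)
```

Both evaluations agree to numerical precision, which is exactly what allows the neuron to pre-integrate its potential once and let each synapse attach its own exponential prefactor.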

The Clopath rule (Clopath et al., 2010) augments STDP by a dependence on the postsynaptic membrane potential.

The plasticity rule is of the general form (1) with a sum of two different functionals F_{α} on the right hand side. It treats long-term depression (LTD) and potentiation (LTP) of the synaptic weight in the two terms F_{LTD} and F_{LTP}, with

and

Here (x − x_0)_+ = (x − x_0)Θ(x − x_0) is the threshold-linear function and A_{LTD} and A_{LTP} are prefactors controlling the relative strength of the two contributions. κ_{±} are exponential kernels of the form (9), which are applied to the postsynaptic membrane potential, and κ_s is an exponential kernel applied to the presynaptic spike train. The time-independent parameters θ_{±} serve as thresholds below which the (low-pass filtered) membrane potential does not cause any weight change. The amplitude A_{LTP} can also depend on the membrane potential; this case is described in the Supplementary Material.

Illustration of the LTP contribution to the Clopath rule. A presynaptic and a postsynaptic neuron spike at t_{sp,pre} = 4 ms and t_{sp,post} = 6 ms, respectively. The presynaptic spike elicits a trace x̄ (orange); the postsynaptic spike causes an excursion of the filtered membrane potential ū_+ (green) so that both the membrane potential and ū_+ exceed the respective thresholds θ_+ (dash-dotted, dark blue) and θ_− (dash-dotted, dark green), cf. (13), between t_1 and t_2. Only within this period, shifted by d_s = 3 ms, that is for times t_1 + 3 ms < t < t_2 + 3 ms (red area), LTP occurs. The shift by d_s = 3 ms does not apply to the spike trace. After the spike, the membrane potential is clamped to u_{clamp} = 33 mV for a period of t_{clamp} = 2 ms.

In a reference implementation of the Clopath rule by C. Clopath and B. Torben-Nielsen available on ModelDB (Hines et al., 2004), a delay d_s is introduced between the convolved versions ū_{±} of the membrane potential and the bare one [cf. parameter d_s in (12) and (13)]. The convolved potentials are shifted backwards in time by the duration of a spike d_s (for ū_+, see the red background in the illustration above). This shift of ū_{±} is essential to reproduce the results from Clopath et al. (2010).

The depression term F_{LTD} depends on the unfiltered presynaptic spike train s_j. It can thus be treated analogously to ordinary STDP rules (cf. (4)ff). The potentiation term F_{LTP}, however, depends on the filtered spike train x̄.
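The structure of the two terms in (12) and (13) can be sketched per grid step as follows; the variable names are ours, and the delay d_s and the clamping are omitted, so this is an illustration of the rule's structure rather than the NEST implementation.

```python
def relu(x):
    """Threshold-linear function (x)_+ used by both terms of the rule."""
    return x if x > 0.0 else 0.0


def clopath_dw(dt, A_LTD, A_LTP, spike_pre, xbar_pre, u, ubar_minus,
               ubar_plus, theta_minus, theta_plus):
    """One grid step of the Clopath weight change (structural sketch).
    spike_pre: presynaptic spike in this step (0/1); xbar_pre: filtered
    presynaptic spike trace; u: membrane potential; ubar_minus/ubar_plus:
    low-pass filtered potentials; theta_minus/theta_plus: thresholds."""
    # LTD: triggered by the raw presynaptic spike (STDP-like, no dt factor)
    ltd = -A_LTD * spike_pre * relu(ubar_minus - theta_minus)
    # LTP: continuous in time, hence proportional to the step size dt
    ltp = (A_LTP * dt * xbar_pre
           * relu(u - theta_plus) * relu(ubar_plus - theta_minus))
    return ltd + ltp
```

The sketch makes the asymmetry explicit: LTD is event-like and needs postsynaptic information only at presynaptic spikes, while LTP accumulates whenever both thresholded potentials are non-zero.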

The Urbanczik-Senn rule (Urbanczik and Senn, 2014) describes plasticity of synapses onto the dendrite of a two-compartment neuron model; the dendritic prediction of somatic spiking provides the error signal that drives learning.

The plasticity rule is again of the general form (1), with a functional

with exponential filter kernels κ and κ_s and non-linearities ϕ and h. The somatic spikes s_i (point process) are compared against a rate prediction ϕ(V_i) (continuous signal) derived from the dendritic membrane potential V_i.

In order to solve (1), we need to integrate over

A straightforward implementation of this expression is inefficient in terms of memory usage and computations because of the two nested integrals. However, since the kernels κ and κ_s are exponentials, one can perform one of the integrations analytically (see the Appendix), which yields

which is in line with the general formulation discussed in section 2.3.

In the following, we first discuss time- and event-driven update schemes for synapses with voltage-based plasticity. Then we present a reference implementation for the Clopath rule (Clopath et al., 2010) and the Urbanczik-Senn rule (Urbanczik and Senn, 2014) in NEST.

Let us assume in the following that t_{LS} and t_S denote two consecutive spike times of a presynaptic neuron j. The weight w_{ij}(t_S) corresponding to the spike at time t_S can be obtained from the weight w_{ij}(t_{LS}) at the time of the previous spike at t_{LS} and (6), employing (8) to calculate the latter. As the update only depends on quantities between t_{LS} and t_S, it does not matter when the synapse is performing the updates of its weight. Two possibilities are: 1) Neurons calculate their own F* and G* for the current time step and make them accessible to the synapse to enable direct readout and update according to (8) in every time step. This method corresponds to a time-driven update of synapses. 2) Neurons store a history of F* and G* and the synapse reads out this information at t_S, i.e., at the time where the weight update becomes relevant for the network. This method corresponds to an event-driven update of synapses.

Simulation concepts. Left: illustration of processing the postsynaptic voltage trace V_m up to the current presynaptic spike at t_S. Right: corresponding pseudocodes. In the time-driven scheme, the synaptic weight is updated in every time step up to t_S (see line marked SUP in pseudocode). In the event-driven scheme, the synapse processes the stored part of V_m (see HST in code) from the last spike delivered by synapse 2 at t_{LS} up to the current time step; only the trace up to t_S is needed. In the compressed event-driven scheme, synapse 2 triggers the integration of V_m from the last incoming spike at t_{LI} up to the current time step t_S (see INT in code) to complete its weight update (see SUP in code) and also to advance that of synapse 1. The preceding part of V_m from t_{LS} to t_{LI} was already integrated and applied to all incoming synapses (see HUP in code) by synapse 1 when it delivered the spike at t_{LI}.

In a time-driven update scheme the information on the membrane potential is directly processed by the synapses such that only the current value of the membrane potential needs to be stored, corresponding to a membrane potential history of length one. In the presence of synaptic delays, a history of the length d_max of the maximal delay measured in simulation steps needs to be stored. We here assume the delay to be on the postsynaptic side; it represents the time the fluctuations of the somatic membrane potential take to propagate back through the dendrites to the synapses. Therefore, the membrane potential needs to be read out with a delay d_j encoding the location of the synapse with presynaptic neuron j.
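Such a fixed-length history is naturally realized as a ring buffer; the following is a minimal sketch with illustrative names, not the NEST data structure.

```python
class VoltageRingBuffer:
    """Fixed-length buffer holding the last d_max membrane-potential
    samples, as needed by a time-driven synapse update with delays."""

    def __init__(self, d_max):
        self.buf = [0.0] * d_max
        self.step = 0  # number of samples pushed so far

    def push(self, u):
        # overwrite the oldest entry; memory stays constant at d_max values
        self.buf[self.step % len(self.buf)] = u
        self.step += 1

    def get(self, steps_back):
        """Membrane potential `steps_back` simulation steps in the past,
        e.g. steps_back = d_j for a synapse with dendritic delay d_j."""
        assert 0 < steps_back <= len(self.buf)
        return self.buf[(self.step - steps_back) % len(self.buf)]
```

The buffer length is fixed by the maximal delay, so memory does not grow with simulation time, which is the defining property of the time-driven scheme.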

Illustration of buffer sizes for different simulation schemes in case of fully synchronous or asynchronous spikes. In the time-driven scheme, only the current value of V_{m,post} needs to be available (green). In the event-driven scheme every synapse processes V_{m,post} from the last spike to the current one; therefore, the relevant time trace needs to be stored (red). In the compressed event-driven scheme this part of V_{m,post} is processed only once and used to update the weight of all the synapses. Since the weight change is a function of the last spike time, which is the same for all the synapses, only one value needs to be updated (blue). In this situation the length of the stored history is minimal.

Comparison of synapse update schemes. The table contrasts the time-driven, event-driven, and compressed event-driven schemes in terms of history length, synapse function calls, weight change computations, and history entry manipulations.

In an event-driven update scheme for synapses, the time trace of the membrane potential needs to be stored from the oldest last spike time of the incoming connections up to the current time.

The event-driven compression scheme is a modified event-driven scheme that makes use of the fact that for a specific class of plasticity rules the integrated time trace of the membrane potential can be shared between synapses: it only needs to be computed once, from the last incoming spike at t_{LI} onwards, and is then combined with a synapse-specific exponential prefactor as described in section 2.3.

Finally, a drawback of the event-driven compression is that it relies on the fact that all synapses use the same processed membrane potential Δ_i(t_{LS}, t); it is therefore restricted to plasticity rules that factorize in the way discussed in section 2.3.

This section describes the implementation of the two example voltage-based plasticity rules by Clopath et al. (2010) and Urbanczik and Senn (2014) in NEST.

The Clopath and Urbanczik-Senn rule are chosen as widely used prototypical models of voltage-based plasticity. The differences between the two rules help to exemplify the advantages and disadvantages of the algorithms discussed in section 3.1. As originally proposed, they are implemented here for two different types of neuron models: AdEx and Hodgkin-Huxley point neurons for the Clopath rule, and a two-compartment neuron model for the Urbanczik-Senn rule.

The plasticity rules differ in the state variable that is being stored and its interpretation. For the Clopath rule, the stored variable is a thresholded and filtered version of the membrane potential that takes into account characteristics of membrane potential evolution within cells in the vicinity of spike events. The restriction to temporal periods around spikes suggests to implement a history that is non-continuous in time. In contrast, the Urbanczik-Senn rule uses the dendritic membrane potential to predict the somatic spiking; the resulting difference is taken as an error signal that drives learning. This error signal never vanishes and thus needs to be stored in a time-continuous history.

Finally, the proposed infrastructure for storing both continuous and non-continuous histories is generic so that it can also be used and extended to store other types of signals such as external teacher signals.

The implementation of voltage-based plasticity rules in NEST follows the modular structure of NEST, a key part of which is the separation between neuron and synapse models. This separation makes it easy for a newly added neuron model to be compatible with existing synapse models and vice versa. A downside is that information, such as values of parameters and state variables, is encapsulated within the respective objects. Simulations in NEST employ a hybrid parallelization scheme: OpenMP threads are used for intra-node parallelization and the Message Passing Interface (MPI) for inter-node communication. In parallel simulations, synapses are located at the same MPI process as the postsynaptic neurons (Morrison et al., 2005).

The model of STDP requires synapses to access spike times of postsynaptic neurons. In order to provide a standardized transfer of this information across all neuron models that support STDP, in recent years the so-called archiving node has been established in NEST as a common base class of neurons that stores the postsynaptic spike history.

All synapses implemented in NEST are so far purely event-driven. To assess the performance of the time-driven update scheme of synapses with voltage-based plasticity, we also implemented a time-driven version of the Clopath and Urbanczik-Senn synapse. Spiking network simulators exploit the delay of connections to reduce communication between compute processes (Morrison et al., 2005): the earliest time at which a spike emitted at t_S can affect the postsynaptic membrane potential is t_S + min_delay. In between t_S and t_S + min_delay, neurons can therefore be updated independently of each other.

Storing state variables in event-driven schemes is more complex, as the history does not have a fixed length.
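One way to manage such a variable-length history is to count, per entry, how often it has been read; once every incoming plastic synapse has consumed an entry, it can be discarded. The sketch below assumes the neuron knows its number of incoming plastic synapses; the bookkeeping scheme and names are illustrative, not the exact NEST data structure.

```python
from collections import deque


class VoltageHistory:
    """Variable-length history of postsynaptic state values with
    per-entry access counters for garbage collection."""

    def __init__(self, n_incoming):
        self.n_incoming = n_incoming   # number of incoming plastic synapses
        self.entries = deque()          # each entry: [time, value, reads]

    def append(self, t, value):
        self.entries.append([t, value, 0])

    def get_range(self, t_from, t_to):
        """Return entries with t_from < t <= t_to, counting each access."""
        out = []
        for e in self.entries:
            if t_from < e[0] <= t_to:
                e[2] += 1
                out.append((e[0], e[1]))
        return out

    def prune(self):
        # drop leading entries that every incoming synapse has consumed
        while self.entries and self.entries[0][2] >= self.n_incoming:
            self.entries.popleft()
```

The history thus grows between presynaptic spikes and shrinks again as soon as all synapses have caught up, keeping memory proportional to the inter-spike intervals rather than to the full simulation time.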

As discussed in section 3.1.3, the event-based compression scheme relies on the fact that all synapses to one postsynaptic neuron employ the same processed membrane potential history.

We implement both an adaptive exponential integrate-and-fire neuron model and a Hodgkin-Huxley neuron model that contain the filtered versions ū_{±} of the membrane potential as additional state variables of the neuron. Thereby, they can be included in the differential equation solver of the neurons to compute their temporal evolution. Parameters of κ_{±} consequently need to be parameters of the neuron object rather than the synapse. The same is true for the values of θ_{±}; they are used in the neuron to determine whether the thresholded membrane potential is non-zero and thus whether a history entry needs to be written.

The LTD mechanism is convenient to implement within the event-driven framework: when the synapse is updated at the time of a presynaptic spike, it retrieves the current value of ū_− from its target and computes the new weight. Here, the neuron keeps ū_− available for the recent past to account for the dendritic delay.

The computation of the weight change due to LTP requires the evaluation of the integral over the filtered spike trace and the thresholded membrane potentials in (13).

For simulations with homogeneous delays equal to the simulation time step, the history of the relevant quantities can be kept minimal.

In event-driven schemes, the history of the membrane potential is stored in the neuron. When a synapse is updated at the time t_S of a spike, it requests the part of the history between the last spike t_{LS} and the current spike t_S (minus the dendritic delay). In the compressed scheme, the history is instead integrated once between the last incoming spike at t_{LI} and the current spike at t_S inside the archiving node. Using this newly integrated time trace, the weight of the synapse can be brought up to date.

In any case, the integrated history enters the weight update with an exponential prefactor e^{−Δt/τ_s}, where τ_s is the time constant of the kernel κ_s.

Following the original publication (Urbanczik and Senn, 2014), the non-linearity ϕ enters both the neuron dynamics and the weight update F*. Therefore ϕ as well as its parameters need to be known by the neuron and the synapse. Creating an additional helper class that provides ϕ avoids duplicating this code. In all cases the archiving node stores the history of the postsynaptic quantities required for the weight update.

Class diagram of NEST classes and functions. Simplified class diagram for embedding the Clopath and Urbanczik-Senn plasticity into the existing NEST infrastructure.

The reference implementation of the Clopath plasticity reproduces the results from Clopath et al. (2010).

Reproduction of results with the Clopath rule. The resulting weight change is shown as a function of the frequency f_{pair} with which the spike pairs are induced, for two different neuron models (aeif: solid curves, Hodgkin-Huxley: dashed curves). The blue curves represent a setup where the postsynaptic neuron fires after the presynaptic one (pre-post, Δt > 0).

The basic use of the Urbanczik-Senn rule in NEST is exemplified in a setup that reproduces results of the original publication.

Reproduction of results with the Urbanczik-Senn rule. The somatic conductances g_I and g_E of a two-compartment neuron are modulated such that they induce a teaching signal with sinusoidal shape. The dendrite receives a repeating spike pattern as an input via plastic synapses (green arrows). The dendritic predictions (green) follow the matching potential V_M (red) after learning. Also shown are the excitatory (g_E) and inhibitory (g_I) somatic conductances that produce the teaching signal.

In order to evaluate the performance of the implementation of the Clopath rule in NEST, in a weak-scaling setup, we simulate excitatory-inhibitory networks of increasing size, but fixed in-degree, i.e., a fixed number of incoming connections per neuron.

The Clopath rule has originally been proposed for connections without delays (Clopath et al., 2010).

Comparison of simulation times T_{sim} for excitatory-inhibitory networks with different implementations of the Clopath plasticity in NEST. Simulation times exclude network building and only account for updates of the dynamical state of the system. The following implementations are shown: “stdp”: standard implementation of the STDP synapse, “td”: time-driven implementation of the Clopath synapse, “ed”: event-driven scheme as included in NEST 2.20.1, “edc”: event-driven compression. Shown are networks of up to 10^6 neurons with small in-degree and up to 10^5 neurons with large in-degree.

How does compression of the history change the picture? As discussed in section 3.1.3, compression has the advantage of not integrating the membrane potential history for each synapse separately. A downside of the event-based compression is that it requires storing one history entry for each last spike time of presynaptic neurons. For large in-degrees, this history is therefore longer than the discontinuous history of membrane potential values around postsynaptic spikes stored in the plain event-driven scheme.

Scaling of simulation time T_{sim} with network size for the different implementations of the Clopath plasticity.

Another advantage of having short non-continuous histories is that searching the history at readout is fast. A simple linear iteration scheme is therefore even faster than a binary search.
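For illustration, both search strategies locate the same position in a sorted list of entry times; only their cost profile differs, with the linear scan avoiding the branching overhead of bisection for the short histories considered here. The sketch uses Python's standard `bisect` module as the binary-search reference.

```python
import bisect


def linear_lower(times, t):
    """First index with times[i] > t, found by linear scan.
    For short, sorted histories this simple loop can outperform
    binary search despite its O(n) worst case."""
    for i, ti in enumerate(times):
        if ti > t:
            return i
    return len(times)


# both strategies agree on every query point of a sorted history
times = [1.0, 4.5, 9.2, 15.0]
positions = [(linear_lower(times, t), bisect.bisect_right(times, t))
             for t in (0.0, 1.0, 5.0, 20.0)]
```

Since the results are identical, the choice between the two is purely a matter of the expected history length.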

The Urbanczik-Senn rule, in its original version, does not account for delays in connections (Urbanczik and Senn, 2014).

Comparison of simulation times T_{sim} for excitatory-inhibitory networks with different implementations of the Urbanczik-Senn plasticity in NEST. The following implementations are shown: “stdp”: standard implementation of the STDP synapse in NEST, “td”: time-driven implementation of the Urbanczik-Senn synapse, “ed”: event-driven scheme, “edc”: event-driven compression. Shown are networks of up to 10^5 neurons with small in-degree and up to 10^4 neurons with large in-degree.

We furthermore employ a weak-scaling setup with excitatory-inhibitory networks of increasing size and fixed in-degree. The simulation time T_{sim} for updating neurons and synapses is similar for Urbanczik-Senn, static, and STDP synapses. With increasing network size, T_{sim} rises only slightly. For the Urbanczik-Senn synapses, T_{sim} is larger than for Clopath synapses as the Urbanczik-Senn rule requires longer histories of membrane potentials and a more extensive history management.

Scaling of simulation times T_{sim} with network size for the different implementations of the Urbanczik-Senn plasticity.

The analyses of the Clopath and the Urbanczik-Senn plasticity as prototypical examples for rules that rely on storage of discontinuous vs. continuous histories show that the former are much faster to simulate, in particular for large networks that require distributed computing. For discontinuous histories, the event-driven scheme is most generally applicable and efficient, which makes corresponding rules easy to integrate into modern simulators with event-based synapses. The performance gap between the different rules should be kept in mind in the design of new learning rules. Furthermore, it is worthwhile to test modifications of existing learning rules to decrease the amount of stored information.

For illustration, we here test a spike-based alternative to the original Urbanczik-Senn rule, where we replace the rate prediction ϕ(V_i(t)) in F* of (15) by a noisy estimate, which we generate by a non-homogeneous Poisson generator with rate ϕ(V_i(t)). The weight update then only requires the spike times of s_i and of the Poisson process instead of a continuous history.
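A sketch of this spike-based estimate follows; the sigmoidal form of ϕ and all parameter values are assumptions chosen for illustration, not those of the original rule.

```python
import math
import random


def phi(v, rate_max=100.0, v_half=-50.0, beta=0.5):
    """Illustrative sigmoidal rate function in spikes/s; all parameters
    are assumed values, not those of the original publication."""
    return rate_max / (1.0 + math.exp(-beta * (v - v_half)))


def poisson_estimate(v_trace, dt=1e-4, seed=1):
    """Spike-based estimate of the rate prediction phi(V): draw
    inhomogeneous-Poisson spikes with instantaneous rate phi(V(t))
    and convert the spike count back into a mean rate."""
    rng = random.Random(seed)
    # one Bernoulli draw per time step with probability phi(V)*dt
    spikes = sum(1 for v in v_trace if rng.random() < phi(v) * dt)
    return spikes / (len(v_trace) * dt)


# for a constant potential at v_half, phi evaluates to 50 spikes/s;
# the Poisson estimate fluctuates around this value
est = poisson_estimate([-50.0] * 200000)  # 20 s of samples at dt = 0.1 ms
```

The synapse then only needs the sparse spike times of the generator, at the cost of sampling noise that averages out over the learning time scale.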

When changing learning rules to improve the efficiency of an implementation, the question is in how far the modified rule, in our example including the noisy estimate of the dendritic prediction, still fulfills the functionality that the original rule was designed for. Generally, without control of the error any simulation can be made arbitrarily fast; speed therefore needs to be assessed together with accuracy, as argued by Morrison et al. (2007).

Comparison of learning curves for the original Urbanczik-Senn rule and its spike-based variant in the experiment described above.

This work presents efficient algorithms to implement voltage-based plasticity in modern neural network simulators that rely on event-based updates of synapses (for a review, see Brette et al., 2007).

Event-driven update schemes for voltage-based plasticity come at the expense of storing possibly long histories of a priori continuous state variables. Such histories not only require space in memory but they also affect the runtime of simulations, which we focus on here. The time spent for searching and post-processing the history to calculate weight updates increases with increasing length, and these operations have to be done for each synapse. Therefore, in addition to an ordinary event-driven scheme, we devised a compression scheme that becomes superior for long histories as occurring in the Urbanczik-Senn rule. In particular for networks with small in-degrees or synchronous spiking, the compression scheme results in a shorter history. It further reduces the total amount of computations for weight changes by partially re-using results from other synapses, thereby avoiding multiple processing of the history. For short histories as occurring in the Clopath rule, the compression results in unnecessary overhead and an increase in history size as one entry per last presynaptic spike time needs to be stored instead of a discontinuous membrane potential around sparse postsynaptic spike events. We here, for simplicity, contrasted time- and event-driven update schemes. However, further work could also investigate hybrid schemes, where synapses are not only updated at spike events, but also on a predefined and coarse time grid to avoid long histories and corresponding extensive management. A similar mechanism is used by Kunkel et al.

The storage and management of the history as well as complex weight change computations naturally reduce the performance of simulations with voltage-based plasticity in comparison to static or STDP synapses. The latter only require information on spike times, which is much less data than for continuous signals. Nevertheless, given that the Clopath rule is based on thresholded membrane potentials and consequently short, discontinuous histories, the performance and scaling of the event-driven algorithms are only slightly worse than for ordinary STDP. Time-driven implementations cannot exploit this model feature and update weights also in time steps where no adjustment would be required, leading to significantly slower simulations. The performance gain of using event-driven schemes is less pronounced for the Urbanczik-Senn rule as, by design, histories are typically long. In this case, the compression scheme naturally yields better results in terms of runtime. Our own modification of the Urbanczik-Senn rule only requires storage of sparsely sampled membrane potentials, giving rise to the same performance as STDP. Generally, an algorithm is faster if it requires fewer computations. However, opportunities for vectorization and cache-efficient processing, outside of the scope of the present manuscript, may change the picture.

We here chose the Clopath and the Urbanczik-Senn rule as two prototypical models of voltage-based plasticity. While both rules describe a voltage dependence of weight updates, their original motivation as well as their specific form are different: the Clopath rule refines standard STDP models to capture biologically observed phenomena such as the frequency dependence of weight changes (Sjöström et al., 2001). The Urbanczik-Senn rule, in contrast, is motivated by the functional consideration of learning driven by an error signal formed in the dendrite.

The current implementation, which is published and freely available in NEST 2.20.1, supports an adaptive exponential integrate-and-fire and a Hodgkin-Huxley neuron model for the Clopath rule. The former is used in the original publication (Clopath et al., 2010).

While the here presented implementation refers to the neural network simulator NEST (Gewaltig and Diesmann, 2007), the algorithms are applicable to any simulation engine that updates synapses in an event-based fashion.

In general, one has to distinguish two types of efficiency in the context of simulating plastic networks: first, the biological time it takes the network to learn a task by adapting the weights of connections; second, the wall-clock time it takes to simulate this learning process. Both times crucially depend on the employed plasticity rule. In this study, we focus on the wall-clock time and argue that this can be optimized by designing learning rules that require storing only minimal information on postsynaptic state variables. Ideally, the plasticity rule contains unfiltered presynaptic or postsynaptic spike trains to reach the same performance as in ordinary STDP simulations. This amounts to synapses requiring postsynaptic state variables only at the time of spike events. The Clopath and Urbanczik-Senn rule capture the dependence of synaptic weights on the postsynaptic membrane potential in a phenomenological manner. The dependence on the voltage history observed in biological synapses (Artola et al., 1990) may, however, also be captured by rules that store less information.

For the plasticity rules by Clopath et al. (2010) and Urbanczik and Senn (2014), reference implementations are now publicly available in NEST.

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: Stapmanns et al.

The reference implementation for the event-driven update scheme of synapses with Clopath and Urbanczik-Senn plasticity was reviewed by the NEST initiative and is publicly available in NEST 2.20.1. The PyNEST code for model simulations and Python scripts for the analysis and results are fully available at: Stapmanns et al.

JS and JH wrote the simulation code, the plotting scripts, and performed the NEST simulations for the HPC performance measurements. JS and DD worked out the details of the theoretical analysis of the different algorithms. JS was supervised by MH, MD, and DD. JH was supervised by MB. All authors jointly did the conceptual work and wrote the paper.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

We thank Claudia Clopath and Wulfram Gerstner for explaining details of their reference implementation and the underlying biological motivation. Moreover, we thank Hedyeh Rezaei and Ad Aertsen for suggesting the implementation of the Clopath rule in NEST, and Charl Linssen, Alexander Seeholzer, and Renato Duarte for carefully reviewing our implementation. Finally, we thank Walter Senn, Mihai A. Petrovici, Laura Kriener, and Jakob Jordan for fruitful discussions on the Urbanczik-Senn rule and our proposed spike-based version. We further gratefully acknowledge the computing time on the supercomputer JURECA (Jülich Supercomputing Centre,

The Supplementary Material for this article can be found online at:

To derive (17) it is convenient to first investigate the total weight change Δw_{ij}(0, T) accumulated over the interval (0, T),

where we exchanged the order of integration from the first to the second line. In the third line we introduced

In case of the Urbanczik-Senn rule

which implies the identities

which we use to write the weight change from

This is the result (17).

For the Clopath rule the change of the synaptic weight strongly depends on the excursion of the membrane potential u_{m} around a spike of the postsynaptic neuron, which causes ū_{±} to cross the respective thresholds θ_{±} so that (12) and (13) yield non-vanishing results. Within the original neuron model by Brette and Gerstner (2005), the membrane potential after a spike is clamped to a value u_{clamp} for a period of t_{clamp} before it is reset. The reference implementation is restricted to a simulation resolution of exactly 1 ms and clamps the membrane potential accordingly. In the simulations we set t_{clamp} to 2 ms and u_{clamp} to 33 mV. These values are chosen to match the behavior of the reference implementation.

There are three points that need to be considered in the context of history management: First, which information needs to be stored. Second, how to search and read out the history. Third, how to identify and remove information that is no longer needed. The first and third point mainly affect memory usage, while the second point impacts the simulation time as searching within shorter histories is faster.

There are four different histories to which our considerations apply: the one storing the membrane potential u_{i}(t) of the postsynaptic neuron, the one storing the compressed quantity I_{i}(t_{LS}, t) used in the event-driven compression scheme, the spike history of the postsynaptic neuron (also used for ordinary STDP), and finally a history that stores the last spike time for every incoming synapse (see below for details).

This paragraph concerns only the history that stores the time trace of the membrane potential.

Let t_{LS} and t_{S} be the times of the last and the current spike of a synapse. At time t_{S} that synapse needs to request the part from t_{1} = t_{LS} − d to t_{2} = t_{S} − d of the history, which ranges from t_{start} < t_{1} to t_{end} > t_{2}. This part is shifted with respect to the spike times by the delay d. If the history is continuous in time, i.e., it contains one entry per simulation step, one can directly compute the positions corresponding to t_{1/2} by just knowing t_{start} and t_{end}. As pointed out in section 3.2 this is the case for the Urbanczik-Senn plasticity rule. If the history is not continuous in time, like in case of the Clopath rule, this scheme is not applicable. Instead, we add a time stamp to each entry and determine the positions corresponding to (t_{1}, t_{2}) using, e.g., a linear or a binary search. Searching for the positions that define the start and the end of the requested interval is slower than computing them directly. Nevertheless, a non-continuous history can lead to a large acceleration of simulations as we discussed in case of the Clopath rule (section 3.4.1). Here, only values of the membrane potential in the vicinity of a spike of the postsynaptic neuron are needed, so that neglecting the majority of values in between leads to a non-continuous history but saves memory.
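The two lookup strategies can be sketched as follows. This is a minimal Python illustration rather than the actual NEST C++ code; the names (`slice_continuous`, `slice_noncontinuous`, `RESOLUTION`) are ours:

```python
import bisect

RESOLUTION = 0.1  # simulation step in ms (illustrative value)

def slice_continuous(history, t_start, t1, t2):
    """Continuous history (one entry per time step, as for the
    Urbanczik-Senn rule): positions follow directly from the times."""
    i1 = int(round((t1 - t_start) / RESOLUTION))
    i2 = int(round((t2 - t_start) / RESOLUTION))
    return history[i1:i2 + 1]

def slice_noncontinuous(stamped_history, t1, t2):
    """Non-continuous history of (time stamp, value) pairs (as for the
    Clopath rule): locate the interval by binary search on the stamps."""
    stamps = [t for t, _ in stamped_history]
    lo = bisect.bisect_left(stamps, t1)
    hi = bisect.bisect_right(stamps, t2)
    return stamped_history[lo:hi]
```

The continuous variant costs a constant number of arithmetic operations per request, while the binary search scales logarithmically with the history length, which is why the direct computation is preferable whenever one entry per time step is guaranteed.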

Technically, the archiving node contains a function, get_history, whose arguments include the two times t_{1} and t_{2} as well as two iterators. When executed, the function sets the iterators to point to the correct entries of the history of the postsynaptic neuron corresponding to t_{1} and t_{2}, respectively. Having received the correct position of the pointers, the synapse evaluates the integral (6). In the event-driven compression scheme, the integration (11) is not done inside the synapse but inside the postsynaptic neuron: the compressed quantity I_{i}(t_{LS}, t_{S}), which is updated in case of an incoming spike, is stored inside the neuron's history.

To prevent the history from occupying an unnecessary amount of memory, it is crucial to have a mechanism to delete those entries that have been used by all incoming synapses. The simplest implementation to identify these entries adds one additional variable, called access counter, to each entry. When a synapse reads the part from t_{1} to t_{2} of the history, the algorithm iterates over all entries with time stamps t_{1} < t ≤ t_{2} and increases their access counters by one. After the update of the synaptic weight, all entries whose access counters equal the number of incoming synapses are deleted. This scheme can be combined easily with a linear search starting the iteration from the oldest entry of the history.
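A minimal sketch of the access-counter scheme, assuming a plain Python list as the history; `Entry` and `read_and_mark` are illustrative names, and the NEST implementation differs in its data structures:

```python
class Entry:
    """One archived membrane-potential value with its access counter."""
    def __init__(self, t, u):
        self.t = t
        self.u = u
        self.accessed = 0

def read_and_mark(history, t1, t2, n_incoming):
    """Return entries with t1 < t <= t2, increment their access
    counters, then drop entries every incoming synapse has read."""
    requested = []
    for e in history:
        if t1 < e.t <= t2:
            e.accessed += 1
            requested.append((e.t, e.u))
    # prune fully used entries (linear scan from the oldest entry)
    history[:] = [e for e in history if e.accessed < n_incoming]
    return requested
```

Once an entry's counter reaches the number of incoming synapses, no synapse can ever request it again, so removing it is safe.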

For long histories a linear search is inefficient and should be replaced by a binary search or, where applicable, by a direct computation of positions. To avoid iterating within long histories, we replace access counters by a vector that stores the last spike time t_{LS} for every incoming synapse. If a synapse delivers a spike, it updates its entry in that vector by replacing t_{LS} with the time stamp of the current spike. After each weight update, searching the vector for the smallest t_{LS} allows us to safely remove all membrane potentials with time stamps smaller than min_{i}(t_{LS,i}). In practice, we can further improve this mechanism with two technical details. Firstly, several synapses can share the same t_{LS}; instead of storing duplicate entries, we store a single entry provided with a counter that goes down from the number of synapses sharing this t_{LS} as they deliver their next spikes. Secondly, we can avoid the search for the smallest t_{LS} by making sure that the entries t_{LS} are in chronological order. This can be easily achieved if a synapse does not update its entry in the vector but removes it and appends a new one at the end of the vector.
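The vector of last spike times, together with the two refinements (merging equal times into counted entries, and keeping chronological order by remove-and-append), might be sketched as follows. All names are ours, not those of the NEST code, and a real implementation would avoid the linear scan in `update` by letting each synapse remember its entry:

```python
from collections import deque

class LastSpikeRegistry:
    """Tracks the last spike time t_LS of every incoming synapse.
    Entries stay chronologically ordered because an update removes
    the old entry and appends the new one at the end; equal times
    are merged into a single entry with a counter."""
    def __init__(self):
        self.entries = deque()  # (t_LS, count), oldest first

    def update(self, t_old, t_new):
        """Move one synapse from t_old to t_new (t_old=None for a
        synapse that has not spiked before)."""
        for i, (t, c) in enumerate(self.entries):
            if t == t_old:
                if c > 1:
                    self.entries[i] = (t, c - 1)
                else:
                    del self.entries[i]
                break
        if self.entries and self.entries[-1][0] == t_new:
            t, c = self.entries[-1]
            self.entries[-1] = (t, c + 1)
        else:
            self.entries.append((t_new, 1))

    def oldest(self):
        """Oldest t_LS: history entries before it can be deleted."""
        return self.entries[0][0] if self.entries else None
```

Because the deque is always sorted, reading `oldest()` is O(1) and no search for the minimum is needed.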

The setup of the spike pairing experiment from Clopath et al. (2010) initializes all synaptic weights to w_{init}. In this experiment we use the Clopath rule with fixed amplitude A_{LTD}. A list with all the parameters can be found in

In this experiment, after Clopath et al. (2010), a network of N_{I} inhibitory and N_{E} excitatory neurons subject to an external input develops strong bidirectional couplings between neurons of the excitatory population. The input is given by N_{p} Poisson spike trains with rates

where the rates follow a Gaussian profile over the input index p. The center μ_{p} of the Gaussian is drawn randomly from a set of possible values, and a new value is drawn after each time interval T_{μ}. The total number of intervals is N_{μ}. In our simulation with NEST we used N_{μ} intervals between which the rates of the N_{p} Poisson generators are redrawn.
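Such an input protocol can be sketched as follows; the parameter names (`r_max`, `r_base`, `sigma`) and the functional form of the baseline are illustrative assumptions, not the exact values of the study:

```python
import numpy as np

def gaussian_rates(n_inputs, mu, sigma, r_max, r_base):
    """Rate of each of the n_inputs Poisson generators: a Gaussian
    bump of width sigma centered at index mu on top of a baseline."""
    idx = np.arange(n_inputs)
    return r_base + r_max * np.exp(-((idx - mu) ** 2) / (2.0 * sigma ** 2))

def stimulus_schedule(n_intervals, centers, rng):
    """Draw a new random center mu_p for each of the n_intervals
    stimulation intervals of duration T_mu."""
    return rng.choice(centers, size=n_intervals)
```

Each interval, the schedule picks a center and `gaussian_rates` yields the rate vector handed to the Poisson generators for that interval.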

For the network simulations presented in Clopath et al. (2010), the fixed amplitude A_{LTD} is replaced by a voltage-dependent term

to take into account homeostatic processes. This term involves a temporal average of ū_{−}(t), and u_{ref} is a reference value. An exact temporal average requires storing the time trace of ū_{−}
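A common memory-saving alternative to storing the full trace for an exact temporal average, shown here purely as an illustration and not necessarily identical to the authors' implementation, is a low-pass filter with a long time constant that tracks the average with a single state variable:

```python
def update_running_average(avg, u_minus, dt, tau_avg):
    """One Euler step of a low-pass filter approximating the temporal
    average of u_minus with time constant tau_avg (illustrative name).
    Only one scalar per neuron needs to be stored."""
    return avg + dt / tau_avg * (u_minus - avg)
```

For a stationary signal the filter converges to the exact mean, at the cost of weighting recent history more strongly.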

In the simulation experiment shown in section 3.3, the input pattern consists of N_{p} independent Poisson spike trains with a firing rate r_{p}. For learning, the pattern is repeated n_{rep} times. Dendritic synapses adapt their weights so that after learning the somatic membrane potential matches a target potential u_{M}. The latter is created by somatic input via two conductances: an inhibitory conductance g_{I} and an excitatory conductance g_{E}, where excitatory spikes have a modulated weight to generate a periodic time course of g_{E}. The input to the dendritic compartment is provided by N_{p} plastic synapses.

The weight change of the Urbanczik-Senn rule as presented in section 2.5, in line with the original publication, is driven by the prediction error

where S_{i} is the somatic spike train and V_{i} the dendritic prediction of the somatic membrane potential U_{i}. Instead of integrating over the difference between the spike train and the rate ϕ(V_{i}) (spike-rate), one can derive two variants

In the first one (spike-spike) we replaced the dendritic rate prediction by a noisy realization, i.e., a Poisson spike train with rate ϕ(V_{i}). In the second one (rate-rate) the somatic spike train is replaced by the rate of the underlying Poisson process, which is computed by applying the rate function ϕ to the somatic potential U_{i}. The learning of a matching potential u_{M} as described in section 3.3 also works in these two cases. We quantify the performance by the loss between U_{i} and u_{M} averaged over one period T_{p} of the input pattern.

The decrease of the loss as a function of the number of pattern repetitions has a similar shape for all three variants, with a significantly higher variance in the case of the spike-spike version.
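The three variants of the prediction error might be sketched as follows, with an illustrative sigmoidal rate function ϕ standing in for the one used in the study; the signal names and the Bernoulli approximation of the Poisson realization are our assumptions:

```python
import numpy as np

def phi(v):
    """Illustrative sigmoidal rate function (not the one of the study)."""
    return 1.0 / (1.0 + np.exp(-v))

def error_spike_rate(spikes, v_dend):
    # original rule: somatic spike train minus dendritic rate prediction
    return spikes - phi(v_dend)

def error_spike_spike(spikes, v_dend, rng, dt):
    # dendritic prediction replaced by a noisy spiking realization
    # (Bernoulli approximation of a Poisson process per time step)
    pred_spikes = (rng.random(v_dend.shape) < phi(v_dend) * dt) / dt
    return spikes - pred_spikes

def error_rate_rate(u_soma, v_dend):
    # somatic spike train replaced by the rate of its underlying process
    return phi(u_soma) - phi(v_dend)
```

The rate-rate variant is noise-free, which is consistent with the observation that the spike-spike version shows the highest variance of the loss.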