Edited by: Zoltan Dienes, University of Sussex, UK

Reviewed by: John J. McDonald, Simon Fraser University, Canada; David Eagleman, Baylor College of Medicine, USA

*Correspondence: Dobromir A. Rahnev, Department of Psychology, Columbia University, 406 Schermerhorn Hall, 1190 Amsterdam Ave, MC 5501, New York, NY 10027, USA. e-mail:

^{†}Stanislav Nikolov and Dobromir A. Rahnev have contributed equally to this work.

This article was submitted to Frontiers in Consciousness Research, a Specialty of Frontiers in Psychology.

This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.

A very basic computational model is proposed to explain two puzzling findings in the time perception literature. First, spontaneous motor actions are preceded by up to 1–2 s of preparatory activity (Kornhuber and Deecke,

How does the brain process temporal information? How do we determine the onsets of stimuli? Despite the impressive volume and quality of recent work in this area (Leon and Shadlen,

It has been shown (Kornhuber and Deecke,

The second finding: Titchener's law of attentional prior entry (Titchener,

Here we propose a simple model that can naturally explain the apparent contradiction between these findings. The key idea is that time perception, just like perception in general, is constrained by noise and uncertainty. Many previous modeling studies have employed these concepts (for a review see Chater et al.,

For instance, one may think that the preparatory activity preceding self-paced action starts at an exact time point prior to motor execution, as indicated by the RP measure. However, the RP is the result of averaging over many trials. To determine the onset of preparatory activity in every trial, the brain does not have the luxury of averaging to reduce noise. Of course, in the RP there is also noise from the EEG measurement. But even for actual neuronal activity, there is trial-by-trial fluctuation, which is not necessarily meaningful with respect to neural processing. Given the presence of such noise, perception essentially depends on a decision process (Green and Swets,

We modeled onset perception as an uninterrupted process of signal detection. In a task where a signal is to occur within a certain epoch and the subject is to determine its onset, one natural way the brain could solve this problem is to perform signal detection at every time point. The reported onset

At every time point, the chance of detection depends on the signal and noise distributions. We call the whole duration of a trial the “trial epoch,” and the period in which the signal is present the “signal interval.” In all our simulations we assume that the signal has a positive value during the signal interval and a value of 0 for the rest of the trial epoch. Further, we call the value of the signal that is not corrupted by noise simply the “true signal,” and the value of the signal corrupted by noise the “internal evidence.” Note that the problem that the brain deals with, and that we are modeling here, is how to use the internal evidence in order to infer the onset of the true signal.
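These definitions can be made concrete in a short sketch. The particular values below (epoch length, signal interval, amplitude) are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)

epoch_len = 200   # "trial epoch": total number of time bins in a trial
sig_len = 20      # "signal interval": bins during which the signal is on
amplitude = 1.5   # illustrative signal value, in units of the noise SD

# "true signal": positive during the signal interval, 0 elsewhere
true_signal = np.zeros(epoch_len)
onset = int(rng.integers(0, epoch_len - sig_len + 1))
true_signal[onset:onset + sig_len] = amplitude

# "internal evidence": the true signal corrupted by unit-SD Gaussian noise
internal_evidence = true_signal + rng.normal(0.0, 1.0, size=epoch_len)
```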

We used a signal detection theoretic framework (Green and Swets,

In each trial, the signal had a random onset and was detected at a particular time bin. If the signal was not detected by the end of the trial epoch, the system was forced to guess the onset randomly, using a uniform distribution over the duration of the whole trial epoch. We considered two alternatives to this method. First, trials in which the signal was not detected could simply be discarded. However, such an approach does not punish misses, and it resulted in extremely high optimal criteria that missed the signal on a very large percentage of trials. Second, for trials in which the signal was not detected, the system could have chosen the last time bin as the answer. This option seemed further removed from what the brain might do in a similar situation. Thus, we chose the onset from a uniform distribution as the best approximation of how the brain may deal with this problem.
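A minimal sketch of a single trial under this scheme, including the uniform-guess fallback for misses (the signal strength and criterion in the usage example below are deliberately extreme, for illustration only):

```python
import numpy as np

def simulate_trial(true_signal, epoch_len, criterion, rng):
    """One trial of the onset-detection scheme: place the signal at a
    uniformly random onset, corrupt every bin with unit-SD Gaussian
    noise, and report the first bin whose internal evidence exceeds
    the criterion.  If no bin exceeds it (a miss), guess the onset
    uniformly over the whole trial epoch.  Returns the detection time
    relative to the true onset."""
    sig_len = len(true_signal)
    onset = int(rng.integers(0, epoch_len - sig_len + 1))
    evidence = rng.normal(0.0, 1.0, epoch_len)
    evidence[onset:onset + sig_len] += true_signal
    above = np.nonzero(evidence > criterion)[0]
    detected = int(above[0]) if above.size else int(rng.integers(0, epoch_len))
    return detected - onset
```

With a signal far stronger than the noise and a criterion the noise essentially never reaches, detection occurs at the true onset, so the relative detection time is 0; with realistic signal strengths it becomes a noisy, biased quantity.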

We presented the same true signal multiple times, each time with an onset chosen uniformly at random. We modeled the noise at each time bin as coming from a Gaussian distribution with a mean of 0 and an SD of 1. The units here are arbitrary; all other variables are expressed in terms of the SD of the noise distribution. For each series of simulations we used a different criterion. In each trial, we recorded the detection time relative to the onset of the true signal in that trial (that is, we subtracted the onset from the detection time). In this way, we approximated the probability distribution of the relative detection time. We then computed the mean square error (MSE) of the resulting distribution. The mean squared error is simply the average of the squared relative detection times, MSE = (1/N) Σᵢ dᵢ², where dᵢ is the relative detection time on trial i and N is the number of trials.

We defined the optimal criterion as the one that minimizes the MSE. We chose the mean squared error because it is one of the most widely used measures in statistics for quantifying deviation from a value. It may appear as if the choice of MSE is central to our model, but we believe this is not the case. Initial models using the absolute deviation from the true onset (defined as
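The search for the optimal criterion can be sketched as a Monte-Carlo sweep over a grid of criteria. The step-signal amplitude, signal interval, epoch length, and grid are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

def mse_for_criterion(criterion, amp=1.5, sig_len=20, epoch_len=200,
                      n_trials=3000, seed=0):
    """Monte-Carlo estimate of the MSE of the relative detection time
    (detected bin minus true onset) for one criterion, with a step true
    signal and the uniform-guess fallback for missed trials."""
    rng = np.random.default_rng(seed)
    errs = np.empty(n_trials)
    for i in range(n_trials):
        onset = int(rng.integers(0, epoch_len - sig_len + 1))
        ev = rng.normal(0.0, 1.0, epoch_len)
        ev[onset:onset + sig_len] += amp           # step true signal
        above = np.nonzero(ev > criterion)[0]
        det = int(above[0]) if above.size else int(rng.integers(0, epoch_len))
        errs[i] = det - onset                      # relative detection time
    return float(np.mean(errs ** 2))

# Sweep a grid of criteria; the minimiser is the optimal criterion.
grid = np.arange(0.5, 4.01, 0.5)
mses = [mse_for_criterion(c) for c in grid]
optimal_criterion = float(grid[int(np.argmin(mses))])
```

Very low criteria are punished by early false alarms, very high ones by misses, so the minimum lies at an intermediate criterion, reflecting the bias–consistency trade-off discussed below.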

The true signal was either a step or a slowly rising function designed to approximate the RP. The step function started at 0, rose sharply to a particular value, and then returned to 0 for the rest of the trial epoch. The slowly rising function started at 0, rose slowly to a particular value (we used a log-sinusoidal function to approximate the RP), and then returned to 0 for the rest of the trial epoch.
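The two signal shapes can be sketched as follows. Since the exact log-sinusoidal form is not reproduced here, a simple sin² ramp stands in for the slow rise:

```python
import numpy as np

def step_signal(length, amplitude):
    """Step function: a constant positive value over the signal interval."""
    return np.full(length, float(amplitude))

def ramp_signal(length, amplitude):
    """Slowly rising function approximating the RP.  A sin^2 ramp from
    0 up to `amplitude` is used here as an illustrative stand-in for
    the paper's log-sinusoidal shape."""
    t = np.linspace(0.0, np.pi / 2.0, length)
    return amplitude * np.sin(t) ** 2
```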

We calculated the respective optimal distributions of two signals – an attended signal and an unattended signal – with identical shapes and noise levels. Compared to the unattended signal, the attended signal was modeled with greater signal strength. In accordance with the method for finding the optimal distribution, the onsets of the two stimuli were still random in each trial, but this time the attended stimulus preceded the unattended stimulus by a fixed time gap, which we called the onset advantage. Using the two distributions,

We varied the stimulus interval, the trial epoch, signal-to-noise ratio, and criterion used. For each set of values for the above variables we obtained a distribution of onset estimations

Figure

The crucial question is what would be the optimal criterion and the corresponding

We achieved similar qualitative results by varying the length of the trial epoch and the stimulus interval, as well as the signal-to-noise ratio, thus confirming that our results do not depend on the particular values of these variables.

We then simulated how an optimal system would determine the onset of a slowly ramping-up signal, in order to shed light on the mechanism underlying introspective reports of the onset of motor preparation (see

Figure

One could see that at the optimal criterion, the expected time of reported onset, i.e., expected

Psychophysical findings (Stelmach and Herdman,

We assume that attention boosts the signal-to-noise ratio of a true signal, either by increasing the signal magnitude, reducing the noise level, or both. Figure

To make this point clearer, we extended our model to simulate a temporal order judgment (TOJ) experiment. Figure
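A hypothetical version of such a TOJ simulation, assuming attention simply raises the amplitude of one of two otherwise identical step signals and that each stream is detected by the same per-bin criterion rule (all parameter values below are illustrative, not the paper's):

```python
import numpy as np

def attended_first(onset_advantage, amp_att=2.0, amp_unatt=1.0, sig_len=20,
                   epoch_len=200, criterion=3.0, rng=None):
    """One simulated TOJ trial.  Both streams have the same step shape,
    but the attended stream has a higher amplitude and leads the
    unattended one by `onset_advantage` bins.  Each stream is detected
    by the first criterion crossing; misses fall back to a uniform
    guess.  Returns True if the attended stream is detected first."""
    rng = rng or np.random.default_rng()

    def detect(amp, onset):
        ev = rng.normal(0.0, 1.0, epoch_len)
        ev[onset:onset + sig_len] += amp
        above = np.nonzero(ev > criterion)[0]
        return int(above[0]) if above.size else int(rng.integers(0, epoch_len))

    onset_att = int(rng.integers(0, epoch_len - sig_len - onset_advantage))
    return detect(amp_att, onset_att) < detect(amp_unatt,
                                               onset_att + onset_advantage)

# With physically simultaneous onsets (onset_advantage = 0), the stronger
# attended stream is nevertheless judged first on most trials -- an
# illusory onset advantage for the attended stimulus.
rng = np.random.default_rng(7)
frac = float(np.mean([attended_first(0, rng=rng) for _ in range(2000)]))
```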

Our model provides an explanation for the apparent contradiction in the Kornhuber–Deecke–Libet paradox. The RP is averaged over many trials. Although it may reflect the shape of the underlying signal, the brain does not have the luxury of averaging when it has to make a decision in real time after each motor action. The early part of the RP might on average be higher than baseline, but its signal-to-noise ratio is weak compared to the later part of the preparatory activity. To detect the earliest part, the system would have to use a very low criterion and would therefore suffer from low consistency because of the false alarms generated. To detect the onset of the RP, the brain must set a certain criterion by taking into account the trade-off between bias and consistency. Our analysis suggests that at the optimal trade-off, a reasonably consistent system necessarily shows a substantial late bias. This may explain why we are only aware of the later part of the preparatory activity. The findings by Libet et al. (

Second, our model also helps explain the discrepant results regarding attentional prior entry. Previous work has failed to find shifts of ERP onsets that would reflect the behavioral finding that attention seems to speed up perception. McDonald et al. (

Is the model we proposed here realistic? Admittedly, it is unlikely that the brain treats each data point independently and performs signal detection on each of them. This is an abstraction that allows ease of computation and illustration. However, we have also tried augmenting the model such that it accumulates evidence over time (Ratcliff and McKoon,
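One way such an evidence-accumulating variant might look is a leaky accumulator that integrates the noisy evidence over time and reports the first threshold crossing. This is only a sketch in the spirit of diffusion-style models; the leak and threshold parameters are illustrative assumptions:

```python
import numpy as np

def accumulate_onset(true_signal, epoch_len, threshold, leak=0.1, seed=0):
    """Leaky-accumulator variant of the onset detector: instead of
    thresholding each bin independently, integrate the noisy evidence
    over time and report the first bin at which the accumulated value
    crosses `threshold`.  Misses fall back to a uniform guess, as in
    the per-bin model.  Returns the detection time relative to the
    true onset."""
    rng = np.random.default_rng(seed)
    sig_len = len(true_signal)
    onset = int(rng.integers(0, epoch_len - sig_len + 1))
    ev = rng.normal(0.0, 1.0, epoch_len)
    ev[onset:onset + sig_len] += true_signal
    acc = 0.0
    for t in range(epoch_len):
        acc = (1.0 - leak) * acc + float(ev[t])   # leaky integration
        if acc > threshold:
            return t - onset                      # relative detection time
    return int(rng.integers(0, epoch_len)) - onset  # miss: uniform guess
```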

In our model, we assume that the brain tries to minimize error in its onset judgments, even for endogenously generated signals (i.e., the motor preparation activity, or “intention”). Is this a reasonable assumption? How can the brain compute the MSE for such judgments given that the true onset of the motor preparation activity is not known? We acknowledge that it is unclear how the brain could achieve this exact computation. It is likely that the brain does not directly compute MSE in such situations, but rather uses heuristics developed from other situations where the true onset of the event can be verified. However, it is important to note that our argument does not depend on the brain actually computing the exact value of the MSE. Our argument, as in many other modeling studies (e.g., “Bayesian” models, Ma et al.,

The idea that a high criterion would predict late onset detection is not new. In fact, Libet et al. (

The parameters chosen for the model may seem ad hoc. But our modest goal here is mainly to demonstrate, as an example case, how this could work conceptually. We have not yet been able to provide analytic solutions to most of the optimization problems. We hope future work can address these issues.

A related criticism could be that the model does not specify the neuronal mechanism underlying onset detection. Currently the exact neuronal mechanism is unknown, and we hesitate to speculate about such details. The model we propose is on a more abstract, general cognitive level. A useful analogy is that signal detection theory has been useful for perception research even though a neuronal mechanism for detection criterion has not been specified. In fact, recent work in neurophysiology tends to adopt diffusion-style models (Gold and Shadlen,

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The Supplementary Material for this article can be found online at

Stanislav Nikolov, Department of Mathematics, Massachusetts Institute of Technology; Dobromir A. Rahnev and Hakwan C. Lau, Department of Psychology, Columbia University. We thank Will Penny, Matt Davidson, and Brian Maniscalco for helpful input. Hakwan C. Lau is supported by an internal grant from Columbia University.