Edited by: Paul E. M. Phillips, University of Washington, USA

Reviewed by: Bernd Weber, Rheinische Friedrich-Wilhelms-Universität Bonn, Germany; Cleotilde Gonzalez, Carnegie Mellon University, USA

*Correspondence: Dirk Ostwald, Arbeitsbereich Computational Cognitive Neuroscience, Department of Education and Psychology, Free University Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany

This article was submitted to Decision Neuroscience, a section of the journal Frontiers in Psychology

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. The work on DFE is held together by the central question of how humans search for information and make decisions with economic consequences in uncertain environments. Perhaps the most popular experimental paradigm used to study DFE is the “sampling paradigm” (Hertwig et al.,

A commonly employed measure to capture the behavior of participants in the sampling paradigm is the sample size, i.e., the number of draws which participants choose to obtain from each box prior to terminating sampling. A repeated finding is that the typical sample sizes are rather low, dependent on the values of the observed outcomes (Hertwig et al.,

The notion of an “optimal” sample size is, of course, a relative concept: A given sample size can be optimal with respect to certain constraints and suboptimal with respect to others. As an example, consider a participant whose objective is to invest as little time as possible in the experiment. For her, the optimal sample size would be zero, and the decision for a distribution that is “better” at the final draw would correspond to random guessing. Because the notion of “optimal” sample sizes is a relative concept, we have to introduce a set of assumptions or constraints that define when the benchmark applies. These assumptions may be classified into one “strong” assumption and several “weaker” assumptions.

The “strong” assumption that we make is that the problem of how much to sample can be solved by some form of statistical inference. This assumption can be stated as follows: a reasonable approach to making a sampling-based choice is to estimate the expected value of each distribution and to choose the one deemed to offer the larger expected value. This assumption is common and implicit in many previous experimental and theoretical studies of the sampling paradigm. These studies have mostly been carried out in the context of the “description-experience gap.” The description-experience gap is the experimental finding that choice behavior differs systematically depending on whether information about payoffs and probabilities is learned sequentially as in DFE or stated explicitly in terms of outcomes and their associated probabilities, a paradigm referred to as “decisions from description” (DFD). As reviewed in Hertwig (

Importantly, as will be seen, the “inference assumption” renders the notion of optimal sample sizes in DFE a concept that can be addressed in the maximum expected utility (MEU) framework for statistical decisions, originally developed by Raïffa and Schlaifer (

In addition to the inference assumption, we introduce a number of “weaker” constraints or assumptions to arrive at the notion of an “optimal” sample size. These include, for example, the invocation of utility functions, sampling costs, pre-determined sample sizes, and available information about the distributions' possible outcomes. These assumptions are “weaker,” because they are ultimately consequences of the inference assumption. In other words, if we were to refute the inference assumption, these additional assumptions would presumably not enter a framework to determine optimal sample sizes, but others would in their stead. In addition, as discussed below, some of these assumptions (for example the available information about outcomes) can readily be relaxed by extending the current approach.

The outline of our manuscript is as follows. In the following Sections, we formalize the sampling paradigm and precisely specify the “inference assumption.” In the Section “The Maximal Expected Utility Framework”, we provide a general introduction to the optimal sample size approach by Pratt et al. (

In sum, the manuscript makes the following novel contributions to the literature on DFE. First and foremost, we explicitly relate the question of optimal sample sizes in the sampling paradigm to the classical literature on statistical decision theory. To this end, we provide a simplified treatment of the optimal sample size theory developed in Raïffa and Schlaifer (

A few preliminary remarks on the mathematical notation are in order. For simplicity, we use the applied notation for probability distributions specified in terms of probability mass or density functions. In this notation, max_{x ∈ X} and min_{x ∈ X} denote maximization and minimization over the elements of a set X, and x^{max}, x^{min} ∈ X denote the corresponding maximizing and minimizing elements.

The sampling paradigm in research on DFE can be conceived in the following form: A human participant is presented with two “binary payoff distributions” G_A and G_B, where by a “binary payoff distribution” G we understand a probability distribution of a random variable taking two possible outcome values o_1, o_2 ∈ ℤ, specified by the probability mass function.

We denote the parameter θ of the binary payoff distribution G_A by θ_A and the parameter θ of the binary payoff distribution G_B by θ_B. After being permitted to sample from the distributions G_A and G_B without economic consequences, the participant's task under the inference assumption is to infer the expected values of G_A and G_B, or, in other words, their parameters θ_A and θ_B. In this manuscript, we are thus concerned with the question of how many samples from each distribution the agent “optimally” draws to infer the parameters θ_A and θ_B, where “optimally” is understood in a relative way with respect to some boundary conditions to which we shortly turn. Please note again that the “inference approach” to the sampling paradigm is neither exclusive nor exhaustive; alternative formalizations are possible (see Discussion), and the inference approach is merely the approach we take here.

In the first ecology, two binary payoff distributions were uniformly sampled from ℕ_10 × ℕ_10 × [0, 1], and “final” samples were obtained either from the binary payoff distribution with the higher expected value (“higher EV”) or from either payoff distribution with equal probability (“Guess”). The cumulative returns of the “higher EV” choice rule outperform the random choice rule from approximately 40 DFE problems onwards in this realization. In the “Mixed Ecology,” two binary payoff distributions were uniformly sampled from {−10, −9, …, 9, 10} × {−10, −9, …, 9, 10} × [0, 1]. While the random choice rule results in approximately equal gains and losses and thus a cumulative return centered around 0, the higher EV choice rule yields cumulative gains. Finally, in the “Gain and Loss Ecology,” one binary payoff distribution was sampled from ℕ_10 × ℕ_10 × [0, 1], while the other was sampled from {−10, −9, …, −1} × {−10, −9, …, −1} × [0, 1]. Again, the random choice rule results in approximately equal gains and losses, while the higher EV rule always prefers the binary payoff distribution with the positive expected value in the final choice.

The inference approach to the sampling paradigm may itself be addressed as a formalized decision problem in at least two ways: either (1) the participant decides on the sample size for each distribution before starting to draw samples from them, or (2) the participant decides after each obtained sample whether to next sample from distribution G_A, to sample from distribution G_B, or to terminate the sampling process altogether and obtain a final sample with economic consequences from either G_A or G_B. The latter approach corresponds to a sequential decision problem (Powell,

As a first step, we simplify the problem as follows: Because we assume that the two distributions do not differ in their characteristics as specified by Equations (1) and (2), and thus the optimal sample size derived for each of the distributions will be functionally identical, we formulate a simplified problem, to which we shall refer in the following as the “simplified sampling problem” (SSP) in DFE: Assume a decision maker is faced with a binary payoff distribution G specified by a probability mass function over two outcomes o_1, o_2 ∈ ℤ given by:

In analogy to the discussion above, we postulate that a reasonable “inference” strategy for a decision maker, aiming to maximize expected return, is to choose to draw from the distribution for a monetary return if she believes that the expected value of the distribution is larger than zero, and not if she believes that it is equal to or smaller than zero—the former would leave the decision maker's expected cumulative return identical, the latter would decrease it. In Figure
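The inference strategy just described can be sketched in a few lines of code (a minimal simulation; the outcome values o_1 = 3, o_2 = −4 and the simple relative-frequency estimator of θ are illustrative assumptions, not part of the formal development below):

```python
import random

def sample_ssp(theta, o1, o2, n, rng):
    """Draw n outcomes from a binary payoff distribution G with p(o1) = theta."""
    return [o1 if rng.random() < theta else o2 for _ in range(n)]

def inference_decision(draws, o1, o2):
    """Estimate theta by the relative frequency of o1 and choose to draw
    from the distribution iff the estimated expected value exceeds zero."""
    theta_hat = sum(d == o1 for d in draws) / len(draws)
    ev_hat = theta_hat * o1 + (1 - theta_hat) * o2
    return ev_hat > 0

# Illustrative run: sample 20 outcomes, then apply the final decision rule.
rng = random.Random(1)
draws = sample_ssp(theta=0.7, o1=3, o2=-4, n=20, rng=rng)
decision = inference_decision(draws, 3, -4)
```

The question addressed in the remainder of the manuscript is how large the sample size n should be chosen before applying this final decision rule.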

The final decision strategy in the inference approach to the SSP is trivial. However, the question of an “optimal sample size” for inference of the expected value is not. According to Lindley (

In this section we review the general framework for optimal decisions under uncertainty, as formulated by Raïffa and Schlaifer (

The MEU framework rests on the specification of the Cartesian product of four sets: a set of possible “experiments” E, a set of possible experimental outcomes Z, a set of possible “actions” A, and a set of possible “states of the world” Θ. In the present context, an experiment e ∈ E corresponds to the observation of a given number of samples, including the null experiment e_0 of observing no samples. An outcome z ∈ Z is the result of the chosen experiment, an action a ∈ A is the decision maker's choice after experimentation, and a state θ ∈ Θ is the true, but unknown, state of the world.

The second ingredient of the MEU framework is a “utility function” defined on this Cartesian product space:
u: E × Z × A × Θ → ℝ. It is commonly assumed that the utility function decomposes additively into a “terminal utility” u_t determined by the chosen action a and the state of the world θ, and a “sampling utility” u_s determined by the experiment e and its outcome z. We define the cost c_s of experimentation as the negative of the utility of experimentation or sampling: c_s(e, z) := −u_s(e, z).

Finally, the MEU framework assumes that the decision maker can specify a probability measure on the space Θ × Z for every experiment e ∈ E, in terms of a joint probability distribution p_e(θ, z). From this joint distribution, four entities derive:

The decision maker's “prior probability” over states of the world, p(θ).

The decision maker's marginal probability of experimental outcomes, p_e(z).

The decision maker's “likelihood” of experimental outcomes given the state of the world, p_e(z|θ).

The decision maker's posterior probability over states of the world given the experimental outcome, p_e(θ|z).

Based on the preliminaries outlined above the optimal experiment in the MEU framework is given by:
e^opt = argmax_{e ∈ E} ∫ [ max_{a ∈ A} ∫ u(e, z, a, θ) p_e(θ|z) dθ ] p_e(z) dz.  (8)

This expression is evaluated from the inside out: In a first step, for every experiment e and outcome z, the posterior expected utility of each action a is evaluated under the posterior distribution p_e(θ|z). In a second step, the action maximizing this posterior expected utility is selected. In the third step, the function of e that results from averaging the maximal posterior expected utilities under the marginal outcome distribution p_e(z) is formed, and the optimal experiment is the experiment e^opt which maximizes the function.

By introducing additional assumptions about the utility function and the nature of the space of experiments E, Equation (8) can be specialized. Specifically, we assume that the utility function decomposes additively into a terminal utility u_t and a cost of sampling c_s. In lieu of Equation (8) this additional assumption yields the special case:
e^opt = argmax_{e ∈ E} ∫ [ max_{a ∈ A} ∫ u_t(a, θ) p_e(θ|z) dθ − c_s(e, z) ] p_e(z) dz. We further assume that the space of experiments corresponds to the set of possible sample sizes n ∈ ℕ_0, that an experiment of size n yields an outcome z_n, and that the cost of sampling is independent of the experimental outcome, i.e., c_s(e, z) = c_s(n):
This then yields the optimal sample size, n^opt, by:

n^opt = argmax_{n ∈ ℕ_0} { ∫ [ max_{a ∈ A} ∫ u_t(a, θ) p_n(θ|z_n) dθ ] p_n(z_n) dz_n − c_s(n) }.  (14)

Having established the general MEU framework and a first specialization of it in Equation (14) as a basis for its application to the SSP, we are now in the position to make this application more concrete. In the Sections “Optimal Sample Sizes for Parameter Point Inference” and “Optimal Sample Sizes for Bayesian Parameter Inference,” we will illustrate two formulations of the inference approach to optimal sample sizes in DFE that differ in their notions of “inferences,” or, in general MEU terms, their notions of “actions.” Specifically, in the Sections “Optimal Sample Sizes for Parameter Point Inference” and “Optimal Sample Sizes for Quadratic Terminal Loss and Beta Prior,” we conceive the action space as a space of point estimates for the true, but unknown, state of the SSP. In the Section “Optimal Sample Sizes for Bayesian Parameter Inference,” we conceive the action space as a space of probability distributions over the states of the world. Informally, these approaches may be conceived as different types of inferences the optimizing decision maker performs as actions: “classical point parameter estimation” in the former, and “Bayesian inference” in the latter case (see Figure

In this Section, we consider the case in which the decision maker's action is a point estimate of the true, but unknown, parameter θ^* of the SSP. Specifically, as will become clear below, the state space

Based on the developments in Section Formalization of the DFE Problem, we are in the position to derive the optimal sample size as defined by the MEU framework in Equation (14). To recapitulate, we assumed that faced with a binary payoff distribution, the optimizing decision maker uses an inference approach to determine the expected value of the distribution, with the aim of choosing to sample from this distribution in a final draw with economic consequences if and only if the expected value is larger than zero. Second, we assumed that prior to entering the sampling stage, the decision maker evaluates the optimal sample size to take from the distribution. To apply the MEU framework to the SSP, we take advantage of the following simple idea: We identify the SSP with the well-studied Bernoulli distribution parameter estimation problem, by adopting the coding scheme o_1 := 1 (“success”) and o_2 := 0 (“fail”) for the binary payoff distribution outcomes. The assumption of determining the optimal sample size a priori then becomes a Binomial sampling problem (as opposed to Pascal sampling approaches). In this scenario, if the decision maker has committed to an estimate of the Bernoulli distribution parameter, she may evaluate the expected value of the distribution by means of Equation (2) and thus proceed to the final decision on whether to sample the distribution for a monetary return, or not. Note that this approach assumes that the outcomes of the binary payoff distribution are, by one means or another, known to the decision maker, an issue we will return to in the Discussion. Based on this idea and under the additional assumption of parameter point inference, the definition of the remaining components of the MEU framework for application to the SSP is straightforward:

We define the set of possible states of the world, that is, the set of possible true, but unknown, values of the underlying Bernoulli parameter, by Θ := [0, 1]. We identify the set of experiments with the set of possible sample sizes n ∈ ℕ_0 taken for each experiment and replace the raw outcome space {0, 1}^n by the sufficient statistic outcome space ℕ_n := {0, 1, …, n}, whose elements z_n correspond to the number of observed 1's among the total number of samples n. Finally, we specify the cost of sampling as proportional to the sample size, c_s(n) := cn, where c > 0 denotes the cost per sample.
For the distribution of the experimental outcomes given the state of the world, i.e., the likelihood p_n(z_n|θ), we choose the Binomial distribution throughout the remainder of this study. The Binomial distribution specifies the probability of z_n observations of 1's (or “successes”) in an independent and identical Bernoulli sampling sequence of length n, i.e., p_n(z_n|θ) = C(n, z_n) θ^{z_n} (1 − θ)^{n − z_n}.

Based on the definitions (15), (16), (18), (19), and (21), we now explore optimal sample sizes for point parameter inference in the SSP in a setting that allows for the analytical derivation of optimal sample sizes as a function of the prior distribution

As discussed in Pratt et al. (, it is convenient to express the terminal utility u_t equivalently in terms of a “terminal opportunity loss,” which differs from the negative terminal utility only by an additive constant.

A different perspective on the additive constant in (22) is afforded by considering the terminal utility of no experimentation. If the decision maker chooses not to experiment at all, i.e., the sample size is n = 0, the optimal action is the one that maximizes the expected terminal utility under the prior distribution p(θ) alone.

The specific loss function we use in the remainder of this Section is the “quadratic loss function,” a classically chosen loss function for point estimation problems (Lehmann and Casella, ), under which the terminal opportunity loss l_t(a, θ) is proportional to the squared deviation of the point estimate a from the true state θ:

To be able to derive optimal sample sizes, we further have to specify the form of the prior distribution p(θ) and the resulting marginal distribution of the experimental outcomes z_n.
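Assuming a Beta(α, β) prior over θ — the form used throughout this Section — the posterior after observing z_n successes in n Bernoulli trials is again a beta distribution with updated parameters. A sketch of this conjugate update (the specific prior parameters in the example are illustrative):

```python
import math

def beta_pdf(theta, a, b):
    """Density of the Beta(a, b) distribution at theta in (0, 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(theta)
                    + (b - 1) * math.log(1 - theta))

def beta_posterior(a, b, z_n, n):
    """Conjugate update: a Beta(a, b) prior combined with a Binomial
    likelihood for z_n successes in n trials yields Beta(a + z_n, b + n - z_n)."""
    return a + z_n, b + n - z_n

a_post, b_post = beta_posterior(2.0, 2.0, 8, 10)  # Beta(10, 4) posterior
```

This update underlies all posterior quantities computed in the remainder of the Section.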

Figure: results for each outcome z_n as a function of the prior distribution parameters α, β (rows) and the sample size n.

Based on the definitions (24) and (25), we may now evaluate optimal sample sizes as a function of the prior parameters α, β and the value of the sampling cost proportionality constant c.

Note that we have exchanged maximization with minimization for simplicity. Because the additive terminal opportunity loss constant is independent of θ and z_n, we may equivalently write (28) as:

As shown by Pratt et al. (

We first derive the probability distributions involved based on the specification of p_n(θ, z_n) in (27). To this end, note that the conditional distribution of θ given z_n is well-known to conform to an updated beta distribution.
We next consider the marginal distribution of the experimental outcomes z_n, i.e., the parametric form of: p_n(z_n) = C(n, z_n) B(α + z_n, β + n − z_n) / B(α, β),
for z_n = 0, 1, …, n, where B denotes the beta function.

Based on (30) one finds that the inner integral expression in the first term of (29) evaluates to:

The quantity (33) is referred to as “posterior terminal opportunity loss” and, for a given setting of the prior parameters, is a function of the experimental outcome z_n. Its minimizer is the optimal posterior act, i.e., the point estimate minimizing the expected loss under the posterior distribution given z_n. Interestingly, in contrast to estimators derived in the standard framework of classical statistics and their associated optimality theory, by means of the MEU framework, this estimator has already been established as “optimal” for every possible state of the world. In Figure

Posterior terminal opportunity losses for different settings of the prior parameters as functions of the point estimate and their respective minimal points. The higher the prior certainty (i.e., the lower the variance of the prior beta distribution), the less dependent is the location of the posterior terminal opportunity loss on the point parameter estimate. For z_n = 8, this figure depicts the posterior terminal opportunity loss at its minimum. Notably, the minimized posterior terminal opportunity loss is symmetric in the prior parameters and decreases with higher prior certainty. Note that the posterior terminal opportunity loss is a function of the experimental outcome.
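Under the quadratic terminal loss used here, the optimal posterior act is the posterior mean and the minimized posterior terminal opportunity loss is, up to the loss proportionality constant, the posterior variance. The following sketch computes both and verifies the minimizer numerically; the prior parameters α = β = 2 and the outcome z_n = 8 of n = 10 samples are illustrative:

```python
import math

def optimal_posterior_act(a, b, z_n, n):
    """Posterior mean and variance of the Beta(a + z_n, b + n - z_n) posterior;
    under quadratic loss the mean is the optimal act and the variance the
    minimized posterior expected loss (up to the loss constant)."""
    ap, bp = a + z_n, b + n - z_n
    mean = ap / (ap + bp)
    var = ap * bp / ((ap + bp) ** 2 * (ap + bp + 1))
    return mean, var

def grid_expected_loss(act, a, b, z_n, n, grid=2000):
    """Posterior expected quadratic loss of an act, via a midpoint Riemann sum."""
    ap, bp = a + z_n, b + n - z_n
    log_norm = math.lgamma(ap + bp) - math.lgamma(ap) - math.lgamma(bp)
    total = 0.0
    for i in range(grid):
        th = (i + 0.5) / grid
        pdf = math.exp(log_norm + (ap - 1) * math.log(th)
                       + (bp - 1) * math.log(1 - th))
        total += (act - th) ** 2 * pdf / grid
    return total

mean, min_loss = optimal_posterior_act(2.0, 2.0, 8, 10)
```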

Substitution of the optimal posterior act obviously fulfills the minimization operation in (29) and integration with respect to the marginal outcome distribution then yields the complete integral term in (29) as:
To render this expression amenable to standard calculus, we replace the discrete set ℕ_0 by ℝ_{[0, ∞]} in (36) to obtain a differentiable function continuous in n, from which we may then determine the optimal sample size n^opt. In Figure

In summary, assuming that the decision maker in the SSP (1) is adopting the “inference approach,” (2) is willing to commit to a classical, squared-loss parameter inference scheme, and (3) is willing to quantify her prior uncertainty about the binary payoff distribution parameter using a beta distribution, she may thus read off the optimal sample size depending on her subjective sampling cost constant
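This trade-off admits a compact numerical sketch. Under a Beta(α, β) prior and binomial sampling, the preposterior expectation of the posterior variance is αβ/((α + β)(α + β + 1)(α + β + n)); adding a linear sampling cost cn and relaxing n to the nonnegative reals gives a closed-form minimizer, which can then be rounded back to an integer. The terminal loss constant k_t and the parameter values below are illustrative:

```python
import math

def optimal_n_quadratic(a, b, k_t=1.0, c=1e-4):
    """Minimize f(n) = k_t * a * b / ((a + b) * (a + b + 1) * (a + b + n)) + c * n
    over n >= 0; setting f'(n) = 0 gives
    (a + b + n)^2 = k_t * a * b / (c * (a + b) * (a + b + 1))."""
    s = a + b
    q = k_t * a * b / (s * (s + 1))
    n_star = max(math.sqrt(q / c) - s, 0.0)   # continuous minimizer, clipped at 0
    f = lambda n: k_t * a * b / (s * (s + 1) * (s + n)) + c * n
    # return the better of the two neighboring integers
    return min({math.floor(n_star), math.ceil(n_star)}, key=f)
```

Consistent with intuition, the optimal sample size grows as the cost per sample c shrinks and collapses to zero when sampling is sufficiently expensive.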

Until now we assumed that the decision maker determines the optimal sample size with the aim to obtain a point estimate of the “true state of the world” (that is, the parameter θ of the binary payoff distribution). To this end, the action space

Continuing from the introduction of the terminal utility function u_t in Equation (5), we consider the consequences of identifying the action space with the space of probability distributions over the states of the world.

To introduce the notion of “information from data” (Bernardo and Smith,

Because the only argument of the terminal utility u_t that depends on the chosen action is the reported probability distribution, we may write:

By definition, the logarithmic score function is maximized for the posterior distribution p_e(θ|z). In the current setting, the optimal action given an outcome z_n is thus to report the posterior distribution p_n(θ|z_n) itself.

As in the Section “Optimal Sample Sizes for Parameter Point Inference,” we define the set of possible states of the world, that is, the set of possible true, but unknown, values of the Bernoulli distribution parameter, by Θ := [0, 1], identify the set of experiments with the set of possible sample sizes n ∈ ℕ_0, and assume Z to be the sufficient outcome space ℕ_n of a binomial sampling approach. As discussed above, we assume

Note that the assumption of the elements of

The form of the utility function was discussed in detail above. Further, as above we assume the following probability measure on the Cartesian product of the space of states of the world and experimental outcomes:

To obtain the optimal sample size, we now consider Equation (47), which, based on the specifications above, evaluates to:

In this equation, the integral term mirrors the minimized posterior terminal opportunity loss of Equation (29) and intuitively corresponds to the expected KL-divergence between the posterior and prior beta distributions under the marginal distribution of the data. The KL-divergence between two beta distributions is well-known (Liu et al.,

For a proof, please refer to the Supplementary Material. Unfortunately, unlike the corresponding function in the case of parameter point inference, the function

In summary, assuming that the decision maker in the SSP (1) is adopting the “inference approach,” (2) is willing to commit to a Bayesian parameter estimation scheme in which the utility of sampling is expressed as the information (in an information-theoretic sense) about the binary payoff distribution parameter, and (3) is willing to quantify her prior uncertainty about the distribution parameter using a beta distribution, she may thus read off the optimal sample size depending on her subjective sampling cost constant
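The expected utility of sampling in this Bayesian formulation — the expected KL-divergence between the Beta(α + z_n, β + n − z_n) posterior and the Beta(α, β) prior under the beta-binomial marginal of z_n — can be evaluated numerically as follows. The Riemann-sum evaluation of the KL-divergence is a numerical convenience standing in for the closed-form expression referenced in the text:

```python
import math

def log_beta_pdf(th, a, b):
    """Log density of the Beta(a, b) distribution at th in (0, 1)."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1) * math.log(th) + (b - 1) * math.log(1 - th))

def log_betabinom_pmf(z, n, a, b):
    """Log of the marginal (beta-binomial) probability of z successes in n trials."""
    return (math.lgamma(n + 1) - math.lgamma(z + 1) - math.lgamma(n - z + 1)
            + math.lgamma(a + z) + math.lgamma(b + n - z) - math.lgamma(a + b + n)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def kl_beta(a1, b1, a0, b0, grid=2000):
    """KL divergence KL(Beta(a1, b1) || Beta(a0, b0)) via a Riemann sum."""
    total = 0.0
    for i in range(1, grid):
        th = i / grid
        p = math.exp(log_beta_pdf(th, a1, b1))
        total += p * (log_beta_pdf(th, a1, b1) - log_beta_pdf(th, a0, b0)) / grid
    return total

def expected_information_gain(n, a, b):
    """E_z[ KL(posterior || prior) ] under the beta-binomial marginal of z_n."""
    return sum(math.exp(log_betabinom_pmf(z, n, a, b))
               * kl_beta(a + z, b + n - z, a, b) for z in range(n + 1))
```

An optimal sample size then obtains as the maximizer over n of expected_information_gain(n, α, β) − cn for a given sampling cost constant c.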

In our application of the MEU framework for parameter point or interval probability estimation we have so far assumed analytically tractable parameter prior probability distributions and terminal utility functions mostly for mathematical convenience. However, the MEU framework is by no means limited to these special classes of probability distributions and terminal utility functions. In this Section we demonstrate how the optimal sample size for the inference approach to the SSP can be derived with the help of a computer for arbitrary, but numerically evaluable, prior distributions and terminal utility functions. As we elaborate below, this approach is of particular relevance for applications of the theory developed here in an experimental context. Note that our demonstration merely serves as a proof-of-principle: it neither aims for a systematic evaluation of the errors introduced by the numerical approximation of analytic quantities nor attempts to provide an exhaustive coverage of possible prior and terminal utility functions.

With respect to the MEU framework, we first note that if one specifies the marginal (prior) distribution p(θ) in terms of a probability mass function over a finite set of parameter values, the posterior distribution p_n(θ|z_n) also assumes the form of a probability mass function, which can be evaluated according to Bayes theorem as follows:

Figure: the posterior distribution for z_n = 8 in both analytically evaluated probability density form (red) and numerically evaluated probability mass function form (blue).

We further note that the integration operations can, for finite discrete state and outcome spaces and probability mass functions defined over these spaces, be evaluated by means of scalar products. Finally, the respective maximization operations can be evaluated using standard list sorting techniques available in numerical computing. Figure

Figure: numerical evaluation for z_n = 9.

As a proof-of-principle that the MEU framework can yield an optimal sample size for arbitrary prior distributions and terminal utility functions, we consider prior probability mass and terminal utility functions for a discretization of the state space into 10 equally spaced bins (Figure ). We then specify the binomial likelihood p_n(z_n|θ) and numerically derive the marginal distribution over outcomes (Figure ), yielding the optimal sample size n^opt = 19.
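The grid computations described in this Section can be sketched as follows; the 10-bin discretization mirrors the one above, while the uniform prior mass function and the particular outcome in the example are illustrative placeholders:

```python
import math

def binom_pmf(z, n, th):
    """Binomial likelihood of z successes in n trials with parameter th."""
    return math.comb(n, z) * th ** z * (1 - th) ** (n - z)

def grid_posterior(prior, thetas, z, n):
    """Bayes theorem on a finite grid: posterior mass is proportional to
    likelihood times prior mass; the normalizer is the marginal p_n(z_n)."""
    joint = [binom_pmf(z, n, th) * p for th, p in zip(thetas, prior)]
    marginal = sum(joint)                      # integration as a scalar product
    return [j / marginal for j in joint], marginal

thetas = [(i + 0.5) / 10 for i in range(10)]   # 10 equally spaced bins
prior = [1 / 10] * 10                          # uniform prior mass function
post, marg = grid_posterior(prior, thetas, z=8, n=10)
```

Because all integrals reduce to finite sums, the same scalar-product pattern extends directly to the evaluation of expected utilities and, via a loop over n, to the numerical determination of the optimal sample size.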

In summary, assuming that the decision maker (1) adopts the inference approach, (2) commits to arbitrary, but numerically evaluable prior distribution probability mass and utility functions, and (3) has the numerical computing facilities, she may thus read off the optimal sample size depending on her subjective sampling cost constant

Leaving technicalities aside, we next elaborate on the applicability of the numerical solutions discussed above in a concrete experimental context. We address this scenario first from the perspective of the decision maker, i.e., the experimental participant, and then from the perspective of the experimenter.

Consider an experimental participant faced with the SSP. In line with the inferential notion of our framework, we assume that the participant would like to solve the question of how many samples to draw before deciding whether to take a final draw with economic consequences by means of estimating the expected value of the SSP. As we focused our discussion on estimating the binary payoff distribution parameter θ, we have to implicitly assume that the participant is aware of the SSP's binary payoff distribution functional form and has knowledge about the values of the possible outcomes (e.g., by having been exposed to experimental trials of the same task previously). Our framework next assumes that the participant has a means to quantify her initial uncertainty about the value of θ in terms of a prior distribution over discrete possible values of θ. For the current purposes, we assume that this has resulted in the distribution shown in Figure . The participant can then evaluate, for every candidate sample size n, each possible outcome z_n (number of observations of o_1) up to the maximal number of observations of o_1 in a pre-specified sample size, and thereby determine the optimal sample size; an example based on the true parameter value θ^* = 0.45 is shown in Figure

We next consider the experimental applicability of our numerical framework from the perspective of the experimenter. By considering the framework discussed here, we obviously assume that the experimenter is led by the intuition that the participant's prior assumptions about the state of the world and economic preferences are of importance when studying decision making under uncertainty. More specifically, an experimenter may view the framework discussed herein from two perspectives. Firstly, the experimenter may conceive the proposed framework as a normative “null” model, which has no psychological plausibility, but can serve as an objective predictor for subjectively optimal behavior. In other words, assuming that the experimenter has made the participant's prior assumptions, terminal utility function, and sampling cost explicit (for example by having explained the binary payoff distribution character of the SSP to the participant, having the participant reveal her prior belief over discrete values of θ, for example by means of a visual analog scale, and likewise having revealed the participant's terminal utility function and sampling cost), the experimenter can test whether the participant behaves in accordance with her subjective preferences or not. In case of the former, the question arises how the participant's neurocognitive apparatus is able to implement (or at least approximate) the non-trivial computations involved. In case of the latter, the question arises which cognitive processes may distort the mapping from prior beliefs and preferences to selected sample sizes—which in turn may lead to more psychologically plausible accounts of the decision processes in the SSP. Secondly, the experimenter may conceive the framework as a valid working hypothesis and, by fixing or inducing specific components of the framework, study others. For example, assume that the participant has specified her prior beliefs over the values of θ and her sampling cost constant

In this study we have shown how a normative benchmark for optimal sample sizes in the DFE sampling paradigm can be developed based on results from classical statistical decision theory. More specifically, we have shown that assuming an inference approach to the sampling problem in DFE, optimal sample sizes are dependent on the desired inference type and can be quantitatively related to the decision maker's prior beliefs about the problem, the decision maker's value assigned to identifying the correct solution, and the decision maker's cost assigned to each sample. We conclude by discussing the benefits and limitations of this framework for generating testable predictions and point to potential applications of the framework in experimental cognitive psychology.

Perhaps the most fundamental benefit of the MEU framework in the context of DFE is that it is explicit and constructive: upon specification of the necessary concepts (the state, action, experimental, and outcome spaces, the utility function, and the probability measure on the product of the state and outcome spaces) it will yield an optimal sample size. From the perspective of behavioral experimentation this is helpful, because search behavior in DFE can be tested against quantitative predictions. Further, because of its generality, the MEU framework can be adapted to a wide range of conceivable utility functions (for example those incorporating a notion of risk-sensitivity; Shen et al.,

Perhaps the most fundamental limitation of employing the MEU framework as a generative model for behavioral DFE data is that it is, as presented here, “non-identifiable.” By this we mean, as shown by Figures

Because of its indefiniteness, an unlimited set of objections can be raised against the current approach from the perspective of cognitive process modeling. We thus limit ourselves to a set of objections for which we see constructive rejoinders at present. A first objection may be that the current application of the MEU framework assumes that participants have knowledge of the (to be) observed outcomes such that estimating the state of the world, i.e., the parameter θ ∈ [0, 1] in the SSP, is actually the only necessary action. We agree that this assumption has been made here (for experimental approaches in DFE that work on a similar basis, see selected experiments in Erev et al., ). A more general treatment may consider a joint inference of (o_1, o_2, θ), i.e., of observed outcomes and parameter, where the sufficient statistics for the outcomes may correspond to the first observations. It should also be noted that in general, upon sampling, both outcomes will have been observed, permitting the evaluation of the expected value estimate based on the inferred probability parameter values. A second objection is that it is implausible that participants in DFE studies evaluate optimal sample sizes for each payoff distribution prior to starting the sampling. Instead, they may, after each observation (or sets thereof), decide whether to (a) terminate the exploration phase and continue to the final incentivized draw, (b) continue sampling from the currently investigated payoff distribution, or (c) terminate sampling from the currently investigated payoff distribution and to start (or continue) to sample from another payoff distribution. A model class appropriate to capture these intuitions is offered by the theory of (partially observable) Markov decision processes (Wiering and Otterlo,

Finally, we note that Vul et al. ( have argued that, once the cost of sampling is taken into account, decisions based on only few samples from the posterior distribution p_e(θ|z) can be globally optimal for reward-maximizing agents.

In summary, a broad empirical literature on DFE has developed over the last decade in behavioral psychology, which has shown that human choice behavior can differ remarkably depending on how information is presented and sampled for uncertain choices with economic consequences. However, so far few attempts have been made to study the quantitative nature of human sampling behavior in DFE by means of computational modeling. Specifically, no normative benchmark has been developed that would allow one to judge whether and when the observed sample sizes drawn by human observers are “reasonable.” In this study, we related the DFE sampling paradigm to the classical and modern literature on statistical decision making and reviewed and extended a framework based on which such a normative benchmark can be developed. Specifically, we have shown how, under a probabilistic inference assumption, the optimal sample size in DFE can be quantitatively related to the decision maker's preferred type of inference, prior beliefs about the payoff distributions at hand, and utility assigned to the inference's precision. Because of its quantitative nature, the framework introduced here has yielded directly testable predictions for the behavioral study of DFE. Moreover, given the strong conceptual similarity between the DFE sampling paradigm and evidence accumulation schemes as prevalent in research on perceptual decision making, we believe that the current study addresses key theoretical aspects of decision making under dynamic subjective uncertainty. Finally, we believe that the current study lays an important foundation for future theoretical efforts on the computational description of human behavior in the DFE sampling paradigm and provides a useful basis towards their experimental validation.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The Supplementary Material for this article can be found online at: