dfddm
Function dfddm() evaluates the density function (or probability density function, PDF) for the Ratcliff diffusion decision model (DDM) using different methods for approximating the full PDF, which contains an infinite sum. An empirical validation of the implemented methods is provided in the Validity Vignette. Timing benchmarks for the present methods and comparison with existing methods are provided in the Benchmark Vignette. Examples of using dfddm() for parameter estimation are provided in the Example Vignette.
Our implementation of the DDM has the following parameters: a ∈ (0, ∞) (threshold separation),
v ∈ (−∞, ∞) (drift rate),
t0 ∈ [0, ∞)
(non-decision time/response time constant), w ∈ (0, 1) (relative starting
point), sv ∈ (0, ∞)
(inter-trial-variability of drift), and σ ∈ (0, ∞) (diffusion coefficient of
the underlying Wiener Process). Please note that for this vignette, we
will refer to the inter-trial variability of drift as η instead of sv to make the notation in
the equations less confusing.
There are several different methods for approximating the PDF of the DDM, and there are three optional control parameters in dfddm() that can be used to indicate which method should be used in the function call: switch_mech, n_terms_small, and summation_small. For each method we describe, we include the parameter settings for the function call to dfddm() so that it uses the desired method. As these parameters are optional, leaving them blank results in the default implementation that is indicated later in this vignette. For general purpose use, we recommend ignoring these optional parameters so that the default settings are used, as this will be the fastest and most stable algorithm. Note that precedence for the optional parameters is first given to checking if the default implementation is selected. If not, precedence is then given to the switch_mech parameter value; for example, switch_mech = "large" will ignore the summation_small input value.
Note that there are actually two probability density functions of the DDM: the PDF with respect to the upper boundary, and the PDF with respect to the lower boundary. Following the precedent set by the literature, we use the general terminology “PDF” or “density function” to mean the probability density function with respect to the lower boundary. Should the probability density function with respect to the upper boundary be required, it may be calculated using the simple transformation fupper(t | v, η, a, w) = flower(t | − v, η, a, 1 − w).
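This reflection identity is easy to exercise numerically. Below is an illustrative Python sketch (fddm itself is an R package with a C++ backend, so these helper names are not from the package) that implements a truncated small-time, constant-drift density for the lower boundary with σ = 1 and obtains the upper-boundary density via the transformation:

```python
import math

def f_lower(t, v, a, w, k=20):
    # Truncated small-time series for the constant-drift DDM density
    # with respect to the lower boundary (sigma = 1 assumed).
    tp = t / a**2  # effective response time t' = t / a^2
    s = sum((w + 2*j) * math.exp(-(w + 2*j)**2 / (2*tp)) for j in range(-k, k + 1))
    return math.exp(-v*a*w - v**2 * t / 2) / a**2 * s / math.sqrt(2 * math.pi * tp**3)

def f_upper(t, v, a, w):
    # Upper-boundary density via f_upper(t | v, a, w) = f_lower(t | -v, a, 1 - w).
    return f_lower(t, -v, a, 1 - w)
```

For a symmetric starting point (w = 0.5) and zero drift, the two boundary densities coincide, which makes for a quick sanity check of the identity.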
Since the DDM is widely used in parameter estimation usually involving numerical optimization, significant effort has been put into making the evaluation of its density as fast as possible. However, the density function for the DDM is notorious for containing an unavoidable infinite sum; hence, the literature has produced a few different methods of approximating the density by truncating the infinite sum. This vignette details the various methods used in the literature to approximate the infinite sum in the density function for the DDM.
The author of the seminal book where the density function originates, Feller (1968) explains the derivation from first principles. In this derivation there is a step that requires taking a limit, and Feller (1968) provides two different – but equivalent – limiting processes that yield two different – but equal – forms of the density function. Each of these forms contains an infinite sum, and they are known individually as the large-time approximation and the small-time approximation because the former is on average faster when calculating the density for large response times and the latter is on average faster when calculating the density for small response times. The improved speed is due to the ease at which the infinite sums can be adequately approximated with a finite truncation. In other words, the efficiency of each approximation is dictated by the number of terms required in the truncated sum to achieve the pre-specified precision; fewer required terms translates to fewer computations and thus generally faster computation time.
When the drift rate is held constant (i.e. η = 0), the density function for the DDM is often written in a factorized form (Navarro and Fuss 2009): where $f_i(\frac{t}{a^2} | 0, 1, w)$ determines whether the large-time or small-time model will be used:
In an effort to simplify the terms inside the infinite summations as much as possible, we instead rewrite the constant drift rate density function as two separate functions without the factorization:
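To make the two time-scale representations concrete, here is an illustrative Python sketch (not fddm code) of both series for the constant-drift density with σ = 1, each truncated at a fixed number of terms; wherever both series have converged, they agree:

```python
import math

def dfddm_small(t, v, a, w, k=20):
    # Small-time series: sum over j = -k..k of (w + 2j) * exp(-(w + 2j)^2 / (2 t')).
    tp = t / a**2
    s = sum((w + 2*j) * math.exp(-(w + 2*j)**2 / (2*tp)) for j in range(-k, k + 1))
    return math.exp(-v*a*w - v**2 * t / 2) / a**2 * s / math.sqrt(2 * math.pi * tp**3)

def dfddm_large(t, v, a, w, k=20):
    # Large-time series: pi * sum over j = 1..k of j * exp(-j^2 pi^2 t' / 2) * sin(j pi w).
    tp = t / a**2
    s = sum(j * math.exp(-j**2 * math.pi**2 * tp / 2) * math.sin(j * math.pi * w)
            for j in range(1, k + 1))
    return math.exp(-v*a*w - v**2 * t / 2) / a**2 * math.pi * s
```

With k = 20 both series are fully converged for moderate t′, so the two functions return numerically identical densities; they differ only in how quickly they converge, which is the whole point of switching between them.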
In addition to having large-time and small-time variants, there exist two mathematically equivalent formulations for the infinite summation in the small-time density functions. The details and proof of equivalence of these two formulations are provided in the paper accompanying fddm, but we will continue to use the traditional formulation for the remainder of this vignette.
Now allowing the drift rate to vary across trials (i.e. η > 0), we should have two density functions. However, as only the small-time variable drift rate density function has been available in the literature (Blurton, Kesselmeier, and Gondan 2017), we provide the derivation of the large-time variable drift rate density function in the fddm paper. The large-time and small-time variable drift rate density functions are:
Immediately of note is that the infinite summation for each time scale is the same regardless of the inclusion of variability in the drift rate. It then follows that there exists a term M such that the density function for the constant drift rate multiplied by M yields the density function for the variable drift rate. That is, M ⋅ f(t|v, a, w) = f(t|v, a, w, η2) from the above equations; this value M works for converting both the large-time and small-time constant drift rate densities to variable drift rate densities. Although we do not use this term, it may be useful in adapting current algorithms to easily outputting the density with variable drift rate. Note that there are some issues with simply scaling the constant drift rate density, so please see the Validity Vignette for more information about the potential problems with this conversion. The multiplicative term M is given below:
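As a sketch of what M looks like, the following Python snippet (illustrative only; the exact form is reconstructed here from the small-time variable-drift density of Blurton, Kesselmeier, and Gondan (2017) with σ = 1, and should be checked against the fddm paper) computes the multiplicative term and reduces to 1 when η = 0:

```python
import math

def m_term(t, v, a, w, eta):
    # Multiplicative term M converting the constant-drift density into the
    # variable-drift density; reconstructed from Blurton et al. (2017), sigma = 1.
    g = 1.0 + eta**2 * t  # the factor (1 + eta^2 t) appears throughout
    return math.exp((eta**2 * a**2 * w**2 - 2*a*v*w - v**2*t) / (2*g)
                    + a*v*w + v**2 * t / 2) / math.sqrt(g)
```

Setting η = 0 collapses the exponent to zero and the scaling factor to one, recovering the constant-drift density as expected.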
The main issue with these families of density functions is that they all contain an infinite sum that must be approximated. Since there is no closed-form analytical solution to this infinite sum, we instead calculate only a partial sum by truncating the sequence of terms after a certain point. We cannot actually calculate the true value of the density function, but we can mathematically prove that we can get arbitrarily close to the true value; the proof of this fact is provided in the paper accompanying the fddm package. The nature of this truncation has been the topic of many papers in the literature, but the underlying idea supporting all of the methods is the same: the user specifies an allowable error tolerance, and the algorithm calculates terms of the infinite sum so that the truncated result is within the allowed error tolerance of the true value.
The methods in the literature pre-calculate the number of terms required for the infinite sum to converge within the allowed error tolerance, and this number of terms is referred to as kℓ and ks for the large-time and small-time infinite sums, respectively. Navarro and Fuss (2009) include a method for calculating kc, the number of required terms for the infinite sum when combining the density functions of the two time scales. In addition to these existing methods, we add a novel method that does not perform this pre-calculation, and we also provide two new combinations of the large-time and small-time density functions. Note that in each method that pre-calculates the number of terms, the response time t is scaled inversely by a2, that is $t' := \tfrac{t}{a^2}$. Also note that for the rest of this vignette, the ceiling function will be denoted by ⌈⋅⌉.
The large-time density functions, Equations $\eqref{eq:con-l}$ and $\eqref{eq:var-l}$, have an infinite sum that runs for all of the positive integers. For a given error tolerance ϵ, Navarro and Fuss (2009) provide an expression for kℓ, the number of terms required for the large-time infinite sum to be within ϵ of the true value of the density function. Thus the infinite sum becomes finite:
It remains to find the value of kℓNav that ensures the truncated sum is ϵ-close to the true value. Navarro and Fuss (2009) provide a derivation in their paper that finds an upper bound for the tail of the sum, the sum of all terms greater than kℓNav (i.e., the error). Then they back-calculate the number of terms required to force this upper bound on the error to be less than ϵ, since then the actual error must also be less than ϵ. The resulting number of terms is:
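Transcribed into Python for illustration (the expression follows the published Navarro and Fuss (2009) bound; t_prime is the scaled response time t′ = t/a²):

```python
import math

def kl_nav(t_prime, eps):
    # Number of large-time terms needed for truncation error <= eps
    # (Navarro & Fuss, 2009).
    if math.pi * t_prime * eps < 1.0:  # the logarithmic bound is usable here
        kl = math.sqrt(-2.0 * math.log(math.pi * t_prime * eps)
                       / (math.pi**2 * t_prime))
        kl = max(kl, 1.0 / (math.pi * math.sqrt(t_prime)))
    else:                              # otherwise the simpler bound suffices
        kl = 1.0 / (math.pi * math.sqrt(t_prime))
    return math.ceil(kl)
```

Note how few terms are needed: even at a fairly strict tolerance of 10⁻⁶, only a handful of terms are required once t′ is moderately large, which is why the large-time method excels in that region of the parameter space.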
This method is often viewed as the most inefficient of the available options in the literature; however, this method proves to be extremely efficient in particular areas of the parameter space (typically for large t′). To implement this method in dfddm() (and completely excluding the small-time methods), the user must set the parameter switch_mech = "large" in the function call; in this case, the other parameters are ignored.
The small-time approximations, Equations $\eqref{eq:con-s}$ and $\eqref{eq:var-s}$, also contain an infinite sum, but this sum runs over all of the integers – from negative infinity to positive infinity. Given this infinite nature in both directions, it is impossible to rigorously define the number of terms required to achieve the ϵ-accuracy because we don’t know where to start counting the terms. To solve this issue, we rearrange the terms in the sum into the sequence {b0, b−1, b1, …, b−j, bj, b−(j + 1), bj + 1, …}; this allows us not only to count the terms in a sensible manner but also to define ks as the index of the sequence where the truncation should occur. Then we can write the truncated version of the sum:
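The reordering is straightforward to express in code. The Python sketch below (illustrative, not fddm's implementation) generates the terms of the standardized small-time sum (a = 1, constant drift) in the order b₀, b₋₁, b₁, b₋₂, b₂, … and forms the partial sum of the first ks terms:

```python
import math

def small_time_terms(w, t_prime):
    # Yield terms in the reordered sequence b_0, b_-1, b_1, b_-2, b_2, ...
    # where b_j = (w + 2j) * exp(-(w + 2j)^2 / (2 t')).
    yield w * math.exp(-w**2 / (2.0 * t_prime))
    j = 1
    while True:
        for jj in (-j, j):
            c = w + 2 * jj
            yield c * math.exp(-c**2 / (2.0 * t_prime))
        j += 1

def truncated_small_sum(w, t_prime, ks):
    # Partial sum of the first ks terms of the reordered sequence.
    gen = small_time_terms(w, t_prime)
    return sum(next(gen) for _ in range(ks))
```

Dividing this partial sum by √(2π t′³) recovers the standardized small-time density; the partial sums settle quickly because of the exponential decay of the terms.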
To choose the small-time methods when using dfddm() (and completely excluding the large-time method), set the optional parameter switch_mech = "small" in the function call. You can also set the optional control parameter summation_small = "2017" or summation_small = "2014", but it is recommended to ignore this parameter so it retains its default value of "2017" that evaluates slightly faster than its counterpart. This parameter controls the style of summation used in the small-time approximation, and more details on the differences between these two styles can be found in the paper accompanying fddm. The final control parameter, n_terms_small, will be discussed in the following three subsections.
After Navarro and Fuss (2009) published their paper, Gondan, Blurton, and Kesselmeier (2014) introduced another method for calculating the required number of terms in the truncated small-time summation. It is important to note, however, that Gondan, Blurton, and Kesselmeier (2014) provided the number of required pairs of terms in the S14 summation style, and not the number of required individual terms. As we want the number of individual terms, we adapt their formula and define it given a desired precision ϵ:
To use this method, set switch_mech = "small" and n_terms_small = "Gondan" in the function call. The parameter summation_small should be ignored so that it retains its default value to obtain the best performance.
If we consider the terms of the infinite sum as the sequence defined
above, the series alternates in sign (+, −, +, …); moreover, the series eventually
decreases monotonically (in absolute value) due to the exponential term.
Combining and exploiting these two mathematical properties has been the
cornerstone of the previous approximations, but we will instead truncate
the sum using a method suggested by Gondan,
Blurton, and Kesselmeier (2014). This method does not
pre-calculate the number of terms required to achieve the given error
tolerance. Instead, the general idea of this method is to take full advantage of the alternating and decreasing nature of the terms in the infinite sum by applying a handy theorem (commonly known as the alternating series test) to place an upper bound on the truncation error after including a given number of terms. This upper bound is in fact the absolute value of the next term in the sequence, so we can truncate the infinite sum once one of its terms (in absolute value) is less than the desired error tolerance, ϵ. Hence we do not consider the number of terms in the sum, but rather rely on the fact that the terms of the summation will eventually become small enough. The validity of this method is proven in the paper that accompanies the fddm package.
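The stopping rule can be sketched as follows (illustrative Python with σ = 1, standardized a = 1, and constant drift; the real fddm implementation handles the pre-decreasing portion of the sequence and the scaling of ϵ more carefully):

```python
import math

def small_sum_stop_when_small(w, t_prime, eps):
    # Sum the reordered small-time terms b_0, b_-1, b_1, ... and stop once
    # an entire +/- pair falls below eps in absolute value.
    total = w * math.exp(-w**2 / (2.0 * t_prime))
    j = 1
    while True:
        pair_small = True
        for c in (w - 2*j, w + 2*j):
            b = c * math.exp(-c**2 / (2.0 * t_prime))
            total += b
            pair_small = pair_small and abs(b) < eps
        if pair_small:
            return total
        j += 1
```

Because the terms alternate in sign and eventually decrease, the first omitted term bounds the truncation error, so the returned partial sum is within roughly ϵ of the full sum.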
To use this method, set switch_mech = "small" and n_terms_small = "SWSE" in the function call. The parameter summation_small should be ignored so that it retains its default value to obtain the best performance.
A sensible next approach to approximating the density of the DDM is
to use some combination of the large-time and small-time density
functions. As their names suggest, each density function approximation
has a section of the parameter space where it outperforms the other one.
Essentially, these methods involve calculating the number of terms
required for both the large-time and small-time density functions, then
using whichever approximation requires fewer terms. The goal is to use
each approximation where it is efficient and avoid the areas of the
parameter space where the approximations perform poorly. The main
control parameter used to indicate this preference is switch_mech.
Fundamentally, each combined time scale method uses either the
large-time or small-time approximation for each calculation of the
density function. As there is only one option for the large-time
approximation, there are no optional control parameters to set for this
part of the combined time scale approximation. In contrast, there are
multiple options for the small-time part of the combined time scale
approximation. However, the effect of the control parameter summation_small is consistent throughout the small-time methods; we recommend leaving it at its default value of "2017" for the best performance. The remaining subsections of this vignette detail how to set the rest of the control parameters in the function call to use a particular set of methods for calculating the combined time scale density function.
This combination of methods has not been explored in the literature before, but it works very similarly to the Navarro-Navarro combination above. The only difference is that we use the Gondan, Blurton, and Kesselmeier (2014) approximation for the small-time instead of the one provided by Navarro and Fuss (2009). Since ksGon ≤ ksNav, this method should never be less efficient than the previous combined time scale method.
To use this method, set switch_mech = "terms"
and
n_terms_small = "Gondan"
in the function call. The
parameter summation_small
should be ignored so that it
retains its default value to obtain the best performance.
The SWSE approximation to the small-time density function differs
from the Navarro or Gondan approximations in that it does not
pre-calculate ks, the number
of terms in the infinite sum that are required to achieve the desired
precision; instead, the infinite sum is truncated when the individual
terms of the sum become “small enough.” This method of truncating the
infinite sum poses a problem of how to incorporate this method with the
Navarro large-time approximation that relies on pre-calculating kℓNav. We will
introduce two heuristics for determining when to use the small-time
approximation or the large-time approximation. Both heuristics will use
a fourth parameter called switch_thresh
, but each heuristic
will use the new parameter in a slightly different way. The validity of
these two methods is proven in the paper that accompanies the
fddm
package. Note that in this paper, this parameter is named max_terms_large and labelled δ.
The first heuristic compares switch_thresh
to kℓNav; in this
case, switch_thresh
is treated as the proxy for the
required number of terms for approximating the truncated small-time
infinite sum. If kℓNav ≤ switch_thresh
, then the
Navarro large-time approximation is used. On the other hand, if kℓNav > switch_thresh
, then the
SWSE small-time approximation is used. This method essentially checks
the efficiency of the large-time approximation relative to
switch_thresh
. The user can alter the behavior of this
method by setting the optional parameter switch_thresh
to
any non-negative integer; the default value for this parameter when
using this method is 1.
To use this method, set switch_mech = "terms_large"
. The
user may wish to specify a particular threshold (measured in the
required number of terms for the truncated sum) for switching between
the small-time and large-time approximations by setting the parameter
switch_thresh
(i.e., switch_thresh = 1
). The
parameter summation_small
should be ignored so that it
retains its default value to obtain the best performance.
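As a sketch, the first heuristic can be written as follows (illustrative Python; the number of large-time terms follows the Navarro and Fuss (2009) bound, and t_prime = t/a²):

```python
import math

def choose_time_scale(t_prime, eps, switch_thresh=1):
    # First heuristic: use the large-time approximation only when its
    # required number of terms (Navarro & Fuss, 2009) is at most switch_thresh.
    if math.pi * t_prime * eps < 1.0:
        kl = max(math.sqrt(-2.0 * math.log(math.pi * t_prime * eps)
                           / (math.pi**2 * t_prime)),
                 1.0 / (math.pi * math.sqrt(t_prime)))
    else:
        kl = 1.0 / (math.pi * math.sqrt(t_prime))
    return "large" if math.ceil(kl) <= switch_thresh else "small"
```

With the default threshold of 1, the large-time branch is taken only where it is essentially free (a single term suffices), which matches the intent described above.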
The second heuristic avoids all comparison with kℓNav as it
simply considers the effective response time, $t' := \tfrac{t}{a^2}$, to be the indicator
of a small or large response time. t′ is compared to a new parameter
switch_thresh
so that if t′ > switch_thresh
, then the
Navarro large-time approximation is used; otherwise, the SWSE small-time
approximation is used. The user can alter the behavior of this method by
setting the optional parameter switch_thresh
to any
non-negative real number; the default value for this parameter when
using this method is 0.8.
Since this is the default implementation, all four of the optional
parameters (switch_mech
, switch_thresh
,
n_terms_small
, and summation_small
) can be
ignored. The user may wish to specify a particular threshold (measured
in seconds) for switching between the small-time and large-time
approximations by setting the parameter switch_thresh
(i.e., switch_thresh = 0.8
). We recommend ignoring the
parameter summation_small
so that it retains its default
value and achieves optimal performance. Note that the parameter
n_terms_small
is ignored because the only option is “SWSE”.
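Putting the pieces together, the default method can be sketched end-to-end (illustrative Python only; σ = 1, constant drift, and a simple ϵ-based stopping rule standing in for fddm's more careful error control):

```python
import math

def dfddm_sketch(t, v, a, w, eps=1e-8, switch_thresh=0.8):
    # Default-style heuristic: pick the time scale by comparing the effective
    # response time t' = t / a^2 against switch_thresh (default 0.8).
    tp = t / a**2
    mult = math.exp(-v*a*w - v**2 * t / 2) / a**2
    if tp > switch_thresh:
        # Large-time series, truncated once the term bound drops below eps.
        total, j = 0.0, 1
        while j * math.exp(-j**2 * math.pi**2 * tp / 2) >= eps:
            total += j * math.exp(-j**2 * math.pi**2 * tp / 2) * math.sin(j * math.pi * w)
            j += 1
        return mult * math.pi * total
    # SWSE-style small-time series, truncated once a +/- pair is below eps.
    total, j = w * math.exp(-w**2 / (2.0 * tp)), 1
    while True:
        pair_small = True
        for c in (w - 2*j, w + 2*j):
            b = c * math.exp(-c**2 / (2.0 * tp))
            total += b
            pair_small = pair_small and abs(b) < eps
        if pair_small:
            return mult * total / math.sqrt(2 * math.pi * tp**3)
        j += 1
```

Forcing either branch by moving switch_thresh (e.g., to 0 or to a large value) yields matching densities, since the two branches approximate the same function from opposite ends.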
sessionInfo()
#> R version 4.4.2 (2024-10-31)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.1 LTS
#>
#> Matrix products: default
#> BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so; LAPACK version 3.12.0
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
#> [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
#> [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
#> [9] LC_ADDRESS=C LC_TELEPHONE=C
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> time zone: Etc/UTC
#> tzcode source: system (glibc)
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] ggforce_0.4.2 ggplot2_3.5.1 reshape2_1.4.4
#> [4] microbenchmark_1.5.0 RWiener_1.3-3 rtdists_0.11-5
#> [7] fddm_1.0-2 rmarkdown_2.29
#>
#> loaded via a namespace (and not attached):
#> [1] sass_0.4.9 utf8_1.2.4 generics_0.1.3 stringi_1.8.4
#> [5] lattice_0.22-6 digest_0.6.37 magrittr_2.0.3 estimability_1.5.1
#> [9] evaluate_1.0.1 grid_4.4.2 mvtnorm_1.3-2 fastmap_1.2.0
#> [13] plyr_1.8.9 jsonlite_1.8.9 Matrix_1.7-1 ggnewscale_0.5.0
#> [17] Formula_1.2-5 survival_3.7-0 fansi_1.0.6 scales_1.3.0
#> [21] tweenr_2.0.3 codetools_0.2-20 jquerylib_0.1.4 cli_3.6.3
#> [25] rlang_1.1.4 expm_1.0-0 polyclip_1.10-7 gsl_2.1-8
#> [29] munsell_0.5.1 splines_4.4.2 withr_3.0.2 cachem_1.1.0
#> [33] yaml_2.3.10 tools_4.4.2 colorspace_2.1-1 buildtools_1.0.0
#> [37] vctrs_0.6.5 R6_2.5.1 emmeans_1.10.5 lifecycle_1.0.4
#> [41] stringr_1.5.1 MASS_7.3-61 pkgconfig_2.0.3 pillar_1.9.0
#> [45] bslib_0.8.0 gtable_0.3.6 glue_1.8.0 Rcpp_1.0.13-1
#> [49] tidyselect_1.2.1 xfun_0.49 tibble_3.2.1 sys_3.4.3
#> [53] knitr_1.49 msm_1.8.2 farver_2.1.2 htmltools_0.5.8.1
#> [57] labeling_0.4.3 maketools_1.3.1 compiler_4.4.2 evd_2.3-7.1