dfddm
Function dfddm() evaluates the density function (or probability density function, PDF) for the Ratcliff diffusion decision model (DDM) using different methods for approximating the full PDF, which contains an infinite sum. An overview of the mathematical details of the different approximations is provided in the Math Vignette. Timing benchmarks for the present methods and comparisons with existing methods are provided in the Benchmark Vignette. Examples of using dfddm() for parameter estimation are provided in the Example Vignette.
Our implementation of the DDM has the following parameters: a ∈ (0, ∞) (threshold separation), v ∈ (−∞, ∞) (drift rate), t0 ∈ [0, ∞) (non-decision time/response time constant), w ∈ (0, 1) (relative starting point), sv ∈ [0, ∞) (inter-trial variability of drift rate), and σ ∈ (0, ∞) (diffusion coefficient of the underlying Wiener process).
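For orientation, a minimal call to dfddm() with purely illustrative parameter values (not drawn from any dataset) looks like the following sketch:
library("fddm")
# density of hitting the lower boundary at rt = 0.8, illustrative parameter set
dfddm(rt = 0.8, response = "lower", a = 1, v = -1, t0 = 0.3, w = 0.5, sv = 0.5)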
This vignette is more technical in nature and will demonstrate not only the consistency across the different implemented methods but also that dfddm() is in accordance with other implementations in the literature. In addition to testing all of the implementations available in dfddm(), we also include three functions that are considered current and widely used. First, we include the function rtdists::ddiffusion() from the package rtdists, as it is well known for being not only user friendly and feature-rich but also designed specifically for handling data with regard to distributions for response time models. Second, we test the RWiener::dwiener() function from the RWiener package, which is mainly aimed at providing an R language interface for calculating various functions from the Wiener process. Third, we include some raw R code that is bundled with the paper by Gondan, Blurton, and Kesselmeier (2014) about improving the approximation to the density function. As this code is not available in an R package, it is part of fddm and can be accessed after running source() on the corresponding file, as shown below.
First, we perform a simple test in Validating the Density Function Approximations, evaluating each implementation of the density function approximation of the DDM and comparing their consistency. To ensure rigorous concordance across all implementations, we test each implementation throughout a sufficiently large and granular parameter space, fully defined in the subsection Generating Data. Of course, these implementations are approximations, and as such they can provide slightly different results given the same parameter inputs. To ensure that these small differences in density output do not affect the results of fitting parameters to real-world data, we also include testing in an optimization setting in Validating Fitting (Optimization) Using the Density Function Approximations. We use the various implementations as the bases for log-likelihood functions in the optimization process and include a variety of starting points in the parameter space to ensure rigorous consistency testing.
It is imperative to show that our density function approximations produce the same results as the current standards. To demonstrate this, we calculate the lower probability density for a granular parameter space and show that all of the results are within an acceptable error tolerance. We only calculate the lower probability density because the upper probability density negates v (v′ = −v) and takes the unit complement of w (w′ = 1 − w); our parameter space already includes all of these values so calculating the upper probability density would be redundant. The fully defined parameter space can be found below, and it includes both realistically feasible values and some extreme values for each parameter. Since each of these functions approximates an infinite summation to a desired precision of ϵ, we allow for a difference of 2 ⋅ ϵ between any pair of calculated densities to account for convergence from above and below the limit of the summation.
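As a quick sketch of this symmetry (with arbitrary illustrative parameter values, assuming fddm is loaded as below), the upper-boundary density should match the lower-boundary density after negating v and complementing w:
# upper density with (v, w) should equal lower density with (-v, 1 - w)
up <- dfddm(rt = 1, response = "upper", a = 1.5, v = 2, t0 = 0.2, w = 0.3, sv = 0.5)
lo <- dfddm(rt = 1, response = "lower", a = 1.5, v = -2, t0 = 0.2, w = 0.7, sv = 0.5)
abs(up - lo) <= 2e-6  # TRUE: both are within err_tol = 1e-6 of the same true value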
Note that we include sv in all the functions tested below, even those that do not themselves contain sv (i.e., RWiener::dwiener() and the raw R code from Gondan, Blurton, and Kesselmeier (2014)). This is possible because variability in drift rate can be added to those functions via the constant M, as described in the Math Vignette.
First we load the necessary packages and code available from the current literature.
library("fddm")
require("rtdists")
require("RWiener")
source(system.file("extdata", "Gondan_et_al_density.R", package = "fddm", mustWork = TRUE))
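As a quick sketch of the M conversion mentioned above (with arbitrary illustrative parameter values), multiplying a constant-drift (sv = 0) density by M should recover the corresponding variable-drift density up to the error tolerance:
rt1 <- 0.75; a1 <- 1; v1 <- -2; t01 <- 1e-4; w1 <- 0.5; sv1 <- 1
tt <- rt1 - t01
M1 <- exp(v1*a1*w1 + v1*v1*tt/2 +
(sv1*sv1 * a1*a1 * w1*w1 - 2*v1*a1*w1 - v1*v1*tt) / (2 + 2*sv1*sv1*tt)) /
sqrt(1 + sv1*sv1*tt)
d0 <- dfddm(rt = rt1, response = "lower", a = a1, v = v1, t0 = t01, w = w1, sv = 0)
d1 <- dfddm(rt = rt1, response = "lower", a = a1, v = v1, t0 = t01, w = w1, sv = sv1)
all.equal(M1 * d0, d1, tolerance = 1e-4)  # TRUE up to the error tolerance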
The following code chunk stores the lower probability densities calculated across the parameter space using the different implementations; we also calculate and store the log-probability for consistency testing.
# Define parameter space
RT <- c(0.001, 0.1, 1, 10)
A <- c(0.5, 1, 5)
V <- c(-5, 0, 5)
t0 <- 1e-4 # must be nonzero for RWiener
W <- c(0.2, 0.5, 0.8)
SV <- c(0, 0.5, 1.5)
SV_THRESH <- 1e-6
eps <- 1e-6 # this is the setting from rtdists
nRT <- length(RT)
nA <- length(A)
nV <- length(V)
nW <- length(W)
nSV <- length(SV)
N <- nRT * nA * nV * nW * nSV
twoN <- 2 * N
rt <- rep(RT, each = nSV * nW * nV * nA, times = 1)
a <- rep(A, each = nSV * nW * nV, times = nRT)
v <- rep(V, each = nSV * nW, times = nRT * nA)
w <- rep(W, each = nSV, times = nRT * nA * nV)
sv <- rep(SV, each = 1, times = nRT * nA * nV * nW)
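As a sanity check on the nested rep() construction above, expand.grid() with the factors ordered fastest-to-slowest (sv, w, v, a, rt) reproduces the same vectors:
grid <- expand.grid(sv = SV, w = W, v = V, a = A, rt = RT)
stopifnot(all(rt == grid$rt), all(a == grid$a), all(v == grid$v),
all(w == grid$w), all(sv == grid$sv))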
# fddm methods
SWSE_s_17 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "small",
n_terms_small = "SWSE", summation_small = "2017"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "small",
n_terms_small = "SWSE", summation_small = "2017")
)
SWSE_s_14 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "small",
n_terms_small = "SWSE", summation_small = "2014"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "small",
n_terms_small = "SWSE", summation_small = "2014")
)
SWSE_t_17 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "eff_rt",
n_terms_small = "SWSE", summation_small = "2017"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "eff_rt",
n_terms_small = "SWSE", summation_small = "2017")
)
SWSE_t_14 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "eff_rt",
n_terms_small = "SWSE", summation_small = "2014"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "eff_rt",
n_terms_small = "SWSE", summation_small = "2014")
)
SWSE_b_17 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "terms_large",
n_terms_small = "SWSE", summation_small = "2017"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE,
switch_mech = "terms_large", n_terms_small = "SWSE",
summation_small = "2017")
)
SWSE_b_14 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "terms_large",
n_terms_small = "SWSE", summation_small = "2014"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE,
switch_mech = "terms_large", n_terms_small = "SWSE",
summation_small = "2014")
)
Gondan_s_17 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "small",
n_terms_small = "Gondan", summation_small = "2017"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "small",
n_terms_small = "Gondan", summation_small = "2017")
)
Gondan_s_14 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "small",
n_terms_small = "Gondan", summation_small = "2014"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "small",
n_terms_small = "Gondan", summation_small = "2014")
)
Gondan_b_17 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "terms",
n_terms_small = "Gondan", summation_small = "2017"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "terms",
n_terms_small = "Gondan", summation_small = "2017")
)
Gondan_b_14 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "terms",
n_terms_small = "Gondan", summation_small = "2014"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "terms",
n_terms_small = "Gondan", summation_small = "2014")
)
Navarro_s_17 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "small",
n_terms_small = "Navarro", summation_small = "2017"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "small",
n_terms_small = "Navarro", summation_small = "2017")
)
Navarro_s_14 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "small",
n_terms_small = "Navarro", summation_small = "2014"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "small",
n_terms_small = "Navarro", summation_small = "2014")
)
Navarro_b_17 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "terms",
n_terms_small = "Navarro", summation_small = "2017"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "terms",
n_terms_small = "Navarro", summation_small = "2017")
)
Navarro_b_14 <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "terms",
n_terms_small = "Navarro", summation_small = "2014"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "terms",
n_terms_small = "Navarro", summation_small = "2014")
)
Navarro_l <- data.frame(
res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = FALSE, switch_mech = "large",
n_terms_small = "Navarro"),
dif = numeric(N),
log_res = dfddm(rt = rt, response = "lower", a = a, v = v, t0 = t0, w = w,
sv = sv, err_tol = eps, log = TRUE, switch_mech = "large",
n_terms_small = "Navarro")
)
# non-fddm methods
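# The constant M converts a constant-drift (sv = 0) density into the
# corresponding variable-drift density (see the Math Vignette); below it
# multiplies the RWiener and Gondan results, which lack an sv parameter.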
t <- rt - t0
M <- exp(v * a * w + v*v * t / 2 +
(sv*sv * a*a * w*w - 2 * v * a * w - v*v * t) / (2 + 2 * sv*sv * t)
) / sqrt(1 + sv*sv * t)
if (require("RWiener")) {
RWiener <- data.frame(
res = numeric(N),
dif = numeric(N)
)
for (i in 1:N) { # RWiener can't handle model parameters as vectors
RWiener[i, "res"] <- dwiener(rt[i], resp = "lower", alpha = a[i],
delta = v[i], tau = t0, beta = w[i],
give_log = FALSE)
}
RWiener[["res"]] <- M * RWiener[["res"]]
}
Gondan_R <- data.frame(
res = M * fs(t = t, a = a, v = v, w = w, eps = eps),
dif = numeric(N)
)
if (require("rtdists")) {
rtdists <- data.frame(
res = ddiffusion(rt, "lower", a = a, v = v, t0 = t0, z = w*a, sv = sv),
dif = numeric(N)
)
}
# Calculate differences (use fddm's SWSE_t_17 method as truth)
ans <- SWSE_t_17[["res"]]
SWSE_s_17[["dif"]] <- abs(SWSE_s_17[["res"]] - ans)
SWSE_s_14[["dif"]] <- abs(SWSE_s_14[["res"]] - ans)
SWSE_t_17[["dif"]] <- abs(SWSE_t_17[["res"]] - ans)
SWSE_t_14[["dif"]] <- abs(SWSE_t_14[["res"]] - ans)
SWSE_b_17[["dif"]] <- abs(SWSE_b_17[["res"]] - ans)
SWSE_b_14[["dif"]] <- abs(SWSE_b_14[["res"]] - ans)
Gondan_s_17[["dif"]] <- abs(Gondan_s_17[["res"]] - ans)
Gondan_s_14[["dif"]] <- abs(Gondan_s_14[["res"]] - ans)
Gondan_b_17[["dif"]] <- abs(Gondan_b_17[["res"]] - ans)
Gondan_b_14[["dif"]] <- abs(Gondan_b_14[["res"]] - ans)
Navarro_s_17[["dif"]] <- abs(Navarro_s_17[["res"]] - ans)
Navarro_s_14[["dif"]] <- abs(Navarro_s_14[["res"]] - ans)
Navarro_b_17[["dif"]] <- abs(Navarro_b_17[["res"]] - ans)
Navarro_b_14[["dif"]] <- abs(Navarro_b_14[["res"]] - ans)
Navarro_l[["dif"]] <- abs(Navarro_l[["res"]] - ans)
if (require("RWiener")) {
RWiener[["dif"]] <- abs(RWiener[["res"]] - ans)
}
Gondan_R[["dif"]] <- abs(Gondan_R[["res"]] - ans)
if (require("rtdists")) {
rtdists[["dif"]] <- abs(rtdists[["res"]] - ans)
}
Now we test the consistency of the various density function approximations using the testthat package. As discussed above, we allow the difference between any two outputs to be twice the original desired accuracy to allow for convergence from either above or below. We have to amend some of the checks because of the instability of some of the implementations, and these amendments are documented in the Known Errors section below. First we check that all of the approximations produce densities that are non-negative (we allow a density equal to zero). Next, we check the consistency of the internal dfddm() implementations before confirming their accuracy against external code from established packages. Lastly, we run similar tests to check the consistency of the logged results; however, we do not test the logs of densities that are very close to zero (i.e., less than ϵ²) because even a small difference in approximated density can lead to a very large difference in logged density. If all tests pass, there should be no output.
library("testthat")
# Ensure all densities are non-negative
test_that("Non-negativity of densities", {
expect_true(all(SWSE_s_17[["res"]] >= 0))
expect_true(all(SWSE_s_14[["res"]] >= 0))
expect_true(all(SWSE_t_17[["res"]] >= 0))
expect_true(all(SWSE_t_14[["res"]] >= 0))
expect_true(all(SWSE_b_17[["res"]] >= 0))
expect_true(all(SWSE_b_14[["res"]] >= 0))
expect_true(all(Gondan_s_17[["res"]] >= 0))
expect_true(all(Gondan_s_14[["res"]] >= 0))
expect_true(all(Gondan_b_17[["res"]] >= 0))
expect_true(all(Gondan_b_14[["res"]] >= 0))
expect_true(all(Navarro_s_17[["res"]] >= 0))
expect_true(all(Navarro_s_14[["res"]] >= 0))
expect_true(all(Navarro_b_17[["res"]] >= 0))
expect_true(all(Navarro_b_14[["res"]] >= 0))
expect_true(all(Navarro_l[["res"]] >= 0))
if (require("RWiener")) {
expect_true(all(RWiener[["res"]] >= 0))
}
expect_true(all(Gondan_R[["res"]] >= -eps)) # density between 0 and -eps := 0
if (require("rtdists")) {
expect_true(all(rtdists[["res"]] >= 0))
}
})
#> Test passed 😸
# Test accuracy within 2*eps (allows for convergence from above and below)
test_that("Consistency among internal methods", {
expect_true(all(SWSE_s_17[["dif"]] <= 2 * eps))
expect_true(all(SWSE_s_14[["dif"]] <= 2 * eps))
expect_true(all(SWSE_t_17[["dif"]] <= 2 * eps))
expect_true(all(SWSE_t_14[["dif"]] <= 2 * eps))
expect_true(all(SWSE_b_17[["dif"]] <= 2 * eps))
expect_true(all(SWSE_b_14[["dif"]] <= 2 * eps))
expect_true(all(Gondan_s_17[["dif"]] <= 2 * eps))
expect_true(all(Gondan_s_14[["dif"]] <= 2 * eps))
expect_true(all(Gondan_b_17[["dif"]] <= 2 * eps))
expect_true(all(Gondan_b_14[["dif"]] <= 2 * eps))
expect_true(all(Navarro_s_17[["dif"]] <= 2 * eps))
expect_true(all(Navarro_s_14[["dif"]] <= 2 * eps))
expect_true(all(Navarro_b_17[["dif"]] <= 2 * eps))
expect_true(all(Navarro_b_14[["dif"]] <= 2 * eps))
testthat::skip_on_os("solaris")
testthat::skip_if(dfddm(rt = 0.001, response = "lower",
a = 5, v = -5, t0 = 1e-4, w = 0.8, sv = 1.5,
err_tol = 1e-6, log = FALSE, switch_mech = "large") >
1e-6)
expect_true(all(Navarro_l[rt/a/a >= 0.009, "dif"] < 2 * eps)) # see KE 1
})
#> Test passed 🌈
test_that("Accuracy relative to established packages", {
if (require("RWiener")) {
expect_true(all(RWiener[sv < SV_THRESH, "dif"] <= 2 * eps)) # see KE 2
}
if (require("rtdists")) {
expect_true(all(rtdists[["dif"]] <= 2 * eps))
}
testthat::skip_on_os("solaris")
testthat::skip_if(dfddm(rt = 0.001, response = "lower",
a = 5, v = -5, t0 = 1e-4, w = 0.8, sv = 1.5,
err_tol = 1e-6, log = FALSE, switch_mech = "large") >
1e-6)
expect_true(all(Gondan_R[sv < SV_THRESH, "dif"] <= 2 * eps)) # see KE 2
})
#> Test passed 🎊
# Test consistency in fddm log vs non-log (see KE 3)
test_that("Log-Consistency among internal methods", {
expect_equal(SWSE_s_17[SWSE_s_17[["res"]] > eps*eps, "log_res"],
log(SWSE_s_17[SWSE_s_17[["res"]] > eps*eps, "res"]))
expect_equal(SWSE_s_14[SWSE_s_14[["res"]] > eps*eps, "log_res"],
log(SWSE_s_14[SWSE_s_14[["res"]] > eps*eps, "res"]))
expect_equal(SWSE_t_17[SWSE_t_17[["res"]] > eps*eps, "log_res"],
log(SWSE_t_17[SWSE_t_17[["res"]] > eps*eps, "res"]))
expect_equal(SWSE_t_14[SWSE_t_14[["res"]] > eps*eps, "log_res"],
log(SWSE_t_14[SWSE_t_14[["res"]] > eps*eps, "res"]))
expect_equal(SWSE_b_17[SWSE_b_17[["res"]] > eps*eps, "log_res"],
log(SWSE_b_17[SWSE_b_17[["res"]] > eps*eps, "res"]))
expect_equal(SWSE_b_14[SWSE_b_14[["res"]] > eps*eps, "log_res"],
log(SWSE_b_14[SWSE_b_14[["res"]] > eps*eps, "res"]))
expect_equal(Gondan_s_17[Gondan_s_17[["res"]] > eps*eps, "log_res"],
log(Gondan_s_17[Gondan_s_17[["res"]] > eps*eps, "res"]))
expect_equal(Gondan_s_14[Gondan_s_14[["res"]] > eps*eps, "log_res"],
log(Gondan_s_14[Gondan_s_14[["res"]] > eps*eps, "res"]))
expect_equal(Gondan_b_17[Gondan_b_17[["res"]] > eps*eps, "log_res"],
log(Gondan_b_17[Gondan_b_17[["res"]] > eps*eps, "res"]))
expect_equal(Gondan_b_14[Gondan_b_14[["res"]] > eps*eps, "log_res"],
log(Gondan_b_14[Gondan_b_14[["res"]] > eps*eps, "res"]))
expect_equal(Navarro_s_17[Navarro_s_17[["res"]] > eps*eps, "log_res"],
log(Navarro_s_17[Navarro_s_17[["res"]] > eps*eps, "res"]))
expect_equal(Navarro_s_14[Navarro_s_14[["res"]] > eps*eps, "log_res"],
log(Navarro_s_14[Navarro_s_14[["res"]] > eps*eps, "res"]))
expect_equal(Navarro_b_17[Navarro_b_17[["res"]] > eps*eps, "log_res"],
log(Navarro_b_17[Navarro_b_17[["res"]] > eps*eps, "res"]))
expect_equal(Navarro_b_14[Navarro_b_14[["res"]] > eps*eps, "log_res"],
log(Navarro_b_14[Navarro_b_14[["res"]] > eps*eps, "res"]))
expect_equal(Navarro_l[Navarro_l[["res"]] > eps*eps, "log_res"],
log(Navarro_l[Navarro_l[["res"]] > eps*eps, "res"]))
})
#> Test passed 🥇
Both the RWiener and Gondan_R approximations divide the error tolerance by the multiplicative term outside of the summation. Since this outside term is different when sv > 0, these approximations use an incorrect error tolerance when sv > 0. This affects the number of terms required in the summation to achieve the desired precision, so that precision is not actually achieved. This issue is fixed in our version of the Gondan implementation (switch_mech = "small", n_terms_small = "Gondan"). For an example of this discrepancy, see the following code:
rt <- 1.5
t <- rt - 1e-4
a <- 0.5
v <- 4.5
w <- 0.5
eps <- 1e-6
sv <- 0.9
sv0 <- exp(-v*a*w - v*v*t/2) / (a*a) # for constant drift rate
sv0_9 <- exp((-2*v*a*w - v*v*t + sv*sv*a*a*w*w)/(2 + 2*sv*sv*t)) /
(a*a*sqrt(1+sv*sv*t)) # for variable drift rate
ks_0 <- ks(t/(a*a), w, eps/sv0) # = 2; the summation will only calculate 2 terms
ks_9 <- ks(t/(a*a), w, eps/sv0_9) # = 5; but the summation needs 5 terms
cat("the summation will only calculate", ks_0, "terms, but it needs", ks_9, "terms.")
#> the summation will only calculate 2 terms, but it needs 5 terms.
When calculating the log of the density, it is better to use the built-in log option (log = TRUE). For very small densities, simply calculating the density can cause rounding issues that result in a density of zero (and thus a log density of -Inf). Using the built-in log option avoids some of these rounding issues by exploiting the algebraic properties of the logarithm. Also note that sometimes the densities are simply too small (i.e., the log densities are extremely negative) and the logarithm function returns -Inf, so we discard the samples whose density is very small (less than ϵ² = 1 × 10⁻¹²).
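A minimal sketch of this effect, with parameter values chosen only so that the density is extremely small:
# the linear-space density can round to zero, making its log -Inf
d <- dfddm(rt = 0.01, response = "lower", a = 5, v = 0, t0 = 1e-4,
w = 0.8, sv = 0, err_tol = 1e-6, log = FALSE)
log(d)  # may be -Inf when d underflows to zero
# the built-in log option can retain precision in this regime
dfddm(rt = 0.01, response = "lower", a = 5, v = 0, t0 = 1e-4,
w = 0.8, sv = 0, err_tol = 1e-6, log = TRUE)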
Perhaps the most practical use of dfddm() is in an optimization setting, such as fitting DDM parameters to real-world data, and this section will show that all of the implementations in dfddm() yield the same results in this setting. The parameters that we fit are: a (threshold separation), v (drift rate), t0 (non-decision time/response time constant), w (relative starting point), and sv (inter-trial variability of drift rate). Since the example data we use consists of two different item types that each require a different correct response (i.e., for one item type the correct response is mapped to the lower boundary, and for the other item type the correct response is mapped to the upper boundary), the model includes two different versions of v (drift rate): vℓ for fitting to the truthful lower boundary, and vu for fitting to the truthful upper boundary. As many of the implementations in dfddm() are technically different but similar in style, we only test a subset of the available implementations to avoid unnecessary testing. After using the various density function approximations to fit the DDM to real-world data, we validate that the resulting parameter estimates are consistent across the various implementations within a given error tolerance.
We use a range of different initial parameter values for the optimization function to ensure that none of the implementations encounters a problem in the parameter space. However, we need to slightly restrict the starting values to prevent fitting issues with some of the small-time implementations; these restrictions are discussed in a later subsection. We allow a difference of 0.0001 across the implementations for each combination of parameters since the density function approximations can return slightly different results (within 2 ⋅ ϵ), as discussed earlier in this vignette. The following subsections define all of the functions used to generate the fits and provide the code to run the full fitting. Since running the full fitting for all of the individuals in the data takes a long time, we forgo running the fitting in this vignette and instead read pre-fit parameter estimates produced with the provided code.
To avoid too many repetitious plots and possible instability, we only include four of the implementations from the previous section, illustrating the most stable implementations of the density function approximations. We include the three implementations in dfddm() that combine the small-time and large-time approximations, as well as the one implementation available in the rtdists package. We use only a selection of the implementations available in dfddm() to avoid redundancy in our testing, since most of the implementations have multiple sibling implementations that are very similar but slower and less stable. Moreover, we do not require the RWiener package or the Gondan raw R code because the associated density function approximations do not include an option for variability in drift rate. While we can convert the constant drift rate density to the variable drift rate density using a multiplicative factor, the densities are still potentially calculated incorrectly. For more details on the differences between the constant drift rate density functions and their variable drift rate counterparts, see the Math Vignette.
First, we load the necessary packages.
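library("fddm")
require("rtdists")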
This code chunk defines the log-likelihood functions used in the optimization algorithm. The log-likelihood functions are fairly straightforward and split the responses and associated response times by the true item status (i.e., the correct response) to enable fitting distinct drift rates (vℓ for the items for which the correct response is the lower boundary and vu for the items for which the correct response is the upper boundary). In addition, the log-likelihood functions heavily penalize any combination of parameters that returns a log-density of −∞ (equivalent to a regular density of 0).
ll_fb_SWSE_17 <- function(pars, rt, resp, truth, err_tol) {
v <- numeric(length(rt))
v[truth == "upper"] <- pars[[1]]
v[truth == "lower"] <- pars[[2]]
dens <- dfddm(rt = rt, response = resp, a = pars[[3]], v = v,
t0 = pars[[4]], w = pars[[5]], sv = pars[[6]], err_tol = err_tol,
log = TRUE, switch_mech = "terms_large", switch_thresh = 0.8,
n_terms_small = "SWSE", summation_small = "2017")
return( ifelse(any(!is.finite(dens)), 1e6, -sum(dens)) )
}
ll_fb_Gon_17 <- function(pars, rt, resp, truth, err_tol) {
v <- numeric(length(rt))
v[truth == "upper"] <- pars[[1]]
v[truth == "lower"] <- pars[[2]]
dens <- dfddm(rt = rt, response = resp, a = pars[[3]], v = v,
t0 = pars[[4]], w = pars[[5]], sv = pars[[6]], err_tol = err_tol,
log = TRUE, switch_mech = "terms", n_terms_small = "Gondan",
summation_small = "2017")
return( ifelse(any(!is.finite(dens)), 1e6, -sum(dens)) )
}
ll_fb_Nav_17 <- function(pars, rt, resp, truth, err_tol) {
v <- numeric(length(rt))
v[truth == "upper"] <- pars[[1]]
v[truth == "lower"] <- pars[[2]]
dens <- dfddm(rt = rt, response = resp, a = pars[[3]], v = v,
t0 = pars[[4]], w = pars[[5]], sv = pars[[6]], err_tol = err_tol,
log = TRUE, switch_mech = "terms", n_terms_small = "Navarro",
summation_small = "2017")
return( ifelse(any(!is.finite(dens)), 1e6, -sum(dens)) )
}
ll_RTDists <- function(pars, rt, resp, truth) {
rtu <- rt[truth == "upper"]
rtl <- rt[truth == "lower"]
respu <- resp[truth == "upper"]
respl <- resp[truth == "lower"]
densu <- ddiffusion(rtu, respu, a = pars[[3]], v = pars[[1]],
z = pars[[5]]*pars[[3]], t0 = pars[[4]], sv = pars[[6]])
densl <- ddiffusion(rtl, respl, a = pars[[3]], v = pars[[2]],
z = pars[[5]]*pars[[3]], t0 = pars[[4]], sv = pars[[6]])
densities <- c(densu, densl)
if (any(densities <= 0)) return(1e6)
return(-sum(log(densities)))
}
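As a quick check that these log-likelihood functions behave as intended, here is a sketch evaluating one of them on arbitrary illustrative data (not drawn from any experiment):
set.seed(1)
rt_toy <- runif(20, 0.4, 1.5)
resp_toy <- sample(c("lower", "upper"), 20, replace = TRUE)
truth_toy <- sample(c("lower", "upper"), 20, replace = TRUE)
# pars: vu, vl, a, t0, w, sv; t0 must be below the smallest response time
ll_fb_SWSE_17(pars = c(1, -1, 1.5, 0.2, 0.5, 0.5),
rt = rt_toy, resp = resp_toy, truth = truth_toy, err_tol = 1e-6)
# returns the negative log-likelihood, or 1e6 if any log-density is non-finite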
We use a different set of initial parameter values for each combination of dataset and implementation. In a real-life situation we would probably instead use random initial parameter values to avoid local minima; here we use a fixed set of initial parameter values to ensure that the different implementations produce the same results from different starting points. However, there are a couple of restrictions on these initial values that we must address (we would also have to do so if we picked the initial values randomly). The parameter t0 must be less than the response time, so we set the initial values for t0 to be strictly less than the minimum response time of each individual in the input data frame.
Furthermore, we must place a lower bound on a that is strictly greater than zero because the optimization algorithm occasionally evaluates the log-likelihood functions (and thus the underlying density function approximations) at the bounds of the parameters. In the case where the optimization algorithm evaluates at a = 0, the density function approximation from the rtdists package does not evaluate. In common use this is not an issue because very small values of a do not make any sense with regard to the psychological interpretation of the parameter, but the issue can arise in an exploratory optimization setting.
The following code chunk defines the function that runs the optimization and produces the fitted parameter estimates for vu, vℓ, a, t0, w, and sv. As discussed above, we only use a selection of the implementations available in dfddm(). This fitting function runs the optimization for each set of initial parameter values and stores the resulting convergence code (0 indicating successful convergence, 1 indicating unsuccessful convergence), the minimized value of the objective log-likelihood function, and the parameter estimates.
rt_fit <- function(data, id_idx = NULL, rt_idx = NULL, response_idx = NULL,
truth_idx = NULL, response_upper = NULL, err_tol = 1e-6) {
# Format data for fitting
if (all(is.null(id_idx), is.null(rt_idx), is.null(response_idx),
is.null(truth_idx), is.null(response_upper))) {
df <- data # assume input data is already formatted
} else {
if(any(data[,rt_idx] < 0)) {
stop("Input data contains negative response times; fit will not be run.")
}
if(any(is.na(data[,response_idx]))) {
stop("Input data contains invalid responses (NA); fit will not be run.")
}
nr <- nrow(data)
df <- data.frame(id = character(nr),
rt = double(nr),
response = character(nr),
truth = character(nr),
stringsAsFactors = FALSE)
if (!is.null(id_idx)) { # relabel identification tags
for (i in 1:length(id_idx)) {
idi <- unique(data[,id_idx[i]])
for (j in 1:length(idi)) {
df[["id"]][data[,id_idx[i]] == idi[j]] <- paste(
df[["id"]][data[,id_idx[i]] == idi[j]], idi[j], sep = " ")
}
}
df[["id"]] <- trimws(df[["id"]], which = "left")
}
df[["rt"]] <- as.double(data[,rt_idx])
df[["response"]] <- "lower"
df[["response"]][data[,response_idx] == response_upper] <- "upper"
df[["truth"]] <- "lower"
df[["truth"]][data[,truth_idx] == response_upper] <- "upper"
}
# Preliminaries
ids <- unique(df[["id"]])
nids <- max(length(ids), 1) # if id_idx is null, there is only one individual
init_vals <- data.frame(vu = c( 0, 10, -.5, 0, 0, 0, 0, 0, 0, 0, 0),
vl = c( 0, -10, .5, 0, 0, 0, 0, 0, 0, 0, 0),
a = c( 1, 1, 1, .5, 5, 1, 1, 1, 1, 1, 1),
t0 = c( 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0),
w = c(.5, .5, .5, .5, .5, .5, .5, .2, .8, .5, .5),
sv = c( 1, 1, 1, 1, 1, 1, 1, 1, 1, .05, 5))
ninit_vals <- nrow(init_vals)
algo_names <- c("fb_SWSE_17", "fb_Gon_17", "fb_Nav_17", "rtdists")
nalgos <- length(algo_names)
ni <- nalgos*ninit_vals
# Initialize the result dataframe
cnames <- c("ID", "Algorithm", "Convergence", "Objective",
"vu_init", "vl_init", "a_init", "t0_init", "w_init", "sv_init",
"vu_fit", "vl_fit", "a_fit", "t0_fit", "w_fit", "sv_fit")
res <- data.frame(matrix(ncol = length(cnames), nrow = nids*ninit_vals*nalgos))
colnames(res) <- cnames
# label the result dataframe
res[["ID"]] <- rep(ids, each = ni) # label individuals
res[["Algorithm"]] <- rep(algo_names, each = ninit_vals) # label algorithms
res[["vu_init"]] <- init_vals[["vu"]] # label initial vu
res[["vl_init"]] <- init_vals[["vl"]] # label initial vl
res[["a_init"]] <- init_vals[["a"]] # label initial a
res[["w_init"]] <- init_vals[["w"]] # label initial w
res[["sv_init"]] <- init_vals[["sv"]] # label initial sv
# Loop through each individual and starting values
for (i in 1:nids) {
# extract data for id i
dfi <- df[df[["id"]] == ids[i], ]
rti <- dfi[["rt"]]
respi <- dfi[["response"]]
truthi <- dfi[["truth"]]
# starting value for t0 must be smaller than the smallest rt
min_rti <- min(rti)
t0_lo <- 0.01*min_rti
t0_me <- 0.50*min_rti
t0_hi <- 0.99*min_rti
init_vals[["t0"]] <- c(rep(t0_me, 5), t0_lo, t0_hi, rep(t0_me, 4))
# label the result dataframe
res[["t0_init"]][((i-1)*ni+1):(i*ni)] <- init_vals[["t0"]] # label initial t0
# loop through all of the starting values
for (j in 1:ninit_vals) {
temp <- nlminb(init_vals[j, ], ll_fb_SWSE_17,
rt = rti, resp = respi, truth = truthi, err_tol = err_tol,
# limits: vu, vl, a, t0, w, sv
lower = c(-Inf, -Inf, .01, 0, 0, 0),
upper = c( Inf, Inf, Inf, min_rti, 1, Inf))
res[["Convergence"]][(i-1)*ni+0*ninit_vals+j] <- temp[["convergence"]]
res[["Objective"]][(i-1)*ni+0*ninit_vals+j] <- temp[["objective"]]
res[(i-1)*ni+0*ninit_vals+j, 11:16] <- temp[["par"]]
temp <- nlminb(init_vals[j, ], ll_fb_Gon_17,
rt = rti, resp = respi, truth = truthi, err_tol = err_tol,
# limits: vu, vl, a, t0, w, sv
lower = c(-Inf, -Inf, .01, 0, 0, 0),
upper = c( Inf, Inf, Inf, min_rti, 1, Inf))
res[["Convergence"]][(i-1)*ni+1*ninit_vals+j] <- temp[["convergence"]]
res[["Objective"]][(i-1)*ni+1*ninit_vals+j] <- temp[["objective"]]
res[(i-1)*ni+1*ninit_vals+j, 11:16] <- temp[["par"]]
temp <- nlminb(init_vals[j, ], ll_fb_Nav_17,
rt = rti, resp = respi, truth = truthi, err_tol = err_tol,
# limits: vu, vl, a, t0, w, sv
lower = c(-Inf, -Inf, .01, 0, 0, 0),
upper = c( Inf, Inf, Inf, min_rti, 1, Inf))
res[["Convergence"]][(i-1)*ni+2*ninit_vals+j] <- temp[["convergence"]]
res[["Objective"]][(i-1)*ni+2*ninit_vals+j] <- temp[["objective"]]
res[(i-1)*ni+2*ninit_vals+j, 11:16] <- temp[["par"]]
temp <- nlminb(init_vals[j, ], ll_RTDists,
rt = rti, resp = respi, truth = truthi,
# limits: vu, vl, a, t0, w, sv
lower = c(-Inf, -Inf, .01, 0, 0, 0),
upper = c( Inf, Inf, Inf, min_rti, 1, Inf))
res[["Convergence"]][(i-1)*ni+3*ninit_vals+j] <- temp[["convergence"]]
res[["Objective"]][(i-1)*ni+3*ninit_vals+j] <- temp[["objective"]]
res[(i-1)*ni+3*ninit_vals+j, 11:16] <- temp[["par"]]
}
}
return(res)
}
As an example dataset, we use the fddm::med_dec data that comes with fddm. This dataset contains the accuracy condition reported in Trueblood et al. (2018), which investigates medical decision making among medical professionals (pathologists) and novices (i.e., undergraduate students). The task of the participants was to judge whether pictures of blood cells show cancerous cells (i.e., blast cells) or non-cancerous cells (i.e., non-blast cells). The dataset contains 200 decisions per participant, based on pictures of 100 true cancerous cells and pictures of 100 true non-cancerous cells. We remove the trials without a response (indicated by rt < 0 in the data) before fitting.
Having set up the fitting functions in the above chunks of code, we could pass the fddm::med_dec data to this function to get the parameter estimates. However, as this takes a long time, we skip the fitting in this vignette and instead read the pre-fit parameter estimates in the next section.
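For reference, a call of the following form would produce those estimates (not run here; the column indices and the response_upper value are assumptions about the layout of fddm::med_dec):
data(med_dec, package = "fddm")
med_dec <- med_dec[which(med_dec[["rt"]] >= 0), ] # drop trials without response
fit <- rt_fit(med_dec, id_idx = c(2, 1), rt_idx = 8, response_idx = 7,
truth_idx = 5, response_upper = "blast", err_tol = 1e-6)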
Now we test the consistency of the fitted parameters using the various density function approximations. As discussed above, we use an allowable error tolerance of 0.0001 to account for the slight differences in the output of the different density approximations. First, we define a function to determine the differences in the objective value of the log-likelihood function and the parameter estimates. For each set of initial parameter values, there is one objective value corresponding to each algorithm; we take the minimum of these objective values and use the associated parameter estimates as the best estimates. Then we use this set of best estimates to compare against each set of parameter estimates that use the same initial parameter values. The following code chunk defines this function, which will be used next.
fit_prep <- function(fit) {
nr <- nrow(fit)
fit[["Obj_diff"]] <- rep(0, nr)
fit[["vu_diff"]] <- rep(0, nr)
fit[["vl_diff"]] <- rep(0, nr)
fit[["a_diff"]] <- rep(0, nr)
fit[["t0_diff"]] <- rep(0, nr)
fit[["w_diff"]] <- rep(0, nr)
fit[["sv_diff"]] <- rep(0, nr)
ids <- unique(fit[["ID"]])
nids <- length(ids)
algos <- unique(fit[["Algorithm"]])
nalgos <- length(algos)
fit_idx <- c(4, 11:16)
dif_idx <- 17:23
ninit <- nrow(fit[fit[["ID"]] == ids[1] & fit[["Algorithm"]] == algos[1], ])
for (i in 1:nids) {
for (j in 1:ninit) {
actual_idx <- seq((i-1)*ninit*nalgos+j, i*ninit*nalgos, by = ninit)
min_obj_idx <- actual_idx[which.min(fit[actual_idx, 4])]
best_fit <- fit[min_obj_idx, fit_idx]
for (k in 0:(nalgos-1)) {
fit[(i-1)*(ninit*nalgos) + k*ninit + j, dif_idx] <-
fit[(i-1)*(ninit*nalgos) + k*ninit + j, fit_idx] - best_fit
}
}
}
return(fit)
}
As previously mentioned, we read the pre-fit data from file since the fits can take a long time to run. Then we run our prep function to expose any significant differences in the fits across the implementations. As an example of the fitting comparison, we print the results for the first ID in the dataset (ID = "experienced 2"). This sample shows that the agreement across implementations for this set of initial parameter values is rather good for the first participant in the study.
# load data, will be in the variable 'fit'
load(system.file("extdata", "dfddm_density", "valid_fit.Rds", package = "fddm", mustWork = TRUE))
fit <- fit_prep(fit)
cat("Results for ID = experienced 2")
#> Results for ID = experienced 2
fit[(0:3)*11+1, ]
#> ID Algorithm Convergence Objective vu_init vl_init a_init t0_init w_init sv_init
#> 1 experienced 2 fb_SWSE_17 0 42.47181 0 0 1 0.2305 0.5 1
#> 12 experienced 2 fb_Gon_17 0 42.47181 0 0 1 0.2305 0.5 1
#> 23 experienced 2 fb_Nav_17 0 42.47181 0 0 1 0.2305 0.5 1
#> 34 experienced 2 rtdists 0 42.47181 0 0 1 0.2305 0.5 1
#> vu_fit vl_fit a_fit t0_fit w_fit sv_fit Obj_diff vu_diff vl_diff
#> 1 5.681299 -2.188658 2.790909 0.3764465 0.4010116 2.281296 7.302121e-08 -4.918607e-06 4.658488e-07
#> 12 5.681304 -2.188659 2.790912 0.3764464 0.4010115 2.281298 0.000000e+00 0.000000e+00 0.000000e+00
#> 23 5.681304 -2.188659 2.790912 0.3764464 0.4010115 2.281298 4.119201e-09 1.394453e-08 2.550020e-08
#> 34 5.681304 -2.188659 2.790912 0.3764464 0.4010115 2.281298 4.119137e-09 1.207362e-08 2.637026e-08
#> a_diff t0_diff w_diff sv_diff
#> 1 -2.808479e-06 6.480561e-08 9.702718e-08 -2.104597e-06
#> 12 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
#> 23 -2.550003e-09 -2.017408e-10 -2.694656e-09 -8.215454e-09
#> 34 -3.132476e-09 -2.113398e-10 -2.697446e-09 -9.459360e-09
We can see that the fits for this individual participant and this set of initial parameter values are very similar across the different implementations. As there are many individuals in this dataset, we will not print this style of fit comparison for each combination of participant and initial parameter values; instead we print only those fits that are worse than the best fit. Output below are all of the runs where either:
the optimization run did not converge (i.e., Convergence = 1); or
the log-likelihood objective value differs from the smallest produced by any implementation for that specific set of initial parameter values; or
the parameter estimates differ from those associated with the smallest objective value for that specific set of initial parameter values.
# Define error tolerance
eps <- 1e-4
out <- fit[unique(which(abs(fit[, c(3, 17:23)]) > eps, arr.ind = TRUE)[, 1]), ]
out[, -c(1:2)] <- zapsmall(out[, -c(1:2)])
out
#> ID Algorithm Convergence Objective vu_init vl_init a_init t0_init w_init
#> 102 experienced 7 fb_Gon_17 0 -2.96169 -0.5 0.5 1 0.2365 0.5
#> 112 experienced 7 fb_Nav_17 0 -2.96169 10.0 -10.0 1 0.2365 0.5
#> 188 experienced 12 fb_Gon_17 0 38.53788 0.0 0.0 1 0.2155 0.5
#> 199 experienced 12 fb_Nav_17 0 38.53788 0.0 0.0 1 0.2155 0.5
#> 495 inexperienced 8 fb_SWSE_17 0 149.93610 0.0 0.0 1 0.0885 0.5
#> 703 inexperienced 15 rtdists 0 40.39162 0.0 0.0 1 0.1985 0.5
#> 2257 novice 34 fb_Gon_17 0 86.54239 10.0 -10.0 1 0.2600 0.5
#> sv_init vu_fit vl_fit a_fit t0_fit w_fit sv_fit Obj_diff vu_diff vl_diff a_diff
#> 102 1.00 1.58408 -1.90459 1.41630 0.44971 0.47408 0e+00 9.06161 -1.02847 0.81932 -0.34074
#> 112 1.00 1.58408 -1.90459 1.41630 0.44971 0.47408 0e+00 9.06161 -1.02847 0.81932 -0.34074
#> 188 1.00 3.14949 -0.66906 1.58550 0.37883 0.44876 0e+00 16.89640 -2.00273 0.97318 -0.55889
#> 199 1.00 3.14949 -0.66906 1.58550 0.37883 0.44876 0e+00 16.89640 -2.00273 0.97318 -0.55889
#> 495 5.00 1.89358 -0.08095 2.15608 0.13127 0.48149 0e+00 1.18084 -0.27262 -0.02294 -0.16710
#> 703 0.05 1.89118 -0.09720 1.32430 0.36665 0.50752 0e+00 0.08837 -0.09054 -0.01098 -0.01889
#> 2257 1.00 0.93572 -0.07329 1.27265 0.49275 0.53130 6e-05 0.98066 -0.27897 -0.00257 -0.09404
#> t0_diff w_diff sv_diff
#> 102 0.00422 0.01528 -1.57682
#> 112 0.00422 0.01528 -1.57682
#> 188 0.00511 0.00628 -2.05337
#> 199 0.00511 0.00628 -2.05337
#> 495 0.00399 0.00808 -0.59120
#> 703 0.00063 0.00296 -0.43358
#> 2257 0.00356 0.00858 -1.04990
We can see that each implementation occasionally has issues with the data fitting, but these issues are easily circumvented by using an assortment of initial parameter values. The default implementation of dfddm() combines the novel small-time approximation introduced in this package (SWSE, with the 2017-style summation) with the large-time approximation of Navarro to provide a quite stable implementation. Using this implementation, there is one instance of a fit whose difference from the best fit is greater than our given error tolerance. However, this issue can be taken care of by running a set of fits with properly randomized initial parameter values and subsequently selecting the best fit from this set. These results suggest that the most stable implementation is the default option for dfddm() (switch_mech = "eff_rt", switch_thresh = 0.8).
sessionInfo()
#> R version 4.4.2 (2024-10-31)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.1 LTS
#>
#> Matrix products: default
#> BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so; LAPACK version 3.12.0
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8
#> [4] LC_COLLATE=C LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C LC_ADDRESS=C
#> [10] LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> time zone: Etc/UTC
#> tzcode source: system (glibc)
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] testthat_3.2.1.1 ggforce_0.4.2 ggplot2_3.5.1 reshape2_1.4.4
#> [5] microbenchmark_1.5.0 RWiener_1.3-3 rtdists_0.11-5 fddm_1.0-2
#> [9] rmarkdown_2.29
#>
#> loaded via a namespace (and not attached):
#> [1] gtable_0.3.6 xfun_0.49 bslib_0.8.0 lattice_0.22-6 vctrs_0.6.5
#> [6] tools_4.4.2 generics_0.1.3 tibble_3.2.1 fansi_1.0.6 pkgconfig_2.0.3
#> [11] Matrix_1.7-1 ggnewscale_0.5.0 desc_1.4.3 lifecycle_1.0.4 compiler_4.4.2
#> [16] farver_2.1.2 stringr_1.5.1 brio_1.1.5 munsell_0.5.1 codetools_0.2-20
#> [21] htmltools_0.5.8.1 sys_3.4.3 buildtools_1.0.0 sass_0.4.9 evd_2.3-7.1
#> [26] yaml_2.3.10 Formula_1.2-5 pillar_1.9.0 crayon_1.5.3 jquerylib_0.1.4
#> [31] MASS_7.3-61 cachem_1.1.0 tidyselect_1.2.1 digest_0.6.37 mvtnorm_1.3-2
#> [36] stringi_1.8.4 maketools_1.3.1 labeling_0.4.3 splines_4.4.2 gsl_2.1-8
#> [41] polyclip_1.10-7 rprojroot_2.0.4 fastmap_1.2.0 grid_4.4.2 colorspace_2.1-1
#> [46] expm_1.0-0 cli_3.6.3 magrittr_2.0.3 survival_3.7-0 utf8_1.2.4
#> [51] withr_3.0.2 waldo_0.6.1 scales_1.3.0 estimability_1.5.1 emmeans_1.10.5
#> [56] msm_1.8.2 evaluate_1.0.1 knitr_1.49 rlang_1.1.4 Rcpp_1.0.13-1
#> [61] glue_1.8.0 tweenr_2.0.3 pkgload_1.4.0 jsonlite_1.8.9 R6_2.5.1
#> [66] plyr_1.8.9