Computes an estimate and a deterministic upper bound of the probability Pr\((l<X<u)\), where \(X\) is a zero-mean multivariate normal vector with covariance matrix \(\Sigma\), that is, \(X\) is drawn from \(N(0,\Sigma)\). Infinite values for vectors \(u\) and \(l\) are accepted. The Monte Carlo method uses sample size \(n\); the larger \(n\), the smaller the relative error of the estimator.
mvNcdf(l, u, Sig, n = 1e+05)
Argument | Description |
---|---|
l | lower truncation limit |
u | upper truncation limit |
Sig | covariance matrix of \(N(0,\Sigma)\) |
n | number of Monte Carlo simulations (sample size) |
A list with the following components:
prob
: estimated value of the probability Pr\((l<X<u)\)
relErr
: estimated relative error of the estimator
upbnd
: theoretical upper bound on the true value of Pr\((l<X<u)\)
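As a minimal sketch of how the returned list is typically used (the values of d, l, u and Sig below are illustrative, not part of the interface):

# minimal sketch: independent standard normals, illustrative limits
d <- 5; l <- rep(1, d); u <- rep(Inf, d); Sig <- diag(d)
est <- mvNcdf(l, u, Sig, n = 1e4)
est$prob    # estimated value of Pr(l < X < u)
est$relErr  # estimated relative error of the estimator
est$upbnd   # deterministic upper bound on the true probability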
Suppose you wish to estimate Pr\((l<AX<u)\), where \(A\) is a full-rank matrix and \(X\) is drawn from \(N(\mu,\Sigma)\). Then you simply compute Pr\((l-A\mu<Y<u-A\mu)\), where \(Y\) is drawn from \(N(0, A\Sigma A^\top)\); see the sketch below.
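A sketch of that transformation in code; the matrix A, mean mu and the limits below are illustrative values chosen for demonstration only:

# illustrative full-rank A, mean mu and limits (assumed values, for demonstration)
d <- 3
A   <- matrix(c(1, 0, 0,
                1, 1, 0,
                1, 1, 1), d, d, byrow = TRUE)
mu  <- c(0.5, -0.2, 1)
Sig <- diag(d)
l   <- rep(-1, d); u <- rep(2, d)
# Pr(l < A X < u) with X ~ N(mu, Sig) equals Pr(l - A mu < Y < u - A mu),
# where Y ~ N(0, A Sig A^T): shift the limits and transform the covariance
shift <- as.vector(A %*% mu)
mvNcdf(l - shift, u - shift, A %*% Sig %*% t(A), n = 1e5)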
For small dimensions, say \(d<50\), better accuracy may be obtained by using the (usually slower) quasi-Monte Carlo version of this algorithm, mvNqmc (see the sketch below).
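For example, a sketch assuming mvNqmc accepts the same arguments as mvNcdf (the inputs below are illustrative):

# low-dimensional case: compare the Monte Carlo and quasi-Monte Carlo estimators
d <- 10; l <- rep(0, d); u <- rep(1, d); Sig <- diag(d)
mvNcdf(l, u, Sig, 1e4)  # Monte Carlo estimate
mvNqmc(l, u, Sig, 1e4)  # quasi-Monte Carlo version, usually more accurate for small d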
Z. I. Botev (2017), The Normal Law Under Linear Restrictions: Simulation and Estimation via Minimax Tilting, Journal of the Royal Statistical Society, Series B, 79 (1), pp. 1--24.
d <- 15; l <- 1:d; u <- rep(Inf, d)
Sig <- matrix(rnorm(d^2), d, d) * 2
Sig <- Sig %*% t(Sig)
mvNcdf(l, u, Sig, 1e4) # compute the probability
#> $prob
#> [1] 3.423825e-154
#>
#> $relErr
#> [1] 0.001830164
#>
#> $upbnd
#> [1] 3.909422e-154
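Two further sketches tied to the description above: infinite limits are accepted, and increasing the sample size n shrinks the estimated relative error (exact numbers depend on the random seed; the inputs are illustrative):

# infinite lower limits are accepted
d <- 15; Sig <- diag(d)
mvNcdf(rep(-Inf, d), rep(0, d), Sig, 1e4)$prob   # about 0.5^15 for independent components

# a larger Monte Carlo sample size n gives a smaller estimated relative error
mvNcdf(1:d, rep(Inf, d), Sig, 1e3)$relErr
mvNcdf(1:d, rep(Inf, d), Sig, 1e5)$relErr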