A problem that is frequently encountered in statistics is that of computing some of the elements (e.g., the diagonal elements) of the inverse of a large, possibly sparse, positive definite matrix. Let $W = \{w_{ij}\}$ represent an $m \times m$ positive definite matrix, let $V = \{v_{ij}\} = W^{-1}$, and let $x = \{x_j\}$ represent an $m \times 1$ random vector whose distribution is multivariate normal with null mean vector and variance-covariance matrix $V$. The Gibbs sampler can be used to generate a sequence of $m \times 1$ vectors $x^{(1)}, x^{(2)}, \ldots$ such that, for sufficiently large values of $k$, $x^{(k)}$ can be regarded as a sample value of $x$. Letting $x_1^{(k)}, \ldots, x_m^{(k)}$ represent the elements of $x^{(k)}$, $x_i^{(k)}$ is a draw from a univariate normal distribution with mean $-w_{ii}^{-1}\bigl(\sum_{j=1}^{i-1} w_{ij} x_j^{(k)} + \sum_{j=i+1}^{m} w_{ij} x_j^{(k-1)}\bigr)$ and variance $w_{ii}^{-1}$. The sample values of $x$ can be used to obtain a Monte Carlo estimate of the expectation $E(x_i x_j)$ and hence [since $v_{ij} = E(x_i x_j)$] of $v_{ij}$. A possibly more efficient alternative is to use the sample values to evaluate the outer expectation in the expression $E[E(x_i x_j \mid x_T)]$, where $x_T$ is a subvector of $x$ that excludes $x_i$ and/or $x_j$. Sparsity (or various other kinds of structure) in $W$ can be used to computational advantage in generating the draws $x^{(1)}, x^{(2)}, \ldots$. Numerical results indicate that if $W$ is well conditioned, then the statistical dependence between the draws is relatively small, reasonably accurate Monte Carlo estimates can be obtained from a relatively small number of draws, and the conditioning of $x_i x_j$ on $x_T$ leads to significant improvements in accuracy. © 1999 Elsevier Science Inc. All rights reserved.
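As a rough illustration of the scheme described above, here is a minimal NumPy sketch of the basic estimator: it runs coordinate-wise Gibbs sweeps using exactly the conditional mean $-w_{ii}^{-1}\bigl(\sum_{j<i} w_{ij} x_j^{(k)} + \sum_{j>i} w_{ij} x_j^{(k-1)}\bigr)$ and variance $w_{ii}^{-1}$ given in the abstract, then averages outer products $x x^\top$ to estimate $V = W^{-1}$ via $v_{ij} = E(x_i x_j)$. The function name and the `n_draws`/`burn_in` parameters are illustrative, the code uses a dense $W$ for clarity (the paper's point is that sparsity in $W$ can be exploited in each sweep), and it implements the plain moment estimator rather than the more accurate Rao-Blackwellized variant $E[E(x_i x_j \mid x_T)]$.

```python
import numpy as np

def gibbs_inverse_estimate(W, n_draws=5000, burn_in=500, rng=None):
    """Monte Carlo estimate of V = W^{-1} via Gibbs sampling.

    Generates draws x ~ N(0, W^{-1}) one coordinate at a time and
    averages the outer products x x^T, since v_ij = E(x_i x_j).
    """
    rng = np.random.default_rng() if rng is None else rng
    m = W.shape[0]
    x = np.zeros(m)            # x^{(0)}: arbitrary starting vector
    V_hat = np.zeros((m, m))

    for k in range(burn_in + n_draws):
        for i in range(m):
            # Conditional of x_i given the rest is univariate normal with
            # mean -(1/w_ii) * sum_{j != i} w_ij x_j and variance 1/w_ii.
            # Entries j < i already hold their k-th values; entries j > i
            # still hold the (k-1)-th values, matching the abstract.
            s = W[i, :i] @ x[:i] + W[i, i + 1:] @ x[i + 1:]
            x[i] = -s / W[i, i] + rng.standard_normal() / np.sqrt(W[i, i])
        if k >= burn_in:       # discard early, dependent draws
            V_hat += np.outer(x, x)

    return V_hat / n_draws
```

A quick check on a small, well-conditioned, hypothetical matrix compares the estimate against a direct inverse:

```python
W = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
print(np.round(gibbs_inverse_estimate(W, n_draws=20000), 3))
print(np.round(np.linalg.inv(W), 3))   # reference answer
```

Consistent with the abstract's numerical findings, a well-conditioned $W$ yields weakly dependent draws, so a modest `n_draws` already gives a reasonable estimate; for ill-conditioned $W$ the draws mix slowly and many more sweeps would be needed.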