Reference: Dagum, P. & Horvitz, E. A Bayesian Analysis of Randomized Approximation Procedures. Knowledge Systems Laboratory, September 1991.
Abstract: Computer scientists have used Zero-One Estimation Theory to develop a body of results on the convergence of Monte-Carlo algorithms. In much of this work, investigators have analyzed bounds on the variance of an estimator p after N trials, assuming knowledge of the expectation μ of a random variable. We present an alternative framework for analyzing randomized approximation procedures that considers the manner in which μ is distributed for Bernoulli processes. Rather than relying on results about error that follow from the assumption of independent and identically distributed random variables, we consider the properties of the beta distribution. The beta is a conjugate distribution for Bernoulli sampling processes. Conjugate distributions allow us to represent both the prior and the posterior probability distributions for a sampling process. We derive probabilistic stopping rules for Monte-Carlo algorithms by approximating the portion of the cumulative distribution of the beta defined by bounds on error. The probabilistic stopping rules do not require prior knowledge or estimates of μ. We prove an upper bound on N that is a function of the value of the estimator p rather than of μ. For small relative error or small μ, our stopping rule gives an O(log n) speed-up in the running time of Monte-Carlo algorithms over the running time predicted by the Zero-One Estimator Theorem.
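To make the idea concrete, here is a minimal sketch of the kind of procedure the abstract describes: maintain a Beta posterior over the unknown Bernoulli mean and stop once enough posterior mass lies within a relative-error interval around the current estimate. This is an illustrative assumption-laden sketch, not the paper's actual stopping rule; the uniform Beta(1,1) prior, the threshold 1 - delta, the warm-up of 10 trials, and the function names (`mc_estimate`, `beta_cdf`) are all choices made here for illustration.

```python
import random
from scipy.special import betainc  # regularized incomplete beta = Beta CDF


def beta_cdf(x, a, b):
    """CDF of a Beta(a, b) distribution, clipped to [0, 1]."""
    return betainc(a, b, min(max(x, 0.0), 1.0))


def mc_estimate(sample, eps=0.1, delta=0.05, max_trials=200_000, seed=0):
    """Estimate mu = E[sample(rng)] for a 0/1 (Bernoulli) process.

    Probabilistic stopping rule (illustrative): with s successes in
    n trials, the posterior under a uniform prior is Beta(s+1, n-s+1).
    Stop once this posterior places mass >= 1 - delta on the
    relative-error interval [p*(1-eps), p*(1+eps)] around p = s/n.
    """
    rng = random.Random(seed)
    s = n = 0
    while n < max_trials:
        s += sample(rng)
        n += 1
        if s == 0 or n < 10:  # need a nonzero estimate before testing
            continue
        p = s / n
        lo, hi = p * (1 - eps), p * (1 + eps)
        mass = beta_cdf(hi, s + 1, n - s + 1) - beta_cdf(lo, s + 1, n - s + 1)
        if mass >= 1 - delta:
            return p, n
    return s / n, n


# Usage: estimate the mean of a Bernoulli(0.3) process.
p, n = mc_estimate(lambda rng: 1 if rng.random() < 0.3 else 0)
```

Note the design point the abstract emphasizes: the rule consults only the running estimate p = s/n (through the posterior), never the true μ, so no prior knowledge or estimate of μ is needed.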