Reference: Dagum, P. & Luby, M. Approximating Probabilistic Inference in Bayesian Belief Networks is NP-Hard. April 1993.
Abstract: Exact computation of probabilistic inference is known to be NP-hard [Cooper90]. For inferences that are not conditioned on any evidence nodes, it is relatively straightforward to obtain polynomial-time approximations with absolute error ε and failure probability δ. However, efforts to find efficient approximation algorithms for general inferences, i.e., inferences conditioned on evidence, have met with little success: polynomial-time algorithms that approximate inferences within absolute or relative error bounds have been constructed only for special-case belief networks. We prove that even approximating probabilistic inference conditioned on evidence in general belief networks is NP-hard. Specifically, we prove that if P is not equal to NP, there is no algorithm that can output an estimate Z such that, for some ε < 1/2, Pr[x|y] - ε <= Z <= Pr[x|y] + ε, for inferences Pr[x|y] with single-node instantiations x and y. Furthermore, we present evidence that it is intractable even to determine whether Z satisfies Pr[x|y] - ε <= Z <= Pr[x|y] + ε with probability greater than 1/2, for ε < 1/2.
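The contrast the abstract draws can be illustrated with sampling. Unconditioned inferences admit a simple Monte Carlo (ε, δ)-approximation via forward sampling with a Hoeffding-bound sample count, while the standard approach for conditioned inferences, rejection sampling, degrades as the evidence becomes unlikely. The sketch below uses a hypothetical two-node network (X -> Y) not taken from the paper; the parameters `p_x` and `p_y_given_x` are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Hypothetical two-node belief network X -> Y (illustrative, not from the paper).
p_x = 0.3                        # Pr[X=1]
p_y_given_x = {0: 0.1, 1: 0.8}   # Pr[Y=1 | X=x]

def sample():
    """Forward-sample the network in topological order."""
    x = 1 if random.random() < p_x else 0
    y = 1 if random.random() < p_y_given_x[x] else 0
    return x, y

eps, delta = 0.05, 0.01
# Hoeffding bound: n i.i.d. samples give absolute error <= eps
# with probability >= 1 - delta for the *unconditioned* estimate.
n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))

# Unconditioned query Pr[X=1]: plain forward sampling suffices.
samples = [sample() for _ in range(n)]
est_px = sum(x for x, _ in samples) / n

# Conditioned query Pr[X=1 | Y=1]: rejection sampling keeps only the
# samples consistent with the evidence. When Pr[y] is exponentially
# small, nearly all samples are rejected -- the intuition behind the
# hardness of the conditioned case.
kept = [x for x, y in samples if y == 1]
est_px_given_y = sum(kept) / len(kept) if kept else float("nan")
```

Here the evidence Y=1 is common (Pr[Y=1] = 0.31), so rejection sampling works well; the paper's result says that no polynomial-time algorithm can guarantee such ε < 1/2 accuracy for conditioned inferences in general networks unless P = NP.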