Reference: Levy, A. Y. Irrelevance Reasoning in Knowledge Based Systems. Ph.D. dissertation, Stanford University, 1993.
Abstract: Speeding up inference over large knowledge bases is key to scaling up knowledge-based systems. To do so, a system must be able to automatically identify and ignore information that is irrelevant to a specific task. Identifying irrelevant knowledge is also key to enabling reasoning in environments in which several systems (and their respective knowledge bases) interoperate. This dissertation considers the problem of reasoning about irrelevance of knowledge in a principled and efficient manner. Specifically, it is concerned with two key problems: (1) developing algorithms for automatically deciding which parts of a knowledge base are irrelevant to a query, and (2) the utility of relevance reasoning. As a basis for addressing these problems, we present a formal framework for analyzing irrelevance. The framework includes a space of possible definitions of irrelevance, based on a proof-theoretic analysis of the notion. Within this space, we identify the class of strong irrelevance claims, which has two desirable properties: strong irrelevance claims can be derived automatically and efficiently, and they are guaranteed to lead to savings in inference. The dissertation describes a novel tool, the query-tree, for reasoning about irrelevance. Based on the query-tree, we develop several algorithms for deciding which formulas are irrelevant to a query. These algorithms dramatically speed up inference, especially when the knowledge base includes a large database of ground facts. The query-tree has been investigated primarily for Horn-rule knowledge bases with interpretable constraints (e.g., order and sort constraints), as well as several more expressive extensions. For certain cases, the algorithms are shown to be complete, in that they detect all irrelevant formulas. An important aspect of the query-tree is that it can be built by examining only a small part of the knowledge base (e.g., only the rules), and can therefore be built efficiently.
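The flavor of this kind of relevance pruning can be illustrated with a much-simplified sketch (not Levy's query-tree itself): starting from the query predicate, collect every predicate reachable through rule bodies; a rule whose head predicate is never reached cannot participate in any proof of the query and is therefore irrelevant to it. The rule representation and names below are illustrative assumptions, not taken from the dissertation.

```python
def relevant_rules(rules, query_pred):
    """rules: list of (head_predicate, [body_predicates]) pairs.
    Returns the subset of rules whose head is reachable from the query
    predicate by following rule bodies (a coarse sufficient test for
    relevance; the query-tree is considerably finer-grained)."""
    reached = {query_pred}
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head in reached:
                for pred in body:
                    if pred not in reached:
                        reached.add(pred)
                        changed = True
    return [(head, body) for head, body in rules if head in reached]

# Hypothetical example: a 'sibling' rule is irrelevant to an 'ancestor' query.
rules = [
    ("ancestor", ["parent"]),
    ("ancestor", ["parent", "ancestor"]),
    ("sibling", ["parent"]),
]
print(relevant_rules(rules, "ancestor"))
# keeps only the two ancestor rules
```

Backward chaining restricted to the returned rules never visits the pruned ones, which is the source of the inference savings the abstract refers to; the actual query-tree also exploits constraints (order, sorts) to prune far more aggressively.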
The query-tree is also used to derive the consequences of irrelevance knowledge given by a user. The dissertation presents an empirical analysis of the algorithms when performing backward chaining on Horn rules, showing that in practice, relevance reasoning yields significant savings, often orders of magnitude. Our general framework sheds new light on the problem of detecting the independence of queries from updates, and we present new results that significantly extend previous work in this area. The framework also provides a setting in which to investigate the connection between the notion of irrelevance and the creation of abstractions. We propose a new approach to research on reasoning with abstractions, in which we investigate the properties of an abstraction by considering the irrelevance claims on which it is based. We demonstrate the potential of this approach for abstraction of predicates and projection of predicate arguments. Finally, we describe an application of relevance reasoning to the domain of modeling physical devices. We consider the task of selecting a model for a device and a query by composing model fragments, each describing a single phenomenon in the physical world at some level of abstraction and approximation. We present a novel model-composition algorithm based on irrelevance that composes a model with abstractions and perspectives appropriate for answering the query.
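The independence-from-updates idea admits a similarly coarse, self-contained sketch: if the predicate being updated is not reachable from the query through any rule, the update cannot change the query's answer. This is only a sufficient condition under an assumed predicate-level view of relevance; Levy's results are finer than this illustration.

```python
def reachable_preds(rules, query_pred):
    """rules: list of (head_predicate, [body_predicates]) pairs.
    Returns all predicates the query may depend on, via rule bodies."""
    reached, frontier = {query_pred}, [query_pred]
    while frontier:
        pred = frontier.pop()
        for head, body in rules:
            if head == pred:
                for p in body:
                    if p not in reached:
                        reached.add(p)
                        frontier.append(p)
    return reached

def independent_of_update(rules, query_pred, updated_pred):
    """True when an update to updated_pred provably cannot affect the
    query (a conservative, predicate-granularity test)."""
    return updated_pred not in reachable_preds(rules, query_pred)

rules = [("ancestor", ["parent"]), ("ancestor", ["parent", "ancestor"])]
print(independent_of_update(rules, "ancestor", "employee"))  # True
print(independent_of_update(rules, "ancestor", "parent"))    # False
```

A negative answer from this test is inconclusive: reaching the updated predicate only means the update *might* matter, which is why finer-grained analyses (taking constraints and bindings into account) extend what can be proven independent.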
Full paper available as ps.