Speeding Up Inferences in Large Knowledge Bases

Reference: Levy, A. Y.; Fikes, R. E.; & Sagiv, Y. Speeding Up Inferences in Large Knowledge Bases. Knowledge Systems Laboratory, Stanford University, November 1993.

Abstract: Speeding up inferences made from large knowledge bases is key to scaling up AI systems. The query-tree is a powerful tool for analyzing knowledge bases (KBs) containing Horn rules that takes into account the semantics of interpreted predicates appearing in the rules (e.g., order and sort predicates). It is a finite structure that encodes all derivations of a given set of queries and tells us which rules and ground facts can be used in deriving answers to the queries and which can be ignored. This paper investigates experimentally the impact of several methods of employing the query-tree to speed up inference. Speedups are obtained by creating specialized indices that point only to relevant facts in the KB and by following only sequences of rule applications that are allowed by the query-tree. The experiments show that significant speedups (often orders of magnitude) are obtained by employing the query-tree. Moreover, we show that the speedups improve as the size of the KB grows, indicating that the methods will scale up to large KBs.
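To illustrate the flavor of the pruning the abstract describes, here is a minimal sketch, not the paper's algorithm: a propositional forward chainer over Horn rules in which a hypothetical `relevant` set stands in for the query-tree's analysis, filtering out facts and rule applications that cannot contribute to answering the query.

```python
def forward_chain(rules, facts, relevant):
    """Forward chaining with query-tree-style pruning (simplified sketch).

    rules: list of (body, head) pairs, where body is a list of predicate
           names and head is a predicate name (a propositional simplification
           of Horn rules).
    facts: set of ground facts (predicate names).
    relevant: set of predicates the (hypothetical) query-tree analysis has
              marked as potentially useful for the query.
    """
    # Specialized index: start only from facts the analysis deems relevant.
    derived = {f for f in facts if f in relevant}
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            # Skip rule applications whose conclusion is irrelevant,
            # mirroring how the query-tree restricts rule sequences.
            if head in relevant and head not in derived \
                    and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived
```

For example, with rules `[(["a","b"],"c"), (["c"],"d"), (["x"],"y")]`, facts `{"a","b","x"}`, and relevant set `{"a","b","c","d"}`, the fact `x` is never indexed and the rule deriving `y` is never tried; only `{"a","b","c","d"}` is computed. The savings grow with the number of irrelevant facts, consistent with the abstract's observation that speedups improve as the KB grows.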

Full paper available in PostScript (ps).

Send mail to ksl-info@ksl.stanford.edu to reach the maintainer of the KSL Reports.