Publications



Communicable Knowledge in Automated System Identification

by Reinhard Stolle and Elizabeth Bradley
Invited chapter in Computational Discovery of Communicable Knowledge, pp. 17-43. Edited by L. Todorovski and S. Dzeroski. LNCS 4660, Springer, Berlin, 2007.
PDF version (299 kB). Postscript version (282 kB).

Abstract:

We describe the program PRET, an engineering tool for nonlinear system identification, which is the task of inferring a (possibly nonlinear) ordinary differential equation model from external observations of a target system's behavior. PRET has several characteristics in common with programs from the fields of machine learning and computational scientific discovery. However, since PRET is intended to be an engineer's tool, it makes different choices with regard to the tradeoff between model accuracy and parsimony. The choice of a good model depends on the engineering task at hand, and PRET is designed to let the user communicate the task-specific modeling constraints to the program. PRET's inputs, its outputs, and its internal knowledge base are instances of communicable knowledge -- knowledge that is represented in a form that is meaningful to the domain experts who are the intended users of the program.


Mastering complexity through modeling and early prototyping

by Reinhard Stolle, Christian Salzmann and Tillmann Schumm
Proceedings 7th Euroforum-Jahrestagung Software im Automobil, Stuttgart, Germany, May 2006.
PDF version (213 kB).

Abstract:

Modern systems of automotive electronics and software are characterized by a high degree of complexity and heterogeneity. Important techniques in the design, decomposition, implementation and integration of such systems include standardization, modeling and prototyping. In this paper, we describe these techniques and discuss their benefits.


Modellierungsarten fuer automotive HMIs

by Oliver Scheickl, Thomas Benedek and Reinhard Stolle
Proceedings Modellierung 2006, Workshop Modellbasierte Entwicklung von eingebetteten Fahrzeugfunktionen, Innsbruck, Austria, March 2006.
PDF version (554 kB).

Abstract:

We analyze two fundamentally different ways of modeling automotive HMIs: specification-oriented and implementation-oriented modeling. Specification-oriented models describe, from the specifier's point of view, the requirements on the HMI. Implementation-oriented models facilitate typical, frequently recurring evolutionary changes to the HMI implementation.

For HMI code generation (the automatic generation of the implementation from the specification) and HMI test automation (the automatic testing of the implementation against the specification), specification-oriented models and implementation-oriented models must be placed in an unambiguous, formal relationship to each other. The present analysis of the purposes and properties of such models is a first step toward this goal.


Challenges in Automated Model-Based HMI Testing

by Reinhard Stolle, Thomas Benedek, Christian Knuechel and Harald Heinecke
Proceedings GI Jahrestagung 2005(2), Automotive Software Engineering, Springer, Berlin, Germany, September 2005, pp. 186-190.

Abstract:

We describe our approach to automated model-based HMI testing. The paper is divided into two parts. In the first part, we summarize the current status of our work. In the second part, we describe a number of research areas that must be addressed in order to achieve true model-based HMI test automation.


Model-Based Test Automation for Automotive HMIs

by Reinhard Stolle, Thomas Benedek and Christian Knuechel
Proceedings Jahrestagung der ASIM/GI-Fachgruppe 4.5.5 Simulation technischer Systeme, Simulations- und Testmethoden fuer Software in Fahrzeugsystemen, ISSN 1436-9915, Berlin, Germany, March 2005.
PDF version (of the whole proceedings, 10.7 MB).

Abstract:

The development process of automotive human machine interfaces (HMIs) is traditionally characterized by a number of discontinuities. A diverse set of stakeholders participate in this development process, all of whom have their own perspectives, use their own approaches, and rely on their own representational frameworks and tools. Our strategy to overcome the existing discontinuities relies on a model-based approach: our goal is to use a formal HMI specification that serves as a common basis for all phases of the development process, including automated testing. Currently, our automated tests compare the HMI embedded control unit (ECU) against a prototype that has been derived partially from a formal specification. We are working toward being able to derive the prototype almost completely automatically from the specification and test the HMI ECU automatically against the formal specification.


Agenda Control for Heterogeneous Reasoners

by Reinhard Stolle, Apollo Hogan and Elizabeth Bradley
The Journal of Logic and Algebraic Programming 62:41-69 (2005).
PDF version (470 kB). Postscript version (421 kB).

Abstract:

As artificial intelligence techniques are maturing and being deployed in large applications, the problem of specifying control and reasoning strategies is regaining attention.

Complex AI systems tend to comprise a suite of modules, each of which is capable of solving a different aspect of the overall problem, and each of which may incorporate a different reasoning paradigm. The orchestration of such heterogeneous problem solvers can be divided into two subproblems: (1) when and how are the various reasoning modes invoked, and (2) how is information passed between them? In this paper, we explore some solutions to these problems. In particular, we describe a logic programming system that is based on three ideas: equivalence of declarative and operational semantics, declarative specification of control information, and smoothness of interaction with non-logic-based programs.

Meta-level predicates are used to specify control information declaratively, compensating for the absence of procedural constructs that usually facilitate formulation of efficient programs. Knowledge that has been derived in the course of the current inference process can at any time be passed to non-logic-based program modules. Traditional SLD inference engines maintain only the linear path to the current state in the SLD search tree: formulae that have been proved on this path are implicitly represented in a stack of recursive calls to the inference engine, and formulae that have been proved on previous, unsuccessful paths are lost altogether. In our system, previously proved formulae are maintained explicitly and therefore can be passed to other reasoning modules.
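
To make the contrast concrete, here is a minimal Python sketch (propositional Horn clauses only; all names invented, not PRET's implementation): every proved subgoal goes into an explicit store, so results derived on paths that later fail remain available to other modules.

    # Minimal sketch: an SLD-style prover over propositional Horn
    # clauses that records every proved formula in an explicit store.
    from typing import Dict, List, Set

    Rules = Dict[str, List[List[str]]]   # head -> alternative bodies

    def prove(goal: str, rules: Rules, proved: Set[str]) -> bool:
        """Try each clause for `goal`; record successes in `proved`."""
        if goal in proved:
            return True
        for body in rules.get(goal, []):
            if all(prove(sub, rules, proved) for sub in body):
                proved.add(goal)   # explicit: survives backtracking
                return True
        return False

    rules = {"flies": [["bird", "penguin"], ["bird", "has_wings"]],
             "bird": [[]], "has_wings": [[]]}
    proved: Set[str] = set()
    prove("flies", rules, proved)
    # "bird" was proved on the first (failing) path and is still here:
    print(proved)   # {'bird', 'has_wings', 'flies'}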

As an application example, we show how this inference system acts as the knowledge representation and reasoning framework of PRET---a program that automates system identification.


Entailment, Intensionality and Text Understanding

by Cleo Condoravdi, Richard Crouch, Valeria de Paiva, Reinhard Stolle and Daniel G. Bobrow
Proceedings Workshop on Text Meaning, Human Language Technology Conference (HLT-NAACL-2003), Edmonton, Canada, May 2003.
PDF version (86 kB). Postscript version (90 kB).

Abstract:

We argue that the detection of entailment and contradiction relations between texts is a minimal metric for the evaluation of text understanding systems. Intensionality, which is widespread in natural language, raises a number of detection issues that cannot be brushed aside. We describe a contexted clausal representation, derived from approaches in formal semantics, that permits an extended range of intensional entailments and contradictions to be tractably detected.


Flattened Semantic Representations

by Daniel G. Bobrow, Cleo Condoravdi, Richard Crouch, Valeria de Paiva and Reinhard Stolle
Stanford Semantics and Pragmatics Workshop 2003, Stanford, California, March 2003.

Abstract:

Flattened representations are standard fare in computational systems that attempt to reason with logical formulae. A textbook example is the skolemization and conversion to clausal form of first-order formulas, facilitating various forms of resolution theorem proving. Flat representations simplify the control for the reasoning engine but potentially at the expense of expressive power.

Semantic analyses for natural language tend to be framed in some kind of higher-order and/or intensional logic that is both expressive and intractable. To what extent can such representations be flattened into something more tractable so that reasoning can be performed on the output of semantic analysis? The process of flattening itself must be tractable and preserve key entailments of the original NL input.

In this talk we propose a systematic way of deriving flattened representations from linguistically motivated logical forms and discuss some of the questions it raises.

There are two principal motivations for flattening. First, structures assembled by compositional semantics must be transformed to structures that are well-suited for making successive small, automated inference steps. This transformation requires that important globally represented information, such as scopes of operators, be made locally accessible.

The second motivation is to bring about a reduction in expressive power of the formalism, while as far as possible preserving the meaning of what is represented. This typically involves replacing expressions with complex internal structure (e.g., propositional arguments, verbal complexes) with atomic first order terms (e.g., skolems for Davidsonian events, context names), and then making first order statements about these terms in order to recapitulate their internal structure.

For purposes of illustration, consider a simplified semantic representation for (1), which abstracts away from such things as tense, the proper representation of gradable predicates, etc., but which follows the proposal that object arguments to "prevent" should be concept-denoting ("Preventing existence", FOIS 2001).

 (1) Removing a sleeve made the cable flexible, preventing breakage.

 (2) exists(c, cable(c),
        exists(s, sleeve(s),
             prevent(make(remove(s), flexible(c)), ^breakage)))

How can (2) be transformed into a flat, clausal representation, and what would be achieved by doing so? We propose (3) as a flattened representation.

 (3) (ctx t (sleeve sleeve100))
     (ctx t (cable cable220))
     (ctx t (breakage breakage250_type))
     (ctx t (remove *remove_ev700* nullAgent221 sleeve100))
     (ctx t (prevent *make_context105* breakage250_type))
     (ctx *make_context105*
          (make *remove_ev700* *flexible_context123*))
     (ctx *flexible_context123* (flexible cable220))

In (3) the transformation from global to local dependencies and the reduction to atomic first order terms (highlighted with *'s) is achieved by

  1. skolemizing quantifiers,
  2. introducing Davidsonian event arguments,
  3. replacing propositional arguments to predicates by named contexts, and placing the propositions within the contexts so created, and
  4. converting the result to a conjunction of contexted clauses.
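
These four steps can be mechanized. The sketch below is an invented toy flattener, not the authors' implementation (agents and tense are ignored, and the generated symbols will not match (3) exactly), but it reproduces the structure of (3) from a nested encoding of (2).

    # Toy flattener for the four steps above. All names are invented.
    import itertools

    counter = itertools.count(100)
    clauses = []
    ACTIONS = {"remove"}   # predicates reified as Davidsonian events

    def substitute(term, var, val):
        if term == var:
            return val
        if isinstance(term, tuple):
            return tuple(substitute(t, var, val) for t in term)
        return term

    def arg(a, ctx):
        if isinstance(a, tuple):
            return reify(a, ctx)
        if isinstance(a, str) and a.startswith("^"):  # concept-denoting
            name = f"{a[1:]}{next(counter)}_type"
            clauses.append((ctx, (a[1:], name)))
            return name
        return a

    def reify(term, ctx):
        op, *args = term
        flat = [arg(a, ctx) for a in args]
        if op in ACTIONS:                  # step 2: event argument
            ev = f"*{op}_ev{next(counter)}*"
            clauses.append((ctx, (op, ev, *flat)))
            return ev
        new_ctx = f"*{op}_context{next(counter)}*"   # step 3: named context
        clauses.append((new_ctx, (op, *flat)))
        return new_ctx

    def flatten(term, ctx="t"):            # step 4: contexted clauses
        op, *args = term
        if op == "exists":                 # step 1: skolemize
            var, (sort, _), body = args
            sk = f"{sort}{next(counter)}"
            clauses.append((ctx, (sort, sk)))
            flatten(substitute(body, var, sk), ctx)
        else:
            clauses.append((ctx, (op, *[arg(a, ctx) for a in args])))

    flatten(("exists", "c", ("cable", "c"),
             ("exists", "s", ("sleeve", "s"),
              ("prevent", ("make", ("remove", "s"), ("flexible", "c")),
               "^breakage"))))
    for ctx, cl in clauses:
        print(f"(ctx {ctx} ({' '.join(map(str, cl))}))")
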

Knowledge Tracking: Answering Implicit Questions

by Reinhard Stolle, Daniel G. Bobrow, Cleo Condoravdi, Richard Crouch and Valeria de Paiva
Proceedings AAAI Spring Symposium on New Directions in Question Answering 2003, Stanford, California, March 2003.
PDF version (26 kB).

Abstract:

Research on Question Answering has produced an arsenal of useful techniques for detecting answers that are explicitly present in the text of a collection of documents. To move beyond current capabilities, effort must be directed toward analyzing the source documents and interpreting them by representing their content, abstracting away from the particular linguistic expressions used. The content representations enable reasoning based on what things mean rather than how they are phrased. Mapping accurately from natural language text to content representations requires deep linguistic analysis and proper treatment of ambiguity and contexts. Research in Question Answering has traditionally tried to circumvent these problems due to the lack of feasible solutions. We strongly believe that these problems can and must be tackled now: PARC's deep NLP technology scales well, and our preliminary results with mapping to content representation are encouraging. In order to bring fundamental issues of deep analysis to the fore, we have chosen to work on a task we call "knowledge tracking" that cannot be accomplished without interpretation of the source text. The goal of knowledge tracking is to identify the relationship of the content of a new document to the content of previously collected documents. Knowledge tracking can thus be viewed as a particular kind of question answering for a set of implicit questions. It provides useful functionality even when applied to a medium-size collection and can therefore serve as a laboratory where deep processing is feasible. Results on this task can help to extend the capabilities of many question-answering systems.


Scalability of Redundancy Detection in Focused Document Collections

by Richard Crouch, Cleo Condoravdi, Reinhard Stolle, Tracy King, Valeria de Paiva, John O. Everett and Daniel G. Bobrow
Proceedings First International Workshop on Scalable Natural Language Understanding (SCANALU-2002), Heidelberg, Germany, May 2002.
PDF version (164 kB). Postscript version (148 kB).

Abstract:

We describe the application of primarily symbolic methods to the task of detecting logical redundancies and inconsistencies between documents in a medium-sized, domain-focused collection (1000--40,000 documents). Initial investigations indicate good scalability prospects, especially for syntactic and semantic processing. The difficult and largely neglected task of mapping from linguistic/semantic representations to domain-tailored knowledge representations is potentially more of a bottleneck.


Finding Similar Documents in Document Collections

by Thorsten Brants and Reinhard Stolle
Proceedings Third International Conference on Language Resources and Evaluation (LREC-2002), Workshop on Using Semantics for Information Retrieval and Filtering, Las Palmas, Spain, June 2002.
PDF version (88 kB). Postscript version (130 kB).

Abstract:

Finding similar documents in natural language document collections is a difficult task that requires general and domain-specific world knowledge, deep analysis of the documents, and inference. However, a large portion of the pairs of similar documents can be identified by simpler, purely word-based methods. We show the use of Probabilistic Latent Semantic Analysis for finding similar documents. We evaluate our system on a collection of photocopier repair tips. Among the 100 top-ranked pairs, 88 are true positives. A manual analysis of the 12 false positives suggests the use of more semantic information in the retrieval model.
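
For readers unfamiliar with the technique, the following numpy sketch shows the shape of such a word-based method: fit the PLSA aspect model by EM and rank document pairs by cosine similarity of their topic mixtures (the cosine ranking is an illustrative choice, not necessarily the paper's exact configuration).

    import numpy as np

    def plsa(N, Z=16, iters=50, seed=0):
        """Fit an aspect model P(w|z), P(z|d) to doc-term counts N by EM."""
        rng = np.random.default_rng(seed)
        D, W = N.shape
        Pwz = rng.random((Z, W)); Pwz /= Pwz.sum(1, keepdims=True)
        Pzd = rng.random((D, Z)); Pzd /= Pzd.sum(1, keepdims=True)
        for _ in range(iters):
            R = Pzd[:, :, None] * Pwz[None, :, :]    # E-step: P(z|d,w)
            R /= R.sum(1, keepdims=True) + 1e-12
            C = N[:, None, :] * R                    # expected counts n(d,z,w)
            Pwz = C.sum(0); Pwz /= Pwz.sum(1, keepdims=True) + 1e-12  # M-step
            Pzd = C.sum(2); Pzd /= Pzd.sum(1, keepdims=True) + 1e-12
        return Pwz, Pzd

    def top_pairs(Pzd, k=100):
        """Rank document pairs by cosine similarity of topic mixtures."""
        X = Pzd / (np.linalg.norm(Pzd, axis=1, keepdims=True) + 1e-12)
        S = X @ X.T
        i, j = np.triu_indices_from(S, k=1)
        order = np.argsort(S[i, j])[::-1][:k]
        return [(int(i[n]), int(j[n]), float(S[i[n], j[n]])) for n in order]

    # e.g.: Pwz, Pzd = plsa(np.random.default_rng(1).poisson(0.5, (200, 500)))
    #       print(top_pairs(Pzd, k=5))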


Finding Similar Content in Different Documents

by John O. Everett, Daniel G. Bobrow, Cleo Condoravdi, Richard Crouch, Valeria de Paiva and Reinhard Stolle
Proceedings AAAI Spring Symposium on Mining Answers from Texts and Knowledge Bases 2002, page 14, Stanford, California, March 2002.

Abstract:

Documents about a particular subject may describe the same phenomena in different words. For example, in a database of tips for repairing photocopiers, there are two tips about a safety cable failure. One describes the situation as “the cable is too stiff, which causes it to snap. Remove the sleeve from the cable.” The other says “Stripping the cover from the cable makes it more flexible.”

Assessing the similarity of texts requires the application of knowledge about language and about the world. In this case, we need to know, for example, that “stiff” and “flexible” both refer to the rigidity of an object, and that one is the inverse of the other, and also that a sleeve is a type of covering. Our research focuses on combining deep natural language analysis with domain knowledge representations.

We are developing a layered approach to automatically identifying similar document content. The first layer extracts from the text semantically normalized entities (in our case, things like parts, e.g., photoreceptor belt) and relevant activities (in our case, higher-level concepts representing domain-specific actions, such as cleaning). The set of normalized entities can be used as a signature for identifying tips likely to contain information about the same topic.
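
A minimal sketch of this signature idea, with invented tip contents:

    # Rank tip pairs by overlap of their normalized-entity signatures.
    from itertools import combinations

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    signatures = {
        "tip_1": {"safety_cable", "sleeve", "remove"},
        "tip_2": {"safety_cable", "cover", "strip"},
        "tip_3": {"photoreceptor_belt", "clean"},
    }
    ranked = sorted(((jaccard(s, t), a, b) for (a, s), (b, t)
                     in combinations(signatures.items(), 2)), reverse=True)
    print(ranked[0])   # the two safety-cable tips score highest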

The second layer builds, from these normalized entities and activities, representation fragments that correspond to events, which are actions applied to specific entities (e.g., cleaning a photoreceptor belt). These can be used to identify parts of a pair of related tips that have similar content. Matches between representations at this level are reliable and precise, as the representations are independent of the particular words of the text.

The third layer links these events with causal or temporal relations, to approximate the macro structure of the text. The partial matches and the higher level structure enable the generation of hypotheses about the relation of other parts of the tips not previously matched, such as identifying an action sequence as a workaround for repairing a machine, and placing it in correspondence with an instruction sequence for a standard method for fixing the machine.

This layered approach allows us to build up increasingly refined relationships among similar documents, providing some information about similarity of tips at each level. As a result, performance degrades gracefully in the face of ambiguous or incomplete natural language analyses.


Making Ontologies Work for Resolving Redundancies Across Documents

by John O. Everett, Daniel G. Bobrow, Reinhard Stolle, Richard Crouch, Valeria de Paiva, Cleo Condoravdi, Martin van den Berg and Livia Polanyi
Communications of the ACM 45(2):55-60 (2002).
PDF version (641 kB). Sorry, no postscript version available.

Abstract:

Knowledge management efforts over the past decade have produced many document collections focused on particular domains, such as the repair of photocopiers. As such systems scale up, they become unwieldy and ultimately unusable if obsolete and redundant content is not continually identified and removed.

We are working with such a knowledge sharing system at Xerox. Called Eureka, it now contains about 40,000 technician-authored tips, which are free text documents on how to repair copiers. Figure 1 shows a pair of similar tips from this corpus. Our goal is to build a system that can (a) identify such conceptually similar documents, regardless of how they are written, (b) identify the parts of two documents that overlap, and (c) identify parts of the documents that stand in some relation to each other, such as expanding on a particular topic or being in mutual contradiction. Such a system will enable the maintenance of vast document collections by identifying potential redundancies for human attention.

This task requires extensive knowledge about language and of the world, and a rich representation language. However, assessing similarity imposes conflicting requirements on the underlying ontology. On the one hand, the representations must capture enough of the nuances of natural language to be sufficiently discriminating, yet the ontology must support the normalization of differing representations of similar content, to enable the detection of similarities.

We have developed a number of design criteria for ontologies that support comparisons of natural language texts. In [1], we discuss the need for reified contexts to handle the representation of nonexistent situations and objects, and how reasoning with types and their instantiations can help. In this paper, we focus on ways to produce normalized representations in our ontology from a wide range of different ways of expressing the same idea. We then describe a particular mechanism for normalizing constructs such as "x is deeper than y", comparatives that occur frequently in our domain.
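
A minimal sketch of such a comparative normalization, with an invented mapping table: each comparative is mapped to a scale and a direction, so different wordings of the same comparison share one canonical form.

    # Normalize "x is <comparative> than y" to a canonical scale statement.
    SCALES = {"deeper": ("depth", ">"), "shallower": ("depth", "<"),
              "stiffer": ("rigidity", ">"), "more flexible": ("rigidity", "<")}

    def normalize(x, comparative, y):
        scale, direction = SCALES[comparative]
        if direction == "<":          # canonical order: greater term first
            x, y = y, x
        return ("greater", (scale, x), (scale, y))

    print(normalize("cable", "more flexible", "old_cable"))
    # ('greater', ('rigidity', 'old_cable'), ('rigidity', 'cable'))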


Preventing Existence

by Cleo Condoravdi, Richard Crouch, John O. Everett, Valeria de Paiva, Reinhard Stolle, Martin van den Berg and Daniel G. Bobrow
Proceedings International Conference on Formal Ontology in Information Systems (FOIS-2001), Ogunquit, Maine, October 2001.
PDF version (105 kB). Postscript version (96 kB).

Abstract:

We discuss the treatment of prevention statements in both natural language semantics and knowledge representation, with particular regard to existence entailments. First-order representations with an explicit existence predicate are shown to be inadequate for capturing the entailments of prevention statements. A linguistic analysis is framed in a higher-order intensional logic, employing a Fregean notion of existence as instantiation of a concept. We discuss how this can be mapped to a Cyc-style knowledge representation.


Reasoning about Models of Nonlinear Systems

by Reinhard Stolle, Matthew Easley, and Elizabeth Bradley
In Logical and Computational Aspects of Model-Based Reasoning, edited by Lorenzo Magnani, Nancy J. Nersessian and Claudio Pizzi. Kluwer Academic, Dordrecht, 2002.
PDF version (312 kB). Postscript version (275 kB).

Abstract:

An engineer's model of a physical system balances accuracy and parsimony: it is as simple as possible while still accounting for the dynamical behavior of the target system. PRET is a computer program that automatically builds such models. Its inputs are a set of observations of some subset of the outputs of a nonlinear system, and its output is an ordinary differential equation that models the internal dynamics of that system. Modeling problems like this have immense and complicated search spaces, and searching them is an imposing technical challenge. PRET exploits a spectrum of AI and engineering techniques to navigate efficiently through these spaces, using a special first-order logic system to decide which technique to use when and how to interpret the results. Its representations and reasoning tactics are designed both to support this flexibility and to leverage any domain knowledge that is available from the practicing engineers who are its target audience. This flexibility and power have let PRET construct accurate, minimal models of a wide variety of applications, ranging from textbook examples to real-world engineering problems.


Orchestrating Reasoning for Automated System Identification

by Reinhard Stolle
Accepted for publication in Lecture Notes in Computer Science. Springer, Heidelberg.
Revised version of Ph.D. thesis "Integrated Multimodal Reasoning for Modeling of Physical Systems."
Click here for abstract.


Reasoning about Nonlinear System Identification

by Elizabeth Bradley, Matthew Easley and Reinhard Stolle
Artificial Intelligence 133:139-188 (2001).
PDF version (640 kB). Postscript version (822 kB).

Abstract:

System identification is the process of deducing a mathematical model of the internal dynamics of a black-box system from observations of its outputs. The computer program PRET automates this process by building a layer of artificial intelligence (AI) techniques around a set of traditional formal engineering methods. PRET takes a generate-and-test approach, using a small, powerful meta-domain theory that tailors the space of candidate models to the problem at hand. It then tests these models against the known behavior of the target system using a large set of more-general mathematical rules. The complex interplay of heterogeneous reasoning modes that is involved in this process is orchestrated by a special first-order logic system that uses static abstraction levels, dynamic declarative meta control, and a simple form of truth maintenance in order to test models quickly and cheaply. Unlike other modeling tools -- most of which use libraries to model small, well-posed problems in limited domains and rely on their users to supply detailed descriptions of the target system -- PRET works with nonlinear systems in multiple domains and interacts directly with the real world via sensors and actuators. This approach has met with success in a variety of simulated and real applications, ranging from textbook systems to real-world engineering problems.


Multimodal Reasoning for Automatic Model Construction

by Reinhard Stolle and Elizabeth Bradley
Proceedings Fifteenth National Conference on Artificial Intelligence 1998 (AAAI-98), Madison, Wisconsin, July 1998.
PDF version (221 kB). Postscript version (199 kB).

Abstract:

This paper describes a program called PRET that automates system identification, the process of finding a dynamical model of a black-box system. PRET performs both structural identification and parameter estimation by integrating several reasoning modes: qualitative reasoning, qualitative simulation, numerical simulation, geometric reasoning, constraint reasoning, resolution, reasoning with abstraction levels, declarative meta-level control, and a simple form of truth maintenance.

Unlike other modeling programs that map structural or functional descriptions to model fragments, PRET combines hypotheses about the mathematics involved into candidate models that are intelligently tested against observations about the target system.

We give two examples of system identification tasks that this automated modeling tool has successfully performed. The first, a simple linear system, was chosen because it facilitates a brief and clear presentation of PRET's features and reasoning techniques. In the second example, a difficult real-world modeling task, we show how PRET models a radio-controlled car used in the University of British Columbia's soccer-playing robot project.


Integrated Multimodal Reasoning for Modeling of Physical Systems

by Reinhard Stolle
Ph.D. dissertation, University of Colorado at Boulder, August 1998.
Click here for abstract.
Please contact me if you would like a copy.


Multimodal Reasoning about Physical Systems

by Reinhard Stolle and Elizabeth Bradley
Proceedings AAAI Spring Symposium on Multimodal Reasoning 1998, Stanford, California, March 1998. AAAI Technical Report SS-98-04.
Postscript version.

Abstract:

We present a knowledge representation and reasoning framework that integrates qualitative reasoning, qualitative simulation, numerical simulation, geometric reasoning, constraint reasoning, resolution, reasoning with abstraction levels, declarative meta-level control, and a simple form of truth maintenance. The framework is the core of PRET, a system identification program that automates the process of modeling physical systems.


Opportunistic Modeling

by Reinhard Stolle and Elizabeth Bradley
Proceedings IJCAI Workshop Engineering Problems for Qualitative Reasoning, Nagoya, Japan, August 1997.
PDF version. Postscript version.

Abstract:

System identification -- the process of inferring an internal model from external observations of a system -- is a routine and difficult problem faced by engineers in a variety of domains. Typically, in the hierarchy from more-abstract to less-abstract models, the model of choice is the one that is just detailed enough to account for the properties and perspectives that are of interest for the task at hand. The main goal of the work described here was to design and implement a knowledge representation framework that allows a computer program to reason about physical systems and candidate models -- ordinary differential equations (ODEs), specifically -- in such a way as to find the right model at the right abstraction level as quickly as possible.

A key observation about the modeling process is the following. Not only is the resulting model the least complex of all possible ones, but also the reasoning during model construction takes place at the highest possible level at any time. Because of this, the knowledge representation framework was designed to allow easy formulation of knowledge and meta knowledge relative to various abstraction levels. The implemented framework is the core of PRET, an automatic modeling program that automates the system identification process.

We present two examples of system identification tasks that can be performed by PRET. The first example is a simple linear system that we have chosen for a brief and clear presentation of PRET's features and reasoning techniques. The second example is a real-world modeling task: We show how PRET models a radio-controlled car used in the University of British Columbia's soccer-playing robot project and discuss important research directions that arise from this real-world example.


Meta-Programming for Generalized Horn Clause Logic

by Clemens Beckstein, Reinhard Stolle, and Gerhard Tobermann
Proceedings Fifth International Workshop on Metaprogramming and Metareasoning in Logic (META96), pp. 27-42, Bonn, Germany, September 1996.
PDF version (274 kB). Postscript version (248 kB).

Abstract:

In conventional logic programming systems, control information is expressed by clause and goal order and by purely procedural constructs, e.g., the Prolog cut. This approach destroys the equivalence of declarative and procedural semantics in logic programs.

In this paper, we argue that in order to comply with the logic programming paradigm, control information should also be expressed declaratively. A program should be divided into a logical theory that specifies the problem to be solved and control information that specifies the strategy of the deduction process. Control information is expressed through meta level control clauses. These control clauses are evaluated dynamically in order to select the subgoal that will be resolved next and to select the resolving clause. Program clauses have guards that allow clause determinism to be expressed. A major design goal for the presented work is to keep the declarative and the procedural semantics of logic programs equivalent. The emphasis lies on the precise specification of the introduced meta level constructs.
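
As a toy rendering of the idea (not the paper's system), the Python sketch below lets the proof procedure consult control information stated as data -- here a preference ranking over goals -- instead of relying on clause and goal order or on cuts.

    def select(goals, control):
        """Declarative goal selection: lowest rank goes first."""
        return min(goals, key=lambda g: control.get(g, 0))

    def prove(goals, rules, control):
        if not goals:
            return True
        g = select(goals, control)
        rest = list(goals)
        rest.remove(g)
        return any(prove(body + rest, rules, control)
                   for body in rules.get(g, []))

    rules = {"p": [["q", "r"]], "q": [[]], "r": [["q"]]}
    control = {"r": -1}                 # "prefer r" as a declarative fact
    print(prove(["p"], rules, control))   # True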


Automatic Construction of Accurate Models of Physical Systems

by Elizabeth Bradley and Reinhard Stolle
Annals of Mathematics and Artificial Intelligence 17:1-28 (1996).
PDF version. Postscript version.

Abstract:

This paper describes an implemented computer program called PRET that automates the process of system identification: given hypotheses, observations, and specifications, it constructs an ordinary differential equation model of a target system with no other inputs or intervention from its user. The core of the program is a set of traditional system identification (SID) methods. A layer of artificial intelligence (AI) techniques built around this core automates the high-level stages of the identification process that are normally performed by a human expert. The AI layer accomplishes this by selecting and applying appropriate methods from the SID library and performing qualitative, symbolic, algebraic, and geometric reasoning on the user's inputs. For each supported domain (e.g., mechanics), the program uses a few powerful encoded rules (e.g., sum of forces = 0) to combine hypotheses into models. A custom logic engine checks models against observations, using a set of encoded domain-independent mathematical rules to infer facts about both, modulo the resolution inherent in the specifications, and then searching for contradictions. The design of the next generation of this program is also described in this paper. In it, discrepancies between sets of facts will be used to guide the removal of unnecessary terms from a model. Power-series techniques will be exploited to synthesize new terms from scratch if the user's hypotheses are inadequate, and sensors and actuators will allow the tool to take an input-output approach to modeling real physical systems.
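
As a loose illustration of this generate-and-test loop (a toy, not PRET itself, which reasons qualitatively and symbolically rather than by plain least squares), the sketch below combines hypothesis terms under a "sum of forces = 0" rule into candidate models, prefers smaller models, and accepts the first one consistent with the observations.

    import numpy as np
    from itertools import combinations
    from scipy.integrate import solve_ivp

    # Pretend this black box is only observable through its output;
    # internally it is x'' + 0.3 x' + 2 x = 0.
    sol = solve_ivp(lambda t, s: [s[1], -0.3 * s[1] - 2.0 * s[0]],
                    (0, 20), [1.0, 0.0], t_eval=np.linspace(0, 20, 2000),
                    rtol=1e-8, atol=1e-10)
    x, v = sol.y
    a = np.gradient(v, sol.t)                  # estimated acceleration

    # Hypotheses: terms that may appear in  x'' + sum(c_i * term_i) = 0
    hypotheses = {"x": x, "v": v, "x^3": x ** 3, "sign(v)": np.sign(v)}

    best = None
    for r in range(1, len(hypotheses) + 1):    # smallest models first
        for names in combinations(hypotheses, r):
            A = np.column_stack([hypotheses[n] for n in names])
            c, *_ = np.linalg.lstsq(A, -a, rcond=None)
            if np.mean((A @ c + a) ** 2) < 1e-3:   # consistent with data?
                best = (names, c.round(3))
                break
        if best:
            break
    print(best)    # expect roughly (('x', 'v'), [2.0, 0.3])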


A Customized Logic Paradigm for Reasoning about Models

by Reinhard Stolle and Elizabeth Bradley
Proceedings Tenth International Workshop on Qualitative Reasoning, Stanford Sierra Camp, California, 1996. (Picture here.) AAAI Technical Report WS-96-01.
PDF version. Postscript version.

Abstract:

Modeling is the process of constructing a model of a target system that is suitable for a given task. Typically, in the hierarchy from more-abstract to less-abstract models, the model of choice is the one that is just detailed enough to account for the properties and perspectives of interest for the task at hand. The main goal of the work described here was to design and implement a knowledge representation framework that allows a computer program to reason about physical systems and candidate models (ordinary differential equations, specifically) in such a way as to find the right model at the right abstraction level as quickly as possible.

A key observation about the modeling process is the following. Not only is the resulting model the least complex of all possible ones, but also the reasoning during model construction takes place at the highest possible level at any time. Because of this, the knowledge representation framework was designed to allow easy formulation of knowledge and meta knowledge relative to various abstraction levels.

Candidate models are constructed via simple, powerful domain rules. The customized knowledge representation framework is then used to generate new knowledge about the physical system and new knowledge about the candidate model. A candidate model is valid if the facts about the system that is to be modeled are consistent with the facts about the candidate model. Any inconsistency is a reason to discard the candidate model.
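
Read as pseudocode, this validity test amounts to something like the following sketch (the fact representation is invented for illustration):

    # Facts are (property, truth-value) pairs; a candidate model is
    # discarded as soon as the combined fact set contains a direct
    # contradiction.
    system_facts = {("oscillation", True), ("divergence", False)}
    model_facts = {("oscillation", True), ("divergence", True)}

    def consistent(a, b):
        facts = a | b
        return not any((p, not v) in facts for (p, v) in facts)

    print(consistent(system_facts, model_facts))   # False -> discard model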

The implemented framework is the core of PRET, a program -- currently under development -- that automates the modeling process.


Declarative Meta Level Control for Logic Programs

by Clemens Beckstein, Reinhard Stolle, and Gerhard Tobermann
Proceedings First Russian-German Symposium on Intelligent Information Technologies and Expert Systems, pp. 11-26, Moscow, Russia, November 1995.
Postscript version.

Abstract:

In conventional logic programming systems, control information is expressed by clause and goal order and by purely procedural constructs, e.g., the Prolog cut. This approach destroys the equivalence of declarative and procedural semantics in logic programs.

In this paper, we argue that in order to comply with the logic programming paradigm, control information should also be expressed declaratively. A program should be divided into a logical theory that specifies the problem to be solved and control information that specifies the strategy of the deduction process. Control information is expressed through meta level control clauses. These control clauses are evaluated dynamically in order to select the subgoal that will be resolved next and to select the resolving clause. Program clauses have guards that allow clause determinism to be expressed. A major design goal for the presented work is to keep the declarative and the procedural semantics of logic programs equivalent. The emphasis lies on the precise specification of the introduced meta level constructs.


Reinhard Stolle, e-mail: stolle@parc.com, phone: 650-812-4346
Last modified: Fri Apr 19 15:27:50 PDT 2002