CALO Explanation Project Page

The CALO Explanation project provided explanation capabilities for various components in CALO. Through the Integrated Cognitive Explanation Environment (ICEE), this project enabled CALO to explain its task processing and task learning abilities, to aid in building user trust. This work was mainly conducted at Stanford University, Rensselaer Polytechnic Institute (RPI), and SRI International, under the primary guidance of Deborah L. McGuinness.

Research Team

Deborah L. McGuinness, RPI (PI)
Alyssa Glass, Stanford
Michael Wolverton, SRI
Paulo Pinheiro da Silva, UTEP
Cynthia Chang, RPI
Li Ding, RPI

Project Overview

CALO, as an adaptive agent, is highly complex. It includes task processors, hybrid theorem provers, and probabilistic inference engines; multiple learning components employing a wide range of logical and statistical techniques; and multiple heterogeneous, distributed information sources underlying the processing. Despite this sophistication, however, individual CALO components typically provide little transparency into the computation and reasoning they perform.

Central to CALO is also its capacity for autonomous control: CALO must not only assist with user-initiated actions, but also act autonomously on behalf of its users.

As CALO plans for the achievement of abstract objectives, executes tasks, anticipates future needs, aggregates multiple sensors and information sources, and adapts its behavior over time, there is an underlying assumption that there will be a user in the loop whom CALO is serving. This user needs to understand CALO's behavior and responses well enough to participate in the mixed-initiative execution process and to adjust the autonomy inherent in CALO. The user also needs to trust the reasoning and actions performed by CALO, including trusting that those actions are based on appropriate processes and on information that is accurate and current.

To support this trust, CALO components must record justifications for their conclusions, and CALO must be able to use these justifications to derive explanations describing how it arrived at a recommendation, including the ability to abstract away detail that is irrelevant to the user's understanding and trust evaluation. Further, with regard specifically to task processing, CALO needs to explain how and under what conditions it will execute a task, as well as how and why that procedure has been created or modified over time.

One significant challenge in explaining a cognitive assistant like CALO is that it necessarily includes both task processing components that evaluate and execute tasks and reasoning components that derive conclusions. A comprehensive explainer must therefore explain the responses of task processors as well as those of more traditional reasoning systems, providing access to both inference and provenance information, a combination we refer to as knowledge provenance.

The ICEE project aimed to provide transparency into this knowledge provenance for the CALO systems. We conducted a trust study to analyze trust and explanation in complex adaptive agents. To address the issues the study uncovered, we designed an explanation framework built on our Proof Markup Language (PML) interlingua and our Inference Web (IW) infrastructure, which had previously been used to represent provenance and inference information for logical inference, text analytics, and web services; we extended this paradigm to also cover the results of task execution systems and machine learning. The resulting framework provides a uniform approach for representing and explaining both provenance and inference information from the disparate communities that contributed components to CALO. In particular, we emphasized (a) the information that systems need to make transparent in order to be considered "explainable," described here through the introspective predicates each component provides to the explanation system; and (b) explanations across a wide range of machine learning components, using a single unified representation. The result is our Integrated Cognitive Explanation Environment (ICEE), the explanation system that we built and tested within CALO.
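As a rough illustration of the kind of structure such a justification interlingua captures, the Python sketch below builds a minimal justification graph in which each node records a conclusion, the inference rule or task-processing step that produced it, its antecedent nodes, and the source of any asserted information. The class and field names are hypothetical stand-ins chosen for this example; the actual PML vocabulary is defined as an OWL ontology and is considerably richer.

```python
# Illustrative sketch of a PML-style justification graph. Names here
# (NodeSet, rule, antecedents, source) are hypothetical, not the real
# PML OWL terms.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class NodeSet:
    conclusion: str
    rule: Optional[str] = None           # step that derived the conclusion
    antecedents: List["NodeSet"] = field(default_factory=list)
    source: Optional[str] = None         # provenance for asserted facts


def explain(node: NodeSet, depth: int = 0) -> List[str]:
    """Flatten a justification into indented human-readable lines,
    annotating each conclusion with its rule and/or source."""
    line = "  " * depth + node.conclusion
    if node.rule:
        line += f" [by {node.rule}]"
    if node.source:
        line += f" [from {node.source}]"
    lines = [line]
    for ant in node.antecedents:
        lines.extend(explain(ant, depth + 1))
    return lines


# A toy task-processing justification:
meeting = NodeSet(
    "Meeting scheduled for 3pm",
    rule="TaskExpansion",
    antecedents=[
        NodeSet("Room 101 is free at 3pm", source="calendar"),
        NodeSet("All attendees available", rule="Aggregation"),
    ],
)
for line in explain(meeting):
    print(line)
```

An explainer built over such a graph could abstract away detail by, for example, truncating the recursion at a chosen depth or suppressing nodes whose rules the user has marked as trusted.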