From: Mark_Stefik.PARC@xerox.com
Date: Thu, 9 May 1991 17:26:49 PDT
Subject: Re: Some thoughts on K.I.F. requirements
In-Reply-To: "email@example.com:edu:Xerox's message of Wed, 8 May 1991 09:28:00 PDT"
Cc: SRKB@isi.edu, INTERLINGUA@isi.edu

I'm not sure it's wise to cc this message to SRKB and INTERLINGUA,
because it exposes my ignorance and conveys more uncertainty than I want. But
perhaps I'll learn something from the responses.

re "requirements for distributed knowledge processing"

I think it is accurate to say that at the workshops I have attended (which is
not all of them) we have given lip service to notions of distributed
knowledge processing, but that we are confused about how these ideas fit in.

For example, we have been talking about two alternative models for building
large, sharable knowledge systems.
(1) The knowledge sharing model
(2) The services model
where the second one is a generic name for being able to invoke and combine
different kinds of services.

For starters, it seems to me that there are a lot of concepts in the systems
programming community about services. This goes back to the older Actor stuff
you did long ago -- but also includes a lot of work under the banners of either
"rpc" or "interoperability." In the networking community, there is
experience with different ways to make programs intercallable, through network
protocols at different levels. At your own institution, I think of Project
Mercury as an example. But there are lots of projects. The ideas here involve
models of data sharing, external data representations, secure and reliable
communications ideas, program control models, ...

Beyond rpc, of course, are all the things that the transaction communities care
about for accounting, authentication, etc. Passing money in the network.
Value-added service. All the kinds of topics that appear in Huberman's book
and which to different degrees are handled in special ways by the electronic
funds transfer people. The point of mentioning these "beyond" topics is that
they are very real for people who are interested in building value-added
networks where people can build knowledge services that make use of other
people's knowledge services and all the accounting works out. Think of Bob
Kahn's scenarios in this regard.

I think that the SRKB has not given these topics central billing yet, and
furthermore, that most people in the group would be somewhat confused about
just how these topics are relevant.

So if we compare "interoperability languages" to KIF, we notice that both kinds
of language efforts are concerned with having adequate semantics; however, the
efforts have very different notions of the kinds of semantics that are needed.
KIF's semantics are about reference (how observers assign symbols meaning in
some world), truth, and proof.

RPC semantics are about what kinds of computations are performed, how
computations compose, and about invariant properties derived from internal and
external data representations. The "beyond" semantics introduce resource
issues to services.
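
A toy contrast may make the distinction vivid. The predicate, names, and interpretation below are entirely invented: on the KIF-like side, a sentence only has a truth value relative to an interpretation; on the RPC-like side, a procedure has no truth value at all, only a computation whose composition and data-representation properties are what its "semantics" describe.

```python
# Toy model-theoretic side: an interpretation maps predicate names to
# extensions (sets of argument tuples). A sentence is true iff its
# arguments lie in the predicate's extension.
def holds(sentence, interpretation):
    pred, *args = sentence
    return tuple(args) in interpretation.get(pred, set())

# Toy operational side: an RPC-style procedure just computes a result;
# questions of reference, truth, and proof never arise for it.
def parents_of(child, interpretation):
    return sorted(p for (p, c) in interpretation.get("parent", set())
                  if c == child)

# Invented interpretation for illustration.
interp = {"parent": {("alice", "bob"), ("carol", "bob")}}
```

Here `holds(("parent", "alice", "bob"), interp)` is a claim about truth in a model, while `parents_of("bob", interp)` is just an invocable computation; adequate semantics for the first is model theory, and for the second it is something much closer to what the interoperability community works on.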

One attitude about these topics is that they are interesting, but that they are
being worked on by other people out there, and that we as architects or
planners for the SRKB efforts can decouple ourselves from them because we will
just use those ideas when they are ready.

I'm not so sure about that position and would like to hear other people's
thoughts on it. In the Engineer's Apprentice proposals, I have found it
interesting how that project moved from being about a giant knowledge base to
now including major thrusts in collaboration technology and simulation.

It may be that there are very important notions -- say involving the difference
between consistency and coherence -- that make ideas from the distributed
computing community particularly relevant at this time.

Perhaps it would be useful to start up another subgroup whose concerns included
the latter topics. Maybe Gio will tell me that these concerns are included in
the things his group is worrying about, but if so, I haven't understood that.

I would say that at PARC we are starting to worry about interoperability issues
because we want to be able to build systems that make use of knowledge services
written by different groups in different programming languages. I don't want to
say a lot about this, but you can imagine that groups that work on natural
language technology, gesture recognition, audio signal processing and speech,
user interfaces, .... have a lot of different skills and knowledge
bases. We are coming to believe that by providing intercallable systems and
persistent, reliable network services we may create new and interesting things
-- and incidentally find effective ways to communicate and use standardized
representations of all kinds.

So this note is somewhat less than a trial balloon. I'm not ready to advocate
that this is exactly what we should do. But I would like to hear other
people's thoughts on it.