From: firstname.lastname@example.org
Date: Tue, 17 Mar 92 07:16:39 EST
Cc: INTERLINGUA@ISI.EDU, SRKB@ISI.EDU, CG@CS.UMN.EDU
Subject: Re: Knowledge languages vs. programming languages
I just returned from the ISO Special Group Meeting on Conceptual Schema
and Data Modeling Facilities (Renesse, the Netherlands, March 9 to 13).
As a result of the discussions on Monday (March 9), it became clear that
people had diverse opinions about the distinction (if any) between a
conceptual schema language and a data modeling language. We therefore
allocated one hour on Tuesday morning (which eventually extended to the
entire morning) to a discussion of that issue. The question is similar
to the issue about knowledge languages vs. programming languages. In
the Renesse discussion, two distinctions seemed to be the most popular:
1. 100% principle: A conceptual schema language must be capable of
representing 100% of the semantics in a domain of discourse, whereas
a data modeling language might represent some, but not all of the
semantics -- e.g. it might model structure, but not behavior.
2. LOTs vs. NOLOTs: The NIAM system distinguishes Lexical Object Types
(LOTs) from Nonlexical Object Types (NOLOTs). A LOT is anything that
can be completely represented in symbols on a sheet of paper or a
computer storage device; e.g. numbers, character strings, and lists,
arrays, records, or other structures made up of them. A NOLOT is
anything that cannot be so represented, such as physical objects,
events, situations, and abstractions like justice or happiness.
Simple test: If you can store it on a disk and recover the same
thing you stored, it's a LOT. If you can't, it's a NOLOT.
Language distinction: A data modeling language is limited to
talking about LOTs, but a conceptual schema language can talk about
LOTs, NOLOTs, and the relationships between them (e.g. the person
John is a NOLOT, and the string 'John' is a LOT that represents the
name used to refer to John).
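The "store it and recover the same thing" test can be sketched in a few lines of Python. This is my own illustration, not part of the NIAM system; equality after a pickle round-trip stands in for "recovering the same thing you stored".

```python
import pickle

def is_lot(obj) -> bool:
    """Apply the LOT round-trip test: an object is a Lexical Object
    Type if storing it and recovering it yields the same thing.
    (A sketch: pickle round-trip equality stands in for storage.)"""
    try:
        return pickle.loads(pickle.dumps(obj)) == obj
    except Exception:
        return False

# Numbers, strings, and structures built from them pass the test.
print(is_lot(42))                 # True
print(is_lot(["John", (1, 2)]))   # True

# A NOLOT such as the person John cannot be stored at all; only a
# LOT like the string 'John', which names him, can be.
```

Anything that cannot be serialized and compared this way (a person, an event, a running process) fails the test and counts as a NOLOT.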
Both of these distinctions raise other questions. For the 100%
principle, the question is 100% of what? Any domain of discourse could
be described at any level of detail. How can you tell when you have
reached 100%? The consensus seemed to hinge on the word "capable".
Although no finite description might describe a domain completely, a
language could satisfy the 100% principle if it is capable of stating
any distinction that anyone might care to make -- i.e. as long as you
want to keep talking, the language has sufficient expressive power to
keep up with you.
The LOT vs. NOLOT distinction comes closest to the point that I was
trying to make. If the variables of a language can refer only to LOTs,
it is a programming (or data modeling) language. But if they can refer
to NOLOTs as well as LOTs, it is a knowledge (or conceptual schema)
language.
Some comments on your comments:
> I liked your message, but have a few comments. This distinction is intuitively
> compelling (although rejected by some) but notoriously hard to make exact, and
> unfortunately the basic idea which you nicely expound in this message has
> problems under strong examination.
> For a start, Herbrand showed us that FOPC, and probably pretty much any
> knowledge language with a model theory, CAN be interpreted as talking about
> its own symbols: ie, if it has a model at all, it has one made of symbols
> themselves. 'Grounding' is motivated in part by the felt need to somehow
> guarantee that such symbolic interpretations are ruled out. Your switch from
> the variables in L being "intended to refer to" somethings to the variables
> "refer(ring) to things in T" in the next sentence illustrates the problem
> nicely: we might intend with all our might, but that doesn't guarantee actual
> reference.
Yes, that's true. For any set of NOLOTs and relations in the real world,
you can always construct an isomorphic set of LOTs and relations. That
seems to be equivalent to Quine's thesis of the indeterminacy of
translation. I think that the only sure way out of that indeterminacy
must be related to the ways of handling the symbol grounding problem:
i.e. the symbols must have a connection to sensors and actuators that
would relate them to their referents in the actual domain of discourse.
> Second, why should a knowledge language not have some reflexive abilities to
> refer to its own expressions? You say that English is a knowledge language,
> but it can certainly refer to English expressions: there is a word for "word",
> for example. More technically, the CYCL system, which I think would safely be
> put on the Knowledge side of the fence, has categories for all the
> datastructures which are used to implement it. Richard Weyhrauch and Frank
> Brown have both developed systems which are clearly assertional but can
> self-refer to their own structures. So the distinction in terms of
> subject-matter doesn't really hold together.
A general purpose knowledge language or conceptual schema language must
be able to talk about LOTs as well as NOLOTs. You should be able to say
"The character string 'John' represents the name of the person John."
But a data modeling language can only talk about LOTs.
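The sentence about the string 'John' can be made concrete with a small sketch (my own illustration; the type names and the `represents-name-of` relation are assumptions, not from NIAM or any standard). A conceptual schema can assert a relationship between a LOT and a NOLOT, whereas a data model would contain only the string:

```python
from dataclasses import dataclass

# A NOLOT type: instances stand for real people, known only by
# reference. The surrogate key is not the person himself.
@dataclass(frozen=True)
class Person:
    ident: int

john = Person(ident=1)   # the person John (a NOLOT)
name = "John"            # a character string (a LOT)

# A conceptual schema can state the relationship between them;
# a data model could hold only the string.
facts = {("represents-name-of", name, john)}
```

The point is that `john` and `"John"` are objects of entirely different kinds, and only a conceptual schema language has vocabulary for both sides of the relation.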
> Third, you refer to strong typing as characteristic of knowledge languages.
> But surely this is almost completely orthogonal to the distinction being
> discussed. Many programming languages are highly typed with strong runtime
> type-checking and no freedom to violate the boundaries. Prolog is in fact a
> rather unusual programming language in this respect. And there are plenty of
> knowledge-representation languages which have no especial type structure,
> although they often give the user the ability to create a sort structure,
> because this is often useful. And I think it is arguable that most natural
> languages are not strongly typed in this way. I can mix categories in English
> with results which might be unusual but are not ill-formed, and often in fact
> used, eg as in such phrases as "a hairy problem" or "fat cash". You say that
> "This is why" you believe that strong typing is an essential part of a
> knowledge language, but you don't say why, you just tell us that it's part of
> your definition.
I agree that strong typing should not be part of the definition, but I
think that it follows from the symbol-grounding condition. Let's assume
that we answer the referent question by connecting the symbols to sensors
and actuators so that we can see whether a robot that uses our language
makes the right moves.
But no sensor ever devised actually identifies the referents directly
-- it only recognizes types, and the identity of the referent must be
inferred. When I see you, I identify you as Pat Hayes because you fit
a type I remember. But if you had a twin brother, I could be mistaken.
Even if you marked the referents with unique bar codes, that would not
violate the principle: the bar code reader would still be recognizing
types, and it couldn't distinguish a counterfeit from the real thing.
Re metaphor: It's true that metaphor violates type constraints,
but Eileen Way had an interesting interpretation of metaphor in her
dissertation on dynamic type hierarchies. She claimed that a metaphor
is only a temporary violation of type constraints, and that its purpose
is to create a new type that resolves the violation.
For example, the metaphor "hairy problem" would be resolved by comparing
it to a normal use such as "hairy beast". The next step would be to
find the minimal common supertype of PROBLEM and BEAST -- in this case,
ENTITY. You would then look at the way that "hairy" modifies "beast" and
generalize those modifications to conditions that might apply to a wider
class of entities, especially problems: in this case, you might say
that a hairy entity is one that has complex ramifications extending
out in a disorderly way.
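The key step, finding the minimal common supertype, is easy to sketch. This is my own toy illustration of Way's idea, with a made-up five-type hierarchy; a real dynamic type hierarchy would of course be much larger.

```python
# A hypothetical type hierarchy (the names are assumptions for
# illustration, chosen to match the "hairy problem" example).
SUPERTYPE = {
    "BEAST": "ANIMATE",
    "ANIMATE": "ENTITY",
    "PROBLEM": "ABSTRACTION",
    "ABSTRACTION": "ENTITY",
    "ENTITY": None,          # top of the hierarchy
}

def ancestors(t):
    """The chain from a type up to the top of the hierarchy."""
    chain = []
    while t is not None:
        chain.append(t)
        t = SUPERTYPE[t]
    return chain

def minimal_common_supertype(t1, t2):
    """The first type above t1 that is also above t2."""
    above_t2 = set(ancestors(t2))
    for t in ancestors(t1):
        if t in above_t2:
            return t
    return None

print(minimal_common_supertype("PROBLEM", "BEAST"))   # ENTITY
```

Resolving "hairy problem" would then attach the generalized sense of "hairy" to ENTITY, making it applicable to problems, traffic, music, and anything else in that class.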
This new type might be created for a single use, as in a poem that is
intended to achieve a striking effect. But if the new type is more
generally useful, it could be applied to many more things, such as
hairy traffic or hairy music. If it is used frequently enough, it could
become a frozen metaphor and become a permanent part of the language.
If you have $98 to spend, you can read more about this point in
_Knowledge Representation and Metaphor_ by Eileen C. Way,
Kluwer Academic Publishers, 1991.
> Let me suggest that what makes Prolog a programming language is chiefly
> that it comes with a fixed interpreter, and this interpreter's behavior is
> an essential part of the meaning of the language. That is, it is a language
> part of whose meaning has to do with the way a machine manipulates its
> expressions. Whereas there isn't any machine in a KRep language's semantic
> story, unless of course it happens to be in part about a machine: but that's
> a different kind of relationship.
I think that the question about the kind of interpreter goes off in
another orthogonal direction. I would prefer to base the definition
on the LOT vs. NOLOT distinction. Then the questions about whether
the language should be strongly typed or have some particular kind of
interpreter would be answered by a further analysis of the implications
of that distinction.