From: firstname.lastname@example.org
Date: Fri, 10 Jan 92 19:14:51 EST
To: INTERLINGUA@ISI.EDU, KRSS@ISI.EDU, CG@cs.umn.edu
Subject: Primitives, definitions, and metalanguage
Comments on Len Schubert's comments on my comments on his comments:
>> But when [logicians or lexicographers] state their definitions, both
>> of them use exactly the same form: Aristotle's style of definition by
>> genus and differentiae.
> I don't see that logical definitions like
> S(x) <=>def member(x,x)
> P<=>Q <=>def P=>Q & Q=>P
> are definitions by genus and differentiae.
I would say that those are abbreviated ways of saying that S is
a function (its genus) and <=> is a Boolean operator (its genus).
In standard form, one could say:
S is defined as /* Metalanguage */
a function with one argument x /* Genus */
where S(x) = member(x,x). /* Differentiae */
<=> is defined as /* Metalanguage */
a Boolean operator with two arguments P and Q /* Genus */
where P=>Q & Q=>P. /* Differentiae */
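The standard form above maps directly onto a declaration in a modern
proof assistant. As a sketch in Lean (the name iff' is mine, chosen to
avoid clashing with the built-in Iff), the genus appears as the declared
type and the differentiae as the defining body:

```lean
-- Genus: iff' is a Boolean (propositional) operator with two arguments.
-- Differentiae: the defining equation P=>Q & Q=>P.
def iff' (P Q : Prop) : Prop := (P → Q) ∧ (Q → P)
```

The type annotation `(P Q : Prop) : Prop` is the machine-checked analogue
of "a Boolean operator with two arguments P and Q".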
> In any case, the point I
> was getting at was that I think the definitional syntax should clearly
> distinguish between complete and (potentially) partial definitions.
I agree. A partial definition is one where you state the genus and
syntactic properties, and leave the semantic properties implicit in
a set of axioms (which may simultaneously determine the semantic
properties of a lot of other elements of the theory). This is like
Quine's web of belief, where you can't really single out which ideas
are defined in terms of which others.
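A partial definition in this sense can also be sketched in Lean (a
hypothetical illustration of my own, with Event and before as invented
names): the declaration fixes the genus and syntactic type, while
separate axioms pin down only some of the semantic properties, leaving
the rest open.

```lean
-- Genus and syntax: `before` is a binary relation on events.
axiom Event : Type
axiom before : Event → Event → Prop

-- Semantics left partial: these axioms constrain `before` (and anything
-- defined from it) without fully defining it.
axiom before_trans : ∀ a b c : Event, before a b → before b c → before a c
axiom before_irrefl : ∀ a : Event, ¬ before a a
```

Nothing here says which pairs of events stand in the relation; that is
exactly the sense in which the definition is partial.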
> As just one example of a logic with quotation (at the object level),
> let me point to des Rivières & Levesque, "The consistency of syntactical
> treatments of knowledge", Comp. Int. 4, 1988. It is true that quotation
> has sometimes been taken to automatically boost a sentence to the
> metalevel, e.g., in the Genesereth and Nilsson book. But that's only
> reasonable as long as we can avoid mixing sentences containing quotes
> with others. Let me quote Stuart Russell on this: "[Genesereth &
> Nilsson's] analysis of metalevel systems does not, however, allow for
> mixed-level sentences, which refer to objects from more than one level.
> Such sentences often arise in descriptions of sensing: 'If I open the
> window I will know if the birds are singing' refers to an external
> action (opening the window) with an internal effect (knowledge of a
> proposition)." (Do the Right Thing, p. 32)
Yes, you do have to mix levels. In one note, I gave the example
"At 2 pm in New York, John defined A = B." We have been considering
such statements in the IRDS, since we have to worry about version
control and keeping track of who entered and updated each definition.
You might even have a database that is keeping track of the same
employees who are defining the next version of the database that
is keeping track of them. I believe that the best way to handle
such knowledge structures is to make contexts "first-class objects"
(in McCarthy's sense).
In conceptual graphs, I represent contexts as concept boxes that
contain some collection of propositions. But since each context is
itself a concept, you can attach arbitrary relations to it, such as
"John knows C", "Bill defined C", "Tom thinks C, but Mary doesn't."
The outside of each context box is a metalevel with respect to the
propositions inside. The nature of the relations attached to a
context box determines whether you can move propositions in and out
of it (opaque or transparent). In the outer context, you can state
axioms that determine how the propositions in the inner context
are to be interpreted.
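This treatment of contexts can be caricatured in a short program. The
following is a minimal sketch of my own (the class, its attribute names,
and the transparent/opaque flag are illustrative inventions, not any
established conceptual-graph notation): a context holds propositions,
is itself an object that arbitrary relations can point at, and only a
transparent context licenses moving propositions out to the enclosing
metalevel.

```python
# Sketch of contexts as first-class objects. A Context holds a set of
# propositions; the outside of the box is a metalevel with respect to them.

class Context:
    def __init__(self, name, transparent=False):
        self.name = name
        self.transparent = transparent   # may propositions move in and out?
        self.propositions = set()
        self.relations = []              # metalevel relations about this context

    def assert_prop(self, p):
        self.propositions.add(p)

    def export_to(self, outer):
        """Move propositions out to the enclosing (metalevel) context;
        only licensed when this context is transparent."""
        if not self.transparent:
            raise ValueError(f"context {self.name!r} is opaque")
        outer.propositions |= self.propositions

# Since a context is itself an object, arbitrary relations can attach to it:
c = Context("C", transparent=True)
c.assert_prop("birds are singing")
c.relations.append(("knows", "John"))      # John knows C
c.relations.append(("defined", "Bill"))    # Bill defined C

outer = Context("outer-metalevel")
c.export_to(outer)
assert "birds are singing" in outer.propositions
```

An opaque context (say, one attached by a belief relation) would simply
be constructed with transparent=False, and the export would be refused.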
> As for Quine's view on modality, profound and provocative though
> it may be, its current status seems to be something like Einstein's
> view on quantum uncertainty.
True, Quine's views as stated in some of his writings tend to be rather
negative, and like Einstein's negative attitude towards quantum mechanics,
they tend to destroy rather than support rich theoretical structures.
That makes them unpopular among those who want to build such structures.
When I first read Carnap's "Meaning and Necessity", Quine's "Word and
Object", and the debates between them, I sympathized with Carnap over
Quine. I didn't like Quine's approach because it didn't provide a
convenient way of handling modality in natural language, which was one
of my primary interests.
Several things led me to believe that Quine's position was more
tenable: One was the article by Michael Dunn, where he showed that
you could develop a modal semantics that was just as rich as Kripke's
(or perhaps richer) solely in terms of laws and facts, without making
any assumptions about possible worlds. Another was my interpretation
of Wittgenstein's language games, which I believe can be reconciled
more easily with a Dunn-Quine version of modality than a Carnapian one.
And a third was my work on representing situation-semantic-like things
in conceptual graphs, which led me to constructions that are similar
to McCarthy & Guha's work on contexts.
I believe that this approach gives you the rich semantic structures
you need for natural language while avoiding the things that Quine
was criticizing. I plan to talk a bit about this approach at the
workshop on propositional knowledge at the AAAI Spring Symposium.
> That a definition is more than its defining axiom is a point
> I have been pushing hard. John's remaining remarks about object-level
> and meta-level truth also made perfect sense to me. Where things get
> tricky is when one "pulls down" the metalanguage into the object
> language using quotation ...
I agree that the mixed object-language/metalanguage statements get
very tricky, and I can't claim that I have sorted out all the tricks.
But I think that there are certain key notions that can help sort
them out: contexts as first-class objects, language/metalanguage
distinctions, some of the situation semantic ideas, and modalities
as metalanguage about the distinction between laws and facts.
In an earlier note, Len brought up the question of various kinds
of modality -- alethic, deontic, epistemic, etc. Those can all be
handled in Dunn's style of modality: the difference is whether you
consider the laws to be laws of logic, laws of physics, laws of
morality, laws of belief, etc. I believe that it gives you a more
unified way of viewing all those kinds of modality, and it doesn't
require you to postulate infinite families of fictional worlds.
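The laws-and-facts treatment can be caricatured computationally. The
toy below is my own simplification over finite propositional valuations
(the atoms and the helper names are invented for illustration): take
"necessary" to mean derivable from the laws and "possible" to mean
consistent with the laws. The valuations are mere bookkeeping, not
postulated worlds, and the different modalities fall out of swapping in
different sets of laws.

```python
from itertools import product

# Toy Dunn-style modality: a proposition is a function from truth
# assignments to bool; necessity and possibility are relative to a
# chosen set of laws (logical, physical, moral, doxastic, ...).

ATOMS = ["raining", "wet"]

def valuations():
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def necessary(formula, laws):
    """Necessary = true under every valuation satisfying the laws."""
    return all(formula(v) for v in valuations()
               if all(law(v) for law in laws))

def possible(formula, laws):
    """Possible = true under some valuation satisfying the laws."""
    return any(formula(v) for v in valuations()
               if all(law(v) for law in laws))

# A physical law: rain makes things wet.
laws = [lambda v: (not v["raining"]) or v["wet"]]

assert necessary(lambda v: (not v["raining"]) or v["wet"], laws)  # the law itself
assert possible(lambda v: v["raining"], laws)     # rain is consistent with the laws
assert not necessary(lambda v: v["wet"], laws)    # wetness is a fact, not a law
```

Replacing the physical law with a moral or doxastic one turns the same
two operators into deontic or epistemic modalities.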
I also think that the metalanguage approach gives you a way of
handling nonmonotonic reasoning without changing your fundamental
"logic". In a sense, defaults are like very weak laws: whereas
laws have a stronger "epistemic entrenchment" than facts, defaults
are weaker than the facts. That leads me to belief revision a la
Gärdenfors as an interpretation of nonmonotonic reasoning (although
one can still preserve the computational approaches as bookkeeping
shortcuts for dynamic belief revision).
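The "defaults as very weak laws" picture suggests a crude revision
procedure, sketched below in my own terms (this is an illustration of
the entrenchment ordering, not Gärdenfors's actual postulates): rank
beliefs laws > facts > defaults, and on contradiction retract only
beliefs of strictly lower rank than the incoming one.

```python
# Crude entrenchment-based revision: beliefs are signed atoms ranked
# laws (2) > facts (1) > defaults (0). A new belief displaces clashing
# beliefs of strictly lower rank; it is refused if anything of equal or
# higher rank clashes with it.

def conflicts(a, b):
    return a[0] == b[0] and a[1] != b[1]   # same atom, opposite sign

class BeliefBase:
    RANK = {"default": 0, "fact": 1, "law": 2}

    def __init__(self):
        self.beliefs = []                  # (atom, truth_value, kind)

    def revise(self, atom, value, kind):
        new = (atom, value, kind)
        clashing = [b for b in self.beliefs if conflicts(b, new)]
        if any(self.RANK[b[2]] >= self.RANK[kind] for b in clashing):
            return False                   # not entrenched enough to win
        for b in clashing:                 # retract the weaker beliefs
            self.beliefs.remove(b)
        self.beliefs.append(new)
        return True

kb = BeliefBase()
kb.revise("bird_flies", True, "default")   # default: birds fly
kb.revise("bird_flies", False, "fact")     # observed: this one doesn't
assert ("bird_flies", False, "fact") in kb.beliefs
assert ("bird_flies", True, "default") not in kb.beliefs
```

The observed fact displaces the default, but a fact could never displace
a law -- which is the nonmonotonic behavior recast as belief revision.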