From: firstname.lastname@example.org
Date: Wed, 8 Jan 92 21:44:22 EST
Subject: Primitives, definitions, and metalanguage
Some comments on some of Len Schubert's comments:
> JS's proposal to use the Hilbert epsilon
> operator would be one rather nice way around that. However, if we want
> to stick to more familiar constructs....
The question of familiarity is irrelevant for the choice of primitives.
You can introduce epsilon, lambda, and such as primitives that would
primarily be used to define higher-level constructs like defobject
or defun. System developers might find new uses for them in novel
applications, but the average user would never need to worry about them.
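For anyone who has not met it, the epsilon operator itself is easy to state. The standard formulation (not anything specific to KIF) is a single axiom schema:

```latex
% \varepsilon x.\,\varphi(x) denotes some individual satisfying \varphi,
% provided such an individual exists:
\exists x\, \varphi(x) \;\rightarrow\; \varphi\bigl(\varepsilon x.\, \varphi(x)\bigr)
```

With epsilon as a primitive, the existential quantifier itself becomes definable: "there exists an x such that phi(x)" abbreviates "phi(epsilon x. phi(x))" -- which is exactly the sense in which a small stock of primitives can support the higher-level constructs.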
Aside: I do recognize the need to make the manual simple and easy to
read. When I was writing the Conceptual Structures book, I originally
referred to Hilbert's epsilon operator and cited Leisenring's book.
But when I was doing the final editing, I deleted the epsilon operator
because it required me to discuss new material in terms of something
that was even less familiar. I later mentioned the epsilon operator
in another article. But I think the fact that I had it in the back
of my mind made my definitions cleaner: it was a guiding principle
that helped me to keep my own thinking straight.
> My impression is that in much of
> the KR community, "terminological knowledge" is taken to cover roughly
> the sorts of information one finds in dictionary definitions, including
> type subsumption. As long as we make a clear distinction between which
> definitional constructs conform with the mathematical logician's
> notion, and which ones are more like the lexicographer's, this should
> not cause problems.
The primary difference between the mathematician's job and the
lexicographer's job is that the mathematician is trying to legislate
what a term should mean, while the lexicographer is trying to discover
what other people mean.
But when they state their definitions, both of them use exactly the
same form: Aristotle's style of definition by genus and differentiae.
The primary reason why mathematical logicians seldom worry about the
type hierarchy is that they rarely have more than one basic type of
individual. But whenever you deal with the real world, you have an
enormous multiplicity of types of things. So the major differences
in the definitions are not logical distinctions, but differences in
the application domain.
> Finally, the question of whether definitions should count as sentences
> and have truth values seems to hinge on the status of sentences
> containing quotation. RMc suggests that the content of a definition
> consists of the domain assertion along with an assertion like
> (define '<name> '<defining-expressions>). Yet he opposes sentential
> status for definitions. But isn't the latter a sentence? If not,
> what is it? We might say, it's a sentence of the metalanguage, but not
> of the object language. However, though the KIF document does talk
> this way (in some places), there's every indication (including in the
> formulation of a revision-semantics for truth, and in the suggested
> approach to modals) that quotation is earmarked for use in the _object_
> language -- i.e., a knowledge base may properly contain assertions
> with quoted expressions, and the theory of truth and entailment will
> cover these.
I would say that any sentence that uses quoted expressions is a
sentence in the metalanguage. You can only consider modal logic
as a kind of first-order logic if you treat the modal operators as
special kinds of quantifiers over possible worlds. But as soon as
you treat them as operators on quoted expressions, you are treating
them as metalanguage operators. (And by the way, Quine considered
metalanguage to be the most coherent way of thinking about modality.
I tend to agree with him.)
I would also say that definitions are metalanguage as well. I can
write an equation of the following form in first-order logic:
A = (lambda x)B(x).
This sentence is a normal equation whose truth value is as easily
testable as that of '2+2=5'. But just because it uses a lambda expression,
that does not make it a definition. It only becomes a definition
when you add that magic metalinguistic term "definition":
Definition: A = (lambda x)B(x).
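The same contrast shows up in any programming language. Here is a small Python sketch (the functions A and B are invented purely for illustration):

```python
# An ordinary function, standing in for B(x).
def B(x):
    return x * x

# A *definition*: a metalinguistic act that binds the name "A".
# The statement itself has no truth value; it changes what "A" means.
A = lambda x: B(x)

# An *equation*: an object-level, truth-valued claim about A and B,
# which we can test (here, over a finite sample of arguments).
equation_holds = all(A(x) == B(x) for x in range(100))
print(equation_holds)
```

The assignment and the equality test look alike on the page, but only the second is a sentence with a truth value; the first is an instruction that extends the vocabulary, which is metalanguage in everything but name.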
When you speak a sentence, you don't utter the word "sentence".
When you use quantifiers, you don't use the word "quantifier".
When you prove a theorem, you don't use the word "proof".
But whenever you utter words like "sentence", "quantifier", or
"proof", you are talking metalanguage.
A similar principle holds for other tell-tale traces of metalanguage:
quotation marks, the word "that" as a subordinating conjunction, and
the presence of words like "definition", "sentence", "word", "grammar",
"synonym", "type", "subtype", "relation", "predicate", "quantifier", etc.