Re: Standardizing FOL
Date: Wed, 22 Sep 1993 14:28:47 +0000

As usual, we are on the wrong wavelength. Of course I approve of 
not reinventing the FOL wheel. I've been urging this vehemently in 
speech and writing for twenty years, and defending the idea 
against all sorts of attacks. And I approve of using a many-sorted 
FOL: I've been urging and doing this for a while now also (see 'A 
logic of actions', Machine Intelligence 6, 1971). Tony Cohn did 
some of the basic work on many-sorted unification under my 
supervision. I'm glad your summer reading has brought you up to 
speed on this subject ;-) 

(Incidentally, there is a whole variety of many-sorted logics 
of varying strengths. I myself enjoy the pragmatic convenience 
of overloaded hierarchical sorts, so that Edge(x,y) can be 
well-formed if x is a line and y a surface AND if x is a surface 
and y a solid, but not if x is a line and y a solid, or if both 
are surfaces. This is impossible in simple sort structures. (See 
'A logic of actions', Machine Intelligence 6, 1971.) However, 
unification with overloaded sorts is very complicated, as Cohn 
discovered.)
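
To make the overloading concrete: a well-sortedness check against 
overloaded signatures can be sketched in a few lines (purely 
illustrative code; the sort names and the signature table are made 
up here for the Edge example, and are nothing from any standard).

```python
# Illustrative sort hierarchy: Line, Surface, Solid under a top sort Geom.
SUBSORT = {
    ("Line", "Geom"), ("Surface", "Geom"), ("Solid", "Geom"),
    ("Line", "Line"), ("Surface", "Surface"),
    ("Solid", "Solid"), ("Geom", "Geom"),
}

def is_subsort(s, t):
    return (s, t) in SUBSORT

# Overloaded signature: Edge is declared on (Line, Surface) and on
# (Surface, Solid), and on nothing else.
SIGNATURES = {
    "Edge": [("Line", "Surface"), ("Surface", "Solid")],
}

def well_sorted(pred, arg_sorts):
    """An atom is well-formed iff its argument sorts fit some declared
    signature of the (possibly overloaded) predicate."""
    return any(
        len(sig) == len(arg_sorts) and
        all(is_subsort(a, s) for a, s in zip(arg_sorts, sig))
        for sig in SIGNATURES.get(pred, [])
    )

print(well_sorted("Edge", ["Line", "Surface"]))     # True
print(well_sorted("Edge", ["Surface", "Solid"]))    # True
print(well_sorted("Edge", ["Line", "Solid"]))       # False
print(well_sorted("Edge", ["Surface", "Surface"]))  # False
```

A simple (non-overloaded) sort structure would have to give Edge a 
single signature, which is exactly what rules out this pattern.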

But be careful: a sort hierarchy is not a type hierarchy, i.e., 
a higher-order hierarchy. 

>......  Then in Part IV, they develop time
>logic -- FOL with time as an explicit sort -- and they embed both
>temporal logic and dynamic logic in it.

I haven't seen this book, but this is not surprising. It's often 
the case that a logic, even, say, a modal logic, can be embedded 
in sorted FOL by using a theory of its semantic structures. 
E.g. see the well-known 1969 McCarthy/Hayes paper, part 4. My 
'worries' wouldn't apply here, of course, since the semantic base 
of the LP language is first-order anyway, so it's not surprising 
that the first-order models of a suitable lp-theory are suitably 
isomorphic. (However, I was surprised to learn recently that some 
apparently harmless modal logics in fact can't be so reduced: 
their axioms are irreducibly second-order when translated in the 
usual way, because their expressive power depends on the modal 
rules being stated as operations on schemas.)
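
This kind of embedding can be made quite mechanical. The sketch 
below (my illustration, not the book's construction) is the usual 
standard translation of propositional modal logic into FOL with an 
explicit world variable: Box becomes quantification over 
R-accessible worlds. The nested-tuple formula syntax is an 
assumption for the example.

```python
import itertools

fresh = itertools.count()  # supply of fresh world variables

def st(formula, w):
    """Standard translation of a modal formula (nested tuples) into a
    FOL string, relative to the world variable w."""
    op = formula[0]
    if op == "atom":   # p      ->  P(w)
        return f"{formula[1]}({w})"
    if op == "not":    # ~A     ->  ~ST(A)
        return f"~{st(formula[1], w)}"
    if op == "and":    # A & B  ->  (ST(A) & ST(B))
        return f"({st(formula[1], w)} & {st(formula[2], w)})"
    if op == "box":    # []A    ->  forall v (R(w,v) -> ST(A) at v)
        v = f"v{next(fresh)}"
        return f"forall {v} (R({w},{v}) -> {st(formula[1], v)})"
    raise ValueError(f"unknown operator: {op}")

# []P & ~[]Q, evaluated at world w0:
phi = ("and", ("box", ("atom", "P")), ("not", ("box", ("atom", "Q"))))
result = st(phi, "w0")
print(result)
# (forall v0 (R(w0,v0) -> P(v0)) & ~forall v1 (R(w0,v1) -> Q(v1)))
```

The irreducibly second-order cases are exactly those where no such 
first-order clause for the modal operator can be written.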

>The usual SOL is extremely powerful:  you can, for example, define
>finiteness, .....
>The many-sorted mapping lets you say the same things, but the
>theory has all the nice properties:  compactness, countable models,
>etc.  You can give the definition of finiteness, if you like, but
>it doesn't work:  there may exist infinite models for it.

It lets you make assertions of the same syntactic form, but they 
*aren't* saying the same things, as you correctly observe! In 
contrast, there are some second-order theories whose 
intended models are genuinely second-order, but which are in 
fact decidable. So this is a delicate area.
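
For the record, the reason the first-order transcription of 
finiteness "doesn't work" is the standard compactness argument, 
which runs as follows (a textbook sketch, nothing specific to KIF):

```latex
Let $\lambda_n = \exists x_1 \cdots \exists x_n \,
\bigwedge_{i<j} x_i \neq x_j$ (``there are at least $n$ elements'').
Suppose some first-order sentence $\mathrm{Fin}$ were true in exactly
the finite structures. Every finite subset of
$\{\mathrm{Fin}\} \cup \{\lambda_n : n \geq 1\}$ has a model (a finite
one with enough elements), so by compactness the whole set has a model
$M$. Since $M \models \lambda_n$ for every $n$, $M$ is infinite; but
$M \models \mathrm{Fin}$, so $M$ should be finite: contradiction.
Hence any first-order ``definition'' of finiteness admits infinite
models.
```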

>   Kurt Jensen, _Coloured Petri Nets_, Springer-Verlag, 1992.
COLORED Petri nets? Well, every man to his taste, as I am 
sure you agree.

Some comments on your comments on... (I promise this will not get 
too long.)

> The context mechanism ...... what it does do is let you
>partition your knowledge base into modules.  Outside the module, you
>can use metalanguage to talk about what's inside, axiomatize the
>insides, reason about them, etc.  Furthermore, you can have families
>of contexts that are inconsistent with one another, and outside the
>modules you can talk about how they are related to one another.

This is the kind of thing that McCarthy and Guha and others are 
developing at Stanford and MCC. But you can't do what they are doing 
just with quotation. If I simply quote (the text of) a theory, all I 
have is a string of symbols. To do the reasoning necessary, one needs 
to be able to say things like true-in(p,c) where both this assertion 
and p are wffs of the logic. This is a genuine extension to 
the logic, and it requires a delicate hand to get its semantics right. 
It's not just a relation between a string and something.
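
For contrast, here is roughly what quotation alone buys you (an 
illustrative toy, with made-up contexts and a propositional 
forward-chainer): true-in(p,c) computed as derivability of the 
quoted string p inside the box c. Nothing in this says what the 
outer true-in predicate means model-theoretically, which is exactly 
the extension at issue.

```python
# Toy contexts: named sets of quoted facts and one-step rules.
contexts = {
    "c1": {("fact", "bird(tweety)"),
           ("rule", "bird(tweety)", "flies(tweety)")},
    # c2 is inconsistent with c1 if the boxes are merged:
    "c2": {("fact", "penguin(tweety)"),
           ("rule", "penguin(tweety)", "~flies(tweety)")},
}

def true_in(p, c):
    """Derivability of the quoted wff p inside context c, by forward
    chaining over the context's quoted facts and rules."""
    known = {f[1] for f in contexts[c] if f[0] == "fact"}
    rules = [f for f in contexts[c] if f[0] == "rule"]
    changed = True
    while changed:
        changed = False
        for _, ante, concl in rules:
            if ante in known and concl not in known:
                known.add(concl)
                changed = True
    return p in known

print(true_in("flies(tweety)", "c1"))   # True
print(true_in("flies(tweety)", "c2"))   # False
print(true_in("~flies(tweety)", "c2"))  # True
```

Here p is only ever a string; making true-in a wff of the logic 
itself, with a semantics, is the genuine extension.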

>  And yes, indeed, you can define lambda conversion.

I don't believe it!! Give me a reference to this, or tell me how it's 
done. Notice I said DEFINE, not program. I dare say you can hack it 
up: after all, FOL can be regarded as a programming language.
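
To illustrate the DEFINE-versus-program distinction, here is what 
the "hack it up" route looks like: beta conversion programmed over 
quoted terms (illustrative code, with a deliberately capture-naive 
substitution). This is programming conversion, not defining it 
inside the logic.

```python
def substitute(term, var, val):
    """Substitute val for var in a quoted lambda term.  Terms are
    variables (strings), applications ('app', f, x), and abstractions
    ('lam', v, body).  Deliberately naive: no capture avoidance."""
    if isinstance(term, str):
        return val if term == var else term
    if term[0] == "app":
        return ("app", substitute(term[1], var, val),
                       substitute(term[2], var, val))
    if term[0] == "lam":
        if term[1] == var:   # var is shadowed by the binder: stop
            return term
        return ("lam", term[1], substitute(term[2], var, val))
    raise ValueError(term)

def beta(term):
    """One outermost beta step: ('app', ('lam', v, b), a) -> b[v := a]."""
    if (isinstance(term, tuple) and term[0] == "app"
            and isinstance(term[1], tuple) and term[1][0] == "lam"):
        _, (_, v, body), arg = term
        return substitute(body, v, arg)
    return term

# ((lam x. (x x)) y)  ->  (y y)
print(beta(("app", ("lam", "x", ("app", "x", "x")), "y")))
# ('app', 'y', 'y')
```

All of this is manipulation of quoted syntax; a definition would 
have to say, within the logic, what these operations mean.
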
>>...Suppose I have a language LP with a model 
>> theory defined on it, and suppose we can describe the syntax of LP and its 
>> model theory in L1: call this the lp-theory. Now, take a first-order 
>> model of the lp-theory: will this be suitably isomorphic to a model 
>> of LP? Will every model of LP be suitably isomorphic to a first-order 
>> model of lp-theory? Neither of these questions has an obvious answer.
>I would say that the answer is intuitively obvious, although verifying
>the intuition might take some work.  Let me ask you how you (or anyone
>else) defined this language LP and its model theory?  Standard practice
>in mathematical textbooks and research papers is to use some natural
>language as the metalanguage for defining the theory (or for defining
>the language that specifies the theory).  That use of NL is essentially
>always a rather simple version of a sorted FOL using NL words and syntax
>intermixed with variables and mathematical expressions.

NO! Use of natural language is almost never a use of FOL or any other 
formalism. If it were this trivial to transcribe NL into FOL, then AI 
work in NL understanding would have been finished years ago. (There's 
a deeper reason why not, which is that natural languages are different 
in kind from formal logics. I don't expect to be able to convince you 
of that here if you don't believe it, so just go with the complexity 
argument.)

>> No, you don't have them. I was imagining a 'context' syntax which would 
>> allow higher-order heirarchies to exist but with a limited range. The 
>> semantic difficulty with (fullblown) higher-order logic is ....
>Yes, but.  The quoting mechanism of KIF and CGs gives you the
>limited range you were asking for.  Outside the context box, you
>can specify exactly what kinds of expressions and rules of inference
>you will permit inside the box.  You can limit it in any way you like.

No. You miss the point. Even if you can describe the inference rules, 
etc., that you need - and it's not obvious to me that you always can, 
by the way - that doesn't specify the semantics. And it was the 
semantics I was worrying about.

>> What you can make the logic talk about is one thing, but what that talk 
>> means is another. Until you provide a model theory, its not clear what 
>> these internal translations and descriptions amount to. (Now do you see 
>> why I make such a fuss about the importance of semantics? :-)
>Yes, of course.  But that is exactly why we have adopted FOL for KIF
>and CGs.  We have a very nice, very simple, fully defined model theory.
>If you use our notations to define your logic inside one of our contexts,
>you are free to use our constructions to create your models, but then
>defining the model theory for your logic is your responsibility, not ours.
>We give you the lego blocks and let you play with them in our context 

Then this is just a programming language, not a Krep language, 
standardised or otherwise. If all the Krep aspects of the standard 
are not going to be used by me as Krep, then they just get in the 
way. Why should I not just use PROLOG or LISP? (Or C++, which is 
rapidly becoming a de facto standard.)

>.... On the other hand, we can see very large
>application areas where the current technology is not going anywhere
>near the limits.  Just good old fashioned FOL is a major conceptual step
>for most programmers.  As I have been trying to say again and again and
>again in all these notes is that we are not trying to develop standards
>for research, but standards for industrial-strength applications.
>... what we want to do is to develop standards for commercial tools so
>that you can buy off-the-shelf parsers, theorem provers, etc., in the
>same kinds of stores where you can buy the latest spreadsheets, word
>processors, and megadeath games.

Fine, I agree that this is an excellent ambition and will definitely 
make the world a better place. If I can help in any way...?

>I'm less worried about giving PhD students "the wrong impression"
>than I am about not having decent educational tools for the freshmen.

Well, I worry about the former as well. And I guess what I sometimes react 
to is your overconfidence in the current tools. You say that you aim them 
at freshmen, but you also imply confidently and authoritatively - as you 
did in this very message - that they make masses of fundamental research 
unnecessary. Contexts? We got 'em! Different semantic bases? We can hack 
up anything!  For years I've been pointing out that some new representational 
idea is just another notation for FOL. But that's not because I think FOL is
the answer to everything. On the contrary, I want us to go beyond it, and I 
shoot lame horses. Nonmonotonic reasoning genuinely goes beyond FOL, for 
example. The message that started this sequence merely expressed a concern 
that the way in which we need to go beyond it might, just might, involve 
a fundamentally different way of thinking about semantics. So we 
shouldn't stop thinking about semantics.

Over to you for the last word.


PS the next century is only seven years away.

Beckman Institute            (217) 244-1616 office
405 North Mathews Avenue     (217) 328-3947 or (415) 855-9043 home
Urbana, IL 61801             (217) 244-8371 fax  or