Date: Tue, 8 Aug 1995 14:13:42 +0500
From: email@example.com (John F. Sowa)
To: firstname.lastname@example.org, email@example.com
Subject: Re: clarifying clarifying ontologies
Cc: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org,
I agree with Ed Hovy's point that it's not easy to create useful taxonomies,
"especially when you have to worry about 50,000+ symbols."
But I strongly disagree with the claim that axioms are not necessary, or at
least useful, for NLP. I was just reading the recent book by Alice ter Meulen,
_Representing Time in Natural Language_, MIT Press, 1995. In it, she gives
a nice classification based on distinctions, combinations, and constraints
(see my last note endorsing Ken Forbus' point). She makes three basic
distinctions:
1. A _hole_ is a description of an event that allows information to flow.
2. A _filter_ restricts the flow.
3. A _plug_ blocks the flow.
Based on these three distinctions (with all + and - combinations of them)
she derives her _aspectual cube_ with eight kinds of verbs or aspects of
verbs at the corners. (A lattice would be an equivalent way of displaying
the combinations.) Then she also gives axioms for the ways in which sequences
of such verbs interact in discourse.
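The combinatorics behind the cube can be sketched in a few lines of code. This
is a minimal illustration under my own assumptions, not ter Meulen's actual
formalism: I simply treat each of the three distinctions (hole, filter, plug)
as a binary +/- feature, so the 2^3 = 8 combinations label the corners of the
cube.

```python
from itertools import product

# The three distinctions, each taken as a binary +/- feature.
# (The encoding is illustrative, not ter Meulen's own notation.)
FEATURES = ("hole", "filter", "plug")

# All +/- combinations of the three features: the corners of the cube.
corners = [dict(zip(FEATURES, signs)) for signs in product("+-", repeat=3)]

print(len(corners))  # 8 kinds of verbs or aspects of verbs
for corner in corners:
    print(" ".join(sign + feature for feature, sign in corner.items()))
```

Displaying the same eight combinations as a lattice ordered by the number of
+ features would be equivalent, as the text notes.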
The book is only 144 pages long, so one can't expect a definitive classification
of Hovy's 50,000+ symbols. But it does demonstrate the power of theoretical
analysis of the distinctions and their possible combinations. The result may not
be complete or comprehensive, but it is an interesting contribution that shows
how conceptual analysis can derive axioms that are useful in NLP.
In my paper, I say that we need both empirical studies and theoretical analysis.
Some people prefer to do one or the other, but the two should work together
rather than dismissing each other's approach as irrelevant or inapplicable.