Logic, Ontologies and Mental States: Report on the ECAI'98 Conference

Tei Laine

Abstract

This short report on the ECAI'98 conference, held in Brighton, England, in August 1998, addresses -- from a cognitive scientist's point of view -- some conceptual issues concerning the subject matter of the presented papers, as well as some conceptual ambiguities in the overall arrangements of the conference. Because cognitive scientists work especially on human behaviour -- and are interested in both the rational and the fallible aspects of human nature -- the conference revealed some fundamental inaccuracies in the terminology frequently used in artificial intelligence research and engineering communities. This report questions these, as well as the connection between intelligent behaviour and the formal systems presented at the conference.

Mental States of Intelligent Systems

"Cognition" (or "cognitive") has mysteriously become a very fashionable term. It is used in any context concerning rational and intentional behaviour -- natural or artificial -- probably because "intelligence" turned out to be too hard to define or too rich in connotation. It is treated as a kind of wonder-working tool that can be generalized to mean anything human beings do, with which all the deficiencies of artificial intelligence can be attributed to human beings (e.g. Turing's answers to hypothetical objections [14]), or all the human deficits can be explained as natural. This magic term "cognition", and all the issues it covers, is then utilized liberally as an excuse for various practical and theoretical decisions in HCI, in the design of user interfaces and intelligent tutoring systems, in knowledge-based reasoning, decision support and risk analysis systems, as well as in systems processing visual or auditory information. The ultimate goal of using the term is probably to make the systems appear more human and intelligible compared to rough technology, as if human-like behaviour were the goal the AI community is aiming at. (The assumption made here is that there are no beliefs and intentions in the world without human beings.) Obviously, there is -- in the modern world -- no commonly used (artificial) system that makes as many serious errors and irrational decisions, and is as fallible in as many domains, as a human being.

Thus, the terminology of cognitive psychology and the philosophy of mind has gradually crept into the world of artificial intelligence, but unfortunately with implausible definitions and without any real semantic content, since no reference is made to the wider context or to the goals to be accomplished with these concepts. Actually, some of the terms were there right from the beginning, when the study of intelligence was specifically inspired by human intelligence (in the 1950s; remember Newell and Simon [8]), but later the objectives of AI diverged dramatically from what is known about human thinking and strove instead toward some kind of "super-human" intelligence. It remains to be seen whether this conceptual misunderstanding or confusion is just an attempt on the part of researchers in this field to make themselves feel more comfortable, or whether they are actually trying to redefine "mentality".
Still, why bother to speak in the ambiguous and fuzzy terms of such mental phenomena as "belief" and "intuition", when the system functions as a result of mechanical calculation and formal reasoning, and the goal is to implement truly intelligent systems without human deficiencies and fallibility? Is it not enough to differentiate between "states" (with attributes or probabilities) that determine behaviour, instead of interpreting the states as beliefs and emotions -- and, vice versa, interpreting and encoding these beliefs and emotions into appropriate attribute values and probabilities to represent states?

Logic and Cognition

At first glance, the first session of the conference (entitled "Logics for actions and mental states") sounded really promising, but its title turned out to be a contradiction in terms: mental states are defined as having semantic contents [13] -- otherwise they are not mental but neurological or physical -- whereas logical deduction systems operate by manipulating symbols according to some formally defined system, from which it is impossible to derive any semantics (the sketch at the end of this section illustrates this purely syntactic character). There remained much room for interpretation, though, for instance with regard to the papers discussing the logical semantics of actions, preference and commitment [6], or knowledge and belief revision systems (e.g. [3, 4]).

Logic as a form of knowledge representation appears to be rather dominant in artificial intelligence and learning systems, and a major part of the research concentrates solely on the definition of formal logic systems. This is somewhat strange, as numerous experimental findings have shown that human intelligence and rationality do not follow any kind of logical rules, and even the human ability to manipulate logical entities and to gain insight into logical formalisms is very limited. In other words, inferences drawn in everyday thinking do not follow logically from premises (what is known or believed). Presumably, the commonness of logical representations is due to the tradition of the early mathematicians and computer scientists who investigated the limits of computability and formal reasoning. Moreover, the reason why logic intruded into AI research, which is intrinsically concerned with complex systems, is its symbolic form and the inherent guarantee of each system's internal consistency.

Another surprising thing was the organizers' broad-minded decision to place the paper discussing the representation of the human body and ergonomic simulation as an aid for the design of industrial products [1] into a session on cognitive modeling. The problem is not the attempt to reduce human thinking to physical matter and bodily actions, but rather that the content of "cognition" appears to be very wide and its borders flexible. There were, though, some papers -- discussing, for instance, belief revision and the representation of alternative situations [5], explanations given by decision support systems [10], and especially various aspects of computational linguistics (in addition to [9]), such as ambiguous expressions [7], learning new concepts [12], and speech recognition [15] -- that would have been very appropriate in a cognitive modeling session, because these issues are centrally involved in the study of human thinking and are frequently addressed in the cognitive science literature.
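To make the contrast concrete, here is a minimal sketch -- not taken from any of the cited papers, with all rule and fact symbols invented for illustration -- of the kind of purely formal deduction referred to above. A forward-chaining rule engine derives new symbols from old ones by pattern matching alone; nothing in the machinery carries any semantics:

```python
# A minimal forward-chaining rule engine. It derives new symbols from
# old ones purely by set inclusion; the machinery attaches no meaning
# to the symbols, so any "semantics" is supplied by the reader.
# All rule and fact names below are invented for illustration.

def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    (("bird", "not_penguin"), "flies"),       # uninterpreted symbols
    (("flies", "season_autumn"), "migrates"),
]
print(forward_chain({"bird", "not_penguin", "season_autumn"}, rules))
# -> {'bird', 'not_penguin', 'season_autumn', 'flies', 'migrates'}
```

Renaming every symbol (say, to s1, s2, ...) leaves the derivation unchanged, which is precisely the sense in which no semantics can be read off the system itself.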
Ontologies

Furthermore, it is not clear whether the term "artificial intelligence" is necessary or feasible any more (except with regard to conference arrangements), when every domain of artificial intelligence defines intelligence differently and addresses a limited and non-overlapping sector of a vast area. More crudely put, the domain is called "artificial intelligence" only because its systems operate on entities that can be described as beliefs and intentions; were these replaced with more formal terminology (e.g. states and functions), the intelligent behaviour would be reduced to pure computation. This conceptual diversity is partly sustained by a kind of tug-of-war between groups working on symbolic models and on artificial neural nets, between those dedicated to Bayesian models for uncertain reasoning and those dedicated to case-based reasoning, and so on. Hence, there seem to be real possibilities for ontological engineering (OE) to bring some kind of coherence and consensus to the conceptualization of the domain -- shared understanding, as it was formulated in the Ontological Engineering Tutorial [2].

The tutorial, however, did not offer very many novel ideas, although it aroused some visions of the possibilities of OE in establishing clear-cut borders on a meta-level with neighbouring disciplines, such as cognitive science, or, equally, inside the artificial intelligence community between different fields of study. This is because the objectives of OE (reusability, stability, expressive power, consensus on the conceptualization of general and domain-specific knowledge) are applicable in every domain inside AI (or anywhere else in the scientific world). The current trend in research seems to be the development of systems that operate in very restricted task domains, such as natural language processing, and in even more specific subjects within those domains, such as sentence disambiguation. (The same problem exists in the field of cognitive modeling, as most systems are able to predict human behaviour only in very limited task domains, and some only under precisely defined experimental conditions.) If these research paradigms wish to communicate efficiently, some kind of mutual understanding and uniformity of conceptualization should be developed, in order to make it possible to design more general architectures at some point in the future.

The question asked by many tutorial attendees as to the difference between ontologies and knowledge bases was left unanswered. Unfortunately, a major part of the tutorial was formulated as questions, and definite answers were not presented; only after examining some pictures in the tutorial material could the difference between knowledge bases and ontologies be figured out. It was also mentioned that ontologies are needed when some non-domain-specific knowledge is involved, while a knowledge base is always tailored to a specific task domain and consists only of relevant information. Strictly speaking, in human decision making and problem solving there are actually no situations in which non-domain-specific knowledge is involved; at least, it is very difficult to make the distinction on-line. Furthermore, ontological engineering makes the very strong assumption that the conceptualization is kept somewhere other than the actual knowledge (the sketch below illustrates the assumed separation).
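As a toy rendering of that assumed separation -- my own reading of the tutorial, not anything presented there, with all class, instance and relation names invented -- the ontology can be pictured as a task-independent concept taxonomy, while the knowledge base commits only the facts relevant to one task domain to it:

```python
# Toy separation of conceptualization from domain knowledge.
# The ontology is a reusable, task-independent taxonomy (child -> parent);
# the knowledge base holds only assertions for one task domain.
# All names below are invented for illustration.

ontology = {
    "Sensor": "Device",
    "Device": "Artifact",
    "Artifact": "Thing",
}

knowledge_base = [
    ("thermo1", "instance_of", "Sensor"),   # domain-specific facts
    ("thermo1", "reading_celsius", 21.5),
]

def is_a(concept, ancestor, taxonomy):
    """Walk the taxonomy upward to test subsumption."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = taxonomy.get(concept)
    return False

# The same taxonomy serves any knowledge base that commits to it:
print(is_a("Sensor", "Artifact", ontology))  # -> True
```

Whether this split between the general and the domain-specific layers can actually be maintained on-line is, as questioned above, another matter.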
People do not construct new ontological taxonomies for every task domain separately; rather, they rely on the general ontological commitments made in the world they live in. Finally, the difference between ontologies and knowledge representation remained somewhat obscure: are the former only syntactic structures for representing knowledge, or do they also determine conceptual contents? At the very least, OE tries to answer the same questions that are relevant for knowledge representation formalisms: what concepts are needed, what kinds of inferences can be drawn with them, and what kinds of mechanisms are necessary for making those inferences.

Epilogue

Despite these academic problems, the conference offered a good opportunity to have interesting discussions with people whose approaches to intelligence and rationality (and all those other issues concerning the subject to which modern man is most sensitive) differ so drastically from each other. Besides being a cultivated and informative experience, the conference once again diminished any true faith in artificial intelligence. Instead, it only showed the superiority and advancement of human beings over other intelligent agents: the ability to construct complex systems, based on a variety of syntactic formalizations and carefully expressed semantics, that are so prejudiced by theoretical assumptions that it would be a miracle if the implemented system did not produce the expected behaviour. Moreover, the whole field of AI is so scattered that the big picture of intelligence vanished ages ago. The ultimate goal of the research also remains an open question: is it to understand the phenomenon under study and to explain the underlying processes, in order to build a coherent view of the subject (a unified theory of intelligence), or is the objective just to tailor ad hoc solutions to different task domains, or merely to satisfy theoretical interests?

Apparently, very little is left of the relationship between observable intelligent behaviour and the painfully unfolding formalities -- equations, definitions, rules, axioms -- frequently presented in AI conferences and journals. The significance of these formalities with respect to the advancement of AI, or to a general view of intelligence, is not evident. A theory is a theory is a theory: it might explain and specify some of the requirements needed to carry out rational behaviour, and the functional mechanisms behind it, but in its current forms it does not make very strong claims about true intelligence that could be refuted or even tested [11].

References

[1] Alauzet, A. (1998), ADELE: a blackboard-based architecture for ergonomic simulation. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[2] Gómez-Pérez, A. (1998), What is ontological engineering? Slide book of the Ontological Engineering Tutorial, Facultad de Informática, Universidad Politécnica de Madrid.
[3] Crampé, I., Euzenat, J. (1998), Object knowledge base revision. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[4] Ghose, A.K., Goebel, R. (1998), Belief states as default theories: Studies in non-prioritized belief change. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[5] Lévy, F., Quantz, J. (1998), Representing beliefs in a situated event calculus. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[6] Liau, C. (1998), A logic for reasoning about action, preference, and commitment. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[7] Monz, C. (1998), Dynamic semantics and underspecification. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[8] Newell, A., Simon, H.A. (1972), Human Problem Solving. Prentice-Hall, New Jersey.
[9] Pacholczyk, D. (1998), A new approach to the intended meaning of negative information. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[10] Papamichail, K.N. (1998), Explaining and justifying decision support advice in intuitive terms. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[11] Popper, K.R. (1975), The Logic of Scientific Discovery. Hutchinson, London, UK.
[12] Schnattinger, K., Hahn, U. (1998), Quality-based learning. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.
[13] Searle, J.R. (1984), Minds, Brains, and Science. Harvard University Press.
[14] Turing, A.M. (1950), Computing machinery and intelligence. Reprinted in Mechanical Intelligence (D.C. Ince, ed.), North-Holland, 1992.
[15] Worm, K., Rupp, C.J. (1998), Towards robust understanding of speech by combination of partial analyses. In the Proceedings of the 13th Biennial European Conference on Artificial Intelligence, ECAI'98 (H. Prade, ed.), John Wiley & Sons.