

ThinkArt Lab Hogmanay 2004
ThinkArt Lab Animation:  A.T. Kelemen
© November 12, 1998 Dr. Rudolf Kaehr
PDF version of DERRIDA'S MACHINES, Part I+II
PDF version of Dynamic Semantic Web as Part I

Minsky's new Machine


1 Proemiality and Panalogy

1.1 Cognitive Systems and Panalogy Architectures

The DARPA label "Cognitive Systems" may be interpreted as an interplay of the cognitive and volitive aspects of a living system. Such an interplay was described by Gunther in his Cybernetic Theory of Subjectivity as the mechanism of the proemial relation.

This idea of a proemiality between structurally different systems can be brought to a more concrete level as an interplay of the four aspects of a living system: "architectonics", "reflectionality", "interactivity" and "positionality". I chose these terms because they show a possible connection to existing work in the fields of AI, robotics, living systems, etc.

None of these aspects is prior to the others. They are simultaneously founding and generating each other. There is no need for a complex architectonics if there is no need for complex reflection, and so on.

The polycontextural approach to cognitive systems postulates that cognitive systems are from the very beginning involved in the interplay of (at least) these aspects of specification. Cognitive systems do not exist in the world isolated for themselves, interacting and reflecting only from time to time. On the contrary, they simply do not exist unless they are involved, in principle and from the very beginning, simultaneously in all these actions.

At present there are some very interesting developments in AI, robotics and other branches, collected under terms like "Cognitive Systems" (DARPA), "Architectures" (Sloman, Minsky), "Emotion Machine" (Minsky), "Common Sense Interfaces", etc.

The main background idea and strategy seems to be to introduce multitudes against single monolithic concepts and methods, with slogans like "Multiple ways of thinking", "Diversity of ways of thinking", "Parallel ways of thinking", etc. The introduction of different agents, like critics, is part of the dissolution of the monolithic character of classical modeling in AI.

Minsky calls one important case of such multitudes "parallel analogy", or panalogy for short.

Push Singh is writing and implementing, in his Ph.D. dissertation, The Panalogy Architecture for Commonsense Computing.

The Panalogy Principle: If you 'understand' something in only one way then you scarcely understand it at all - because when something goes wrong, you'll have no place to go. But if you represent something in several ways, then when one of them fails you can switch to another. That way, you can turn things around in your mind to see them from different points of view - until you find one that works well for you now. And that's one of the things that 'thinking' means!
We cannot expect much resourcefulness from a program that uses one single technique - because if that program works in only one way, then it will get stuck when that method fails. However, a program with multiple 'ways to think' - the way we'll describe in Chapter §7 - could behave more like a person does: whenever you get frustrated enough, then you can switch to a different approach - perhaps through a change in emotional state.
Minsky

Sloman's Email

One aspect of the broader view is the way in which a growing interest in
architectures and varieties of forms of representation replaces, or
rather subsumes, the older emphasis on algorithms and data-structures.

By 'architecture' I don't mean what computer engineers used to mean: the
low level organisation of a kind of hardware device, e.g. a Turing
Machine architecture, or a VonNeumann architecture, or a Vax
architecture. Rather the study of architecture includes the study of all
sorts of ways of constructing complex functioning systems from many,
possibly diverse, components. This includes software architectures,
virtual machine architectures, hybrid architectures, etc.

I expect the study of architectures, especially layered virtual-machine architectures will
continue to grow in importance. We may need entirely new kinds of mathematics for this.
From: Aaron Sloman ([email protected])
Date: May 26, 2003 23:22

Push Singh's main questions

In order to explore how to build an architecture of diversity for commonsense computing, my thesis will explore these questions:
How can we represent a "way of thinking"?
How can we map types of problems to types of ways of thinking?
How can we detect problems with the current way of thinking?
How can we switch efficiently between ways of thinking?
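Singh's four questions can be read as the skeleton of a dispatcher over multiple "ways of thinking". The following is a minimal, purely illustrative sketch of that skeleton - none of the names or data structures come from Singh's actual architecture; they are assumptions made to render the four questions as code: a representation of a way of thinking, a mapping from problem types to candidate ways, a failure detector, and a switch.

```python
# A minimal sketch of an "architecture of diversity": several ways of
# thinking are tried in turn, and a failure in one triggers a switch
# to another. All names here are illustrative, not Singh's actual code.

class WayOfThinking:
    """One 'way of thinking': a solver plus an applicability test."""
    def __init__(self, name, solve, suited_to):
        self.name = name
        self.solve = solve            # problem -> answer, or None on failure
        self.suited_to = suited_to    # problem -> bool (maps problem types)

def think(problem, ways):
    # Map the problem type to candidate ways of thinking ...
    candidates = [w for w in ways if w.suited_to(problem)]
    for way in candidates:
        answer = way.solve(problem)
        if answer is not None:        # ... detect failure of the current way ...
            return way.name, answer   # ... or commit to its answer
    return None, None                 # every candidate way failed

# Two toy ways of thinking about small arithmetic problems:
exact = WayOfThinking(
    "exact",
    lambda p: p["a"] + p["b"] if p["kind"] == "sum" else None,
    lambda p: p["kind"] == "sum")
estimate = WayOfThinking(
    "estimate",
    lambda p: round(p["a"]) * round(p["b"]),
    lambda p: p["kind"] in ("sum", "product"))

# The "exact" way is not suited to products, so thinking switches:
print(think({"kind": "product", "a": 2.6, "b": 3.1}, [exact, estimate]))
```

The point of the sketch is only that switching is structural: failure of one way is detected (here crudely, by a `None` result) and control passes to the next candidate, rather than the program getting stuck in a single method.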

1.2 Ways of thematicizing

In earlier papers I introduced some distinctions to characterize how we are thematicizing our subject.

Explanation (Narration, Metaphors, Notions)

Formalization (Mathematics, Logics)

Implementation (Modeling, Computer implementation)

Realization (Construction, real-world performance)

Diagram 3: Guidelines of questioning (Leitlinien der Erfragung)

These categories of thematicizing are understood as purely functional and structural and not fixed in any sense. Therefore we can study the implementation of the explanation or the realization of the formalization, etc. In this sense even a poem can be thematicized by its power of formalization, its level of realization, etc.

From this point of view, the project "Cognitive Systems" with its architectures is situated in the field of Explanation (see Minsky's book "The Emotion Machine") and of Implementation and Modeling (see SADE, CogAff, SOAR), with Realization in the domain of Modeling. This implies Formalization too. But here we observe a very classical situation, without any attempt in the direction of the slogan "Multiple ways of thinking". The whole monolithic concept and apparatus of mathematical and logical thinking and reasoning is accepted; at least it is not a topic of the new panalogy program. The same happens with the realization of the model: it has to run on a classical computer, accepting the paradigm of algorithms as formalized, e.g., in the Turing machine, and the concept of information as formalized by Shannon.

This situation is surely not surprising, because it mirrors the current situation of technology and mathematics. Any denial of these presuppositions would sabotage the whole project of developing more complex systems today on a reasonable basis.

Nevertheless, discussing and surpassing the limits of the formalization power of mathematics for the realization of artificial living systems was one of the aims of the Biological Computer Laboratory (BCL) in the early days of AI research.

The only voice concerning mathematics in connection with the Grand Challenge project that I found is in Sloman's email: "We may need entirely new kinds of mathematics for this." But this statement can itself have a multitude of interpretations.

The open question remains: is the study of cognitive systems, more explicitly, is informatics, part of mathematics? Or is it the grand challenge of informatics to liberate itself from the paradigm of mathematics? It seems that Margaret Boden is thinking in this direction too.

1.3 Complementary Work?

From the point of view of strict foundational studies in this field, I try to work out some complementary aspects of the contemporary situation.

Therefore I have to focus mainly on the aspects of formalization. What are "Multiple ways of thinking" in logic and arithmetic? One current answer to this question can be found in the growing approach of Combining Logics (fibring, labelling, weaving formal systems; Pfalzgraf, Gabbay). This trend is not yet recognized by the vision of panalogy - probably mainly because this work is not only very recent but also extremely technical, and even mathematicians have some difficulty understanding and using it. Also, at least from my standpoint, this work is not radical enough, because it is based on the monolithic kernel of classical logic. The diversity comes here to a stop at the bottom and ends in monolithicity. On the other hand, its meta-language, category theory and multi-sorted logics, rests on a monolithic monster at the top.

That classical logics in all their forms are not enough for the study of cognitive systems may be well known. Not only Kant and Hegel discovered it; Peter Wegner also criticized the Japan Project from this point of view: Prolog, based on first-order logic, is too weak to cope with interaction. But the common strategy of Hegel and Wegner is to avoid logic and to switch to a more speculative or empirical level of modeling, instead of transforming the paradigm of logic itself. There is no reason to believe that logic is something natural, like the natural numbers of arithmetic, and that it could not be changed just as the naturality of the natural numbers can be de-mystified. This challenge is not accepted at all; the result is, again, some regression into non-formal thinking.

Because of my focus on foundational studies, my realization of the category of explanation is worked out in a more philosophical sense; the category of implementation, too, is more foundational than empirical, that is, it implements new formalisms and programming languages with the help of today's monolithic methods (existing programming languages) on monolithic machines.

"I first presented the idea that Turing machines cannot model interaction at the 1992 closing conference of the Japanese 5th generation computing project, showing that the project's failure to reduce computation to logic was due not to lack of cleverness on the part of logic programming researchers, but to theoretical impossibility of such a reduction. The key argument is the inherent trade-off between logical completeness and commitment. Commitment choice to a course of action is inherently incomplete because commitment cuts off branches of the proof tree that might contain the solution, and commitment is therefore incompatible with complete exploration."
Wegner, ECOOP '99, p. 1/2

As mentioned above, Wegner's strategy to surpass this limiting situation is not to liberate the paradigm of formality, which defines the very concept of logic and all concrete logical systems, but some form of regression to empiricism.

"Logic can in principle be extended to interaction by allowing nonlogical symbols to be interactively modified (reinterpreted) during the process of inference, for example by updating a database of facts during the execution of logic programs. However, interactive discovery of facts negates the monotonic property that true facts always remains true."
Wegner, p. 25

This strategy of extending logical systems by non-logical symbols for modeling interaction introduces into logic some non-logical elements of empiricism. For practical reasons this approach has its merits. Nevertheless, from a structural point of view of operativity and formality nothing has changed. Still the old logic is ruling the situation.
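Wegner's observation - that interactively discovering facts during inference destroys monotonicity - can be made concrete in a few lines. The following toy fact base is purely illustrative (the predicates, the rule format and the function name are my assumptions, not Wegner's code or any real logic-programming system): a conclusion that is derivable at one moment is retracted once a new fact is interactively added.

```python
# Toy illustration of Wegner's point: if the nonlogical symbols (the facts)
# can be modified during inference, a conclusion that was derivable earlier
# may no longer hold later - monotonicity is lost. Names are illustrative.

facts = {"bird(tweety)"}

# One rule, stated with an exception: conclude `flies` from `bird`,
# unless the exception `penguin` is known.
rules = [("bird(tweety)", "flies(tweety)", "penguin(tweety)")]

def derivable(goal):
    """Is `goal` a known fact, or concluded by an unblocked rule?"""
    for premise, conclusion, exception in rules:
        if goal == conclusion and premise in facts and exception not in facts:
            return True
    return goal in facts

print(derivable("flies(tweety)"))   # True: nothing blocks the rule yet

facts.add("penguin(tweety)")        # interactive discovery of a new fact
print(derivable("flies(tweety)"))   # False: the earlier 'truth' is retracted
```

In a monotonic logic, enlarging the set of premises can never shrink the set of conclusions; here it does, which is exactly the non-logical, empirical element that the extension imports into the logic.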

This strategy of extending the concept of pure classical logic is well known, at least from the work of mathematical linguists. I named this tendency of concretizing pure logic towards more mundane tasks the "parametrization of logical constants". Every element in a logical system which has some constant definition can be parametrized to a more dynamic notion. In a strict sense, all these extensions are co