ThinkArt Lab Hogmanay 2004
ThinkArt Lab Animation:  A.T. Kelemen
© November 12, 1998 Dr. Rudolf Kaehr

Exploiting Parallelism in PCL-Systems

1 Polycontextural Strategy towards the Challenge of Parallelism

"Cooperation is expensive, yet it is the only way to get large tasks done quickly."
Peyton Jones, 409


Living systems

Living systems are not based on an expensive exploitation of nature. They are naturally non-expensive but highly complex. Complexity, in contrast to complication (= what is nowadays called complexity), is a strategy of nature that is not repeated in science. Science behaves quite unnaturally in thinking and replicating natural systems.

Robert Rosen has introduced, as a first step towards a more natural science of the natural, the distinction between simple and complex systems.

Today's reality of computing isn't mirrored properly in the framework of mono-contextural concepts, models and methods. To realize parallelism, it would be much more straightforward to understand that parallelism is a quite special case of interactivity between more or less autonomous systems.

Interactivity is well understood in the framework of poly-contextural systems. Therefore I propose that parallelism is a special case of polycontexturality.

Parallelism comes in many incarnations in computer science. In this study I will start with a quite vague notion of parallelism, restricted mainly to software concepts and programming languages. Standard examples of functional and logic programming are introduced for the purpose of deconstruction and of introducing a new understanding of parallelism as guided by poly-contextural strategies. This new understanding of parallelism as interaction of autonomous systems is not in conflict with the well-known approaches to handling parallelism. On the contrary, the poly-contextural approach is seen as a complexion of complementary strategies enriching the field of parallelisms.

1.1 Parallelism in hierarchies

Parallelism in classical systems is obviously defined by its operations and equations. The operations and functions can have the property of being parallelizable. The head of the basic module, or of the abstract algebra in which these functions are defined, is not involved in the procedure of parallelism. Neither is the main function which heads the parallelized child functions. The head of the algebra and the main function hierarchize the construct of parallelism. In this sense parallelism occurs only secondarily, embedded in a hierarchy of the whole structure. This approach is reasonable because parallelism is involved here in solving a well-defined problem and delivering a well-defined solution, and parallelism is considered only as a tool or method to solve the problem more economically than in a linear setting.

Diagramm 46

Procedure of parallelism

It seems to be of no value to think about a distribution of the problem and the solution statement during the procedure of parallelism. The main task and its solution are unique, and a distribution of them would only be a repetition of the same problem at different places. In other words, parallelism in programming languages based on combinatory logic is limited to the parallelism of the strict combinators. The non-strict combinators, like S, K, I, are not involved in this type of parallelism.

Diagramm 47

Basic Module

For CL, the basic module is reduced to: the contexture is 1, the super-operator is ID, the name is Combinatory Logic, there is one sort, the entities E, with constants K and S, the operation opn is *, and the equations eqns are "=". Parallelism is concerned only with operations and equations. In contrast, poly-contextural parallelism distributes the whole concept of an abstract algebra, with its head and body, over different places.

Diagramm 48

Logical structure of a parallel graph reduction machine

"A task is executed by an agent. Typically an agent will be implemented by a physical processor. Agents are concrete pieces of hardware (we can point to one!), whereas a task is a virtual object (a piece of work to be done). An agent is employed if it is executing a task. An unemployed agent will look for a task to execute in the task pool which contains all the tasks awaiting execution."

"Synchronization between tasks is mediated entirely through the graph, so that the tasks do not communicate directly with each other at all." Peyton, p. 414

Obviously the reduction of the S combinator, Sxyz = (xz) (yz), gives the key to parallelism in classical systems. The graph reduction of Sxyz is shown in the diagram.

Parallelism in CL-based programming languages depends first of all on the combinator S. The graph gives the logical structure of the formula.

Is there any parallelism involved with the other main combinator, K? Obviously not: Kxy = x reduces (xy) to (x). This process doesn't show any possibility of parallelism.
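The contrast between S and K can be made concrete in a few lines of Haskell (a minimal sketch, not from the original text): in S x y z the two applications (x z) and (y z) are independent redexes that a parallel graph reducer may evaluate simultaneously, whereas K x y simply discards its second argument and leaves nothing to parallelize.

```haskell
-- The S combinator: S x y z = (x z) (y z).
-- The subterms (x z) and (y z) share no intermediate results,
-- so a parallel graph reducer may evaluate them simultaneously.
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = (x z) (y z)

-- The K combinator: K x y = x.
-- It discards its second argument; the reduction offers
-- no independent redexes and hence no opportunity for parallelism.
k :: a -> b -> a
k x _ = x
```

For example, s (+) (*2) 3 reduces to 3 + (3*2) = 9, where 3 and 3*2 arise from two independent subcomputations.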

Remark. From a PCL point of view, which deals with complexities, and where terms are complex and in some sense ambiguous, the reduction opens up the question: to which x of the other, but same, x's of the complexion are we reducing? The intra-contextural understanding of the rule Kxy = x asks for an additional identity relation ID, which rules the process of staying in the contexture and not leaving it. By analogy, K can reduce at once into different contextures, K(3)xy = (x, x, x), depending on the super-operator BIF. Every action or transition of a system is involved in the question of a contextural staying/leaving-decision.

Parallel Graph Reduction
"We have seen that functional languages can form a basis for parallel programming.
(i) Graph reduction is an inherently parallel activity. At any moment the graph may contain a number of redexes and it is very natural to reduce them simultaneously.
(ii) Graph reduction is an inherently distributed activity. A reduction is a (topologically) local transformation of the graph, and no shared bottleneck (such as an environment) need be consulted to perform a reduction.
(iii) All communication is mediated through the graph. This gives a very simple model of the way in which concurrent activities cooperate, and it is a model in which we have considerable confidence (because it is the same as our sequential implementations!)
(iv) The entire state of the computation at any moment is well defined - it is the current state of the graph.
Graph reduction gives us a rock-solid model of parallel computation which can underpin the complexities of a parallel machine." Peyton Jones, p. 413

Although functional programming languages are not inherently sequential like conventional imperative languages but in principle parallel, the program must contain algorithmic parallelism in order to produce interesting results.

This inherent parallelism of functional languages makes it possible to program parallelism with the means of the language itself, without the need to add new operators of parallelism.

This concept is widely known and has been explicitly studied by S.L. Peyton Jones, Kevin Hammond and Hans-Wolfgang Loidl.

Parallel HASKELL

To exploit algorithmic parallelism in mono-contextural parallel functional programming languages like parallel HASKELL, two main operators are additionally introduced: par and seq.

Simulating parallelism in the framework of poly-contextural systems has to deal with these two operators.

Modeling: seq, par ==> diss

The parallel operators par and seq in GranSim

For example in the expression

squares = [ i^2 | i <- [1..n] ]
squares `par` sum squares

the list of squares will be produced by one thread (the producer) and the result sum will be computed by another thread (the consumer). Note that this producer-consumer structure of the algorithm is solely determined by the data-dependencies in the program. All the programmer has to do is to annotate a named subexpression (in this example squares), which should be computed in parallel. Thread placement, communication and whether the thread should be created at all are up to the runtime-system.
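This producer-consumer example can be made self-contained against GHC, where par is available from GHC.Conc in base (and re-exported by Control.Parallel from the parallel package); the wrapper name sumOfSquares is an illustrative addition, not part of the original example:

```haskell
import GHC.Conc (par)  -- also re-exported by Control.Parallel

-- Producer-consumer: spark the list of squares as a parallel thread,
-- then sum it; the data dependencies alone structure the parallelism.
sumOfSquares :: Int -> Int
sumOfSquares n = squares `par` sum squares
  where squares = [ i ^ 2 | i <- [1 .. n] ]
```

Without the threaded runtime the spark simply degenerates to sequential evaluation; the result, e.g. sumOfSquares 4 = 30, is the same either way.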

Algorithmic parallelism of Fibonacci
parfib :: Int -> Int
parfib 0 = 1
parfib 1 = 1
parfib n = nf2 `par` (nf1 `par` (nf1+nf2+1))
  where nf1 = parfib (n-1)
        nf2 = parfib (n-2)

The drawback of this program is the blocking of the main task on the two created child tasks. Only when both child tasks have returned a result can the main task continue. It is more efficient to have one of the recursive calls computed by the main task and to spark only one parallel process for the other recursive call. In order to guarantee that the main expression is evaluated in the right order (i.e. without blocking the main task on the child task), the seq annotation is used:

parfib :: Int -> Int
parfib 0 = 1
parfib 1 = 1
parfib n = nf2 `par` (nf1 `seq` (nf1+nf2+1))
  where nf1 = parfib (n-1)
        nf2 = parfib (n-2)

The operational reading of this function is as follows: First spark a parallel process for computing the expression parfib (n-2). Then evaluate the expression parfib (n-1). Finally, evaluate the main expression. If the parallel process has not finished by this time the main task will be blocked on nf2. Otherwise it will just pick up the result.
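This operational reading can be checked directly in GHC. The following is a runnable transcription of parfib, using pseq from base's GHC.Conc instead of plain seq, since in modern GHC pseq (not seq) is the operator that guarantees the evaluation order described above:

```haskell
import GHC.Conc (par, pseq)

-- parfib as in the text: spark nf2 as a parallel process,
-- force nf1 first, then combine. Note that with the +1 in each
-- step, parfib counts the nodes of the recursive call tree
-- rather than returning the Fibonacci number itself.
parfib :: Int -> Int
parfib 0 = 1
parfib 1 = 1
parfib n = nf2 `par` (nf1 `pseq` (nf1 + nf2 + 1))
  where
    nf1 = parfib (n - 1)
    nf2 = parfib (n - 2)
```

For instance, parfib 5 = 15: the call tree of the doubly recursive function for n = 5 has 15 nodes.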

1.2 Parallelism in Heterarchies

Parallelism in polycontextural systems is not hierarchical but heterarchical. Heterarchies in PCL systems are strict: there is no summum bonum, no main head or main task which could have the power to subsume the disseminated systems under a common general system. Such a concept of heterarchy is not involved in any reduction to a meta-system of whatever generality. In other words, a general system which, as a mediating system, would reflect the behavior of two contextures would itself be a contexture like the observed contextures.

The difference would depend only on their different functionality or roles but not on their structure. The meta-system would simply be another contexture alongside the observed contextures. This result is a consequence of the proemiality of dissemination as opposed to the abstracting and homogenizing power of morphisms.

Parallelism in polycontextural systems has at least two main aspects. One is the autonomous parallelism of the systems involved. Because polycontexturality consists of a mediation of distributed basic systems, parallelism is not only inherent inside each system but fundamental, in the sense that the architectonics of the system as a heterarchy is parallel.

The second aspect is the trans-contextural interactivity between the systems involved. The first can be seen as a radicalized form of classical parallelism, changing from the strict to the non-strict functionality; the second is unknown in classical systems and has at best a vague analogy in message passing between tasks.

Diagramm 49

Basic Module for 2-contextural algebras

Because poly-contexturality, in this study, consists of a dissemination of combinatory logics, at each contextural occurrence of such a combinatory logic there is obviously also an occurrence of the S combinator, responsible for intra-contextural parallelism. Therefore a distinction has to be made between the architectural parallelism of the whole system and the intra-contextural parallelisms, i.e. the types of parallelism defined by the S combinator. This distinction between two types of distributed parallelism, inter- and trans-contextural, opens up a great flexibility in modelling parallelism in PCL systems.

Following the S-parallelism with its strict separation of parallel procedures, we can speak of a similarly strict, disjunct parallelism of distributed autonomous functions for each contexture. That is, this strict parallelism is defined without the trans-contextural super-operator BIF, consisting only of intra-contextural operations. If we additionally exclude any permutation PERM and reduction RED, an absolutely strictly separated parallelism is introduced. For logical operations this can even be strengthened by the restriction to strictly mono-form functions, like (and, and, and), (or, or, or) etc.

1.2.1 Interaction model of consumption and production

Production and consumption are understood as interacting procedures. They build a system of mutual interaction. If we concentrate only on their common data, we run into ontological or semiotic problems of their identity, because in a classical setting we cannot consume and produce the data under consideration at the same time, that is, at once. Data-sharing under the principle of identity easily produces contradictions. Both procedures are, at least in a classical setting, per se well-defined and independent of each other. Together their functionality is contrary; as notions they are opposites.

Neither-nor. The objects they are producing and consuming are as such neither produced nor consumed; they can be involved in this interaction, but there is no necessity for their existence to be involved in this special game. Therefore they are localized as patterns in the system of kenogrammatics.

Both at once. To produce and to consume the same object at once obviously violates the principle of identity, because both processes, production and consumption, are contrary procedures related to common objects. Working together at once in an interaction, they are localized as behavioral patterns in different contextures. In this distributed functionality they are independent of each other. Referring at once to the same object, as an object of consumption and an object of production, their common object is over-determined and ambiguous.

1.3 Hierarchical parallelisms in Heterarchies

One of the main tasks of the proposed approach of introducing polycontextural systems is to model classical parallelism in a polycontextural framework. This is the strategy of applying complex methods to non-complex problems.

What are the intended merits and advantages of this approach?

To be ruled by a common head means that the control structure, the logic and all basic non-strict operators are held in common and cannot be used or involved freely. Even though classical parallel processes can run concurrently without sharing special data and methods involving elaborate communication, what they share anyway is their common data type and their common logic and arithmetics, as superposed in their common head. For that reason, a complicated strategy of distributing the access to the computational resources is needed. Because the abstract poly-contextural or polylogic machine as such is parallel in itself, it is natural and not expensive to model parallel functions on it. The costs lie somewhere else: in the decision to give up the hegemony of classical logocentric reasoning and to develop, on a new level of abstraction and thematization, a system of polycontextural computation.


"I will deal with the problem but I will solve only some parts of it. You can deal with the other parts of the whole problem."

This does not only mean that an agent is dealing with the sparked parallel task, but that it also knows the whole problem; that is, it accepts the head of the problem too. This is automatically given in the classical situation, because the tasks are hierarchically organized by their common head. But the isolated parallel tasks don't know anything about the main problem; they are reduced to their parts of the whole problem. Not only the isolated parallel tasks have to be distributed over different contextures but also their head, and with it their main task. Although the whole problem is distributed, this doesn't mean that in each contexture the whole problem has to be solved. In accordance with the modeling of the parallelism, each contexture will be concerned only with the parallel task which it has accepted to deal with. This decision can be changed, but this is not in the focus now. The distribution of parallel tasks over different contextures involves a process of negotiation and agreement which will not be modeled in our examples.

Towards a strategic metaphor

To solve a problem we have to clone it into its multitudes. This could give a hint to a metaphor of poly-graph reduction.

As a direct step to model the classical type of parallelism in the polycontextural framework, I propose to map the intra-contextural functional nodes @ of the hierarchical graph onto the mediating poly-contextural operators § of the heterarchical graph. With this mapping we distribute the hierarchically organized parallel tasks of the source to the heterarchically organized tasks of the polycontextural target system. This further step from hierarchical to heterarchical graph reduction could be called poly-graph reduction.

In the same sense as the hierarchical organization of the tasks is ruled by the graph, the heterarchical organization rules the polycontexturally distributed tasks. As the hierarchical graph of parallelism guarantees the functioning of the computation, the heterarchy of the polycontextural graph is guaranteed by the operator of mediation § of the distributed tasks. Both, distribution and mediation, are the characteristics of dissemination. Thus, hierarchical parallelism is mapped into disseminated parallelism.

This picture of a mapping from a hierarchy to a heterarchy has two strategic functions: first, it gives us a visualization of the idea, that is, a metaphor of our vision; and second, a hint towards a strict formal realization of the idea as a dissemination of intra-contextural parallelism to trans-contextural parallelism.

1.3.1 A very first step of modeling poly-contextural parallelism

Diagramm 50

Poly-Graph Reduction

Sffx =par (fx) @ (fx) ===> Sffx ===PAR (fx) § (fx) § (fx)

Poly-graph reduction: Operator@ ==> Operator§

The @-structure is hierarchical, and well known. The §-structure is heterarchical, and has to be introduced. It is the structure of distributed and mediated, that is, disseminated systems as developed before.

Poly-graph reduction models the intra-contextural parallelism of a single contexture, a mono-contexture, into individual, that is, elementary contextures of a poly-contextural compound. What is inside a contexture, what is part of a contexture, becomes itself a contexture in a poly-contextural system. At least, this is the main strategy, without mentioning the transformations which happen to these parts.

As an introductory example let's take the formula x = (ADD (ADD 1 2) (ADD 3 4)), well studied in Mahler, p. 143.

The tree of x is obvious.

Task: ADD (ADD 1 2) (ADD 3 4)

Although this example shows algorithmic parallelism, it is modeled in a framework of mono-contextural combinatory logic. Parallelism is restricted to strict operations.

This task is easily distributed over 3 contextures, and its shortcut notation may be:

ADD3 ((ADD1)(ADD2))

In reality, 3 different number systems are involved in this model. Although they are not identical, they are the same. And the number system of contexture3 takes the role of representing the resulting arithmetics, perhaps forgetting the genealogy of its components.

As in the intra-contextural parallel modeling of x, the child tasks can be executed in parallel without any delay on a multi-processor machine. The main task has to wait in both cases of modeling under consideration. But there is an important difference even in this elementary example of modeling parallelism. The main task in the poly-contextural model is in reality not waiting for the results of the distributed child tasks. It is an autonomous ADD-processor; it may add whatever there is to add at the time. That this processor takes the role of the executor of a main task is secondary. If it gets its values from the neighbor tasks which represent the child tasks, it will operate on them. Therefore this process is not involved in a waiting mode at all. Nevertheless there is a structural dependency between the child and the main tasks in this example: the main task can only be executed if the results of the child tasks are accessible to it.

As in the classical case, the graph takes over the organization of the tasks. The classical graph is hierarchical, the poly-contextural graph is heterarchical. In the hierarchical case the organization of the work seems to be easy, because the main task simply has to wait for its results. But this comes at the cost of efficiency: because the processor is identified with its role in the graph, there is no autonomy to process something different; therefore, in contrast to the heterarchical case, it has to wait.

The price of the autonomy of the processors in the PCL model is their need for reflection. The processor has to decide to take the job if it is free and the neighbor processors have delivered their results. The results of the neighbor systems are not put directly into the scope of the main processor but into the reflectional space of the neighbor systems inside the main system. As the results of the neighbors, the main agent can accept these results and take them up for further calculation. In the same sense the processors which play the roles of the child tasks are free to continue other jobs, using or not using the results they send to the main processor.

All that seems quite complicated and dangerous compared to the hierarchical model, which lacks any freedom and is therefore very stable, but at its own cost of identifying different roles which are separable in the heterarchical case.

This example shows again the purely functional definition of poly-contextural operations. They are oriented not so much hetero-referentially to their arguments, values and objects but more to their own functionality and to their place and locality inside the whole scenario of poly-contexturality.

In this example, the mirrored value represents to the machine M3 at the place O3 the value of processor M1, which delivered the value a for itself in its own space.

This value or object is semiotically not identical with the object a, but it is the same; that is, it is a duplication of a. This duplication of a is not well understood in terms of data processing. It is a transition from one contexture to another. Such operations are part of the poly-contextural framework, independently of the systems which are localized in it. Therefore the proposed model is nevertheless not a model of data-sharing or the like.

The mirrored objects in contexture3 of contexture1 and contexture2 are not identical with the objects a´´ and b´´ in use by the main task of machine3; they are, again, only the same, that is, they are objects used as objects for computation by machine3 as delivered by the other machines. These objects have 3 different appearances with respect to their 3 different roles in the whole game. All these roles are played by the objects more or less simultaneously. As a final result of this interaction, machine3 delivers the result c.

The metaphor of cloning, in contrast to the metaphor of transport and transfer of data, may be helpful again.

1.3.2 A further step of modeling poly-contextural parallelism

The idea is to simulate classical parallelism by means of polycontextural methods. To simulate parallelism in classical systems we don't need transcontextural operators; it is sufficient to use mono-form or poly-form intra-contextural operators. These operators are defined by the restricted set of the super-operators {ID, RED, PERM}.

The Fibonacci function in the example is defined by double recursion. This function is primitive recursive.

Classical parallel formulation:

parfib :: Int -> Int
parfib 0 = 1
parfib 1 = 1
parfib n = nf2 `par` (nf1 `seq` (nf1+nf2+1))
  where nf1 = parfib (n-1)
        nf2 = parfib (n-2)

Poly-contextural distributed formulation:


parfib :: Int -> Int
parfib 0 = 1
parfib 1 = 1
nf1 = parfib (n-1)

put to system3



parfib :: Int -> Int
parfib 0 = 1
parfib 1 = 1
nf2 = parfib (n-2)

put to system3


par-task3=main task

parfib :: Int -> Int
parfib 0 = 1
parfib 1 = 1
nf3 = par (nf1+nf2+1)
get nf1
get nf2


Contexture3 represents the main contexture with the full instruction set. Contexture1 and contexture2 calculate the two independent parallel tasks nf1 and nf2. The operation seq has a representation in the transaction (put/get) of the results of nf1 and nf2 to nf3. Because the Fibonacci functions "fib" are distributed over 3 different contextures, which are architectonically parallel, it is not necessary to repeat the operator "par" in the distributed functions nf1, nf2 and nf3.
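The put/get transaction of the distributed formulation can be illustrated, purely as an analogy, with threads and MVars from GHC's base library: each contexture is modeled as an independent thread, "put to system3" becomes putMVar, and "get" becomes takeMVar. The names fibT, v1 and v2 are illustrative and not part of the PCL formalism:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- the doubly recursive function computed by each contexture
fibT :: Int -> Int
fibT 0 = 1
fibT 1 = 1
fibT n = fibT (n - 1) + fibT (n - 2) + 1

main :: IO ()
main = do
  v1 <- newEmptyMVar
  v2 <- newEmptyMVar
  _ <- forkIO (putMVar v1 (fibT (10 - 1)))  -- contexture1: put nf1 to system3
  _ <- forkIO (putMVar v2 (fibT (10 - 2)))  -- contexture2: put nf2 to system3
  nf1 <- takeMVar v1                        -- contexture3: get nf1
  nf2 <- takeMVar v2                        -- contexture3: get nf2
  print (nf1 + nf2 + 1)                     -- main task combines the results
```

The main thread is blocked on takeMVar only until the corresponding result has been put, mirroring the structural dependency between the child tasks and the main task.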

Main scheme for PCL-parallel Fibonacci

parfib(3) :: Int(3) -> Int(3)
parfib(3) (0, 0, 0) === (1, 1, 1)
parfib(3) (1, 1, 1) === (1, 1, 1)
nf(3)3 === par(3) (nf(3)1 +++ nf(3)2 +++ (1,1,1))
get nf11 for nf33
get nf22 for nf33

Reflectionality in parallel processing.

Instead of the function get in "get nf2 for nf3" we could use "mirror": mirror the values of nf1 and nf2 at the place of nf3.


At first glance such a mapping seems rather trivial or even meaningless. But we shouldn't forget that the structure of the framework into which we are embedding these different arithmetical functions rules their locality and the possibilities of their interaction. In other words, the distributed arithmetics of the polycontextural framework are autonomous in their status; they receive jobs, such as calculating parts of a Fibonacci function, from another contexture and deliver their results back to that arithmetical contexture.

What are the advantages of this modeling of parallelism in polycontextural systems? Even if the difference seems nearly invisible, the hierarchical and the heterarchical parallelism are fundamentally different in their conception and operativity.

In both models the calculation is decomposed into 3 steps, realized, say, by 3 processors. But the difference in the mode of decomposition is crucial: one is algorithmic, the other architectonic. Therefore their parallelisms are set on different operational levels. Although this difference is formally introduced with sufficient rigor, to make it more explicit and clear we should introduce a new, more complex example. The difference between the arithmetical and the architectonic parallelism influences the way the parts communicate, how the child tasks deliver their results to the main task. Obviously the architectonic model incorporates the mechanisms of communication in its basic logical and arithmetical operations, whereas the algorithmic model has to introduce special non-logical and non-arithmetical operations to accomplish communication. Communication in this model seems not to be guaranteed by the graph of the function alone. The connection graph in the hierarchical model rules only the conceptual dependencies of the parts and gives no mechanism for the arithmetics of the connection of the parts. In contrast, the graph in the heterarchical model represents the rules of the mediation of the different contextures to a complex polycontextural system and represents, insofar, the architectonic base of interaction and communication between the parts.

Because the Fibonacci example proposes to solve a problem, and especially a hierarchical problem (main task, child tasks, result), the procedure is to bring heterarchical components together into a hierarchical solution. This is an interplay between hierarchical and heterarchical functionality of parts of a complex system.

1.3.3 A more explicit modeling of the Fibonacci example

f(5) par f(4)

(f(4) par f(3)) par (f(3) par f(2))

Usually we would reduce any redundancy in the program by means of storing and retrieving mechanisms. But this strategy relies on other mechanisms which are used to solve the task. Here the Fibonacci example is taken to show a new modelling which depends only on the formula and its logic; no special tricks are involved. Redundancy here is only a part of the example and not of the concept.

The brackets in the scheme are not necessary because of the parallelism of the tasks.

Trees mapped into binomials.

(f(4) et f(3)) par f(3) par f(2) S1S2S4

------------------------- S3S5

f(5) par f(4) S6



An even cleaner mapping would be to model the arithmetical subtasks only into disjunct logical subsystems, like S1, S4, S7, etc.

1.3.4 Putting the model into a tabular design

In a tabular design of the distributed parallelism it becomes visible how the procedure of the parallel computation works. In the quasi-algebraic modeling we are concerned mainly with the algebraic structure of the proposed parallelism and not with its processuality.

Because of the constitutional or architectonic parallelism of PCL systems it is adequate to speak of parallelism even in situations where we don't have any algorithmic parallelism. Even the arithmetical successor function, which is much too elementary to have any intrinsic structure giving room for parallelism, has in poly-contextural systems, qua distributed function, its parallelism. Poly-arithmetics are constitutionally parallel.

Two actors each process a function independently, one in system1 and the other in system2; the main task is processed by the actor in system3.

Although a system can know, that is, mirror the whole problem in its contexture, an actor can be seen as a processor which processes a function regardless of where it comes from. The actor does not have to be focused on the problem in full.

Zig-zagging parallelism

Mapping classical parallelism onto polycontextural systems doesn't deny the possibility of developing, inside a contexture, again classical parallelism based on intra-contextural S-operators. These distributed intra-contextural parallelisms can themselves now be handed over to new contextures for trans-contextural parallel computation. There is no need for an explosion of contextures, but also no necessity for an encapsulation of rapidly growing internal parallelisms; they can be exported, if necessary, to other contextures.

2 Parallelism in Polycontextural Logic

In addition to the well-known OR- and AND-parallelism, polylogical systems offer two main extensions to the logical modeling and implementation of parallelism: first, the distribution of the classical situation over several contextures, and second, the trans-contextural distributions ruled by the different transjunctional operators. The distribution over several contextures corresponds to a concurrent parallelism where the different processes are independent but structured by the grid of distribution. The trans-contextural parallelism corresponds to a parallelism with logical interactions between different contextures.

"The tree corresponding to the search for a solution to a question seems open to various kinds of parallelism. The most obvious technique, called OR parallelism, allows processes to search disjunctive subtrees in parallel, reporting back to the parent node the result(s) of the search.
The advantage of OR parallelism is that the searches are completely independent of each other and may execute concurrently (except that both may share access to a common data base storing facts and rules). The process performing the search of one subtree does not communicate with processes searching other subtrees." Michael J. Quinn, 212, 1987
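The independence of OR-parallel searches can be sketched in a few lines of Haskell (a toy model, not Quinn's own formulation: a disjunctive search tree whose subtrees are explored as independent sparks, with the parent node merely combining the results):

```haskell
import GHC.Conc (par)

-- A toy disjunctive search tree: a leaf either succeeds or fails;
-- an Or node succeeds if either subtree succeeds.
data Tree = Leaf Bool | Or Tree Tree

-- OR parallelism: spark the search of the right subtree while
-- searching the left; the two searches share nothing and do not
-- communicate, each only reports its result back to the parent.
search :: Tree -> Bool
search (Leaf b) = b
search (Or l r) = let sl = search l
                      sr = search r
                  in  sr `par` (sl || sr)
```

As in Quinn's description, the subtree searches are completely independent; only the parent node collects the result(s).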

Prolog is based not only on its logic, used as an inference machine, but also on its semantics or ontology, realized as a data base. Therefore the process of parallelising has to deal with a deconstructive dis-weaving of the data base's ontology.

2.1 Strategies towards a polycontextural parallelism in Prolog

As in the case above, where the number systems had to be cloned, in the Prolog case the data base has to be decomposed into disjunct parts. These separated conceptual parts, or conceptual subsystems, have to be distributed over different contextures in a mediated polycontexturality.

Additionally, the Prolog parallelism, which is based on OR- and AND-parallelism, has to be mapped into distributed logics, that is, into a polylogical system.

The Prolog example makes it possible to explain in a more plausible way the decomposition or cloning of the common universe of discourse, that is, the database of facts, into different subsystems. And secondly, it is easier to introduce parallelism based on polycontextural logic than on arithmetic and combinatory logic.

Polycontextural logic is not widely known, but it is more accessible than combinatory poly-logic and poly-arithmetic, which I am just introducing. Additionally, there has existed since 1992 a working implementation of a tableaux proof system for an interesting subsystem of polycontextural logics in ML, running on Unix systems such as NeXT.

2.1.1 An intermediate step with Metapattern

As an intermediate step in the shift of conceptualization from a hierarchical to a heterarchical way of concept building, it may be helpful to use the strategy of the metapattern (Wisse). Metapatterns are used as a new modeling strategy for complex informational systems. Metapatterns are not involved in changing the basic assumptions of programming languages or even their logic, as is the case with the PCL approach.

Metapatterns could be helpful in moving the process of parallelization from the OR- and AND-level, that is, from the logical level, to the deeper level of the database, with its facts and rules, shared by classical parallelism.

"She can relax on a fixed object orientation because - the metapattern determines that - situation and object are relative concepts (Wisse 2001). A particular situation is also object in another, higher-level situation. Likewise, an object can act as situation in which another, lower-level object resides. Situation, then, is a recursive function of object and relationship." Wisse

Hierarchy or chiasm?

"It is this concept of situation that characteristically sets the metapattern apart from traditional object orientation (and provides it with advantages over OO; Wisse 2001). Compared to an object that (only) exists absolutely, an object believed to exist in a multitude of different situations can unambiguously be modeled - to be equipped - with corresponding behavioral multiplicity." Wisse 2001

The radical conclusion from the orientation toward situational behavior is that an object's identification is behaviorally meaningless. The modeler does not have to explicitly include something like an original signature in all her models. Essentially, a privileged situation may be implied. It serves the only purpose of guaranteeing sameness or, its equivalent, persistent identity across (other) situations. Being a situation in its own right, when included in a model it is represented by a separate context. Made explicit or not, its role is to authenticate an object's identity in other situations by establishing the signature in other contexts.

Identity as a network of nodes
Traditional object orientation assigns identity at the level of overall objects. Context orientation replaces this view of singular objects with that of plurality within the object; the object always needs a context to uniquely identify the relevant part of an overall object, which is what identifying nodes regulate. When behaviors are identical, no distinction between contexts is necessary.

2.2 Deconstruction of a typical PROLOG example

The classical Prolog example of proving an "aunt"-relationship can be decomposed from its hierarchical ontology into different situations, mapped into different contextures, and visualized in the metapattern.

kinship: married/not-married, in-law, aunt

gender: male, female

genealogy: parent, sibling

ontology: different/not-different

It is also possible that there is some overdetermination, because parent and sibling could also be part of kinship.

In Prolog, all the facts belong to one ontology, to one general semantic domain or universe. All the rules are based on this mono-contextural ontology and on the corresponding logical operators AND and OR of the, again, mono-contextural logic. Everything is therefore linearized and homogenized to a global or universal domain. If this corresponds fairly with the real-world situation, it is of great practicality and efficiency in both directions: for the formal system, Prolog, and for its database.

But often, if not always, real-world applications are much more complex than this. Even this fairly classical example presupposes all sorts of facts which are not mentioned in the definition and which would belong to a different real-world situation.

I am not criticizing this kinship model. It does its job of giving a first introduction to Prolog perfectly. Again, I am using this example for deconstructive reasons, that is, for introducing the PCL way of thinking. This is, again, a form, I guess, of legitimate abuse of classical models.

Instead of linearizing the above separated contextures (kinship, gender, genealogy, ontology) into one universal domain, represented in this example by kinship, the polycontextural modeling asks for an interweaving and mediating of these different contextures into a complex poly-contexturality.

Compared to the original mono-contextural modeling, this involves much more complicated mechanisms than are necessary in the classical case.

Why should we model a simple situation with highly complex tools into a complex model if we can solve the problem with much simpler tools? Simply because the classical approach lacks any flexibility for modeling a complex world. The truth is that the simple approach needs an enormous amount of highly complicated strategies to homogenize its domains and make them accessible to its formal languages.

Decomposing the basic classical ontology into different disjoint domains is a well-known procedure and should not be confused with the decomposition, or de-sedimentation, of an ontology in the PCL case. In PCL the domains are not simply disjoint and embraced by the general ontology, but interwoven in a complex mechanism of interactions.

2.2.1 Polylogical modeling of the metapattern

The metapattern approach has helped to dissolve the hierarchical conception of the "aunt"-relation into different aspects.

In Prolog, the aunt-relation is defined as follows:

aunt(x,y) :- female(x), sibling(x,z), parent(z,y).

Additionally, the rule for sibling is:

sibling(x,y) :- parent(z,x), parent(z,y), x \== y.

The aunt-function is fulfilled, and is true, if all components connected by the conjunction et (AND) are true.

true(aunt(x,y)) iff (true(female(x)) et true(sibling(x,z)) et true(parent(z,y)))
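Since no Prolog interpreter is run here, the classical reading of the rules can be checked with a small Python rendering. The fact base below is an assumption, chosen only so that the result aunt(mary, kathleen) discussed further on actually holds; the names mary, stuart, edward and kathleen are taken from the text.

```python
# Hypothetical fact base; the parent facts are assumptions chosen so that
# aunt(mary, kathleen) holds, matching the result given in the text.
female = {"mary", "kathleen"}
parent = {("stuart", "mary"), ("stuart", "edward"), ("edward", "kathleen")}

def sibling(x, y):
    """sibling(x,y) :- parent(z,x), parent(z,y), x \\== y."""
    return x != y and any((z, x) in parent and (z, y) in parent
                          for z, _ in parent)

def aunt(x, y):
    """aunt(x,y) :- female(x), sibling(x,z), parent(z,y)."""
    return x in female and any(sibling(x, z) and (z, y) in parent
                               for z, _ in parent)

print(aunt("mary", "kathleen"))  # True
```

The conjunctions of the Prolog rules appear here simply as Python `and`; it is exactly this et that the polylogical modeling below replaces by mediation.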

The metapattern distributes the AND (or: et) over different heterarchical places but gives no formalism to handle this distribution. Polylogics also distributes these conjunctions, but transforms them at the same time into operators of mediation. Polylogics is, in short, defined as a distribution and mediation of classical logics.

aunt(x,y) := female(x) § sibling(x,z) § parent(z,y)

sibling(x,y):= parent(z,x) § parent(z,y) § (x/==y)

Therefore the polylogical truth-function is transformed to:

aunt(x,y) ∈ True ==> aunt(3)(x,y) ∈ (T1, T2, T3)

The metapattern of parts of the formulas can be transformed into the diagram.

How to read the transformation?

In Prolog, each term as such has an identical meaning. If the variable x is denoted by "mary" and mary is female, then the relation or attribute female(mary) is true. The variables x, y, z, ... are likewise identical. Obviously no "x" will be read as a "y"; we don't pass off an "x" as a "u".

In polylogics the situation is, happily, a little more flexible. The variables are free to occur as variables in different systems: the variable "x" can occur as the variable x in system S1, that is, as the variable x1.

In the same sense, the denotation "mary" can occur as female, or as sibling, or as parent, or as something else. Mary as Mary is, again, something else, maybe a secret.

Our model suggests the following reading:

x as female: x1, and mary as female: mary1

x as sibling: x2, and mary as sibling: mary2

z as sibling: z2, and stuart as sibling: stuart2

y as parent: y3, and kathleen as parent: kathleen3

z as parent: z3, and edward as parent: edward3

The result: aunt(mary,kathleen).

x as aunt: x4, and mary as aunt: mary4

y as -aunt: y4, and kathleen, as being in relation to her aunt: kathleen4

The simultaneity, for "mary", of being female and sibling, which is ruled in the Prolog model by the conjunction "et", is realized in the polylogical model by the mediation rule "§".

This example is very simple because the elements of the partition are simple; no composed formulas are included. In this respect there is no need to involve polycontextural negations, junctions and transjunctions. Only the operator of mediation "§" between distributed attributes and relations is involved.

Only if we freeze the scenario into a static ontological system does all the flexibility of the as-function (not to be confused with the as-if-function) boil down to the well-known non-flexible structure. But allowing a flexible ontology, with x as x1, as x2, etc., or mary as female, as sibling, etc., makes it possible to change the ontology and to be ready for new situations without starting the system from scratch. It is easy to freeze complexity, but there are no known rules for bringing a frozen and dead system back to life. Maybe that is the reason why artificial life is nevertheless so hard.
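The as-function can be sketched as an index map: one name carries different thematizations in different contextures. The dictionary representation below is only an illustrative assumption; the role assignments follow the reading given above.

```python
# The "as"-function as an index map: one name, several contextural roles.
# The dict representation is an illustrative assumption; the role
# assignments mirror the reading given in the text (mary1, mary2, ...).
AS = {
    "mary":   {1: "female", 2: "sibling", 4: "aunt"},
    "edward": {2: "sibling", 3: "parent"},
}

def thematize(name, contexture):
    """Return name-as-role in the given contexture, e.g. mary as female
    in S1 -> ('mary', 1, 'female'); None if not thematized there."""
    role = AS.get(name, {}).get(contexture)
    return (name, contexture, role) if role else None

print(thematize("mary", 1))  # ('mary', 1, 'female')
print(thematize("mary", 3))  # None
```

Adding a new situation means adding an entry to the map, not rebuilding the ontology from scratch; this is the flexibility the frozen mono-contextural model gives up.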

2.2.2 Prolog´s ontology

As a programming language based on First Order Logic (FOL), Prolog refers, as it must, to attributes, relations between attributes, inference rules, etc., and not to behaviors and contexts.

To be a parent is classically an attribute of a person, described as a relation to other persons; in PCL this attribute becomes a behavior, maybe of a person, in a complex situation. To be parents is not necessarily connected with the attribute of being married; to be a sibling need not be restricted to having the same parents; to be married need not involve different genders; and so on. And even that a person is different from another person, or that a person is identical to itself, is not as natural as it seems to be. All these presumptions are reasonable, and correspond to possible real-world models, only if all the possible ambiguities and over-determinations are ruled out in favor of a very special model of kinship.

The solution to this situation of complexity is not so much to enlarge the given ontology and introduce new differences and attributes to cope with the new situation, because this strategy is based on exactly the same ontological presuppositions and therefore only repeats the old scenario again.

In the framework of PCL, mechanisms are offered for great flexibility in interlocking and interweaving different points of view, situations, and modelings.

The decomposition of a universal domain into its different components not only introduces a conceptual advantage for the process of modeling; on a computational level, a new form of parallelism is also introduced.

The whole manoeuvre is quite similar to what I proposed as a proemial relation between sorts and universes in many-sorted first order logics.

2.2.3 The devil is in the detail

Polycontexturality does not start somewhere inside a complexity; it is virulent at the very beginning, in the basic definition of relationships.

Y as child of X, and Y as the father of Z, have to be mediated, synchronized, realized. Only in a stable hierarchical ontology is this relationship of Y as "child of" and "father of" automatically connected. And therefore "father of father" can be equal to "grandfather" and realized by a conjunction of the two relations: father(X,Y) et father(Y,Z) eq grandfather(X,Z).

In a polycontextural setting this identity of Y, as child and as father, cannot be presupposed but has to be established in a possible context. Y as child and Y as father have to be brought together in such a way that the transitivity can hold. It is easily possible that the transitivity is broken for some reason and has to be re-established. The reason why the transitivity can break lies in the poly-contextural assumption that an entity or a relation is not a simple identity but is involved in a cluster, or an intersection, of a multitude of possible contextures. Only for restricted and regulated situations can a complex situation reasonably be reduced to a mono-contextural one in which transitivity holds unrestricted. Therefore, identity cannot be presupposed; it has to be realized from case to case.
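The difference can be made concrete in a small sketch. Below, the classical grandfather composition goes through only when the identity of the shared term Y across the two father relations has been established; the `mediated` set is merely a stand-in assumption for the PCL mediation step, and the names are hypothetical.

```python
# Hypothetical father facts.
father = {("john", "paul"), ("paul", "ringo")}

# In PCL the identity of the shared term cannot be presupposed; it must be
# established. A plain set stands in here for that mediation step.
mediated = {"paul"}  # 'paul' as child and 'paul' as father are identified

def grandfather(x, z):
    """Classical: father(x,y) et father(y,z) eq grandfather(x,z).
    Polycontextural reading: the join over y additionally requires that
    y's identity across the two relations is mediated."""
    return any((x, y) in father and (y, z) in father and y in mediated
               for _, y in father)

print(grandfather("john", "ringo"))  # True
```

Removing "paul" from `mediated` breaks the transitivity even though both father facts still hold, which is exactly the point made above.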

Because of the relative autonomy of both relations in a complex kinship system, we can calculate and study them simultaneously, realizing some elementary parallelism. This is obviously not possible in a strict biological interpretation of the father-child relation; there we have to accept the hierarchical dependencies of the relations. But again, we have to be aware that this is the case only because we restrict the setting to a mono-contextural one. In contrast, real-world social relations are always highly complex.

Therefore we have two options, the mono- and the polycontextural. The advantage of the latter is flexibility; the advantage of the former is stability. Both have their weaknesses: flexibility is risky and dangerous, stability is restricting and killing.

2.3 Prolog´s double parallelism dismantled in polylogical systems

As mentioned above, Prolog has, in addition to its well-known parallelism of OR- and AND-procedures and some others, a new form of parallelism which is introduced by the process of deconstructing, dis-weaving, decomposing, de-sedimenting its basic ontology as presupposed in Prolog's database. This poly-contextural thematization of Prolog's ontology goes together with the possibility of modeling its classical parallelism in the architectonic parallelism of polycontextural logic systems, similar to the modeling of the graph parallelism of functional languages in the polycontextural architecture.

2.3.1 Polylogics

Each dimension, each level, each contexture of a complex object or system has its own ontology and its own logic. Furthermore, on a model-theoretic level, each contexture has its own theory of complete lattices in the sense of Garrett Birkhoff, which allows a very detailed analysis of the conceptual space of the objects belonging to each contexture. But lattices do not map the interactions between lattices of different contextures.
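A minimal example of such a per-contexture lattice is the powerset of a contexture's atoms ordered by inclusion, which is a complete lattice with meet as intersection and join as union. The sketch below is only an illustration of this Birkhoff-style structure for one contexture; the atom names are taken from the gender contexture above.

```python
from itertools import combinations

def powerset_lattice(atoms):
    """The powerset of a contexture's atoms, ordered by inclusion, forms a
    complete lattice: meet = intersection, join = union."""
    elems = [frozenset(c) for r in range(len(atoms) + 1)
             for c in combinations(sorted(atoms), r)]
    meet = lambda a, b: a & b
    join = lambda a, b: a | b
    return elems, meet, join

# One such lattice per contexture; each is analysed independently, since
# lattices do not map the interactions between contextures.
elems, meet, join = powerset_lattice({"female", "male"})
print(len(elems))  # 4
```

Each contexture gets its own such lattice; what the lattice-theoretic level cannot express, as noted above, is the mediation between the lattices of different contextures.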
