What are the principles of systems science?

Systems science was defined by Bertalanffy as the development of a mathematical-logical formulation of principles applicable to many different systems. Principles such as relativity are hypotheses regarding the laws of nature, supported by data, and best formulated as mathematical functional relationships. Principles such as Euclid's axioms describe formal systems. Neither scientific hypotheses nor mathematical postulates can be proven, but they must be refutable by the finding of contradictions with data or internal inconsistencies in a set of postulates.

What are the principles of systems science? With this question Tom Mandel raised an important issue and triggered an ongoing discussion. As part of this discussion, a roundtable was organized for the World Congress of the Systems Sciences (Toronto, July 2000), to be followed by an open discussion at the ISSS the same year and the following one. Each of the participants was invited to present a single principle, hoping in this manner to focus the discussion, as well as to introduce a diversity of perspectives.

Each section of this article is written by one author. Part one presents the principles chosen, including a brief statement, and a short description.

Y.P. Rhee

Department of National Ethics Study, Seoul National University, Seoul, Korea. Email address: rheeyp@plaza.snu.ac.kr

The task of systems science is to develop unifying principles that run vertically and horizontally through the universe of the individual sciences, bringing us nearer to the goal of the unity of science. If systems science as a transdisciplinary science can be highly developed in the future, its language can become the universal language of all fields. In the long perspective, this basic language could comprehensively unify the natural sciences, social sciences, and humanities.

a) Systems science as universal language. Science is a unity. All statements can be translated into the physical language. All states of affairs are of one kind and are known by the method utilized in the transdisciplinary science. If the physical language alone is intersubjective, then the physical language is the language of science, and science can be said to be the system of intersubjectively valid statements. In order to be the language of the whole of science, the physical language must be not only intersubjective but also universal. Thus in systems science the physical language is intersubjective and can serve as the universal language.

b) Systems science as the unifying science for the macro-micro linkage. It is generally assumed that in systems science the system must be brought into interaction with its environment. It is further assumed that in a complex system there are dynamic relations between the macroscopic and microscopic levels. It is known that one of the most important problems is the eventual feedback between macroscopic structures and microscopic events: in a complex system, the macroscopic structures emerging from microscopic events would in turn lead to a modification of the microscopic mechanism. Thus the macro-micro linkage provides systems scientists with a new analytic method for understanding complex systems.
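The macro-micro feedback described here can be sketched as a toy simulation (illustrative only; the agents, coupling constant, and update rule are my own assumptions, not drawn from the text): micro-level states produce a macroscopic average, and that emergent average feeds back to modify each micro-level state on the next step.

```python
# Toy sketch of macro-micro feedback: micro-level agents generate a
# macroscopic structure (their average), which in turn modifies the
# microscopic dynamics. All parameters are illustrative assumptions.
import random

def simulate(steps=50, n_agents=20, coupling=0.3, seed=1):
    rng = random.Random(seed)
    micro = [rng.uniform(-1.0, 1.0) for _ in range(n_agents)]
    for _ in range(steps):
        macro = sum(micro) / len(micro)          # emergent macroscopic level
        micro = [x + coupling * (macro - x) + rng.gauss(0, 0.01)
                 for x in micro]                  # macro feeds back on micro
    return micro

final = simulate()
spread = max(final) - min(final)
print(round(spread, 3))  # the agents have clustered near the macro average
```

The point of the sketch is only the loop structure: neither level can be analyzed alone, because each step's microscopic rule contains the macroscopic variable.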

c) Systems science as the unifying evolutionary science. Scientific development is a piecemeal process that includes scientific technique and knowledge; in fact, science has developed through revolutionary breakthroughs as well as the accumulation of individual discoveries and inventions. In the last three decades the core of the scientific revolution has come from a major conceptual shift in physics, namely the nonlinear revolution. Its major elements combine to form an important shift in our understanding of scientific evolution as a single overall process from molecules to humankind. It is clear that a new consensus on systems science as the unifying evolutionary science has been emerging among systems scientists. The proper theoretical and methodological problem is not how to reduce one level to another but how the levels are linked and interconnected. Thus systems science can develop further by incorporating this new conceptual shift from the transdisciplinary sciences into a unifying evolutionary framework.

James Bohman, New Philosophy of Social Science: Problems of Indeterminacy (Cambridge, Mass.: The MIT Press, 1991).

Rudolf Carnap, Unity of Science (Bristol, England: Thoemmes Press, 1995).

Thomas S. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1970).

Ilya Prigogine and Isabelle Stengers, Order out of Chaos (Toronto: Bantam Books, 1984).

Ilya Prigogine, The End of Certainty (New York: The Free Press, 1997).

T. Mandel

A system itself is different from an element, because systemic inquiry studies how elements act together: it studies their relationships. It is these relationships that have emergent properties, which are then experienced as the whole. The whole is our experience of the emergent properties of relationships, much as the information on this page lies in how the black and white are put together, not in the black or the white itself. Thus what constitutes a system are particular relationships such as interaction, organization, feedback, and so on.

Classical (reductionistic) science studied objects (see Peirce) subjectively isolated (see Whorf) as elemental entities (things). Many systems scientists, following Heraclitus, focus instead on relationships; see Charles François's proposal for a theory of connections.

Systemics is also very old, with roots far deeper than the cybernetic (feedback) science through which it blossomed into modern vogue. To be clear: while the science of systems evolved one aspect (slice) at a time, the underlying notion of wholism, the basic idea of a system, can be traced back to the beginning of recorded history, to Heraclitus, Empedocles, and Lao Tzu.

Systemics is not a new development or logical consequence of an old science. It is a new way of looking at old science, a new research paradigm. Systemic science therefore does not invalidate old science; rather, it attempts to integrate it.

In order to accomplish this systemic inquiry from the ontological (fundamental) perspective, it is necessary that all viewpoints be assumed valid, because sooner or later all viewpoints come under consideration. The systemic (holistic systems) approach is first of all, and necessarily so, multi-perspectual. So we take into consideration both what is known as analytic thought and what is known as synthetic thought; we consider both the parts and the wholes.

What is "new" is that we (can) do this by considering the relationships common to both parts and wholes. What I have just said is a general statement. If I said the same thing in specific terms, I would be saying that "clapping of two hands is how all systems work." This is untrue. But in one general sense, it is true.

Perhaps one of the more significant validations is the role of philosophy (the general) and science (the particular). Systemics embraces both as a complementary pair. This redefinition of philosophy as complementary to science is new, and bears repeating: philosophy is the study of general principles, and science is the study of specific applications of those principles. The boundary between them is based on the principle of verification.

Ontologically, conceptual knowledge can be assigned to four perspectual levels in a fourfold archetypal scheme: 1) the object (Monism); 2) distinct objects (Dualism); 3) their relationships (Relationalism); 4) all of the above as a whole (Wholism). Classical science has as its emphasis the Object. Systemics, on the other hand, has as its emphasis the Relationships: how objects interact to form emergent wholes. Bertalanffy says, "Compared to the analytical procedure of classical science with resolution into component elements and one-way or linear causality as basic category, the investigation of organized wholes of many variables requires new categories of interaction, transaction, organization, teleology..."

There are (extraordinary) features of systems that can play an important role in research. First is the observation that properties of the whole cannot be discovered by an analysis of the constituent parts. This is in effect saying that an analysis of parts alone will not lead to any knowledge about what they do as relational components in a system; very little can be learned about depth of vision from the analysis of one eye. Bertalanffy says: "The meaning of the somewhat mystical expression, 'The whole is more than the sum of its parts' is simply that constitutive characteristics are not explainable from the characteristics of the isolated parts."

The second extraordinary feature is emergence. Emergence is difficult to explain, so consider it in simple geometrical terms: picture a single point drawn on a sheet of paper. We can assign the value of location to any point we draw. By simply adding a second point on the paper, a new feature, the line, emerges, and we can now assign a value of distance between the two points. Similarly, a third point gives us area and a fourth gives us volume. In each case something new emerged that cannot be found in the previous state. The emergent relationships between relationals are experienced by an observer as a new whole; e.g., the emergent relationship between gases is experienced by us as liquidity. Relationships are of a different categorical level from objects. In short, objects are nouns (identity), while relationships are verbs (action).
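The point-line-area-volume example can be made computational. A sketch (the helper functions are my own, using the standard shoelace and triple-product formulas): each time a point is added, a quantity becomes computable that simply did not exist at the previous stage.

```python
# Emergent quantities as points are added: one point has only location;
# distance emerges with the second, area with the third, volume with
# the fourth. Each quantity is undefined at the previous stage.
import math

def distance(p, q):
    return math.dist(p, q)                # needs two points

def triangle_area(p, q, r):               # needs three points:
    # shoelace formula on the first two coordinates
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2

def tetra_volume(p, q, r, s):             # needs four points:
    # |scalar triple product| of edge vectors, divided by 6
    a = [q[i]-p[i] for i in range(3)]
    b = [r[i]-p[i] for i in range(3)]
    c = [s[i]-p[i] for i in range(3)]
    det = (a[0]*(b[1]*c[2]-b[2]*c[1])
         - a[1]*(b[0]*c[2]-b[2]*c[0])
         + a[2]*(b[0]*c[1]-b[1]*c[0]))
    return abs(det) / 6

p, q, r, s = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(distance(p, q))        # 1.0 -- emerges with the second point
print(triangle_area(p, q, r))  # 0.5 -- emerges with the third point
print(tetra_volume(p, q, r, s))  # emerges with the fourth point
```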

This new emphasis on relationships requires a new language, not at all unlike switching from nouns to verbs. (Adding a gerund to a noun makes it a verbing.) Korzybski calls for a "non-elemental" language based on structure. Unfortunately, little has been done toward a relational language in spite of Wittgenstein's contextuality, Whorf's linguistic relativity, and Bohm's relational rheomode.

Perhaps I should have mentioned this in the beginning, but all of this writing is a map which is "not the territory". It is how I drew it, not what it is like. This is true for everyone: our language is elemental, so we have learned to think elementally, in terms of separate things, as if everything were separate from everything else. But reality is relational, and we shouldn't get the two mixed up. "Do not mistake the pointing finger for the moon," Zen says. (Just this bit alone constitutes a major paradigm shift...)

Having said that, systems are capable of working with many kinds of specific relationships. The primary relationship is organization. A collection is not systemic: a collection of batteries, light bulbs, and wire does not make a flashlight. Only when they are organized in the correct way (put together) does the flashlight work. (Hmmm: what is the MCL [Mechanical Correlate of Light] in the flashlight?)
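The flashlight point can be put in code: the same parts, differently organized, either do or do not form a working system. The sketch below is purely illustrative (the part names and the "closed loop" criterion are my own simplifications):

```python
# Same parts, different organization: only the wiring that closes a
# single loop through every part makes a working flashlight.
parts = {"battery", "bulb", "switch"}

def works(connections):
    # Toy criterion: every part appears in the wiring, and each part
    # has exactly two connection endpoints (a closed circuit).
    endpoints = [p for pair in connections for p in pair]
    return (set(endpoints) == parts
            and all(endpoints.count(p) == 2 for p in parts))

pile    = []                                           # a mere collection
partial = [("battery", "bulb"), ("bulb", "switch")]    # loop not closed
wired   = [("battery", "bulb"), ("bulb", "switch"), ("switch", "battery")]
print(works(pile), works(partial), works(wired))       # False False True
```

The collection and the system contain identical elements; what differs is the set of relationships, which is exactly the author's point about organization.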

There are other aspects such as process, the flow of electrons in a flashlight for one example. The process is necessary because systems interact, and it is this interaction over time that is described by a process. We turn the light off by breaking the process, the flow, the circuit.

Relationships can also be described in terms of information, again in the sense that information is presented as a process flow as opposed to "bits of information". Systemically it is mutual informing rather than "a bit of information". The difference between our conceptual systems and natural systems, the map and the territory, is most evident when we examine how a natural system works and compare this to our conceptual representations: it becomes evident that we are selecting rather than describing. Conceptual systems are created by forming boundaries (ironic, because forming boundaries got us into a mess to begin with...), like the skin on our body. By forming this skin, systems can be open or closed to the environment. Inert matter could be considered a closed system, while living matter is open.

The obvious point is that open systems cannot be taken alone. Gaia, as a system, includes the sun, right? In general, systems reside within an environment. Development is both hierarchal and holoarchal, vertical and horizontal. (A radically new development bears mention here: the discovery of the ZPE, aka "vacuum energy", enables a vertical hierarchy with a nearly infinite ground, AND a holoarchy that non-locally interconnects.) As Erwin Laszlo writes in his latest book, "space does not separate us, it joins us."

A reference is sometimes necessary to consider internal or external relationships in relation to the observer, because there are subjective and objective perspectives. A subjective perspective is an abstraction of an objective reality. It is necessary to know that our statements about objective reality are more like selections, if you will, which we make subjectively according to the assumptions we have formed, or have not formed. Whorf tells us that our language determines to a significant degree what our assumptions will be, and further that identical data presented to various researchers will produce different conclusions depending on the assumptions dictated by their language. Language is a tool, meant to be used (not to use us), and the kind of language determines what sort of tool we are using.

Systemic inquiry deals with qualities as opposed to quantities. Mathematical formulations are certainly desirable, but the mathematics of systemics may be extraordinary too. Ordinarily, one plus one equals two. But in a systemic world, one plus one equals one. Or eleven. To express this formally requires a new mathematics (see group theory). The point is that mathematics is also a language, and the sort of mathematics being used determines what can and what cannot be said with it.
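The claim that one plus one can equal one, or eleven, is easy to exhibit concretely: which answer we get depends entirely on which algebra the symbols live in, which is exactly the sense in which mathematics is a language. (The examples below are my own illustrations, not from the text.)

```python
# "One plus one" under different algebras -- the answer depends on
# the system of operations, i.e., on the mathematical language chosen.
print(1 + 1)        # ordinary arithmetic: 2
print(1 | 1)        # Boolean union (OR): 1 -- two merged into one whole
print("1" + "1")    # concatenation: 11 -- two ones standing together
print(max(1, 1))    # lattice join, another sense in which 1 + 1 = 1
```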

Complexity in a system is a matter of viewpoint. Again a new perspective is at work, and just as important is simplicity. Indeed, complexity is relative, complementary to simplicity. Stewart and Cohen propose a development that goes like so: from simplicity to complexity to simplexity to complicity (note the spelling). Picture the evolution of an embryo: the process of differentiation/integration develops from simple to complex and back to simple, but now part of something else acting complex.

Murray Gell-Mann, co-founder of the Santa Fe Institute, has created a new science he calls plectics, the study of the simple and the complex. Systems do not necessarily unite; in most cases the interaction is integration. Compare epoxy with concrete: epoxy unifies parts A and B, while concrete integrates them.

(Note) Because the domain of conceptual systems is so vast, applicable from the atomic systems (e.g., carbon cycle) to literature as a system, a great deal of knowledge is relevant. So much so that I have given up trying to compile it (An excellent compilation of three thousand terms by Charles Francois was recently published as the International Encyclopedia of Systemics and Cybernetics) and have instead turned to models - perhaps the new language of systemics.

But most interesting of all is the possibility that there is a general scheme which nature has been working with. If nature operates according to a single principle, then this principle would be interpenetrative. It is likely that nature "began" as a simple act, the simplest action, and has reiterated that same principle up till now. We would therefore be able to find it in all aspects of reality.

Bertalanffy thinks so, enough to quote Nicholas of Cusa on the coincidentia oppositorum, though Bertalanffy wonders whether this is an artifact of our "languageing" or does in fact have a metaphysical reality. Salk thinks so too. He says, "In order to understand anything we must have a sense of the fundamental connections which form the backdrop of all experience."

L. H. Kauffman

University of Illinois at Chicago

Abstract. All (formal) systems are interpreted (formal) systems. Each abstract pattern has its origin in experience and returns to that experience. The boundary between systems as systems in the world and systems as mathematical systems can only be drawn as a convenience (or a hindrance). In reality, systems are articulations of direct experience and the articulation of experience is the act of creation of (described) systems.

0. Introduction

Mathematics is the study of patterns. To the mathematician his or her structures
take on the appearance of a permanence that can seem far more real than
the familiar world of objects, processes and human experiences. This vivid
presence of objects (as simple as the number 17 or the form of a triangle)
that seem to have existence only in the minds that think them leads to the
notion that they really exist independent of those minds. Is this the case?
And what is the relation of mathematics to the worlds of our experience?
This essay explores these questions by first taking up the theme of formal
systems and then branching out into a discussion of the meaning of formalism
in relation to mathematics as a whole.

The main point that I wish to make here is that on the one hand we stabilize and make real for ourselves mathematical structures by using formalisms of all sorts. On the other hand no single formalism can capture in its entirety what needs to be expressed. In ordinary language, in the arts and in literature it is obvious that every mode of expression has its limits. In mathematics, the limitations of a given mode of expression can actually lead the way to new modes of thought that transcend that mode of expression.

Mathematics arises from our experience of the world and our need to express
that experience in language and pattern and process. But just as language
and the world are intertwined so it is with mathematics and the worlds that
it describes and creates. Ultimately we cannot distinguish between the models
and the worlds that they model. It is a paradox. The very process of careful

abstraction and clear modeling (where the map is not the territory) leads
to an intertwining of pattern, language, process and models that is the
very ground of our existence.

The essay itself explores these themes. It is a rough attempt at an articulation.

I. Genesis of Formal Systems

I will consider the nature of formal systems as a mathematician

tends to speak of them. I will speak informally about formal systems.

It is important to realize that the idea of mathematics as an abstract game of symbols is relatively recent. Up until the discovery of non-Euclidean geometry in the 1800s it was assumed that mathematics described collections of necessary truths about the world, truths derived from unassailable axioms. Then, with Bolyai, Lobachevsky, Gauss and Riemann, it became clear that the basic patterns of geometry did not have to assume the parallel postulate (through a point distinct from a line there is exactly one line parallel to that line). The patterns of geometry became just that, patterns, malleable and susceptible to a multiplicity of interpretations. The truth of geometry became a truth relative to the axioms, and the axioms became mere suppositions in a sea of possible suppositions.

Along with this advent of non-Euclidean geometry there came the rise of symbolic logic, particularly at the hands of George Boole, who saw an analogy between algebra and logic. Boole did not just symbolize logic, he saw that ordinary algebra could be interpreted as logic. He recognized that one could write A+B and mean "A or B" and that one could write AB and mean "A and B". He saw that algebraic rules such as the distributive law A(B+C) = AB+AC could be interpreted for logic. He saw in a global way that one could set up an algebraic system that looked very much like the ordinary algebra of numbers, but that this system would allow logical calculation and the analysis of complex arguments.
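Boole's reading of algebra as logic can be sketched directly. (The numeric encoding below, with "or" adjusted so that 1 + 1 stays 1, is a common modern rendering for illustration rather than Boole's own exclusive sum.)

```python
# Boole's analogy: with 0 = false and 1 = true, read A + B as "A or B"
# and AB as "A and B"; algebraic laws like distributivity survive the
# logical reinterpretation.
def OR(a, b):  return a + b - a * b   # "+" adjusted so 1 + 1 stays 1
def AND(a, b): return a * b

for A in (0, 1):
    for B in (0, 1):
        for C in (0, 1):
            # distributive law A(B + C) = AB + AC, in its logical reading
            assert AND(A, OR(B, C)) == OR(AND(A, B), AND(A, C))
print("distributive law holds in all 8 cases")
```

The same symbols calculate with numbers in one interpretation and analyze arguments in the other, which is the heart of Boole's revolution as described above.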

Thus, from the point of view of algebra, Boole's revolution is as big as the revolution that produced non-Euclidean geometry. Now one had non-standard algebra, open to interpretations that led far beyond the confines of the description of the properties of numbers.

This revolution in algebra was not confined to algebraic logic. Boole's

contemporary Sir William Rowan Hamilton discovered the Quaternion and in
so doing introduced one of the first significant non-commutative algebras.
Hamilton's Quaternions are an algebraic structure on four dimensional space
that turns out to have far reaching consequences for the structure of rotations
in three-space and for many applications in physics. In fact it was not
until the advent of quantum theory in the 20th century that the quaternions
came into their own as the group SU(2) so central to the quantum mechanics
of spin.

The difference in this change of view about algebra was every bit as significant as the change that happened when Descartes observed that geometry could be done by using algebra. But let us recall the structure of Descartes' discovery. Descartes saw that if we model the two-dimensional Euclidean plane by pairs of coordinate numbers (x,y), then geometric relationships can be mapped precisely to algebraic relationships. Cartesian geometry is based on the distance function (x^2 + y^2)^(1/2), representing the distance of the point (x,y) from the origin (0,0) in the plane. Thus a circle of radius R becomes the set of points (x,y) such that x^2 + y^2 = R^2. The formalism of numbers remains the formalism of numbers. The patterns of geometry remain the patterns of geometry. But now these two fields of mathematics are one.

As we now know, these discoveries of Descartes opened up the way for non-Euclidean geometry since a major pathway to that arena is the use of different sorts of distance functions to map out the non-Euclidean models. His discovery also paved the way to the discovery of the Quaternions, for the precursor of the Quaternions was the discovery of the geometric interpretation of the complex numbers by Gauss and Argand. That discovery can be stated in utmost simplicity with the equation

a+bi = (a,b).

On the left hand side we have the complex number a+bi with its imaginary
part bi and the imaginary value i whose square is -1. On the right hand
side is the Cartesian geometric point in the plane. The correspondence explains
everything!

All the strange formal properties of the complex numbers have beautiful
geometric interpretations, and a whole new field of mathematics opens up.
(The beginning of this is the equation (a+bi)(a-bi) = a^2 + b^2, relating
Euclidean/Cartesian distance to the product of a number and its conjugate.
Along with this is the fact that i(a,b) = (-b,a), interpreting i as a
rotation of ninety degrees in the Cartesian plane.)

Although used for hundreds of years before Gauss and Argand, these imaginary numbers were only understood as an abstract formalism, a game of symbols where (a+bi)(c+di) = (ac-bd) + (ad+bc)i. This formula is obtained directly by multiplying out the products and using the equation i^2 = -1. And yet it was known that these complex numbers could be used to reason to perfectly real and sensible answers to the solutions of higher degree equations. It is here that one begins to see the seeds of a view of mathematics as pure formalism, for what is the nature of these abstract manipulations that lead to real answers?! It is useful here to pause and give some examples of this sort of magic. First of all let z = a+bi and z* = a-bi. Then, as we indicated in the last paragraph, zz* = a^2 + b^2. It is also easy to see that (zw)* = z*w* for any complex numbers z and w (we leave this as an exercise for the reader). Thus

(zz*)(ww*) = (zw)(z*w*) = (zw)(zw)*

and these equations tell us that

The product of two sums of two squares is itself a sum of two squares.

In fact, the equations give us a specific answer to that question, for if z = a+bi and w = c+di then

zw = (ac-bd) + (ad+bc)i and so we have

(a^2 + b^2)(c^2 + d^2) = (ac-bd)^2 + (ad+bc)^2.

For example

(3^2 + 4^2)(5^2 + 7^2) = (3x5 - 4x7)^2 + (3x7 + 4x5)^2 = 13^2 + 41^2.

An abstract formalism informs the real world of numbers.
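Python's built-in complex numbers make this magic easy to check; the numbers 3, 4, 5, 7 are the ones used in the example above.

```python
# Checking the two-squares identity via complex multiplication:
# (a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2
a, b, c, d = 3, 4, 5, 7
z, w = complex(a, b), complex(c, d)
zw = z * w
print(zw)                          # (15-28) + (21+20)i = (-13+41j)
print((a*a + b*b) * (c*c + d*d))   # 25 * 74 = 1850
print(int(abs(zw) ** 2 + 0.5))     # |zw|^2 = 13^2 + 41^2 = 1850
```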

Hamilton's discovery of the Quaternions comes on the heels of the geometric interpretation of the complex numbers. In fact it was Hamilton who pointed out that the essence of this interpretation was a method of multiplying couples of numbers in the form

(a,b)(c,d) = (ac-bd, ad+bc),

exactly the abstraction of the formula for the product of a+bi and c+di. With that understanding, Hamilton went into a long investigation hoping to find good ways to multiply triples -- to find an algebra structure for three-dimensional space. This direct attack was doomed to failure, but when he went to quadruples and to four-dimensional space, then success shined on the project and the Quaternions were born.
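A minimal rendering of Hamilton's quaternion product, written as 4-tuples (w, x, y, z) rather than in Hamilton's own notation, shows the non-commutativity that made the discovery so startling: ij = k but ji = -k.

```python
# Quaternion product of p = w1 + x1 i + y1 j + z1 k and
# q = w2 + x2 i + y2 j + z2 k, represented as 4-tuples.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0); j = (0, 0, 1, 0); k = (0, 0, 0, 1)
print(qmul(i, j))   # (0, 0, 0, 1)  = k
print(qmul(j, i))   # (0, 0, 0, -1) = -k: order matters
```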

At this stage abstract algebra was born, and with it the possibility of thinking of mathematics as the study of pure formal games. Games of rules for the manipulation of certain kinds of symbols. Vast possibilities open at this thought, for then mathematics becomes a study of patterns of all kinds codified in precise languages whose meanings need only exist in relation to the play of the patterns themselves. Truth is abandoned to indication. Indication is abandoned to void in the form of the creation of those systems themselves.

The power of formalism was appreciated in real mathematics. Newton's
calculus demonstrated that the right formalism could penetrate the depths
of dynamical natural law. Leibniz dreamed of a calculus ratiocinator, but
also gave us the remarkable notation for the Newton/Leibniz calculus that
streamlines its understanding and application to the present day. Euler,
master of creative formalism, encapsulated infinity in his discovery of the
intimate relationship of pi, e and i in the formula e^(i pi) + 1 = 0, his
incredible discovery that pi^2/6 is the sum of the reciprocals of the squares
of the natural numbers, and his fantastic proof of the infinitude of the
prime numbers that is encapsulated in the formula

1 + 1/2 + 1/3 + 1/4 + ...

= (2/1)(3/2)(5/4)(7/6)(11/10)(13/12)(17/16)...

(The series on the left is the sum of the reciprocals of the natural numbers. The product on the right is the product over all prime numbers p of the ratios (p/(p-1)). It is not hard to see the series on the left diverges (slowly!) and so the product on the right must be infinite. Hence there are infinitely many prime numbers.)
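The divergence can be watched numerically. The sketch below (the cutoff N = 1000 is an arbitrary choice of mine) computes a partial harmonic sum and the partial Euler product over the primes up to the same cutoff; the product dominates the sum, and both grow without bound as the cutoff increases.

```python
# Partial harmonic sum vs. partial Euler product over primes p of
# p/(p - 1). The product expands into a sum of reciprocals that
# includes every term of the harmonic sum, so it is always at least
# as large -- and both diverge.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

N = 1000
harmonic = sum(1.0 / n for n in range(1, N + 1))
product = 1.0
for p in (n for n in range(2, N + 1) if is_prime(n)):
    product *= p / (p - 1)
print(round(harmonic, 2))
print(round(product, 2))
```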

Bertrand Russell and Alfred North Whitehead took these themes to a new place in their Principia Mathematica, attempting to found all of mathematics on symbolic logic. Their logic was codified into a formal system with specific syntactic rules governing all allowable operations. A proof is a sequence of formulas, each following from those before it according to either the specified initial rules or a jump sanctioned by a previously proved theorem. In essence, each proof could be expanded until it was a sequence of formulas each following from its predecessors according to the specified rules of the system. All theorems of mathematics were then to be the emanation of one small set of rules based on the formalization of logic: a fantastic mechanical loom of reason. Russell was quoted as saying, "Mathematics is the subject where we do not know what we are talking about nor whether what we are saying is true."

This quote from Russell is a perfect parody of the situation of analyzing an uninterpreted formal system. If the role of the mathematician is to be the examiner of the consequences of a formal system on the basis of its stated rules, then indeed he need have no regard for meaning and no regard for truth! But this is as strange a parody of the mathematician as is Searle's parody of a translator of languages! Recall that Searle's translator sits in an isolated room (the Chinese Room) with a dictionary and other lists of rules. Pieces of paper are handed to him. He looks up their translations in his book. He knows nothing of Chinese. He just follows the rules. He writes down the translations on other slips of paper. Sometimes the rules direct him into reactions that are not exactly translations but "responses" to what is written. He does his work. To the outside world the Chinese Room appears to converse in Chinese! The operator of the room knows nothing of this. He just follows the rules of a formal system plus dictionaries. He has no intelligence for Chinese. Yet Operator plus Room form a perfect Chinese conversant.

Shift to present time. Computers prove theorems! In fact, a few years ago a computer at Argonne Labs in Illinois solved a problem in Boolean algebra that had stumped human beings for fifty years. The feat was accomplished by using the computer exactly as the operator of a formal system that encoded the problem. Strictly speaking, one knew how to direct the computer in a search for a legal sequence of moves from the beginnings of the system that would constitute a proof that "Robbins algebras are Boolean." The computer needed no meaning, no idea of truth. It just followed the rules, engaged the search and after a week of computation found the pathway that solved the problem.

Russell and Whitehead thought that they could reduce all of mathematics to a system that could be fed into such a computer. Just add electricity and wait. All theorems will come forth. The concept seemed right at the time. Just get the basic principles encoded and everything else will come out via mechanism in the meaningless gyrations of legal sequences of moves.

There is something wrong here, and that wrongness was eloquently pointed out in two different but related ways by Ludwig Wittgenstein, in his early work the Tractatus Logico-Philosophicus and in his later work on the foundations of mathematics. It was given a death blow by Kurt Gödel, who showed that any sufficiently rich formal system will have true statements that it cannot prove, statements that can nevertheless be proved by a competent mathematician who has access to the system. We will illustrate the basic form of Gödel's argument, but the key point in his work is the rehabilitation of a basic dramatization of the mathematician. Gödel's mathematician does not just check whether sequences of rules in the formal system are obeyed or not obeyed. He is not Searle's operator in the Chinese Room. Not at all! Gödel's mathematician analyzes the structure of the formal system as a whole. He looks on the formal system in question as a mathematical structure to be studied. That system, for Gödel's mathematician, is like a triangle in the hands of Euclid. Euclid proves that the triangle has the sum of its angles equal to 180 degrees, a fact quite unknown to the triangle herself! Gödel's mathematician stands meta to the formal system and endows that system with an interpretation as a formal system among formal systems. As soon as a system is well defined in such a way that it can be regarded as uninterpreted and strictly rule driven, it becomes an object of study and thus acquires the interpretation of a structure to be looked at among the panoply of other such structures.

Shift again to modern times. We make a working computer program. This program is precisely designed to be operable by a machine. It must work without any hint of its possible interpretation. In creating it and working with it we work in the class of computer programs that behave in such and such ways in such and such a language. The program is interpreted as a program. The formal system is interpreted as a formal system. There is no such thing as an uninterpreted formal system.

II. Gödel's Incompleteness Theorem

In this section I will give a miniature version of Gödel's argument
and a description of his actual argument. The miniature example is due to
Raymond Smullyan and I call it the "Smullyan Machine". The Smullyan
Machine acts at any given time by printing a string of characters on a tape
that emanates from its side. The operator of the machine presses the single
button on the top of the machine to activate each such printing.

We are told that the only characters that the Machine can use for its printing
are from the set { (, ), ~, P, R }. Thus the Machine might print (((((~))))))PPPRR~~R(.

Certain types of character strings are singled out as "Machine sentences" (M-sentences). These are all of the following form

P(X) or ~P(X) or PR(X) or ~PR(X)

where X can be any character string whatever. Thus ~P(RRR(P~) is an M-sentence with X=RRR(P~. We are told the following properties of the Machine:

1. M-sentences can be interpreted as descriptions of the possible actions of the Machine. In particular, P(X) is interpreted as the information that the Machine can print the string X all by itself. ~P(X) is interpreted as the information that the Machine cannot print the string X all by itself. PR(X) is interpreted as the information that the Machine can print the string X(X) all by itself, and ~PR(X) is interpreted as the information that the Machine cannot print the string X(X) all by itself.

2. Under this interpretation, the Machine only prints true M-sentences.

Theorem. There exist M-sentences that are true, yet unprintable by the Smullyan Machine.

Proof. Let S = ~PR(~PR). If the Machine can print S, then the interpretation
of S tells us that the Machine cannot print X(X) where X = ~PR. Thus the Machine
cannot print ~PR(~PR). We have therefore shown that if the Machine prints S,
a contradiction ensues, since S asserts its own unprintability. Therefore
the Machine (which prints only true M-sentences) cannot print S. But S asserts
its own unprintability by the Smullyan Machine. Therefore S is a true M-sentence
that is unprintable by the Machine. QED
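The diagonal construction in the proof can be checked mechanically. The sketch below (the helper names are mine, not Smullyan's) implements the interpretation of R as the step from X to X(X) and verifies that S = ~PR(~PR) is exactly the string whose unprintability it asserts:

```python
def diagonal(x):
    """Interpretation of the R operator: a sentence PR(X) or ~PR(X)
    speaks about the string X(X)."""
    return x + "(" + x + ")"

def is_m_sentence(s):
    """Check whether s has one of the four M-sentence forms."""
    for prefix in ("P(", "~P(", "PR(", "~PR("):
        if s.startswith(prefix) and s.endswith(")"):
            return True
    return False

S = "~PR(~PR)"
# S asserts that the Machine cannot print diagonal("~PR"), which is S itself
print(is_m_sentence(S), diagonal("~PR") == S)  # prints: True True
```

The self-reference is achieved without ever solving ~P(X) = X, just as the text explains.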

The Smullyan Machine illustrates a number of key issues about formal systems and their limitations. The Machine is simpler than the systems to which the full Gödel theorem applies, and of course the Machine does not engage in proofs. Nevertheless, the key issues of reference and interpretation in a metalanguage are neatly illustrated. To confront the issue of reference directly, note how we constructed S as a sentence that refers to itself. We might have attempted to solve ~P(X) = X. If there were such an X then the M-sentence ~P(X) would assert its own unprintability. However, there can be no solution to ~P(X) = X since ~P(X) has four more characters than the string X. Here we meet the direct notion of identity for character strings on which this construction was based. In fact, the way out was through the referential nature of R(X). R(X), in the interpretation of Machine sentences, stands for X(X). Unless X has a single character, X(X) has more characters than R(X). This is an image, in the Machine's formal system, of the usual linguistic condition of reference, where the referent is smaller than that to which it refers. Then we solve ~PR(X) = X(X) and discover the unique solution X = ~PR that makes the Machine incomplete with respect to truth.

How does this situation compare with the full Gödel Theorem? In that theorem the formal system is coaxed into producing a sentence whose interpretation is "This sentence is unprovable in the present formal system." Gödel's method is to set up a code wherein every piece of text in the formal system has a code integer from which the text can be decoded. The formal system is assumed to be rich enough to contain all the usual properties of the integers and the capability of formalizing proofs and algorithms involving integers. Let's write

g -----> F

to indicate that the formula or text F of the formal system has code number g. Now suppose that

g -----> F(u)

where F(u) is a formula with a free variable u (for example F(u) could be the statement u > 2). Let #g be the code number of F(g) so that

#g -----> F(g).

#g is the code number of the statement obtained by substituting the code number of F(u) into the variable u. Let's repeat

g -----> F(u)

#g -----> F(g).

Now the association of #g to g is a function from integers (a special subset of integers that encode statements with one free variable) to integers. Thus the specification for the computation of #g can be expressed in the formal system. Thus the formal system can be assumed to know (in the sense of its internal definitions) the symbol #. Therefore we can consider formulas of the form F(#u) that invoke the computation of #u. Suppose we have such a formula F(#u). It has a code number g so we have

g -----> F(#u)

#g -----> F(#g).

As a result, F(#g) refers to its own code number. In our interpretation, F(#g) refers to itself. This takes care of self-reference.
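The mechanics of the # operation can be sketched with a toy coding (a byte-level numbering rather than Gödel's prime-power coding; the names `code`, `decode`, and `sharp` are illustrative, not standard):

```python
def code(text):
    """Toy Gödel numbering: the integer whose big-endian bytes spell text."""
    return int.from_bytes(text.encode(), "big")

def decode(g):
    """Recover the text from its code number."""
    n = (g.bit_length() + 7) // 8
    return g.to_bytes(n, "big").decode()

def sharp(g):
    """#g: the code of F(g), where g codes a formula F(u) with free variable u."""
    formula = decode(g)
    return code(formula.replace("u", str(g)))

g = code("~B(#u)")
# decode(sharp(g)) is the text "~B(#g)" with the numeral for g filled in:
# a sentence that speaks about its own code number.
```

The point of the sketch is only that # is a computable function of g, so a sufficiently rich formal system can internalize it.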

Now consider provability. Let B(u) be the sentence in the formal system that asserts the provability of the decoding of the number u. B(u) is a statement about numbers that is decidable by an algorithm and so we can assume that it is stateable in the formal system. Given that, we let g be the code number for ~B(#u). Then

g -----> ~B(#u)

#g -----> ~B(#g).

~B(#g) asserts its own unprovability in the formal system. Hence it cannot be proven without contradiction within the formal system. Therefore it is a true statement that can be stated in the formal system, but is not provable within the system. This completes the proof of Gödel's Incompleteness Theorem.

The very precision of the notion of a formal system, the fact that it can be pinned down by coding, is the seed of incompleteness. The incompleteness comes about through the ability of the mathematical observer to analyze the system as a mathematical object.

Mathematics, as a whole, is not derived from a single formal system. If it were then we could step outside that system (in principle!) and apply Gödel's argument, producing a true Theorem (and its proof) unattainable by the formal system itself. Proof is not mechanical. Mathematics does not proceed inevitably from a codification of logic or any other codification.

We started with the idea of a formal system, quite self-contained, designed to handle ordinary arithmetic, but constructed so that every bit of arithmetic and logic that it needed was contained in its own formalism. The system was then apparently free from needing any interpretation. All processes could continue via its own internal rules. The statement ~B(#g), looked at from within the system, was just a statement about the properties of a certain computable number #g. The formal system itself has no way to survey the consequences of the provability of ~B(#g). But if it is consistent then it will never write a proof of this statement. Our proof of this same statement, as outsiders to the system, is a proof using the meaning of its interpretation in terms of provability. This meaning is part of our exact discussion of the meaning and behaviour of the formal system, just as exact as our discussion of whether or not the Smullyan Machine can or cannot print a given statement.

Meaning arises through interpretation and this meaning is the meaning that is assigned to the formal system itself. That it is a formal system provides a sufficient context and meaning for us to demonstrate its incompleteness.

III. Experience

This essay has been a discussion of syntax and semantics. Gödel's theorem shows that any attempt to found mathematical knowledge fully on syntax is doomed to failure exactly because the essence of mathematics is to make meaning out of syntax.

I have given numerous examples showing how the syntax in mathematical formalism is intimately related to its meaning. There is great freedom in examining a structure on a purely syntactic basis. This freedom is not at all the same thing as asserting that all of mathematics can be done in pure syntax (as though performed by a machine). In fact it is exactly in our experiencing of the formal system that its meaning arises and its incompleteness is fulfilled. After all, the Gödelian sentence that is unprovable inside the formal system is certainly not unprovable! It is proved by us, outside the formal system through our experience of that system.

This leads directly into the major theme of the relation of experience to mathematics. So far in this essay we have emphasized the role of personal experience in relation to the mathematics itself. This is the experience of the person working with the formalism and its associated ideas. It is through this experience that the mathematical person comes to live in a ground of language that allows the proof of theorems unavailable to a given formal system. And it is this same ground of experience that is related to the process of abstracting mathematics from the activities of the everyday world.

The everyday world is the source of all our mathematics. It is through our experiences in that world that systems of counting came to be invented. It is through our experiences in that world that systems of geometry became codified. The order and sophistication of these invented/discovered systems varies from culture to culture. The Incas reckoned by using knots tied in clusters of string (Quipu). Yet they did not (apparently) develop a topological theory of knots. Indeed our familiar mathematics did not develop a theory of knots until the end of the nineteenth century. The theory developed strongly in the twentieth century and it is still in a state of development. Yet the everyday experience of knots and weaves has been with us since nearly the beginning of human history. It is never obvious when a cultural pattern of activity will become a mathematical activity. There is no way to draw a boundary separating the patterns developed in culture, in practical work, in the arts from the pool of patterns that will be of significance to mathematics.

One may wonder whether the everyday world is in fact built of "mathematical stuff". In physics one is impressed with the precise correspondence of natural action and certain mathematical laws (such as the equations for gravity, electromagnetism and quantum theory).

In the realms of art, poetry and emotion we do not have quantitative models, but this does not mean that mathematics is not involved. One of the benefits of understanding formal systems for what they are, is that mathematics as a whole becomes a non-numerical study of pattern. Whole areas of mathematics are non-numerical and qualitative at their base. In this perspective one only arbitrarily draws a line between mathematical structures and the patterns of the dance, the instructions for a weaving or the text and meaning of a musical score.

It is my belief that Wittgenstein's dictum "The limits of my language are the limits of my world." (Tractatus Logico-Philosophicus) extends deeply into the relationship of mathematics and our worlds. The limits of mathematical language exist only to be transcended. In this sense there are no limits and this is so of our worlds as well. When we ask what are the limits of language, and see that in this sense of creativity there are none, we enter directly into the realm that develops art, poetry, music, dance and mathematical structure. That we distinguish these from one another in ordinary life is only a limitation of ordinary language.

PRINCIPLES OF UNCERTAINTY IN SYSTEMS SCIENCE

George J. Klir

Center for Intelligent Systems

and

Department of Systems Science & Industrial Engineering

Binghamton University - SUNY

Binghamton, New York 13902-6000, U.S.A.

There are three inseparable principles:

o Principle of minimum uncertainty.

o Principle of maximum uncertainty.

o Principle of uncertainty invariance.

These principles may also be viewed as principles of uncertainty-based
information. Their common thrust is that they are sound information
safeguards in dealing with systems problems. They guarantee that when we
deal with any systems problem, we use all information available, we do not
unwittingly use information that is not available, and we do not lose more
information than is inevitable.

The three principles apply to nondeterministic systems, in which relevant
uncertainty (predictive, retrodictive, prescriptive, diagnostic, etc.) is
formalized with a mathematical theory suitable for each application (probability
theory, possibility theory, evidence theory, etc.). The principles can be
made operational only if a well-justified measure of uncertainty in the
theory employed is available. Since types and measures of uncertainty differ
substantially in different uncertainty theories, the principles result in
considerably different mathematical problems when we move from one theory to another.

When uncertainty is reduced by taking an action (performing a relevant experiment
and observing the experimental outcome, searching through an archive and
finding a relevant document, etc.), the amount of information obtained by
the action can be measured by the amount of uncertainty reduced, that is,
the difference between the a priori uncertainty and the a posteriori uncertainty.
Due to this connection between uncertainty and information, the three principles
of uncertainty may also be viewed as principles of information. Information
of this kind is usually called uncertainty-based information [Klir &
Wierman, 1999].
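With the Shannon entropy as the uncertainty measure, this bookkeeping is a one-line subtraction. A minimal sketch (the four-outcome example is hypothetical):

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

prior = [0.25, 0.25, 0.25, 0.25]   # four equally likely outcomes: 2 bits
posterior = [0.5, 0.5, 0.0, 0.0]   # an observation rules two of them out: 1 bit

# Information obtained = a priori uncertainty minus a posteriori uncertainty
info_gained = shannon_entropy(prior) - shannon_entropy(posterior)
print(info_gained)  # prints 1.0 (one bit of uncertainty-based information)
```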

Principle of Minimum Uncertainty

The principle of minimum uncertainty is an arbitration principle. It facilitates
the selection of meaningful solutions from a solution set obtained by solving
any problem in which some initial information is inevitably lost. By this
principle, we should accept only such solutions for which the amount of
lost information is minimal. This is equivalent to accepting solutions with
the minimum relevant uncertainty (predictive, prescriptive, etc.).

A major class of problems to which the principle of minimum uncertainty
applies is simplification problems. When a system is simplified,
it is usually unavoidable to lose some information contained in the system.
The information lost in this process results in an equal increase in relevant
uncertainty (predictive, retrodictive, or prescriptive, for
example). A sound simplification
of a given system should minimize the loss of relevant information (or increase
in relevant uncertainty) while achieving the required reduction of complexity.
That is, we should accept only such simplifications of a given system at
any desirable level of complexity for which the loss of relevant information
(or the increase in relevant uncertainty) is minimal. When properly applied,
the principle of minimum uncertainty guarantees that no information is wasted
in the process of simplifications.
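As a small numeric illustration (the two-variable system below is hypothetical), replacing a joint distribution by the product of its marginals is one such simplification; the entropy increase it causes measures the information lost, and the principle tells us to prefer the candidate simplification for which this increase is smallest:

```python
import math

def H(p):
    """Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Joint distribution of two correlated binary variables
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = [sum(v for (x, _), v in joint.items() if x == i) for i in (0, 1)]
py = [sum(v for (_, y), v in joint.items() if y == j) for j in (0, 1)]

# Simplification: pretend the variables are independent
product = {(x, y): px[x] * py[y] for x in (0, 1) for y in (0, 1)}

# Increase in predictive uncertainty = information lost by the simplification
loss = H(list(product.values())) - H(list(joint.values()))
```

Here `loss` is about 0.278 bits, exactly the mutual information between the two variables that the simplification throws away.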

Given a system formulated within a particular experimental frame, there
are many distinct ways of simplifying it. Three main strategies of simplification
can readily be recognized.

· simplifications made by eliminating some entities from the system
(variables, subsystems, etc.)

· simplifications made by aggregating some entities of the system
(variables, states, etc.)

· simplifications made by breaking overall systems into appropriate
subsystems.

Regardless of the strategy employed, the principle of minimum uncertainty
is utilized in the same way. It is an arbiter which decides which simplifications
to choose at any given level of complexity.

Another application of the principle of minimum uncertainty is the area
of conflict-resolution problems. For example, when we integrate several
overlapping partial models into one larger model, the models may be locally
inconsistent. It is reasonable then to require that each of the models be
appropriately adjusted in such a way that the overall model become consistent.
To guarantee that no fictitious (biasing) information be introduced, the
adjustments must not decrease the uncertainty of any of the partial models
involved, but may increase it. That is, to achieve local consistency of
the overall model, we are likely to lose some information contained in
the partial model. This is not desirable. Hence, we should minimize this
loss of information. That is, we should accept only those adjustments for
which the total loss of information (or total increase of uncertainty) is
minimal. The total loss of information may be expressed, for example, by
the sum of all individual losses or by a weighted sum, if the partial models
are valued differently.

Principle of Maximum Uncertainty

The second principle, the principle of maximum uncertainty, is essential
for any problem that involves ampliative reasoning. This is reasoning in
which conclusions are not entailed in the given premises. Using common sense,
the principle may be expressed by the following requirement: in any ampliative
inference, use all information available but make sure that no additional
information is unwittingly added. That is, the principle requires that conclusions
resulting from any ampliative inference maximize the relevant uncertainty
within the constraints representing the premises. This principle guarantees
that our ignorance be fully recognized when we try to enlarge our claims
beyond the given premises and, at the same time, that all information contained
in the premises be fully utilized. In other words, the principle guarantees
that our conclusions are maximally noncommittal with regard to information
not contained in the premises.

Ampliative reasoning is indispensable to science and engineering in a variety
of ways. For example, whenever we utilize a scientific model for predictions,
we employ ampliative reasoning. Similarly, when we want to estimate microstates
from the knowledge of relevant macrostates and partial information regarding
the microstates (as in image processing and many other problems), we must
resort to ampliative reasoning. The problem of the identification of an
overall system from some of its subsystems is another example that involves
ampliative reasoning.

The principle of maximum uncertainty is well developed and tested within
the classical information theory based upon the Shannon entropy, where it
is called the maximum entropy principle. This principle was pioneered
by Jaynes [1983]. Perhaps the greatest skill in using this principle
in a broad spectrum of applications, often in combination with the complementary
minimum entropy principle, has been demonstrated by Christensen [1985-1986].
Literature concerned with the principle is extensive. An excellent overview
is a book by Kapur [1989], which contains an extensive bibliography.

A general formulation of the principle of maximum entropy is: determine
a probability distribution that maximizes the Shannon entropy subject to
given constraints, which express partial information about the unknown probabilities,
as well as general constraints (axioms) of probability theory. The most
typical constraints employed in practical applications are the mean (expected)
values of random variables under investigation, various marginal probability
distributions of an unknown joint distribution, or upper and lower estimates
of probabilities.
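This formulation can be sketched numerically with the classic die-with-known-mean example. The exponential-family shape p_i proportional to exp(lam * i) is the known form of the maximum-entropy solution under a mean constraint; here lam is found by simple bisection (the function name and parameter values are illustrative):

```python
import math

def maxent_die(target_mean, faces=range(1, 7), tol=1e-10):
    """Maximum-entropy distribution over die faces with a prescribed mean,
    using the exponential-family form p_i ~ exp(lam * i), lam by bisection."""
    def mean(lam):
        w = [math.exp(lam * i) for i in faces]
        z = sum(w)
        return sum(i * wi for i, wi in zip(faces, w)) / z

    lo, hi = -50.0, 50.0          # mean(lam) is increasing in lam
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

# With mean 3.5 the constraint adds no information: the result is uniform,
# i.e. the distribution maximally noncommittal beyond the premises.
p = maxent_die(3.5)
```

Raising the target mean to, say, 4.5 tilts the distribution toward the high faces, but only as much as the constraint demands.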

Principle of Uncertainty Invariance

The third principle, the principle of uncertainty invariance, facilitates
connections among representations of uncertainty and information in alternative
mathematical theories. The principle requires that the amount of uncertainty
(and information) be preserved when a representation of uncertainty in one
mathematical theory is transformed into its counterpart in another theory.
That is, the principle guarantees that no information is unwittingly added
or eliminated solely by changing the mathematical framework by which a particular
phenomenon is formalized. As a rule, uncertainty invariant transformations
are not unique. To make them unique, appropriate additional requirements
must be imposed.

In comparison with the principles of minimum and maximum uncertainty, which
have been investigated and applied within probability theory for at least
40 years, the principle of uncertainty invariance was introduced only in
the early 1990s [Klir, 1990]. It is based upon the following epistemological
and methodological position: every real-world decision or problem situation
involving uncertainty can be formalized in all the theories of uncertainty.
Each formalization is a mathematical model of the situation. When we commit
ourselves to a particular mathematical theory, our modeling becomes necessarily
limited by the constraints of the theory. For example, probability theory
can model decision situations only in terms of conflicting degrees of belief
in mutually exclusive alternatives. These degrees are derived in some ways
from the evidence on hand. Possibility theory, on the other hand, can model
a decision situation only in terms of degrees of belief that are allocated
to consonant (nested) subsets of alternatives; these are almost conflict-free,
but involve large nonspecificity.

Clearly, a more general theory is capable of capturing uncertainties of
some decision situations more faithfully than its less general competitors.
Nevertheless, every uncertainty theory, even the least general one, is capable
of characterizing (or approximating, if you like) the uncertainty of every
situation. This characterization may not be, due to constraints of the theory,
as natural as its counterparts in other, more adequate theories. However,
such a characterization does always exist. If the theory is not capable
of capturing some type of uncertainty directly, it may capture it indirectly
in some fashion, through whatever other type of uncertainty is available.

To transform the representation of a problem-solving situation in one theory,
T1, into an equivalent representation in another theory, T2, we should require
that:

(i) the amount of uncertainty associated with the situation be preserved
when we move from T1 to T2; and

(ii) the degrees of belief in T1 be converted to their counterparts in T2
by an appropriate scale, at least ordinal.

These two requirements express the principle of uncertainty invariance.

Requirement (i) guarantees that no uncertainty is unwittingly added or eliminated
solely by changing the mathematical theory by which a particular phenomenon
is formalized. If the amount of uncertainty were not preserved then either
some information not supported by the evidence would unwittingly be added
by the transformation (information bias) or some useful information contained
in the evidence would unwittingly be eliminated (information waste). In
either case, the model obtained by the transformation could hardly be viewed
as equivalent to its original.

Requirement (ii) guarantees that certain properties, which are considered
essential in a given context (such as ordering or proportionality of relevant
values), be preserved under the transformation. Transformations under which
certain properties of a numerical variable remain invariant are known in
the theory of measurement as scales.

Due to the unique connection between uncertainty and information, the principle
of uncertainty invariance can also be conceived as a principle of information
invariance or information preservation. Indeed, each model of a problem-solving situation,
formalized in some mathematical theory, contains information of some type
and some amount. The amount is expressed by the difference between the maximum
possible uncertainty associated with the set of alternatives postulated
in the situation and the actual uncertainty of the model. When we approximate
one model with another one, formalized in terms of a different mathematical
theory, this basically means that we want to replace one type of information
with an equal amount of information of another type. That is, we want to
convert information from one type to another while, at the same time, preserving
its amount. This expresses the spirit of the principle of information invariance
or preservation: no information should be added or eliminated solely by
converting one type of information to another. It seems reasonable to compare
this principle, in a metaphoric way, with the principle of energy conservation
in physics.

Examples of generic applications of the principle include problems that
involve transformations from probabilities to possibilities and vice versa,
approximations of fuzzy sets by crisp sets (defuzzification), and approximations
of bodies of evidence in evidence theory by their probabilistic or possibilistic
counterparts.
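As a sketch of the probability-to-possibility direction (the ratio-scale normalization shown here is one transformation discussed in this literature; treat the choice of scale, and the need for a further uncertainty-equating step, as assumptions):

```python
# Probabilities, ordered from largest to smallest, are converted to
# possibility degrees by the ratio scale poss_i = p_i / max(p).
# Requirement (ii): the ordering of the degrees of belief is preserved.
p = [0.5, 0.3, 0.2]
m = max(p)
poss = [x / m for x in p]
print(poss)  # prints [1.0, 0.6, 0.4]
```

A full uncertainty-invariant transformation would additionally tune the scale so that the possibilistic measure of uncertainty equals the Shannon entropy of `p`, per requirement (i).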

Christensen, R. [1985], "Entropy minimax multivariate statistical
modeling - I: Theory." Intern. J. of General Systems, 11(3), pp. 231-277.

Christensen, R. [1986], "Entropy minimax multivariate statistical modeling
- II: Applications." Intern. J. of General Systems, 12(3), pp. 227-305.

Jaynes, E. T. [1983], Rosenkrantz, R. D., ed., Papers on Probability, Statistics
and Statistical Physics. Reidel, Dordrecht.

Kapur, J. N. [1989], Maximum Entropy Models in Science and Engineering.
John Wiley, New York.

Klir, G. J. [1990], "A principle of uncertainty and information invariance."
Intern. J. of General Systems, 17(2-3), pp. 249-275.

Klir, G. J. and Wierman M.J. [1999], Uncertainty-Based Information: Elements
of Generalized Information Theory. Physica-Verlag/Springer-Verlag, Heidelberg
and New York.

H. Sabelli

Chicago Center for Creative Development

The interaction of opposites creates complexity. Systems are processes, i.e. transformations of energy (action). Oppositions between positive and negative actions encode information, and their synergic and antagonistic interplay creates tridimensional structure, and higher dimensional organization. This hypothesis is modeled by the process equation A(t+1) = A(t) + k * t * sin(A(t)), in which the bipolar (positive and negative) feedback generates a sequence of patterns: convergence, bifurcation cascade, periodicity, chaos, and bios.
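The process equation is straightforward to iterate. A minimal sketch (the initial value, gain k, and number of steps are illustrative assumptions, not Sabelli's published parameters):

```python
import math

def process_series(a0=1.0, k=0.005, steps=2000):
    """Iterate the process equation A(t+1) = A(t) + k * t * sin(A(t)).
    Because the bipolar feedback gain k*t grows with time, the series
    passes through the sequence of patterns named above: convergence,
    bifurcation cascade, periodicity, chaos, and bios."""
    a, series = a0, []
    for t in range(steps):
        series.append(a)
        a += k * t * math.sin(a)
    return series

series = process_series()
```

Plotting `series` against t makes the successive regimes visible as the effective gain increases.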

This principle formulates Heraclitus' interlocking concepts of diversifying unity, union of opposites, and creative becoming in terms of physical action, complementary information, and organization by creative feedback. These concepts guide the ongoing development of methods to study creative processes, and have made it possible to recognize bios, a pattern that appears to be common in natural processes heretofore suspected to be random or chaotic.

1. Action (energy): Physical action, the conjoint change of energy and time, is the universal constituent of reality at every level of organization. There is nothing simpler than action, because Planck's quantum, the smallest unit of existence, has the dimensions of action (both energy and time), not of energy alone. Likewise macroscopic processes are made of actions; e.g. cardiac action is the product of the force, duration and frequency of contraction. Energy is conserved as a quantity, but it is continually transforming in quality. Everywhere there is movement, nowhere rest or equilibrium. The uni-verse literally is, as its etymology indicates, a unidirectional flow.

This hypothesis has rich corollaries: (1) Change and conservation: Action implies both change and transitivity. (2) Monism: One and the same stuff underlies the diversity of the universe. This is possible because the "stuff" that makes the universe is action rather than static substance. At the simplest physical level, all forms of energy convert into each other (first law of thermodynamics) and into matter (Einstein's law). Ideas exist only as embodied in energetic and material processes, and are therefore capable of modifying other physiological processes. (3) Spontaneity: Action determines itself; it requires no external cause. (4) Quantity: Action is made up of discrete units at every level of organization -e.g. Planck quanta, action potentials, atoms. Action is a quantity. (5) Pasteur's universal asymmetry: Beginning with time, natural processes are asymmetric, in one dimension --action--, in two dimensions --information--, in three dimensions -structure--, and in multiple dimensions of organization, statistical distribution and systems' hierarchy. Thus systems are lattices (sets ordered by asymmetric and transitive relations), one of the mother structures of mathematics (Bourbaki, Piaget, MacLane).

2. Opposition (information): Opposition is a universal pattern of action that carries information and is embodied in complementary structures.

Opposition is universal, in space, in time, and in quality. There are opposites in every respect: action and reaction, positive protons and negative electrons, electromagnetic polarities, complementary DNA strands, anabolism and catabolism, feminine and masculine, cooperation and competition, certainty and uncertainty. As processes differentiate (bifurcate), the number and diversity of opposites multiply. Sharing a common origin, opposites are fundamentally similar; diverging from each other, they are fundamentally different and, in some respect, antagonistic.

Opposites are distinct but united. Every physical entity is both a particle and a wave (Bohr's quantum complementarity). Every person is both feminine and masculine, albeit to different degrees. Evolution and decay coexist in cosmology and history (enantiodromia), contrary to one-sided views of progress and of entropic decay. Opposites are complementary, not mutually exclusive. The relation between opposites shows three distinct layers. At the local level of organization, at which we are accustomed to think, opposites are separated in space, time or quality (local formulation of the logical principle of no-contradiction). Notwithstanding, opposites coexist as components of every process (global principle of dialectic contradiction). Quantum mechanics adds a third layer with its principle of superposition. These three layers can be illustrated with Schrödinger's famous cat. At the quantic level, the event that may trigger the cat's death is probabilistic, and thereby unpredictable at any particular time. At a particular time, the cat is either dead or alive, irrespective of whether or not the box has been opened and an observation has been made. At the process level, every living organism eventually dies, and while alive it is continually renovating its cells, literally living and dying at the same time. Process theory thus formulates the ancient idea of the union of opposites as dialectic global contradiction, local logical no-contradiction, and quantic superposition. This is not an eclectic compromise or a verbal avowal, but the statement of a complexity that has practical applications. Superposition, and its negation in entanglement, is fundamental to the design and eventual construction of quantum computers. Logical no-contradiction is essential in mathematics. Dialectic contradiction governs natural and human processes.
Schrödinger described the fact that the cat was necessarily dead or alive, in spite of the existence of superposition at the quantic level, as the result of the entanglement of atoms. From the systems perspective, one may point out that atoms in fact never are isolated; they always belong to a system, they always are entangled with others.

Opposites can be synergic (e.g. opposition of the thumb and the other fingers, opposite sides of a square), antagonistic, or both (nonlinear opposites, e.g. complementarity). Linear antagonists are inversely related: as one waxes, the other wanes. Naturally occurring opposites can both grow or diminish at the same time, albeit rarely if ever independently.

Harmony and conflict always coexist. There is harmony in the tension of opposites, as in the bow and the lyre (Heraclitus). Even homeostatic systems contain mutually antagonistic components; conflict is not the sole motor of change, and it often destroys more than it creates -- at variance with Darwin, Marx, and Freud.

Opposites are co-dominant, each prevailing in a different way, and/or at different time or place. Parents control their young children, but eventually power shifts to the younger generation. Male supremacy is widespread, but in modern society women outlive men, and everywhere mothers are the first universe, the first love, and the first identification figure (female priority). Even in authoritarian regimes, government depends on the governed. There never is absolute primacy. Each hierarchy generates its dual (a theorem of lattice theory). This principle applies not only to relations between classes at a given level of organization, but also to the relation between different levels of organization, illustrating Y.P. Rhee's complementarity of vertical and horizontal dimensions of systems. In the vertical dimension, the priority of the simple leads to the supremacy of the complex. The priority of the objective is followed by the supremacy of the subjective. These concepts enter into François' notion of priority of the simple and supremacy of the complex in the upbuilding of systems regulations and controls, and also relate to Salthe's notion that in order to capture the complexity of a system, minimally three scalar levels must be explicitly represented in the model.

Every entity contains its opposite in a diminished form: opposite qualities result from quantitative difference, not from different composition. Yet opposites are not polar extremes of a linear continuum, because they are in part similar and in part different, in part synergic and in part antagonistic, at times dominant and at times dominated. They must be conceived as located, in any relation, within a two-dimensional plane. Synergic opposites may be almost parallel; antagonism may approach linear polarity. Paradigmatic opposites are fully complementary, exactly orthogonal to each other (a relation usually interpreted as statistical independence). The simplest model for complementary opposition is the complementarity of the sine and cosine functions. Validating the model, fundamental processes such as light waves, carriers of energy and information, are constituted by orthogonal, bipolar and continuous electrical and magnetic fields.

Information is the news of a difference (Bateson). Asymmetric opposition, the difference between two values, is the unit of information. Communication is an interaction. Action carries information. Information is an action, not a separate substance. The quantum is the smallest unit of information. Uncertainty applies to subquantic fluctuations. According to quantum mechanics, apparently empty space consists of small fluctuations of energy, so fast that space appears symmetric and uniform, formless and informationless. Void and action, like other opposites, differ in quantity, not in substance. Macroscopic processes include quantum uncertainty; quantum flux contains action quanta. Certainty and uncertainty are not mutually exclusive, but coexisting opposites, as highlighted by the principles presented by G. Klir. There is also a third complement: misinformation (error, myth, deception).

Although opposites can be asymmetric (e.g. massive protons and small electrons make up atoms), opposition is a fundamental symmetry of nature, and a heuristic principle in physics. Physical entities are postulated, and subsequently discovered, for reasons of symmetry. Fundamental physical processes can be described as a group (a set in which every member has an opposite, its inverse), the second mother structure of mathematics. Just as inverses are necessary but not exclusive members of a group, natural opposites are embedded in larger systems of complementarity. Triads are common (e.g. tridimensional space, three primary colors), and fundamental (period 3 implies an infinite sequence of periodicities and infinitations -- Sarkovskii's theorem).
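The special role of period 3 can be illustrated numerically. The following sketch is a hypothetical aside, not from the original text: it uses the standard logistic map (unrelated to the authors' process equation), which has a stable period-3 orbit near the parameter value r = 3.835; by Sarkovskii's theorem, the existence of a period-3 orbit implies orbits of every other period.

```python
# Hypothetical illustration: the logistic map x -> r*x*(1-x) has a stable
# period-3 window near r = 3.835; Sarkovskii's theorem then guarantees
# that orbits of every other period also exist for this map.
def logistic_orbit(r, x0=0.5, transient=2000, length=9):
    x = x0
    for _ in range(transient):      # discard transient iterations
        x = r * x * (1 - x)
    orbit = []
    for _ in range(length):
        orbit.append(x)
        x = r * x * (1 - x)
    return orbit

orbit = logistic_orbit(3.835)
# The settled orbit repeats every 3 steps, but not every step.
assert all(abs(orbit[i + 3] - orbit[i]) < 1e-6 for i in range(6))
assert abs(orbit[1] - orbit[0]) > 1e-3
```

The parameter value and tolerances are choices made for this sketch, not quantities given by the authors.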

3. Non-uniform, relatively stable structure (matter) is a universal component of systems. Tridimensional interactions generate non-uniform stabilities, i.e. structure, rather than formless, uniform, disordered equilibrium. Matter is the tridimensional structuration of mass. Mass exists everywhere, condensed as in matter or sparse as in the energy that fills the void. Matter is energy concentrated in a tridimensional structure, i.e. a relatively stable tridimensional asymmetry. Energy and matter are interconvertible: E = m·c². As the speed of light c represents a maximum in energetic interactions, and communication involves two simultaneous actions, c² represents the maximum rate of communication. Interpreting c² as information in Einstein's law, energy E equals matter M times information I. In cosmological evolution, there is a net conversion of energy into matter and information, so the symmetric relation "=" in Einstein's law represents two opposite but unequal actions:

E ⇄ M · I

Although components may play only one function in the system, action, information and structure are three inseparable aspects of each entity: energy is particulate; waves are associated with matter; information can only be carried by an energetic or material messenger; structure conserves, and action carries, information. A nerve impulse is an electrical current, a signal, and a displacement of ions. Behavioral actions are inseparable from subjective emotions and ideas, and from macroscopic brain structures and molecular neurohormone structures. These are practical, rather than speculative, notions: one can decrease marital conflict with blockers of acetylcholine (a molecular trigger for rage), and treat depression with medications or by replacing PEA (the mediator of psychological energy). Causation includes energetic, informational, and material factors, an Aristotelian concept newly developed by I. Bálsamo.

Creative organization: Material structure represents only a particular case of the formation of complex forms from simpler ones, and even at the physical level it coexists with higher dimensional organization. A distinction between tridimensional material structure (e.g. a building) and higher dimensional organization (e.g. the institution housed in the building) is cogent. Material particulate structure is universal, and more stable. Interrupting the flow of energy causes death, i.e. terminates living organization, but leaves the physical body temporarily unchanged; in contrast, organization, such as Prigogine's dissipative structures, ceases when the throughput of energy is interrupted.

Einstein's law and cosmological evolution establish that in fact action creates matter; biological evolution and human history indicate that evolution creates life and mind. Systems are in continual and continuous transformation, as described by topology, the third mother structure of mathematics. A system is a lattice of actions (energy change in time), a group of relations (i.e. of repetitive interactions generating mutual and bipolar feedback, i.e. information), and a continuous topological transformation of tridimensional matter. This hypothesis exemplifies the notion of natural systems as embodiments of mathematical form, that runs from Pythagoras to Kauffman's principle.

Creation involves the formation and transformation of patterns of limited duration (e.g. life), diversification, symmetry-breaking, novelty, and complexification, features that can be measured in empirical data using process methods, and that do not obtain in deterministic processes.

Bios, the pattern of creative processes: Bios is a newly found type of organization composed of transient and novel patterns. Most of these patterns are aperiodic, and of greater amplitude, morphological diversity, and sensitivity to initial conditions than chaos. Bios also includes transient periodicities and linear transitions, features that, together with sensitivity to initial conditions, differentiate deterministic bios from 1/f patterns generated by statistical processes. Bios shows novelty (less recurrence than its randomized copy), diversification (increased variance with increased duration), non-random complexity, and a 1/f power spectrum (implying change and conservation). Diversity, novelty, complexity, and episodic patterning are found in natural processes, but are absent in chaos.

Biotic patterns are found in empirical time series portraying meteorological, biological, and economic processes. This widespread occurrence of biotic patterns suggests that bios may be the canonical form of natural processes. It is thus cogent that bios can be generated by mathematical models of bipolar feedback.

Co-creation (self-organization): The interaction of different processes spontaneously creates organization. Interactions co-create form, ranging from transient patterns to stable particles, and higher dimensional organization. The balance between opposites creates stability of structure. In this manner, opposites co-create systems. Systems are organized by relations, as T. Mandel describes. Every system is autodynamic because it contains opposites, and it relates to opposite processes in the environment. Only through these interactions do systems "self"-organize. There are many forms of co-creation, as illustrated by two sexes procreating a new individual, oppositely charged particles combining to form atoms, and subsystems engaging in mutual feedback.

In the interaction between processes, the sum of the opposites is the energy of the system. Conversely, increasing the energy of a system increases both opposites; e.g. high energy both binds and splits atoms, and energetic persons make stronger bonds and trigger stormier conflicts. The energy of the system determines the occurrence of bifurcations, and thereby the pattern of the process (equilibrium, periodicity, chaos, bios). The difference between opposites is the information that determines the direction of change. This is suggested by empirical experiments and catastrophe models. Creation is fostered by a moderate intensity and a near symmetry of opposites. This is suggested by experiments with models for creative feedback.

Creative feedback: A fundamental form of co-creation is feedback. Systems continually interact with their environment, and thereby receive ceaseless feedback. The inputs received by a system are at least in part reactions to its previous action; in turn, each input contributes to determine the following action. As such interaction is repetitive, it provides feedback, and more specifically, mutual feedback. Through its repetitive interactions with others, each subsystem becomes both self-referential and co-creative. Positive feedback generates growth. Negative feedback maintains stability. Resistance to growth (logistic equation) generates periodicities and chaos, homologous to eddies and turbulence. Bipolar feedback (process equation) also generates bios.

In natural systems, feedback usually is bipolar, i.e. positive and negative.
Natural environments are enormously diverse, so the output of any system
is synergic to some processes in the environment, and opposed to others,
and in turn systems continually receive synergic and antagonistic inputs.
Bipolar feedback is creative. This is illustrated by the process equation
At+1 = At + k * t * sin(At), in which At models action, which is additive and
changes in time; the trigonometric function, ranging from 1 to -1, models
bipolar feedback, and the feedback gain increases with time as determined
by the constant k. As the intensity of the feedback increases, this recursion
generates convergence to a steady state, bifurcation, periodicity, chaos
and bios. This represents a progression in complexity. Steady state, periodic
and chaotic patterns remain uniform in time. Bios is constituted by sequences
of episodic patterns that expand in amplitude and diversity with time, rather
than converging to a stable state; they also are more variable than their
randomized copies, demonstrating novelty. Diversity, novelty, complexity,
and episodic patterning are hallmarks of creativity.
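As a numerical sketch of the process equation described above (the values of k, the initial condition, and the run length are my own choices, not the authors'), a single run with slowly growing gain k * t passes from a steady state into the expanding, episodic regime:

```python
# Sketch of the process equation A(t+1) = A(t) + k*t*sin(A(t)).
# With small k, the gain k*t grows slowly: early iterations settle toward
# a steady state (a zero of sine); at high gain the series wanders over an
# expanding range, the bios-like regime described in the text.
import math

def process_series(k, steps, a0=1.0):
    a, out = a0, []
    for t in range(1, steps + 1):
        a = a + k * t * math.sin(a)
        out.append(a)
    return out

series = process_series(k=0.005, steps=4000)
early = series[100:200]   # low gain: near steady state
late = series[3000:]      # high gain: expanding, episodic patterns
assert max(early) - min(early) < max(late) - min(late)
```

The comparison of ranges is a crude stand-in for the diversification measure the authors describe; proper analysis would use their recurrence and variance methods.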

As noted, the biotic pattern is similar to those observed in time series
of natural processes known to continually generate novelty, such as heartbeat
intervals or language. Organic forms can be generated by models of bipolar
and mutual feedback, such as At+1 = At + g * Bt * sin(Bt) and Bt+1 = Bt
+ k * At * cos(At), or At+1 = At + g * sin(At) + k * cos(Bt), and Bt+1 =
Bt + j * sin(Bt) + h * cos(At), where g, h, j, and k are constants. In a system,
feedback necessarily is reciprocal.
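The second pair of coupled equations can be iterated directly; plotting the (A, B) pairs is how such forms are usually displayed. In this sketch the constants are my own choices: with these tame values the mutual feedback settles into a joint equilibrium, while larger gains explore the richer, organic-looking trajectories the text describes.

```python
# Sketch of the coupled recursion A(t+1) = A(t) + g*sin(A(t)) + k*cos(B(t)),
# B(t+1) = B(t) + j*sin(B(t)) + h*cos(A(t)), updated simultaneously.
# Constants chosen here (g=j=1, k=h=0.5) give a stable joint equilibrium.
import math

def coupled_feedback(g, k, j, h, steps, a0=0.1, b0=0.2):
    a, b, pts = a0, b0, []
    for _ in range(steps):
        a, b = (a + g * math.sin(a) + k * math.cos(b),
                b + j * math.sin(b) + h * math.cos(a))
        pts.append((a, b))
    return pts

pts = coupled_feedback(g=1.0, k=0.5, j=1.0, h=0.5, steps=2000)
# The trajectory travels a substantial distance before settling down.
a_vals = [a for a, _ in pts]
assert max(a_vals) - min(a_vals) > 1.0
assert abs(pts[-1][0] - pts[-2][0]) < 1e-6   # settled to a fixed point
```

Because each increment is bounded by the sum of the gains, this form of the recursion cannot diverge, which makes it convenient for experimentation.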

Creative development: These observations support the hypothesis that the interaction between synergic and antagonistic complementary opposites in natural processes may be a major factor for creative evolution. As action and bipolar feedback are universal, the sequence of patterns generated by the process equation may portray a fundamental pattern of development at all levels of organization: from one origin through bifurcation into pairs of opposites, multiple oppositions (2^n periods), chaos, and triads (period 3, which implies all others) to bios. Cosmological and embryological development include these transformations, along with many more concrete features. A development that is both determined and creative offers an alternative to the emergence of complexity by chance, or by supernatural intervention, in a universe that is determined, and/or bound to entropic decay. If creation is the spontaneous result of interaction, the universe is a process that, far from tending towards entropic disorder, generates an infinitely complex attractor.

Creation implies the diversification of systems, a fundamental fact highlighted by H. Bhola, at variance with equilibrium models of health and economics, and with homeostatic models of family and social systems. Co-creation provides guidelines for social and personal growth, and indicates the inadequacy of purely conflictual models of biological and social evolution. Co-creation also represents a systemic manner of thinking, which this article, with its cross-disciplinary perspective and its multiple authorship, exemplifies.

Kauffman, L. and Sabelli, H. 1998. The Process Equation. Cybernetics and Systems 29: 345-362.

Sabelli, H. 1984. Mathematical Dialectics, Scientific Logic and the Psychoanalysis of Thinking. In Hegel and the Sciences, edited by R.S. Cohen and M.W. Wartofsky. New York: D. Reidel Publishing Co. 349-359.

Sabelli, H. 1989. Union of Opposites: A Comprehensive Theory of Natural and Human Processes. Lawrenceville, VA: Brunswick Publishing.

Sabelli, H. 1998. The Union of Opposites: from Taoism to Process Theory. Systems Research 15: 429-441.

Sabelli, H. 1999. Process theory: mathematical formulation, experimental method, and clinical and social application. In Toward New Paradigm of System Science, edited by Y.P. Rhee. Seoul: Seoul National University Press. 159-201.

Sabelli, H. and Carlson-Sabelli, L. 1989. Biological Priority and Psychological Supremacy: a New Integrative Paradigm Derived from Process Theory. American J. Psychiatry 146: 1541-1551.

Sabelli, H.C. and Carlson-Sabelli, L. 1991. Process Theory as a Framework for Comprehensive Psychodynamic Formulations. Genetic, Social, and General Psychology Monographs 117: 5-27.

Sabelli, H. and Carlson-Sabelli, L. 1995. Sociodynamics: the application of process methods to the social sciences. In Chaos Theory and Society, edited by A. Albert. Amsterdam: IOS Press, and Sainte-Foy: Les Presses de l'Université du Québec.

Sabelli, H. and Kauffman, L. 1999. The Process Equation: Formulating and Testing the Process Theory of Systems. Cybernetics and Systems 30: 261-294.

Sabelli, H., Carlson-Sabelli, L., Patel, M. and Sugerman, A. 1997. Dynamics and psychodynamics: Process Foundations of Psychology. J. Mind and Behavior 18: 305-334. Special issue edited by L. Vandervert, Understanding Tomorrow's Mind: Advances in Chaos Theory, Quantum Theory, and Consciousness in Psychology.

Iris Bálsamo

National Academy of Sciences of Buenos Aires

Institute of Public Law, Political Science and Sociology

ibalsamo@datacop3.com.ar

The advance of science is associated with the empirical testing of its principles. In dynamic system models, there are four types of causation that correspond to Aristotle's efficient, formal, material and final causes. They refer to systems described by structure, organization, domain of changes in the system, and domain of interactions. Formulated as law, causation fulfils the four senses of scientific law with reference to dynamical systems - objective, nomological, nomopragmatic and metanomological.

The cause-effect law, as a connection between events, has characterized modern science during the last four centuries. This Aristotelian efficient causation, which Galileo Galilei termed the necessary and sufficient condition, has become insufficient for understanding the complex phenomena of the systems sciences. In the same sense, Albert Einstein and Max Planck recognized the insufficiency of modern causation for understanding the complex phenomena of quantum physics. They proposed the amplification and refinement of the cause-effect connection between events, so as to subject metaphysical causation to the experimental conditions of modern science (Planck, 1933).

The Aristotelian causality

According to Aristotle, the causal principle constitutes the means for discovering the truth in nature and industry. True knowledge arises when the question of the four kinds of causes is answered. Each of these codifies specific information in the construction of knowledge or the discovery of truth. The formal cause answers the question "what is it?"; it refers to the thing, the pattern, the form. The efficient cause is the agent; it refers to stimuli, perturbations, interactions, inputs. The material cause codifies the space in which something exists, its constituents. The final cause answers the question "for what?"; it is defined by purposes, objectives, functions, emergences, results and outputs.

Representing knowledge according to the four causes makes the whole situation of knowledge explicit. While efficient causation connects events (i.e. stimuli with reactions), the concurrence of the four causes can take over the whole object or process studied in its particular situation (i.e. a system answering in a specific way under a certain stimulus). It implies understanding that each time a stone breaks a glass, it is the glass that specifies the changes it will undergo through the action of the stone. So it is possible to understand why different systems react or answer in different ways under the same stimulus (e.g., comparing linear and non-linear patterns), and, conversely, why a system develops different patterns of behavior under different treatments (e.g., a microparticle's and a wave's behavior under classical forces and uncertainty relations, respectively). In industry and management, the use of the four causes makes it possible to understand why similar objectives (final cause) can be attained by applying different strategies involving different systems, agents and courses of action. In engineering, it lets us understand why the mere variation of matter in processes of any type produces different results. And, more strictly, why each time identities are distinguished, differences are hidden or underestimated.

Dynamics of systems

There are two basic conceptual tools for describing systems: structure and organization. The structure refers to the relations among components plus their properties. The organization, which evokes the Aristotelian organon, corresponds to the identity of the system, or the structure which gives identity to the system. Thus, things which are not originally regarded as systems can be described by organization and structure so as finally to be regarded as systems (e.g., collections of properties).

There are two basic operative concepts in the dynamics of systems: interactions and changes. Interactions take place between the system and other systems in the environment, and inside the system when it has access to the distinction between internal and external environment. Changes refer to differences in the properties, components and relations of the structure and/or organization of the system.

Interactions and changes are subdivided into destructive and constructive, according to whether or not they result in the loss of the system's identity.

Combining the systemic distinction between organization and structure with the dynamic distinction between interaction and change makes it possible to configure the four domains that any complex unity specifies (Maturana and Varela, 1984):

i. Domain of changes of state or structural changes: those differences that a system may suffer without loss of identity. Structural changes are differences in the components and/or in the relations among them, while changes of state are differences in the properties.

ii. Domain of destructive changes: those differences in the organization
of a system which produce loss of identity.

iii. Domain of productive interactions: those actions between the system
and the environment which produce changes of state or structural changes.

iv. Domain of destructive interactions: those actions between the system
and the environment which produce changes with loss of identity.

The full dynamic sense of Aristotelian causation is retrieved by referring the four types of causes to the dynamics of systems so described.

The efficient cause (ec) codifies information relative to constructive and
destructive interactions; the final cause (fic) codifies information relative
to constructive and destructive changes; the formal cause (foc) codifies
information relative to the organization and the structure; and the material
cause (mc) codifies the information relative to the components and the space
of existence specified by their properties.

Conversely, when the information relative to the dynamics of systems is codified by the four kinds of causes, the whole knowledge of something - process or object - emerges in its relative truth.

Causal Law

The metaphysical causal principle is subjected to the experimental conditions of modern science in order to be formulated as a scientific law with reference to systems dynamics. The four senses of scientific law are (Bunge, 1958):

1. objective: the law with reference to objects;

2. nomological: the law of laws;

3. nomopragmatic: the law with technological purposes;

4. metanomological: the law with theoretical purposes.

So then, the four senses of Causal Law are (Bálsamo, 1999):

Causal Law1: every system (foc, mc) specifies which interactions (ec) can
destroy it or produce changes of state or structure without loss of identity
(fic).

The law can take over the infinite complexity associated with the multiplicity of dimensions at different scales, functional differentiation, hierarchy of levels, and hetero- and self-organization. This sense of causal law is relevant for discovering or constructing systems whose organization or identity is unknown and has to be inferred or produced. In these cases, the key question is: which kinds of systems can undergo a given type of change under a certain type of stimulus? Additional complexity may be introduced by treating the components of the system as negative and positive feedbacks. In these cases, dynamic definitions of systems may be obtained by specific combinations of negative and positive causal loops in Jay Forrester's sense.
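A minimal sketch of a Forrester-style combination of one positive and one negative causal loop (the model and all its constants are my own illustrative choices, not taken from the article): a stock grows through a reinforcing loop and is limited by a balancing loop, and the two loops together define the system dynamically.

```python
# Sketch of a stock with a positive (reinforcing) and a negative (balancing)
# causal loop, integrated with a simple Euler step: logistic growth of a
# stock toward a carrying capacity where the two loops balance.
def simulate(stock=10.0, r=0.3, capacity=100.0, dt=0.25, steps=200):
    history = [stock]
    for _ in range(steps):
        inflow = r * stock                       # positive loop: growth
        outflow = r * stock * stock / capacity   # negative loop: limitation
        stock += dt * (inflow - outflow)
        history.append(stock)
    return history

h = simulate()
assert h[-1] > h[0]              # the stock grows...
assert abs(h[-1] - 100.0) < 1.0  # ...until the loops balance at capacity
```

The point of the sketch is that the system's behavior is defined by the specific combination of loops, as the text states, rather than by any single cause-effect link.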

Causal Law2: evolution = conservation + variation. Conservation is defined by those structures that cannot change without loss of identity. Variation is defined by those structures that can change without loss of identity. Mechanisms and processes of selection, refraction, reflection and resonance are responsible for evolution. According to the system dynamics model involved, the causal law verifies whether changes refer to the organization, structure, relations, components, and/or properties. When the organization is conserved, the system evolves. When the organization changes, the system is transformed (metamorphosis), generating a new identity.

Causal Law3: for a system to evolve, do not interact destructively with it.

Destructive interactions produce disintegration and loss of identity. When this takes place, a new organization or identity arises. According to the system dynamics model involved, it is possible to know whether destructive interactions refer to the organization, structure, relations, components, or properties. So the causal law is a basic tool for testing the changes in a system under different treatments, and for simulating and managing them in different scenarios and environments (Bálsamo, 2000).

Causal Law4: coordination of action of coordination of action. The four Aristotelian causes (ec; foc; mc; fic) are organized in two ordered pairs. This produces twelve sequential combinations. The relation R between the two ordered pairs is occlusion, inclusion or belonging.

Causal Law4 = { (c1; c2) R (c3; c4) }

This sense of causal law is relevant for seeking consistency in the construction of models or theories: for example, to identify the specific linkages by which non-linear patterns of behavior (foc) produce large changes (fic) in a social, biological or physical space (mc) under small perturbations (ec); or, as another example, to construct complex models identifying the specific circumstances (ec) which facilitate or interfere with the self-organization (fic) of systems (foc; mc).
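The count of twelve sequential combinations can be checked under one plausible reading of the text (an assumption on my part): if the two members of an ordered pair must be distinct causes, the four causes yield 4 × 3 = 12 ordered pairs.

```python
# Hypothetical reading: the "twelve sequential combinations" of Causal Law4
# as the ordered pairs of distinct causes drawn from the four causes.
from itertools import permutations

causes = ["ec", "foc", "mc", "fic"]   # efficient, formal, material, final
pairs = list(permutations(causes, 2))
assert len(pairs) == 12               # 4 * 3 ordered pairs
assert ("ec", "foc") in pairs
```

Other readings (e.g. allowing repetition, or pairing the pairs themselves) give different counts, so this enumeration is offered only as a consistency check of the stated figure.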

The experimental character of the causal law is revealed not only in testing the different meanings that the four kinds of causes can assume, but also in making explicit that the concurrence of only one, two or three kinds of causes produces incomplete knowledge and misunderstandings. Understanding and complete representation of knowledge emerge from the concurrence of all four kinds of causes.

So then, a stricter formulation of the causal law is obtained by means of the amplification and refinement of the cause-effect connection between events, according to Einstein's and Planck's conditions for the foundations, unity and advance of science in general and systems science in particular.

Bálsamo, Iris B., 1999: "The Causal Problem by Einstein", Proceedings of 4th European Congress on Systems Science, Editors Lorenzo Ferrer Figueras et al, Spanish Society of System Science, Valencia, Spain, 25-36.

-------2000: "Conditions for Testing Performance" in The Performance of Social Systems: Measurement and Problems, Editor Francisco Parra-Luna, Kluwer Academic/Plenum Publishers, London.

Bunge, Mario, 1958: Metascientific Queries, Springfield: Charles Thomas.

Maturana H. and Varela F., 1984: The Tree of Knowledge: The Biological Roots of Human Understanding. Boston: Shambhala, 1987.

Planck, Max, 1933: Where is Science going?, with a Preface by Albert Einstein, London: Allen & Unwin.

S. Salthe

As viewed from without by the systems modeler, the relationships between different scalar levels in a system are not direct interaction, but mutual constraint, with higher scale systems supplying boundary conditions on those nested within them, while these latter provide "initiating conditions" for events that will emerge between them and the upper levels, which can be referred to as events at a focal level. Initiating conditions propose, boundary conditions dispose. In order to capture the complexity of a system, minimally three scalar levels must be explicitly represented in models. This scheme refers to synchronic subsystems, and does not address diachronic matters or development.

Systems models are abstract representations of the underlying structures (form and behavior) of conceptually separable portions of the world. They should consider both form and development (predictable directional changes). Typically these models, like the ones listed below, are made as if the modeler were outside of the system in question.

(a) Concerning systems form synchronically, we need to employ the scalar hierarchy in order to represent extensional complexity. This involves subsystems at different scalar levels -- minimally three, in order to prevent reduction of complexity to a favored level. Relationships between different levels do not involve direct interaction but, instead, mutual constraint, with higher scalar levels imposing boundary conditions on those nested within them. These latter provide (what I call) "initiating conditions" for events at a focal level that will emerge between them and the upper levels. Initiating conditions propose, boundary conditions dispose. Each scalar level is characterized by its own temporal characteristics -- relaxation times and turnaround times; each has its own (what I call) "cogent moment". The moments of higher scale events contain many moments of entrained lower scale events. Examples of scalar hierarchies would be: [rock [molecule [atom]]] or [population [organism [cell [DNA segment]]]]. This scheme refers to subsystems synchronically, and does not address diachronic matters or development.

(b) Concerning systems diachronically, we need to focus on the predictable, or constitutive, changes of development (evolution, or individuation, is non-systematic and historical). These can be represented as a specification hierarchy, whereby each developmental stage is represented as a subclass of the prior stage, and will come to have subclasses within it as well, representing subsequent stages, each being a refinement on the prior one. The general process involved in development is increasing specification, the system developing from a vague primordium toward an increasingly definite senescence. While, classically, systems models are fully explicit, we will need vaguer, or fuzzier, discourse in order to deal with immature systems. Development would then need to be modeled in such a way that the language used becomes increasingly definite as a system continues to develop. The developmental trajectory of a system can be traced all the way back to its inception as a material system, giving rise to the following kinds of representations: {physical system {material system {chemical system {biological system {social system {psychological system}}}}}} or, in a more particular case, {dissipative structure {oocyte {embryo {larva {immature {mature {senescent}}}}}}}. Development is epigenetic. That is, the acquisitions of prior stages are not typically discarded, but rather modified, integrated or interpreted by later developmental events, which are constrained by them. This gives rise to what I call intensional complexity, which allows a system to be examined from many different viewpoints -- say, physically, chemically or (if relevant) psychologically.

(c) These synchronic and diachronic tools can be used with any system, abiotic, biotic or social. Abiotic systems would not have as much intensional complexity -- for example, they may not be integrated at the psychological level -- but all systems are in principle equally complex extensionally.
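Salthe's specification hierarchy, in which each developmental stage is a subclass refining the prior stage, can be rendered quite literally as subclassing. The stage names below follow the text's first example; the code itself is my illustrative sketch, not part of the original.

```python
# Sketch of the specification hierarchy {physical {material {chemical
# {biological {social {psychological}}}}}} as literal subclassing:
# each stage is a subclass (refinement) of the prior stage.
class Physical: pass
class Material(Physical): pass
class Chemical(Material): pass
class Biological(Chemical): pass
class Social(Biological): pass
class Psychological(Social): pass

# Epigenesis: later stages retain (inherit) the acquisitions of prior
# stages, so a psychological system is still a physical one...
assert issubclass(Psychological, Physical)
# ...but not conversely: a merely physical system is not biological.
assert not issubclass(Physical, Biological)
```

The inheritance check mirrors the text's point about intensional complexity: a later-stage system can be examined from every earlier viewpoint, while an abiotic system cannot be examined from, say, the psychological one.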

Salthe, S.N., 1985. Evolving Hierarchical Systems: Their Structure and Representation. Columbia University Press.

Salthe, S.N., 1993. Development and Evolution: Complexity and Change in Biology. MIT Press.

Priority of the simple and supremacy of the complex in the upbuilding of
systems regulations and controls

C. François

Asociación Argentina de Ciencias Sistémicas y Cibernéticas

The upbuilding of any system necessarily starts from multiple interactions among a number of compatible elements. Such interactions are also related to competition among the elements for resources extracted from their common environment. If competition is not to be finally destructive for most or all of the elements, it must be compatible with the maintenance and enhancement of interrelations among them, within sustainable environmental conditions. Tolerable competition is made possible by automatic reciprocal limitations among the elements through multiple feedbacks, generally local in the first stages of systems organization. As such feedbacks tend to be specific and repetitive, they lead to the appearance of permanent regulations. These may at the beginning of the process be of a statistical character, but they normally tend to become stabilized and differentiated in specific ways, according to the nature of the resources and the more specific needs of some groups of elements. In this way, incipient regulations lead to the appearance of hierarchical controls, and eventually of meta-controls and meta-meta-controls. In the elemental phases of this type of process, we observe the priority of the simple. Later on, when the system becomes hierarchically organized, the supremacy of the complex appears. In this way bottom-up processes lead to their top-down ordering.

H. S. Bhola Indiana University

There are general systems principles that apply to all systems. However, the substance of systems may vary from the physical, mechanical, ecological, biological, and social to the conceptual. Therefore, General Systems Theory should be complemented with a General Systems Taxonomy. Second-level general principles may have to be stated to focus on clusters of systems differentiated by their substances and their internal dynamics of interactivity.

1. Systems thinking as an epistemology is itself embedded in a larger system of epistemological approaches for understanding and acting on the world.

1.1 Constructivist thinking, understood as making assertions about reality that are indeed individual and social constructions based in transactions with experienced reality, is an unstated assumption of systems thinking, since system "boundaries" do not exist a priori but are constructed by systems thinkers in their special contexts to serve special needs.

1.2 Dialectical thinking, understood simply as "mutual shaping" within relations, is also an unstated assumption of systems thinking; it makes it possible to posit and to understand the important concept of emergence, so central to systems thinking.

2. Thus, systems thinkers should see systems thinking as one angle of an equilateral "epistemic triangle" formed by a necessary and sufficient set of epistemological approaches: systems thinking, constructivist thinking, and dialectical thinking.

2.1 The epistemic triangle does not exclude positivist thinking, since positivist thinking is one instance of the construction of reality which does indeed make sense in conditions and contexts of control.

2.2 The epistemic triangle does not negate the epistemologies on which critical theory and postmodernism are built, and indeed resonates with both the epistemologies and the ideologies of critical theory and postmodernism.

2.3 Systems thinking by itself is a necessary but not a sufficient epistemology. It is only one angle of the epistemic triangle. All three epistemologies will have to be part of the methodological assumptions to fully understand and effectively implement purposive action. In any act of praxis, at any one moment, one of the three epistemologies may be used as the arrowhead of analysis and elaboration, depending on the nature and structure of the problem in view.

3. The conceptual span between the general and the specific in General Systems Theory (GST) is too big to allow enough of the concrete to be carried to the general, or to bring enough of the general to bear on the specific.

3.1 Therefore, a structure of differentiated categories must be built at various levels between the general and specific.

4. The need for a Unified Systems Taxonomy (UST), immediately below the
most general theoretical level of GST, seems conceptually compelling. The
unification of sciences within systems is impossible unless those sciences
are accommodated within the vertical-horizontal matrix of systems thinking.

4.1 At this point in time, it may not be possible to offer a complete and
comprehensive taxonomy of systems, but two different dimensions for differentiation
seem promising:

i. The dimension of substance, and

ii. The dimension of interactivity

4.1.1 The substance or stuff of systems may vary from the physical, technical-mechanical, ecological, biological, social to the conceptual. It is through such conceptual advance that systems thinking can become interdisciplinary.

4.1.2 Interactivity is the internal dynamics of a system, which may determine the possibilities of dialectic and emergence within and between systems.

5. Systems thinking has sought to be a "systems science" and as suggested by Bertalanffy has done much at the level of mathematical-logical formulations. To advance systems theory as a "social science", as well as a practice, we need to work on applied systems thinking.

5.1 In using systems thinking in understanding and intervention, we should indeed be using the epistemic triangle, with systems thinking as its arrowhead.

5.2 To take the systems thinking (within the epistemic triangle) down to the level of purposive action, that is, to describe, to understand, to plan, to implement or to evaluate a purposive action, we will need suitable models for elaboration of the means and ends calculus as well as the structure and content of a particular purposive action.

6. The Configurations-Linkages-Environments-Resources (CLER) Model of purposive action fully reflects the assumptions and conceptual structure of the epistemic triangle, with particular emphasis on systems thinking.

6.1 The model suggests that any purposive action should be seen as an ensemble of three as follows: [Planner] X [Objective] X [Adopter]

6.2 The Planner and Adopter should be elaborated in terms of the CLER categories: What is the system of agents and agencies involved in planning? What are the linkages within and between configurations? Which are formal, which informal? What is the surrounding environment? What are their resources: conceptual, institutional, material, of personnel, influence, and time? The same questions should be asked about adopter systems.

6.3 The Objective of the purposive action should be analyzed in terms
of its implications for the evolution of the Planner and Adopter systems.

6.4 Thinking dialectically within the ensemble of three above, action points
should be generated and strategically sequenced.
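The CLER elaboration of the ensemble [Planner] X [Objective] X [Adopter] can be sketched as a simple data structure. The four field names follow the CLER categories themselves; the class names, attribute types, and the example literacy-campaign content are purely illustrative assumptions, not part of the model as stated.

```python
from dataclasses import dataclass

@dataclass
class CLERProfile:
    """CLER elaboration of one actor system (Planner or Adopter).

    Field names follow the CLER categories; everything else here
    is an illustrative assumption, not Bhola's own notation.
    """
    configurations: list  # agents and agencies involved
    linkages: dict        # formal and informal links within and between configurations
    environment: str      # the surrounding environment
    resources: dict       # conceptual, institutional, material, personnel, influence, time

@dataclass
class PurposiveAction:
    """The ensemble [Planner] X [Objective] X [Adopter] of section 6.1."""
    planner: CLERProfile
    objective: str
    adopter: CLERProfile

# Hypothetical example: a minimal adult-literacy campaign.
planner = CLERProfile(
    configurations=["ministry of education", "local NGOs"],
    linkages={"formal": ["ministry-NGO contracts"], "informal": ["personal networks"]},
    environment="national policy environment",
    resources={"personnel": 12, "time": "18 months"},
)
adopter = CLERProfile(
    configurations=["village councils", "adult learners"],
    linkages={"formal": [], "informal": ["community meetings"]},
    environment="rural communities",
    resources={"influence": "local leaders"},
)
action = PurposiveAction(planner, objective="adult literacy", adopter=adopter)
```

Asking the questions of section 6.2 then amounts to filling in each field of the two profiles, and the analysis of section 6.3 compares the Objective against both profiles.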