GST and concepts defining systems properties

First we have to define the word system and emphasize its subjective nature. From a linguistic point of view, the word originated in Greek, where it denoted a connected or regular whole. A system is not something presented to the observer; it is something to be recognized by him. An example is how man began to recognize our planetary system during the Middle Ages.

Observe how system into system runs,

What other planets circle other suns.

(A. Pope)

Most often the word does not refer to existing things in the real world but rather to a way of organizing our thoughts about the real world. The constructivist view of reality (E. von Glasersfeld 1990) states that systems do not exist in the real world independent of the human mind; only a worm’s-eye view defines the cell (or whatever subunit of a system) instead of a wholeness. The fictionalist view takes a further step and states that the systemic concept can be well suited to its purpose even if we know that it is incorrect or full of contradictions in a specific situation. A system cannot be understood by analysis of its parts, because of their complex interactions and because purpose or meaning can only be immanent in the whole. A system is in itself always an abstraction, chosen with the emphasis on either structural or functional aspects. This abstraction may be associated with, but must not be identified with, a physical embodiment. In any case, the relationships between the elements should receive as much attention as the elements being related.

An apposite definition of the word system has been given by the well-known biologist Paul Weiss: ‘A system is anything unitary enough to deserve a name.’ More aphoristic is Kenneth Boulding’s (1985) ‘A system is anything that is not chaos’, while West Churchman’s view that a system is ‘a structure that has organized components’ seems more stringent. Also the cybernetician Ross Ashby’s definition must be noted: ‘A system is a set of variables sufficiently isolated to stay constant long enough for us to discuss it’.

An often-used common sense definition is the following: ‘A system is a set of interacting units or elements that form an integrated whole intended to perform some function’. Reduced to everyday language, we can express it as any structure that exhibits order, pattern and purpose. This in turn implies some constancy over time. A system’s purpose is the reason for its existence and the starting point for measuring its success: ‘the purpose of a system is what it does’.

Another pragmatic definition used especially in the realm of management is that a system is the organized collection of men, machines and material required to accomplish a specific purpose and tied together by communication links.

A more scientific definition has been given by Russell Ackoff (1981), who says that a system is a set of two or more elements that satisfies the following three conditions.

  • The behaviour of each element has an effect on the behaviour of the whole.
  • The behaviour of the elements and their effects on the whole are interdependent.
  • However subgroups of the elements are formed, all have an effect on the behaviour of the whole, but none has an independent effect on it.

A system-definition by Derek Hitchins (1992) considered both pragmatic and scientific is the following: ‘A System is a collection of interrelated entities such that both the collection and the interrelationships together reduce local entropy’.

Finally, a short summary of the presented perspectives gives the following: SYSTEM, an organized whole in which parts are related together, which generates emergent properties and has some purpose.

An often applied mathematical definition of the word system comes from George Klir (1991). His formula is however extremely general and has therefore both weaknesses and strengths. See Figure 2.1.

In the formula S = (T, R), T stands for a set having arbitrary elements, but it may also represent a power set. R stands for every relationship that may be defined on the set, with its special characteristics.
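Klir’s pair of a set and the relations defined on it can be sketched in a few lines of Python; the element names and the ‘feeds into’ relation below are invented purely for illustration.

```python
# A minimal rendering of Klir's definition: a system is a set of
# things T together with the relationships R defined on T.

T = {"pump", "valve", "tank"}                # arbitrary elements
R = {("pump", "valve"), ("valve", "tank")}   # a "feeds into" relation on T

# The pair (T, R) is the system.  The relation, not the elements
# alone, is what distinguishes a system from a mere aggregate.
system = (T, R)

# For (T, R) to be well formed, every pair in R must relate members of T.
assert all(a in T and b in T for a, b in R)
```

The same three elements with an empty R would, in the terminology of the next paragraph, form only an aggregate.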

It must however be emphasized that a set of elements, all of which do the same thing, forms an aggregate, not a system. To conform to the definition of a system, there has to be a functional division and co-ordination of labour among the parts. This implies that the components have to be assembled in a certain way in order to build a system. A system is distinguished from its parts by its organization. Thus, a random assembly of elements constitutes only a structureless mass unable to accomplish anything. Nor does an orderly assembly of elements necessarily form a system. The beautiful organization of the atoms of a crystal does not qualify it as a system; it is an end product in itself, one not performing any function.

Figure 2.1 A formula defining a system.

To qualify for the name system, two conditions apart from organization have to be present: continuity of identity and goal directedness. Something that is not able to preserve its structure amid change is never recognized as a system. Goal directedness is simply the existence of a function.

The structure of a system is the arrangement of its subsystems and components in three-dimensional space at a certain moment in time. Systems differ from each other in the way they are organized, in the particular mechanisms and dynamics of the interrelations among the parts and with the environment. This may also be expressed as order in the relationships among the components which enter into a system.

Systems are usually classified as concrete, conceptual, abstract or unperceivable. The most common, the concrete system (sometimes called physical system), exists in the physical reality of space and time and is defined as consisting of at least two units or objects. Concrete systems can be non-living or living. Another distinction can be made, that between natural systems (coming into being by natural processes) and man-made systems.

A living or organic system is subject to the principles of natural selection and is characterized by its thermodynamic disequilibrium. Living systems are generally more interesting for how they act than for what they look like. The functional aspect thus becomes the most important one. A standard biological definition of a living system uses the following points:

  • Self-regulation
  • Organization
  • Metabolism and growth
  • Reaction capacity
  • Adaptability
  • Reproduction capability
  • Development capability

As a complex, organized and open system, it is also defined by its capacity for autopoiesis (H. Maturana and F. Varela 1974), which means ‘self-renewing’ and allows living systems to be autonomous. (Autopoiesis as a concept may be applied even to conceptions and ideas. See Chapter 3.) The activities of autonomous systems are mainly directed inward, with the sole aim of preserving the autonomy per se. Maintaining internal order or one’s own identity under new conditions demands frequent internal reorganization. The autopoietic system has to establish a boundary between itself and the universe of which it is a part in order to maintain its identity. All autopoietic systems are under the influence of random variations, which provide the seed of possibility that allows the emergence and evolution of new system identities. Such variations can be seen among bees and ants, serving to increase the variety of the systems to which they belong.

Characteristic of autopoietic systems are metabolism, repair, growth and replication. These systems maintain their organization through a network of component-producing processes which in turn generate the same network that produced them. Advanced autopoietic systems are capable not only of organizing themselves but also of ordering their environment with growing efficiency. In contrast, an allopoietic system gives rise to a system different from itself. The term heteropoietic implies human-designed systems with a purpose.

The following specific qualities differentiate living systems from non-living ones.

  • the presence of both structural and genetic connections in the system;
  • the presence of both co-ordination and subordination in the system;
  • the presence of a unique control mechanism (e.g. the central nervous system) working in a probabilistic manner and possessing a certain number of degrees of freedom in the system;
  • the presence of processes which qualitatively transform the parts together with the whole and continuously renew the elements;
  • the capability to learn, or to have an extensive repertoire of instinctive responses adapted to different situations.

Living systems in general are energy transducers which use information to perform more efficiently, converting one form of energy into another, and converting energy into information. Living species have developed an ingenious system to overcome entropy through their procreative faculty. Higher levels of living systems include artefacts and mentefacts. An artefact is an artificial object and can be everything from a bird’s nest to computers and communication networks. A mentefact is a mental creation, exemplified here by data, information or a message. Among artefacts, a distinction has to be made between machines, tools, etc. and structures. The former have limited lives, become worn out and are replaced by better ones. Structures, however, are constructed to be permanent, like the pyramids of Egypt or the Great Wall of China.

Living systems in general are teleonomic: they unconsciously fulfil a goal. A mindless procedure can thus produce design without a designer. Teleologic explanations of living systems are seldom relevant; sometimes, however, such explanations seem useful in the descriptive language of an observer. Today, many researchers consider everything which can reproduce itself as living (see Chapter 6).

Living systems theory, formulated within the General Living Systems theory, or GLS, must be regarded as a component of the General Systems Theory. GLS, pioneered by James Miller (Living Systems 1976), is presented among the cornerstone theories of the next chapter.

A conceptual system is a system of concepts. It is composed of an organization of ideas expressed in symbolic form. Its units may be words, numbers or other symbols. A conceptual system can only exist within some form of concrete system, for example a computer. An example is a computer which drafts the specifications and plans for another physical system before it is actually created. A conceptual system (the Ten Commandments) can also regulate the operation of a physical system (the human being). As a system it is itself always timeless; change is inapplicable to it as it does not exist in space and time.

In an abstract system, all elements must be concepts. It is of an intermediate type in that its components may or may not be empirically observable. The units of such systems are relationships abstracted or selected by an observer in the light of his interest or theoretical viewpoint. The relationships between the mental abstractions and their classes constitute the system. In psychology, for example, the structures of psychic processes are described by means of a system of abstract concepts. Abstracted systems abstract elements of the real world and map these as components of the system model. They mix empirical and conceptual factors; cultures as systems are an example. In an unperceivable system the many parts and complicated interrelationships between them hide the actual structure of the system.

All systems have a certain type of structure. Concrete systems, for example, exist physically in space and time, building a specific pattern. Conceptual and abstract systems have no defined position in space nor a well-defined duration in time. However, as time goes on, all systems change to a certain extent. This is called process; if the change is irreversible, the process is called historical.

Here a further distinction must be made, between open and closed systems. An open system (all living systems) is always dependent upon an environment with which it can exchange matter, energy and information. Its main characteristic is its organization, which is controlled by information and fuelled by some form of energy. Other qualities are that open systems are selective and, within certain limits, self-regulating. Proceeding up a hierarchy of system levels, the systems become more and more open as they engage in a wider interchange with a greater variety of aspects of the environment. More complex systems move toward growth and expansion when they tend to import more matter and energy than is required for the output. This should not be taken as a contradiction of their striving for dynamic equilibrium. The ever-present dynamics make a system understandable only over time.

Common characteristics of an open system have been defined by Katz and Kahn (1966) according to the following ten points:

  • Importation of energy
  • The throughput
  • The output
  • Cycles of events
  • Negative entropy
  • Information input and the coding process
  • Equilibrium and dynamic homeostasis including adaptation
  • Differentiation (elaboration, complexification)
  • Integration and co-ordination
  • Equifinality

The closed system (e.g. the biosphere) is open for input of energy only. The differences between open and closed systems are relative. An organism is a typical example of an open system but, taken together with its environment, it may be considered as a closed system.

Expressed in terms of entropy, open systems are negentropic, that is, tend toward a more elaborate structure. As open systems, organisms which are in equilibrium are capable of working for a long time by use of the constant input of matter and energy. Closed systems, however, increase their entropy, tend to run down and can therefore be called ‘dying systems’. When reaching a steady state the closed system is not capable of performing any work.

An isolated system is one with a completely locked boundary, closed to all kinds of input. Independent of its structure or kind, it constantly increases its entropy toward a final, genuine steady state. While this concept is very seldom applicable in the real world, the cosmos is the environmentless, isolated system context for all the other systems which may arise within it.

The systems that we are interested in exist within an environment. The immediate environment is the next higher system minus the system itself. The entire environment includes this plus all systems at higher levels which contain it. Environment may also be defined as both that which is outside of the direct control of the system and any phenomenon influencing the processes and behaviour of the system. The environment can exert a degree of control over the system but cannot be controlled by the system.

For living systems, however, environment must be seen as a part of the organism itself. The internal structure of living organisms contains elements which earlier in the evolution were part of its external environment. This is confirmed, among other similarities, by the chemical composition of blood and sea water. A system’s environment has no boundaries nor needs any.

Environment is something which exists in a space, a concept which is defined with respect to the kind of system in focus. Pragmatic space is that of physical action which integrates a living system with its natural, organic environment. Perceptual space is that of immediate orientation, essential for the identity of a conscious being. Existential space forms a stable image of an individual environment and connects it to a social and cultural identity. Cognitive space is the conscious experience of the physical world, while logical or abstract space belongs to the world of abstract or conceptual systems, thus providing us with a tool to describe the others.

Through the constant interaction between system and environment, environment affects systems and systems in turn affect the environment. When it comes to social systems, this interaction is especially pronounced. Its scope is suggested in the following pairs.

In order to define a system’s environment, its boundary must be defined. The boundary surrounds the system in such a way that the intensity of interactions across this line is less than that occurring within the system. Often a non-spatial marker, it denotes what does or does not belong to the system. To cross a boundary normally requires modification or transformation of some kind. In the case of information, boundaries possess a coding and decoding property; in other words, that which comes out of a system is very seldom identical with that which went in. As systems do not always exist boundary to boundary, the concept of interface is necessary to denote the area between the boundaries of systems.

As rare as the concept of a closed system is that of a solitarily existing open system. Generally, systems are parts of other systems and are included in a hierarchy of systems. Systems theory regards the concept of hierarchy as a universal principle existing in inorganic nature, in organic and social life and in the cosmos. Virtually all complex systems recognized in the real world have the tendency to organize hierarchically. An easily comprehensible example of hierarchy is the phenomenon of science. In Figure 2.2, it is shown how large objects are made of smaller ones. Here the science of the small object explains the larger one in a kind of logical reductionism. Note that the higher levels have qualities not predictable from the lower ones (see page 64). In living systems, a hierarchical structure improves the ability to learn, evolve and adapt.

Sometimes the term heterarchy is used to denote the opposite of hierarchy, a structure without different levels. In a heterarchy, processes are governed by a pluralistic and egalitarian interplay of all components. Systems theorists tend to say: within each level, heterarchy; between each level, hierarchy.

A hierarchy is normally a control hierarchy and therefore becomes structurally simpler when moving upwards, as higher levels are structurally simpler than lower ones. The higher levels control certain aspects of the behaviour of subsystems; thus, less complex systems control more complex ones. Selective disregard on a higher controlling level is a general property of control systems. The emergence of higher control levels is therefore a simplification of system functions. A hunting pack of hyenas is both functionally and structurally simpler (as a group) than the individual hyena. The individual is, taken as an inclusive total system, more complex and more unitary. Generally, evolution always moves from the lower to the more complex type of system, and from the lower to the higher level of organization.
In an organism only the whole can display will — none of the parts can. The parts have a low degree of freedom.

Figure 2.2 A hierarchy of science.

In a hierarchic structure, subsets of a whole are ranked regressively as smaller or less complex units until the lowest level is reached. The lowest level elements build subsystems that in turn structure the system, which itself is a part of a superior suprasystem. The ranking of these is relative rather than absolute. That is, the same object may be regarded as an element, a system or a component of the environment, depending on the chosen frame of reference. See Figure 2.3.

Figure 2.3 A multilevel systems hierarchy.

Hierarchical thinking creates what has been called the paradox of hierarchy. It implies that a system can only be described if regarded as an element of a larger system; yet presenting a given system as an element of a larger system can only be done if this system itself is described as a system.

A more elaborate hierarchical terminology used in this context is:

  • macrosystem
  • system
  • subsystem
  • module
  • component
  • unit
  • part

At a given level of the hierarchy, a given system may be seen as being on the outside of systems below it, and as being on the inside of systems above it. A system thus has both endogenous and exogenous properties, existing within the system and determined outside of the system respectively. Again, as above, the status of a component in a system is not absolute: it may be regarded as a subsystem, a system or an element of the environment. In order to carry out their functions in a suprasystem, subsystems must retain their identities and maintain a certain degree of autonomy. A process whereby the interaction increases in a certain part of the system often ends up in a new local structure. This is called centralization and small variations within this part can produce essential changes of the whole system. However, like a chain, a hierarchy is never stronger than its weakest point, the top. If the top disappears, nothing will work.

Another kind of hierarchic view is expressed in the holon (from wholeness) concept, coined by the Hungarian-born author, Arthur Koestler, in 1967. Wholes and parts do not have separate existences in living organisms or social organizations. These systems show both cohesion and differentiation. Their integrative and self-assertive tendencies exist side by side and are reflected in their co-operative behaviour. This ‘Janus’ effect (from the Roman two-faced god Janus) is a fundamental characteristic of subwholes in all kinds of hierarchies.

Figure 2.4 Integrative and assertive relationships of a holon represented by circles.

The global structure of the holon hierarchy is nested. At least five levels are discernible in Figure 2.4.

Normally, the term wholeness applied to a system indicates the following: variation in any element affects all the others, bringing about variation in the whole system. Likewise, variations of any element depend upon all other elements of the system. In a sense, there is a paradox of wholeness, telling us that it is impossible to become conscious of a system as a wholeness without analyzing its parts (thereby losing the wholeness).

The concepts of hierarchy and wholeness are especially relevant in living things where organisms at each higher level of complexity originate as a symbiosis of those from the previous levels. This is demonstrated in Figure 2.5 where different organisms are shown at each of the four levels.

Systems can be interrelated in a non-hierarchical way when part of a multilateral structure. This situation exists when certain elements occur simultaneously in many systems. See Figure 2.6.

In a system, elements can be interconnected during a certain period of time. If the connection exists during only one specified time the multilateral structure is called temporal. If the connection is intermittent, the structure is called cyclic.

The concept of system can be applied to a vast number of different phenomena: the solar system, the academic system, the nervous system, etc. A characteristic of them all is that the whole is greater than the sum of its parts, a phenomenon often referred to as the system principle. This principle includes the system’s emergent properties, or its synergetic effects. Synergetic comes from the Greek word for ‘working together’. Water can illustrate an emergent phenomenon: although hydrogen and oxygen individually have no water-qualities, water will emerge when the two elements are brought together. Emergent properties are lost when a system breaks down into its components. Life, for example, does not exist in organs removed from the body. Also, when a component is removed from the whole, that component itself will lose its own emergent properties: an eye removed from a body cannot see any longer. Emergence thus is the creation of new organized wholes which force their subsystems to obey a set of critical boundary conditions. In a hierarchy, emergent properties denote levels, under the condition that an emergent whole at one level is merely a component of an emergent system at the next higher level.

The genesis of emergent properties can hardly be explained in advance, nor deduced from a system’s elements. Had the prerequisites been slightly different, something quite different might have happened. Emergence can only be promoted by creating a richness of variation.

A suprasystem taken as a whole displays greater behavioural variety and options than its component systems, that is, it is more synergetic. Each system has a special organization that is different from the organization of its components taken separately.

The evolutionary process by which new adaptive capabilities and higher levels of complexity and control are generated in a system is called metasystem transition (Turchin 1977). This emerges when a new level of control arises that manages many individually preexisting processes. The controlling subsystems are integrated into a metasystem and brought under a new higher control level. A metasystem transition can take place over a scope which is a substructure of the considered system. Thus, the formation of an army from conscripts is a typical metasystem transition. It results in a new hierarchy of control where autonomous individuals are put under the command of officers. Turchin predicts that the next large metasystem transition will produce an integrated system that includes the whole of humanity.

Normally systems show stability, that is, constancy of structure and function under fluctuation, maintaining the same internal state under various pressures. Systems which can restore their stability by changing operation rules when important variables exceed their limits are said to be ultra-stable. Stability thus does not exclude adaptability; only systems which change with time and adjust to environmental pressures can survive. A system can never be optimally adapted to its environment, since its evolution itself will change the environment so that a new adaptation is needed. In an evolutionary process, no absolute distinction can be made between system and environment: what is system for one process is environment for another.

In open systems, for example, biological and social systems, final states or objectives may be reached in different ways and from disparate starting points. This property of finding equally valid ways is called equifinality. The reverse condition, achievement of different ends through use of the same means, is called multifinality.

A basic concept in GST is that of entropy. Originally imported from the area of thermodynamics, it is defined as the energy not available for work after its transformation from one form to another (see also p. 12). Applied to systems it is defined as a measure of the relative degree of disorder that exists within a closed system at a defined moment of time. The natural tendency of physical objects to disintegrate and fall into random distribution can be studied in a sand castle built on the beach on a hot day. How biological organisms deteriorate as they age can be seen everywhere in our environment.

Both examples relate to the effects of entropy, of the transformation of energy from high quality to low. Living systems can however, as open systems, counteract this tendency through purpose and organization, by importing more energy from their environment than they expend to it. Storing the surplus energy in order to survive is to reverse the entropic process or to create negentropy. A living being can only resist the degradation of its own structure. The entropic process influencing the structure and environment of the whole system is beyond individual control.
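The drift toward disorder, and entropy as its measure, can be caricatured in code. The toy below (an Ehrenfest-style urn sketch with invented numbers, not a serious thermodynamic model) starts with every particle on one side of a box and lets them hop at random; the Shannon entropy of the left/right occupancy climbs from zero toward its maximum of one bit.

```python
import math
import random

random.seed(42)

N = 100
left = N            # perfect order: all particles on the left side

def entropy(left, n):
    """Shannon entropy (in bits) of the left/right occupancy."""
    h = 0.0
    for count in (left, n - left):
        p = count / n
        if p > 0:
            h -= p * math.log2(p)
    return h

start = entropy(left, N)            # 0.0 bits: complete order
for _ in range(1000):
    # pick a random particle and let it hop to the other side
    if random.random() < left / N:
        left -= 1                   # a left-side particle hops right
    else:
        left += 1                   # a right-side particle hops left
end = entropy(left, N)

print(f"entropy rose from {start:.2f} to about {end:.2f} bits")
assert end > start                  # disorder has increased
```

Reversing this drift, as living systems do, would correspond to herding the particles back to one side at the cost of energy imported from outside the model.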

Systems may be classified according to type of complexity, as has been done by Warren Weaver (1968).

In the organized-complexity system, the typical form for living systems, only a finite but large number of components defines the system. Systems within this category can also be classified as middle-number systems. When a limit is reached, the living system decomposes into irreducible particles. As stated earlier, the total system always represents more than the sum of its parts. This type of complexity cannot be treated with the statistical techniques that so effectively describe average behaviour within unorganized complexity. Successful investigation of organized complexity only became feasible with the emergence of computer technology.

The unorganized-complexity system can only refer to non-living systems where the number of variables is very large and in which each variable has a totally unpredictable or unknown behaviour. The system has, nevertheless, orderly average properties and may be defined in terms of a probability distribution according to an infinite number of events. Its behaviour can be explained by the laws of statistical mechanics and its components may form aggregates. The frequency and type of telephone calls in a large telephone exchange offer a good example.

The organized-simplicity system is characterized by simple systems such as machines and other human artefacts having only a small number of components. This kind of system may be treated analytically.

A similar classification of systems has been made by Herbert Simon (1968). He distinguishes decomposable, nearly decomposable and non-decomposable systems. In a decomposable system the subsystems can be regarded as independent of one another. An example is helium, an inert gas: the intermolecular forces are negligible when compared to the intramolecular forces. In nearly decomposable systems the interaction between the subsystems is weak but not negligible; the intercomponent interactions are usually weaker than the intracomponent interactions. Organizations may be considered to be nearly decomposable. Non-decomposable systems are directly dependent on other systems or explicitly affect them. A heart/lung machine is such a system.

Another classification of systems is made on the basis of their behaviour or function. A classification of this kind has been made by the doyen of management research, Russell Ackoff (1971). According to this, goal-maintaining systems attempt to fulfil a pre-determined goal. If something deviates, there is only one response (conditional) to correct it. Here, the thermostat and other simple regulatory mechanisms can serve as examples.
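Such goal-maintaining behaviour can be sketched in a few lines; the setpoint and tolerance below are made-up numbers for a toy thermostat.

```python
def thermostat(temperature, setpoint=20.0, tolerance=0.5):
    """Goal-maintaining behaviour: a single, conditional response
    corrects any deviation from the pre-determined goal."""
    if temperature < setpoint - tolerance:
        return "heat on"
    if temperature > setpoint + tolerance:
        return "heat off"
    return "no action"

# The goal never changes and the response repertoire is fixed.
print(thermostat(18.0))   # below the goal     -> heat on
print(thermostat(22.0))   # above the goal     -> heat off
print(thermostat(20.2))   # within tolerance   -> no action
```

There is no memory and no choice of response here, which is exactly what separates this class from the goal-seeking systems described next.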

In goal-seeking systems choices concerning how to deal with variable behaviour in the system are possible. Previous behaviour stored in a simple memory permits changes based on learning. The autopilot meets the requirements: it maintains a preset course, altitude and speed.

Multigoal-seeking systems are capable of choosing from an internal repertoire of actions in response to changed external conditions. Such automatic goal changing demands distinct alternatives; generally the system decides which means of achievement are best. A prerequisite is an extended memory with the ability to store and retrieve information. The automatic telephone exchange is a good example.

Reflective, goal-changing systems reflect upon decisions made. Information collected and stored in the memory is examined for the creation of new alternatives for action. Will, purpose, autonomy, feedforward (see p. 81), learning and consciousness define this process, which exists only within living systems.

Another often used system dichotomy is that of static and dynamic systems. A static system is a structure which is not in itself performing any kind of activity. A dynamic system has both structural components and activity. Examples of such systems are respectively a radio tower and a military squad with its men, equipment and orders.

Some other special categories of systems which need to be mentioned are the irrational and the null system. Both violate the principle of causality (see p. 14) and cannot be handled by way of rational judgement. In the irrational system there is no correspondence between input and the presumed system response. In the null system, all input produces no output, or an output is produced without significant input. While both are also unmeasurable systems, we must be aware of the difficulties often involved in identifying complex system flows: ‘occult behaviour’ sometimes has a very natural cause.

Sometimes it is necessary to apply some basic mathematical criteria to the concept of systems. For a continuous system, the input can be continuously and arbitrarily changed; the result will then be a continuous variable output. In a discrete system, the input is changed in discrete steps, giving corresponding discrete changes in the output.
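A discrete system in this sense can be illustrated with a first-order difference equation (the retention coefficient and the input sequence are invented for the example): the input changes in steps, and the output responds in corresponding discrete steps.

```python
# Discrete system y[k+1] = a*y[k] + (1 - a)*u[k]: at each step the
# output keeps a fraction a of its old value and absorbs the rest
# from the stepped input.
a = 0.5
y = 0.0
inputs = [0, 0, 1, 1, 1, 1]        # input stepped from 0 to 1 at k = 2
outputs = []

for u in inputs:
    y = a * y + (1 - a) * u
    outputs.append(round(y, 4))

print(outputs)   # -> [0.0, 0.0, 0.5, 0.75, 0.875, 0.9375]
```

In a continuous system the same plant would be described by a differential equation, and an arbitrarily small, continuous change of input would produce a correspondingly continuous change of output.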

It may also be necessary to distinguish between deterministic and stochastic systems. According to the principle of nature’s predictability (see p. 16), the deterministic system has inputs and outputs capable of being interpreted and measured on a single-event basis. The output will be the same for each identical input; repeated trials will always give the same results. The stochastic system, in contrast, works with identical inputs, and its elements cannot be returned to their original state. The factors influencing the system obey uncertainty and statistical variation. Nonetheless, if appropriate stochastic analysis is employed, systemic behaviour may still be predicted.
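The contrast can be made concrete in a few lines of Python (the mapping and the noise model are purely illustrative): repeated trials of a deterministic mapping return identical outputs, while a stochastic one varies between trials yet remains predictable in the statistical sense.

```python
import random

def deterministic(x):
    # the same input always yields the same output
    return 2 * x + 1

def stochastic(x, rng):
    # identical inputs yield varying outputs; only the distribution
    # (mean x, spread 1) is predictable
    return x + rng.gauss(0.0, 1.0)

# repeated trials of the deterministic system give one single result
assert {deterministic(3) for _ in range(10)} == {7}

# repeated trials of the stochastic system do not ...
rng = random.Random(0)
trials = [stochastic(3, rng) for _ in range(1000)]
assert len(set(trials)) > 1

# ... but appropriate stochastic analysis still predicts its behaviour
mean = sum(trials) / len(trials)
print(f"mean of 1000 stochastic trials: {mean:.2f}")   # close to 3
```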

Finally, a distinction has to be made between simulative and non-simulative systems. Extremely small changes in the input to systems which are large-scale, complex and non-linear are often amplified through positive feedback. Such changes can thus initiate exponential transformations of the whole system. An example of a non-simulative system is global weather, characterized by deterministic chaos. The system’s sensitivity to initial data eludes prediction. Furthermore, any physical system that behaves in a non-periodic way is unpredictable.

The ‘butterfly effect’, where the flaps of a butterfly’s wings start a movement in the air which ends up as a hurricane, has fascinated many and captures the unpredictability of non-linear systems. No computer program exists which can model this system; such a program would be just as complex as the weather system itself. Therefore, some meteorologists say that the only computer capable of simulating the global weather is the Earth itself, the ultimate analogue biocomputer.
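The amplification of microscopic input differences can be demonstrated with the logistic map, a standard toy system for deterministic chaos (it is, of course, not a model of the weather):

```python
def logistic(x, r=4.0):
    # a minimal deterministic, non-linear system
    return r * x * (1.0 - x)

a = 0.300000000          # two almost identical initial states,
b = 0.300000001          # differing by one part in 300 million
max_sep = 0.0

for step in range(50):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))

# the tiny initial difference is roughly doubled at every step until
# the two trajectories bear no resemblance to each other
print(f"maximum separation after 50 steps: {max_sep:.3f}")
assert max_sep > 0.1
```

Each trajectory on its own is perfectly deterministic; it is the sensitivity to initial data, not randomness, that defeats prediction.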

In a simulative system the complexity of the computer program always falls far below the complexity of the system simulated.

Let us finally have a look at systems in their most general form, where they involve nature, man, society and technology, using Harold Linstone’s (1984) combinations.

  • man — a biological system
  • nature — the solar system
  • technology — a communication satellite
  • nature/technology — a waterwheel
  • man/society — a legal system
  • nature/man/society — a primitive village
  • man/society/technology — an information system

Source: Skyttner, Lars (2006), General Systems Theory: Problems, Perspectives, Practice, 2nd edition, World Scientific.
