Cybernetics and concepts defining systems processes

To predict the behaviour of a rational system before a certain response occurs, it is essential to have some knowledge of general control mechanisms. Although automatic control systems have been documented in Western engineering for some 2000 years, their working theory has remained limited and was seldom applied outside of engineering. (Around 300 BC the Greek inventor Ktesibios built an automatic control device: the regulator that governed the water flow of his water clock, or clepsydra.)

In his book from 1948, Cybernetics or Control and Communication in the Animal and the Machine, Norbert Wiener, an American researcher at MIT, gave control theory new life. As a mathematician and universal thinker, his fascination with logic and electricity, intertwined with his insight into automation, led him to the ideas of cybernetics. The term cybernetics is derived from the Greek noun kubernetes, meaning steersman or pilot.

One of the oldest automatic control systems is in fact related to the turning of a heavy ship's rudder: the steam-powered steering engine. This supportive mechanism was called a servo (from the Latin servus, slave, the root of the English servitude). The effectiveness of servomechanisms rests on the fact that they have no choice and can only react in a predefined manner to events in their environment.

In his book, Wiener intended cybernetics to embrace universal principles relevant for both engineering and living systems. (One of these principles is that all processes in the universe seem to be cyclic.) He succeeded in showing that these principles could fruitfully be applied at the theoretical level in all systems, independent of any specific context.

Shortly after cybernetics emerged as an independent area, it became part of general systems theory (GST). For practical purposes, the two areas were integrated within the wide domain of problems that became the concern of systems science.

The field of cybernetics came into being when concepts of information, feedback, and control were generalized from specific applications, like engineering, to systems in general, including living organisms, abstract intelligent processes and language. Thus cybernetics named a field apart from, but touching upon, such established disciplines as electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology. By use of cybernetics, the rich interaction of goals, predictions, actions, and responses was brought together in a new and fruitful way. The attraction of cybernetics was that it brought order and unity to a set of disciplines that otherwise tend to be pursued as relatively closed specialisms.

Philosophically, cybernetics evolved from a constructivist view of the world (see p. 57), where objectivity derives from shared agreement about meaning. The world is invented by an intelligence acting in a social tradition, rather than discovered. Thus information (or intelligence) is regarded as an attribute of an interaction rather than a commodity stored in a computer. It is something that is used by a mechanism or organism (a system) for steering towards a predefined goal.

From its start, cybernetics has been concerned with errors in complex systems of control and communication. In cybernetics, the concepts of control and communication are closely interrelated. Information concerning function and control is communicated among the parts of a system but also between the system and its environment. The aim is to achieve a condition of equilibrium, which is the maintenance of order. In living systems, this holding of physiological variables within certain limits is called homeostasis. Cybernetics, then, concerns the restoring of stability within all kinds of systems. Stability in this context is the opposite of a steady state, which means the maintenance of entropic randomization, being the most stable and statistically probable state. A thermodynamic steady state is the hallmark of the non-living world.

The fact that cybernetic control systems operate with a low, often insignificant, expenditure of energy means that their degree of efficiency is high. This is possible inasmuch as their basic function is to process information, not to transform energy. Cybernetic regulation must not be confused with amplification of the regulated flow, even if such amplification may exist as well.

Cybernetic control is normally exercised with some defined measures of performance. Three often used measures are the following:

  • Effectiveness. This is a measure of the extent to which a system achieves its intended transformation.
  • Efficiency. The measure of the extent to which the system achieves its intended transformation with the minimum use of resources.
  • Efficacy. A measure of the extent to which the system contributes to the purposes of a higher-level system of which it may be a subsystem.

To understand the concept of control from a cybernetic perspective, some further distinctions are necessary. Control can be defined as a purposive influence toward a predetermined goal, involving continuous comparison of current states to future goals. Control is:

  • Information processing
  • Programming
  • Decision
  • Communication (reciprocal)

A program is coded or prearranged information that controls a process (or behaviour), leading it toward a given end. Teleonomic processes are always guided by a program. Generally, it makes sense to speak of the following standard programming levels:

  • DNA and genetic programming.
  • The brain with its cultural programming.
  • The organisation with its formal decision procedure.
  • Mechanical and electronic artifacts with their algorithms.

Adaptation and development demand reprogramming.

In a broader view, it makes sense to speak of four control levels:

  • DNA’s control of the cell.
  • The brain’s control of the organism.
  • The bureaucratic control of the social system.
  • The big computer system’s control of society.

With them, the following general control problems are associated:

  • To maintain an internal structure (resist entropy): BEING.
  • To complete a goal (in spite of changing conditions): BEHAVING.
  • To remove bad goals and preserve good ones: BECOMING.

As a starting point for the comprehension of the basic terms of cybernetics, a system may be represented by three boxes: the black, the grey and the white. The purposeful action performed by the box is its function. Inside each box there are structural components (the static parts), operating components (which perform the processing), and flow components (the matter/energy or information being processed).

Relationships between mutually dependent components are said to be of first order; the main example is symbiosis, the vitally important co-operation between two organisms. A relationship of second order is one which adds to system performance in a synergistic manner. A relationship of third order applies when seemingly redundant duplicate components exist in order to secure a continued system function.

Each box contains processes of input, transformation and output. (Note that output can be of two kinds: products useful for the suprasystem and/or waste. Also, note that the input to one system may be the output of its subsystem.) Taken together these processes are called throughput, to avoid focus on individual parts of internal processes.

The box colours denote different degrees of user interest in the understanding or knowledge of the internal working process of a system. A black box is a primitive something that behaves in a certain way without giving any clue to the observer how exactly the result is obtained. As Kenneth Boulding wrote:

A system is a big black box

Of which we can’t unlock the locks

And all we can find out about

Is what goes in and what comes out.

A black box approach can therefore be the effective use of a machine by adjusting its input for maximum output (cold shower to bring down fever). A grey box offers partial knowledge of selected internal processes (visit nurse for palliative treatment). The white box represents a wholly transparent view, giving full information about internal processes (hospitalize for intensive treatment). This command of total information is seldom possible or even desirable.

Below a certain level, questions cannot be answered, or posed; complete information about the state of the system can therefore not be acquired. See Figure 2.7.

However, when good understanding of the whole transformation process is necessary, the following five elements have to be calculated.

Figure 2.7 Degrees of internal understanding.

  • The set of inputs: These are the variable parameters observed to affect the system behaviour.
  • The set of outputs: These are the observed parameters affecting the relationship between the system and its environment.
  • The set of states: These are internal parameters of the system which determine the relationship between input and output.
  • The state-transition function: This will decide how the state changes when various inputs are fed into the system.
  • The output function: This will decide the resulting system output with a given input in a given state.
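
To make these five elements concrete, the following minimal Python sketch models a trivially simple white box, a resettable counter. The example system and all names in it are invented for the illustration, not taken from the source.

```python
# A minimal state-machine rendering of the five elements above.
# The example system (a resettable counter) is an invented illustration.

def transition(state, inp):
    """State-transition function: next state from current state and input."""
    return 0 if inp == "reset" else state + 1

def output(state, inp):
    """Output function: observable output for a given state and input."""
    return f"count={state}"

def run(inputs, state=0):
    """Feed a sequence of inputs through the system, collecting outputs."""
    outputs = []
    for inp in inputs:
        outputs.append(output(state, inp))
        state = transition(state, inp)
    return outputs

print(run(["tick", "tick", "reset", "tick"]))
# ['count=0', 'count=1', 'count=2', 'count=0']
```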

System processes may or may not be self-regulated; there is no halfway between. Living systems are always controlled, and the primary control lies within the system, never outside.

A self-regulated system is called a closed-loop system and has its output coupled to its input. In an open-loop system, the output is not connected to the input for measurement. An example is an automatic sprinkler system, depicted as an open-loop system in Figure 2.8.

Figure 2.8 Open-loop system.

In a sprinkler system, a smoke or heat sensor activates the opening of water valves in order to extinguish a fire. Once activated, the system continues to deliver water until the reservoir is empty or somebody shuts it off.
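
A minimal sketch of this open-loop behaviour follows; the threshold and flow values are invented for the example. Note that nothing in the loop measures the effect of the output.

```python
# Open-loop sketch: the controller is triggered by its input (the smoke
# sensor) but never measures the effect of its own output, so it keeps
# delivering water until the reservoir is empty or it is shut off
# manually. Threshold and quantities are invented for the example.

SMOKE_THRESHOLD = 0.5

def sprinkler(smoke_level, reservoir, flow=10.0, shut_off_after=None):
    delivered, t = 0.0, 0
    active = smoke_level >= SMOKE_THRESHOLD
    while active and reservoir > 0:
        amount = min(flow, reservoir)
        reservoir -= amount
        delivered += amount
        t += 1
        if shut_off_after is not None and t >= shut_off_after:
            active = False        # manual intervention, not feedback
    return delivered

print(sprinkler(smoke_level=0.8, reservoir=100.0))                    # 100.0
print(sprinkler(smoke_level=0.8, reservoir=100.0, shut_off_after=3))  # 30.0
```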

Although an open-loop phenomenon, buffering must be considered a simple kind of regulation. It signifies passive moderation or absorption of deviations or perturbations and lacks active intervention. In the long run, buffering is unable to maintain any desired values. An example is the walls of a heated room, which act as a temperature buffer. Another is a dam, which can also be considered a rain buffer.

The regulatory mechanisms of closed-loop systems are called feedforward and feedback. Feedforward is an anticipatory control action, intended to produce a predicted, desired state in the future. (To meet events in advance as preparation for the future, living systems are always adjusted to a future state which does not yet exist.) The process uses information from the input, in contrast with negative feedback (as we will see), which uses information from the output. It acts on a systemic element after the activation of the input, but before the output actually occurs. The action is chosen so that its effect on the output cancels out the effect of the deviation in the same output. In a feedforward system, behaviour is preset according to some model relating present inputs to their predicted outcomes. Thus the present change of state is determined by an anticipated future state, computed in accordance with some internal model of the world.

Feedforward occurs before an event and is part of a planning loop in preparation for future eventualities. It provides information about expected behaviour and simulates actual processes. Feedforward behaviour therefore seems goal-directed or teleological. The goal is in fact built in as part of the model, which transduces between predicted future states and present changes of state. Making a budget and stating goals for an organization are examples of feedforward activities; see Figure 2.9. Another example is the planning done by the captain of a supertanker in order to pass a confined strait: he uses feedforward to arrive at the right place at the right time.


Figure 2.9 Feedforward loop.
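
As an illustration of feedforward, the sketch below computes a control action from the measured input disturbance through an internal model, before any output error has appeared. The heating scenario and its coefficient are assumptions made for the example.

```python
# Feedforward sketch: control action computed from the measured INPUT
# (the disturbance) via an internal model, before any output error exists.
# The heat-loss model and its coefficient are invented for illustration.

HEAT_LOSS_PER_DEGREE = 0.8   # model parameter: kW lost per degree C of gap

def feedforward_heating(indoor_setpoint, outdoor_forecast):
    """Choose heater power from the PREDICTED disturbance, not from a
    measured indoor-temperature error (that would be feedback)."""
    predicted_loss = HEAT_LOSS_PER_DEGREE * (indoor_setpoint - outdoor_forecast)
    return max(0.0, predicted_loss)   # heater power in kW

print(feedforward_heating(indoor_setpoint=21.0, outdoor_forecast=-4.0))  # 20.0
```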

Feedback is a basic strategy which allows a system to compensate for unexpected disturbances. This is done through feedback loops that maintain certain variables constant or regulate the types and amounts of particular components. Feedback is often defined as the 'transmission of a signal from a later to an earlier stage'. Information concerning the result of the system's own actions is thus delivered as part of the information on which continuous action is based. As a control mechanism, feedback acts on the basis of actual rather than expected performance. It is thereby error-actuated, and serves to correct a system performance which is already deteriorating. Feedback is a key concept in cybernetics. When the negative feedback of a system disappears, the stable state of the system vanishes: gradually its boundaries dissolve and after a while it ceases to exist. A metaphysical limerick has been dedicated to feedback by an anonymous poet.

Said a fisherman at Nice,

‘The way we began was like this

A long way indeed back

In chaos rode Feedback

And Adam and Eve had a piece.’

A generalized theory has been developed to describe the behaviour of closed-loop systems and of systems containing a number of interacting elements using feedback.

Understanding of feedback phenomena started in the 1800s. James Maxwell, father of the famous Maxwell equations of electrodynamics and creator of Maxwell's demon, was once contacted by steam engineers. They wanted him to figure out why the governors on their engines did not always work properly; sometimes the steam engine exploded. Maxwell analyzed the engine under changing load as a system of non-linear differential equations and concluded that the system would follow one of four alternatives. The nature of these alternatives is explained in Figure 2.12. His analysis was the first explicitly cybernetic investigation of a regulatory system.

System conduct may however become very complex if several feedback elements are interconnected; the resulting dynamics will often be difficult to calculate. The main concepts of this generalized theory are presented in the sections below.

Negative feedback is a fraction of the output delivered back to the input, regulating the new output with a multiplier smaller than one. This kind of feedback tends to oppose what the system is already doing, and is thus negative. An increase in the feedback level generates a decrease in the output, providing self-correction and stabilization of the system: 'more leads to less, and less to more'. Measured values of output activities are compared with desired values or reference standards, continuously or at intervals, and the corresponding input activities are increased or reduced in order to bring the output to the desired level. This is sometimes referred to as error nulling. Systems with feedback automatically compensate for disturbing forces that are not necessarily known beforehand. The principle of the negative feedback loop is shown in Figure 2.10. Negative feedback embodies the idea of diminishing returns, a dying-away tendency: the more you do of anything, the less useful, less profitable or less enjoyable the last bit becomes. In the market, this phenomenon ensures that no company or product can grow big enough to dominate it.
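
A minimal sketch of such error nulling: a gain smaller than one multiplies the deviation between measured output and reference, and the correction opposes the deviation. All values are invented for the illustration.

```python
# Negative-feedback sketch: the measured output is compared with a
# reference, and the error drives the correction in the OPPOSITE
# direction ("error nulling"). Reference and gain are illustrative.

REFERENCE = 100.0   # desired output value
GAIN = 0.5          # loop multiplier < 1 gives smooth convergence

def regulate(output, steps=8):
    for _ in range(steps):
        error = REFERENCE - output     # compare measured value with standard
        output += GAIN * error         # correction opposes the deviation
        print(round(output, 2))
    return output

regulate(output=20.0)   # 60.0, 80.0, 90.0, 95.0, ... approaches 100.0
```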

A device which acts continuously on the basis of information in order to attain a specified goal during changes, called a servomechanism, is an example of applied negative feedback. Its minimal internal structure consists of a sensor, an effector and a connecting link. Simple servomechanisms are James Watt's centrifugal regulator from the 18th century and the contemporary rudder machinery on steamships, which adjusted the steering angle. For Watt's centrifugal regulator, see Figure 2.11.

The perfect servomechanism corrects errors before they occur. Its smooth, co-ordinated activity depends upon the amount of compensatory feedback. Both under- and over-compensation generate oscillations that are more or less harmful to the regulated system. Another example is the simple but reliable pneumatic autopilot of the DC-3 aircraft. Corrections within predefined settings (altitude, course) are handled by the system, while changes of the settings themselves (a new course, etc.) are determined by its suprasystem, here the pilot. Directions (route, schedule) are given by the supra-suprasystem, the flight traffic control.

A control mechanism can also be discontinuous. An example is the simple thermostat, which can only perform two actions: turn the heat on or turn it off. Discrete control of this kind is common in all kinds of modern electronic equipment.
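
A sketch of such two-action (on/off) control follows; the temperatures and the hysteresis band, which prevents rapid switching around the setpoint, are invented for the example.

```python
# Discontinuous (on/off) control sketch: the thermostat knows only two
# actions. Temperatures and the hysteresis band are illustrative.

def thermostat(temp, heater_on, low=19.5, high=20.5):
    """Turn heat on below the lower bound, off above the upper bound;
    inside the band, keep the current action to avoid rapid switching."""
    if temp < low:
        return True       # heat on
    if temp > high:
        return False      # heat off
    return heater_on      # inside the band: no change

print(thermostat(18.0, heater_on=False))  # True  - too cold, switch on
print(thermostat(21.0, heater_on=True))   # False - warm enough, switch off
```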

Figure 2.10 Feedback loop.

Figure 2.11 James Watt’s speed controlling centrifugal regulator. Engine speed change generates counteracting forces from the regulator. The steam is choked or released, thereby returning the engine to normal operating speed.

If the multiplier is greater than one, a state of positive feedback exists. In this kind of regulation, each new output becomes larger than the previous one, with exponential growth and a deviation-amplifying effect. A positive feedback mechanism is always a 'run-away' and temporary phenomenon. Positive feedback implies deviation amplification, often like a vicious circle, while negative feedback is deviation correction. It can be recognized in events like exponential population growth, the international arms race, financial inflation and the compound interest of a bank account. Its self-accelerating loop is normally brought to a halt before the process 'explodes' and destroys the system: a negative feedback inside or outside the system will sooner or later restore more normal behaviour. See the diagrams in Figure 2.12, which present the nature of negative and positive feedback.
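
The role of the loop multiplier can be shown in a few lines; the gain values are arbitrary illustrations.

```python
# The loop multiplier decides the regime: each new output is the gain
# times the previous one. Gain < 1 dies away (negative feedback);
# gain > 1 runs away exponentially (positive feedback).

def iterate(gain, x=1.0, steps=6):
    values = []
    for _ in range(steps):
        x = gain * x          # each new output is gain times the previous
        values.append(round(x, 3))
    return values

print(iterate(gain=0.5))   # [0.5, 0.25, 0.125, ...]  dying-away tendency
print(iterate(gain=1.5))   # [1.5, 2.25, 3.375, ...]  'run-away' growth
```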


Figure 2.12 The nature of negative and positive feedback.

In diagram A, a normally functioning negative feedback loop is shown: the deviation smoothly brings the process back to the standard or reference value. In diagram B, the correction is slightly over-compensated, resulting in successively damped oscillations. A smoothly growing positive feedback loop is shown in diagram C. In diagram D, a negative feedback loop with over-compensation, resulting in a positive feedback situation, is shown.

The combined effects of an emerging positive feedback inhibited by a growing amount of negative feedback most often follow the non-linear logistic equation, which exhibits sigmoid growth. The effect of a shift in loop dominance in a population-growth diagram is shown in Figure 2.13. The loops change dominance when the population reaches half of its maximum: negative feedback keeps the system in check, just as positive feedback propels it onwards.

Figure 2.13 Shifting of loop dominance in population-growth diagram.
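
A simple sketch of this shift in loop dominance uses the discrete logistic equation; the parameter values are invented for the example.

```python
# Logistic-growth sketch: positive feedback (reproduction) dominates
# while the population is small; negative feedback (crowding) takes
# over once it passes half the maximum K. Parameters are illustrative.

R, K = 0.8, 1000.0    # growth rate and carrying capacity

def logistic_step(n):
    """dN = r*N*(1 - N/K): the first factor is the amplifying loop,
    the bracket is the inhibiting loop that grows with N."""
    return n + R * n * (1.0 - n / K)

n = 10.0
for _ in range(15):
    n = logistic_step(n)
print(round(n, 1))    # approaches K = 1000; growth is fastest near K/2
```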

The elementary negative feedback presented here operates according to a preset goal; the only possibility is to correct the deviation. Conditional response is impossible inasmuch as no alternative exists, and the regulation normally works exponentially toward the equilibrium state. This kind of direct, deterministic regulation is called first order negative feedback. Second order negative feedback is defined as feedback based on other feedback; it is thus more indirect than that of the first order, which comes either from an immediately preceding step or directly from monitoring. This more indirect second order regulation causes sinusoidal oscillations around an equilibrium if undamped. If damped by a first order feedback, the regulation will follow a damped sinusoidal curve. See the curves in Figure 2.14.

Higher order negative feedback regulation also operates with oscillations around an equilibrium. Over-reacting feedback chains can bring about a growing reaction amplitude, rendering the system unstable. To be stable, the regulatory mechanisms have to be adequately damped; the system's own friction is often enough to perform this function.

Figure 2.14 Second order feedback with sinusoidal oscillations.
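
The oscillatory character of second order feedback can be imitated numerically: in the sketch below the correction acts on the rate of change of the deviation rather than on the deviation itself, and a first order term supplies the damping. All coefficients are invented for the example.

```python
# Second-order feedback sketch: the correction acts on the rate of
# change rather than directly on the value, which yields oscillation
# around the equilibrium; an added first-order term damps it.
# Coefficients are illustrative assumptions.

STIFFNESS = 0.4   # second-order loop strength
DAMPING = 0.15    # first-order loop strength (set to 0 for a pure sine)

x, v = 1.0, 0.0   # deviation from equilibrium and its rate of change
for step in range(20):
    a = -STIFFNESS * x - DAMPING * v   # feedback acting via acceleration
    v += a
    x += v
    print(f"{step:2d} {x:+.3f}")       # damped sinusoid around zero
```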

Different levels of goal-seeking as a cybernetic feedback process have been proposed by Karl Deutsch (1963). His goal-seeking hierarchy with four levels may be compared with Ackoff’s behavioural classification of systems (see p. 72).

  • First order goal-seeking: This stands for immediate satisfaction, adjustment, reward.
  • Second order goal-seeking: Self-preservation is achieved through the preservation of the possibility of seeking first order goals, by control over the first order goal-seeking.
  • Third order goal-seeking: Preservation of the group, species, or system requires control over first and second order goal-seeking beyond the individual life-span.
  • Fourth order goal-seeking: Preservation of the process of goal-seeking has priority over the preservation of any particular goal or group as such. This is in effect the preservation of the relationships of the ecosystem.

Sometimes it is necessary to distinguish between extrinsic and intrinsic feedback. Extrinsic feedback exists when the output crosses the system boundary and is modified by the environment before re-entering the system. Intrinsic feedback prevails when the output is modified internally, within the system boundary. While the concept of feedback is generally defined as intrinsic, from the system's point of view both types are equal; normally the system is unaware of the actual feedback type. See Figure 2.15.

Figure 2.15 Extrinsic and intrinsic feedback. 

In cybernetic control cycles, time plays an important role. Variations in speed of circulation and friction between different elements of the system can occur. Such delays and lags are important regulatory parameters which counteract the inherent oscillatory tendencies of a feedback control process. They are often employed to set physical limitations on the system, slowing down the action dynamically.

Important variables (especially the output) are thereby prevented from jumping abruptly from one value to another. The significance of time in a control cycle can be characterized as follows:

A delay can completely inhibit a regulatory action for a certain amount of time, after which the action starts with full impact. A lag is a gradual regulatory force, reaching its full impact after a certain amount of time. Feedback systems with lags may have destabilizing effects, with pertinent loss of control. The effects of delays and lags combined are even more devastating (see Figure 2.16).
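
The difference can be made concrete in a few lines of Python; the step signal and the parameter values are invented for the example.

```python
# Delay vs. lag sketch: a pure delay passes the signal unchanged but
# d steps late; a first-order lag responds at once, yet reaches full
# impact only gradually. Values are illustrative.

from collections import deque

def delayed(signal, d=3):
    """Pure delay: no effect for d steps, then full impact."""
    buf = deque([0.0] * d)
    out = []
    for s in signal:
        buf.append(s)
        out.append(buf.popleft())
    return out

def lagged(signal, alpha=0.3):
    """First-order lag: output moves a fraction alpha toward the input."""
    y, out = 0.0, []
    for s in signal:
        y += alpha * (s - y)
        out.append(round(y, 3))
    return out

step = [1.0] * 8
print(delayed(step))   # [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(lagged(step))    # [0.3, 0.51, 0.657, ...] creeping toward 1.0
```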

The feedback processes presented here operate in a variety of control systems. Their main function is to keep some behavioural variables of the main system within predefined limits. The end objective is to maintain an output that satisfies the system requirements. The ideal control system produces a regulation which completely cancels out the effect of possible disturbances. It behaves differently during different states but only in one way during identical states.

One must understand that the working principle of cybernetic control systems is to process information, not to transform energy. They therefore have an almost nonexistent consumption of energy and a high efficiency. This regulation must not be confused with some kind of amplification of the regulated flow, even if such an amplification may exist in this connection.

The general control system, with its five basic control steps, works according to the following:

  1. A control center establishes certain desired goal parameters and the means by which to attain them.

Figure 2.16 Delays and lags in control cycles.

(Reprinted with permission from J.P. van Gigch, Applied General Systems Theory, Harper & Row, NY, 2nd Ed., 1978.)

  2. Goal decisions are transformed into action outputs, which result in certain effects on the state of the system and its environment.
  3. Information about these effects is recorded and fed back to the control center.
  4. The control center tests this new state of the system against the desired goal parameters to measure the error or deviation of the initial output response.
  5. If the error leaves the system outside the limits set by the goal parameters, corrective output action is taken by the control center.
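
A minimal sketch of this five-step cycle follows, using an invented plant (a leaky water tank) so that each step has something concrete to act on; all constants are assumptions made for the example.

```python
# Sketch of the five-step control cycle described above. The plant
# (a leaky water tank) and all constants are invented for the example.

GOAL = 50.0          # step 1: desired goal parameter (tank level)
TOLERANCE = 1.0      # limits set by the goal parameters
LEAK = 2.0           # environmental disturbance per cycle

level, inflow = 30.0, 0.0
for cycle in range(12):
    level += inflow - LEAK            # step 2: action output affects the state
    measured = level                  # step 3: effect recorded and fed back
    error = GOAL - measured           # step 4: new state tested against goal
    if abs(error) > TOLERANCE:        # step 5: corrective action if outside limits
        inflow = LEAK + 0.5 * error
    else:
        inflow = LEAK                 # hold: just compensate the leak
    print(f"cycle {cycle:2d}: level {level:5.1f}")
# The level climbs from 30.0 and settles inside the tolerance band at ~49.3.
```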

The first fundamental component of the regulatory mechanism in the basic control cycle is the receptor (sometimes called sensor or detector), a device registering the various stimuli which, after conversion into information, reach the controller unit (see Figure 2.17).

Figure 2.17 A general control system.

A comparison is made between the receptor value and a desired standard stored in the comparator (discriminator). The difference provides a corrective message which is implemented by the effector (activator). Through monitoring and response feedback to the receptor, self-regulation is achieved. Figure 2.17 shows that the regulation takes place on the input side while the sensing mechanism is situated on the output side. In more sophisticated systems with third-order feedback, the controller also includes a goal-setter with its reference standard, a decider (selector) and possibly a designer which formulates both the goals and the decision rules of the system.

To summarize: a controlled system must be able to read the state of an important variable and examine whether this is under, on, or above the permitted value. This is done by the system's detector. The system must therefore possess certain preferences; the handling of these preferences is managed by the selector, which chooses the alternative best corresponding to them. Moreover, the system must be able to act in order to realize the preferences if they are not already fulfilled. This function is performed by the effector. Detectors and effectors are the analogues of eyes and muscles.


Figure 2.18 Diagram of a learning system.

A = Receptor; B = Educable decision unit; C = Effector; D = Comparator; E = Goal-setter.

  • The detector receives information from the environment
  • The selector chooses an appropriate reaction
  • The effector executes the chosen response

We have seen earlier that one of the most significant advantages of living systems is adaptation achieved by learning. This advantage is, however, not restricted to living systems; machines working according to cybernetic principles may also be able to learn. If information moving backward from a system's performance is able to change the general method and pattern of performance, it is justifiable to speak of learning. A general cybernetic pattern for a system capable of learning is shown in Figure 2.18.

The input information enters the system via the receptor and reaches the educable internal decision unit. After processing, the information reaches the effector and there becomes an output. The behaviour of the decision unit is, however, not predetermined. (Through a double path, the same input, as well as the output decision, is simultaneously led to an evaluating mechanism.)

From the evaluating mechanism, the comparator, a parallel path leads to the decider. The comparator simultaneously receives the same input as is given to the receptor, and the same output as is delivered from the effector. The decision unit compares the cause in the input with the effect in the output, on the basis of the evaluation criteria stored in the comparator. If the decision is correct or 'good', the decider is 'rewarded'; if incorrect or 'bad', the decider is 'punished'. In reality this results in a modification of its internal parameters, which is a kind of self-organization and learning.

There is of course a risk of confusing self-organization with learning. All systems able to learn must necessarily organize themselves, but systems can organize themselves without learning. The faculty of modifying its behaviour and adapting is not in itself sufficient for a system to be regarded as a learning system. The point is that the rules must be adjusted in such a way that successful behaviour is reinforced, whereas unsuccessful behaviour results in modification. Thus, the important thing is the internal modification of the information transfer.
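
In that spirit, the sketch below lets a comparator's verdict modify a single internal parameter of the decision unit until its decisions match the environment's hidden rule. The task and all values are invented for the illustration.

```python
# Learning-system sketch in the spirit of Figure 2.18: the comparator
# judges each output against the input, and the decision unit's
# internal parameter is reinforced or modified accordingly. The task
# (guessing a hidden threshold) is an invented illustration.

import random

HIDDEN_RULE = 0.6          # environment's true threshold (unknown to learner)
threshold = 0.0            # educable decision unit's internal parameter
STEP = 0.05

for trial in range(200):
    stimulus = random.random()            # receptor: input from environment
    decision = stimulus > threshold       # decider: current internal rule
    correct = stimulus > HIDDEN_RULE      # comparator: evaluation criterion
    if decision != correct:               # 'punished': modify the parameter
        threshold += STEP if decision else -STEP
print(round(threshold, 2))                # drifts toward 0.6
```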

When defining living systems, the term dynamic equilibrium is essential. It does not imply something steady or stable. On the contrary, it is a floating state characterized by invisible movements and preparedness for change. To be in dynamic equilibrium is to adjust adaptively toward balance. Homeostasis stands for the sum of all control functions creating the state of dynamic equilibrium in a healthy organism: the ability of the body to maintain a narrow range of internal conditions in spite of environmental changes. All systems do, however, age and, from a certain point of maturation, slowly deteriorate toward death. This phenomenon is called homeokinesis and gave rise to the concept of the homeokinetic plateau, depicted in the diagram of Figure 2.19.

This constant deterioration can be compensated for by extended control and mobilization of resources within the limits of the homeokinetic plateau. Here the negative feedback is stronger than the positive, and a temporary homeostasis can be maintained within the thresholds of the plateau. Below and above the thresholds, the net feedback is positive, leading to increased oscillations and finally to the collapse of the system. The only alternative to breakdown for a system moving outside the homeokinetic plateau is adaptation through a change of structure. Such adaptation is, however, beyond the capabilities of an individual organism.

Figure 2.19 The homeokinetic plateau.

The homeokinetic plateau is a quite natural part of what can be called a system life cycle. In living systems this consists of birth, evolution, deterioration, and death. In non-living systems, such as more advanced artefacts, the system life cycle can be divided into the following phases:

  • identification of needs
  • system planning
  • system research
  • system design
  • system construction
  • system evaluation
  • system use
  • system phase-out

The coherence between the different phases is further demonstrated in Figure 2.20. Note that the first phase can be considered a consumer phase, the intermediate phases producer phases, and the last two again a consumer phase.

Living or non-living, the following four options for system annihilation exist: accident, predation (murder), suicide and natural causes.

Figure 2.20 System life cycle of advanced artefacts.

Finally, a concept sometimes used is that of second order cybernetics. The distinction between this and first order cybernetics is based on the difference between processes in a subject which observes and those in an object which is observed. Another definition is the difference between the interaction between observer and observed in an autonomous system (second order) and the interaction among the variables of a controlled system (first order). Second order cybernetics thus implies that the observer is always a participant, interacting with the system.

Source: Skyttner, Lars (2006), General Systems Theory: Problems, Perspectives, Practice, 2nd edition, World Scientific Publishing Co.
