We start our considerations with a simple question: What is reality? Auguste Comte (1798-1857) – the founder of positivism and of sociology – provided a simple answer: just open your eyes! This sounds quite different from appreciating the human capability of understanding as a precious evolutionary gift to be treated with grateful humility. We had better turn to Ernst Mach (1838-1916), who significantly influenced the further evolution of physics, psychology and philosophy. Freely interpreted and translated, he saw understanding as a two-step process: the adaptation of thoughts to validated observations, followed by the adaptation of thoughts to one another. The first step is concerned with human perception, the second with deriving comprehensive theories.
Imagine a scene of four experienced systems engineers at a systems engineering workshop discussing the subject of this page. One asks the others to write down, in a single sentence, what catches their attention outside the window. The answers were:
The workshop took place in a former hometown of one participant. Most likely, it is easy for you to identify the sentence recorded by this person. Do you have any clue why? As a further hint, it is the same functional cognitive principle that makes driving on unknown roads feel more strenuous than driving on familiar roads for the same time under similar conditions.
A short excursion into the history of scientific psychology seems necessary. It should provide confidence that the statements on psychology below and throughout the website are not based on a single bold hypothesis among many. All statements about psychology and its role in systems engineering are an attempt to synthesise state-of-the-art knowledge with supporting evidence from philosophy, psychology and the neurosciences.
Ancient humans introduced a body-soul dualism to explain death. René Descartes (1596-1650) transformed the body-soul dualism into a body-mind dualism. He postulated that the mind constitutes the self: “Cogito ergo sum.” This view culminated in the Enlightenment and survives in reminiscences to this day. David Hume (1711-1776) investigated human thinking by philosophical introspection. Wilhelm Wundt (1832-1920) founded experimental psychology by applying experimental approaches known from the natural sciences to psychological research. The further evolution of scientific psychology emancipated psychology from philosophy as a distinct discipline. John B. Watson (1878-1958) proposed a radical approach by rejecting introspection as a research method altogether. Only physical measurements of stimuli and of reactions to stimuli are considered in behaviourism. In a collateral simplification, he denied any influence of the complex architecture of the nervous system on the brain’s functionality. Despite the boldness of this assumption – the human brain being the most complex organ produced by the biosphere’s evolution so far – behaviourism found many followers for several decades. The Gestalt psychologists around Max Wertheimer (1880-1943), Wolfgang Köhler (1887-1967) and Kurt Koffka (1886-1941) aimed to understand the brain’s internal functions by combining physical measurements with personal statements from the test persons in their experiments. Advances in diagnostic measurement technologies, especially electroencephalography (EEG) and magnetic resonance imaging (MRI), enabled Ulrich Neisser (1928-2012) to merge both approaches into cognitive psychology. From behaviourism, cognitive psychology inherits continuity of methods; from Gestalt psychology, continuity of research results. With our knowledge today, René Descartes’ body-mind dualism is falsified: the mind is a body function. Antonio Damasio provides significant evidence that the mind does not just reside on top of the body; the mind and other body functions interact in both directions.
Our behaviours are reactions to information delivered by our senses. This much can safely be assumed to be true. Many hypotheses exist about how sensory information is transformed into behaviour, but a complete and consistent theory covering all details is missing. It is worth considering several viewpoints in order to search for supporting and falsifying arguments. The simplest model says: behaviour is determined solely by the information sensed at the same moment. Only radical behaviourists promote such naïve positivistic views. Pragmatists emphasise the essential role of experience: more experienced people act more prudently in the scenarios they are experienced in than inexperienced people. Are genetic predispositions influential as well? In contemporary argumentative battles, the influence of genes and experience has frequently been discussed as an exclusive either-or. Research on creativity provides indications of the genes’ impact in addition to the impact of deep experience gained over time in the particular field. In families with a member creating innovations of high social impact, other members are more likely to show similar traits and to become famous for their creativity as well. On the other hand, family members with pathological mental characteristics are also observed with a significantly higher probability than in the overall population. Extending the scope, we may take the fact that all vertebrates share and extend the same organic brain architecture as further evidence for the influence of genes. Different vertebrate species share many behaviours but also show many rather different traits. Epigenetic impacts have been discovered as well; however, research on epigenetics is at an early stage. Considering that evolution tends to optimise solutions continually, it is questionable why copying the genome in every cell division – a genome containing so much material not identified as genes so far – should be an exception. In conclusion, it is advisable to consider a wide range of contributing factors in addition to sensory impressions: genetics, epigenetics and experience. Considering the human capability to use argumentative language, experience includes learning from explanations and theories.
Generating analogies is an initial step in the derivation of theories, as discussed below in more detail. If you get stuck in progressing towards a mature theory, you may be tempted to draw conclusions from analogies as if they were a mature theory. For example, in the Renaissance the secret of life was explained by anatomists as a kind of fluid management: body fluids circulate through circuitry of pipes and valves representing the body’s organs. There are voices saying that Descartes used this analogy as supporting evidence to justify a body-mind dualism. He may not have been convinced that the complexity of the mind could be implemented by body fluid circuitry alone. The contemporary mind analogy is the digital computer according to the architectural principles established by John von Neumann (1903-1957). There are some indications that John von Neumann, like many of his contemporaries, fostered and believed in the equivalence in principle of digital computers and the human brain, despite all the differences already known. Terms like neural network or artificial intelligence implicitly claim capabilities of the human brain as achievable by digital computers. In turn, terms like neural disk and the search for a central processing unit implementing consciousness within the brain trivialise the complexity of the brain’s architecture and functionality. All this is as far-fetched as calling Democritus, who coined the term atom, the first nuclear physicist.
Although no mature, comprehensive theory of the human brain exists to date, psychology and the neurosciences have progressed by investigating the brain’s functionality from both ends. By now, the principles of human perception are quite well understood, and we know how motivations lead to actions. For everything in between, some brief hypotheses with supporting evidence are available, but they do not match the maturity of the research status on human perception and motivation.
Human perception is a rather complex and resource-consuming activity. Evolution found a solution with remarkably low energy consumption that provides the real-time capabilities required to cope successfully with specific scenarios from the overall living environment. The nerve pulses from all senses are processed and filtered in multiple steps, applying techniques and algorithms implemented by the brain’s organic structure. These processes include augmentations from contextual models that represent an expected state of reality combined with forecasts of how the situation may develop. As long as reality matches expectations, the individual reacts by adjusting the internal body state and by performing appropriate behaviour without any specific attention. In behaviour controlled this way, mainly the sensory cortex and the motor cortex show significant activity in MRI scans. The brain’s activity patterns change when observations do not match the expectations from the contextual models. Then attention processing starts, with activities of varying intensity all over the cerebral cortex. Astonishingly, some areas may even be inhibited by lowering the frequency of nerve pulses below the idle frequency required to maintain the electro-chemical equilibrium. One third of a second later, a specific brain wave indicates that the processing proceeds to a moment of consciousness.
The known intricacies of human perception falsify naïve positivism as well as radical behaviourism. The other extreme position would be to challenge the existence of a real world altogether by postulating that human world views are mere constructions. Indeed, what is more real: our impressions of colours or the theory of electromagnetic waves? Ernst Mach argued for a positivism based on our sensory experiences. Later logical positivists of the Vienna Circle, such as Rudolf Carnap (1891-1970), promoted a unity of science based on a universal language for the expression of scientific theories. As the claims for a unity of science and a universal language are unproven assertions, the pragmatism of Ernst Mach, which avoids further absolute presumptions, is preferable for engineering innovative and complex systems.
It is time to draw conclusions about how human perception influences systems engineering and what we can do to make good use of human capabilities:
According to Ernst Mach, the adaptation of thoughts to one another is based on four principles: analogy, economy, continuity and abstraction. Analogy means comparing new scenarios with already known ones. Economy is characterised by the search for causal explanations able to describe experiences and to forecast future developments reliably with the simplest theory possible. Continuity means that a theory remains valid and applicable under variations and small perturbations. Abstraction simply comprises disregarding everything beyond the scenario under consideration. Remarkably, what Ernst Mach found out by pure introspection about how we construct theories resembles today’s understanding of how human perceptions are augmented by mental models. His arguments refute naïve positivism and laid a foundation that supported later scientists in formulating relativity theory and quantum mechanics. Decades later, the mathematician, physicist and philosopher Hans Reichenbach (1891-1953) emphasised the role of observations conflicting with existing world views in these physical discoveries. For good reasons, the falsification principle introduced by Karl Popper (1902-1994) may be interpreted as an instantiation of human attention processing.
The most important conclusion from the preceding paragraph is that our theories do not establish a kind of higher reality. They merely provide views on reality, filtered through the capabilities of our human brains. Because evolution equipped us with a nervous system that made us the dominant species on Earth today, we may assume that our theories provide us with reasonable images of reality. However, we know from all past experience that our capabilities to forecast comprehensive scenarios of the future are rather limited in general. Is this all we need to consider to succeed in systems engineering? The answer is a clear no. When Norbert Wiener (1894-1964) invented a general feedback theory, theoretical models of unprecedented complexity became feasible, far beyond what Ernst Mach and his contemporaries could have imagined. Due to the far-reaching implications, it is worth reviewing the four principles in the light of complexity.
Plain analogies may be drawn in four ways. The new scenario is the same as a known one, for example another system with two mechanical bodies of arbitrary shape. Alternatively, the new scenario is a simplification or a complication of a known one; for example, the scenarios show two bodies with certain symmetries as a simplification, or more mechanical bodies as a complication. The fourth option is a transfer to a rather different scenario. Just think of the similarities between the dynamics of two-body problems and the population dynamics of two species, one herbivorous and the other carnivorous.
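To make the last kind of analogy concrete, the population dynamics of a herbivorous prey species and a carnivorous predator species is classically captured by the Lotka-Volterra equations; the following is the standard textbook form, included here only as an illustration:

```latex
% Lotka-Volterra predator-prey model (textbook form)
% x: prey population, y: predator population
% a: prey growth rate, b: predation rate,
% c: predator reproduction per prey consumed, d: predator death rate
\[
\begin{aligned}
\dot{x} &= a\,x - b\,x\,y \\
\dot{y} &= c\,x\,y - d\,y
\end{aligned}
\]
```

Like the bounded gravitational two-body problem, this is a pair of coupled nonlinear ordinary differential equations with a conserved quantity and periodic orbits, which is what makes the analogy tempting; the coupling terms, however, stand for entirely different mechanisms.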
In more complex scenarios, analogies to several theories may apply at once. The temptation to transfer existing theories as a simple superposition to the new scenario is usually misleading. For example, suppose a magnetically levitated vehicle shall be modelled as a magnetically tethered flying object in order to investigate aerodynamic influences in detail. It would be easy to build on the existing system equations for magnetic levitation and for flight dynamics and, in a second step, to search for coupling terms between the two subsystems. This turns into a daunting task that would better never have been started: you end up with coupling terms that are not really traceable to physical phenomena and with an overdetermined system. Newton’s second law cannot be applied to the individual subsystems in isolation; it must be applied once to the whole system. Only then are emergent functions and features of the whole system, not deducible from the subsystems in isolation, modelled accurately. In conclusion, building theories of previously unknown complex systems by analogy is rarely expedient without starting from the whole and then proceeding to the details.
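As a minimal sketch of what applying Newton’s second law once to the whole system means for the levitated vehicle, consider the single momentum balance below; the symbols are illustrative assumptions, not taken from any particular design:

```latex
% One equation of motion for the whole vehicle of mass m:
% all forces act on the same body and share the same state variables.
% F_mag: magnetic force, depending on position r and coil current i
% F_aero: aerodynamic force, depending on position and velocity
\[
m\,\ddot{\vec{r}} \;=\; \vec{F}_{\mathrm{mag}}(\vec{r}, i)
\;+\; \vec{F}_{\mathrm{aero}}(\vec{r}, \dot{\vec{r}})
\;+\; m\,\vec{g}
\]
```

Writing separate momentum balances for a levitation subsystem and a flight subsystem and adding coupling terms afterwards effectively introduces a second, redundant balance for the same mass, which is why the naive superposition ends up overdetermined.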
Furthermore, the example also shows that complex problems need to be solved by multidisciplinary endeavours. Application-specific training upfront is impossible in most cases due to the manifold nature of complex problems. The experience has to be gained together on the job, by starting early with conceptual development and by following a sound systems engineering approach throughout development.
In most scientific theories and their applications, the system scope for explanation according to the economy principle is the same as the scope used with respect to the abstraction principle. For example, the classical physical theory of mechanics promises a time-reversible world, ignoring that friction cannot be fully understood without thermodynamics and the recognition of entropy. In technical applications, lubrication, bearings and cooling are the main means to reduce adverse impacts. With reference to the abstraction principle, it is commonly claimed that – relying on global compensation – adverse impacts outside the system boundaries generate only small disturbances in the wider system environment. This approach reaches its limits when the technical system causes significant adverse impacts in the wider system environment beyond the system scope considered. In particular, whenever a significant portion of the available resources is consumed, the abstraction principle becomes questionable long before global resource limits are in sight.
In a sound systems engineering approach, all potentially significant adverse impacts along the complete system life cycle need to be considered. Traditionally, continuous risk management, starting early with conceptual system development, is expected to identify all technical and non-technical risks. Identified risks are further processed towards risk avoidance, risk elimination, risk reduction through mitigation, or acceptance of residual risks, each imposing requirements on the system. In general, risk management also applies to the impact of system features and functions beyond the abstraction boundaries. Where modelling capabilities allow the inclusion of functional interactions beyond the abstraction boundaries, it is preferable to extend those boundaries appropriately and to treat the effects as an integral part of system design. Consequently, the system scope according to the economy principle on the one hand and the abstraction principle on the other may differ in engineering applications while the objectives remain unchanged.
Alternatively, the system objectives may be adapted so that the system boundaries according to both principles coincide. In system theory, this would be possible as long as controllability is given for the full system space in which observability is granted. Always taking this route sounds more convincing than it actually is. The economy principle focusses on strong causal dependencies expressible as high correlations between system state variables. With increasing complexity, low correlations between system state variables may be observed as well. Controlling highly correlated system state variables leads to efficient system solutions; with decreasing correlations between system state variables, system efficiency diminishes.
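For readers who want the terms controllability and observability made precise, the standard textbook criteria for a linear time-invariant system are recalled below; this is generic control-theory material, not a statement about any particular system discussed here:

```latex
% Linear time-invariant system with n state variables:
%   \dot{x} = A x + B u   (dynamics and control inputs u)
%   y = C x               (measured outputs y)
% Controllability (Kalman rank condition):
\[
\operatorname{rank}\begin{pmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{pmatrix} = n
\]
% Observability:
\[
\operatorname{rank}\begin{pmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{pmatrix} = n
\]
```

Sticking the two boundaries together amounts to requiring that every state variable needed for explanation can also be both observed and influenced.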
We need to keep in mind: high correlations always indicate strong causality. Although low correlations may hint at weak causality, this is not always true, for practical reasons regarding sample rates and observation times. Sample rates and observation times are usually adapted to the frequency range relevant to the observed process. Reasonable observation times do not account for long-term effects in frequency ranges far below that of the observed process. For example, the emissions of greenhouse gases by an individual technical process show low correlation with global warming, yet there is strong causality: the greenhouse gases accumulated in the atmosphere lead to global warming in the long term.
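A minimal numerical sketch of this effect, with entirely made-up numbers and assuming nothing beyond standard NumPy:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Fluctuating short-term emissions of a single technical process (arbitrary units),
# observed over a window that is short compared with the accumulation time scale.
emissions = 1.0 + 0.5 * rng.standard_normal(1000)

# The accumulated stock in the atmosphere is fully caused by the emissions ...
accumulated = np.cumsum(emissions)

# ... yet the sample-to-sample correlation within the observation window is low,
# because the stock changes on a far slower time scale than the emissions fluctuate.
r = np.corrcoef(emissions, accumulated)[0, 1]
print(f"correlation between emissions and accumulated stock: {r:.2f}")
```

The printed correlation stays close to zero although the accumulated stock is completely determined by the emissions, illustrating that a low correlation over a short observation window is no proof of weak causality.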
Sample rates cause a number of further annoying effects and limitations. As an example, think about tuning the highest string of a guitar, taking the lowest string as a reference. The interval is about two octaves, equivalent to a factor of four between the frequencies. When the two frequencies do not match exactly, a beat frequency is audible. Tuning the guitar means reducing the beat frequency to zero. Away from this perfect tuning we hear three tones: the frequency of the lowest string, the frequency of the highest string, and the beat frequency. To detect all three frequencies by time sampling, the sample rate must be at least twice the highest frequency, according to a theorem associated with Claude Shannon (1916-2001), Harry Nyquist (1889-1976) and Edmund Taylor Whittaker (1873-1956). Only under this condition are the two half-waves of a sine wave identifiable unambiguously. If the sampling rate in our example were just three times the frequency of the lowest string, only the frequency of the lowest string and the beat frequency – which may be even lower – would be detectable. The character of the beat frequency as mere interference between two other frequencies would not be restorable. To avoid such artefacts, low-pass filtering of the original signal on the analogue side is required before sampling takes place, also known as anti-aliasing – in our guitar example equivalent to not plucking the highest string. This is not always feasible. The simplest workaround would be to apply anti-aliasing only to the sampled signal. In computer graphics, such low-pass filtering may be applied to smooth contours and colour gradients. However, beat frequencies may still show their presence, for example as moiré patterns giving the impression of periodic colour gradient changes that do not exist in the original. What may be acceptable in computer graphics is not tolerable in automatic control systems, where control activities should only be initiated by true changes of physical values. In conclusion, the necessity of low-pass filtering demands an arbitrary decision to ignore the influence of all signal components above half the sampling frequency. It is convenient to declare all high-frequency parts of a signal to be uncorrelated white measurement noise with no impact on the process under consideration, but this assumption is not always warranted, as discussed further below.
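The guitar example can be reproduced numerically; the following sketch assumes standard tuning frequencies of roughly 82.4 Hz and 329.6 Hz (here detuned to 331 Hz) and uses only NumPy; it is an illustration of the aliasing effect, not a signal-processing recipe:

```python
import numpy as np

f_low = 82.4     # low E string (Hz)
f_high = 331.0   # high E string, slightly out of tune (in tune: 4 * 82.4 = 329.6 Hz)
fs = 3 * f_low   # sampling rate of only three times the low string frequency
t = np.arange(0, 5.0, 1 / fs)

# Sampled sum of both strings.
signal = np.sin(2 * np.pi * f_low * t) + np.sin(2 * np.pi * f_high * t)

# Components above fs/2 ≈ 123.6 Hz cannot be represented: the high string folds
# back ("aliases") to |f_high - fs| ≈ 83.8 Hz, right next to the low string.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
strong = freqs[spectrum > 0.3 * spectrum.max()]
print("significant components (Hz):", np.round(strong, 1))
print("predicted alias of the high string (Hz):", round(abs(f_high - fs), 1))
```

The sampled signal still contains the roughly 1.4 Hz beat between the aliased high string and the low string, but the high string itself has vanished from the detectable spectrum – exactly the loss of information described above.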
On a large scale, these considerations contribute to explaining why market economies, focusing on specific solutions in scope-limited scenarios, show better overall performance than planned economies claiming a comprehensive and holistic view of the economy as a whole. However, a circular economy cannot be built on efficient system processes alone. There are always biospheric processes that do not comply with human expectations regarding efficiency in terms of costs and time scales. As high efficiency means high profits and low efficiency means low profits, market capitalism must be augmented by regulations to ensure investments in low-efficiency system processes for sustainability reasons. In real-life systems engineering practice, accepting abstraction boundaries wider than the system scope under the economy principle characterises mature systems engineering approaches.
Of all four principles, the continuity principle is the most contested one. Warnings about the butterfly effect and the appearance of black swans are frequently raised in all kinds of contexts. From the point of view of human perception and system theory, the continuity principle is the most straightforward one. Assumptions of mathematical continuity and differentiability have an axiomatic character in system dynamics modelled in continuous time. Aleksandr Mikhailovich Lyapunov (1857-1918) provided a rather general definition of stability: stability means that quantitatively limited disturbances lead to reasonably low deviations from the previous steady state. This describes well the augmentation effects of human perception accumulated through personal experience and knowledge. With every new experience something may be learnt that slightly alters human perception in the long term.
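For reference, Lyapunov's notion of stability of an equilibrium can be stated in the usual textbook form; this is standard material, added here only to make the preceding sentence precise:

```latex
% Stability in the sense of Lyapunov for an equilibrium x_e of \dot{x} = f(x):
% every sufficiently small initial disturbance keeps the state within
% an arbitrarily prescribed bound for all later times.
\[
\forall\, \varepsilon > 0 \;\; \exists\, \delta > 0 :\quad
\lVert x(0) - x_e \rVert < \delta
\;\Longrightarrow\;
\lVert x(t) - x_e \rVert < \varepsilon \quad \forall\, t \ge 0
\]
```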
But what about chaos theory, which questions any justifiable trust in a continuity hypothesis? Chaos theory, as initially defined by Edward Norton Lorenz (1917-2008), questions long-term determinism. Convinced advocates warn about unexpected, rapid scenario changes. Indeed, long-term determinism is an illusion in general. Although it is possible to predict roughly when the Earth will become uninhabitable for living creatures, when our solar system will cease to exist, or that multicellular beings like us will die, it is impossible to predict the future at comprehensive levels of detail.
With respect to modelling the future mathematically, several limitations apply: it is impossible to determine true initial conditions, as there are no measurements of arbitrary accuracy, and mathematical models are just generalised approximations of reality, with deviations of the states from the calculated values accumulating over time. Another issue is rarely discussed in chaos theory: the generally assumed distinction between states varying over time and time-invariant, state-independent parameters defining the relations between the states. This distinction, known as the ergodicity assumption, is somewhat contestable in general and does not hold at all in the case of emergent features and behaviours. Then parameters may change their values, additional states may become important, and new relations may need to be considered. All this usually goes beyond what scientists envisage when creating mathematical models.
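A minimal numerical sketch of the initial-condition limitation, using the Lorenz system with its classical parameter values; the step size, duration and perturbation are arbitrary choices for illustration only:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit Euler step of the Lorenz equations (illustrative, not high-precision)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two initial conditions differing by far less than any realistic measurement accuracy.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])

for step in range(1, 5001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}: separation = {np.linalg.norm(a - b):.3e}")
```

Within a few tens of time units, the two trajectories are as far apart as the attractor allows, so the unmeasurably small initial difference dominates any long-term prediction.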
The relevance of chaos theory is illustrated by the so-called butterfly effect: a butterfly flapping its wings in China may eventually cause a tornado in the US. At first the statement may be surprising, but assuming that all models have been established and validated to sound scientific standards and that the simulation results are correctly calculated, the statement stands as it is. For a critical review of the statement, three issues deserve attention: mathematical model fidelity, the implied causality claim, and the quantitative probability.
The butterfly effect was raised in the 1970s. Since then, the quality of mathematical meteorological models has improved remarkably. Finite-element models allow local weather forecasts on a far finer grid than ever before. Most likely, today's meteorological models feature tighter coupling and stronger augmentation than the weather models used in the 1970s. However, the possibility of the butterfly effect has not been falsified, as far as is known.
More debatable is the implied causality claim. A tornado contains much more energy than a butterfly invests in flapping its wings. In the causal relation between the two events, a power amplifier is needed to boost the energy. As an analogy, consider an audio amplifier: the incoming signals are processed in a preamplifier to generate the audio signal transmitted to the power amplifier, which increases the electrical current without distorting the waveform. In this case it is legitimate to claim causality of the preamplifier output for the music propagated by the loudspeakers. It is rather unlikely that meteorological processes act analogously to an audio power amplifier. Instead, the energy boost is achieved by numerous atmospheric interactions. The causality between the butterfly flapping its wings in China and the tornado in the US is a hypothetical causality typical of statistical mechanics: if all molecules in a defined gas volume had the same energy state, the whole volume would exhibit a certain pressure and temperature. Then the pressure and temperature of the whole gas volume could be calculated from the energy state of a single molecule using Ludwig Boltzmann's (1844-1906) theorem. The occurrence probability of such scenarios is of the order of once-in-the-lifetime-of-the-universe events or less. There is no reason to fear catastrophic events as a consequence of chaos theory. The probability of other accidents is higher by several orders of magnitude. Therefore, efficiency, sustainability, safety and security of systems engineered by humans are much more important and urgent concerns. Technical solutions require a clear understanding of causal dependencies to evaluate their desired results and unwanted consequences. References to chaos theory are no excuse for not striving for high-integrity technical systems.
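The kinetic-theory background of that remark can be recalled in its usual textbook form; the relations below are standard results for an ideal gas, not specific to the argument above:

```latex
% Kinetic theory of the ideal gas (textbook relations):
% the mean kinetic energy of the N molecules determines temperature and pressure.
\[
\langle E_{\mathrm{kin}} \rangle \;=\; \tfrac{3}{2}\, k_B\, T,
\qquad
p\, V \;=\; N\, k_B\, T
\]
```

Only if the energy of one molecule happened to be representative of all N molecules could the macroscopic state be read off from that single molecule – which is precisely the once-in-the-lifetime-of-the-universe coincidence referred to above.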
Of course, catastrophic events challenge the continuity principle. Against catastrophic cosmological and geological events humans are rather helpless. Regarding cosmological events, the remote location of our solar system in our galaxy has allowed a continuous evolution, disturbed by collisions with other bodies of the solar system but without ever extinguishing life completely. Similar considerations hold for geological events. Above all, human civilisations would not exist without geological activity. The evolution of the biosphere is simply an integral part of the evolution of the Earth.
We close this section with a discussion of the myth of black swans, events that are unforeseeable and occur by surprise. Strictly speaking, the myth of black swans is less associated with the continuity principle than with the abstraction principle. The origin of the myth is linked to the financial crises of the second half of the 2000s. In theory, stock markets evaluate economic prospects and rate the value of civil and government enterprises according to their evaluation results. In fact, stock markets today are widely dominated by high-frequency trading. Despite the existence of many long-term investors, the average dwell times before securities are traded again are far shorter than the time scales on which economic facts change. Although economic facts still influence the ratings on the stock markets, high-frequency trading reacts more to observed trends in how other traders behave. High-frequency trading is a kind of gambling with the advantage – compared to casino gambling – that losses are ultimately transferred to the general public. The promoters of the myth of black swans want to make the general public believe that high-frequency trading fosters a more agile overall economy. As they have no idea of the distinction between temporally uncorrelated white noise and time-correlated coloured noise – nor of the aliasing caused by their high-frequency trading activities – the occasionally devastating consequences for the overall economy surprise them.
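The distinction between white and coloured noise invoked here can be illustrated with a few lines of code; the first-order autoregressive process below is an arbitrary stand-in for any time-correlated signal, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# White noise: temporally uncorrelated samples.
white = rng.standard_normal(n)

# Coloured noise: a first-order autoregressive process, i.e. time-correlated fluctuations.
coloured = np.empty(n)
coloured[0] = 0.0
for i in range(1, n):
    coloured[i] = 0.99 * coloured[i - 1] + rng.standard_normal()

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

for lag in (1, 10, 100):
    print(f"lag {lag:3d}: white {autocorr(white, lag):+.3f}, coloured {autocorr(coloured, lag):+.3f}")
```

Treating a time-correlated signal as if it were white noise – or sampling it too coarsely – discards exactly the structure that later returns as an apparent surprise.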
Historically, there is a straight line from Ernst Mach to Norbert Wiener. Ernst Mach introduced the role of the human observer into scientific research. The American pragmatists built their thoughts on Ernst Mach's approach, and Norbert Wiener learnt philosophy from leading American pragmatists. As the inventor of a general feedback theory, he widened human modelling capabilities to deal logically with mutual dependencies. Humans have always lived in rather complex environments. The new modelling capabilities allowed an improved understanding of complex causal dependencies and provided probabilistic methods for analysis.
In consequence, these advanced modelling capabilities led to techniques for controlling complex systems and for shaping our world according to human wants to an unprecedented extent. The new knowledge was exploited in an optimistic anything-goes fashion. There is little evidence that much thought was spent on the risks and limits of the advanced scientific capabilities. Over time, however, the challenges and issues became evident. Advances in the theory of science turned more and more into a sociology of science, considering the works of Thomas Kuhn (1922-1996) and – from a systems engineering point of view also relevant – Imre Lakatos (1922-1974).
However, most scientifically educated people continued along the scientific principles successfully applied in the major physical advances of the early 20th century, namely relativity theory and quantum mechanics. General feedback theory was commonly applied as a technique without considering its epistemological implications, some of which have been addressed on this page above. Therefore, the new limits of scientific knowledge are rarely considered before adverse impacts on a complex global civilisation and on the Earth as a human-friendly living environment become evident.
The invention of systems engineering as a multidisciplinary endeavour to cope with complex matters was a necessary step in the right direction. However, if systems engineers act with an anything-goes attitude, systems engineering will fail to deliver on its promises. There are no viable technical solutions for all issues without pervasive societal changes supporting a transition to a sustainable, human-friendly living environment on Earth for future generations. In particular, expectations of new technologies promising unlimited energy and material consumption without interfering with the biosphere and providing human living with a zero environmental footprint are an illusion. In the recent past, electrical power generation from nuclear fuels has not fulfilled such promises, considering especially the unresolved handling of nuclear waste over time spans exceeding the historical period of human civilisations by far. Some of the technology visions promoted today may well be doomed to a similar fate. Technologies staying close to circular biospheric processes may be more promising in the long term, independently of whether they provide maximum efficiency or allow maximum profit according to today's economic accounting criteria in every processing step.