
Complexity Philosophy as a Computing Paradigm

Chris Lucas CALResCo

CALResCo Group (Complexity & Artificial Life Research), Manchester U.K.

(A talk presented at the 'Self-Organising Systems - Future Prospects for Computing' Workshop held at UMIST 28/29 October 1999 in conjunction with the EPSRC 'Emerging Computing' Networks Initiative. Published as the appendix to William Roetzheim's book "Why Things Are - How Complexity Theory Answers Life's Toughest Questions", 2007, ISBN 978-1-933769-26-4 ).

Abstract

Self-organisation imposes a set of axioms that differ in many ways from those usually adopted in scientific work. These assumptions are common to most of the complexity specialisms, and relate to system properties that are uncontrolled, nonlinear, coevolutionary, emergent and attractor rich, as well as being heterarchical, non-equilibrium, non-standard and non-uniform. Additionally, behaviours showing unpredictability, chaotic instability, mutability and phase changes, along with inherently undefined values, self-modification, self-reproduction and fuzzy functionality, add issues that seem inimical to traditional computing approaches. In this paper we attempt an overview of the philosophical implications of complex systems thought, and investigate how this alternative viewpoint affects our attempts to design and utilise adaptive computer systems. We classify the types of complex system that relate to self-organisation and contrast the old inorganic paradigm (control based) with the new organic (self-organising) perspective. Some important aspects are identified that need attention when attempting to apply this viewpoint to program design, and we also examine how these factors manifest in natural self-organising systems in order to obtain pointers for the artificial implementation of such ideas. The overall requirements for self-organising computing are considered and we explore some alternative ways of looking at specific problems that may arise. We conclude by asking how these issues relate to a typical modern artificial life simulation and discuss various ways of moving forward in the area of practical contextual computer system design.

1. Introduction

In recent years we have seen considerable activity within the complexity sciences. Much of this has been of a specialist nature, concerned with the investigation of specific problems and the development of experimental models and techniques for dealing with complex systems. Yet the ideas emerging from such studies also have major implications for our thought processes and challenge many of our traditional scientific axioms, especially in relation to the possibility of self-organisation within local and not global contexts [Heylighen, Kauffman, Lucas1997a].

Before we apply these new philosophical ideas to computing it is well to understand both the concepts behind them and how they relate to conventional programming approaches. In many cases it would be fair to say that conventional computing follows the paradigms common to conventional science, ideas that also form a strong part of our social and educational predispositions. Those ideas are deeply ingrained in our belief systems and it is often difficult to see clearly which of them are inapplicable to new modes of thought. Our focus here will assume an ultimate goal of creating an hypothetical artificial lifeform that can exist autonomously in a human environment [Kanada], in other words a self-contained system operating in a wider context.

We will start by outlining the philosophical ideas generally accepted as being involved in complexity thinking, in other words the concepts that differ from our conventional technological approaches, before considering the various types of complexity that can exist and which are studied in the complexity research fields. The most complex of these relates to self-organisation itself in an organic mode of operation, and we will then compare, from a computing viewpoint, the modes of operation of organic and inorganic systems. Moving on to the implications of applying self-organisation concepts to the world around us, we consider how these ideas manifest in typical human scenarios. The computational requirements needed to apply these features to programs are then outlined, followed by the problems that remain and some suggested approaches. We look at how these issues affect Echo, a typical Artificial Life simulation system based upon Complex Adaptive System (CAS) thinking, before outlining ways forward for future research.

2. Complexity Philosophy

Complexity philosophy is an holistic mode of thought and relates to properties of systems that are uncontrolled, nonlinear, coevolutionary, emergent and attractor rich, as well as heterarchical, non-equilibrium, non-standard and non-uniform. Not all these features need be present in all systems, but the most complex cases should include them.

We can summarise the structure of complex systems in an overall heterarchical view (Figure 1) where successively higher levels show a many to many (N:M) structure, as does the overall metasystem.

Figure 1: Heterarchical Hypersystem

Part interactions create emergent modules with new properties. These modules themselves interact as parts at an higher level, and this process leads to the creation of an emergent hierarchical system (the upward causation). The components at each level connect horizontally to form an heterarchy - an evolving web-like network of associations. This combination of hierarchy and heterarchy within a system is called a Dual Network by [Goertzel]. Additionally, systems can have overlapping members at each level (e.g. individuals can belong to many social groups, molecules to many substances, a situation to many models and a model to many situations).

These large-scale interacting emergent systems are called Hyperstructures by [Baas]: groups of interlaced dual networks constrained by downward causality (where emergent properties then govern the parts that formed them [Campbell]). Here we extend these ideas slightly by allowing explicit cross-level interference between systems (e.g. an individual affecting another country overall, a cell affecting an external part). This extended design we call here an Heterarchical Hyperstructure (to reflect the flexible interrelationships between levels typical of human systems). We could also call this three-dimensional structure a CAS Cube (intrasystem, interlevel, intersystem) or a Triple Network.

Natural hyperstructures will typically have thousands of components and connections per system, rather than the few shown here for illustration, so complex systems are generally very high dimensional. Given that a metasystem has such a set of structures, the overall fitness will relate to the interdependent properties at all levels, in other words to the full contextual environment.
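
As an illustrative sketch only (the paper specifies no data structure), such an N:M heterarchical hyperstructure might be held as a graph whose nodes carry a system, a level and an identity, with links permitted within a level, between levels, and across systems; all names and values here are hypothetical.

    from collections import defaultdict

    class Hyperstructure:
        """Nodes are (system, level, id) tuples; links are many-to-many."""
        def __init__(self):
            self.links = defaultdict(set)          # node -> linked nodes

        def link(self, a, b):
            # N:M association: any node may join any number of others
            self.links[a].add(b)
            self.links[b].add(a)

        def neighbours(self, node, level=None):
            # a node's web of associations, optionally filtered by level
            return {n for n in self.links[node]
                    if level is None or n[1] == level}

    h = Hyperstructure()
    h.link(("sysA", 0, 1), ("sysA", 0, 2))   # heterarchical: same level
    h.link(("sysA", 0, 1), ("sysA", 1, 1))   # hierarchical: level above
    h.link(("sysA", 0, 1), ("sysB", 1, 3))   # cross-system interference
    print(h.neighbours(("sysA", 0, 1), level=1))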

3. Types of Complexity

Previous work has identified four classes of complexity [Lucas1999], of which only the last is directly relevant to our focus here. In this more general treatment we will extend these concepts to cover high-dimensional complexity, where in the limit the system is assumed to possess infinitely many components. These four nested complexity types (each later type including the former) are:

Type 1: Static Complexity - Fixed structures, frozen in time

For example the visual complexity of a computer chip or a picture. These relate to fixed point attractors, in other words to Class 1 CA systems [Wolfram], but in multidimensional systems these do not necessarily relate to homogeneity, although they are closed equilibrium systems. This form of complexity is studied by such techniques as Algorithmic Information Theory [Chaitin] and is also common in physics.

Type 2: Dynamic Complexity - Systems with time regularities

This includes such states as planetary orbits, heartbeats, seasons. They are cyclic attractors and relate to Wolfram Class 2 CAs. Multiple cycles may be superimposed in highly complex systems (decomposable by such techniques as Fourier analysis). These closed systems are those conventionally studied in the sciences, where the time regularity gives the repeatability necessary for prediction, and again are equilibrium systems where initial transients have been discarded.

Type 3: Evolving Complexity - Open ended mutation, innovation

This mainly relates to the process of evolution in nature, where a single cell gave rise to an extraordinary diversity of forms and functions (the Linnean taxonomy). Also related are diffusion-limited aggregation and similar branching tree structures. These are historically constrained and form ergodic or strange attractor systems, equivalent to Wolfram Class 3 CAs. They involve searches of state space, but more importantly the creation of new areas of state space - new possibilities produced by new components (and conversely the shrinkage of state space by the destruction of failed options). These are open, non-equilibrium systems and can be regarded as existing on a permanent non-repeatable transient. The high-dimensionality here is embodied in the large populations typically encountered, which taken together ensure evolutionary uniqueness.

Type 4: Self-Organising Complexity - Self-maintaining systems, aware

Operating at the edge of chaos, these systems loop back on themselves in nonlinear ways and generate the rich structure and complex mix of the above attractors associated with Wolfram Class 4 CA systems. This is the advent of autopoiesis: the creation of adaptive self-stabilising organic systems that can swap between the available attractors depending upon external influences, and can also modify and create attractors coevolutionarily (by learning). They differ from the purely evolving category in that state space is canalized by the self-organising nature (downward causation) of their internal emergent processes, so possible functions are self-limiting. These systems occupy dissipative, semi-stable, far-from-equilibrium positions exhibiting the typical power law distribution of events familiar from critical systems at the phase transition [Bak]. They are structurally and organisationally both open and closed, with semi-permeable material and informational membranes allowing the passage of operational triggers driving their attractor modes.
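
As a concrete illustration of the Wolfram classes cited in the four types above, a minimal elementary cellular automaton sketch follows; the rule numbers are commonly cited examples of each class (our choice, not the paper's): rule 250 for Class 1, rule 108 for Class 2, rule 30 for Class 3 and rule 110 for Class 4.

    import random

    def step(cells, rule):
        # new cell value = the bit of 'rule' selected by the 3-cell window
        n = len(cells)
        return [(rule >> (4 * cells[i - 1] + 2 * cells[i]
                          + cells[(i + 1) % n])) & 1 for i in range(n)]

    random.seed(1)
    start = [random.randint(0, 1) for _ in range(64)]
    for rule in (250, 108, 30, 110):     # Classes 1, 2, 3, 4 respectively
        cells = start[:]
        for _ in range(32):
            cells = step(cells, rule)
        print(f"rule {rule:3}:", "".join(".#"[c] for c in cells))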

In this progression from relatively simple static recognition and classification, through predictable and innovative systems, to self-maintaining systems, we encounter increasing environmental awareness (in the sense of perception) by the system - a process taking progressively more information from the environment. It is the ability to evolve such awareness that we wish to capture in using the self-organisation paradigm for computing purposes.

The main additional characteristics of Type 4 aware systems are:

4. Contrasting Inorganic and Organic Computing Modes

Let us now contrast some current inorganic technological approaches with those of the alternative organic paradigm. Note that these divisions are illustrative rather than strict boundaries; many systems will cross them on some of these criteria.

ASPECTS RELATED TO STATIC CONCEPTIONS

Mode             Inorganic      Organic
----             ---------      -------
Construction     Designed       Evolved
Control          Central        Distributed
Interconnection  Hierarchical   Heterarchical
Representation   Symbolic       Relational
Memory           Localised      Distributed
Information      Complete       Partial
Structure        Top down       Bottom up
Search space     Limited        Vast
Values           Simple         Multivariable
View             Isolated       Epistatic

Expanding these somewhat:

Construction: Designed v Evolved

Here we compare systems created to meet a human end or goal with those whose designs appear by trial and error, goal-directed only internally, if at all.

Control: Central v Distributed

There is a main procedure controlling the system in centralised global control but no such prioritising structure in distributed local control. Levels of control may thus be formal, with a single entry point, or informal with multiple entry points corresponding to each agent or to sets of them.

Interconnection: Hierarchical v Heterarchical

There is normally one linear path through tree-like man-made systems (giving a single output), compared to the multi-route web-like nature of natural structures (with multiple outputs).

Representation: Symbolic v Relational

The system can intelligently represent the external world in its data structures (e.g. traditional AI) or may operate directly on the world itself, without internal representation, in a more natural mode (e.g. subsumption architectures [Brooks]).

Memory: Localised v Distributed

Data may be stored in discrete locations (whether in one place or many, e.g. COBOL records) or may be held in an holistic form in the program structure and connections as emergent properties (e.g. neural networks).
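
A minimal sketch of what such holistic storage means in practice, assuming a tiny Hopfield-style network (our example, not the paper's): the stored pattern lives in the whole weight matrix rather than at any single address, and is recalled from a damaged cue.

    pattern = [1, -1, 1, 1, -1, -1, 1, -1]
    n = len(pattern)
    # Hebbian weights: the memory is spread over every connection
    W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
         for i in range(n)]

    cue = pattern[:]
    cue[0], cue[3] = -cue[0], -cue[3]            # corrupt two elements
    for _ in range(5):                           # relax to the attractor
        cue = [1 if sum(W[i][j] * cue[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
    print("recalled correctly:", cue == pattern)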

Information: Complete v Partial

The program may have all the knowledge and operations it needs to fully solve the problem mathematically (within a restricted domain), or may need to rely on partial data and inadequate resources, giving approximate or probabilistic results (sampling or simplification of multidimensional space).

Structure: Top down v Bottom up

Top down relates to a design of a system starting from the overall function and gradually adding detail, whilst bottom up starts with the lowest level parts and by combining them creates an unplanned function.

Search space: Limited v Vast

Options can be limited by design constraints and a well-defined function (with a global optimum or unique solution), or all of state space can be available for potential use (multiple local optima), where we must balance the conflicting rewards of accepting a current limited function against those of searching for a better one (especially important where better options become available with time).
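
The balance described here is the classic exploration/exploitation trade-off; a minimal epsilon-greedy sketch (our illustration, with invented reward values) shows one way a program can keep searching a vast option space while still using its current best choice.

    import random

    random.seed(2)
    true_rewards = [0.3, 0.5, 0.8]      # unknown to the agent
    estimates, counts = [0.0] * 3, [0] * 3
    epsilon = 0.1                       # fraction of trials spent exploring

    for t in range(1000):
        if random.random() < epsilon:
            a = random.randrange(3)                        # explore
        else:
            a = max(range(3), key=lambda i: estimates[i])  # exploit
        r = true_rewards[a] + random.gauss(0, 0.1)         # noisy payoff
        counts[a] += 1
        estimates[a] += (r - estimates[a]) / counts[a]     # running mean
    print("best option found:", max(range(3), key=lambda i: estimates[i]))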

Values: Simple v Multivariable

Artificial systems are usually designed to deal with individual subjects or values (e.g. banking) whereas natural systems may have multiple simultaneously active values (e.g. mind). This is the difference between one-dimensional and multidimensional thinking.

View: Isolated v Epistatic

Isolated systems assume all variables can be treated separately (a reductionist - genecentric view) whilst epistatic ones recognise that the individual solutions interact and need treating as a whole (an holistic - schema view).

ASPECTS RELATED TO DYNAMIC CONCEPTIONS

Mode         Inorganic      Organic
----         ---------      -------
Constraints  Static         Dynamic
Change       Deterministic  Stochastic
Language     Procedural     Production
Operation    Taught         Learning
Interaction  Defined        Coevolutionary
Function     Specified      Fuzzy
Update       Synchronous    Asynchronous
Future       Predictable    Unpredictable
State Space  Ergodic        Partitioned
Causality    Linear         Circular

Expanding these also:

Constraints: Static v Dynamic

Systems may be constrained internally, as in engines, where the static parts have fixed degrees of freedom, or they may be constrained only by external dynamic environmental factors (e.g. natural selection), giving behavioural freedom within certain limits.

Change: Deterministic v Stochastic

The options that can be dealt with may be pre-specified (a fully bivalent transition table, i.e. standard IF..THEN) or may arise by chance (a probabilistic form of interaction, e.g. molecular encounters).

Language: Procedural v Production

The program structures are laid down and inviolate in conventional computing but can themselves be changed in organic style systems (e.g. L-systems, Genetic Programming, Classifier Systems) which incorporate self-modifying abilities.
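
A minimal L-system sketch illustrates the production-language style: the 'program' (the string) is itself rewritten by its rules on every pass. The rules used are Lindenmayer's classic algae system; the choice of example is ours.

    rules = {"A": "AB", "B": "A"}       # production rules

    def rewrite(s):
        # every symbol is replaced by its production in parallel
        return "".join(rules.get(ch, ch) for ch in s)

    s = "A"
    for generation in range(6):
        print(generation, s)
        s = rewrite(s)                  # the program modifies itself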

Operation: Taught v Learning

The system may be instructed to follow well known and fixed processes (or algorithms) or may be able to create these processes itself (as humans do by discovery). The system can be designed to cope with errors (unexpected data) by rejection or expected to change itself to adapt to any new data.

Interaction: Defined v Coevolutionary

The agents may have specified connectivity or this may itself evolve contextually in a flexible manner, both in terms of spatial and temporal information transfer.

Function: Specified v Fuzzy

Mechanical systems are intended to perform a well defined function (e.g. process a cheque), organic ones perform a function that may be contextually free to change (e.g. our human goals).

Update: Synchronous v Asynchronous

The system may operate in an ordered way, step by step, or all its parts may operate independently at their own speeds.
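
A minimal sketch of the contrast (our construction, using an invented majority rule): synchronous update computes every new state from a frozen snapshot, whilst asynchronous update lets each part act in its own random order on the live state, generally giving different results.

    import random

    def majority(cells, i):
        n = len(cells)
        return int(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2)

    random.seed(3)
    cells = [random.randint(0, 1) for _ in range(10)]

    snapshot = cells[:]                               # synchronous step:
    sync = [majority(snapshot, i) for i in range(10)] # all see old values

    asyn = cells[:]                                   # asynchronous step:
    for i in random.sample(range(10), 10):
        asyn[i] = majority(asyn, i)                   # each sees fresh values

    print("sync :", sync)
    print("async:", asyn)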

Future: Predictable v Unpredictable

The performance may allow the system to be taken for granted as consistent or it may throw the occasional surprise with a leap to a new attractor structure.

State Space: Ergodic v Partitioned

The system is assumed to function for all possible inputs, i.e. is globally applicable, or may change itself to operate only in limited canalized contexts (i.e. specialising).

Causality: Linear v Circular

The program flow may be from input to output in a well specified manner or partial output may redefine the inputs along the way by cybernetic feedback paths.

5. Organic Application Implications

These aspects relate mostly to connectivity, the idea that, unlike traditional reductionist approaches, we must consider the interactions between parts as being more crucial to their behaviour than is their composition. The ability to control connectivity allows systems to adopt positions between static and chaotic phases, the edge-of-chaos state that maximises adaptability or information processing ability [Langton].

6. Complexity Theory

Putting some of the main points together we can arrive at a definition of what sort of theory we are proposing by using complexity thinking. This scientific theory both helps to classify the nature of organic systems and predicts what we must do in general terms to create artificial equivalents:

Complexity Theory:

Critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties.

The elements of this definition relate to the following:

This theory, or something like it, lies behind much work in the complexity sciences and it is qualitatively well supported by experiments and discussions in both natural [Bak, Goodwin, Maturana, Nicolis] and artificial systems [Epstein, Fontana, Langton, Ray], with increasing quantitative support being developed [e.g. Kauffman]. In biological terms we can relate this mode of operation to a contextual living system (Figure 2).

Figure 2: Organic Loop

In this causality loop the section from Genotype to 'emergent' Phenotype forms the metabolism of the organism - the 'critically interacting' autopoiesis stage, with Building blocks as the 'components', whilst the Variation and Selection sections include both the cross-generational 'potentially evolving' stage and the equivalent action of mind (where the genotype relates to a distributed neuronal connectivity specification). The symbolic, self-organizing semantic and pragmatic components together are in overall coevolution within an hyperstructure, maintaining the system at the phase boundary - neither static (dead) nor chaotic (disintegrating).

7. Natural Self-Organising Systems

Let us look now at some examples of life based self-organising systems and see if we can extract any lessons for applications in the computing field.

Family - Dynamic connectivity is important

Social laws generally do not apply within a family, people adapt to each other and compromise to achieve results. This adaptability includes the amount of contact made with others (self-organising to an optimum for each person) and this requires the ability to establish and break connections at will. Without this possibility we get on one another's nerves (due to mutual interference) or can't progress problems (due to the unavailability of family members).

Committee - Coevolutionary attractors are changed by learning

Strangers fight for supremacy in the initial meetings, but given a need for decisions this eventually creates a semi-stable working arrangement that nevertheless can achieve poor results. The classification of the environment can only utilise the available attractors. Without learning by mutual information exchange we cannot alter our attractor structure so as to enable better optima to be reached.

Company - Decentralised innovation is possible in CAS

Complex Adaptive Systems (CAS) are often proposed as models of how self-organisational ideas can be applied to improve business success and survival [Sherman]. Such systems give freedom to the parts to explore state space, to innovate, and this presupposes the absence of that centralised control and structural rigidity that we often assume to be essential in social organisations. In fact such innovation requires that the centrally imposed rules are broken.

Politics - Diverse approaches are acceptable

Different values amongst different groups lead to conflicting proposed solutions, attractor structures that often seem incompatible. Standardisation would reduce fitness for many, so diversity in operation seems essential if we are to approach any overall optimum. This relates to many alternative ways of doing things, so that suitable compromises or niches [Horn] can be reached in different situations - many paths to one end.

Ecologies - Niche optimisation implies new specialisms

Niche behaviour allows many different needs to coexist. This implies multiple values, and is a form of division of labour in which the creatures do not each try to maximise every value but individually optimise a limited number. This suggests we should have limited goals and look for temporary local answers in an incremental way rather than concentrating on future global utopian solutions (which will not persist due to coevolution).

Psychology of Mind - Look ahead and consistency modelling are advantageous

Alternative scenarios are often generated and evaluated before we act. This emphasises the efficiency of offline generate-and-test techniques in evaluating the best option before presenting it to the environment for coevolution. But it also implies a tendency to adopt self-supporting interpretations, an internal consistency or operational closure that can mismatch real adaptive needs [Goertzel] and lead to delusional errors - poor local optima rather than global ones. We need to ensure consistency in the global coevolutionary and social contexts also.

8. Self-Organizing Computation Requirements

Let us look now at some goals for applying self-organisation to computing, looking here not at state-of-the-art achievements but at the idea of having a machine that behaves as well as an average higher organism. This replaces the idea of duplicating our human cognitive facilities by the notion that meaning is embodied in situated animal sign exchange [Brier].

9. Potential Problems

The achievement of the above goals in a computer program context generates many problems and we can suggest some approaches.

Environmental Robustness - Interface dimensionality methodologies

Robustness relates to preventing the system from disintegrating over time, and to the necessary compromise between its ability to correlate with the immediate environment and its need to maintain its structure as a system. If the environmental coupling is too tight the system will become unpredictable (trying to respond to too many perturbations); conversely, if too loose, the system will become unresponsive, settled into a single attractor. We thus need either to explicitly define the dimensionality of our interfaces or to provide methods for this to evolve.

Predictability - Human acceptability and norm redefinition

Humans need to be able to have confidence in a system, and this concerns being able to understand and relate to its behaviour. Systems that do unpredictable things can only be allowed in situations where that is acceptable to the users, and this excludes very many social situations where conformance to norms is expected - evolved computers will have no such norms, and thus we may need to change our own social expectations of machines instead.

Real World - Vague definitions and emotional acceptance

Many real world tasks are fuzzy problems, ill-defined scenarios that relate badly to typical academic research simplifications. If we are to generate genuinely useful adaptive programs then these will need to perform in noisy and sub-optimal environments which abound in conflicting and emotional goals. We need to understand this human environment much better than we do presently (where emotions are generally ignored) [Fell].

Performance - Transient operation in place of equilibrium

The time taken to adjust to new situations will be crucial to the satisfactory performance of new organic technologies. We do not have aeons to evolve solutions and must better understand how brains learn (from single examples) with low performance parts if we are to succeed in real-time optimisation. To obtain good performance we may need to use transient (short-lived) attractors due to coevolution time constraints [Lucas1997b].

Repair - Correctability and psychological control for machines

Our ability to correct inappropriately evolved systems may be crucial if these are released into the real world, due to the dangers posed by free format adaptation (there are shades of Asimov's Laws of Robotics here). There may be a need for a form of psychological counselling for errant robots, ways of redefining their internal operation by external means.

Evaluation Function - Appropriateness and multilevel intuition

We need a better evaluation technique for multidimensional systems, one more in tune with how we ourselves do this in, as yet, poorly understood intuitive ways - a fast holistic mode. This evaluation needs to include multiple levels and not just the single level optimisation (internal genetic or phenotypic) often seen, and should both take into account the contextual (associative search) nature of solutions and the multidimensional nature of rewards or needs.

Epistasis - Anticipating feedback responses from user

The high significance of nonlinear interactions between variables makes evaluation difficult across system boundaries. The program needs to anticipate environmental reaction (e.g. provide look ahead as in chess programs) to avoid myopic counterproductive 'solutions' being proposed that neglect the user's likely response. This relates to implementing the predictive mode common to science and then monitoring the results, an unsupervised, reinforcement learning mode involving cycles of evaluation and improvement [Sutton].
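
A minimal sketch of this look-ahead idea, with wholly invented payoff numbers: the myopic choice maximises immediate reward, whilst the anticipatory choice scores each candidate action by its worst outcome over the user's likely responses.

    immediate = {"A": 5, "B": 3}
    # payoff to the system after each possible user response (invented)
    after_response = {"A": {"accept": 5, "reject": -4},
                      "B": {"accept": 3, "reject": 1}}

    def myopic(actions):
        return max(actions, key=lambda a: immediate[a])

    def look_ahead(actions):
        # anticipate the environment: judge by the worst likely outcome
        return max(actions, key=lambda a: min(after_response[a].values()))

    print("myopic choice    :", myopic(["A", "B"]))      # A, but brittle
    print("look-ahead choice:", look_ahead(["A", "B"]))  # B, robust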

Multiple Matching - Prioritisation of needs hierarchy

Parallel operation implies that more than one rule may be simultaneously active, thus a prioritisation scheme may be necessary (subsumption style). Whether we try to evolve this (difficult) or impose it implies a conflict between unconscious parallel operation and an explicit design, consciously imposed.
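
A minimal subsumption-style sketch (behaviours invented for illustration): several rules may be simultaneously active, and an explicitly imposed layer ordering decides which one acts.

    def avoid(sensors):
        return "turn away" if sensors["obstacle"] else None

    def seek_food(sensors):
        return "approach food" if sensors["food"] else None

    def wander(sensors):
        return "wander"                    # default, always active

    layers = [avoid, seek_food, wander]    # highest priority first

    def act(sensors):
        for behaviour in layers:
            action = behaviour(sensors)
            if action is not None:         # higher layers subsume lower
                return action

    print(act({"obstacle": True, "food": True}))    # -> turn away
    print(act({"obstacle": False, "food": True}))   # -> approach food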

Unknown Function - Emergent abilities uncorrelated to human expectations

Being environmentally driven means that we cannot say what function the system will evolve to meet. It may well be a different one than was intended, since a general evolutionary system will be epistemically autonomous [Cariani]. The evolutionary stable states (ESSs) available to the system cannot be known in advance for unpredictable environments. We need techniques with which to constrain system functions to just those desired, to encourage appropriate emergence, and this implies that performance measures or rewards must still be specified by humans.

Modifiability - Customisation needs externalised

Many current systems are customised for particular clients. This facility may be hard to incorporate in adaptive systems and may require providing appropriate external constraints with which to evolve corporate identities. The idea of fixed ways of doing things is alien to adaptive techniques and thus the whole concept of group identity may need to be discarded in the long term.

Fault Tolerance - Redundancy costs or disposability

Provision for self-repair or redundancy may be necessary and this seems to be better included early in the design [Thompson], yet this may need many generations to evolve and involve many failure costs along the way. Trade-offs between survival and cost may be necessary, perhaps leading to disposable and recyclable programs and robots.

Computation - Processor limitations constrain available goals

Unlike natural systems, we need to compute self-organisation (evaluating transition rules, for example) and not just let it happen physically. This will have major performance implications unless we can find another way to add parallel processing power. We also need to define the operational envelope, our functional limits, to avoid trying to over-engineer, and this implies restricting ourselves to simplified systems (no androids).

10. Current Artificial Life Systems

Many of the approaches to investigating complex and self-organising systems adopt the agent based ideas common to artificial life and it is worthwhile to consider here how a typical approach of this type (we will use the Echo system [Hraber]) corresponds to the criteria we have outlined.

Echo is a modern Complex Adaptive System simulator, an auto-adaptive genetic system similar to Tierra but employing more sophisticated interactions and with a fitness measure that goes beyond just reproduction. In the standard configuration this fitness measure relates to the ability of the agents to perform environmental interactions and thus relates to our concern in this workshop.

These systems incorporate many of the features often proposed for complex adaptive systems and which are included in our list of the properties of complexity philosophy. Those features incorporated by Echo are:

  • Autonomous Agents
  • Emergence
  • Dissipation
  • Rule Diversity
  • Unpredictability
  • Mutability
  • Nonlinearity
  • Coevolution
  • Heterogeneity
  • Edge of Chaos
  • Punctuated Equilibria
  • Reproduction

Comparing the system with our full criteria for self-organising complexity, however, highlights a number of inherent limitations:

Self-Modification - Connectivity between agents is fixed

Connectivity is also local and probabilistic. Since it cannot evolve, the adaptability and semantics that prove possible with dynamic connectivity and emergent attractors are precluded.

Values - Pre-specified tag resources only

These are imposed directly, not by the evolutionary emergence that would be necessary if the system were to adapt to changes in requirements and innovation. This restriction is typical in ALife systems and relates to the genome coding adopted; it precludes phenotypic categorisation changes.

Function - Pre-specified interaction types (combat, trade, mating)

These are based upon specific uses of the inherent values and this restriction is again typical, relating to the explicit fitness evaluation functions that tend to be necessary. This restriction precludes emergent approaches to functional balance and operational priorities.

Development - Growth excluded, no self-organisation of phenotype

No hierarchical development seems possible, the most that emerges is at the species level and as the phenotype and genotype are linearly related this is really a single level. Emergence may need to be grounded in a stable higher level arrangement before this can in turn generate further hierarchical levels.

External Coupling - Self-contained environment, no real world involvement

This is an isolated system, so selection also must be internal and does not relate to any human needs. In addition, typically the environment is simplistic and does not mimic the richness and structural plasticity of natural environments, severely restricting the possible classification types that could emerge under structural coupling [Quick].

Modelling - Multigenerational evolution, no learning

Long optimisation times are needed due to the multigenerational computation required before the system settles to a functional balance. No agent memory is available to permit varying contextual attractors, nor to allow offline evaluation of options, so short term dynamic evolution is not enabled here.

Autopoiesis - Agents too simplistic to support self-maintenance

Such short genomes and associated simplistic phenotypes seem inadequate to provide the complexity needed to allow the development of the autocatalysis processes, at agent level, necessary to implement full Type 4 Self-organising Complexity. Additionally, no mechanism that could implement dissipative self-production is apparent.

Whilst these sorts of systems have a major role in evaluating possibilities and studying coevolution, in general they fall well short of the sort of situated and flexible systems necessary for an hybrid computer/human environment.

11. Ways Forward

Let us suggest a few speculative research directions:

Parallel solutions - Internal selection of clones

This idea (thought to be how the brain works [Calvin]) relates to having multiple competing solutions, cloned by recruitment and resonance. It uses an internal fitness measure related to probabilistic pattern matching, in that the most numerous clone wins (the strongest chorus). These solutions change dynamically and follow all Darwinian principles, also resembling magnetic domains or spin glasses. The scheme is inherently multidimensional in that the strongest overall match entrains the most clones. Learning is akin to self-organising maps, with patterns sculpted by forced cycles adjusting connectivity in multiple overlapping attractors.
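
A minimal sketch of this 'strongest chorus' idea, with all details invented: candidate solutions clone themselves in proportion to how well they match the input, occasional copying errors supply variation, and the most numerous clone wins.

    import random
    from collections import Counter

    random.seed(4)
    target = [1, 0, 1, 1, 0, 1, 0, 0]            # the input to match
    population = [[random.randint(0, 1) for _ in range(8)]
                  for _ in range(30)]

    def match(c):
        return sum(a == b for a, b in zip(c, target))

    for _ in range(20):
        # recruitment: better matches clone themselves more often
        weights = [match(c) ** 2 + 1 for c in population]
        population = [c[:] for c in random.choices(population, weights, k=30)]
        for c in population:                      # occasional copying errors
            if random.random() < 0.1:
                c[random.randrange(8)] ^= 1

    winner, clones = Counter(map(tuple, population)).most_common(1)[0]
    print("winning chorus:", list(winner), "clones:", clones)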

System Diversity - External niche competition

We here suggest multiple modular programs, each optimised and competing in parallel for a different environmental sub-problem or niche. This idea mimics economic competition, and allows the user to directly choose which module combinations best meet their needs. Each program would make offers to the user of benefits and the corresponding costs and thus their relative successes would correlate automatically to user demand profiles. This relates to the work of [McFarland]. Unlike traditional take-it-or-leave-it packages, this approach maintains maximum openness and flexibility.

Nested POE Models - Multilevel organic loops

In this viewpoint we can regard the different biological processes as reflecting organic loops (Figure 2) operating over different timescales and at different structural levels. These correspond to change at the level of organisms (Phylogenetic), of cells (Ontogenetic), and of synapse connectivity (Epigenetic). This allows the same routines to be used for models incorporating all three levels and also allows combinations of modes to be used within each level - corresponding to the POE space envisioned by [Sipper].

Self-Regulation - Contextual metabolism

Here we invoke a metabolic technique that catalyses operation at the transcription stage, depending upon local context. Our genome is multipurpose, as in real biological systems; not all genes are active in any one situation. The editing (syntax) procedures usually missing from artificial systems add another layer between genome and building blocks (Figure 2) - this relates to the Contextual GAs of [Rocha]. This pre-processing can allow much greater flexibility for any genome, and permits true contextual multimode operation to be implemented with the generation of tissue types. Adding tags mimicking cell adhesion molecules should allow symbiotic relationships to form higher level structures.
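
A minimal sketch of such contextual transcription, with invented tags and genes: a single multipurpose genome expresses different behaviours because an editing stage activates only the genes whose context tag matches the local situation.

    genome = [("dark",   "emit_light"),
              ("dark",   "slow_down"),
              ("bright", "seek_shade"),
              ("any",    "metabolise")]

    def transcribe(genome, context):
        # the editing layer between genome and building blocks:
        # only context-matched genes are expressed
        return [gene for tag, gene in genome
                if tag == context or tag == "any"]

    print("dark cell  :", transcribe(genome, "dark"))
    print("bright cell:", transcribe(genome, "bright"))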

Epistasis - Contextual development

The N:M relation between DNA and Protein (many equivalent genes, many shapes for one protein), allows compression of the genome (removing redundancy and making use of 'order for free'). But this reduces the search space available to the system and may need new approaches making use of additional local contextual information to specify the actual self-organising result. This relates, in Boolean Network terms, to the genome specifying connectivity and to the environment specifying the starting states. This dynamic mode swapping may help to implement a genetic prioritisation or subsumption architecture.

Hormonal Loops - Contextual connectivity

In brains hormones provide global regulation of neuronal activity by affecting neurotransmitter levels. We can model this also at physical (temperature), cellular (enzyme), social (information), and ecosystem (resource density) levels to design systems that cybernetically stabilise edge-of-chaos. Since innovations require more chaos and stability more order, this can also be used to dynamically regulate adaptability - a form of threshold control.
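
A minimal sketch of such hormone-like regulation (set point and gain invented): a single broadcast threshold is adjusted by negative feedback so that overall activity is held near a chosen point between frozen order and chaos.

    import random

    random.seed(5)
    threshold, target = 0.5, 0.5           # global 'hormone' and set point
    for step in range(50):
        drives = [random.random() for _ in range(100)]
        activity = sum(d > threshold for d in drives) / 100
        # negative feedback: too much activity raises the threshold
        threshold += 0.1 * (activity - target)
    print("threshold:", round(threshold, 2),
          "activity:", round(activity, 2))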

Mixed Models - Hybrid analogue/digital construction

This combines an analogue self-organizing development scheme based upon non-computational techniques (the physical forming of the phenotype in hardware from building blocks - or a simulation thereof), with a digital perturbation engine, giving alternative production rules which specify and write the initial conditions to hardware. This latter system is perturbed by mutation in GA fashion to achieve variation for later selection. This seems to be an extension of the GA plus Self-Organisation work of [Dellaert & Beer] but using hardware to implement the cellular development in order to reduce computational needs.

Multioption - Internal offline modelling

In this method we need to generate and test multiple models offline before use. This corresponds to traditional AI symbolic intelligence in taking into account look ahead in evaluating the options available to the user, but instead of looking for local (machine) advantage (as in chess) we require here a policy to maximise user advantage. The total representation of the world in traditional data structures is however rejected, in favour of an hybrid combination of distributed (parallel) associations along with serial label manipulation [Sperduti], mimicking the combination of mostly unconscious association plus conscious direction familiar from our human behaviour.

Cooperation - Interactions based on mutual benefit

A competitive bias is evident in many simulations, putting combat first as in Echo, whereas (contrary to the claims) this is a last resort for both animals and humans if fitness is to be maximised in a multidimensional environment. Fitness is enhanced by creativity (positive-sum actions including trade) and reduced by conflict (negative-sum - hence the social laws prohibiting it). The zero-sum (resource swapping) mode often employed in models is unrealistic and almost never exists socially or biologically. Basing techniques instead on forms of mutual aid is expected to improve emergent features, since combined structures are then encouraged and not destroyed [Watson].

Incremental - Forgetfulness and dynamic categorisation

In this option we mutate and replace one option at a time in a population, so that older less used options are gradually replaced by newer variants. This relates to the schema ideas of [Holland] and by mutating options allows new categorisation to be created experimentally (in the same way that a child learns to generalise). In this proposal however a population of category attractors is maintained whose basin of attraction sizes depend upon their successes. Unlike schemas, these are recurrent attractors more in the neural network mode than classifiers.

Nanotechnology - DNA computing

This uses recent techniques that employ real-world organic components instead of silicon to mimic natural building blocks, often with massive parallelism at the molecular level to help solve difficult NP-complete problems [Adleman]. In our context it can be envisioned as a method of optimising the epistatic multivariable evolutionary interface problems that will occur in trying to relate adaptive programs to the complex environment in which they are intended to operate. Engineering organic computers of this sort reduces macrosystems to microsystems, with resultant step increases in speed and parallelism. Feasibility for interactive computing is however unknown.

Lamarckian Operation - Learning across generations

As Darwinian evolution depends upon selection by death, and such deaths are not normally a feature of artificial systems, we can reject the need for genetic reproduction altogether and instead use systems that evolve in more Lamarckian ways [Ackley and Littman], passing environmental knowledge on directly, e.g. dividing the system asexually, cloning a total system from new parts, or generally duplicating knowledge structures in direct ways. These sorts of techniques can include externalised shared data (online style books), environmentally situated data (e.g. bridges) or retainable memory (e.g. EPROMs).
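
A minimal sketch of the contrast, with invented structures: a Darwinian offspring restarts from the parent's genome alone, whereas a Lamarckian clone copies the parent's acquired knowledge directly.

    import copy

    class Agent:
        def __init__(self, genome):
            self.genome = genome
            self.knowledge = {}            # acquired during its lifetime

        def learn(self, key, value):
            self.knowledge[key] = value

    parent = Agent(genome=[1, 0, 1])
    parent.learn("bridge_at", (3, 4))      # environmentally situated data

    darwinian = Agent(genome=parent.genome[:])    # knowledge is lost
    lamarckian = copy.deepcopy(parent)            # knowledge passed on

    print("darwinian knows :", darwinian.knowledge)
    print("lamarckian knows:", lamarckian.knowledge)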

12. Conclusion

This overview has highlighted many differences between conventional computing approaches and those derived from an organic viewpoint. Most of these differences have been poorly addressed so far, especially in combinations where epistatic interactions (compromise solutions) are important. We are, however, here trying to duplicate four billion years of natural evolution and have not yet separated those aspects that are essential from those only contingent. Many current models are employed, but in general each abstracts only a few limited properties for evaluation, and none come very close to incorporating the full Type 4 multilevel complexity common to natural self-organising and self-maintaining systems [Ziemke].

We need to understand and make use of self-organisational shortcuts and especially to consider the metabolic contextual implications of situated self-organisation, concentrating less on the genetic building blocks and more on their internal interactions. The connectivity approach used in Complexity Philosophy is appropriate to this view. Some work has started in trying to take these issues into account [e.g. Kennedy] but a great deal still needs to be done before we are able to grow adequate and resilient adaptive programs for real-world application in unrestricted domains.

13. References
