
THE WORLD'S HEAVIEST AEROPLANE?
Peter Cochrane

As machines beget ever more powerful machines, the exponential expansion in software size and complexity continues to overtake and swamp them. The remarkable advances in computer hardware speed and storage density, sustained by a doubling of capability every 18 months since 1960 (as predicted by Gordon Moore), are no match for the software explosion. Word processing and other applications that required 0.5 Mbyte of RAM 10 years ago now demand well over 5 Mbyte, and there are many more of them. So today's PowerPCs apparently run slower than a 386 of only a few years ago. In industry the disease seems even worse, with control systems amounting to millions of lines of code. What is happening? Has the software industry lost control? Will it continue simply to consume all future hardware gains, ignore optimisation, and provide ever more complex and unwanted facilities embedded in more and more lines of over-complex code?

At the present rate of software expansion we will need a supercomputer to write an office memo by the year 2010. And this phenomenon is almost universally true of commercial, defence and engineering systems - software just keeps expanding. It is as if we have learned nothing from our decades of working with hardware, where we delight in doing more with less. Superficially, the engineering differences between hardware and software now seem minimal, and the cost of software manufacture is often the greater. So why do we not optimise and worry about software cost and efficiency? Why do we appear to be doing less with more? Are we really trying to build the world's heaviest aeroplane, or is there more to all this?

It could be that software is something so new and complex that it will defy all human effort to analyse and formalise it. Never before have we had to construct systems or tackle problems involving hundreds and thousands of loops and I/O functions. This being the case, we could be in a new realm of the unknowable. Much of our software seems beyond our established mathematical models and techniques - and defies our limited human mental capacity to understand. So what are we to do? Of course we can continue on our present course and suffer a continuing, and probably terminal, slow down. Alternatively, we can pin our hopes on new programming languages that are tighter, lighter weight, smarter and better organised. Perhaps these will see us take the vital step to software building blocks that can be glued together in an understandable and efficient (Lego-like) manner. But then again, perhaps not: software history does not bode well in this direction.

In the physical world, we built bridges of wood and stone and steel, investigated the material properties, and later discovered molecules and atoms. In the software world we seem to have started with the electrons and have yet to discover molecules, let alone the concept of wood, stone and steel. We currently lack any suitable abstractions to form a systematic view, and we know little or nothing of the general properties. So software modules, discrete building blocks, might be the conceptual fix we need. However, progress in this direction has been very slow and there may be a new alternative for many of our future needs that are network, system and information based.

Developments in artificial life systems now see genetic mutation and exchange creating a different richness of solutions. Software that writes itself, in a manner similar to the evolutionary process of life, is now a crude reality. Control systems requiring millions of lines of code have been replaced by dramatically fewer evolutionary lines, and purists now worry about not understanding the way the machines do it. But the truth is we are not all that clever about understanding how we do it either; the complexity is generally well beyond a single human mind. So here is a new world of machine-generated code, where the machines program and learn, and we unknowingly use the tools produced. Most impressively, the machines may soon watch us, learn from our habits as they change, and continually modify the code to meet our requirements.

Self-organisation and chaos are vital ingredients for all carbon-based life. Every living thing exists on the edge of a strange attractor, just a hair's breadth from death, in a risky, fit-for-purpose, non-linear world of weak hierarchies. A world where simple rules beget complex behaviour. Here uncertainty, competition, mutation and reproduction are key to survival and progress. Unless life lives on the edge, it does not live at all. So far, these principles have not been applied to engineered systems, which are largely linear, optimised, strongly hierarchical, non-competitive, and minimise risk through large safety margins and free energy.

It is curious that we are moving in a direction of creating ever more complex software to perform essentially simple tasks. In contrast, nature does the converse, generating unbelievably complex behaviour from incredibly simple software. The difference, of course, is the millions of years nature has been allowed to get it right. We, on the other hand, face much shorter time scales. But simple life systems - worms, ants and bees - have been simulated on modest computers, capturing the major interactions in nests and communities. Some of this work has now moved to practical application as control software for networks and information agents. It also shows much promise for creating a new means of engineering complex systems.

Whilst the underlying software for each entity may be only a few hundred lines of easily understood code, the emergent behaviour of a society of such entities is another matter. This generally defies prediction and is full of surprises. It looks as though systems of this type cannot be engineered from the traditional standpoint of our established methods and principles. We may have to let go of our long-held desire to define and constrain all the outcomes by specifying, designing and testing systems. We may just have to stand back and watch the behaviour emerge and develop.

Exponentially growing communication, mobility and information working are creating an increasingly chaotic (in the mathematical sense) world. The notion that everything can be controlled, ordered and specified in a manner reminiscent of the early days of the telephone network is a grave error. No matter how many people are employed writing software, there will never be enough. Systems will not be able to keep up with the development of applications, peripheral devices, and new modes of human and machine interaction.

Just 20 years ago all telephones were on the end of a wire and static, with users making an average of two or three telephone calls per day at unrelated times. True, there were busy hours, and meal times and tea breaks would see a distinct lack of calls, but by and large calls were governed by random events. This all changed with the arrival of the TV phone-in programme. Someone from Liverpool singing a song on TV could result in half a million people telephoning London to cast a vote for their local hero in the space of 15 minutes. A new world of network chaos was born. With the arrival of the mobile telephone a new phase erupted. Traffic jams, train and plane cancellations all trigger correlated activity - everyone calls home or the office within a few minutes. Naturally enough, cellular systems become overloaded as thousands of people demand to be connected at the same time. So a transition has occurred, from a random world of reasonably distributed events to a highly localised and correlated world of activity triggered by anything causing us to act in unison.
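By way of illustration only - the model below is the crudest possible and every number in it is invented - a few lines of Python show how the same callers who are harmless when they ring at unrelated moments can swamp a fixed number of circuits when a single event makes them ring in unison:

    # Sketch: random versus correlated call arrivals (all figures hypothetical).
    import random

    CIRCUITS = 500                # assumed circuits available per minute
    MINUTES = 60
    CALLERS = 20_000

    # Random world: each caller picks an unrelated minute.
    random_load = [0] * MINUTES
    for _ in range(CALLERS):
        random_load[random.randrange(MINUTES)] += 1

    # Correlated world: a TV phone-in concentrates the calls into ~15 minutes.
    correlated_load = [0] * MINUTES
    for _ in range(CALLERS):
        minute = min(MINUTES - 1, max(0, int(random.gauss(30, 4))))
        correlated_load[minute] += 1

    for name, load in (("random", random_load), ("correlated", correlated_load)):
        blocked = sum(max(0, demand - CIRCUITS) for demand in load)
        print(f"{name:>10}: peak {max(load)} calls/min, {blocked} calls blocked")

The same 20,000 calls are carried comfortably in the first case and heavily blocked in the second.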

For the near future, consider the prospect of network computers. When several of us meet, our low-cost NCs will be plugged into the same line, network or server. At critical times during our discussion, we will all simultaneously attempt to access information or download to distant colleagues. This will be correlated activity with a vengeance, and on a scale that is difficult to contemplate.

Probably the most famous example of correlated activity between machines was the computerisation of the London Stock Market at the Big Bang. Here machines programmed with similar buy and sell algorithms had no delay built in. Shortly after cutting over from human operators to machines, the market went into a synchrony of buy, sell, buy, sell. This is an existence theorem for uncontrolled and catastrophic chaos - it is possible.
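A toy model makes the point. Suppose fifty machines run the identical threshold rule with no delay and no randomness; the price they jointly create never settles (everything below, including the numbers, is hypothetical):

    # Toy sketch: identical, delay-free trading rules locking into buy/sell synchrony.
    FAIR_VALUE = 100.0
    IMPACT = 0.02                 # assumed price move per unit of net order imbalance
    TRADERS = 50

    price = 101.0                 # start slightly above 'fair value'
    for step in range(10):
        # every machine applies the same rule at the same instant
        orders = [(-1 if price > FAIR_VALUE else +1) for _ in range(TRADERS)]
        imbalance = sum(orders)
        price += IMPACT * imbalance
        print(f"step {step}: net orders {imbalance:+d}, price {price:.2f}")

The output simply flips between mass selling and mass buying - a caricature of the real event, but the mechanism is the same: identical algorithms, no damping, no delay.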

Many people equate chaos to randomness, but they are very different. Chaotic systems exhibit patterns, often near cyclic, that can be difficult for us to perceive. Random systems, on the other hand, are totally unpredictable. Curiously, without computers we would know little or nothing about chaos, and yet they may turn out to be the ultimate generators of network chaos on a scale we might not be able to match.
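The distinction is easy to demonstrate. The logistic map below is completely deterministic - no dice anywhere - yet two starting values differing by one part in a billion soon bear no resemblance to one another, whereas the random numbers that follow obey no rule at all (a sketch, nothing more):

    # Sketch: deterministic chaos (the logistic map) versus true randomness.
    import random

    def logistic(x, r=4.0):       # a fully deterministic rule
        return r * x * (1.0 - x)

    a, b = 0.400000000, 0.400000001   # almost identical starting points
    for _ in range(40):
        a, b = logistic(a), logistic(b)
    print(f"after 40 steps: {a:.6f} versus {b:.6f}")   # now utterly different

    print([round(random.random(), 3) for _ in range(5)])  # random: no rule at all

Both look unpredictable in practice, but only the first is following a rule - which is precisely what makes machine-driven networks candidates for chaos rather than mere noise.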

Today the dominant language on planet earth has a binary base. There are now far more conversations between machines each working day than between humans. Their high bit rates mean there is more information transferred between machines in a 24-hour period than has passed between the whole of the human race cumulatively back to the birth of Eve. Very soon there will be more things (machines) able to telecommunicate than people, and we have no idea what patterns of behaviour will emerge. I would bet on chaos - with low averages and massive, correlated peaks.

Anyone who drives on motorways will have experienced traffic waves created by some unseen event ahead. Probably the best place to experience this phenomenon in the UK is on the M25 when, for no apparent reason, the traffic speed can oscillate between 10 and 70 mph for long periods. Sometimes the traffic comes to a complete halt and then lurches forward to 40 mph and back down to 0. This is the classic behaviour of a system of independent entities in a serial queue with a delay between observation and action. In this case the observation might be an accident, a breakdown, or someone driving foolishly. The delay is between our eye, brain and foot. As soon as we see something and reach for the brake pedal, very shortly afterwards so does everyone else, and so the wave starts.
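The mechanism can be caricatured in a few lines of code: cars on a ring road, each setting its speed from the gap it observed a moment ago rather than the gap it has now. One brief dab on the brakes by a single car then ripples backwards from driver to driver (every number here is invented):

    # Sketch: a traffic wave born of the eye-brain-foot delay (all units invented).
    N_CARS, ROAD, VMAX, DELAY = 20, 200.0, 5.0, 3

    pos = [i * ROAD / N_CARS for i in range(N_CARS)]
    gap_history = []                                   # what each driver has already seen

    for t in range(60):
        gaps = [(pos[(i + 1) % N_CARS] - pos[i]) % ROAD for i in range(N_CARS)]
        gap_history.append(gaps)
        observed = gap_history[max(0, len(gap_history) - 1 - DELAY)]
        wanted = [min(VMAX, 0.5 * g) for g in observed]           # drive to the gap you saw
        speeds = [min(w, gaps[i]) for i, w in enumerate(wanted)]  # but never hit the car ahead
        if t == 10:
            speeds[0] = 0.0                            # one car brakes, once, briefly
        pos = [(p + v) % ROAD for p, v in zip(pos, speeds)]
        if t > 10 and min(speeds) < VMAX:
            print(f"t={t:2d}: car {speeds.index(min(speeds))} slowed to {min(speeds):.1f}")

The slow spot works its way backwards through the queue even though car 0 resumes full speed almost immediately.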

There is no doubt about it: rubber-necking when driving a car is very dangerous, but people do it. An accident or incident occurs and people slow down to take a look, and then on the far side they speed up. Strangely, when the incident has been cleared away, the wave that has been set up may last the rest of the day. Whilst the traffic is dense, the wave motion persists long after the event has subsided. The system has an unseen memory - us. Only when the traffic density thins out does the memory fade away. Might we then expect similar phenomena in electronic systems for communication between people and machines?

Packet switching and transmission systems are ideal for the creation of information waves. To date these have largely gone unnoticed because terminal equipment re-orders packets to construct a complete message, file or picture, and end users see nothing of the chaotic action inside networks. But information packets jostle for position and queue for transmission slots in a similar manner to cars entering a busy highway. Only when we try to use such networks for real-time communication do we experience any obvious arrival uncertainties. Our speech sounds strange, with varying delays in the middle of utterances, and moving pictures contain all manner of distortions.
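The effect is easy to reproduce in miniature. Below, a voice stream sending one packet every 20 ms shares a single first-come-first-served link with a burst of data packets; the voice packets leave perfectly spaced and arrive unevenly (all timings are invented for illustration):

    # Sketch: jitter from packets queueing for one link (hypothetical timings in ms).
    import random

    LINK_TIME = 2.0                                    # assumed time to send any packet
    voice_sends = [t * 20.0 for t in range(20)]        # one voice packet every 20 ms
    data_sends = sorted(random.uniform(0, 400) for _ in range(120))

    events = sorted([(t, "voice") for t in voice_sends] +
                    [(t, "data") for t in data_sends])

    link_free, arrivals = 0.0, []
    for send_time, kind in events:                     # a single FIFO queue
        start = max(send_time, link_free)
        link_free = start + LINK_TIME
        if kind == "voice":
            arrivals.append(link_free)

    gaps = [round(b - a, 1) for a, b in zip(arrivals, arrivals[1:])]
    print("voice inter-arrival gaps (ms):", gaps)      # sent 20 ms apart, received unevenly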

Packet systems are fundamentally unsuited to real time communication between people and machines. So why use them? It turns out that for data communication where arrival time is not an issue, they are highly efficient in their use of bandwidth. These systems were born in an era where bandwidth was expensive and they represent an entirely different paradigm for switching and transmission in the telephone network. However, the champions of 'packet everything' always like to tell you that this is the true way for information to be communicated. Curiously they often do this by sending you a single line e-mail message with a 35 line header.

Telecommunications, networking, and almost all resource allocation in computing will increasingly be about having all the desired capability in the wrong place at critical times. Finding information, people, packet and bit routings, or free CPU power represents an increasingly large class of problems looking for a solution. It could be that just throwing cheap bandwidth at the problem and exploiting the vast capacity of optical fibre might offset the need for large and complex software systems. Mother Nature often adopts this tactic, selecting a fit-for-purpose rather than an optimal solution. However, whilst near zero-cost bandwidth is certainly feasible, it is not certain it would solve all our problems, and we do not have the economic mechanisms to realise it anyway.

So, a new line of thinking started with a study of ants. Some of these creatures have only 200 neurons and about 400 lines of code defining the majority of their simple individual behaviour. But their social behaviour, the interaction of thousands, is phenomenally complex, adaptive and resilient. Emulating them proved relatively simple and rewarding, with 400 lines of software resulting in ant-like entities able to seek out and retrieve information on networks. These later became a new breed of information agents! The addition of memory cells, plus communication between agents to compare missions, sites visited and information won, made them extremely versatile and efficient.
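None of that production agent code is reproduced here, but the flavour can be given in a page of Python: wandering agents that reinforce the links of any route which led them to the information, so that later agents tend to follow the successful trails. The network, the goal and every parameter below are invented:

    # Sketch only: ant-like information agents on an invented 30-node network.
    import random

    random.seed(1)
    NODES = 30
    links = {n: random.sample([m for m in range(NODES) if m != n], 4) for n in range(NODES)}
    pheromone = {}                           # (from_node, to_node) -> trail strength
    GOAL = NODES - 1                         # the node holding the sought information

    def run_agent(start=0, max_hops=60):
        node, path = start, []
        for _ in range(max_hops):
            choices = links[node]
            weights = [1.0 + pheromone.get((node, c), 0.0) for c in choices]
            node_next = random.choices(choices, weights=weights)[0]
            path.append((node, node_next))
            node = node_next
            if node == GOAL:
                for hop in path:             # reinforce the successful trail
                    pheromone[hop] = pheromone.get(hop, 0.0) + 1.0
                return len(path)
        return None                          # this agent gave up

    hops = [run_agent() for _ in range(200)]
    found = [h for h in hops if h is not None]
    print(f"information found {len(found)} times out of 200; "
          f"average hops when found: {sum(found) / max(1, len(found)):.1f}")

No individual agent knows the network; the useful routes are held collectively, in the trails.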

An extension of these concepts created new network restoration algorithms necessary to counteract underground cable damage and network equipment failures. The new software amounted to less than 1k of code, constructed and tested in weeks, and replaced 1.6M lines of structured hierarchical code produced over years. But it was still dead, wholly deterministic, with fixed lines of code that did not learn or mutate, a creation of man's hand. It could never learn or evolve - in the strict sense.

Taking a leaf out of nature's book, it is clear that we will increasingly need evolutionary systems to meet growing chaotic demand and operational change. Genetics, sex, mutation and progeny spring to mind, but we cannot afford to wait for millions of years of chance mutation. Looking at carbon systems we see a world dominated by one- and two-sex systems. Two sexes are the most adaptable, complex and intelligent. So we might suppose that sex in software, with the super speed of machines, might suffice. But should we be constrained by nature as to the mechanism and numbers involved, or indeed the nature of the progeny? Probably not. In software there are no constraints whatsoever. Morality and society do not exist to constrain the richness of behaviour.

It turns out that in a stable environment, or a fixed and bounded problem class, two- or three-sex systems seem to dominate, being the most adaptive and speedy at finding solutions. However, as the problem space grows beyond the scope of a two-sex system to mutate and evolve, failure to find a solution becomes an increasingly common occurrence. The solution to this is simple: just increase the number of sexes to fill the problem space. In nature the single-sex systems dominate among the flora and fauna, whilst the two-sex systems are the smartest. But fungi have 10,000 sexes and some insects 7 or 17. In the event of some future cataclysm, guess who dies and who survives!
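As a purely illustrative sketch - a toy problem, and nothing here pretends to be biology - the number of 'sexes' can simply be made a parameter of an evolutionary search, each child drawing its genes from k parents instead of two:

    # Toy sketch: an evolutionary search where each child has k parents, not two.
    import random

    GENOME, POP, GENERATIONS = 40, 60, 80

    def fitness(genome):                     # toy problem: count the 1s
        return sum(genome)

    def child_from(parents):                 # k-parent uniform crossover plus mutation
        genome = [random.choice(parents)[i] for i in range(GENOME)]
        if random.random() < 0.3:
            genome[random.randrange(GENOME)] ^= 1
        return genome

    def evolve(k):
        pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
        for _ in range(GENERATIONS):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:POP // 3]
            pop = elite + [child_from(random.sample(elite, k)) for _ in range(POP - len(elite))]
        return max(map(fitness, pop))

    for k in (2, 3, 5):
        print(f"{k} 'sexes': best fitness {evolve(k)}/{GENOME}")

On a trivial problem like this all three do well; the argument above is that the difference only shows when the problem space outgrows what two-parent mixing can cover.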

We might now envisage a silicon world where learnt and positive behaviour is passed on from one generation to another, in a similar way to its carbon forebears (us). There is then a new degree of freedom unavailable in carbon - progeny by instalments. This might be a new means of avoiding the evolutionary cul-de-sacs that so evidently hamper carbon. When you have evolved to become an elephant you cannot backtrack to become a mouse, no matter how many generations you wait. Progeny by piece parts, many offspring glued together to make the whole, may provide a further degree of freedom and class of solution.

To date it has been demonstrated that such a natural engineering approach can produce viable solutions to the Travelling Salesman Problem that are very low cost and extremely efficient in terms of code and time to converge. It has also been possible to produce systems that sort and prioritise information stacks, and predict and model the behaviour of markets and companies. They also increasingly exhibit remarkable degrees of intelligence, to the point of evolving swimming, and search-and-find behaviour, in screen-based robots. Whilst such software is unlikely to see machines writing their own word processing packages or spreadsheets, these are an insignificant burden for humans compared to the system and network problems we have yet to face.
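For the Travelling Salesman case, the entire evolutionary machinery really can fit in a couple of dozen lines. The sketch below - random cities, invented parameters, and no claim to match the work described above - simply keeps the shortest tours and mutates them by reversing segments:

    # Sketch: an evolutionary attack on the Travelling Salesman Problem.
    import math, random

    random.seed(0)
    CITIES = [(random.random(), random.random()) for _ in range(30)]

    def tour_length(tour):
        return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def mutate(tour):                        # reverse a random segment (2-opt style)
        a, b = sorted(random.sample(range(len(tour)), 2))
        return tour[:a] + tour[a:b + 1][::-1] + tour[b + 1:]

    pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(40)]
    for _ in range(500):
        pop.sort(key=tour_length)            # survival of the shortest
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]

    pop.sort(key=tour_length)
    print(f"shortest tour found: {tour_length(pop[0]):.3f}")

No routing knowledge is coded anywhere; a respectable tour simply emerges from selection and mutation.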

When all of this comes together with noisy decision making, a subtle blending of random uncertainty and chaos, instead of the full determinism of nailed down logic, we may have the right conditions for silicon life. At this point the technology will not require our hand to steer it to find solutions. We will then have to be content to be the spectators of the evolution, and try to understand and decode the outcomes of this new engineering tool.

For the most part people do not understand people, and people do not understand machines. The big question is, will machines understand machines and people, and are we smart enough to spot artificial life when it spontaneously erupts?
