Beyond seeing is believing

Posted by Peter Cochrane on June 28, 2010

28 June 2010, Peter Cochrane, BCS EVA Conference, London, 5-7 July 2010, Proceedings pp. 23-25

1. PROLOGUE
Art, science, mathematics, technology, engineering and medicine became concentrated in silos around the time of the Reformation. This accelerated the individual disciplines whilst holding back the benefits to be gained from linking them. Today they are coming back together as a much-needed force for innovation and understanding at a time when we are being progressively overtaken by vast amounts of data. This in turn demands complex combinations of modelling and visualisation to engender understanding, establish confidence and aid decision-making.

To put human progress in perspective it is worth remembering that all of our achievements up to the industrial revolution were eclipsed in the first 200 years immediately after. Similarly, the information revolution eclipsed this progress in just 20 years. This is the nature of exponential progress - machines breed better and more powerful machines, but in a much shorter time. This powers further discovery and leads to more information and understanding. Human knowledge is now estimated to double every 2 years. Unfortunately, our wisdom seems to progress at a somewhat slower pace!

Whilst visualisation is the fastest way for us to assimilate and understand complex situations, we remain limited in our ability to comprehend dynamic multi-dimensional situations beyond four or five dimensions. This dictates the use of additional machine support and more senses than sight alone. Animation, modelling and haptic interfaces are now at the forefront of our efforts to come to terms with complexity.

Here we sample some of the tried and tested visualisation techniques, then go on to examine the rise of Artificial Intelligence (AI) and how it might further aid our efforts to understand the ever more complex scenarios we face today and will face in the future. At the cusp of this work is the very understanding of intelligence itself.

2. TECHNOLOGY PERSPECTIVES
In the 1975-95 period all the technologies of visualisation we enjoy today were in their infancy. Electronic displays, Virtual Reality (VR), Augmented Reality (AR), 3D, stereo and surround sound, tactile gloves and haptic feedback all existed. But it was all basic, big, power hungry, expensive and the preserve of corporate, government and university laboratories with sizeable budgets. Since 1995 technology advances have seen huge improvements in resolution, fidelity, sensitivity, power consumption, size and weight, and dramatic reductions in cost.

In addition, computing power and memory have grown exponentially, with optical fibre and wireless connectivity becoming near ubiquitous. There has also been a realisation that networking is a primary mechanism for collaboration and advance. Further developments of note are those in humanoid and non-humanoid robotics, telepresence, artificial life, modelling and games, 3D replicators, nano-structures, bio-manipulation and design, plus of course the rise in machine-based intelligence.

However, the word 'visualisation' conjures up TV or PC screens, VR headsets and CAVEs. We conjecture that we need more, because human awareness mostly involves five senses and not one! At the very least we can engineer three of these today: sight, sound and touch. The remaining two are not impossible, but they are difficult and generally have less impact in this field.

There is now one outstanding dimension we should also consider for inclusion. In real world environments most animate objects respond in some way that can be interpreted as 'intelligent'. So the question is: could AI provide further and significant enhancements to the world of visualisation? Certainly the world of computer games and virtual worlds would suggest so!

3. BIG PROBLEMS
The technologies of search, navigation and interaction are now established and understood in the world of the PC and the internet. Their shortcomings are also well documented and the need for further advancement is abundantly clear. What use is a search engine that tells you there are 97,763,400 references on AI, and here are the first 10? Similarly, what use is a computer output to a problem that runs to 120,000 A4 pages? Clearly, we have some way to go.

How could AI help? How about monitoring our activities, and those of any collaborators, in order to narrow searches, present relevant data when we ask, and take pre-emptive action when we do not? And how about identifying individuals and teams working on similar or symbiotic problems, so that we could link up, share results and collaborate? But there is a bigger prize! We also need computer models, situational awareness and decision support.

Whilst building facsimiles of the real and unreal worlds is difficult, expensive and time consuming, AI systems should be able to 'guide our hand'. After all, AI has done so for some time in other spheres, such as designing the chips, boards and wiring of the very computer I am using to type these words.

4. WHAT KIND OF INTELLIGENCE?
When Garry Kasparov was defeated by Deep Blue (1997), he, and the world of chess, were outraged. Sound bites included: 'something strange is going on; it didn't play a regular game of chess; it didn't play like a human; it didn't play fair' and so on. No one asked the most important question: how did it win? The key here was that a new intelligence had entered the game: a powerful computer that didn't, or couldn't, think like us - and nor should it, because it was bringing something new, a new dimension, a new way of solving the problem. That was the value of Deep Blue - a new approach.

Today we are constantly surprised by AI systems and the answers they contrive, and on many occasions we lack the facility to fully understand how those answers were reached. But that does not preclude us from using the results! We have gradually realised that the solution of many industrial, scientific and governmental problems will continue to defy human abilities.

So far our partnership with machines has proved profitable, and what lies before us is an even richer future, where the combination of AI with Artificial Life (AL) will most likely see the spontaneous creation of new intelligences.

5. DEFINING INTELLIGENCE?
There are well over 100 published definitions of intelligence. Unfortunately, none of them provides any real understanding, enlightenment, or an iota of quantification. Worse, the long-established IQ measure devised by Alfred Binet (1904) is both a flawed concept and a really unhelpful idea in this instance.

In engineering a commonly used comparison technique is to count the number of processors and interconnects, and then multiply the two to create a single figure of merit. This seems far too simplistic to be meaningful and does not reflect any notion of intelligence. In fact, estimates of machine intelligence on this basis would suggest that HAL 9000 (of the film 2001: A Space Odyssey) should be alive and well by now, but clearly he is not! Recent research has shown that individual neurons are not mere 'on-off switches' with synapses that discharge into a network of fixed interconnections, but individually intelligent entities that dynamically reconfigure.
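To make that naive figure of merit concrete, here is a minimal sketch in Python; the component counts are invented purely for illustration, and the score is nothing more than the product of two numbers, which is precisely why it says so little about intelligence.

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    processors: int      # number of processing elements
    interconnects: int   # number of links between them

def naive_figure_of_merit(system: System) -> int:
    """The simplistic score described above: processors x interconnects."""
    return system.processors * system.interconnects

# Component counts below are invented purely for illustration.
for s in (System("small embedded controller", 4, 64),
          System("large compute cluster", 100_000, 10_000_000)):
    print(f"{s.name}: figure of merit = {naive_figure_of_merit(s):,}")
```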

So all the prior art that assumes a brain scales linearly appears to be flawed; the mechanism has to be far more complex and subtle. To put this in context we can draw on many real-life experiences, observations, signal theory and thermodynamics.

Cutting to the chase, the primary argument goes along the following lines:
(i) Slime moulds and jellyfish (et al.) exhibit intelligent behaviour without a distinct memory or processor - they have 'directly wired' sensors and actuators.
(ii) Our machines have distinct memory and processing entities, but this is seldom so in organic systems where there are overlaps in functionality.
(iii) Whilst intelligent behaviour is possible without memory or processor, and simple sensors and actuators alone can furnish that facility, the converse is not true - sensors plus actuators are a prerequisite to intelligence.
(iv) All forms of intelligence encountered to date invoke state changes in their own, and external, environments with an expansion or compression of the quantity of the original information input. For example, the answer to the question 'why is the sky blue?' would contain far more words than the question itself, whilst the reply to 'do we know why the sky is blue?' would be a simple yes!
So it seems reasonable to assume an entropic measure to account for the reduction or increase in the system information or state change. We therefore define our comparative measure of intelligence as:
The change in entropy: Ic = |Ei - Eo|
where Ei = starting entropy and Eo = completion entropy.
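As a toy illustration of this measure, the sketch below uses word counts as a crude stand-in for information content; that proxy is an assumption made here for illustration only, not part of the definition above.

```python
# Toy illustration: word counts stand in for information content.
# That proxy is an assumption for illustration, not part of the definition.
def information_proxy(text: str) -> int:
    return len(text.split())

examples = [
    ("why is the sky blue",
     "sunlight is scattered by air molecules and the shorter blue "
     "wavelengths are scattered far more strongly than the longer red "
     "ones so the sky appears blue"),
    ("do we know why the sky is blue", "yes"),
]

for question, reply in examples:
    Ei = information_proxy(question)   # starting information
    Eo = information_proxy(reply)      # completion information
    print(f"Ic = |{Ei} - {Eo}| = {abs(Ei - Eo)}")
```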

A reasonably general formula for simple machines results from a combination of analysis and adjustment to meet practical system limitations as follows:
Ic = K log2[1 + A·S(1 + P·M)]

where the parameters S = sensor, A = actuator, P = processor and M = memory are related to weightings of the complex operators involved.
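A minimal sketch of this formula in Python, assuming the four parameters are simple scalar weightings and K = 1; all values used are arbitrary illustrations rather than measurements, and the two prints exercise the limiting cases discussed below.

```python
import math

def comparative_intelligence(S: float, A: float, P: float, M: float,
                             K: float = 1.0) -> float:
    """Ic = K * log2(1 + A*S*(1 + P*M)).

    S, A, P and M are weightings for sensor, actuator, processor and
    memory capability; K is a scaling constant. All values here are
    illustrative assumptions, not measurements."""
    return K * math.log2(1 + A * S * (1 + P * M))

# Sensors and actuators but no processor or memory: Ic is still positive.
print(comparative_intelligence(S=1, A=1, P=0, M=0))    # 1.0
# No sensors at all: Ic collapses to zero regardless of P and M.
print(comparative_intelligence(S=0, A=1, P=10, M=10))  # 0.0
```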

We can now confirm two essential properties: with zero processor and/or memory power intelligence is still possible, but with zero sensor and/or actuator power it is impossible. This is consistent with our life experience and experimental findings.

There is a further observation, one that flies in the face of the conventional wisdom of those who worry about 'The Singularity' - the machines taking over because they outsmart us: the speed of intelligence growth is logarithmic and not linear.

So a 1,000-fold increase in the product of processing and memory (P·M) power only sees intelligence increase by a factor of about 10, by virtue of the log2 function (log2 of 1,000 is roughly 10). A full 1,000,000-fold increase in P·M sees intelligence grow by a factor of only about 20. This is far slower than previously assumed and goes some way to explaining the widening gap between prediction, expectation and reality!
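A quick numerical check of those figures, assuming the sensor and actuator terms are held fixed so that only the P·M contribution varies:

```python
import math

# The log2 term compresses gains in raw computing resources: multiplying
# the P*M product by 1,000 or 1,000,000 raises the term to only ~10 or ~20.
for factor in (1, 1_000, 1_000_000):
    print(f"P*M scaled by {factor:>9,}: log2(1 + {factor:,}) = "
          f"{math.log2(1 + factor):.2f}")
```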

A further observation is that sensors and actuators have largely been neglected to date as important components of intelligence and visualisation. This oversight is one that we should now be addressing as we advance the science and application of both fields.

6. WHAT DOES ALL THIS MEAN?
With the arrival of a myriad of sensor components and their rapid deployment on the periphery of networks, the internet, robotics, and systems large and small, we are much closer to creating true (artificial) intelligence than ever before. And when combined with our existing and established approaches to visualisation, it could result in significant advances in the way we view, experience and react to complex situations.

Interestingly, this will also see a marked change in the way our systems react to us! So if this is only a matter of when, and not if, there is only one question left to ask: will we be smart enough to recognise a new intelligence when it spontaneously erupts on the internet or within some other complex system we build?