
THE TECHNOLOGY OF TOMORROW
Much of our thinking and expectation in telecommunications and IT is conditioned by a history of radio and copper cables with constrained bandwidths and high distance-related costs; by centralised switching systems and services; by software of massive complexity realising simple functions; and by technologically convenient interfaces. But we now have very low-cost transmission and computing, and as a result we see in prospect: the migration of intelligence to the periphery of networks; total customer control of services; distributed switching; increased mobility; bandwidth on demand; and radical changes in services and hold times. In the next decade we can expect to see all of the established wisdoms, practices and operating regimes overturned and replaced by new modes and concepts: new forms of software and operating regimes, humanised interfaces, reduced computer illiteracy and greater mobility.
1. Prologue
Most telecommunication networks are still designed as if the constraints of the copper and radio past were still with us: compress the signal; save bandwidth; minimise hold times; continue with the old protocols, interfaces and switching architectures - although they are now the new bottlenecks; use the established design rules despite technology changes! The apportionment of operational and economic risks also seems vested in the past and demands realignment. The wholesale reduction in the hardware content of networks relative to the massive expansion of software is a prime example, as is the interface on most telephonic services, which is dictated by a base technology that is now over 100 years old.
The advance of fibre optics has now removed the bandwidth bottleneck and distance-related costs. Integrated electronics has made information processing and storage almost limitless and effectively free, whilst software has introduced the potential for system and network failures on a grand scale. What should we be doing? First, we need to start thinking about and engineering systems, and not hardware, software, networks and applications as if they were disconnected. Second, we need to embrace new tools and techniques to help us understand our increasingly non-linear world. Third, we have to humanise the interfaces and information presentation to facilitate understanding and control.
2. New Networks
In less than 20 years, the longer transmission spans afforded by optical fibre have seen a reduction in the need for switching nodes and repeater stations. The arrival of the optical amplifier and network transparency will accelerate this process and realise further improvements across a broad front, including: reduced component count; improved reliability; reduced power and raw material usage; increased capacity and utility. A further logical (and revolutionary) development will see the concentration of traffic onto a single fibre rather than several fibres in parallel. Optical fibre transparency and Wavelength Division Multiplexing (WDM) offer a more reliable option.
As transmission costs continue to fall, the balance between transmission, switching and software has to be re-addressed. Radical reductions in the number of network nodes, consolidated switches, and reductions in network management and in the constraints imposed by software are good targets. We might also have to move away from local calls that merely span the local area, and see them expand, over some relatively short period, to encompass continents and the whole planet.
Whilst cellular radio technology is now well developed, and satellite mobile will shortly follow, we suffer an apparent lack of spectrum, although we have yet to exploit microwave frequencies above 30 GHz. We also have the ability to take radio off air, up-convert it to optical wavelengths, feed it down fibre made transparent through optical amplification, then down-convert and re-transmit it undistorted at a distant cell. In addition, the performance of free-space optics for the home and office has been demonstrated to be very similar to that of microwaves, but with the advantage of a vastly greater bandwidth. Research systems are already providing pico-cellular illumination of the desk, personal computer and individual staff members, and offer the potential of active badges/communicators and inter-desk/computer links. Local-loop free-space optics might also be anticipated as an alternative to fibre, copper and radio.
Satellite technology has been eclipsed by optical fibre, which now carries over 70% of all international traffic. However, satellite now has a new role in broadcast, in reaching difficult locations, in mobility and in service restoration. The path delay (~300 ms) of geostationary satellites is problematic for real-time communication, but Low Earth Orbit (LEO, ~1000 km) cellular satellite systems - cells in the sky - will give this sector a new lease of life. Other prospects include direct satellite-to-satellite links using free-space optics, on-board signal processing and switching, individually positioned multiple micro-beams, the use of much higher frequencies, and even optical links from satellite to ground. All of this can be expected to extend satellite technology by an order of magnitude (or so) beyond that of today. Fundamentally, satellite systems look set to migrate to mobile, difficult and rapid-access applications.
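As a rough illustration of why orbit height matters, the following sketch compares one-way propagation delays for geostationary and ~1000 km LEO paths (the altitude and speed-of-light figures are textbook values; the ~300 ms quoted above also reflects slant range and ground-segment delays):
C_KM_S = 3.0e5        # speed of light in km/s (textbook value)
GEO_KM = 35_786       # geostationary altitude in km (textbook value)
LEO_KM = 1_000        # LEO altitude assumed in the text above, km
def one_way_delay_ms(altitude_km):
    # up to the satellite and back down to the distant earth station
    return 2 * altitude_km / C_KM_S * 1000
print(f"GEO ~{one_way_delay_ms(GEO_KM):.0f} ms")  # ~240 ms; nearer 300 ms over real slant paths
print(f"LEO ~{one_way_delay_ms(LEO_KM):.0f} ms")  # ~7 ms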
For global mobility the major challenges are likely to remain the organisational and control software necessary to track and bill customers - and the ability to deflect/hold/store calls that are traversing time zones at unsociable hours. A conventionally organised and controlled network cannot fulfil these requirements.
With the increasing multiplicity of carriers, growth in mobility, demand for new types of service, and growing complexity of signalling and control for call and service set-up, as well as the complex nature of bits-on-demand services, it may also be necessary to abandon the practice of charging for bandwidth. We already see network signalling and management overheads approaching 50% of network capacity, with billing costs exceeding 15% of turnover - and growing. The future service trajectories required by customers, carriers and networks cannot sustain such non-essential growth - a new mode of operation is necessary!
3. Switching and Transmission
Only a decade ago, a typical UK repeater station had to accommodate 2.4 Gbit/s of speech circuit capacity. Today it accommodates 6.8 Gbit/s, and projections indicate that 40 Gbit/s will be required by the end of the millennium. This packing density cannot be achieved with conventional electronics alone; another degree of freedom is required - and wavelength is the obvious choice. Recent developments have seen the demonstration of suitable technology such as contactless 'D-type' (leaky feeder) fibres embedded in printed circuit backplanes. When coupled to an Erbium-doped fibre amplifier, a lossless multi-tap facility is realised which can distribute 10 Gbit/s and higher rates almost endlessly. An interesting concept now arises - the notion of the infinite backplane. It could be used to link major cities through the use of optically amplifying fibre that offers total transparency. Such concepts naturally lead to the idea of replacing switches by an optical ether operating in much the same way as radio and satellite systems do today. The difference is the near infinite bandwidth of the optical ether. Demonstrators have already shown that a central office with up to 2M lines could be replaced by an ether system, but suitable optical technology is probably still some 15 years away.
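As a simple illustration of what the wavelength dimension buys, a minimal sketch in Python, assuming an electronic line rate per wavelength (the 2.5 Gbit/s channel figure is an assumption for illustration, not a figure from the text):
target_gbit_s = 40          # projected station capacity quoted above
per_channel_gbit_s = 2.5    # assumed electronic line rate per wavelength
channels = target_gbit_s / per_channel_gbit_s
print(f"{channels:.0f} wavelengths of {per_channel_gbit_s} Gbit/s "
      f"fill the projected {target_gbit_s} Gbit/s on a single fibre")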
4. Signal Coding
Throughout the history of the video telephone, video conferencing and telepresence systems development, a single determination has prevailed. This can generally be classified as the 'copper mind set', in which bandwidth and distance are assumed to be expensive commodities today, and in the future. The majority of effort has therefore been expended in the areas of signal compression and coding for networks that are presumed to be of poor performance, restricted bandwidth, low utility and high price. In reality, the advances made in fibre transmission and wide band switching over the same period are largely negating these constraints, and we can look forward to switched, wide band services at a low price before the end of the century. Extreme coding schemes that compress video down to 2 Mbit/s or less then become questionable.
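To see the scale of compression the copper mind set demands, a minimal sketch comparing studio-quality digital video (ITU-R BT.601 sampling rates, which are standard values rather than figures from the text) with the 2 Mbit/s target mentioned above:
luma_mhz, chroma_mhz, bits = 13.5, 6.75, 8       # ITU-R BT.601 sampling, 8-bit
raw_mbit_s = (luma_mhz + 2 * chroma_mhz) * bits  # ~216 Mbit/s uncompressed
coded_mbit_s = 2                                 # coded rate quoted in the text
print(f"raw ~{raw_mbit_s:.0f} Mbit/s, coded {coded_mbit_s} Mbit/s, "
      f"compression ~{raw_mbit_s / coded_mbit_s:.0f}:1")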
5. Signal Format
ATM is often quoted as the ultimate answer for future bandwidth-on-demand services. In an electronic format this view is likely to be short-lived, as terminal equipment development will overtake the ATM capacity available within the time frame of network provision. It will also become apparent that ATM switching nodes are inefficient when realised on a large scale, as is their network throughput. Other major problems are associated with the limited capacity per customer, coupled with the fixed and variable delays which necessitate complex protocols and signal processing. Interestingly, photonic ATM looks far more attractive as it offers a near infinite capacity! Such a possibility also falls in line with the development of soliton systems.
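For context on the throughput concern, the standard ATM cell format alone carries a fixed header overhead; a minimal sketch (the 64 kbit/s voice channel is an illustrative assumption):
cell_bytes, header_bytes = 53, 5            # standard ATM cell format
payload_bytes = cell_bytes - header_bytes   # 48-byte payload per cell
print(f"header overhead {header_bytes / cell_bytes:.1%}")  # ~9.4%
print(f"64 kbit/s voice needs ~{64 * cell_bytes / payload_bytes:.1f} kbit/s on the line")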
The history of telecommunications has seen a cyclic alternation between digital and analogue formats. Computation has also followed an alternating analogue - digital history. We might thus pre-suppose that this alternation will continue into the future, and may even be encouraged by the inherent qualities of optical fibre, network transparency, ether nets and photonic computing. Perhaps the future will not be all-digital!
6. Software
In the software domain, very minor things pose a considerable risk, and that risk appears to be growing exponentially. New ways of negating this increasing risk are necessary, as the present trajectory looks unsustainable in the long term. Perversely, the unreliability of hardware is coming down rapidly whilst that of software is increasing, so much so that we are now seeing sub-optimal system and network solutions. From any engineering perspective this growing imbalance needs to be addressed. If it is not, we can expect to suffer an increasing number of ever more dramatic failures.
It is somewhat remarkable that we should pursue a trajectory of developing ever more complex software to do increasingly simple things. This is especially so when we are surrounded by organisms (moulds and insects!) that have the ability to perform complex co-operative tasks on the basis of very little (or no) software. An ant colony is one example, where very simple rule-sets and a computer of ~200 (English garden ant) to 2000 (Patagonian ant) neurons are capable of incredibly complex behaviour. In recent studies, the ANT (Autonomous Network Telepher) has been configured as a new form of network control and management element. These realise considerable advantages over conventional software. For network restoration, only 800 lines of ANT code replaced the >10^6 lines previously used in an operational network. Software on this scale (<1000 lines) is within the grasp of the designer's full understanding, and takes only a few days for a one-man team to write and test!
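A minimal toy sketch of the kind of simple rule-set involved is given below. It is an illustrative ant-style route-repair example in Python, not the ANT code referred to above; the network, node names and rules are invented purely for illustration:
import random
NET = {                       # hypothetical six-node network
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C", "F"], "F": ["D", "E"],
}
SRC, DST = "A", "F"
def release_ants(net, n=500, max_hops=12):
    # Ants wander from SRC to DST; successful walks lay pheromone on each
    # link they used, weighted by 1/length so shorter walks count for more.
    trail = {}
    for _ in range(n):
        node, walk = SRC, [SRC]
        while node != DST and len(walk) <= max_hops:
            fresh = [m for m in net[node] if m not in walk]
            node = random.choice(fresh or net[node])
            walk.append(node)
        if node == DST:
            for a, b in zip(walk, walk[1:]):
                trail[(a, b)] = trail.get((a, b), 0) + 1.0 / len(walk)
    return trail
def route(net, trail):
    # Follow the strongest pheromone trail hop by hop.
    node, path = SRC, [SRC]
    while node != DST:
        node = max(net[node], key=lambda m: trail.get((node, m), 0))
        if node in path:
            return None       # no usable trail has formed yet
        path.append(node)
    return path
print("route:", route(NET, release_ants(NET)))
failed = "D"                  # simulate the failure of node D
broken = {n: [m for m in adj if m != failed]
          for n, adj in NET.items() if n != failed}
print("after failure of D:", route(broken, release_ants(broken)))
The point is not the detail of the code but its size: a handful of local rules and a few dozen lines re-establish a working route after a failure, with no global view of the network.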
Developments in evolutionary software also look set to come to fruition by about the year 2000. At that time we can expect machines to exceed - or at least equal - the human capacity to generate software.
7. Network Management
Monitoring systems and operations, extracting meaningful information, and taking appropriate action to maintain a given grade of service are becoming increasingly complex and expensive. Much of the data generated by networks is redundant, and the complexity of the management role increases in proportion (at least) to the amount of data to be handled. Whilst there are network configurations and modes of operation that realise a fault report rate proportional to N (the number of nodes), the nature of telecommunication networks tends to dictate an N^2 growth. A large national network with thousands of nodes can generate information at rates of ~1 Gbyte/day under normal operating conditions. Clearly, maximising the MTBF and minimising N have to be key objectives. A generally hidden penalty associated with the N^2 growth is the overhead of computer hardware and software, plus transmission and monitoring hardware. For very large networks this is now growing to the point where it is starting to rival the revenue-earning elements!
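A back-of-envelope sketch of what the N versus N^2 scaling means in practice (the report sizes and rates are assumed figures chosen only for illustration):
report_bytes = 200           # assumed size of one fault/status report
reports_per_day = 1          # assumed daily rate per reporting source
for n in (100, 1000, 5000):
    linear = n * reports_per_day * report_bytes
    quadratic = n * n * reports_per_day * report_bytes
    print(f"N={n:5d}: N growth ~{linear / 1e6:.3f} MB/day, "
          f"N^2 growth ~{quadratic / 1e6:.0f} MB/day")
Under these assumed figures a few thousand nodes with N^2 reporting already reach the Gbyte/day regime, whereas linear reporting stays in megabytes.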
8. People & Reliability
The sub-optimal behaviour of systems and networks is commonly masked by maintenance activity, which itself results in performance below specification. In most cases this can be traced back to human intervention - imperfect repair, interference, and incidental damage to or weakening of individual or groups of elements. Models assuming a finite probability of latent faults being introduced by human intervention - that is, repair/maintenance action creating latent faults in the serviced unit or those nearby - reveal an overall network performance reduction of 50% to be a reasonable expectation! This level of weakening is also supported by practical experience across a broad range of equipment, system and network types, including line plant, radio, electronic and photonic transmission, switching and computing hardware.
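A minimal Monte Carlo sketch of this effect, assuming each repair clears the reported fault but plants a latent fault with probability p (the values of p are illustrative, not measured):
import random
def failures_seen(natural_failures=10_000, p=0.3):
    # Each natural failure triggers a repair; each repair may plant a latent
    # fault that surfaces later and itself needs repair (a geometric chain,
    # so the expected total is natural_failures / (1 - p)).
    total = 0
    for _ in range(natural_failures):
        pending = 1
        while pending:
            pending -= 1
            total += 1
            if random.random() < p:
                pending += 1
    return total
for p in (0.0, 0.2, 0.5):
    n = failures_seen(p=p)
    print(f"p={p:.1f}: {n} failures for 10000 natural ones "
          f"(effective MTBF x{10_000 / n:.2f})")
With p around 0.5 the effective MTBF roughly halves, which is broadly consistent with the 50% performance reduction quoted above; lower values of p still produce a significant and largely hidden penalty.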
9. Quantum Effects & Node Reduction
All of our experience of systems and networks to date, coupled with the general development of photonics and electronics, points towards networks of fewer nodes, vastly reduced hardware content, and potentially limitless bandwidth through transparency. With networks of thousands of nodes, failures tend to be localised and isolated - barring software related events! The impact of single or multiple failures is then effectively contained by the 'law of large numbers', with customers experiencing a reasonably uniform grade of service. However, as the number of nodes is reduced, the potential for catastrophic failures increases, with the grade of service seen at the periphery becoming variable. The point at which such effects become apparent depends on the precise network type, configuration and control, but as a general rule networks with <50 nodes require design attention to avoid quantum effects under certain traffic and operational modes. That is, a failure of a node or link today, for a given network configuration and traffic pattern, may affect only a few customers and go almost unnoticed. The same failure tomorrow could affect large numbers of customers and be catastrophic, purely due to a different configuration and traffic pattern.
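A simple illustration of the underlying arithmetic, assuming traffic and customers are spread uniformly across the nodes (the customer base figure is invented for the example):
customers = 10_000_000       # assumed national customer base
for nodes in (1000, 200, 50, 10):
    print(f"{nodes:4d} nodes: one node failure strands ~{customers // nodes:,} "
          f"customers ({1 / nodes:.1%} of the base)")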
10. Telepresence
Existing video conferencing and video telephone technology presents us with images of another human of the wrong size and the wrong colour, generally blurred or jerky and distorted with movement. The images lack good synchronisation between speech and lip movement, have a voice that emanates not from the lips but from some loudspeaker to the side, do not permit eye contact or body language, and do not approach the illusion of "being there". Moreover, in video conferencing these limitations are compounded by the need for more than one screen and the lack of any shared work space. This creates an unnatural and sterile workplace which is difficult to become acclimatised to if all of the people in a communications session are not already acquainted.
A sector of telepresence that is fundamentally simple but highly effective is based on the mounting of a miniature TV camera (or cameras for stereoscopic vision) on a headset. A remote operator/viewer then experiences the illusion of "being there" by effectively "sitting on the shoulder" of the human platform. This technology can be used to address the often-experienced problem summed up by the comment "if only I could see what you can see - then I might be able to help you". In all manner of service, maintenance, engineering and other human activities, the need to "be there" is prevalent. The availability of micro-miniature TV cameras, coupled with modern telecommunications, has made this a demonstrable proposition. The addition of a head-up display and pointing system also allows the wearer and/or remote user to target particular objects with accuracy, while at the same time the remote viewer can send graphical indications and data in support of their joint vision and interaction. Applications of this ISDN-based technology include: remote maintenance, equipment installation, telemedicine, news gathering, etc.
11. Video Window
Video window experiments and developments have been conducted over the last 15 years and have now arrived at a point where all the components are available as commercial products. The principle of operation is to present human beings at real size on a high definition projection TV screen. By suitably arranging furniture and decor, the illusion of a continuous room or meeting place can be created. Moreover, using electronic processing and steerable microphones, it is possible to focus the acoustics on any one speaker and arrange for his voice to emanate from the appropriate part of the image. With the camera mounted in front of the screen it is difficult to achieve the illusion of eye contact, and ideally a behind-the-screen camera is required. One method of achieving this employs a liquid crystal shutter of wall size. The whiteboard can also be made an integral part of this teleconference room - a facility that is usually lacking, or poorly realised, in today's teleconference environment.
12. Computer Work Space
Whilst multimedia developments place a small picture of the human being in a corner of the screen and have the PC screen dominated by computer data, a more realistic approach might be to reverse the process. Switching between people and data is one solution, but the subliminal overlay of one upon the other, with an increasing density of representation for whoever is speaking, would also be possible and would probably enhance the process. A further alternative for computer data display and manipulation involves the use of a large screen with a direct hand interface. Instead of a mouse or keyboard, the hand allows the user to activate and manipulate displayed data. Further levels of sophistication include direct voice activation within a very few years, whilst full cognition is likely to be 10-15 years in the future.
3D systems are already finding uses in applications where it is essential to obtain depth information, particularly for defence, sub-sea, medical and hi-tech systems. We can expect these high value 3D services to permeate through into the CAD/CAM arena. Architects, car constructors and aeronautical engineers all have a need to 'walk through' their designs to experience the novelty and examine detail. We might also anticipate the acceptance of 3D for applications in video telephony and conferencing.
13. Computer Interface
Computer illiteracy is not a basic human limitation, but more a symptom of poor interface design. The GUI allowed more people to enter the computer domain than any previous advance. Moving to voice I/O and 'hands in the screen' will probably exceed this previous advance. This is likely to release a latent demand for both computing and communication capacity, services and facilities that far exceeds our experience to date.
14. Virtual Reality
VR is a rapidly evolving technology in which the user, as in telepresence, experiences a sense 'of being there', but in a computer generated virtual (synthetic) world rather than the real one. The user is able to move within the virtual world and interact with it. The technology has a growing application in a wide range of activities including education and training, CAD-CAM, surgery and, most immediately, entertainment through the games industry. The use of an extended version of immersive VR, with combined real and computer generated images, may offer the ultimate form of teleportation and video conferencing. However, the requirement to use headsets will create an unnaturalness in human interactions which may offset the advantages gained. On the other hand, suitably lightweight and miniature elements such as active spectacles are now available and may become popular.
15. Final Remarks
The increasing speed at which consumer and office electronics can be developed and deployed is promoting: a rapid migration of increased computing power and artificial intelligence towards the customer; a growing diversity of service type and demand; an increasingly mobile population; and a rising customer expectation in terms of service level/availability and network performance. Such a demand can only be satisfied by networks that respond on a similar time scale by: increased network transparency via photonic amplification and switching; reductions in the amount of network plant and number of nodes; new forms of software and control; a migration to hybrid forms of signalling; customers becoming responsible for service configuration and control; the provision of bandwidth on demand; new forms of billing.
As we move into the 21st Century, today's video conferencing and video telephones will look very quaint and functionally inadequate. New display and projection systems will enable us to create a new sense of realism. Attaining the feeling that 'you are there' - teleported to a new location with humanistic interfaces and facilities - should be our ultimate objective. The key benefits of these new technologies will include: a global reduction in the need for physical transport and travel; more efficient and reduced use of raw materials and energy; more efficient operations for companies and organisations; more efficient and effective work practices; and more productive and less stressful lifestyles.
All of this is now made possible by the removal of the capacity bottleneck inflicted by a copper and radio infrastructure, which has already been overtaken and outmoded by optical fibre. For the most part, all of the other technologies required are available or coming to fruition. Probably the most demanding requirement is the change of mind set needed in the associated industries, regulators and governments.