



The Hidden Benefits Of Optical Transparency
Peter Cochrane, Roger Heckingbottom, David Heatley.

The optical fibre amplifier will bring about network transparency and reductions in manning levels, interface problems, software and operating costs, whilst improving reliability and performance.

There is little doubt that the advent of the optical fibre amplifier is set to revolutionise telecommunications on a broad range of technological, operational and application fronts. The simplification of repeater hardware, the opening up of the latent fibre bandwidth and the enormous increase in cable capacity that will ensue are often cited as the major advances heralded by this technology. We should note however that the optical amplifier is merely the first rung on a ladder of technologies migrating towards even greater capabilities. For example, we already have gratings being written into the core of optical fibres, with the potential for programmable fibres in the not too distant future. Such technology might allow real time frequency and signal selection within a network in a new and revolutionary way. Contactless connectors, active leaky feeders and optical wireless are other recent arrivals on the scene that exploit optical transparency. There is therefore a very real prospect of achieving all-optical communication for mobile units via a fixed transparent optical network. On a global basis the need for switching remains, but for cities and certain small countries that requirement is now diminishing as we look towards the early part of the 21st century. Wavelength routing using linear and non-linear properties offers the prospect of non-blocking, switchless communication for up to 20 million customers.

Whilst the above prospect is stunning when compared with the technology of the previous decade, it is in the area of economics that optical transparency affords a unique opportunity. What is generally not recognised is that there is a raft of hidden benefits, specifically: major reductions in interface technology, network control and management software; reliability improvements; and reductions in the number of switching sites and craft people. System and network operating costs that are a small (even trivial) fraction of today's look certain to be realised. If this is the case then we can expect market forces to steer telcos away from their traditional business - bit transport. In the next 20 years there is little prospect of maintaining turnover and profitability on the basis of POTS and bit transport alone. The migration away from these areas is already apparent and will accelerate with the realisation of transparent optical networks. As bandwidth becomes a commodity item, to be traded in the same way we currently trade coal or oil, it will be the service and information provided that will represent the highest value-add and most lucrative market. In short, we are moving towards a future world where bandwidth is effectively free and distance is irrelevant.

A Network Vision
Current network thinking is predicated on the copper past and the rigid limitations and abilities of twisted pair, coax and static microwave radio links, specifically: switches located in the centres of population; a local loop averaging only 2 km in reach; central battery power feeding; numerous flexibility and distribution points; charging for time, distance and bandwidth. All of these are a throw-back to the days of a limited resource. The need for thousands of local switches and hundreds of central offices for a typical European nation or RBOC territory has led to the creation of hierarchical telecommunication networks where the traffic is incredibly thin in the local loop and incredibly thick at the core and international level. The tiers of the network have been organised to allow a grooming of the information being transported so that it may be carried most economically from one customer terminal to another. The sheer number of switches and transmission systems involved dictates such a topology. However, there is now a far better alternative. If we exploit the inherent reach capability of optical fibre and extend the local loop to >40 km, complete national networks of <100 switches become possible, as the sketch below illustrates. At this level there is no point in creating a hierarchical structure. In fact, it can be argued that a hierarchical structure across the entire planet would be a redundant concept. Suddenly the network has become purely a local loop: no core network, no international level, just the local loop!
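
The scale of this claim can be checked with some rough geometry. Below is a minimal Python sketch estimating the number of switch sites needed to cover a territory as the maximum loop reach grows; the circular-coverage model and the 2 km and 40 km radii are simplifying assumptions for illustration, with the UK land area used as an example territory.

    import math

    def switch_sites(loop_radius_km, territory_km2):
        # Each switch site serves a circular area bounded by the loop reach.
        area_per_site = math.pi * loop_radius_km ** 2
        return math.ceil(territory_km2 / area_per_site)

    UK_AREA_KM2 = 244_000  # approximate UK land area

    for radius_km in (2, 40):
        sites = switch_sites(radius_km, UK_AREA_KM2)
        print(f"{radius_km:>2} km loop reach -> ~{sites:,} switch sites")

On this simple model a 2 km reach implies ~20,000 sites, whereas a 40 km reach brings the figure below 50 - consistent with a national network of <100 switches.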

Looking Back
The history of telecommunications has seen the transmission path migrate from a single wire for the earliest form of telegraphy to a twisted pair for true telephony, then coax for multi-channel telephony and now multiple fibres. In the future this trend may culminate in a single ubiquitous fibre with varying numbers of active elements in the transmission path. This is summarised in Fig 1 for a 100 km route and is compared with the corresponding trends in complexity, reliability, cost, bandwidth and material usage.


Fig 1. The history of cable telecommunications


Fig 2. Cyclic trend in analogue -v- digital transmission

In concert with this, Fig 2 shows the cyclic nature of the prevalence of digital and analogue transmission over the same time span. It is likely that with the removal of electronics and other blocks on bandwidth in the transmission path, the resulting photonic transparency will encourage a partial move back to analogue communication! By analogue we refer to those instances which exploit transparency between terminal stations, there being no signal-specific regeneration en route, only broadband amplification. Our definition of analogue therefore spans what is today considered to be true analogue through to a blend of analogue and digital formats.


Fig 3. Network costs with projected technology progression relative to 1970

In Fig 3 the economics of this situation are shown across all the technology eras given in Figs 1 and 2. It has to be remembered that the primary reason for going digital in the first place was to realise the lowest cost solution for an international network that was striving to maintain a given standard of communication for telephone traffic. It turned out that digital switching and transmission gave the lowest cost solution, by about a factor of 2, compared with any combination of analogue and digital. But this outcome was dictated by the repeater spacings of the time: ~2 km on twisted pair and coax. With repeater spacings now extended to >50 km on fibre routes, the economics of networks have changed radically. When this is combined with a >100-fold reduction in the number of switching sites, we soon reach a milestone where a combination of analogue switching and transmission creates the most economic network.

Network Reliability
With the development of Pulse Code Modulation in the 1960s, digital transmission was realised for the first time, with all its associated benefits of consistency of signal quality, low interference, etc. The downside was that large numbers of repeaters were required between switching centres to re-amplify, re-shape and re-time the signals - the 3R process. Early repeaters proved to be the dominant source of system failure, so much so that elaborate measures were introduced into the network to improve end-to-end reliability. Primarily this involved N+1 stand-by and diversity routing.

N+1 stand-by provides one hot stand-by circuit for every N traffic-carrying circuits within the same cable or duct route. When a failure occurs the affected traffic is automatically switched over to the stand-by circuit. This technique offered real benefits at the time of its introduction, but detailed studies have shown that this is no longer the case, particularly since the introduction of optical fibre. Fig 4 shows that by merely duplicating the power supplies associated with repeaters and other active elements (bearing in mind how few of these elements remain), circuit availability is improved to a level on a par with N+1 stand-by, particularly on long lines, but at a fraction of the cost in terms of equipment, maintenance and network overheads.


Fig 4a. Unavailability of 100km systems with/without N+1 standby


Fig 4b. Unavailability of 1000km systems with/without N+1 standby


Fig 4c. Unavailability of 10,000km systems with/without N+1 standby
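
A feel for these curves can be had from elementary availability arithmetic. The Python sketch below contrasts an unprotected line system, N+1 stand-by and duplicated power supplies over a 1000 km route; all the MTBF and MTTR figures are illustrative assumptions, not the data behind Fig 4.

    HOURS_PER_YEAR = 8760.0

    def unavailability(mtbf_years, mttr_hours):
        # Steady-state unavailability of one repairable element.
        mtbf_hours = mtbf_years * HOURS_PER_YEAR
        return mttr_hours / (mtbf_hours + mttr_hours)

    n_repeaters = 1000 // 50                  # 1000 km route, 50 km spans
    u_elec = unavailability(50, 24)           # amplifier electronics (assumed)
    u_power = unavailability(5, 24)           # power supply, the dominant failure

    u_line = n_repeaters * (u_elec + u_power)            # every element in series
    u_n_plus_1 = 4 * u_line ** 2                         # N+1 (N=4): dual-failure term
    u_dup_power = n_repeaters * (u_elec + u_power ** 2)  # supplies duplicated

    for name, u in [("unprotected line", u_line),
                    ("N+1 stand-by", u_n_plus_1),
                    ("duplicated power", u_dup_power)]:
        print(f"{name:17s} unavailability ~ {u:.1e} ({u * HOURS_PER_YEAR:.1f} h/year)")

With power failures dominant, duplicating the supplies alone comes within a small factor of full N+1 protection, which is the essence of the argument above.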

Diverse routing differs from the N+1 strategy in one key area: the stand-by route follows a totally different path between the common end points, which enables a better link availability to be achieved. The geographic separation between the main and stand-by routes can range from a few kilometres within cities to hundreds of kilometres within countries, and ultimately whole countries or even continents for international links. Again, studies have shown that the original rationale behind diverse routing is no longer completely valid, with power supply duplication producing a similar reliability improvement. However, since diverse routing is generally limited to long distance, high capacity links because of its high implementation and operating cost, and since it also affords a useful level of network flexibility, there is no compelling argument for its wholesale removal. In any case, there is a general trend towards self organisation within networks that reduces the need for diverse routing whilst at the same time improving plant utilisation.

Software
Today's networks rely heavily on software for their management and service provision. In contrast to hardware, where failures generally have a small, localised impact, minor errors in software pose a considerable and widespread risk to network operation. If the present trajectory in software development is maintained, the magnitude of the risk will grow exponentially as we look to the future. Indeed, we already see the reliability of hardware improving rapidly whilst that of software is reducing (Fig 5), leading to sub-optimal system and network solutions.


Fig 5. Outage disaster scale for networks.

From any engineering perspective this growing imbalance needs to be addressed. If it is not we can expect to suffer an increasing number of ever more dramatic failures. The introduction of optical transparency is likely to see a reduction in the scale and complexity of software through the corresponding reductions in switching and routing.

Critical Mass
Because modern networks contain thousands of nodes, the effect of individual node failures tends to be localised and isolated - barring software related events! Furthermore, the impact of single or multiple failures is effectively governed by the "law of large numbers", with individual customers experiencing a reasonably uniform grade of service. However, as the number of nodes is reduced, the potential for catastrophic failure increases, with the grade of service experienced at the periphery becoming extremely variable. The point at which such effects become apparent depends on the precise network type, configuration, control and operation, but as a general rule networks with <50 nodes require careful design to avoid quantum effects occurring under certain traffic conditions. That is, a failure of a node or link today for a given network configuration and traffic pattern may affect only a few customers and pass almost unnoticed. The same failure tomorrow could affect large numbers of customers and be catastrophic due to a different configuration and traffic pattern.

As we move towards networks with fewer nodes, and increasing mobile communications at the periphery, we should do so with caution!
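
The loss of the "law of large numbers" can be illustrated with a toy Monte Carlo model. The Python sketch below spreads a customer base unevenly across N nodes and records how much of it a single random node failure removes; the heavy-tailed (Pareto) node-size distribution and all the figures are assumptions chosen purely for illustration.

    import random
    import statistics

    def failure_impact(n_nodes, trials=20_000, seed=1):
        # Fraction of customers lost when one randomly chosen node fails.
        rng = random.Random(seed)
        weights = [rng.paretovariate(1.5) for _ in range(n_nodes)]
        total = sum(weights)
        shares = [w / total for w in weights]   # customer share per node
        losses = [rng.choice(shares) for _ in range(trials)]
        return statistics.mean(losses), max(losses)

    for n in (5000, 500, 50):
        mean, worst = failure_impact(n)
        print(f"{n:>4} nodes: mean loss {mean:7.3%}, worst case {worst:7.3%}")

On these assumptions the mean loss simply scales as 1/N, but the worst single failure grows from a few percent of all customers to of the order of a tenth of them as the node count falls from thousands to tens.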

Network Management
Monitoring the operation of systems and networks, extracting meaningful information and taking appropriate action to maintain a given grade of service is becoming increasingly complex and expensive. The level of complexity increases at least in proportion to the amount of data being handled, much of which is redundant or unusable. Consider for example the quantity of data generated by the failure of a switching node. It generates a fault report whilst all related nodes also generate reports. For a fully interconnected network of N switching nodes, this results in one failure report plus error reports from the N-1 other nodes - that is, ~N reports per failure. Since such a network suffers N/(365 x MTBF) failures per day, in a large network (where two or more nodes may well be down simultaneously) the report rate grows as:

reports/day ~ N² / (365 x MTBF in years)

For example, a network of 500,000 switching nodes, each with an MTBF of 10 years, will suffer an average of 137 node failures per day and will therefore generate an average of 68.5 million reports per day. This assumes that each node is communicating with all the others, which is of course somewhat extreme. However, even if we consider the least connected case, the reports still count in the millions.

Whilst there are certain network configurations and modes of operation that realise a fault report rate proportional to N, the nature of telecommunication networks to date tends to dictate ~N² growth. Indeed, a large national network with thousands of switching nodes can generate information at rates of ~2 Gbyte/day under normal operating conditions. Maximising the MTBF and minimising N must clearly be key design objectives.
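
The arithmetic is easily reproduced. A minimal Python sketch of the fully interconnected case, using the figures from the worked example above:

    def fault_reports_per_day(n_nodes, mtbf_years):
        # Each failure yields one fault report plus error reports from
        # the N-1 connected nodes, i.e. ~N reports per failure.
        failures_per_day = n_nodes / (mtbf_years * 365)
        return failures_per_day, failures_per_day * n_nodes

    failures, reports = fault_reports_per_day(n_nodes=500_000, mtbf_years=10)
    print(f"~{failures:.0f} failures/day, ~{reports / 1e6:.1f} million reports/day")
    # -> ~137 failures/day, ~68.5 million reports/day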

A penalty associated with the ~N² growth that is generally hidden is the overhead due to computer hardware and software, plus transmission and monitoring equipment. For very large networks this is now growing to the point where it rivals the revenue earning elements - a trend that most definitely cannot be justified or sustained. The reliability improvements and operating simplifications that accompany the introduction of optical transparency will clearly have a beneficial impact on these issues.

Human Interdiction
It is important to recognise that network reliability is as much a people issue as it is a matter of equipment and technology. For example, it is estimated that over 50% of the faults experienced by telcos today are in the local loop, around half of which are in some way attributable to their own craft people performing installation and maintenance activities. The replacement of manual distribution points and wiring frames with software based routing over a Passive Optical Network (PON) represents a significant gain. When combined with gains arising from the eradication of corrosion, the overall saving can exceed 40% of the operating total. Similar savings are also possible for repeater stations and switches. A network based solely on fibre with <100 nodes would have a fault rate less than 20% of today's network. This figure assumes fibre to the home (FTTH). If however the fibre to the kerb (FTTK) option is adopted, with copper or radio drops to the customer, the above gains will not be realised and the overall running costs will remain high. FTTK also introduces a finite bandwidth limitation that prevents future capacity upgrades. FTTH is therefore key to the lowest cost, highest reliability and utility future.

Manning Levels
If we correlate the equipment reductions and reliability improvements discussed above with manning levels, we find they follow the trend shown in Fig 6. Here we have normalised the figures to a 1960s telco comprising 100,000 employees serving 10 million customers. With equipment reductions on the scale we envisage, combined with related changes (improvements in most cases) in reliability, network structure, etc, we predict that ~30,000 employees could optimally serve a customer base of the size most European telcos enjoy today.


Fig 6. Manning levels in telcos normalised to the size of customer base

Intelligence
It is clear that there is a rapid migration of intelligence towards the periphery of telecommunications networks. Obvious present day examples are desktop and laptop computers that are linked to others via the network to create new working environments. By the year 2015 we might well see the first supercomputer with processing power and information storage on a par with humans. Some 15 or 20 years later such a machine might appear on our desk, or even on our wrist. Customers, and ultimately machines, will wish to connect to the network and demand the right service, bandwidth and access. The demands of this form of telecommunications will be stunning by today's standards; for example, clock rates in the Gbit regime will be the norm, rendering ISDN line rates as inappropriate as a Morse key. We can also expect to see optical computing based on analogue techniques in reasonably widespread use. Given these changes it is hard to envisage any service or utility that a telco can uniquely place within its network. Indeed, as the cost of transmission and switching continues to fall, all manner of customer driven operations will be conducted on the periphery of the network by a multiplicity of operators who will have little concern or interest in how the bits get from A to B. The good news for telcos is that the traffic generated by such operations is likely to create a massive information flow resulting in a tenable business. It is then likely that telecommunications will go the same way as many commodities before: we will have to sell 100x more at 1/100th of the present price to maintain turnover, while at the same time reducing operating costs to ensure profitability against tightening margins. If not, it is hard to see how telcos will grow their business on the basis of presently perceived markets.

An Industry Tension
It is interesting to reflect that telcos think in terms of switches whilst the computer industry thinks in terms of routers. On the one hand, the computer industry has a desire for total freedom and anarchy in communications with no one intervening in the process (e.g. the Internet). On the other hand, the telcos wish to have switches in place at strategic locations so they may control and manage the overall process of communication. The two halves of the IT industry are thus moving in diametrically opposed directions. A tension is building up, and in reality neither side can win unless some compromise is reached, as a mix of both philosophies will be required in networks for some time. To opt for either as a total solution would merely lead to a very sub-optimal outcome for a sizeable user population.

Protocols
A further tension evident within the IT industry concerns the enthusiasm exhibited by the telecommunication fraternity for standards and common interfaces - after all, it is the only way their networks can inter-work. Yet for decades protectionism reigned, in that the attainment of standard interfaces could be postponed until the interface in fact became an international border. As a consequence we have FDM and TDM systems designed with no simple means of interfacing: for example, 2 Mbit/s in Europe whilst Japan and America have 1.5 Mbit/s. In contrast, the Synchronous Digital Hierarchy (SDH), ATM and packet switching all embrace internationally agreed standards, and the benefits will grow as telecommunications becomes truly global.

The computer industry, however, has a different objective. The absence of standards equates directly to customer lock-in and loyalty, through a de facto inability to move hardware or software from one supplier to another. This limitation has to be overcome if we are to enjoy the full benefits of IT. Fortunately there are promising signs of this happening. In any event, optical transparency offers an elegant means of side-stepping many of these issues.

Delay
An inherent and critical parameter in all forms of human communication is delay. Today this is most detrimentally manifest on geostationary satellite links, where ~300 ms between utterances can lead to difficult communication through "double talk". The next most common manifestation arises from the use of code compression on speech and video because of the misguided axiom that bandwidth is expensive. The latest GSM speech codecs incur a delay in excess of 140 ms, and the horrors of video conferencing are all too apparent, e.g. lack of lip sync and jerky, distorted, non-lifelike images. None of this is necessary! The bandwidth problem abated with the introduction of fibre.
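
A simple delay budget makes the point. The Python sketch below compares raw propagation delay (~5 µs/km in fibre, ~3.3 µs/km for a free-space radio path) with the delay added by a compressing codec; the route lengths are illustrative assumptions.

    FIBRE_US_PER_KM = 5.0        # group delay in silica fibre
    FREE_SPACE_US_PER_KM = 3.33  # radio path at the speed of light

    def one_way_delay_ms(route_km, us_per_km, codec_ms=0.0):
        # Propagation delay plus any codec processing delay.
        return route_km * us_per_km / 1000.0 + codec_ms

    cases = [
        ("London-Edinburgh fibre, no codec",   700, FIBRE_US_PER_KM,      0),
        ("London-Edinburgh fibre, GSM codec",  700, FIBRE_US_PER_KM,    140),
        ("Transatlantic fibre, no codec",     5500, FIBRE_US_PER_KM,      0),
        ("Geostationary satellite hop",     72_000, FREE_SPACE_US_PER_KM, 0),
    ]
    for label, km, rate, codec in cases:
        print(f"{label:36s} ~{one_way_delay_ms(km, rate, codec):6.1f} ms")

Even a transatlantic fibre path contributes well under 30 ms one way; once the satellite hop is avoided it is the codec, not the distance, that dominates the budget.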

The advent of ATM presents the next big challenge. For computer, mail, fax and other forms of delay-insensitive communication ATM affords some obvious advantages, but why anyone should wish to inflict it on the human race for speech, visual and real-time computer communication is a real mystery. Packetising everything is not a sensible credo, even if you love ATM! So far there have been no large scale trials or modelling of such networks, and as a consequence we do not adequately understand what the service implications will be. Models to date only address the linear portion of an overall performance characteristic that is highly non-linear. In the not too distant future we may become aware of information waves and chaotic actions that introduce new forms of brittleness into the telecommunications infrastructure!

Perversely, the prospect of optical ATM looks very attractive, as the bandwidth available makes blocking, and thus unpredictable delay, an unlikely event. A combination of solitons, ATM and total transparency may therefore realise the ultimate network protocol with minimum delay.

Technology Speed Up
All of the above is compounded by the fact that the development of technology is accelerating, and we see a world where most standards are dead on arrival, or at least moribund, with little hope of becoming universal. Yet we have to find a solution that allows us to inter-work on a global basis over a network of fixed and mobile links. Optical transparency has a key role to play in this context. If customers can be allocated a unique (or shared) carrier and be left to utilise and modulate it as they see fit, then for the first time the telecommunication industry can stand back and watch the interface war as it rages between the providers of customer premises equipment. Bluntly, the user doesn't care whether he is communicating with another user via MPEG, JPEG or other coding standards; all he cares about is initiating the call and having his terminal equipment and the network do the rest and take care of all the problems. Significantly, the smartest of the computer manufacturers are already installing software programmable codecs in their products to enable them to communicate with all forms of modem, voice and vision codecs as they emerge.

The Local Call
We can put a stamp on an envelope and it will go from New York to Boston or New York to California for the same price, despite the grossly different distances and the fact that the process is obviously labour intensive and distance related. Why is it then that telephone calls over the same distances are charged at different rates by an industry that is largely de-coupled from distance related costs? In the UK experiments have already been conducted in which the whole network has been configured to bill customers at only local call rates during certain holiday periods. Not surprisingly the response has been fantastic! Suddenly people who normally made, say, 2 calls each of 5 minutes duration on a Sunday afternoon were making upwards of 5 calls of half an hour duration. A network that was reasonably quiet on a Sunday compared to a normal working day was suddenly close to overload throughout the period. This is real evidence of a tremendous latent demand for telecommunications that is linked inextricably to cost. There is also an existence theorem that says you don't have to charge for distance, you don't have to charge for time and you don't have to charge for use - all you pay is a fixed subscription. It is called the Internet! It would therefore seem inevitable that telecommunications will go in this general direction. If an impediment to this progress is encountered it will not be down to the technology, for it is abundantly clear that this goal can be reached in the very near future with what is available in the laboratory and rapidly moving towards manufacture. If there are to be any future impediments they will be artificial: the creation of governments, regulators and outdated copper mind sets.

Closing Remarks
The Clinton/Gore initiative for a national super highway is being mirrored in Europe under the 4th Framework programme and is, to some extent, already in place in the UK in the form of the SuperJANET network. Opening up high speed telecommunications in this way will cause the computer industry to release machines of a very different nature from those seen hitherto. Perhaps we will at last see a welcome end to code compression that degrades visual and speech signals to the point where they are unusable. Perhaps we will be able to realise distributed computing and employ telepresence, teleconferencing, multimedia and virtual reality as a means of communication in a humanised framework that allows us to stop travelling, stop wasting the planet's resources and get on with the task of building a civilisation. Without transparent optical networks it is quite likely that neither this nor the information society will be achievable!

Bibliography
Cochrane, P., Heatley, D.J.T., et al., "Optical communications - future prospects", IEE Electronics & Communication Engineering Journal, Vol. 5, No. 4, August 1993, pp. 221-232.
Cochrane, P. and Heatley, D.J.T., "Reliability aspects of optical fibre systems and networks", IEEE International Reliability Physics Symposium, San Jose USA, 11-14th April 1994.
Cochrane, P. and Heatley, D.J.T., "Optical fibre systems and networks in the 21st century", Interlink 2000 Journal, February 1992, pp. 150-154.
Cochrane, P., Heatley, D.J.T. and Todd, C.J., "Towards the transparent optical network", 6th World Telecommunication Forum, Geneva, 7-15th October 1991.
Heatley, D.J.T. and Cochrane, P., "Future directions in long haul optical fibre transmission systems", 3rd IEE Conference on Telecommunications, Edinburgh, 17-20th March 1991, pp. 157-164.
Hill, A.M., "Network implications of optical amplifiers", Optical Fibre Communication Conference (OFC'92), San Jose USA, 2-7th February 1992, paper WF5, p. 118.
"Modelling Change in Telecommunications", BTTJ Special Issue, Vol12/2, April 94.

Biographies
Peter Cochrane is a graduate of Trent Polytechnic and Essex University. He is currently a visiting professor at Essex, Southampton and Kent Universities. He joined BT Laboratories in 1973 and has worked on both analogue and digital switching and transmission studies. From 1988 he managed the Long Lines Division, where he was involved in the development of intensity modulated and coherent optical systems, photonic amplifiers and wavelength routed networks. In 1991 he established the Systems Research Division and during 1993 he was promoted to head the Research Department at BT Laboratories with 620 staff. He is also the Development Board member for technology and has published widely in all areas of telecommunications studies.

Roger Heckingbottom received his first degree and doctorate from Oxford University. He is currently a visiting professor at the University of Wales, Cardiff. He joined BT Laboratories in 1966, initially involved in the study of semiconductor surfaces. Through the 1970s his research broadened to cover the growth and characterisation of semiconductors for optical communications. In 1980 he was appointed head of the Materials Division, adding optical fibres and component reliability to his earlier responsibilities. Since 1990 he has been concerned with the application of optics in communications networks. He currently manages the Networks Research Division with 145 staff. He has published widely and served on several national committees responsible for university research in the UK.

David Heatley obtained his doctorate in Optical Communications Systems from the University of Essex in 1989. He joined BT Laboratories in 1978 to work on the development of analogue and digital optical fibre systems designed specifically for video services. In 1985 he was appointed head of a group responsible for the development of optical receivers for terrestrial and undersea applications. In this capacity he was a member of the team that received the Queen's Award for Technology in 1990. He is now with the Systems Research Division and heads a team with special responsibility for future studies and telemedicine. During his career he has published widely on telecommunications, ranging from discrete components, through systems, to networks. He is a Member of the IEE and a Chartered Engineer.
