
The hidden benefits of optical transparency
Peter Cochrane, Roger Heckingbottom, David Heatley

Prologue
There is little doubt that the advent of the optical fibre amplifier is set to revolutionise telecommunications on a broad range of technological, operational and application fronts. The simplification of repeater hardware, the opening up of the latent fibre bandwidth and the potential increase in cable capacity that will ensue are often cited as the major advances heralded by this technology. What is not generally recognised is that there is a raft of hidden benefits spanning major reductions in interface technology and in network control and management software; reliability improvements; and reductions in the number of switching sites and craft people. We should also note that the optical amplifier is merely the first rung of a ladder of technology migrating towards even greater capabilities. For example, we already see gratings being written into the core of optical fibres, with the potential of programmable fibres in the not too distant future. Such technology might allow real-time frequency and signal selection within a network in a new and revolutionary way. Contactless connectors, active leaky feeders and optical wireless are other recent arrivals on the scene that exploit optical transparency. There is therefore a very real prospect of total optical communication from mobile unit to mobile unit via a fixed transparent optical network. On a global basis the need for switching remains, but for cities, and indeed some small countries, that requirement is now diminishing as we look towards the early part of the 21st century. Wavelength routing using linear and non-linear modes offers the prospect of non-blocking, switchless communication for up to 20 million customers.

Whilst the above prospect is stunning when compared with the technology of the previous decade, it is in the area of economics that optical transparency affords the most dazzling opportunity. System and network operating costs that are a small fraction (even a trivial proportion) of today's look certain to be realised. If this is the case then we can expect market forces to steer telcos away from their traditional business - bit transport. In the next 20 years there is little prospect of maintaining turnover and profitability on the basis of POTS, switching based services and bit transport alone. The migration away from these areas is already apparent and will accelerate with the realisation of transparent optical networks. As bandwidth becomes a commodity item, to be traded in the same way we currently trade coal or oil, it will be the services and information provided over it that represent the highest value-add and most lucrative market. In short, we are moving towards a future world where bandwidth is free and distance is irrelevant. In this paper we examine the development of these features and briefly discuss the implications for operators.

A Network Vision
Current network thinking is predicated on the copper past - the rigid limitations and abilities of twisted pair, coax and static microwave radio links. Specifically: switches located in the centres of population, a local loop averaging only 2 km in reach, central battery power feed, numerous flexibility and distribution points, and charging for time, distance and bandwidth. All of these are a throwback to the days of a limited resource. The need for thousands of local switches and hundreds of central offices for a nation has led to the creation of a hierarchical telecommunications network where the traffic is incredibly thin in the local loop and incredibly thick at the international level. The tiers of the network have been organised to allow a grooming of the information being transported so that it may be carried most economically from one customer terminal to another. The sheer number of switches and transmission systems involved dictates such a topology. However, if we exploit the inherent reach capability of optical fibre and extend the local loop to >40km, complete national networks of <100 switches become possible. At this level there is no point in creating a hierarchical structure. In fact, it can be argued that a hierarchical structure across the entire planet would be a redundant concept. Suddenly the network has become purely a local loop: no long lines, no international level, just the local loop!
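
To put rough numbers on this, the sketch below (Python) estimates how many switch sites a given local-loop reach implies for a country of roughly UK size. The circular coverage model, the land-area figure and the reach values are illustrative assumptions rather than design data.

import math

# Rough, illustrative estimate of switch-site count as local-loop reach grows.
# The circular-coverage model and the UK land-area figure are assumptions for this sketch.
UK_AREA_KM2 = 244_000  # approximate UK land area

def switch_sites(loop_reach_km: float, area_km2: float = UK_AREA_KM2) -> int:
    """Switch sites needed if each serves a circular area of radius loop_reach_km."""
    coverage_per_site = math.pi * loop_reach_km ** 2
    return math.ceil(area_km2 / coverage_per_site)

for reach_km in (2, 10, 40, 60):
    print(f"loop reach {reach_km:>2} km -> ~{switch_sites(reach_km):>6,} switch sites")

On these assumptions a ~2 km reach implies many thousands of sites, whereas a >40 km reach brings the count to well under 100, in line with the figures above.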

An Industry Tension
It is interesting to reflect that telcos think in terms of switches whilst the computer industry thinks in terms of routers. On the one hand, the computer industry has a desire for total freedom and anarchy in communications, with no one intervening in the process (e.g. the internet). On the other hand, the telcos wish to have switches in place at strategic locations so they may control and manage the overall process of communication. The two halves of the IT industry are thus moving in diametrically opposing directions. A tension is building up, and in reality neither side will win unless some compromise is reached. For some considerable time it is likely that a blend of both philosophies will be required in networks. To opt for either as a total solution would merely lead to a very sub-optimal outcome for a sizeable user population.

An inherent parameter in all forms of human communication is delay. Today this is most manifest on geostationary satellite links, where ~300ms between utterances can lead to difficult communication through "double talk". The next most common manifestation arises from the use of code compression on speech and video because of the misguided axiom that bandwidth is expensive. The latest GSM speech codecs incur a delay in excess of 140ms, and the horrors of video conferencing are all too apparent, e.g. lack of lip sync, jerky, distorted and non-lifelike images, etc. None of this is necessary! The bandwidth problem abated with the introduction of fibre.
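
As a rough illustration of how these delays accumulate, the sketch below totals a one-way delay budget for a compressed call routed over a geostationary satellite hop. The individual component values are assumptions broadly consistent with the figures quoted above, chosen only to show how quickly the budget is consumed.

# Illustrative one-way delay budget for a compressed call carried over a geostationary
# satellite hop. The individual figures are assumptions broadly in line with the
# delays quoted above, chosen only to show how quickly the budget is consumed.
delay_budget_ms = {
    "speech codec chain (encode + decode)": 140,  # >140 ms quoted for GSM codecs
    "geostationary satellite propagation":  270,  # assumed up- and down-link leg
    "terrestrial transmission":              15,  # assumed fibre/coax propagation
    "switching and buffering":               25,  # assumed
}
total_ms = sum(delay_budget_ms.values())
for component, ms in delay_budget_ms.items():
    print(f"{component:38s} {ms:>4} ms")
print(f"{'total one-way delay':38s} {total_ms:>4} ms")

Anything much beyond the ~300ms quoted above makes natural conversation difficult, which is the point being made here.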

The advent of ATM presents the next big challenge. For computer, mail, fax and other forms of delay-insensitive communication ATM affords some obvious advantages, but why anyone should wish to inflict it on the human race for speech, visual and real-time computer communications is a real mystery. Packetising everything is not a sensible credo, even if you love ATM! So far there have been no large scale trials or modelling of such networks, and as a consequence we do not adequately understand what the service implications will be. Models to date only address the linear portion of an overall performance characteristic that is highly non-linear. We should beware of information waves and chaotic actions that introduce new forms of brittleness in the telecommunications infrastructure!

Perversely, the prospect of optical ATM looks very attractive because the potential bandwidth available makes blocking, and thus unpredictable delay, an unlikely event. A combination of solitons, ATM and total transparency may therefore yield the ultimate network protocol with minimum delay.

Protocols
A further tension evident within the IT industry is the enthusiasm exhibited by the telecommunication fraternity for standards and common interfaces - after all, it is the only way they can inter-work. For decades protectionism has reigned, in that the attainment of standard interfaces could be (and most often was!) postponed until the interface became an international border. As a consequence we have FDM and TDM systems designed (perhaps even purposely so!) with no simple means of interfacing. For example, we have 2Mbit/s in Europe whilst Japan and America have 1.5Mbit/s. However, the Synchronous Digital Hierarchy (SDH), ATM and packet switching all embrace internationally agreed standards, bringing benefits as telecommunications becomes truly global. In contrast, the computer industry has a different objective. The absence of standards equates directly to customer lock-in and loyalty, through a de facto inability to move hardware or software from one supplier to another. This limitation has to be overcome if we are to enjoy the full benefits of IT. In principle optical transparency offers a means of side-stepping this issue.

Technology Speed Up
All of the above is compounded by the fact that the development of technology is accelerating, and we see a world where most standards are dead on arrival, or at least moribund, and have little hope of being universal. Yet we have to find a solution that allows us to inter-work on a global basis over a network of fixed and mobile links. Optical transparency has a key role to play in this context. If customers can be allocated a unique (or shared?) carrier and be left to modulate it as they see fit, then for the first time the telecommunication industry can stand back and watch the interface war as it rages between the providers of customer premises equipment. Bluntly, the user doesn't care whether he is communicating with another user via MPEG, JPEG or other coding standards; all he cares about is initiating the call and having his terminal equipment and the network do the rest and take care of all the problems. Significantly, the smartest of the computer manufacturers are already installing software programmable codecs in their products to enable them to communicate with all forms of data modem, voice and vision codec as they emerge on the marketplace.

History Of Telecommunications
We should reflect that during the various technology migrations in telecommunications, the single wire of the earliest telegraphy has given way to open wire pairs, then twisted pair, coax, multiple fibres and possibly, in the future, a single ubiquitous fibre with varying amounts of active elements in the transmission path. This is summarised in Fig 1 for a 100 km route and is compared against the corresponding trends in complexity, reliability, cost, bandwidth and material usage. In concert with this, Fig 2 shows the cyclic nature of the prevalence of digital and analogue transmission over the same time span. It is likely that with the removal of electronics and other blocks on bandwidth, the resulting photonic transparency will encourage a partial move back to analogue communication! By analogue we refer to the use of an optical carrier that is modulated in a way defined entirely by the customer, there being no signal-specific regeneration en route, only broadband amplification. Our definition of analogue therefore spans what is today considered to be true analogue through to a blend of analogue and digital formats.

In Fig 3 the economics of this situation are shown across all the technology eras given in Figs 1 and 2. It has to be remembered that the primary reason for going digital in the first place was to realise the lowest cost solution for an international network that was striving to maintain a given standard of communication for telephone traffic. It turned out that digital switching and transmission gave the lowest cost solution by about a factor of 2 compared with any combination of analogue and digital. But this outcome was dictated by repeater spacings at the time of ~2km on twisted pair and coax. With repeater spacings extended to > 50km on fibre routes, the economics of networks change radically. When this is combined with a >100 fold reduction in the number of switching sites, we soon reach a milestone where a combination of analogue switching and transmission creates the most economic network.
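
The effect of repeater spacing alone can be seen from a back-of-envelope count for the 100 km route of Fig 1, sketched below; the ~2 km and >50 km spacings are those quoted above and the calculation is illustrative only.

# Back-of-envelope repeater count for a 100 km route (as in Fig 1), using the
# ~2 km copper/coax and >50 km fibre spacings quoted above; illustrative only.
def intermediate_repeaters(route_km: float, spacing_km: float) -> int:
    """Repeaters placed every spacing_km along the route, terminal stations excluded."""
    return max(0, int(route_km // spacing_km) - 1)

ROUTE_KM = 100
for medium, spacing_km in (("twisted pair / coax (~2 km)", 2), ("optical fibre (>50 km)", 50)):
    count = intermediate_repeaters(ROUTE_KM, spacing_km)
    print(f"{medium:28s}: {count:>2} intermediate repeaters over {ROUTE_KM} km")

Some fifty active elements per 100 km collapse to one, or to none at all once broadband optical amplification is used, which is what shifts the economics of Fig 3.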

Network Reliability
With the introduction of Pulse Code Modulation in the 1960s, digital transmission was realised for the first time, with all its associated benefits of consistency of signal quality, low interference, etc. The downside was that large numbers of repeaters were required between switching centres to reshape, regenerate and retime (i.e. 3R) the signals. Early repeaters proved to be the dominant source of system failure, so much so that elaborate measures were introduced into the network to improve end-to-end reliability. Primarily this involved N+1 stand-by and diversity routing.

N+1 stand-by provides one hot stand-by circuit for every N traffic carrying circuits within the same cable or duct route. When a failure occurs the affected traffic is automatically switched over to the stand-by circuit. This technique was valid at the time of its introduction, but detailed studies have shown that this is no longer the case, particularly since the introduction of optical fibre. Fig 4 shows that by merely duplicating the power supplies associated with repeaters and other active elements (bearing in mind how few in number these elements are becoming), circuit availability is improved to a level on a par with N+1 stand-by, particularly on long lines, but at a fraction of the cost in terms of equipment, maintenance and network overheads.
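
A minimal availability sketch illustrates why power-supply duplication can rival N+1 stand-by once so few active elements remain on a route. The MTBF and repair-time figures below are assumptions chosen purely to show the shape of the argument behind Fig 4, not measured values.

# Minimal availability sketch comparing a single power supply per repeater with a
# duplicated supply. All MTBF/MTTR figures are assumptions chosen only to show the
# shape of the argument behind Fig 4, not measured values.
HOURS_PER_YEAR = 8760
MINUTES_PER_YEAR = 525_600

def unavailability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state unavailability of one repairable element."""
    return mttr_h / (mtbf_h + mttr_h)

repeaters = 3  # few active elements remain on a long amplified fibre route
u_electronics = unavailability(mtbf_h=50 * HOURS_PER_YEAR, mttr_h=12)  # assumed
u_power       = unavailability(mtbf_h=10 * HOURS_PER_YEAR, mttr_h=12)  # assumed

# To first order the unavailabilities of a series chain add.
u_single_psu     = repeaters * (u_electronics + u_power)
u_duplicated_psu = repeaters * (u_electronics + u_power ** 2)  # both supplies must fail

print(f"single power supply    : ~{u_single_psu * MINUTES_PER_YEAR:6.1f} min/yr outage")
print(f"duplicated power supply: ~{u_duplicated_psu * MINUTES_PER_YEAR:6.1f} min/yr outage")

With the power-supply contribution effectively squared away, the residual outage is dominated by the few remaining electronic elements, which is why the gain rivals N+1 stand-by at far lower cost.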

Diverse routing differs from the N+1 strategy in one key area: the stand-by route follows a totally different path between the common end points. The geographic separation between the main and stand-by routes can range from a few kilometres within cities to hundreds of kilometres within countries, and ultimately whole countries or even continents for international links. Once again studies have shown that the original rationale behind diverse routing is no longer completely valid, with power supply duplication producing a similar reliability improvement. However, since diverse routing is generally limited to long distance, high capacity links because of its high implementation and operating cost, and since it also affords a useful level of network flexibility, there is no compelling argument for its wholesale removal. In any case, there is a general trend towards self organisation that reduces the need for diverse routing, whilst at the same time improving plant utilisation.

A further important reliability factor arises from the modern-day practice of using add-drops at strategic points in the network to directly remove or insert traffic on high capacity links without the need for conventional down/up multiplexing. Such operations support switching and routing in networks where blocks of traffic rather than individual circuits are involved. Although add-drops are a recent innovation, their deployment is governed by the reliability rules established in the copper past. As long as these elements are active electronic entities this policy makes a certain sense, but with the future deployment of passive fibre splitters and WDM elements to realise add-drops in PONs, such constraints become invalid.

When we look to the future, it is imperative that we recognise that many of today's practices concerning network design and reliability represent a hindrance and a barrier to progress. We must bite the bullet today and take whatever steps are necessary to safeguard that future.

Other Reliability Considerations


Software
Today's networks rely heavily on software for their management and service provision operations. However, it is the minor errors in software, either in the base code or in its implementation, that pose a considerable risk to network operation. If the present trajectory in software development is maintained, the magnitude of the risk will grow exponentially as we look to the future. In contrast, the reliability of hardware is improving rapidly whilst that of software is declining, so much so that we are now seeing sub-optimal system and network solutions. From an engineering perspective this growing imbalance needs to be addressed. If it is not, we can expect to suffer an increasing number of ever more dramatic failures. The introduction of optical transparency is likely to see a reduction in the scale and complexity of software through the corresponding reductions in switching and routing.

Critical Mass
In networks of thousands of nodes, failures tend to be localised and isolated - barring software related events! The impact of single or multiple failures is then effectively governed by the "law of large numbers", with individual customers experiencing a reasonably uniform grade of service. However, as the number of nodes is reduced through optical transparency and other developments, the potential for catastrophic failures increases, with the grade of service experienced at the periphery becoming extremely variable. The point at which such effects become apparent depends on the precise network type, configuration, control and operation, but as a general rule networks with <50 nodes require careful design attention to avoid quantum effects occurring under certain traffic conditions. That is, a failure of a node or link today for a given network configuration and traffic pattern may affect only a few customers and pass almost unnoticed. The same failure tomorrow could affect large numbers of customers and be catastrophic due to a different configuration and traffic pattern in existence at the time. As we move towards networks with fewer nodes, while at the same time increasing the extent of mobile communications at their periphery, we should do so with caution!
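
The toy simulation below illustrates this sensitivity: as the node count falls, the traffic lost to a single node failure becomes both larger and far more variable. The exponentially distributed traffic shares and the node counts are assumptions made purely for this sketch.

import random
import statistics

# Toy illustration of the point above: the fewer the nodes, the larger and more
# variable the impact of a single node failure. The exponential traffic shares and
# the node counts are assumptions made purely for this sketch.
def impact_of_one_failure(num_nodes: int, rng: random.Random) -> float:
    """Fraction of total traffic lost when one randomly chosen node fails."""
    shares = [rng.expovariate(1.0) for _ in range(num_nodes)]
    return rng.choice(shares) / sum(shares)

rng = random.Random(1)
for nodes in (1000, 100, 50, 10):
    impacts = [impact_of_one_failure(nodes, rng) for _ in range(5000)]
    print(f"{nodes:>5} nodes: mean loss {statistics.mean(impacts):.2%}, "
          f"worst observed {max(impacts):.2%}")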

Network Management
Monitoring the operation of systems and networks, extracting meaningful information and taking appropriate action to maintain a given grade of service is becoming increasingly complex and expensive. The level of complexity increases at least in proportion to the amount of data being handled, much of which is redundant or unusable. Consider, for example, the quantity of data generated by the failure of a switching node: the failed node generates a fault report while all related nodes also generate reports. For a fully interconnected network of N switching nodes, this results in one failure report plus error reports from the N-1 other nodes. If we allow for two or more nodes failing simultaneously, we can show that a network of 500,000 switching nodes, each with an MTBF of 10 years, will suffer an average of 137 node failures per day and will therefore generate an average of 68.5 million reports per day. This assumes that each node is communicating with all the others, which is of course somewhat extreme. However, even if we consider the opposite extreme, i.e. the least connected case, the reports still count in the millions.

Whilst there are certain network configurations and modes of operation that realise a fault report rate proportional to N, the nature of telecommunication networks to date tends to dictate ~N² growth. Indeed, a large national network with thousands of switching nodes can generate information at rates of ~2Gbyte/day under normal operating conditions. Maximising the MTBF and minimising N must clearly be key design objectives. A penalty associated with the N² growth that is generally hidden is the overhead due to computer hardware and software, plus transmission and monitoring equipment. For very large networks this is now growing to the point where it rivals the revenue earning elements - a trend which most definitely cannot be justified or sustained. The reliability improvements and operating simplifications that accompany the introduction of optical transparency will clearly have a beneficial impact on this issue.
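
The fully connected figures quoted above follow from a simple counting argument, reproduced in the short sketch below under the stated assumptions (full mesh reporting, failures spread evenly over the MTBF).

# Reproduces the fully connected fault-report figures quoted above.
N = 500_000          # switching nodes
MTBF_YEARS = 10
DAYS_PER_YEAR = 365

failures_per_day = N / (MTBF_YEARS * DAYS_PER_YEAR)       # each node fails once per MTBF
reports_per_failure = 1 + (N - 1)                          # own report plus N-1 peer reports
reports_per_day = failures_per_day * reports_per_failure   # grows roughly as N^2

print(f"node failures per day : {failures_per_day:.0f}")
print(f"fault reports per day : {reports_per_day / 1e6:.1f} million")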

The "Richter" Disaster Scale
A number of network failures on a scale not previously experienced have occurred in recent years, almost all directly attributable to software. Quantifying their impact is not only of interest but is in fact essential if future network and software design is to be conducted correctly. The key difficulty is the diversity of failure types, causes, mechanisms and customer impact. A simple ranking of failure by severity is obtained by adopting the Richter Scale for earthquakes. The total network capacity outage (loss of traffic) in customer affected time may be defined as:

Outage = N × T

where N = number of customer circuits affected

T = total down time

Fig 5 illustrates this scale and indicates the severity of recent "brown outs" that have actually occurred. It is clear that today's networks have an inherent, latent failure risk of major proportions that must at least be controlled, but ideally eradicated. Optical transparency makes an important contribution towards achieving this.
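
By way of illustration, the sketch below computes the outage product N × T for some invented example events and expresses it on a logarithmic magnitude scale; the logarithmic form is an assumption adopted here to mirror the earthquake analogy of Fig 5, and the event figures are not real data.

import math

# Illustrative outage figures on a "Richter-style" scale. The product N x T follows
# the definition above; expressing it as log10(N x T) is an assumption made here to
# mirror the earthquake analogy, and the example events are invented for illustration.
def outage_magnitude(circuits_affected: int, downtime_hours: float) -> float:
    customer_hours = circuits_affected * downtime_hours  # N x T
    return math.log10(customer_hours)

examples = [
    ("small exchange outage",   10_000,  2),
    ("major trunk failure",    500_000,  8),
    ("software 'brown out'", 5_000_000, 12),
]
for name, n, t in examples:
    print(f"{name:22s}: N x T = {n * t:>12,} customer-hours, magnitude {outage_magnitude(n, t):.1f}")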

Human Interdiction
It is important to recognise that network reliability is as much a people issue as it is an equipment and technology issue. For example, it is estimated that ~50% of the faults experienced by telcos today are in the local loop, around half of which are in some way attributable to their own craft people carrying out their everyday installation and maintenance activities. The figure for long lines, although lower, is nevertheless significant. The replacement of manual distribution points and wiring frames with software based routing on a PON represents a significant gain. When combined with gains arising from the eradication of corrosion, the overall saving can be as high as 40% of the operating total. Similar savings are also possible for repeater stations and switches. A network based solely on fibre with <100 switches would have a fault rate of only ~20% of that of today's network.

This figure assumes fibre to the home (FTTH). If, however, the fibre to the kerb (FTTK) option is adopted, with copper or radio drops to the customer, the above gains will not be realised and the overall running costs will remain high. FTTK also introduces a finite bandwidth limitation that prevents future capacity upgrades.

Staff Numbers
If we correlate the equipment reductions and reliability improvements discussed above with manning levels within telcos, we find that manning levels follow the simple trend shown in Fig 6. Here we have normalised the figures to a telco comprising 100,000 employees and serving 10M customers. With equipment reductions on the scale we envisage, combined with related changes (improvements in most cases) in reliability and network structure, we predict that ~30,000 employees would optimally serve a customer base of the size that most European telcos enjoy today.

Intelligence
It is clear that there is a rapid migration of intelligence towards the periphery of telecommunications networks. Obvious present day examples are desktop and laptop computers that are linked to others via the network to perform tasks required by customers. By the year 2015 we should see the first supercomputer with a processing power and information storage ability on a par with humans. Some 15 or 20 years later such a machine is likely to appear on our desk, or even on our wrist. Customers will wish to connect such machines to the network in exactly the same way as is done today - and the network will have to be able to handle this! The demands on this form of telecommunications will be stunning: clock rates in the Gbit range will be the norm, rendering ISDN line rates as useless as a Morse key. We can also expect to see optical computing based on analogue techniques arising during the same period, along with a mix of biological and electronic entities. Given these changes it is hard to envisage any service or utility that a telco can uniquely place within its network. As the cost of transmission and switching continues to fall, all manner of operations can be conducted on the periphery of the network by a multiplicity of operators who will have little concern or interest in how the bits got from A to B. The good news is that the traffic generated by such operations is likely to create a massive information flow, resulting in a tenable business. It is likely that telecommunications will go the same way as many commodities before it - we will have to sell 100x more at 1/100th of the present price to maintain turnover, while at the same time reducing operating costs to ensure profitability against tightening operating margins. It is hard to see how telcos will grow their business on the basis of presently perceived markets, but that is not of direct concern in this paper, as we are primarily interested in the impact of photonic transparency.

The Local Call
We can put a stamp on an envelope and it will go from New York to Boston or from New York to California for the same price, despite the fact that the process is labour intensive and that the cost of that labour is undoubtedly distance related. Why is it, therefore, that telephone calls over the same two distances are charged at different rates by an industry that is largely de-coupled from distance related costs? In the UK, experiments have already been conducted in which the whole network has been configured to bill customers at only local call rates during holiday periods. Not surprisingly, the response has been fantastic! Suddenly people who normally made, say, 2 calls of 5 minutes each on a Sunday afternoon were making upwards of 5 calls of half an hour each. A network that was reasonably quiet on a Sunday compared to a normal working day was suddenly close to overload throughout the period. This is real evidence of a tremendous latent demand for telecommunications that is linked inextricably to cost. There is also an existence theorem that says you don't have to charge for distance, you don't have to charge for time and you don't have to charge for use - it is called the internet! It would therefore seem inevitable that telecommunications will go in this general direction. If an impediment to this progress is encountered it will most definitely not be the technology, for it is abundantly clear that this goal can be reached in the very near future with technology that is already moving out of the laboratory and into manufacture. The impediments will be placed by governments, regulators and copper mind sets.

Closing Remarks
The Clinton/Gore initiative for a national super highway is being mirrored in Europe under the 4th Framework Programme and is, to some extent, already in place in the UK in the form of the SuperJANET network. Opening up high speed telecommunications in this way will cause the computer industry to release machines of a very different nature than hitherto. Perhaps we will at last see a welcome end to code compression that degrades visual and speech signals to the point where they are unusable. Perhaps we will be able to employ telepresence, teleconferencing, multimedia and virtual reality as a means of communication in a humanised framework that allows us to stop travelling, stop wasting the planet's resources and get on with the process of building a civilisation. Without transparent optical networks none of this will be possible, nor will the information society!

Bibliography
Cochrane, P., Heatley, D.J.T., et al., "Optical communications - future prospects", IEE Electronics & Communication Engineering Journal, Vol. 5, No. 4, August 1993, pp. 221-232.

Cochrane, P. and Heatley, D.J.T., "Reliability aspects of optical fibre systems and networks", IEEE International Reliability Physics Symposium, San Jose USA, 11-14th April 1994.

Cochrane, P. and Heatley, D.J.T., "Optical fibre systems and networks in the 21st century", Interlink 2000 Journal, February 1992, pp. 150-154.

Cochrane, P., Heatley, D.J.T. and Todd, C.J., "Towards the transparent optical network", 6th World Telecommunication Forum, Geneva, 7-15th October 1991.

Heatley, D.J.T. and Cochrane, P., "Future directions in long haul optical fibre transmission systems", 3rd IEE Conference on Telecommunications, Edinburgh, 17-20th March 1991, pp. 157-164.

Hill, A.M., "Network implications of optical amplifiers", Optical Fibre Communication Conference (OFC'92), San Jose USA, 2-7th February 1992, paper WF5, p. 118.

About The Authors
Peter Cochrane is a graduate of Trent Polytechnic and Essex University. He is currently a visiting professor at Essex, Southampton and Kent Universities. He joined BT Laboratories in 1973 and has worked on both analogue and digital switching and transmission studies. From 1988 he managed the Long Lines Division, where he was involved in the development of intensity modulated and coherent optical systems, photonic amplifiers and wavelength routed networks. In 1991 he established the Systems Research Division and during 1993 he was promoted to head the Research Department at BT Laboratories with 620 staff. He is also the Development Board member for technology and has published widely in all areas of telecommunications studies.

Roger Heckingbottom received his first degree and doctorate from Oxford University. He is currently a visiting professor at the University of Wales, Cardiff. He joined BT Laboratories in 1966, initially involved in the study of semiconductor surfaces. Through the 1970s his research broadened to cover the growth and characterisation of semiconductors for optical communications. In 1980 he was appointed head of the Materials Division, adding optical fibres and component reliability to his earlier responsibilities. Since 1990 he has been concerned with the application of optics in communications networks. He currently manages the Networks Research Division with 145 staff. He has published widely and served on several national committees responsible for university research in the UK.

David Heatley obtained his doctorate in Optical Communications Systems from the University of Essex in 1989. He joined BT Laboratories in 1978 to work on the development of analogue and digital optical fibre systems designed specifically for video services. In 1985 he was appointed head of a group responsible for the development of optical receivers for terrestrial and undersea applications. In this capacity he was a member of the team that received the Queen's Award for Technology in 1990. He is now with the Systems Research Division with special responsibility for future studies and telemedicine. He is a Member of the IEE and a Chartered Engineer.

All materials created by Peter Cochrane and presented within this site are copyright © Peter Cochrane - but this is an open resource - and you are invited to make as many downloads as you wish provided you use them in a reputable manner.