Note for Telecommunication Network and Optimization - TNO by Abhishek Apoorv


2 Network Design

“Form Follows Function.”¹

Within the first part of this chapter, we will examine the network structure of large communication networks. Starting with a brief review of the history of the Internet, we will highlight those basic principles that still have a significant influence on the architecture of current backbone networks, and we will illustrate them in a short overview of such a (possible) architecture, resulting in the basic architecture used in this thesis. Subsequently, we will discuss how other networks or users are connected to these backbone networks. On this basis, we will show how different solutions to a similar problem can be assessed, thereby discussing various cost models and other important criteria for “good” network design. In the last section we will give an outline of network planning as an important step of the network design process.

2.1 Network Architectures

2.1.1 Historical Background

The Internet has undergone tremendous changes from its very modest beginnings in 1969 (when it was a two-node network) to its enormous size today. Nevertheless, the last technological change requiring an update for every user was the switch to IPv4 [Def81] on January 1st, 1983, more than 25 years ago. Considering that to the International Organization for Standardization (ISO) the Internet of the late 1970s appeared to be a mere “academic toy network” [HL96, p. 247], this has to be regarded as quite an achievement. What made this impressive technological longevity possible?

In order to answer this question, let us take a look at the environment in which the Internet was developed. Even in the simplest network imaginable (consisting of the two hosts mentioned above), two different computers were involved, and with the growth of the subsequent years this scenario became more and more complex.² Hence, we can safely state that a certain degree of hardware independence was absolutely necessary.

¹ Louis Sullivan (1856-1924) in “The Tall Office Building Artistically Considered”, Lippincott’s Magazine, March 1896.
² By the end of the 1970s, some of the most prominent computer systems in academia were Digital’s PDP-10 running TOPS-10, TOPS-20 or ITS, the PDP-11 running UNIX or VMS, the IBM S/370 running OS/MVT, and many, many more. All of these machines had significant differences in both software and hardware.


Another important point is that the experimental network was also the working network: there was simply no dedicated test network at that time. Although this might appear to be a mere inconvenience, it enforced modularity: it was hard to require large portions of software to be changed in order to add or improve some functionality, because such a change would have affected many users. Furthermore, a mail protocol that took down the entire node in case of an error, for example, would not have gained much popularity either.

The last point already hints at the final observation we want to highlight: implementation. In order to gain acceptance, protocols had to have a showcase or a proof of concept. This was a large departure from the approach taken by the ISO and other standardization bodies: when they presented the Open Systems Interconnection (OSI) model, it was little more than a design, and the first full implementation known to the author was demonstrated only in 1987 with DECnet Phase V.

Modularity and hardware independence are usually considered good practice in software engineering; however, as we tried to illustrate above, there might have been less intention behind this design than pressure from the environment. To summarize, the Internet protocol suite offered a practical solution to a problem (connecting computers) and was sufficiently well designed that there was never enough pressure to restart from scratch, despite its more and more evident shortcomings. We should further note that although the ISO took quite a while to design the OSI model, their model is far from flawless and exposed some of the errors (political decisions instead of technical ones³) that caused severe trouble for Asynchronous Transfer Mode (ATM) some years later. Nevertheless, the OSI model is considerably more general than the model behind TCP/IP and is thus (although sometimes modified, as in [Tan03]) still in widespread use.

One of the more obvious aspects we can easily discover in today’s Internet (and many other communication networks) is the notion of layers. In this showcase of modularity, one layer offers a set of capabilities (sometimes also called a “service”) to the next layer above. With this simple mechanism, changes in one layer will only affect the next layer above. Naturally, this layering comes at the usual price of abstraction: more overhead (caused by additional headers, etc.) in every layer and a certain loss of information from layer to layer. Since one layer will not pass all the information it has to the layer above (otherwise we would not gain any modularity), we lose possibly useful information (for example, IP cannot directly provide any information about the physical channel quality).

In the remainder of this thesis we will use the term layer not in its strict ISO/OSI sense, but in a more general meaning that will probably be self-explanatory after the next section. The rule of thumb for this use is that every protocol which has its own hardware platform is considered a layer: for example, IP is a layer, whereas TCP is not (since we have IP routers).

³ For example, Tanenbaum [Tan03] attributes the seven layers in the OSI model to politics rather than to technical considerations. Otherwise, two overfull and two almost empty layers would be hard to explain.
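The price of abstraction mentioned above can be made concrete with a small sketch. The following Python snippet is our own illustration and not part of the original text; the layer names and header sizes are simplified assumptions. It wraps a payload in one dummy header per layer and reports the resulting overhead, mirroring the fact that each layer only adds and sees its own header:

```python
# Illustrative sketch: per-layer encapsulation and its overhead.
# Layer names and header sizes are assumptions, not taken from the thesis.

LAYERS = [            # from the top of a simplified stack downwards
    ("TCP", 20),      # transport header, bytes
    ("IP", 20),       # network header, bytes
    ("Ethernet", 18), # link-layer header plus trailer, bytes
]

def encapsulate(payload: bytes) -> bytes:
    """Wrap the payload with one dummy header per layer, top to bottom."""
    frame = payload
    for name, hdr_len in LAYERS:
        # Each layer adds only its own header; it neither inspects nor
        # exposes the headers of the layers above it (information hiding).
        header = name.encode().ljust(hdr_len, b"\x00")
        frame = header + frame
    return frame

if __name__ == "__main__":
    payload = b"x" * 512                      # application data
    frame = encapsulate(payload)
    overhead = len(frame) - len(payload)
    print(f"payload: {len(payload)} B, on the wire: {len(frame)} B, "
          f"overhead: {overhead} B ({overhead / len(frame):.1%})")
```

With these assumed header sizes, 512 bytes of application data cost 58 bytes of headers, roughly ten percent of the resulting frame; the proportion grows quickly for small payloads.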


2.1.2 Vertical Layers

This notion of layering is still as prominent in networking as it was some 25 years ago. Modern backbone networks are no longer built exclusively for one application; they have to transport a number of different services and thereby serve many different kinds of customers at the same time. An example vertical layer structure of a backbone (which is part of our network architecture) is depicted in Figure 2.1. Starting from below, we can see the following layers:

Figure 2.1: Example vertical layer structure (from top to bottom: IP/MPLS, SDH/SONET, DWDM)

DWDM: At the very bottom we have an optical DWDM network. DWDM networks form the foundation of today’s backbone networks, offering huge data rates at viable costs. Modern transmission systems can multiplex up to 160 channels with data rates of up to 40 Gbit/s each over distances of up to 3000 km. It is worth noting, however, that the boundaries between “typical” metro/regional and backbone equipment are becoming more and more fuzzy and might merge into one “super-platform” in the future [CS07]. A typical transmission system is shown in Figure 2.2.

Figure 2.2: DWDM transmission system (TX, Mux, booster, in-line amplifier, preamplifier, Demux, RX)

A client signal, which is usually referred to as a grey signal, arrives at the DWDM transponder TX. Typical client interfaces are (Carrier-Grade) Ethernet or SDH/SONET. The transponder “translates” this signal into a DWDM signal with a fixed bandwidth (typically 2.5 Gbit/s, 10 Gbit/s or 40 Gbit/s) and transmits it on one of the eligible wavelengths (which we can also interpret as the colour of the signal) to the multiplexer.
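To put the quoted figures into perspective, the following Python sketch is our own illustration; the class and parameter names are hypothetical and not taken from the thesis. It models a DWDM fibre as a set of wavelength channels onto which grey client signals are mapped by transponders, and computes the aggregate capacity of a fully equipped 160-channel, 40 Gbit/s system:

```python
# Illustrative model of a DWDM link: wavelength channels carrying client signals.
from dataclasses import dataclass, field

@dataclass
class DwdmLink:
    max_channels: int = 160                    # channel count quoted in the text
    channel_rate_gbps: float = 40.0            # per-wavelength line rate
    used: dict = field(default_factory=dict)   # wavelength index -> client name

    def add_client(self, client: str) -> int:
        """Assign the next free wavelength to a grey client signal."""
        for wl in range(self.max_channels):
            if wl not in self.used:
                self.used[wl] = client
                return wl
        raise RuntimeError("no free wavelength left on this fibre")

    def aggregate_capacity_gbps(self) -> float:
        return self.max_channels * self.channel_rate_gbps

link = DwdmLink()
wl = link.add_client("SDH/SONET client A")
print(f"client A mapped to wavelength #{wl}")
print(f"aggregate capacity: {link.aggregate_capacity_gbps() / 1000:.1f} Tbit/s")
```

Fully equipped, such a system carries 160 × 40 Gbit/s = 6.4 Tbit/s on a single fibre, which explains why the DWDM layer is the natural foundation of a backbone.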


The multiplexer Mux can be seen as an “inverse prism”: it multiplexes all the coloured signals from the transponders on the client side onto one fibre on the trunk side. Each of the different wavelengths may be used with a different data rate, which is, considering the large price differences between the necessary transponders, an important cost factor. As a matter of fact, conversion from the optical to the electrical domain or vice versa, be it for light-path termination or regeneration, is one of the most important influences on the resulting equipment costs of a network. Besides differing in data rate and range, transponders can be tunable or limited to one fixed wavelength. While the former offers greater flexibility and greatly simplifies the keeping of spare parts, the latter can be significantly cheaper.

On the way to the opposite transmission system, the signal has to be amplified at regular intervals due to signal degradation. Since amplifiers cannot differentiate between signal and noise, they amplify both, which imposes an upper limit on the transmission length. This limit in turn depends on the signal quality generated by the transponder in the source node. If it is reached, the signal has to go through so-called 3R regeneration (reamplifying, reshaping, retiming), which is usually performed by two transponders in a back-to-back configuration. The costs of this full O/E/O conversion (optical/electrical/optical) are quite notable, which leads to ongoing research into all-optical 3R regeneration [TBFC+04, SST07]. The termination of the light-path is symmetric to its source: a demultiplexer Demux separates the different wavelengths arriving on the fibre, and a transponder RX translates the trunk signal back into a client signal.

Only customers requiring very high data rates and having additional requirements (privacy, special protocols, etc.) would buy a DWDM connection (i.e. a wavelength) directly. We will examine heuristic planning methods for DWDM networks with increasing cross-connection capabilities in Section 4.1, the influence of new, adaptive transmission methods in Section 4.2, and failure localization in transparent or translucent DWDM networks in Section 5.4.

Before we move on to the node architectures of the next layer, let us first inspect the topology from the perspective of the next layer, which we illustrate in Figure 2.3. We can see two paths on the DWDM layer: a transparent light-path between nodes A_DWDM and D_DWDM, and an opaque connection between D_DWDM and E_DWDM. The transparent light-path is one single wavelength which passes transparently through the intermediate nodes, i.e. these nodes do not alter the signal in any way. The consequence of this configuration is that in the next layer, node A_SDH appears to be directly connected to D_SDH, just as D_SDH and E_SDH are. This hiding of information about the underlying topology is typical for layered networks. In this work, we will address the cost-efficient construction of such DWDM layers in Chapter 4 and will examine some of the consequences for failure localization in Section 5.4.
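The reach limitation described above can be illustrated with a small placement sketch. The following Python snippet is a simplified model of our own; the maximum transparent reach and the span lengths are assumptions, and real planning additionally has to account for amplifier noise, dispersion and the signal quality of the source transponder. It walks along the spans of a light-path and marks the nodes at which a back-to-back 3R regenerator would be required:

```python
# Illustrative sketch: place 3R regenerators along a light-path whenever the
# accumulated transparent distance would exceed the transponder's reach.
from typing import List

def regeneration_sites(span_km: List[float], max_reach_km: float) -> List[int]:
    """Return the indices of the spans in front of which an O/E/O 3R
    regenerator (two back-to-back transponders) has to be placed."""
    sites = []
    travelled = 0.0
    for i, span in enumerate(span_km):
        if span > max_reach_km:
            raise ValueError(f"span {i} alone exceeds the transparent reach")
        if travelled + span > max_reach_km:
            sites.append(i)    # regenerate at the node in front of this span
            travelled = 0.0    # after 3R the signal starts out "fresh" again
        travelled += span
    return sites

# Assumed example: five spans and a transparent reach of 1500 km.
spans = [600.0, 500.0, 550.0, 400.0, 700.0]
print(regeneration_sites(spans, max_reach_km=1500.0))  # -> [2, 4]
```

Every entry of the result corresponds to one full O/E/O conversion and therefore to two additional transponders, which is exactly the cost factor the text identifies as one of the most important in network equipment.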

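The hiding of the underlying topology described at the end of this section can also be expressed in a few lines. In the sketch below, again our own illustration, the intermediate nodes B and C of the transparent light-path are assumed for the example, since Figure 2.3 itself is not reproduced here. Each light-path of the DWDM layer contributes exactly one edge to the logical topology seen by the SDH/SONET layer, regardless of its physical route:

```python
# Illustrative sketch: light-paths in the DWDM layer become direct links in the
# layer above; the physical route through intermediate nodes is hidden.

# Each light-path: (source, destination, assumed physical route in the DWDM layer).
light_paths = [
    ("A", "D", ["A", "B", "C", "D"]),  # transparent light-path
    ("D", "E", ["D", "E"]),            # opaque connection
]

# The SDH/SONET layer sees exactly one logical edge per light-path.
logical_topology = {(src, dst) for src, dst, _route in light_paths}

print(sorted(logical_topology))
# -> [('A', 'D'), ('D', 'E')]: A_SDH appears directly connected to D_SDH,
#    although the signal physically passes through B and C.
```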