Traffic Management and Quality of Service in ATM Networks


Stefan Marr

Computer Networks Seminar 2006

Hasso Plattner Institute for Software System Technology

stefan.marr at hpi.uni-potsdam.de

Abstract

The mechanisms for traffic management and quality of service assurance defined in the ATM specification are presented. For this purpose, the ATM service categories are explained in more detail and, based on them, the quality of service parameters relevant for ATM are introduced. On this basis, the function of the traffic contract and then the advanced methods for managing network traffic are presented. The possibilities of the network to explicitly inform the sender about the available bandwidth, as well as ATM's support for frames, are also described. Finally, the conceptual relationship between the individual mechanisms in a router is illustrated and the traffic management capabilities of ATM and TCP/IP are compared.

Keywords: ATM, traffic management, quality of service, networks, cell rate control

With the Asynchronous Transfer Mode (ATM), a transmission method for data of all kinds has been available since 1989 that offers various properties with which it can compete even with the very widespread Ethernet of today. However, due to its historical development and the costs associated with ATM hardware, this method is today used almost exclusively in large Internet backbones.

In contrast to, e.g., Ethernet, cells of a fixed length (53 bytes) are used in an ATM network, and the fundamental technical differences are based on this. A very prominent feature of ATM are its capabilities in the area of traffic management and quality of service assurance, which make this technology very interesting even today for quality-critical applications, such as the professional transmission of losslessly compressed video data in real time, but also for the already mentioned operation of backbones.

In order to ensure the optimal utilization and efficient operation of such a high-speed network, suitable procedures and measured variables are required for managing and monitoring the traffic in this network. The activities undertaken for this purpose are commonly summarized under the term traffic management. More precisely, traffic management is the process of monitoring and analyzing traffic in a network; it also includes the procedures for responding to changes in traffic or environmental conditions and for ensuring optimal network performance. This topic goes hand in hand with the aspect of Quality of Service (QoS), which stands for the compliance with, or guarantee of, certain parameters when transmitting data over a network.

This paper presents the capabilities and procedures that ATM provides for implementing traffic management and quality of service assurance in a high-speed network.

Since ATM was developed especially for use in broadband networks, over which any kind of data, audio or video material was intended to be transmitted, particular attention was paid to this problem. The aim was to use the capacities as efficiently as possible while still being able to guarantee all the agreed quality features.

An overview of ATM in general and its technical features is given in [8]. This paper deals exclusively with the mechanisms for traffic management and quality of service control available in ATM networks. To this end, the following sections deal with the variables that can be measured in ATM networks and the mechanisms specified for these networks in [1].

First, the ATM service categories are introduced and their respective areas of use are named, in order to then introduce the variables generally relevant for the categories and for characterizing data traffic. Section 4 takes a closer look at the ATM traffic contract, as the agreements made in it for a connection form the basis for the traffic management mechanisms described in the fifth section. The main focus there is on cell rate control for available bit rate services, the procedures for the guaranteed frame rate, and the representation of the relationship between the individual mechanisms.

In order to highlight the special features of ATM compared to the traffic management capabilities of the very widespread TCP/IP, these are compared in section 6. This is followed by a short summary and a list of abbreviations in the appendix.

ATM networks are designed to be able to transport various types of data streams and to provide them with the required quality of service. In order to be able to meet the various demands that different types of data streams have on network parameters such as cell loss and delay times, six different categories of data streams have been defined for ATM according to [4].

In the field of real-time services, a distinction is made between Constant Bit Rate (CBR) and Real-Time Variable Bit Rate (rt-VBR). CBR is used for the transmission of data streams with fixed data rates, such as occur with uncompressed video or audio data. For such applications, low transmission delays (Cell Transfer Delay, CTD) and a low delay variance (Cell Delay Variation, CDV) are important.

rt-VBR is intended for, e.g., compressed audio or video data streams. Here, too, low transmission delays and a low delay variance are decisive. In contrast to CBR, however, the data rate is variable over time.

The larger share of data streams does not place such high demands on CTD and CDV and has more of a burst character: the data is delivered to the network in bursts. Similar to rt-VBR there is Non-Real-Time Variable Bit Rate (nrt-VBR). Services with high response time requirements, e.g. bank transactions or email, fall into this category. The requirements include a low Cell Loss Ratio (CLR) and a low CTD.

Available Bit Rate (ABR) is a category for standard traffic such as file transfers. No requirements are placed on CLR and CTD here. A minimum cell rate can be guaranteed for these services, and they also have the option of using free capacities.

Fig. 1 ATM service categories

Relatively new is Guaranteed Frame Rate (GFR), intended to support IP backbones. The aim here is to be able to transport frame-based data streams optimally. These can, for example, arrive at an ATM router from a LAN. To optimize performance, the packet or frame boundaries are observed and taken into account, e.g. for countermeasures in the event of network overload. As with ABR, a minimum transfer rate is guaranteed and free capacities can also be used.

For the remaining capacities there is still the category of Unspecified Bit Rate (UBR). These services are also designated best-effort services. Cell losses and delays do not play a major role in this area. This category is used for data transfers such as news feeds, file transfers or message transmission. No requirements are placed on the network here, and mechanisms on higher layers are mostly used to ensure data integrity.

With the optional extension of UBR in [2] it is possible to describe the characteristics of a UBR traffic flow in a more differentiated manner. To do this, the flow is assigned a so-called Behavior Class. Using this information, the network should have the option of handling the granting of quality of service parameters on UBR connections in a more differentiated manner. However, the details depend on the implementation and are not specified. With [3] it is also optionally possible to specify a minimum desired cell rate for UBR services, but this is only used by the network for optimization purposes and is not guaranteed.

Fig. 1 illustrates the behavior of the individual service categories relative to one another. In the case of the CBR services, the constant utilization of capacity over time can clearly be seen. With VBR, on the other hand, this utilization varies over time. With ABR and GFR an optional Minimum Cell Rate (MCR) can be recognized as a dashed line, and with UBR it can be clearly seen that it is a best-effort category and thus only excess capacities can be exhausted.

The service categories presented in the previous section demand very different properties and values from the network, which must be ensured.

In this section, the parameters specified for ATM according to [4,5,7] are presented, with which the quality of a service can be measured and, on the basis of this information, the desired parameters can be ensured.

The Cell Transfer Delay (CTD) is the delay in the transmission of a cell, measured from the transmission of the first bit to the reception of the last bit. The design of ATM means that the delays introduced by the network itself are negligible, since the processing and transmission delays are minimal; most of the delay is caused by network node overload. As a quality of service parameter, the maxCTD sets the desired upper limit for this delay. Cells that fail to comply with the maxCTD should either be discarded or delivered as delayed.

The Cell Delay Variation (CDV) is the amount of variation in the delay between the arrival of two consecutive cells on a given connection. This is based on the use of cells of a fixed size and the associated time slots in which individual cells are sent. Sending additional OAM cells, for example, causes delay fluctuations in the user data stream. The peak-to-peak CDV, which is important for quality of service assurance, is the variation in the delay of those cells whose delay is smaller than the maxCTD.

Independent of this, there is also the Cell Delay Variation Tolerance (CDVT). It is usually defined directly at the User Network Interface (UNI), and the traffic generated by the user must stay within this CDVT range in order to benefit from the QoS guarantees.

The Cell Loss Ratio (CLR) is the ratio of lost to transmitted cells in a certain interval.

The upper limit of the cell transmission rate is determined with the help of the Peak Cell Rate (PCR). PCR is defined as PCR = 1/T, with T being the minimum spacing between two cells.
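As a purely numerical illustration with assumed values (a minimal sketch, not taken from the specification):

    # Hypothetical example: a minimum cell spacing of T = 10 microseconds.
    T = 10e-6                  # minimum spacing between two cells, in seconds
    pcr = 1 / T                # Peak Cell Rate: 100,000 cells per second
    line_rate = pcr * 53 * 8   # 53-byte cells -> about 42.4 Mbit/s on the line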

Similar to this is the Sustainable Cell Rate (SCR), which designates the upper limit of the average cell rate of an ATM connection. This information is required for VBR services, for example, in order to enable an efficient division of the available resources between several VBR services without having to reserve the PCR and thus waste bandwidth. To be useful, the SCR should therefore be smaller than the PCR.

Another important parameter for characterizing data streams is the Maximum Burst Size (MBS). This stands for the maximum number of cells that can be transmitted back-to-back at the PCR. With the Minimum Cell Rate (MCR), the minimum cell rate to be guaranteed is specified for ABR and GFR. For GFR, a Maximum Frame Size (MFS) is additionally specified; the MFS is the maximum number of cells a frame may comprise.
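As an illustration only, the parameters introduced in this section can be collected in a single structure; the field names and the choice of optional fields below are assumptions, since which parameters actually apply depends on the service category:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TrafficParameters:
        """Illustrative grouping of ATM traffic and QoS parameters."""
        pcr: float                        # Peak Cell Rate, cells/s (= 1/T)
        cdvt: float                       # Cell Delay Variation Tolerance, s
        scr: Optional[float] = None       # Sustainable Cell Rate (VBR)
        mbs: Optional[int] = None         # Maximum Burst Size, in cells (VBR)
        mcr: Optional[float] = None       # Minimum Cell Rate (ABR, GFR)
        mfs: Optional[int] = None         # Maximum Frame Size, in cells (GFR)
        max_ctd: Optional[float] = None   # upper bound on Cell Transfer Delay
        clr: Optional[float] = None       # target Cell Loss Ratio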

In the following section, the values defined here are used to make an agreement between the user and the network about quality of service parameters.

In ATM networks there is a so-called traffic contract for every connection. With the help of the traffic contract, the characteristics of the data traffic over a connection are described and at the same time the quality of service requirements for the network are specified. With this mechanism it is possible to communicate the parameters required for a service directly to the network. With the data contained in it, the network can determine the optimal settings for the connection to which this contract belongs, as well as for all other connections in compliance with their contracts.

In addition, this data can be used to decide in advance whether the capacity of the network still allows the requested connection, or whether it would impair the properties already guaranteed to existing connections. In ATM, the actions for making this decision are designated Connection Admission Control (CAC); they enable preventive traffic management based on the traffic contract.
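The actual admission algorithm is implementation-dependent and not prescribed by [1]; as a minimal sketch, a node could in the simplest case admit a new CBR connection only if the peak cell rates of all admitted connections still fit on the link (real implementations typically use statistical multiplexing models instead):

    def admit_cbr(requested_pcr, admitted_pcrs, link_capacity_cells_per_s):
        """Naive peak-rate CAC sketch: admit the new connection only if the
        sum of all peak cell rates stays within the link capacity."""
        return sum(admitted_pcrs) + requested_pcr <= link_capacity_cells_per_s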

The data of the traffic contract are specified in the so-called connection traffic descriptor. This consists of the source traffic descriptor, which contains the appropriate parameters depending on the service category of the connection; these include PCR, SCR, MBS, MCR and MFS. The connection traffic descriptor also includes the specification of the CDVT and a conformance definition.

Every cell that comes through a UNI is examined against the conformance definition using an implementation-dependent algorithm. The mechanism for monitoring the cells itself is called Usage Parameter Control (UPC). Even in ideal situations there are cells that do not meet the agreed conditions; however, the network is only obliged to ensure the selected quality of service for conforming cells. Conformance is checked with the Generic Cell Rate Algorithm (GCRA). The GCRA ultimately determines the cell rate for each individual cell and checks whether it is still within the tolerance. A detailed description of the algorithm can be found in [4], p. 372f.
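The following is a minimal sketch of the virtual-scheduling formulation of the GCRA(I, L); the increment I corresponds to the expected cell spacing (e.g. 1/PCR) and the limit L to the tolerance (e.g. the CDVT). It illustrates the principle and is not the normative text of [1]:

    class GCRA:
        """Virtual-scheduling form of GCRA(I, L)."""
        def __init__(self, increment, limit):
            self.increment = increment   # expected spacing between cells, e.g. 1/PCR
            self.limit = limit           # tolerance, e.g. CDVT
            self.tat = 0.0               # theoretical arrival time of the next cell

        def conforms(self, arrival_time):
            if arrival_time < self.tat - self.limit:
                return False             # cell arrived too early: non-conforming
            # conforming cell: advance the theoretical arrival time
            self.tat = max(arrival_time, self.tat) + self.increment
            return True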

It is now important for the network to meet the guaranteed parameters for the conforming cells in the best possible way. At this point, however, it should be noted that QoS commitments are inherently probabilistic: the parameters can only be met approximately by the network. This is due, on the one hand, to the fact that the accuracy with which the parameters are specified is significantly higher than the accuracy with which they can be measured and, on the other hand, to the fact that the quality of service varies over time due to the randomness of the data traffic itself. Exact adherence to the parameters can only be determined over longer periods of time and, depending on the service category, partially only for categories of connections rather than for individual connections over an ATM network.

The most obvious mechanisms for ensuring the quality of service are already in place with CAC and UPC. This check protects the network from excessive overload within certain limits, since connections are only permitted if the necessary resources are available. In addition, a mechanism for resource management with the aid of virtual paths is specified in ATM, which is to be described below.

Since, however, not all service categories can be described, e.g. with an upper bandwidth limit, reactive mechanisms are still required in order to recognize occurring overload situations and to minimize the effects. The corresponding reactive procedures provided by ATM are also presented in this section.

5.1. Resource management with virtual paths

The use of Virtual Path Connections (VPCs) enables more efficient resource management. In this way, the implementation of CAC can be simplified by reserving bandwidth for a VPC, which can then be distributed among individual Virtual Channel Connections (VCCs), as indicated in Fig. 2. Here the VCCs are the dashed lines inside the VPCs.

This has the advantage that, during the CAC check, whether the quality of service can be ensured only needs to be verified at the network nodes at which the VPCs used actually end, and not, as usual, at each individual node. In the figure, for example, the VPC b ends at the VC switch and the VCCs contained in it are continued via the VPC c. No check has to be made at the VP switches, only at the point where the path ends. This bandwidth reservation naturally has the disadvantage that the bandwidth may not be fully used. Another advantage is the ability to prioritize groups of connections if they are sorted according to service categories.

With the help of VPCs it is also possible to increase the efficiency of traffic management messages, since, e.g., only one overload message has to be transmitted for the entire path and not one for each VCC. The virtual paths appear to the network as a whole like normal connections. Among other things, the service category and the desired quality of service must be specified for them. This information should then also be determined depending on the VCCs that are to be routed through the VPC in order to ensure the quality of service for the VCCs.

Fig. 2 Virtual paths and virtual channels [4]

5.2. Selective cell discard

One of the first consequences of congestion in a network is that data is lost due to overflowing buffers. For this reason, selective cell discard is implemented. This makes it possible to give the network information about how important a cell is, or to use this information to influence the decision as to which cells may be discarded first in the event of overload.

To differentiate between cells, the Cell Loss Priority bit (CLP) in the cell header indicates a cell's importance for the application. Labeling with CLP = 1 indicates that these cells are less important and should be discarded first. The cells are marked as less important either by the application itself or by the UPC mechanism if they do not meet the agreed conditions. Cells with CLP = 0 are given preferential treatment and are considered more important. These are mostly control information or cells that have qualified for quality of service assurance.
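A minimal sketch of how such a discard decision might look at an output buffer; the single queue and the fixed threshold are assumptions for illustration, not part of the specification:

    def accept_cell(queue, cell, clp, capacity, clp1_threshold):
        """Selective cell discard sketch: once the queue exceeds the threshold,
        CLP=1 cells are dropped first; CLP=0 cells are only dropped when the
        buffer is completely full."""
        if len(queue) >= capacity:
            return False                               # buffer full: drop even CLP=0 cells
        if clp == 1 and len(queue) >= clp1_threshold:
            return False                               # congestion: drop the low-priority cell
        queue.append(cell)                             # accept the cell into the buffer
        return True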

5.3. Traffic shaping

If there is not yet an overload, but the traffic characteristics do not optimally match the given resources, as occurs particularly with very bursty traffic flows, traffic shaping can be used to smooth traffic flows and reduce cell clumping. This can also lead to a reduction in the average CTD and a fairer distribution of resources.

According to the specification [1], the use of traffic shaping is left to the ATM implementation. According to [4], a token bucket algorithm can be used, for example.
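A minimal token bucket sketch in the spirit of [4]; the interface and parameters are illustrative. Tokens accrue at the sustained rate up to a burst capacity, and a cell may be forwarded only when a token is available, which smooths bursts:

    class TokenBucketShaper:
        """Token bucket sketch: `rate` tokens per second, at most `capacity`
        tokens saved up; one token is consumed per forwarded cell."""
        def __init__(self, rate, capacity):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.last = 0.0

        def may_forward(self, arrival_time):
            # replenish tokens for the time elapsed since the last cell
            elapsed = arrival_time - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.last = arrival_time
            if self.tokens >= 1:
                self.tokens -= 1
                return True    # cell may be sent immediately
            return False       # cell must be delayed (queued) by the shaper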

5.4. Explicit Forward Congestion Indication (EFCI)

With EFCI it is possible, in situations in which a network node is overloaded, to inform the recipient of the cells that there has been an overload on the path taken by the data. For this purpose, according to [4], the first two bits of the Payload Type field in the cell header are set to the value 01. If an intermediate node receives a cell marked in this way, it is no longer allowed to change this value. Since according to the specification [1] this procedure is optional for CBR, VBR, GFR and UBR, it should not be relied on. How EFCI works in conjunction with ABR is explained in more detail in the next section.

5.5. ABR cell rate control with resource management cells

For the service categories CBR and VBR, the traffic contract and the UPC mechanism are the basis for compliance with the agreed quality of service parameters. In this method, only a definition of the traffic characteristics and the marking of non-conforming cells are used, but no feedback about the utilization of the network. This approach is therefore designated open-loop.

However, this approach cannot be used if the traffic, as is the case with UBR and ABR, can at most be specified via a PCR. In the case of UBR, the best-effort approach is used and the situation in the network is derived from the cell losses. This procedure is comparable to the overload mechanisms of TCP.

With a closed-loop, a feedback-based approach, ABR goes a step further at this point and uses information provided directly by the network to dynamically adapt the transmission rate of a service. This makes it possible to ensure a fairer distribution of resources and low cell loss rates.

In order for ABR to achieve the goal of making optimal use of the available bandwidth without affecting other service categories, it uses direct feedback from the network. Given the high transmission rates that are possible with ATM, this approach leads to the problem that the network responses may take so long that the transmitted cells overload the network before a response is received at the transmitter. This means that the network nodes need correspondingly large buffers in order to keep the cell loss rates low. For the applications, this means that they have to be tolerant of unexpected cell delays and adjustments to their transmission rate.
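The magnitude of the problem can be illustrated with assumed numbers (the link rate and round-trip time below are hypothetical):

    # Cells that can arrive during one feedback round trip and may need buffering.
    link_rate = 155.52e6                            # bit/s, e.g. an STM-1/OC-3 link (assumed)
    rtt = 0.010                                     # 10 ms round-trip time (assumed)
    cell_bits = 53 * 8                              # 424 bits per ATM cell
    cells_in_flight = link_rate * rtt / cell_bits   # roughly 3,700 cells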

Fig. 3 ATM Resource Management cells [4]

To implement this, ATM uses so-called Resource Management (RM) cells. A distinction is made between Forward Resource Management cells (FRM) and Backward Resource Management cells (BRM).

FRM cells are sent directly from the sender and travel via the intermediate nodes to the recipient, where they are returned marked as BRM cells. This process is shown in Figure 3.

The actual procedure works in such a way that the sender maintains an Allowed Cell Rate (ACR) and adjusts it dynamically. The adjustments are made on the basis of the received RM cells, in which various pieces of information are encapsulated. With the Congestion Indication bit (CI), detected overload is signaled, whereupon the transmitter should reduce its data rate. With the No Increase bit (NI), the sender is requested not to increase the data rate any further, and via the Explicit Cell Rate field (ER) the network can even specify how high the transmission rate of the sender may be; in doing so, the sender never has to fall below the agreed Minimum Cell Rate (MCR). The details can be found in [4], p. 381 and in [1], section 5.10.
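A simplified sketch of how a source might react to a received BRM cell; it loosely follows the source behavior described in [1] and [4], with RIF and RDF as the rate increase and decrease factors agreed for the connection (the exact rules in [1] contain further cases):

    from dataclasses import dataclass

    @dataclass
    class BRMCell:
        ci: bool      # Congestion Indication bit
        ni: bool      # No Increase bit
        er: float     # Explicit Cell Rate suggested by the network

    def update_acr(acr, brm, pcr, mcr, rif, rdf):
        """Adjust the Allowed Cell Rate (ACR) on receipt of a BRM cell."""
        if brm.ci:                     # congestion reported: multiplicative decrease
            acr -= acr * rdf
        elif not brm.ni:               # no congestion, increase allowed: additive increase
            acr += rif * pcr
        acr = min(acr, brm.er, pcr)    # never exceed the explicit rate or the PCR
        return max(acr, mcr)           # never fall below the guaranteed MCR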

The RM cells that are used for this mechanism are themselves normal ATM cells that contain data about the current connection as a payload, for example the ER, Current Cell Rate, Minimum cell rate and other bits of information, such as those already mentioned.

Most of the RM cells are sent by the data transmitter itself: it inserts one FRM cell after every Nrm - 1 data cells, where normally Nrm = 32 applies. The sender sets the CI bit to 0. In addition, these cells are usually sent with the No Increase bit cleared and with the ER set to the desired cell rate. On the way to the recipient, this information can be adjusted by intermediate nodes. If an overload occurs at an intermediate node, it can indicate this on the FRM cell by setting CI or NI and, e.g., adapting the ER; it then sends the FRM on to the recipient. Since it could sometimes take too long for the information to reach the sender, it is also possible for intermediate nodes to create BRM cells of their own accord and send them directly to the sender. The BRM cells created by the intermediate node contain the same information, but do not have to cover the entire route to the receiver and back to the transmitter before the latter can react, so above all time is saved at this point. According to [6], this mechanism is called Backward Explicit Congestion Notification.

When an FRM cell reaches the recipient, it normally sends it back unchanged but marked as a BRM cell. However, if the recipient has previously received data cells marked with the EFCI bit, it also sets the CI bit on the RM cell. On the way back, it is also possible for the intermediate nodes to change the BRM cells in order to indicate overload and thus prompt the transmitter to adapt its transmission rate relatively quickly.

5.6. GFR traffic management

For the Guaranteed Frame Rate services there are also some additional traffic management mechanisms. Overall, however, GFR is kept almost as simple as UBR and, depending on the network configuration, does not offer any traffic policing or traffic shaping mechanisms. There is also no guarantee that frames will be transmitted: depending on the network load, frames can be lost, especially if the cell rate is above the agreed minimum cell rate. The minimum cell rate itself, however, is guaranteed by the network, as with ABR.

The advantage of GFR over the other service categories lies in the consideration of frame boundaries. According to [1], a frame is designated an AAL Protocol Data Unit. The boundaries of a frame can be determined using the Payload Type field in the cell header. This additional information about the structure of the traffic flow enables a better reaction to, e.g., overload situations. It is now possible, if a cell has to be discarded, to discard the entire frame; the remaining cells of the frame do not have to be transmitted to the recipient. Since ATM does not provide a mechanism for selective retransmission, all cells of the frame must in any case be retransmitted if this is requested by a higher layer. By discarding the complete frame, the efficiency of the transmission increases.

The general mechanisms such as UPC are of course also supported; however, they are optimized to take frames into account. Thus the Cell Loss Priority is set to the same value for every cell of a frame and does not differ from cell to cell. The UPC algorithm used determines conformance and marks cells accordingly. The rule here is that if one or more cells of a frame are classified as non-conforming, the entire frame is classified as non-conforming. For checking, the F-GCRA (Frame Generic Cell Rate Algorithm), a GCRA optimized for frames, is used, with the help of which compliance with the parameters agreed in the traffic contract can be checked. The F-GCRA is also used to check that the Maximum Frame Size is not exceeded.

For a differentiated treatment, the frames are ultimately divided into three levels, for each of which different or no quality of service guarantees are given. Cells that violate an agreement of the traffic contract are designated as non-conforming. They are either discarded immediately or marked as less important with CLP = 1.

The second level consists of the cells that are conforming but not qualified for the quality of service parameters. These are the cells that have not yet violated the agreed parameters but are already above the MCR. For them, transmission is realized in a best-effort sense. For the third level, the conforming and qualified cells, the agreed quality of service is fully ensured.

The F-GCRA used for this classification interprets a frame as a burst of cells. Accordingly, the tolerance to cell bursts must be adapted to the agreed parameters. For all frames that comply with the MFS and are transmitted within the minimum cell rate, it must be ensured that they are within the burst tolerance. The details of the F-GCRA can be found in [4], p. 394f.

The result of the F-GCRA test is determined solely by the first cell of the frame: if this cell is considered qualified, the complete frame is considered qualified for quality of service assurance. Depending on the overload, the network can now first discard the non-conforming cells and, if this is not sufficient to ensure a fair distribution of resources in accordance with the agreements with all other connections, also discard the unqualified frames or the cells of these frames.
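As an illustration of the three levels (not the normative F-GCRA of [1], whose details are given in [4]), a frame could be classified roughly as follows; the function arguments summarize results that the UPC has already determined per cell:

    def classify_frame(cell_count, mfs, all_cells_conforming, first_cell_qualified):
        """Illustrative three-level GFR classification: conformance is a property
        of the whole frame, and qualification is decided by its first cell."""
        if cell_count > mfs or not all_cells_conforming:
            return "non-conforming"         # discard or tag with CLP = 1
        if first_cell_qualified:
            return "conforming, qualified"  # full QoS guarantees apply
        return "conforming, unqualified"    # above the MCR: carried best effort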

5.7. Implementation of traffic management in the router

This section outlines the conceptual structure of a traffic management unit in an ATM router in order to illustrate the relationship between the individual mechanisms. Fig. 4 reduces the facts presented to the relevant elements. Other essential components of an ATM router are not shown for reasons of clarity. This traffic management unit cannot be implemented in this form either, since important aspects such as switching and routing would still have to be integrated in order to form a meaningful and efficiently working unit.

The unit shown operates on ATM cells: the incoming cells, entering on the left, are first examined by the Cell and Path Identifier for their significance for the router. If it is a cell for connection establishment, the CAC records the parameters for the data stream and checks whether the network can adhere to them under the current conditions.

If the connection is not being established, in the case of GFR a check is also made to determine whether the cell belongs to the current frame. In addition, the information on the affiliation to a specific VPC/VCC is recorded at this point in order to be able to include it in further processing. The UPC then uses the (F-)GCRA to check compliance with the traffic contract, marks the cell if necessary, and transfers it to buffer management. If ABR is used, BRM cells can, if necessary, be generated via the RM agent and sent back to the sender, or FRM cells can be provided with corresponding information and forwarded to the recipient.

In the buffer management, the functionalities for selective cell discarding and traffic shaping are implemented in addition to the basic functions in order to be able to react to overload. The cells that are to be routed further are then transferred to the router's queuing and scheduling mechanism in order to be sent.

Fig. 4 Conceptual structure of a traffic management unit in an ATM router

The comparison between ATM and TCP/IP is certainly not appropriate in all cases, since ATM is used less on the protocol level typical for TCP/IP and more as a protocol for the data link layer. Nevertheless, the two approaches will be briefly compared at this point in order to clarify the differences.

With its service categories and the traffic contract, ATM has very sophisticated mechanisms for categorizing and describing data flows. This is not the case with IP to the same extent; only rudimentary options are available in both IPv4 and IPv6. With IPv4 there is the option of using 3 bits encoded in the Type of Service field of the header to specify the type of data stream to which the current packet belongs. However, it is not possible to specify the characteristics of the data stream precisely. This does not change fundamentally with IPv6. A slightly finer differentiation with 8 bits is possible here, and data streams of packets can be identified via the flow label, but no further description mechanisms are introduced. As a result, the knowledge about a data stream is significantly less than is the case with ATM and fewer traffic management methods can be used for optimization.

At the TCP level, there is also the flow control mechanism. In this way, the receiver of a data stream can indirectly signal to the sender that it should adjust the transmission rate. In TCP, the receiver can forbid further sending by setting the window size to zero until it has been able to process all incoming data, in order to then reset the window size and allow the sender to continue sending.

By specifying an Explicit Cell Rate in RM cells, ATM enables the transmission rate to be controlled directly. Furthermore, what happens to cells that do not meet the agreed parameters is clearly defined for the different service categories. So this is another clear advantage of the ATM mechanisms.

In the event of network congestion, both TCP and ATM offer similar solutions. With TCP/IP, Explicit Congestion Notification provides a mechanism comparable to ATM's EFCI. In addition, however, most TCP implementations also respond to indirect signs of congestion: packet loss is used as an indicator to adjust the transmission speed. This is not the case with ATM in this form. However, with the use of BRM cells in ABR mode there is a somewhat more efficient solution for delivering explicit overload information, since the intermediate nodes address the sender directly.

In the area of traffic policing there are strict specifications for ATM with the UPC as to how it should be implemented, whereas for TCP there are no such specifications. For traffic shaping, however, the available mechanisms in both cases strongly depend on the implementation in the router and are only roughly specified in the standards. Basically, the possibilities here are equivalent, both for TCP and for ATM.

From the point of view of the available traffic management and quality of service assurance mechanisms, ATM clearly has the greater potential. With TCP/IP these options are much more limited and not as sophisticated. In terms of complexity, of course, this has a significant impact: the mechanisms supported by ATM naturally result in a significantly higher complexity of the router systems and thus also in higher requirements in terms of operation and configuration. This is particularly disadvantageous in cases where ATM is used as a pure data link layer and overlying protocols such as IP cannot take advantage of these capabilities and add their own mechanisms on top. This is also one of the main reasons why ATM is increasingly being displaced by other technologies and is coming under pressure in the backbone area as well.

With the procedures presented here, ATM offers extensive options for traffic management and quality of service assurance. With the approach of classifying services into categories and linking certain traffic characteristics to them, the user can determine the quality of service parameters suitable for his needs relatively easily.

Via the traffic contract it is also possible to specify parameters that both the network and the service must adhere to. The data from the contract is also used to decide whether the network can still handle the requested connection. Together with Usage Parameter Control, ATM thus has suitable preventive measures to avoid overload.

In addition, the common methods such as selective cell discard, traffic shaping and explicit forward congestion indication are supported and additional ATM-typical advantages such as the use of virtual paths for efficient resource management are used.

A closed-loop feedback mechanism is available for the ABR service category, which allows the sender to adapt its transmission behavior to the available bandwidth based on overload reports or the explicit specification of a transmission rate via resource management cells that are periodically introduced into the system.

With the Guaranteed Frame Rate service category considered last, ATM has also introduced the possibility of responding to the needs of frame-based protocols on higher layers of the protocol stack and of cleverly optimizing its mechanisms to achieve more efficient transmission in overload situations.

Overall, the capabilities of ATM under the aspect of traffic management and quality of service considered here are significantly more extensive than those of, for example, TCP/IP. For this reason, ATM continues to have its raison d'etre in various niche areas, even if the trend is more and more towards standardizing networks in the direction of IP, which has its own advantages due to the differences at the technical level.

References

  • [1] The ATM Forum, Traffic Management Specification, V. 4.1. Mountain View, CA, March 1999. http://www.mfaforum.org/ftp/pub/approved-specs/af-tm-0121.000.pdf
  • [2] The ATM Forum, Addendum to TM 4.1: Differentiated UBR. Mountain View, CA, July 2000. http://www.mfaforum.org/ftp/pub/approved-specs/af-tm-0149.000.pdf
  • [3] The ATM Forum, Addendum to TM 4.1 for an Optional Minimum Desired Cell Rate Indication for UBR. Mountain View, CA, July 2000. http://www.mfaforum.org/ftp/pub/approved-specs/af-tm-0150.000.pdf
  • [4] W. Stallings, High-Speed Networks and Internets: Performance and Quality of Service, 2nd ed. New Jersey: Prentice Hall, 2002, chap. 5 and 13.
  • [5] O. Kyas, ATM Networks: Structure, Function, Performance. Bergheim: DATACOM-Verlag, 1993, chap. 11.
  • [6] M. R. Karim, ATM Technology and Services Delivery. New Jersey: Prentice-Hall, 2000, chap. 5.
  • [7] R. Jain, Congestion Control and Traffic Management in ATM Networks: Recent Advances and A Survey. Columbus, OH: The Ohio State University, August 1996.
  • [8] A. Meyer, Asynchronous Transfer Mode. Potsdam: HPI, University of Potsdam, June 2006.
  • [9] T. Mickelsson, ATM versus Ethernet. Helsinki University of Technology, May 1999. http://www.tml.tkk.fi/Opinnot/Tik-110.551/1999/papers/07ATMvsEthernet/iworkpaper.html

List of abbreviations

ABR - Available Bit Rate
ACR - Allowed Cell Rate
ATM - Asynchronous Transfer Mode
BRM - Backward Resource Management
CAC - Connection Admission Control
CBR - Constant Bit Rate
CDV - Cell Delay Variation
CDVT - Cell Delay Variation Tolerance
CI - Congestion Indication
CLP - Cell Loss Priority (bit)
CLR - Cell Loss Ratio
CTD - Cell Transfer Delay
EFCI - Explicit Forward Congestion Indication
ER - Explicit Cell Rate
F-GCRA - Frame Generic Cell Rate Algorithm
FRM - Forward Resource Management
GCRA - Generic Cell Rate Algorithm
GFR - Guaranteed Frame Rate
IP - Internet Protocol
MBS - Maximum Burst Size
MCR - Minimum Cell Rate
MFS - Maximum Frame Size
nrt-VBR - Non-Real-Time Variable Bit Rate
NI - No Increase (bit)
OAM - Operation and Maintenance
PCR - Peak Cell Rate
QoS - Quality of Service
rt-VBR - Real-Time Variable Bit Rate
SCR - Sustainable Cell Rate
TCP - Transmission Control Protocol
UBR - Unspecified Bit Rate
UNI - User Network Interface
UPC - Usage Parameter Control
VBR - Variable Bit Rate
VCC - Virtual Channel Connection
VPC - Virtual Path Connection