52-10-10 Asynchronous Transfer Mode Switching Nathan J. Muller

Payoff

Ultimately, the integration of ATM switching and routing will support large-scale multimedia internetworks. Increased bandwidth, low latency, and increased network availability are just a few of the advantages that make ATM the switching method of choice for many data communications managers.

Introduction

ATM switching, also known as cell relay, is a general-purpose method for switching voice, data, image, and video traffic. Asynchronous Transfer Mode (ATM) is designed to operate at rates of 155M b/s and beyond. Such speeds are achievable, in large part, because data is switched at the cell level, in fixed 53-byte cells, rather than at the bit or byte level, thus minimizing internodal processing requirements.

Advantages of ATM

The benefits of ATM include increased bandwidth, low latency, and increased network availability through the automatic and guaranteed assignment of network bandwidth, otherwise known as quality of service (QoS). Quality of service supports isochronous traffic such as voice and video, as well as real-time data traffic for online, interactive applications.

ATM is expected to become the fabric of choice for LAN backbones. Its scalable bandwidth offers a foundation for growing internetworks. ATM is a compelling solution for both WAN and campuswide backbone networks because, unlike LANs, there is no practical limit on the geographical distance of ATM links. While ATM service is currently limited to data transmission over permanent virtual circuits (PVCs), the long-range goal of ATM is to support a multitude of services - data, voice, image, and video - from a single platform. When completed, ATM standards will promote intercarrier and intervendor interoperability.

Simplified management is another key feature of ATM. With router-based internetworks becoming ever larger, performing moves, adds, and changes, in addition to keeping track of an endless list of MAC addresses, has become a real management burden. ATM offers a simplified way of managing large networks while maintaining scalability.

Generally, users are not looking at ATM with the idea of replacing their existing LAN infrastructure and investment. It is currently too expensive to install ATM to every desktop except on a selective basis, such as for engineering workgroups running data-intensive applications. Current ATM applications are campus area networks that integrate voice, data, video, and LAN traffic over a common backbone. For those who merely want to interconnect token ring and Ethernet LANs between different sites, frame relay, T1, Switched Multimegabit Data Service (SMDS), and existing data circuits are generally quite effective.

ATM Protocol Model

The capabilities of the ATM and ATM adaptation layers are quite innovative. The ATM layer is responsible for routing cells along the virtual path established during call setup and for performing general cell routing, cell multiplexing/demultiplexing, and header error control. These functions are similar to those performed in frame relay transport, except that in frame relay the cell, called a frame, may vary in size according to the application.

The ATM adaptation layer (see Exhibit 1) performs the transition between the message format used by the upper layers of the communications model and the fixed-length cell format used by the ATM layer. In addition, the ATM adaptation layer is responsible for error detection and correction and for flow control.

Exhibit 1. Protocol Model
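
To make the adaptation function concrete, the following sketch segments an upper-layer message into 48-byte cell payloads and reassembles it at the receiver. It is a minimal illustration only: the length-prefix and zero-padding convention is assumed for the example and stands in for the framing, error detection, and other services a real ATM adaptation layer provides.

```python
CELL_PAYLOAD = 48  # bytes of user information per ATM cell

def segment(message: bytes) -> list[bytes]:
    """Split an upper-layer message into 48-byte cell payloads.

    Simplified illustration: the real adaptation layers add their own
    headers/trailers and error checks; here we only length-prefix the
    message and pad the final payload to a full 48 bytes.
    """
    framed = len(message).to_bytes(4, "big") + message
    framed += b"\x00" * ((-len(framed)) % CELL_PAYLOAD)
    return [framed[i:i + CELL_PAYLOAD] for i in range(0, len(framed), CELL_PAYLOAD)]

def reassemble(payloads: list[bytes]) -> bytes:
    """Recover the original message from received cell payloads."""
    framed = b"".join(payloads)
    length = int.from_bytes(framed[:4], "big")
    return framed[4:4 + length]

if __name__ == "__main__":
    msg = b"connection request"
    cells = segment(msg)
    assert reassemble(cells) == msg
    print(f"{len(msg)}-byte message carried in {len(cells)} cell payload(s)")
```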

Cell Structure

Each ATM cell consists of a 5-byte header field and a 48-byte information field (see Exhibit 2). The cell header contains the information needed to route the information field through the ATM network. The header supports six functions:

- Generic flow control, which enables customer premises equipment to control the flow of traffic for different grades of service.

- Virtual channel identifier, which identifies a particular connection. For a given connection, the value of the virtual channel identifier may change as the cell traverses the network.

- Virtual path identifier, which identifies a bundle of virtual channels carried end to end between terminations on the same physical facility.

- Payload type, which indicates whether the cell contains user information or network information. The payload of a network information cell does not form part of the user's information transfer, but may carry an out-of-band control message.

- Cell loss priority, which gives guidance to the network regarding cells that may be discarded in congestion situations.

- Header error control, which prevents errors in the virtual channel identifier from causing cells to be misrouted.

Exhibit 2. ATM Cell Structure
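
As an illustration of this layout, the sketch below packs and unpacks the 5-byte header used at the user-network interface, assuming the usual field widths (4-bit generic flow control, 8-bit virtual path identifier, 16-bit virtual channel identifier, 3-bit payload type, 1-bit cell loss priority, and an 8-bit header error control computed as a CRC-8 over the first four bytes). The helper names are ours, for illustration only.

```python
def hec(first_four: bytes) -> int:
    """CRC-8 (polynomial x^8 + x^2 + x + 1) over the first four header
    bytes, XORed with the conventional 0x55 offset."""
    crc = 0
    for byte in first_four:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def pack_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    """Pack a UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8)."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    body = word.to_bytes(4, "big")
    return body + bytes([hec(body)])

def unpack_header(header: bytes) -> dict:
    """Recover the fields; raise if the HEC does not match (header error)."""
    body, check = header[:4], header[4]
    if hec(body) != check:
        raise ValueError("header error control mismatch")
    word = int.from_bytes(body, "big")
    return {"gfc": word >> 28, "vpi": (word >> 20) & 0xFF,
            "vci": (word >> 4) & 0xFFFF, "pt": (word >> 1) & 0x7,
            "clp": word & 0x1}

if __name__ == "__main__":
    h = pack_header(gfc=0, vpi=1, vci=42, pt=0, clp=0)
    assert len(h) == 5 and unpack_header(h)["vci"] == 42
```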

Unlike traditional X.25 networks, where the switches operate in store-and-forward fashion to read the address information and implement functions such as error correction, ATM switches use the virtual channel identifier to route a cell through the ATM network to its destination. All cells enter the switching matrix in synchrony. In the event that two cells converge on the same output line, one of them is buffered. It is through buffer and queue management (discussed in a subsequent section) that cells are properly ordered for ATM-level processing.

Cell Switching

Cell switching through ATM is accomplished as follows. An application initiates the communications process by sending a connection request message to the transport layer through the presentation and session layers. The connection request message is transformed into cells via the ATM and ATM adaptation layers and forwarded to the signaling system 7 (SS7) network, where the message is routed through the network to define the path and call parameters that will be used during the transfer of data. When the connection is established, data is transferred between the applications at each end of the circuit. During data transfer, the ATM switches in the network route cells from incoming to outgoing input/output (I/O) ports using the cell's virtual path and virtual channel identifiers and the routing information established during call setup.

Like frame relay, ATM minimizes the amount of processing performed by the network, thus allowing a very high information transfer rate.
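
The per-cell routing step can be pictured as a lookup in a translation table that call setup has populated: an incoming port and VPI/VCI pair selects an outgoing port and the new VPI/VCI values written into the header before the cell leaves. The sketch below is illustrative only; the table entries are invented.

```python
# (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci), built at call setup
FORWARDING_TABLE = {
    (1, 0, 101): (3, 5, 77),   # example entries only; real tables are
    (2, 0, 102): (3, 5, 78),   # populated by the signaling procedures
}

def switch_cell(in_port: int, header: dict) -> tuple[int, dict]:
    """Look up the incoming VPI/VCI, rewrite the header, and return the
    output port -- the whole per-cell job of the ATM layer."""
    key = (in_port, header["vpi"], header["vci"])
    out_port, out_vpi, out_vci = FORWARDING_TABLE[key]  # KeyError = no connection
    return out_port, {**header, "vpi": out_vpi, "vci": out_vci}

if __name__ == "__main__":
    port, hdr = switch_cell(1, {"vpi": 0, "vci": 101, "pt": 0, "clp": 0})
    print(port, hdr)  # 3 {'vpi': 5, 'vci': 77, ...}
```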

Congestion Control

In an ATM-based network, congestion can occur on any of three levels:

- The connection level, where congestion can prevent the establishment of a new call.

- The packet level, where congestion can cause the delay or loss of protocol data.

- The ATM level, where congestion can cause the delay or loss of cells.

New connections must be admitted to the network in a manner that protects the network performance of all existing connections, and they must be withheld from congested portions of the network. At the ATM level, cells are ordered for processing by ATM concentrators and ATM switches. Action must be taken to prevent an excessive cell flow on any given connection and to help portions of the network recover from excessive cell flows.

When establishing new connections, various performance issues come into play at both the connection level and the ATM level. For example, at the connection level, the usage of high-bandwidth services must not consume available resources to the point that lower-bandwidth services experience an abnormal increase in blocked calls. At the ATM level, a new connection should be admitted only if sufficient network resources are available to support it.

Call Setup

An important advantage of ATM is its flexibility in supporting different types of services. A broad range of services, characterized by different traffic behaviors, may share the same ATM network, where fixed-length packets (cells) belonging to different calls are multiplexed together for transport.

To meet quality-of-service requirements, resources should be allocated among three control levels:

- The network control level, where virtual channel identifier routing, connection management, and recovery are controlled.

- The call control level, where call acceptance and overload control are handled.

- The cell control level, where policing and buffer management are controlled.

Together, these functions ensure that cell losses and delays do not exceed a tolerable level for all users.

During the call setup phase, the ATM network must decide whether to accept the new call. This depends largely on the availability of bandwidth to support the connection. The amount of bandwidth that should be allocated to the new call is based on various statistical properties of the traffic and on the number of calls already in progress. Three routing considerations come into play in the assignment of bandwidth to new calls: network routing of the virtual channel identifier, path selection inside the switching fabric realized on a per-call basis, and semipermanent virtual connection and virtual path routing performed long term under network management control.

Once a connection has been accepted, the cell stream is monitored to ensure that the user does not exceed the values established in the call setup phase. If these values are exceeded, corrective action is taken through a process known as policing, which is carried out by the policing unit of the local ATM node. This enforcement mechanism is applied at the point of origin, before the multiplexing of all traffic.

An ATM-layer overload occurs when the parameters characterizing the cells' arrival process exceed the maximum values that guarantee the required quality of service. The overload condition has two primary causes: the traffic offered by one or more virtual connections may exceed the parameters declared in the setup phase, or, even though each virtual call generates traffic according to its declared parameters, too many connections may have been accepted.

Policing functions mainly discard or mark cells, which reduces the probability that an overload will occur because a virtual call exceeds its declared values. Marking refers to the tagging of violations in the cell's header. That way, when node congestion occurs and cells must be dropped, the marked cells are the first to be discarded. If that does not relieve the congestion, unmarked cells can be dropped (a minimal policing sketch appears below).

The transient overload generated by the statistical multiplexing of several connections is monitored by the allocation mechanism. If the policing and allocation functions work properly, overload situations will be infrequent. Even considering resource faults and the rerouting of existing virtual connections, the call acceptance function should be able to reallocate virtual calls on the available resources.
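
The following is a minimal, single-bucket policer in the spirit of the usage monitoring just described: conforming cells pass, a modest excess gets the cell loss priority bit set (marking), and a gross excess is discarded. The rate and threshold parameters are invented for the example and are not taken from any standard.

```python
class Policer:
    """Minimal leaky-bucket policer: the bucket drains at the declared
    rate, each arriving cell adds one unit, and the fill level decides
    whether the cell conforms, is marked (CLP set to 1), or is dropped."""

    def __init__(self, declared_rate: float, mark_limit: float, drop_limit: float):
        self.rate = declared_rate        # declared cells per second
        self.mark_limit = mark_limit     # burst tolerance before marking
        self.drop_limit = drop_limit     # burst tolerance before discarding
        self.bucket = 0.0
        self.last_arrival = 0.0

    def police(self, arrival_time: float, cell: dict) -> str:
        # Drain the bucket at the declared rate since the last cell.
        self.bucket = max(0.0, self.bucket - (arrival_time - self.last_arrival) * self.rate)
        self.last_arrival = arrival_time
        self.bucket += 1.0
        if self.bucket > self.drop_limit:
            return "discard"
        if self.bucket > self.mark_limit:
            cell["clp"] = 1              # tag the violation in the header
            return "marked"
        return "conforming"

if __name__ == "__main__":
    p = Policer(declared_rate=1000.0, mark_limit=5, drop_limit=10)
    for i in range(12):                  # a burst arriving far above 1,000 cells/s
        print(p.police(arrival_time=i * 0.0001, cell={"clp": 0}))
```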

Bandwidth Allocation

A set of parameters that characterizes the traffic is used for resource management. Proper operation of the ATM network depends on these traffic characterizations; they are as applicable to the policing function as they are to bandwidth allocation. However, the wide range of bit rates, the statistical profile of the information flow, and the variety of connection configurations can complicate traffic characterization.

In the virtual call setup phase, the first thing the ATM network does is decide whether or not to accept the call. The decision depends on the availability of a path with enough free bandwidth to support the new connection. The connection process identifies a rule for allocating bandwidth to each new call. The simplest rule is to reserve the peak bit rate, regardless of the offered traffic's burstiness. However, if much of the traffic is bursty, network resources will not be optimally exploited.

Resource Utilization

The three main approaches to high resource exploitation under ATM are statistical multiplexing, the introduction of different bearer services, and the dynamic shaping of the incoming information flow at the multiplexers.

Statistical Multiplexing.

In statistical multiplexing, the call acceptance mechanism first determines the amount of bandwidth to be reserved for the new connection. The amount of bandwidth depends not only on the characteristics of the traffic, but also on the traffic already being carried on the network. Consequently, the multiplexing gain will vary with the number and characteristics of the other calls active on the various links. For each call, a bandwidth less than the peak bit rate is allocated.

Statistical multiplexing can be done in two ways. With full multiplexing, complete multiplexing of calls is applied. Although the maximum gain is obtained, drawbacks arise from the complexity of the real-time procedures and from the difficulty of adding a new service class. With limited multiplexing, homogeneous services are grouped into classes and statistical multiplexing is used within each class. This method offers a compromise between efficiency and complexity.

When statistical multiplexing is used, the bandwidth allocated is less than the sum of the peak bit rates. In this case, when buffers are full, a loss of cells is unavoidable, so an accurate call acceptance rule is mandatory to keep the frequency of cell losses as small as quality-of-service requirements demand. Once a connection has been accepted, the originating switch must monitor it to ensure that it does not exceed the traffic values declared in the call setup phase.
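
A toy admission check shows the arithmetic at stake. The peak-rate rule reserves the full peak for every call; the statistical rule reserves something between the mean and the peak. The "mean plus half the burst headroom" formula, the link capacity, and the call parameters below are all invented for illustration and stand in for a real effective-bandwidth calculation.

```python
LINK_CAPACITY = 155_000_000  # bits per second on the trunk (illustrative)

def allocated_bandwidth(peak: float, mean: float, statistical: bool) -> float:
    """Bandwidth reserved for one call. Peak-rate reservation is the
    simplest rule; the statistical rule (mean plus half the burst
    headroom) is an arbitrary stand-in used only to show the idea."""
    return peak if not statistical else mean + 0.5 * (peak - mean)

def admit(calls: list, new_call: tuple, statistical: bool = True) -> bool:
    """Accept the new (peak, mean) call only if the total reserved
    bandwidth still fits on the link."""
    reserved = sum(allocated_bandwidth(p, m, statistical) for p, m in calls)
    return reserved + allocated_bandwidth(*new_call, statistical) <= LINK_CAPACITY

if __name__ == "__main__":
    active = [(10e6, 2e6)] * 15          # fifteen bursty calls, 10M b/s peak each
    print(admit(active, (10e6, 2e6), statistical=False))  # False: peak rule rejects
    print(admit(active, (10e6, 2e6), statistical=True))   # True: statistical rule accepts
```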

Multiple Bearer Service Definition.

Under this resource management scheme, services with different quality-of-service requirements are integrated in the ATM network. Bearer services with different quality-of-service requirements, in terms of cell delay and cell loss, can be handled in distinct bearer classes. This means that the network can be divided into several virtual subnetworks that guarantee different levels of performance. If all cells are handled in the same way, the most stringent values can be used as the quality-of-service requirements for modeling purposes. The use of priorities allows greater efficiency to be achieved, since low priority can be assigned to cells that can tolerate a higher loss probability. Priorities give preference to certain virtual call types and guarantee that the cells in a call that carry essential information are not lost.

Shaping.

The other way to enhance performance is to shape the user's information stream by using buffers that hold instantaneous bursts of cells and release them at a lower speed. The result is a smoother stream that is more easily handled at the network level. The trade-off, however, is higher transmission delay. The delay may be too high for real-time services, making shaping appropriate only for high bit-rate data traffic.
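
The sketch below shows the shaping idea: arriving cells are queued and released no faster than one per fixed interval, so a burst leaves as an evenly spaced stream at the cost of queueing delay. The interval and the arrival pattern are illustrative.

```python
from collections import deque

def shape(arrivals: list, cell_interval: float) -> list:
    """Release one queued cell every `cell_interval` seconds.

    `arrivals` are cell arrival times; the return value is the departure
    times after shaping. The growing queueing delay is the trade-off
    noted above: a burst leaves as an evenly spaced stream.
    """
    queue = deque(sorted(arrivals))
    departures = []
    next_slot = 0.0
    while queue:
        arrival = queue.popleft()
        departure = max(arrival, next_slot)
        departures.append(departure)
        next_slot = departure + cell_interval
    return departures

if __name__ == "__main__":
    burst = [0.0, 0.0, 0.0, 0.0, 0.0]            # five cells arriving at once
    out = shape(burst, cell_interval=0.001)       # released 1 ms apart
    print(out)                                    # [0.0, 0.001, 0.002, 0.003, 0.004]
    print([d - a for d, a in zip(out, burst)])    # delay is the cost of smoothness
```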

Alternative Congestion Controls

No single approach to congestion control is sufficient by itself. Fortunately, ATM offers several ways to enforce an admitted connection's use of network resources.

Admission Control.

The purpose of admission control is to establish fair blocking procedures and to ensure that the resources available to each admitted connection are sufficient to meet performance objectives in terms of lost cell ratio, cell transfer delay, and cell delay variation. Cells offered to the network in excess of a variable-bit-rate connection's admission parameters can be immediately discarded. Alternatively, such cells can be admitted if network resources are available and procedures are in place for operating with marked, or tagged, cells.

Buffer and Queue Management.

Buffer and queue management ensures the proper ordering of cells for ATM-level processing. The ordering of cells for processing at ATM concentrators and ATM switches is a key factor in determining network efficiency, especially when various mixes of constant- and variable-bit-rate services are involved. At such a concentrator or switch, incoming cells may be processed on a first-come, first-served basis or according to multiple levels of priority that differentiate the processing order of various cells. Multilevel priorities may be used to differentiate between connections carrying different types of service, based on the performance objectives of those services. Generally, multilevel priorities also result in more efficient statistical multiplexing.
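
A two-level output queue illustrates this priority handling: the higher-priority class is always served first, and first-come, first-served order is preserved within each class. The class assignments in the example are made up.

```python
from collections import deque

class PriorityOutputPort:
    """Two service classes feeding one output line: class 0 (e.g., delay-
    sensitive constant-bit-rate traffic) is always served before class 1
    (bursty data). Within a class, cells keep first-come, first-served order."""

    def __init__(self, levels: int = 2):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, cell: dict, priority: int) -> None:
        self.queues[priority].append(cell)

    def dequeue(self):
        """Return the next cell to transmit, or None if all queues are empty."""
        for queue in self.queues:
            if queue:
                return queue.popleft()
        return None

if __name__ == "__main__":
    port = PriorityOutputPort()
    port.enqueue({"vci": 200, "kind": "data"}, priority=1)
    port.enqueue({"vci": 100, "kind": "voice"}, priority=0)
    print(port.dequeue())  # the voice cell goes out first despite arriving later
```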

Traffic Enforcement.

Traffic enforcement monitors a connection's resource usage for compliance with its limits and acts on violations of those limits.

Reactive Controls.

Even with effective procedures for call admission, buffer and queue management, and traffic enforcement, localized congestion may sometimes occur in a heavily loaded ATM network. Reactive controls are used to relieve congestion once it occurs. For example, the statistical nature of variable-bit-rate traffic could produce temporary overloads at some internal network buffers. If this happens, additional congestion control procedures can be called upon to help the affected portion of a broadband network recover from its congested state. Such reactive controls include refusing to accept new connections based on measures such as equipment utilization and buffer fill, alternate routing, and flow control techniques similar to those used in packet-switched networks.

Customer Premises Equipment

Nationwide ATM-based network services are already available from AT&T, MFS Datanet, Sprint, and WilTel, as well as regionally from such carriers as US West and Pacific Bell. Other carriers have announced plans to offer ATM services. As with any other type of service, there must be customer premises equipment to provide the necessary network connectivity. ATM customer premises equipment includes routers, hubs, multiplexers, and switches.

Router-Based ATM Networks

Most router vendors initially provided ATM connectivity through external ATM data service units (DSUs) operating at the DS3 speed of 45M b/s. These DSUs comply with the ATM Forum's Data Exchange Interface specification, which allows the equipment to interface with standards-compliant ATM switches in public or private networks. The device segments data packets into fixed-length cells of 53 bytes, including overhead, before sending them out over the ATM network. At the other end, a similar device reassembles the cells into data packets.

The next step for router vendors is to offer an integrated ATM interface that allows users to network their routers through fiber-optic private lines or ATM switches at speeds of up to 155M b/s. Eventually, backbone routers will have the internal capability to switch multimedia traffic, providing the option of carrying both variable-length packets and fixed-length cells simultaneously over separate data paths. Several router vendors are implementing ATM migration strategies.

Hub-Based ATM Networks

Several hub vendors offer ATM connectivity options. One option is an ATM hub capable of supporting WAN connections at 45M b/s or 155M b/s. Such products will eventually scale to rates of up to 2.4G b/s. Another option offered by hub vendors is a plug-in module that allows the hub to connect to other hubs or switches by means of a private, point-to-point ATM backbone. A variation on this connectivity solution uses a plug-in ATM router module that allows hubs to be connected over a public, switched ATM backbone. Eventually, hub vendors want to provide end-to-end ATM switching from desktop to desktop.

Multiplexer-Based ATM Networks

Sensing that the market for private networks is slowing down, traditional vendors of T1/T3 multiplexers plan to offer migration paths to ATM, with access links provided at speeds as low as 9.6K b/s. Most solutions entail plug-in ATM modules that give the multiplexers access to public ATM services and allow them to interoperate with high-end enterprise ATM switches and hubs. With the addition of the ATM modules, users will be able to take advantage of a mix of carrier services, including frame relay and Integrated Services Digital Network (ISDN), already supported by many multiplexers, as well as ATM. Thus, users can choose the most economical service based on tariffs and application needs. By 1996, ATM carrier services and broadband private lines will become more widely available, and such multiplexers will take on the role of enterprise network switches, providing wide-area transport for all network applications.

Switch-Based ATM Networks

The first ATM products were adapter cards that connected workstations over private point-to-point links. Next came ATM switches, which were used to link workstations over local-area or campus networks, delivering as much as 140M b/s to each computer. Today's ATM switches can be used to create an enterprisewide ATM network. Up to 64 switches can be interconnected over the wide area using DS3 or OC-3 links to form a distributed switching matrix, so workstations can communicate with workstations attached to different switches. Some vendors intend to offer ATM adapter cards for standard bus-based systems, servers, and workstations, thereby enabling users to leverage new applications off their existing network infrastructures.

Smaller premises communications switches for workgroups, departments, and backbones are another group of offerings users can expect. The workgroup ATM switch is targeted at small office or workgroup environments where communication costs impede the deployment of new applications; ATM used in this context can be an effective multiplexing technology for integrating voice, data, and legacy applications. The departmental switch provides a larger-capacity, more flexible ATM premises environment when it makes sense to upgrade, offering connectivity options to the desktop, servers, workgroup hubs, and backbone links. The backbone switch aggregates ATM traffic in a campus environment and provides the bandwidth and network management capabilities to optimize the use of WAN services.

Implementation Considerations

For ATM to be universally successful, communications protocols and applications must be optimized to adapt to this new network paradigm. For example, the dominant LAN technologies in use today rely on the Internet Protocol (IP) and the Internetwork Packet Exchange (IPX) protocol. Both are connectionless and thus not well suited to the connection-oriented environment of ATM switching. In fact, all major networking protocols and application programming interfaces, including the Transmission Control Protocol/Internet Protocol (TCP/IP) and IBM's Advanced Peer-to-Peer Networking (APPN), will have to be modified to run on the emerging ATM networks. Companies such as IBM Corp. and Novell are researching ways for users to migrate their existing application environments to an ATM networking infrastructure.

The ATM Forum is currently developing an ATM API specification to support native ATM applications that fully exploit the switching scheme's ability to prioritize time-sensitive data and handle multimedia applications. The ATM Forum is also working on a LAN emulation specification that defines how media access control (MAC) addresses will be mapped to ATM addresses. This will make address discovery more efficient and limit broadcasts by such protocols as IPX. The goal of LAN emulation is to shield the user from ATM, allowing the construction of networks that integrate existing LAN adapter cards and the new ATM switches.

The functionality of first-generation ATM switches is limited in other ways as well. Current ATM switches lack adequate congestion management capabilities: their buffers are too small, and they simply discard data when the network becomes too congested. ATM switches should be able to choose the traffic to be discarded based on user-defined criteria.
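
The MAC-to-ATM address mapping that LAN emulation performs can be pictured as a small cache in front of a resolution service. The sketch below is only a schematic of that idea, not the ATM Forum's specification; the class, the stubbed resolution service, and all addresses are invented for illustration.

```python
class LanEmulationClient:
    """Illustrative MAC-to-ATM address cache: known mappings are answered
    locally, and unknown ones are resolved through a stubbed lookup
    service standing in for the query mechanism the LAN emulation
    specification defines. All addresses here are made up."""

    def __init__(self, resolution_service: dict):
        self.resolution_service = resolution_service  # stub for the emulation server
        self.cache = {}

    def resolve(self, mac: str) -> str:
        if mac not in self.cache:
            # Cache miss: ask the resolution service rather than broadcasting
            # to every station, which is how broadcasts are kept in check.
            self.cache[mac] = self.resolution_service[mac]
        return self.cache[mac]

if __name__ == "__main__":
    server = {"00:40:0b:11:22:33": "47.0005.80ff.e100.0000.f21a.01"}
    client = LanEmulationClient(server)
    print(client.resolve("00:40:0b:11:22:33"))  # ATM address used to open a connection
```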

Another area that must be addressed to ensure the success of ATM is the availability of switched virtual circuits (SVCs). Most switches currently used to provision ATM service support only permanent virtual circuits (PVCs). This means that users must configure permanent virtual circuits to establish fully meshed networks and reconfigure them to add sites or implement network restoral configurations; a full mesh of n sites requires n(n-1)/2 PVCs, so adding one site to a 20-site mesh means provisioning 20 new circuits. Configuring permanent virtual circuits, instead of allowing traffic to be addressed dynamically through switched connections, can be an arduous task. The full potential of ATM can be realized only with switched virtual circuits.

Conclusion

Many data communications managers are uncertain about which network technology offers the most cost-effective solution. The available choices include frame relay, Switched Multimegabit Data Service, and ATM. Some users hold the mistaken view that the three technologies are mutually exclusive; in fact, frame relay and Switched Multimegabit Data Service will eventually function as access interfaces to ATM backbone networks. Frame relay is already widely implemented in hardware and services and has proven cost-effective for certain applications.

Moreover, efforts are under way to ensure that frame relay, SMDS, and ATM interoperate. The SMDS Interest Group, the Frame Relay Forum, and the ATM Forum are cooperating in the development of interoperability specifications. When implemented by vendors and service providers, these specifications will provide a high degree of network configuration flexibility. Frame relay links will be able to provide access to both SMDS and ATM, for example, while traffic on one regional SMDS network can be transported over nationwide ATM backbone links to an SMDS network in another region. This flexibility allows users to take advantage of the most appropriate technology without having to replace existing customer premises equipment.

Author Biography

Nathan J. Muller is an independent consultant in Huntsville, AL, specializing in advanced technology marketing and education. He has more than 22 years of industry experience and has written extensively on many aspects of computers and communications. He is the author of eight books and more than 500 technical articles. He has held numerous technical and marketing positions with such companies as Control Data Corp., Planning Research Corp., Cable & Wireless Communications, ITT Telecom, and General DataComm, Inc. He holds an MA in social and organizational behavior from George Washington University.