4-04-50 The Critical Choice of a Client/Server Architecture

John M. Gallaugher
Suresh C. Ramanathan




Payoff

The architectural design of a client/server system affects the initial development cost, day-to-day transactional performance, ongoing maintenance costs, and long-term flexibility and scalability of the application. This article assesses the merits of two- and three-tier architectures to help IS managers choose an appropriate architecture for any given project.

Introduction

As organizations move up the learning curve from small-scale client/server projects toward mission-critical applications, performance expectations and uptime requirements increase correspondingly, as does the need to remain both flexible and scalable. In such a demanding scenario, the choice and implementation of an appropriate architecture is critical.

Architecture affects all aspects of software design and engineering. Before deciding on the type of architecture, the architect considers the complexity of the application, the level of integration and interfacing required, the number of users, their geographical dispersion, the nature of the networks, and the overall transactional needs of the application. An inappropriate architectural design or a flawed implementation can result in horrendous response times. The choice of architecture also affects development time and the future flexibility and maintenance of the application.

At the start of every client/server project, IS professionals must make the critical choice between a two-tier and a three-tier architecture. This article defines the basic concepts of client/server architecture, describes two-tier and three-tier architectures, and analyzes their respective benefits and limitations. Differences in development effort, flexibility, and ease of reuse are also compared to further aid IS managers in choosing an appropriate architecture for any given project.

Overview of Client/Server Computing

Despite massive press coverage, there is no single, clear definition of client/server computing. Client and server are software, not hardware, entities. In its most fundamental form, client/server involves a software entity (the client) making a specific request that is fulfilled by another software entity (the server).

Exhibit 1 illustrates the client/server exchange. The client process sends a request to the server. The server interprets the message and then attempts to fulfill the request. To do so, the server may have to refer to a knowledge source (e.g., a data base), process data (e.g., perform calculations), control a peripheral, or make an additional request of another server. In many architectures, a client can make requests of multiple servers, and a server can service multiple clients.

Exhibit 1. Client/Server Transactions
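
As a concrete illustration of this exchange, consider the following minimal sketch. It is not part of the original article; the request format, port number, and names are hypothetical. A server process waits on a socket, interprets a request, fulfills it with a trivial calculation, and returns the result; the client process initiates the dialog and consumes the reply.

    # Minimal client/server exchange (illustrative sketch, not from the article).
    # The server waits for a request, interprets it, and returns a reply;
    # the client initiates the dialog. Names and the port number are hypothetical.
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5050

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((HOST, PORT))
    listener.listen()                                 # the server process is now ready

    def server():
        conn, _ = listener.accept()                   # the server never initiates; it waits
        with conn:
            request = conn.recv(1024).decode()
            # Interpret the message and attempt to fulfill it (here, a trivial calculation).
            if request.startswith("upper:"):
                reply = request[len("upper:"):].upper()
            else:
                reply = "error: unknown request"
            conn.sendall(reply.encode())

    worker = threading.Thread(target=server)
    worker.start()

    # Client: sends a specific request and receives the server's response.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect((HOST, PORT))
        client.sendall(b"upper:client/server")
        print(client.recv(1024).decode())             # -> CLIENT/SERVER

    worker.join()
    listener.close()

The same pattern holds whether the two processes share a machine or sit on separate nodes of a network; only the host address changes.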

It is important to understand that the relationship between client and server is a command/control relationship. In any given exchange, the client initiates the request and the server responds accordingly; a server cannot initiate a dialog with clients.

Because the client and server are software entities, they can be located on any appropriate hardware. A client process, for instance, could be resident on network server hardware and request data from a server process running on another server machine, or even on a PC. In another scenario, the client and server processes can be located on the same physical hardware box. In fact, in the prototyping stage, a developer may choose to have both the presentation client and the data base server on the same PC. The server can later be migrated (i.e., distributed) to a larger system for further preproduction testing after the bulk of the application logic and data structure development is complete.

Although the client and server can be located on the same machine, this article focuses primarily on architectures used to create distributed applications, that is, those in which the client and server are on separate physical devices. A distributed application consists of separate parts that execute on different nodes of the network and cooperate to achieve a common goal.[98] The supporting infrastructure should also render the inherent complexity of distributed processing invisible to the end user.

[98] M. Bever et al., "Distributed Systems, OSF DCE, and Beyond," in DCE: The OSF Distributed Computing Environment, ed. A. Schill (New York: Springer-Verlag, 1993).

The client in a client/server architecture does not have to sport a graphical user interface (GUI); however, the mass commercialization of client/server has come about in large part because of the proliferation of GUI clients. Some client/server systems support highly specific functions such as print spooling (i.e., network print queues) or presentation services (i.e., X Window). Although these special-purpose implementations are important, the discussion here focuses mainly on distributed client/server architectures that demand flexibility in functionality and an enhanced GUI.

Types of Client/Server Architecture

IS professionals considering a move to client/server computing, whether to replace existing systems or to introduce entirely new systems, must determine which type of architecture to use. In the early 1980s, the American National Standards Institute (ANSI), in conjunction with the University of Minnesota, defined a three-layer architecture for building portable systems. This architecture divided data processing into presentation, processing (functionality logic), and data. Client/server architectures can be defined by how these components are split up among software entities and distributed on a network. A variety of ways exist for dividing these resources and implementing client/server architectures. The focus here is on the most popular implementations of two-tier and three-tier client/server computing systems.

Two-Tier Architecture

Although a two-tier client/server system can be architected in several ways, this section examines what is overwhelmingly the most common implementation. In this implementation, the three components of an application (i.e., presentation, processing, and data) are divided among two software entities, or tiers: client application code and the data base server.
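
The ANSI decomposition can be made concrete with a short sketch (an illustration only; the functions, data, and business rule below are hypothetical and not from the article). Presentation, processing, and data access are written as three separate pieces of logic; two-tier and three-tier architectures differ only in where each piece is placed on the network.

    # The three logical components of an application (illustrative sketch).
    # Architectures differ in where each component runs, not in what it does.

    def data_layer(customer_id):
        """Data: retrieve raw records (in a real system, a data base server)."""
        records = {1: {"name": "Acme", "balance": 1250.0}}
        return records[customer_id]

    def processing_layer(customer_id):
        """Processing (functionality logic): apply business rules to raw data."""
        record = data_layer(customer_id)
        record["credit_ok"] = record["balance"] < 5000.0   # a hypothetical business rule
        return record

    def presentation_layer(customer_id):
        """Presentation: format the result for display to the user."""
        record = processing_layer(customer_id)
        status = "approved" if record["credit_ok"] else "review"
        print(f'{record["name"]}: balance {record["balance"]:.2f}, credit {status}')

    presentation_layer(1)

In the two-tier implementation described next, the first two pieces live in the client application and the third sits behind the data base server.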

A robust client application development language and a versatile mechanism for transmitting client requests to the server are essential for a two-tier implementation. Presentation is handled exclusively by the client, processing is split between client and server, and data is stored on and accessed through the server. The PC client assumes the bulk of the responsibility for application (functionality) logic, while the data base engine, with its attendant integrity checks, query capabilities, and central repository functions, handles the data-intensive tasks.

In a data access topology (see Exhibit 2), a data base engine processes requests sent from the clients. Currently, the language used in these requests is most typically a form of Structured Query Language (SQL). Sending SQL from client to server requires a tight linkage between the two layers. To send the SQL, the client must know the syntax of the server or have it translated by an application program interface (API). It must also know the location of the server, how the data is organized, and how the data is named. The request may take advantage of logic stored and processed on the server that centralizes global tasks such as validation, data integrity, and security. Data returned to the client can be manipulated at the client level for further subselection, business modeling, what-if analysis, and reporting.

Exhibit 2. Data Access Topology for a Two-Tier Architecture; the Majority of Functional Logic Exists at the Client Level

Advantages of a Two-Tier Environment. The most compelling advantage of a two-tier environment is application development speed. In most cases, a two-tier system can be developed in a small fraction of the time it would take to code a comparable but less flexible legacy system. Using any one of a growing number of PC-based tools, a single developer can model data and populate a data base on a remote server, paint a user interface, create a client with application logic, and include data access routines.

Most two-tier tools are also extremely robust. These environments support a variety of data structures, include many built-in procedures and functions, and insulate developers from many of the more mundane aspects of programming, such as memory management. Finally, these tools lend themselves well to iterative prototyping and rapid application development (RAD) techniques, which can be used to ensure that the requirements of the users are accurately and completely met.

Tools for developing two-tier client/server systems have allowed many IS organizations to attack their applications backlog and satisfy pent-up user demand by rapidly developing and deploying what are primarily smaller, work group-based solutions. Two-tier architectures work well in relatively homogeneous environments with fairly static business rules; they are less suitable for dispersed, heterogeneous environments with rapidly changing rules. As such, relatively few IS organizations are using two-tier client/server architectures to provide cross-departmental or cross-platform enterprisewide solutions.[99]

[99] "Making the Infrastructure Choices," Datamation, January 7, 1994.
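
To illustrate the tight linkage just described, here is a minimal two-tier sketch. It is not from the article; sqlite3 and the orders table stand in for a remote SQL data base server reached through a vendor API or ODBC, and all names are hypothetical. The client tier composes SQL directly, so it must know the server's table names, column names, and data organization; a change to any of these forces a change to the client code.

    # Two-tier sketch (illustrative, not from the article): the client tier embeds SQL
    # and therefore must know the server's schema, table names, and column names.
    import sqlite3

    conn = sqlite3.connect(":memory:")            # stand-in for a connection to the DB server
    conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, "EAST", 1200.0), (2, "WEST", 800.0), (3, "EAST", 450.0)])

    # Client application logic: SQL is composed on the client, so any change to the
    # schema (renamed columns, restructured tables) ripples into this code.
    region = "EAST"
    rows = conn.execute(
        "SELECT id, amount FROM orders WHERE region = ? ORDER BY amount DESC",
        (region,),
    ).fetchall()

    for order_id, amount in rows:                 # returned data manipulated at the client
        print(f"order {order_id}: {amount:.2f}")
    conn.close()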


Disadvantages of a Two-Tier System. Because the bulk of application logic exists on the PC client, the two-tier architecture faces several potential version control and application redistribution problems. A change in business rules would require a change to the client logic in each application in a corporation's portfolio affected by the change. Modified clients would then have to be redistributed through the network, a potentially difficult task given the current lack of robust PC version control software and the problems associated with upgrading PCs that are turned off or not docked to the network.

System security in the two-tier environment can be complicated because a user may require a separate password for each SQL server accessed. The proliferation of end-user query tools can also compromise data base server security. The overwhelming majority of client/server applications developed today are designed without the sophisticated middleware technologies that offer increased security.[100] Instead, end users are provided a password that gives them access to a data base. In many cases, this same password can be used to access the data base with the data access tools available in most commercial PC spreadsheet and data base packages. Using such a tool, a user may be able to access otherwise hidden fields or tables and possibly corrupt data.

Client tools and the SQL middleware used in two-tier environments are also highly proprietary, and the PC tools market is extremely volatile. The market for client/server tools seems to be changing at an increasingly unstable rate. In 1994, the developer of the leading client/server tool was purchased by a large data base firm, raising concern about the manufacturer's ability to continue to work cooperatively with relational data base management system vendors that compete with the parent company's products.[101] The number-two tool maker lost millions[102] and has been labeled a takeover target.[103] The tool that received some of the brightest accolades in 1995 is supplied by a firm also in the midst of severe financial difficulties and management transition.[104] The volatility of the client/server tool market raises questions about the long-term viability of any proprietary tool an organization may commit to and complicates implementation of two-tier systems. A migration from one proprietary technology to another requires a firm to scrap much of its investment in application code, because none of this code is portable from one tool to the next.

[100] M. Dolgicer, "When It's Time for a TP Monitor," Client/Server Today (March 1995).
[101] T. Smith, "Top Execs Tackle Tough Issues; Debate Erupts Over Sybase's Gamble," Computer Reseller News (December 19, 1994).
[102] M. Ricciuti, "Gupta Ranges Further Afield with SQLBase," InfoWorld 17 (January 30, 1995).
[103] D. Bartholomew, "Oracle Hungry for Gupta; Hostile Takeover a Possibility as Gupta Stock Tumbles," InformationWeek (August 1, 1994).
[104] E. Heichler, "Borland Sees Big Hit with Delphi Tool," Computerworld (February 6, 1995).

Three-Tier Architecture

The three-tier architecture (depicted in Exhibit 3) attempts to overcome some of the limitations of the two-tier scheme by separating presentation, processing, and data into separate, distinct software entities (i.e., tiers). The same types of tools can be used for presentation as were used in a two-tier environment; however, the tools are now dedicated to handling just the presentation. When calculations or data access is required by the presentation client, a call is made to a middle-tier functionality server. This tier performs calculations or makes requests as a client to additional servers. The middle-tier servers are typically coded in a highly portable, nonproprietary language such as C. Middle-tier functionality servers may be multithreaded and can be accessed by multiple clients, even those from separate applications.

Exhibit 3. Three-Tier Architecture: Most of the Logic Processing Is Handled by Functionality Servers; Middle-Tier Code Is Accessed and Utilized by Multiple Clients

Although three-tier systems can be implemented using a variety of technologies, the calling mechanism from client to server in such a system is most typically the remote procedure call (RPC). Because the bulk of two-tier implementations involve SQL messaging and most three-tier systems use RPCs, an examination of the merits of these respective request/response mechanisms is warranted.

Advantages of a Three-Tier Architecture. RPC calls from the presentation client to the middle-tier server provide greater overall system flexibility than the SQL calls made by clients in the two-tier architecture. This is because in an RPC, the requesting client simply passes the parameters needed for the request and specifies a data structure to accept any returned values. Unlike in most two-tier implementations, the three-tier presentation client is not required to speak SQL. As such, the organization, the names, or even the overall structure of the back-end data can be changed without requiring changes to PC-based presentation clients. Because SQL is no longer required, data can be organized hierarchically, relationally, or in object format. This added flexibility allows a firm to access legacy data and simplifies the introduction of new data base technologies.

In addition to openness, this architecture presents several other advantages. Having separate software entities allows for the parallel development of individual tiers by application specialists. It should be noted that the skill sets required to develop client/server applications differ significantly from those needed to develop mainframe-based character systems. User interface (UI) creation, for example, requires an appreciation of platform and corporate UI standards, and data base design requires a commitment to, and an understanding of, the enterprise's data model. Having experts focus on each of these three layers increases the overall quality of the final application.

The three-tier architecture also provides for more flexible resource allocation. Middle-tier functionality servers are highly portable and can be dynamically allocated and shifted as the needs of the organization change. Network traffic may be reduced by having functionality servers strip data down to the precise structure required before distributing it to individual clients at the local area network (LAN) level. Multiple server requests and complex data access emanate from the middle tier instead of from the client, further decreasing traffic. Also, because PC clients are now dedicated solely to presentation, memory and disk storage requirements for PCs may be reduced.

Modularly designed middle-tier code modules can be reused by several applications. Reusable logic reduces subsequent development efforts, minimizes the maintenance work load, and decreases migration costs when switching client applications.
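
A minimal sketch can make the RPC style concrete. It is illustrative only and not from the article; Python's standard xmlrpc modules and sqlite3 stand in for a DCE-style RPC mechanism and a data base server, and every name, port, and table below is hypothetical. The presentation client passes parameters and receives a data structure back; SQL, table names, and the organization of the back-end data stay entirely behind the middle-tier functionality server, so they can change without touching client code.

    # Three-tier sketch (illustrative, not from the article): a middle-tier
    # functionality server exposes business functions over RPC; the presentation
    # client calls them by name and never issues SQL itself.
    import sqlite3
    import threading
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    # --- Middle-tier functionality server --------------------------------------
    db = sqlite3.connect(":memory:", check_same_thread=False)  # stand-in for the data tier
    db.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                   [(1, "EAST", 1200.0), (2, "WEST", 800.0), (3, "EAST", 450.0)])

    def regional_total(region):
        """Business function: all data access and rules live here, not on the client."""
        (total,) = db.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE region = ?", (region,)
        ).fetchone()
        return {"region": region, "total": total}   # only the needed structure is returned

    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(regional_total)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # --- Presentation client ----------------------------------------------------
    # The client passes parameters and receives a result; the schema could change
    # (or move to another data base) without any change to this code.
    proxy = ServerProxy("http://127.0.0.1:8000")
    print(proxy.regional_total("EAST"))             # -> {'region': 'EAST', 'total': 1650.0}

Because the functionality server is a separate software entity, a second client, perhaps written in a different tool or running on a different platform, could call regional_total without duplicating any of the business logic.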

In addition, implementation platforms for three-tier systems, such as the Open Software Foundation's Distributed Computing Environment (OSF/DCE), offer a variety of additional features to support distributed applications development. These include integrated security, directory, and naming services; server monitoring and boot capabilities for supporting dynamic fault tolerance; and distributed time management for synchronizing systems across networks and separate time zones.

Limitations of Three-Tier Architectures. There are, of course, drawbacks associated with a three-tier architecture. Current tools are relatively immature and require more complex third-generation languages (3GLs) for middle-tier server development. Many tools have underdeveloped facilities for maintaining server libraries, a potential obstacle to simplifying maintenance and promoting code reuse throughout an IS organization. More code in more places also increases the likelihood that a system failure will affect an application, so detailed planning with an emphasis on reducing or eliminating critical paths is essential. The three-tier model also brings with it an increased need for network traffic management, server load balancing, and fault tolerance. For technically strong IS organizations servicing customers with rapidly changing environments, however, three-tier architectures can provide significant long-term gains through increased responsiveness to business climate changes, code reuse, maintainability, and ease of migration to new server platforms and development environments.

Two- and Three-Tier Development Efforts

Exhibit 4 illustrates the time to deployment for two-tier versus three-tier environments. Time to deployment is forecast in overall systems delivery time, not people-hours. According to a Deloitte & Touche study, rapid application development time is cited as one of the primary reasons firms choose to migrate to a client/server architecture. As such, strategic planning and platform decisions require an understanding of how development time relates to architecture and how development time changes as an IS organization gains experience in client/server.

Exhibit 4. A Comparison of Time to Deployment for Two- and Three-Tier Architectures

The first set of graphs in Exhibit 4 shows the initial development effort forecast to create comparable distributed applications using the common two-tier and three-tier approaches previously discussed. The three-tier application takes much longer to develop, primarily because of the complexity involved in coding the bulk of the application logic in a lower-level third-generation language such as C and the difficulties associated with coordinating multiple independent software modules on disparate platforms. In contrast, the two-tier scheme allows the bulk of the application logic to be developed in a higher-level language within the same tool used to create the user interface.

Subsequent development efforts (shown in the middle set of graphs) may see three-tier applications deployed with greater speed than two-tier systems. This is entirely due to the amount of middle-tier code that can be reused from previous applications. The speed advantage favoring the three-tier architecture results only if the three-tier application is able to use a sizable portion of existing logic. Experience indicates that these savings can be significant, particularly in organizations requiring separate but closely related applications for various business units. Reuse is also high for organizations with a strong enterprise data model, because data access code can be written once and reused whenever similar access needs arise across multiple applications. The degree of development time reduction on subsequent efforts grows as an organization deploys more client/server applications and develops a significant library of reusable middle-tier application logic.

The last comparison in Exhibit 4 makes the important case for code savings when migrating from one client development tool to another. As discussed previously, client tools are highly proprietary, and code is not portable between the major vendor packages. In addition, the PC tools market is highly volatile, and vendor shakeouts and technical leapfrogging are commonplace. In a two-tier environment, IS organizations wishing to move from one PC-based client development platform to another must scrap their previous investment in application logic, because most of this logic is written in the language of the proprietary tool. In a three-tier environment, this logic is written in a reusable middle tier, so the developer simply has to re-create the presentation and add RPC calls to the functionality layer.

Flexibility in reusing existing middle-tier code also assists organizations that are developing applications for a variety of PC client operating system platforms. Until recently, few cross-platform client tool development environments existed, and most of today's cross-platform solutions are not considered best-of-breed. In a three-tier environment, the middle-tier functionality layer can be accessed by separate client tools on separate platforms. Coding application logic once in an accessible middle tier decreases the overall development time on a cross-platform solution and gives the organization greater flexibility in choosing the best tool for any given platform.

Conclusion

Two-tier architectures group the presentation component of data processing with most of the non-data base processing in a single client application. Although the robustness and ease of use of two-tier development tools dramatically decrease initial development time, IS organizations may pay a penalty when trying to update functionality simultaneously in a variety of systems, integrate systems, or migrate from a proprietary development tool. Three-tier architectures split the three processing layers into three distinct software entities. This architecture requires more planning and support, but it can reduce development and maintenance costs over the long term by leveraging code reuse and flexibility in product migration. Three-tier architectures are also the most vendor-neutral of the architectures considered and thus facilitate the integration of heterogeneous systems.

In Shaping the Future: Business Design Through Information Technology (Cambridge, MA: Harvard Business School Press, 1991), Peter Keen pointed out that a firm's long-term ability to compete is enabled or limited by the reach and range provided by the firm's technical architecture. His suggestions for defining a platform include selecting architectures that:

- Protect existing IT investments.
- Ensure the firm's ability to adopt new technologies.
- Provide integration of heterogeneous resources.
- Accommodate emerging standards embraced by a broad base of firms.

This discussion of popular client/server architectures exposes the weaknesses of the overwhelming majority of current client/server systems (those employing a two-tier architecture) as they relate to Keen's platform selection criteria. Such systems may provide adequate work group-level solutions that can be developed rapidly and that employ empowering interfaces. However, they lack the openness, flexibility, scalability, and integration provided by three-tier systems. The case for deploying three-tier systems will strengthen over time as tools mature and momentum for vendor-neutral standards increases. In the meantime, research should focus on examining the issues in migrating from two-tier to three-tier systems, operationalizing the conceptual graphs presented here as they relate to development time, and studying how the level of complexity in three-tier systems acts as a barrier to their widespread acceptance.

Author Biographies

John M. Gallaugher is a doctoral student in management information systems at Syracuse University in Syracuse, NY. His e-mail address is jmgallau@syr.edu.

Suresh C. Ramanathan played a leadership role in pioneering client/server activities at the Aluminum Company of America in Pittsburgh, PA. He can be reached at ramana01@ssw.alcoa.com.